The use of Artificial Intelligence has gained momentum over the past few years, and it is clear that we are just at the beginning of a technological revolution with great significance for our lives. Alongside the numerous benefits of this phenomenon, there are also more than a few dangers, especially human rights violations by corporations without democratic accountability. So what exactly is Artificial Intelligence? In what fields is it applied, and what challenges arise from its rise? How can the State of Israel maintain a technological advantage and promote innovation while protecting principles such as equality and transparency with respect to a technology about which much remains unknown? Dr. Elad Gil and Dr. Tal Mimran answer these critical questions.
What is Artificial Intelligence?
Artificial Intelligence (AI) is a tool that enables “machine learning” (also called “computational learning”) with a high degree of automation and little or no human intervention. Machine learning is a process of analyzing data and learning from context – for example, a database of facial images can teach an AI system to identify a person or to reproduce the structure of a human face upon request. AI’s applications are highly diverse and grow more numerous with every passing day, and the phenomenon impacts many areas of knowledge (language, culture, and more).
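The idea of “learning from data” described above can be illustrated with a deliberately minimal sketch: a nearest-centroid classifier that derives its behavior from labeled examples rather than hand-written rules. Real face-recognition systems use deep neural networks trained on millions of images; the synthetic data, feature vectors, and function names below are purely illustrative assumptions.

```python
# Illustrative sketch only: real systems use neural networks, not centroids,
# and real images are not two-number feature vectors. All data is synthetic.

def train(examples):
    """'Learn' from labeled data by averaging the feature vectors per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Identify the label whose learned centroid is closest to the input."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], features))

# A tiny stand-in for a "database of facial images", each image reduced
# to two hypothetical features (e.g. measurements of facial geometry).
training_data = [
    ([1.0, 1.2], "person_a"), ([0.9, 1.0], "person_a"),
    ([3.0, 3.1], "person_b"), ([3.2, 2.9], "person_b"),
]
centroids = train(training_data)
print(predict(centroids, [1.1, 1.1]))  # a new image resembling person_a
```

The point of the sketch is the one the article makes: the system’s behavior is not programmed directly but induced from the examples it is shown, which is also why its decisions can be hard to explain after the fact.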
Numerous software programs currently use AI tools. One program which has become particularly popular, and represents the AI revolution in the eyes of the broad market, is ChatGPT. This tool has the capability to answer questions in almost every area, as well as follow-up questions, to summarize long texts in just a few seconds, to challenge incorrect assumptions, and even admit mistakes and refuse inappropriate requests. The leap in AI capabilities makes it an auxiliary tool which is steadily gaining importance, yet like all technological surges – alongside its benefits are more dangerous aspects which must be considered.
Another complexity stems from ideological gaps at the international level, which lead to disparities in states’ willingness to oversee the development and implementation of AI, even at the cost of human rights violations. Thus, while in states such as China AI-powered devices are used to monitor minors’ moods or their level of concentration in class, the Western world seems to be waking up to the possible dangers, and accordingly, some are calling for restrictions on AI use.
AI applications – artistic tool or weapon?
The rising use of AI deeply impacts our lives, data analysis and management, and even decision-making processes, in a way that could lead to human rights violations. One of AI’s main flaws is the tool’s inability to explain its own decisions, which hinders certainty and transparency; moreover, even such an advanced system can make mistakes. In fact, as a rule, the more one demands explanations from an AI system, the more its efficiency and precision drop.
There are several primary arenas where we are witnessing developments in this area. In this document, we wish to focus on two representative examples – the field of art and the battlefield.
The artistic field
A significant breakthrough was achieved in the creative field after several companies unveiled to the public AI tools that can create art in a wide range of forms and styles (pictures, music, complex texts such as stories and even Japanese haiku). While these tools block requests for inappropriate use, such as sexual content involving minors, there do not appear to be many other limitations on artistic freedom – to an extent that raises questions about the legality of such activity in terms of intellectual property rights. To create art, AI must learn from existing art and rely on it to produce new material. Accordingly, the boundary of inspiration grows blurrier with every passing day, and significant questions arise about violations of intellectual property rights.
In fact, legal proceedings have already been initiated against several companies (Stability AI, Midjourney, and others) over violations of the intellectual property rights of artists whose works were used without permission to train AI. Similarly, legal proceedings have been initiated with respect to Copilot, an interesting tool that writes computer code (based on learning from existing code, but without crediting the original programmers). It will be interesting to see where proceedings of this kind lead in terms of restrictions on development and use in this field.
AI on the battlefield
In February 2023, the Commander of the IDF’s AI Center, part of the 8200 Unit, revealed that the IDF has been using AI on the battlefield for several years. AI use, it was claimed, increases both defensive (particularly border defense or improved use of systems such as the Iron Dome) and offensive (intelligence breakthroughs, alongside identifying targets for attack) capabilities.
Without a doubt, revealing these types of capabilities is significant in terms of strengthening Israel’s deterrence. Yet, at the present stage, with little clarity regarding the norms that apply to AI use on the battlefield, inter alia due to ideological disagreements between states, the decision to push ahead and publicize may be somewhat hasty. Additionally, admitting to the use of a particular tool on the battlefield gives free rein to Israel’s foes, such as Iran, which has quite substantial technological capabilities, and invites them to experiment with AI-based tools against Israel.
Finally, the State of Israel is obligated to review new weapons and ammunition before introducing them into use. It is unclear whether a process of prior inspection, or retroactive oversight, was carried out for such a far-reaching tool. In fact, some voices call for strict prohibitions on the military use of AI until norms are defined in the field. Israel’s choice to go public just as the international dialogue on restrictions intensifies is therefore bewildering.
AI regulation
Global regulation
On the inter-state level, there has not been sufficient progress in terms of defining clear norms for AI development, and its application in various areas of life. There are more than a few “soft” tools that propose standards, but there are no international conventions or declarations that are binding on the universal level.
Yet there has nevertheless been some progress. The OECD, for example, published principles for sustainable development and use of AI that promote fairness, equality, safety, accountability, and, more broadly, the protection of human rights. Based on these principles, the organization encourages investment in AI R&D, shaping suitable policy, preparing the relevant markets and the labor market as a whole, and promoting international cooperation.
In addition, we might mention the adoption in the United States of an Executive Order on maintaining leadership in artificial intelligence, the establishment in Canada of an Advisory Council on the matter, and the establishment in the EU of a special committee on the development, design, and application of AI based on democratic principles, particularly the protection of human rights.
Regulation in Israel
Developing a settled policy for regulating AI technology at an early stage, while the technologies are still new, is of utmost importance. A good illustration of the danger inherent in giving the companies themselves a free hand can be found in the area of social media. While social media has given a voice to more people, today it is clear that it also causes substantial harm to society (exacerbated polarization and expressions of hatred, influence on significant events such as elections or social unrest, and even psychological harm to users). While legislators are now becoming aware of the importance of regulating social media, it seems that great damage has already been done.

Fortunately, the lesson appears to have been learned, and there is already in-depth discussion of the need to regulate AI use in Israel. In 2022, the Ministry of Innovation, Science and Technology published principles of policy, regulation, and ethics in the area of AI. The document aims to define policy for AI without adopting wide-ranging legislation, relying instead on sectorial regulators in every field and on “soft” regulatory tools. In addition, the document adopted ethical principles for AI use, for example: promoting sustainable growth, maintaining safety in production and use, reinforcing system reliability and transparency, and ensuring the explainability of AI decision-making. At the next stage, the State of Israel will establish a governmental center for knowledge and coordination.
What must be done next?
The Ministry of Innovation, Science and Technology’s document of principles constitutes very important progress, yet there is still room to strengthen and tighten the State of Israel’s policy on this subject. In particular, Tachlith – Institute for Israeli Public Policy recommends two main steps: (a) instituting a national regulator whose role is to guide and create harmony between sectorial regulators, and (b) adopting a uniform methodology for examining the legal and regulatory response to AI’s challenges in various sectors.

The position that there is no need to promote overarching legislation, as presented by the Ministry of Innovation, seeks to promote entrepreneurship and creativity in the private market. While these goals are highly important, this approach cannot provide a sufficient response to the dangers inherent in the widespread, unregulated use of AI. The reason is that, as an eclectic menu of proposals and tools, the document of principles enables different regulators to interpret the recommendations differently and prioritize the narrow interests of the content worlds with which they are entrusted, while ignoring any possible impact on other sectors. An additional concern is that making do with soft regulatory tools, and with an ethical commitment only for AI systems that fundamentally impact human rights, could create a democratic deficit stemming from a lack of accountability of AI operators towards users.
By comparison, the legislative bill proposed on the topic of AI in the EU prefers stringent overarching legislation for high-risk AI over sectorial regulation, while creating a less-binding code of conduct for low-risk AI. This approach seeks to protect human rights in proportion to the risk level, to increase legal certainty, to make enforcement tools more effective, and to create an environment that enables the development of AI systems that are safe for users.
[Diagram] The Tachlith Institute recommends two main steps: (1) instituting a national regulator whose role is to guide and create harmony between sectorial regulators; (2) adopting a uniform methodology for examining the legal and regulatory response to AI’s challenges in various sectors.
We propose a model that falls somewhere between what the Ministry of Innovation has proposed and the European approach, by establishing a national regulator that would fulfill two main functions: (1) guaranteeing harmonization and a shared pursuit by all sectors of the economy of national policy goals in the field of AI (as a “regulator of regulators”); (2) developing regulatory policy and advising the government on the steps needed for continued development and progress. We propose that the national regulator be authorized to guide the various authorities on how to realize the national regulatory policy and how to resolve matters of legal interpretation within the existing law. In addition, a mechanism could be put in place so that those who consider themselves harmed by decisions of sectorial regulators could appeal to the national regulator to reexamine the decision. Our sense is that this type of supreme regulator would constitute the desired middle-of-the-road model between the alternative of broad, economy-wide regulation and the alternative of sectorial regulation, making it possible to benefit from a significant part of the advantages of each model.
Another important step is the implementation of a uniform methodology for handling common problems that arise among regulators, legal advisors, and other actors who are supposed to oversee and regulate AI use. One main reason for this is that AI is expected to have a disruptive impact on markets and companies, which should be examined and addressed in a harmonious fashion. For example, occupational structures will undergo dramatic changes with the appearance of advanced technologies, which in turn will create new safety risks affecting many fields (the financial, medical, and educational sectors, among others). A set methodology is necessary to realize a legal and regulatory policy that relies on a uniform organizing idea, instead of creating isolated legal and regulatory islands that do not correspond with one another and may even lead to conflicting conclusions (according to the different interests of each sector).
In conclusion, the world of AI piques the imagination just as it elicits concerns. Hopefully, AI’s exceptional capabilities will serve humanity and support its development, despite the great risk that these tools could be misused and could even drag humanity into a downward spiral of uncontrollable self-destruction (should states focus on technological superiority at all costs). While it is important for the Israeli legislator to encourage innovation and economic interests, it must also remain vigilant about the risks, and institutionalize mechanisms which can help minimize possible harm to one sector at the expense of other sectors, security or individual rights.
Dr. Elad Gil
Senior fellow and Head of Research at Tachlith – Institute for Israeli Public Policy.
Dr. Tal Mimran
Head of the “Social Pact for the Digital Age” at Tachlith Institute, researcher and lecturer on international law and cybersecurity.