Written by: Haim Ravia, Dotan Hammer, and Or Cohen
Today, the European Union (EU) enacted the most ambitious law to date governing Artificial Intelligence (AI). The EU AI Act spans more than 200 pages and applies both to providers of AI-driven technology and to its private-sector and public-sector users. Like other EU data-related legislation, the AI Act also applies extraterritorially to companies and organizations outside the EU.
Most of the provisions of the AI Act are set to take effect in 2026, yet it remains to be seen how relevant the legislation will be by then to the rapidly evolving world of AI. Since the preliminary legislative phases began in 2021, for instance, the world has witnessed the rise of Generative AI, quintessentially demonstrated by ChatGPT, a development that prompted a pause in the legislative process and substantial amendments to the bill. Companies in the EU also fear that the AI Act will stifle AI innovation and implementation on the continent.
Notably, Europe is also pushing for an international treaty on AI, Human Rights, Democracy & the Rule of Law, and is advancing an EU Directive governing questions of burdens of proof and evidentiary rules in lawsuits involving AI.
Key aspects of the AI Act are presented below; a brief client update cannot, however, comprehensively cover the entire Act.
Definitions and Scope. The AI Act applies in various cases, such as to providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the EU or in a third country. It also applies to users of AI systems located within the EU, and to providers and users of AI systems located in a third country where the output produced by the system is used in the EU. The AI Act does not apply, however, to AI systems used for the sole purpose of scientific research and development, and most of the AI Act does not apply to those using AI in the course of a purely personal, non-professional activity.
Risk-Based Approach. The AI Act follows a risk-based approach, differentiating among AI systems that pose (i) an unacceptable risk, (ii) a high risk, and (iii) more limited risks subject to lighter obligations.
- Unacceptable-Risk AI Systems. The AI Act lists prohibited practices for AI systems whose use is considered unacceptable as contravening EU values. These include cognitive behavioral manipulation or deception, exploitation of vulnerabilities, untargeted scraping of facial images from the internet or CCTV footage, social scoring based on classification of social behavior, biometric categorization to infer sensitive data such as sexual orientation or religious beliefs, and certain cases of predictive policing of individuals.
- High-Risk AI Systems. The AI Act contains specific rules for AI systems that create a high risk to the health, safety, or fundamental rights of natural persons. These high-risk AI systems are permitted on the EU market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment before they are placed on the market. Compared with earlier drafts, these requirements have been clarified and adjusted to be more technically feasible and less burdensome.
- Other AI Systems. Certain AI systems that are not classified as unacceptable or high-risk, presenting only limited risks to the health, safety, or fundamental rights of natural persons, will still be subject to transparency obligations, such as disclosing that a system is AI-driven or that content was AI-generated, so that users can make informed decisions about its use.
Transparency and Protection of Fundamental Rights. The AI Act requires a fundamental rights impact assessment before a high-risk AI system is put on the market. Furthermore, systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’) must keep users informed of their automated nature, for example by informing natural persons when they are exposed to an emotion recognition system, or by effectively watermarking AI-generated content in compliance with technical standards.
General-Purpose AI Systems. The AI Act addresses the specific case of general-purpose AI systems that can be used for many different purposes (“GPAI”), including where GPAI technology is subsequently integrated into another, high-risk system. Providers of GPAI systems are subject to obligations such as maintaining documentation and cooperating with authorities.
Foundation Models. The AI Act includes specific rules for foundation models: large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code. A clear example is Large Language Models (LLMs), which are the foundation of products such as ChatGPT and Google Gemini. Foundation models must comply with specific transparency obligations before they are placed on the market, including watermarking and disclosing AI-generated or AI-modified content, and informing users when they are exposed to an emotion recognition system or are classified into social categories based on biometric data. A stricter regime applies to ‘high-impact’ foundation models, which are trained on large amounts of data, feature complexity, capabilities, and performance well above the average, and can disseminate systemic risks along the value chain.
Law Enforcement Exceptions. Subject to appropriate safeguards, law enforcement authorities retain the ability to use AI. For example, law enforcement agencies can deploy a high-risk AI tool that has not passed the conformity assessment procedure in cases of urgency. Real-time remote biometric identification systems in publicly accessible spaces are permitted only when strictly necessary for law enforcement purposes, such as preventing a genuine, present, or foreseeable threat (for example, a terrorist attack) or searching for persons suspected of the most serious crimes.
Innovation Support. The AI Act introduces AI regulatory sandboxes, which will allow companies and AI developers to develop, test, and validate innovative AI systems in real-world settings, subject to specific conditions and safeguards.
EU Database for High-Risk AI Systems. Providers of stand-alone high-risk AI systems, and certain users of high-risk AI systems that are public entities, must register in the EU database for high-risk AI systems. Providers are also subject to post-market monitoring obligations and must report and investigate AI-related incidents and malfunctions.
Governance Architecture. An AI Office within the Commission will oversee the most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states. A scientific panel of independent experts will advise the AI Office on GPAI models. An AI Board will act as a coordination platform and an advisory body to the Commission, and an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.
Penalties. Fines for violating the AI Act are set as a percentage of the violating company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s other obligations, and €7.5 million or 1.5% for the supply of incorrect information. Fines for SMEs and start-ups are capped proportionally. Any natural or legal person may file a complaint with the relevant authority concerning non-compliance with the AI Act.
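To illustrate the “whichever is higher” mechanic, here is a minimal sketch in Python that computes the fine ceiling for each tier. The fixed amounts and percentages are those stated above; the turnover figure in the example is hypothetical.

```python
# Illustrative sketch only: the fine ceiling under the AI Act is the higher of
# a fixed amount and a percentage of global annual turnover. Tier figures are
# from the Act; the company turnover below is a hypothetical example.

FINE_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),   # EUR 35M or 7% of turnover
    "other_obligations":     (15_000_000, 0.03),   # EUR 15M or 3% of turnover
    "incorrect_information": (7_500_000,  0.015),  # EUR 7.5M or 1.5% of turnover
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling: the higher of the fixed amount and the turnover-based amount."""
    fixed_amount, pct = FINE_TIERS[tier]
    return max(fixed_amount, pct * global_annual_turnover_eur)

# Example: a company with EUR 1 billion global annual turnover (hypothetical)
# violating the ban on prohibited AI practices faces a ceiling of EUR 70 million,
# since 7% of EUR 1 billion exceeds the EUR 35 million fixed amount.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```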
Timeframe. The AI Act will apply two years after it enters into force, with some exceptions for specific provisions. We expect the AI Act to enter into force next month and become applicable by the second quarter of 2026.
Considering the scope and complexity of the regulation, we strongly recommend that businesses examine and prepare for the AI Act well ahead of its applicability in 2026. In our experience implementing European legislation such as the GDPR, advance preparation is the key to success. We will be happy to assist you.
This client update is intended for purposes of general knowledge only, does not fully cover the intricacies of the subject matter discussed, does not constitute legal advice and should not be relied on for such purposes.