Written by: Haim Ravia, Dotan Hammer
The EU AI Act began its phased implementation on February 2, 2025. Provisions that came into force in February govern AI literacy and ban certain harmful AI practices.
Under the EU AI Act, organizations must now promote AI literacy, ensuring their personnel have the skills and knowledge to deploy AI responsibly, including an understanding of the risks and ethical concerns associated with AI development and use. AI literacy requirements vary by company size and use case, but companies must track internal training and assessments, which may be subject to regulatory audits.
Additionally, the following AI practices are now deemed to violate EU values and fundamental rights and are prohibited under the EU AI Act:
- Biometric categorization to infer race, religion, or political views;
- Subliminal manipulation, and emotion recognition in workplaces and educational institutions (except for medical or safety reasons);
- Social scoring and indiscriminate scraping of facial images;
- Real-time biometric identification by law enforcement (except in narrowly defined cases);
- Predictive policing; and
- Exploiting vulnerable individuals.
Violations can attract fines of up to €35 million or 7% of the violating company’s global annual turnover, whichever is higher.
May 2, 2025, marks the next phase of the AI Act’s implementation, by which date codes of practice are to be finalized with input from national regulators, civil society organizations, and industry groups. Subsequently, on August 2, 2025, various rules for general-purpose AI models take effect, including obligations to evaluate model performance, track and report serious incidents, and ensure adequate levels of cybersecurity protection.
Click here for the full text of the AI Act.
Click here for our previous client update regarding the AI Act.