The world’s first major law regulating AI has been passed. On March 13, the European Parliament greenlit the EU AI Act, ushering in a new era of regulation aimed at safeguarding fundamental rights while fostering innovation. The Act, negotiated with member states in December 2023, received overwhelming support from MEPs, with 523 votes in favor, 46 against, and 49 abstentions.
At its core, the Act seeks to protect fundamental rights, democracy, the rule of law, and environmental sustainability from the potential risks posed by high-risk AI technologies. Simultaneously, it aims to bolster innovation, positioning Europe as a global leader in the AI landscape. By introducing clear obligations tailored to the risks and impacts of AI applications, the regulation sets a precedent for responsible AI development and deployment.
The Act prohibits several AI applications deemed threatening to citizens’ rights. These include biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images for facial recognition databases, emotion recognition in workplaces and schools, social scoring, predictive policing solely based on profiling, and AI manipulation of human behavior or vulnerabilities.
Stringent safeguards govern the use of remote biometric identification (RBI) systems by law enforcement, permitting “real-time” RBI only under strict conditions, such as limited time and geographic scope and prior judicial or administrative authorization. Post-facto use of such systems, termed “post-remote RBI,” requires judicial authorization linked to a criminal offense.
High-risk AI systems, spanning critical infrastructure, education, healthcare, and law enforcement, are subject to comprehensive obligations: they must assess and mitigate risks and ensure transparency, accuracy, and human oversight. Citizens retain the right to lodge complaints about AI systems affecting their rights.
The Act also imposes transparency requirements on general-purpose AI systems, which must comply with EU copyright law and publish detailed summaries of the content used for training. Stricter measures apply to the most powerful models that pose systemic risks, including mandatory model evaluations and incident reporting. Moreover, deepfakes must be clearly labeled.
To encourage innovation, the regulation mandates the establishment of regulatory sandboxes and real-world testing accessible to SMEs and startups. This provision aims to facilitate the development and training of innovative AI technologies before market deployment.
During the plenary debate, co-rapporteurs Brando Benifei and Dragos Tudorache underscored the Act’s significance in enhancing EU competitiveness, protecting citizens’ rights, and setting a new governance model for technology. Following a final legal review, the Act is expected to be formally adopted by the Council. It will enter into force twenty days after publication in the Official Journal and become fully applicable 24 months after that.
Need help?
If you’re wondering how the EU AI Act, or any other AI regulations and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Its Audit Experts can address your concerns and questions while offering valuable insights.