The world’s first AI regulations in Europe are scheduled for final approval

14 March 2024

European Union lawmakers are poised to grant final approval to the 27-nation bloc’s artificial intelligence law on Wednesday, paving the way for the world-leading rules to take effect later this year.

Five years after it was first proposed, lawmakers in the European Parliament are set to endorse the Artificial Intelligence Act. The act is expected to serve as a global benchmark for governments worldwide grappling with how to regulate the rapidly advancing technology.

According to Dragos Tudorache, a Romanian lawmaker who played a key role in negotiating the draft law within the Parliament, the AI Act steers the future of AI in a human-centric direction. It emphasizes human control over the technology, aiming to leverage its potential for new discoveries, economic growth, societal progress, and the unlocking of human potential.

While major tech companies generally acknowledge the necessity of AI regulation, they have also lobbied to ensure that any regulations align with their interests. Notably, OpenAI CEO Sam Altman stirred some controversy last year by suggesting that OpenAI might withdraw from Europe if it couldn’t comply with the AI Act, although he later clarified that there were no such plans.

Here’s an overview of the world’s first comprehensive set of AI regulations:

How Does the AI Act Operate?

Similar to many EU regulations, the AI Act adopts a “risk-based approach” towards products or services employing artificial intelligence. The level of scrutiny depends on the perceived risk associated with an AI application.

  • Low-risk AI systems, such as content recommendation systems or spam filters, will face lighter requirements, such as disclosing that they are AI-powered.
  • High-risk uses of AI, like in medical devices or critical infrastructure, will be subject to stricter requirements, including the use of high-quality data and transparent information provision to users.
  • Certain AI applications are outright banned due to their deemed unacceptable risk, such as social scoring systems, certain predictive policing methods, and emotion recognition systems in educational and workplace settings.
  • Additional prohibitions include police use of AI-powered remote “biometric identification” systems to scan faces in public, except in cases of serious crimes such as kidnapping or terrorism.
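The tiered scheme above can be pictured as a simple lookup from risk level to obligations. The sketch below is purely illustrative: the tier names, example use cases, and the `obligations_for` helper are this summary's own simplification, not an official taxonomy from the AI Act.

```python
# Illustrative sketch only: a simplified mapping of the AI Act's risk tiers
# (as described in the article) to example obligations. Not an official or
# complete classification.
RISK_TIERS = {
    "minimal": {
        "examples": ["spam filter", "content recommender"],
        "obligations": ["disclose that the system is AI-powered"],
    },
    "high": {
        "examples": ["medical device", "critical infrastructure"],
        "obligations": ["use high-quality data", "provide clear information to users"],
    },
    "unacceptable": {
        "examples": ["social scoring", "emotion recognition at work or school"],
        "obligations": ["prohibited"],
    },
}

def obligations_for(use_case: str) -> list[str]:
    """Return the example obligations for a use case listed above."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    raise KeyError(f"use case not classified in this sketch: {use_case}")
```

The point of the structure is that scrutiny scales with the tier, not with the underlying technology: the same model could sit in different tiers depending on how it is deployed.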

What About Generative AI?

Initially focused on AI systems with narrowly defined tasks, the AI Act now encompasses provisions for generative AI models, such as OpenAI’s ChatGPT, which can produce lifelike responses, images, and more.

  • Developers of general-purpose AI models must provide detailed summaries of the data used to train their systems, adhering to EU copyright law.
  • AI-generated deepfakes must be labeled as artificially manipulated.
  • Stringent scrutiny is applied to the largest and most powerful AI models deemed to pose “systemic risks,” with requirements for risk assessment, incident reporting, cybersecurity measures, energy usage disclosure, and mitigation strategies.

Influence on Global Regulation

The EU’s proposal for AI regulation in 2019 marked a significant step in global efforts to scrutinize emerging industries. Other nations, including the United States, China, and various international organizations, are also developing or implementing AI regulations.

What’s Next?

The AI Act is expected to officially become law by May or June, following final formalities and approval from EU member countries. Provisions will be phased in gradually, with enforcement mechanisms established at both national and EU levels to ensure compliance and oversight.

EU member countries will set up their own AI watchdogs to handle complaints of rule violations, while Brussels will create an AI Office dedicated to supervising the law’s implementation, particularly concerning general-purpose AI systems.