By Kelvin Chan, Associated Press
LONDON (AP) — European Union lawmakers gave final approval to the 27-nation bloc’s artificial intelligence law Wednesday, putting the world-leading rules on track to take effect later this year.
Lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed. The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.

“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote.

Major tech companies have generally supported the need to regulate AI while working to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a minor commotion last year when he suggested the ChatGPT maker could pull out of Europe if it can’t comply with the AI Act — before backtracking to say there were no plans to leave.
Here’s a look at the world’s first comprehensive set of AI rules:
HOW DOES THE AI ACT WORK?
Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, using a “risk-based approach” to products or services that use artificial intelligence.
The riskier an AI application is, the more scrutiny it faces. The vast majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.

High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.
Some AI uses are banned because they pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces.

The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning resumes and job applications. The astonishing rise of general purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up. They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.
Developers of general purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other information on the internet used to train the systems, and also comply with EU copyright law.
Fake images, video or audio of real people, places or events created by AI must be marked as artificially manipulated.

There’s extra scrutiny for the largest and most powerful AI models that present “systemic risks,” such as OpenAI’s GPT-4 — its most advanced system — and Google’s Gemini. The EU is concerned that these powerful AI systems could “cause serious accidents or be misused for large-scale cyberattacks.” It is also worried that generative AI could spread “harmful biases” across many applications, affecting many people.
Companies that offer these systems will need to evaluate and reduce the risks; report any serious incidents, such as malfunctions that lead to someone’s death or serious harm to health or property; implement cybersecurity measures; and disclose how much energy their models use.
DO EUROPE’S RULES INFLUENCE THE REST OF THE WORLD?

Brussels initially proposed AI regulations in 2019, taking a familiar global role in increasing scrutiny of emerging industries, while other governments struggle to keep up.
In the U.S., President Joe Biden signed a comprehensive executive order on AI in October that is expected to be supported by legislation and global agreements. At the same time, lawmakers in at least seven U.S. states are working on their own AI legislation.
Chinese President Xi Jinping has proposed his Global AI Governance Initiative for fair and safe use of AI, and authorities have issued “interim measures” for managing generative AI, which applies to text, pictures, audio, video and other content generated for people inside China.

Other countries, from Brazil to Japan, as well as global groupings like the United Nations and Group of Seven industrialized nations, are moving to draw up AI guardrails.

WHAT HAPPENS NEXT?
The AI Act is expected to officially become law by May or June, after a few final formalities, including a blessing from EU member countries. Provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules enter the lawbooks.
Rules for general purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.
When it comes to enforcement, each EU country will set up its own AI watchdog, where citizens can file a complaint if they think they’ve been the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general purpose AI systems.
Violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue.
This isn’t Brussels’ last word on AI rules, said Italian lawmaker Brando Benifei, co-leader of Parliament’s work on the law. More AI-related legislation could be ahead after summer elections, including in areas like AI in the workplace that the new law partly covers, he said.