The EU has reached a deal on the Artificial Intelligence Act (AI Act): what do we know about the final December agreement?
The explosion in general-purpose AI tools like OpenAI’s GPT-4 large language model, which arrived well after the Act was first proposed, became a difficult sticking point in last-minute discussions. The European Union’s position is that the higher the level of risk, the stricter the rules.
Daniel Leufer, senior policy analyst at Access Now, a digital human rights organization, says that at one point it looked like tensions over how to regulate these general-purpose systems could derail the entire negotiation process. “There was a huge push from France, Germany, and Italy to completely exclude these systems from any obligations under the AI Act.”
Large parts of the legislation remained unresolved just days before the deal was struck. European communications and transport ministers discussed AI regulation at their meeting on December 5th, where Germany’s digital minister said the regulation was not yet mature.
The draft that was approved on Friday contains exceptions that allow limited use of automated facial recognition: specific law enforcement use cases involving national security threats may be approved, but only under certain conditions. Human rights organizations have expressed anger at the decision, since France has pushed to use the technology to monitor for terrorism, as well as at the upcoming Olympics in Paris.
Policymakers will now have to work out how these rules will be enforced, and AI companies can use that time to ensure their products and services will be compliant when the provisions come into effect. Ultimately, that means we might not see everything within the AI Act enforced until mid-2026. By the time we get to that point, there may be a whole new set of issues to deal with.
There is still much work to be done, according to Navrina Singh, a member of the US National AI Advisory Committee. Singh tells The Verge that regulators on both sides of the Atlantic need to help organizations of every size with the safe design, development, and deployment of AI. She adds that there is still a lack of standards and benchmarking processes, particularly around transparency.
Even though the US will not take the same risk-based approach, it may try to expand data transparency rules or allow more discretion when it comes to regulating models.
The AI Act also won’t apply its potentially stiff fines to open-source developers, researchers, and smaller companies working further down the value chain, a decision that’s been lauded by open-source developers in the field. GitHub chief legal officer Shelley McKinley said it is “a positive development for open innovation and developers working to help solve some of society’s most pressing problems.” (GitHub, a popular open-source development hub, is a subsidiary of Microsoft.)
Susan Aaronson, director of the Digital Trade and Data Governance Hub, points out that the AI Act still hasn’t clarified how companies should treat copyrighted material that’s part of model training data, beyond stating that developers should follow existing copyright laws (which leave lots of gray areas around AI). So it offers no incentive for AI model developers to avoid using copyrighted data.
As they navigate regulatory uncertainty in the US, major artificial intelligence players like OpenAI, Microsoft and others will likely continue to fight for dominance.
Transparency summaries aren’t going to change companies’ behavior around data, Aaronson says
Companies may have to provide transparency summaries or data “nutrition labels” under the rules, Aaronson says, but that isn’t going to change the behavior of companies around data.