
Europe reached a deal on the first comprehensive artificial intelligence rules

Fines and rules for general-purpose AI systems under the EU AI Act

European Commission President Ursula von der Leyen said the EU AI Act is the first of its kind in the world, a framework "for the safety and fundamental rights of people and businesses."

The press release says that negotiators set obligations for high-impact general-purpose artificial intelligence systems that meet certain benchmarks. It also mandates transparency from those systems, including creating technical documentation and providing detailed summaries of the content used for training, something companies have so far refused to do.

There are many exemptions and loopholes in the Artificial Intelligence Act, including lack of protection for systems used for migration and border control, as well as the option for developers not to have their systems classified as high risk.

The press release did not provide details on how all of that will work, or what the benchmarks are, but it did provide a framework for fines if companies break the rules. Fines vary with the violation and the size of the company, ranging from 1.5 to 7 percent of global revenue.

The EU wanted to ban the use of artificial intelligence for live biometric surveillance, but some governments sought an exemption for law enforcement or national security purposes. Late proposals from France, Germany, and Italy to allow makers of generative AI models to self-regulate are also believed to have contributed to the delays.

Disagreements over general-purpose AI and live biometric monitoring delayed the agreement, which had to be reached before the end of the legislative term

It’s expected that a final deal will be reached before the end of the year. The law will likely not come into force until 2025, at the earliest.

Now that a provisional agreement has been reached, more negotiations will still be required, including votes by Parliament’s Internal Market and Civil Liberties committees.

Some negotiators were at odds over the rules governing live biometrics monitoring and "general-purpose" foundation AI models; debate over these issues this week delayed the press conference announcing the agreement.

After months of debate about how to regulate companies like OpenAI, lawmakers from the EU's three institutions (the Parliament, Council, and Commission) spent more than 36 hours in total thrashing out the new legislation between Wednesday afternoon and Friday evening. Lawmakers needed to get a deal done before the EU parliament election campaign starts.

Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years.

Measures designed to make it easier for copyright holders to protect their works from generative AI, and to require general-purpose AI systems to be more transparent about their energy use, were also included.

“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter,” said European Commissioner Thierry Breton in a press conference on Friday night.

The European Parliament reaches a deal on the world's first comprehensive AI rules: implications for foundation models

The European office of the Computer and Communications Industry Association, a tech industry lobby group, said the political deal marks the beginning of important and necessary technical work on crucial details of the Artificial Intelligence Act.

The European Parliament will still need to vote on the act next year, but that is seen as a formality, said Brando Benifei, an Italian lawmaker who co-led the negotiations.

Artificial intelligence has exploded into the world's consciousness with its ability to produce human-like text, photos, and songs, while raising fears about the risks the new technology poses, even to human life.

Strong and comprehensive rules from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it.”

She said that companies subject to the rules will likely extend some of those obligations to markets outside the EU. "After all, it is not efficient to re-train separate models for different markets," she said.

Foundation models were one of the biggest sticking points for Europe. Despite opposition from France and others, negotiators reached a tentative compromise early in the talks.

Source: Europe reaches a deal on the world’s first comprehensive AI rules

Privacy and Security Concerns in Large Language Models: Comments from Access Now Analyst Daniel Leufer

Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

There have been warnings that foundation models, built by a handful of tech companies, could be used to mount cyberattacks or to create fake news or bioweapons.

Rights groups also caution that the lack of transparency about the data used to train the models poses risks to daily life, because these models act as basic structures on which software developers build AI-powered services.

Lawmakers wanted to prohibit public use of face scanning and other remote identification systems because of privacy concerns. But governments of member countries succeeded in negotiating exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.

Daniel Leufer, a senior policy analyst with the digital rights group Access Now, said that there are still huge flaws in the final text.