Rage against profit-driven machine learning

Open access in artificial intelligence: how to keep independent researchers in the driver’s seat of the commercial AI ecosystem, and what we need to do to help them

The current boom in artificial intelligence would probably not have been possible without work that began in academia. Artificial neural networks, which date back decades, are at the heart of many techniques now in everyday use. Yet most of the cutting-edge, high-profile research in artificial intelligence is done behind the closed doors of private companies.

Whatever approach is taken, keeping publicly funded, independent academic researchers at the forefront of AI progress is crucial for the safe development of the technology, says Shannon Vallor, a philosopher of technology at the University of Edinburgh, UK. “AI is a technology that has the potential to be very dangerous if it’s misused, if it doesn’t have the right guardrails and governance, and if it’s not developed in responsible ways,” she says. “We should be concerned about any AI ecosystem where the commercial incentives are the only ones driving the bus.”

Vallor suggests that companies that develop and deploy AI responsibly could be rewarded with a lighter tax burden, whereas those unwilling to adopt responsible standards should pay to compensate the public whose safety and livelihoods they put at risk.

For that scrutiny to happen, however, it is imperative that academics have open access to the technology and code that underpin commercial AI models. “Nobody, not even the best experts, can just look at a complex neural network and figure out exactly how it works,” says Holger Hoos, a machine-learning researcher at RWTH Aachen University in Germany. Because so little can be learnt by inspecting these systems from the outside, we need to know as much as possible about how they are created.

Fabian Theis, a computational biologist at Helmholtz Munich in Germany, believes that companies are moving towards open access because they want more people to be able to work with their models. “It’s a core interest for industry to have people trained on their tools,” he says. Meta, the parent company of Facebook, for example, has been pushing for more open models because it wants to compete better with the likes of OpenAI and Google. A computer scientist at the University of Colorado Boulder said that giving people access to its models would allow an inflow of new ideas.

Even as they freely explore their own ideas, academics can still work with industry on interesting problems. “It’s very common for trainees from my and other labs to go to big tech, or pharma, to learn about the industry experience,” says Theis. “There’s actually a back and forth and diffusion between the two.”

Much of this work is not published in leading peer-reviewed scientific journals. In 2018, research by corporations accounted for only 3.84% of the total Nature Index output in the United States. But data from other sources show the increasingly influential role that companies play in research. In a paper published in Science last year, Nur Ahmed, who studies innovation and AI at the Massachusetts Institute of Technology in Cambridge, and his colleagues found that research articles with one or more industry co-authors grew from 22% of the presentations at leading AI conferences in 2000 to 38% in 2020. Industry’s share of the biggest, and therefore most capable, AI models went from 11% in 2010 to 96% in 2021. And on a set of 20 benchmarks used to evaluate the performance of AI models — such as their capabilities in image recognition, sentiment analysis and machine translation — industry alone, or in collaboration with universities, had the leading model 62% of the time before 2017, a share that has grown to 91% since 2020. “Industry is increasingly dominating the field,” says Ahmed.

To make the most of that freedom, however, academics will need support — most importantly in the form of funding. Theis believes that strong investment in basic research is needed so that cutting-edge AI work is not confined to a handful of places.

Recruitment programmes such as the Canada Excellence Research Chairs initiative, which offers up to Can$8 million over eight years to entice top researchers in various fields to move to, or remain in, Canada, and Germany’s Alexander von Humboldt Professorships in AI, worth €5 million over five years, have both helped to shore up AI research in those countries. One of the professorships is held by Hoos.

Hoos is also a co-founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE), whose plan is inspired by the approach taken in the physical sciences of sharing large, expensive facilities across institutions and even countries. Particle physicists have the right idea, according to Hoos. “They build big machines funded by public money.”

Companies also have access to much larger data sets with which to train their models, because their commercial platforms naturally produce those data as users interact with them. “When it comes to training state-of-the-art large language models for natural-language processing, academia is going to be hard-pressed to keep up,” says Theis.

How AI laws in the EU and the US differ, what we can do about it, and what we might be missing

Complying with EU rules will make sense for US firms in some cases but not in others, and the result will be a United States that is overall less regulated than before, meaning fewer protections for individuals. Although Brussels faced its fair share of lobbying and compromises, the core of the AI Act remained intact; whether US state laws will prove similarly resilient remains to be seen.

For these reasons, lobbying groups claim to prefer unified national AI regulation over state-by-state fragmentation, a line that big tech companies have parroted in public. In private, however, some advocate voluntary, light-touch rules, revealing a dislike of both state and national legislation. If nothing changes, the status quo of two differing regulatory environments in the EU and the US will persist, and it favours the companies, which enjoy the benefits of a light-touch regime in the United States.

The state bills are narrower. Both Colorado and Connecticut have legislation that is more limited in scope, although it also uses a risk-based framework. The framework covers similar areas — including education, employment and government services — but only systems that make ‘consequential decisions’ affecting consumer access to those services are deemed ‘high risk’, and there are no bans on specific AI use cases. (The Connecticut bill would ban the dissemination of political deepfakes and non-consensual explicit deepfakes, for example, but not their creation.) Additionally, definitions of AI vary between the US bills and the AI Act.

The biggest difference between the state bills and the AI Act, however, is the Act’s grounding in the protection of fundamental rights. It sets out a risk-based system in which certain uses of AI, such as scoring people on the basis of their personal characteristics or behaviour, are banned outright, including in contexts such as education and employment. Lower-risk systems have fewer or no obligations, and high-risk systems are subject to the most stringent requirements.

A United Nations report released today proposes that the international body oversee the first truly global effort to monitor and govern artificial intelligence.

The remarkable abilities demonstrated by large language models and chatbots in recent years have sparked hopes of a revolution in economic productivity but have also prompted some experts to warn that AI may be developing too rapidly and could soon become difficult to control. Not long after ChatGPT appeared, many scientists and entrepreneurs signed a letter calling for a six-month pause on the technology’s development so that the risks could be assessed.

More immediate concerns include the potential for AI to automate disinformation, generate deepfake video and audio, replace workers en masse, and exacerbate societal and algorithmic bias on an industrial scale. “There is a sense of urgency, and people feel we need to work together,” says Alondra Nelson, a social scientist at the Institute for Advanced Study in Princeton, New Jersey.

“AI is part of US-China competition, so there is only so much that they are going to agree on,” says Joshua Meltzer, an expert at the Brookings Institution, a think tank in Washington DC. Key differences between the two countries include protections for privacy and personal data.