Water Is the Elephant in the Room: How Artificial Intelligence's Environmental Impacts Are Soaring
And it's not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity. In West Des Moines, Iowa, a giant data-centre cluster serves OpenAI's most advanced model, GPT-4. A lawsuit revealed that in the month before OpenAI finished training the model, the cluster used about 6% of the district's water. As Google and Microsoft prepared their Bard and Bing models, their water use rose by 20% and 34%, respectively, in one year, according to the companies. One preprint suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027. The environmental effects of the industry's pursuit of scale have been called the elephant in the room.
Legislators are taking notice. On 1 February, Democratic lawmakers introduced the Artificial Intelligence Environmental Impacts Act of 2024 in the US Congress. The bill directs the National Institute of Standards and Technology to collaborate with academia, industry and civil society to establish standards for assessing AI's environmental impact, and to create a voluntary reporting framework for AI developers and operators. Whether the legislation will pass remains uncertain.
So what energy breakthrough is OpenAI's chief executive, Sam Altman, banking on? Nuclear fusion. He has skin in that game, too: in 2021, Altman started investing in the fusion company Helion Energy in Everett, Washington.
A better answer is the design and deployment of more sustainable AI systems, and there's no reason this can't be done. The industry could use less energy, build more efficient models and rethink how it designs and uses data centres. The BigScience project in France demonstrated with its BLOOM model that a large language model can be built with a much lower carbon footprint. But that's not what's happening in the industry at large.
Voluntary measures rarely produce a lasting culture of accountability and consistent adoption, because they depend on goodwill. Given the urgency, more needs to be done.
Researchers could also optimize neural-network architectures for efficiency, and collaborate with social and environmental scientists to guide technical designs towards greater sustainability.
Legislators should now offer both carrots and sticks. At the outset, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act is a start, but much more will be needed, and the clock is ticking.
Data from a forthcoming satellite will help create the most comprehensive global methane map. Plus, OpenAI's video generator Sora could open the deepfake floodgates, and what the EU's tough AI law means for research and ChatGPT.
Pattern-Learning AI Identifies Individual Beavers by Their Tails
A pattern-learning artificial-intelligence system can identify individual beavers by the pattern of scales on their tails. The program identified individuals with almost 96% accuracy after training on hundreds of pictures of the tails of 100 animals that had previously been hunted. The method could make it faster and easier to monitor beaver populations, which is usually done by capturing individual animals and fitting them with ear tags or radio collars.
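The study's underlying model isn't described here, so the following is only a rough sketch of how such individual re-identification is commonly set up: fine-tuning an off-the-shelf image classifier with one class per known animal. The tail_images/<animal_id>/ folder layout, the ResNet-18 backbone and all hyperparameters are illustrative assumptions, not details from the study.

```python
# Minimal sketch (assumed setup, not the study's actual pipeline):
# photos arranged as tail_images/<animal_id>/*.jpg, one folder per beaver.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("tail_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained backbone; swap the final layer for one output per individual.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```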
OpenAI's Video Generator Sora Could Open the Deepfake Floodgates (MIT Technology Review | 4 min read)
OpenAI, creator of ChatGPT, has unveiled Sora, a system that can generate highly realistic videos from text prompts. "This technology, if combined with AI-powered voice cloning, could open up an entirely new front when it comes to creating deepfakes of people saying and doing things they never did," says digital forensics researcher Hany Farid. Sora's mistakes, such as swapping a walking person's left and right legs, make it possible to detect its output, for now. In the long run "we will need to find other ways to adapt as a society", says computer scientist Arvind Narayanan. That could include implementing watermarks for AI-generated video.
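How such watermarking would work is not specified here. Purely as a toy illustration of the idea, the sketch below hides a short provenance tag in the least significant bits of a frame's pixels; real schemes (for example, C2PA content-provenance metadata) are far more robust, and a fragile mark like this would not survive video compression.

```python
# Toy least-significant-bit watermark: illustrative only, not a real scheme.
import numpy as np

def embed_tag(frame: np.ndarray, tag: bytes) -> np.ndarray:
    """Hide `tag` in the lowest bit of the first len(tag)*8 pixel values."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = frame.flatten()                       # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def read_tag(frame: np.ndarray, n_bytes: int) -> bytes:
    """Recover an n_bytes tag from the lowest bits of the pixel values."""
    bits = frame.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)  # dummy frame
marked = embed_tag(frame, b"AI-generated")
print(read_tag(marked, 12))  # b'AI-generated'
```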
The graphic shows some of the things that can go wrong when a system called OK-Robot is told to move objects around a room it has never encountered. OK-Robot, which pairs a wheeled base, a tall pole and a retractable arm with open-source AI models, completed tasks such as moving a soda can to a box almost 60% of the time, and more than 80% of the time in less cluttered rooms. (MIT Technology Review | 4 min read)
The European Union's new Artificial Intelligence Act will impose strict rules on the riskiest systems but exempt models developed solely for research. "They really don't want to stop innovation, so I'd be astounded if this is going to be a problem," says one researcher. Some scientists suggest that the act could bolster open-source models, while others worry that it could hinder the small companies that drive research. Powerful general-purpose models, such as the one behind ChatGPT, will face separate, stringent checks. Critics say that regulating models on the basis of their capabilities has no scientific grounding: more capable does not necessarily mean more harmful, says AI researcher Jenia Jitsev.
Scientific publishers are starting to use tools such as ImageTwin, Imacheck and Proofig to help detect questionable images. The tools make it faster and easier to find rotated, stretched, cropped, spliced or duplicated images, but they are less good at spotting more complex manipulation or fakery. "The existing tools are at best showing the tip of an iceberg that may grow dramatically, and current approaches will soon be largely obsolete," says Bernd Pulverer, chief editor of EMBO Reports.
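The commercial tools' algorithms are proprietary; the sketch below shows one classic building block for this kind of check, keypoint matching, which can flag image content that recurs between two figure panels even after rotation or rescaling. The file names are placeholders.

```python
# Keypoint-matching sketch for flagging reused image content (illustrative).
import cv2

img_a = cv2.imread("panel_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img_b = cv2.imread("panel_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)      # rotation/scale-tolerant features
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Hamming distance suits ORB's binary descriptors; cross-check keeps
# only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)

# Many low-distance matches between two panels is a cue for human review,
# not proof of misconduct.
strong = [m for m in matches if m.distance < 40]
print(f"{len(strong)} strong matches out of {len(matches)}")
```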
An AI system built by the start-up Quantiphi tests drugs by analysing their impact on donated human cells. According to the company's co-founders, the method can achieve almost the same results as animal tests while cutting the time and cost of drug development by 45%. Another start-up, VeriSIM Life, creates AI simulations to predict how a drug would behave in the body. Reducing the pharmaceutical industry's reliance on animal experiments could ease ethical concerns and reduce risks to humans: animals are poor proxies for people, one scientist notes, which is why so many adverse events are discovered only once a drug reaches clinical trials.