Artificial Intelligence, Bots, and Politics: The Rise of the Dead in India and a Rapper in South Africa
For the first time, widely available generative artificial intelligence is colliding with political campaigns and elections. 2024 is already an unprecedented year for democracy: more than 2 billion people, the largest number ever, will vote in national, local, and regional elections in over 60 countries.
The global electorate now has to contend with this new technology. AI can be used for everything from sabotage to satire to the seemingly mundane: already, we've seen chatbots write speeches, answer questions about candidates' policies, and automate text messages to voters. But we've also seen AI used to humiliate female politicians and to make world leaders appear to promote the joys of passive-income scams.
Hi! I'm Vittoria Elliott. I'm taking over for Makena this week as WIRED's politics reporter, and I'll be talking about politicians rising from the dead in India and a rapper endorsing the opposition in South Africa.
The OpenAI Threat Report: Influence Operations Are Running Up Against the Limits of Generative AI
A lot of Americans are thinking about November, but much of the world is already voting this year. India, the world's largest democracy, is wrapping up its vote; South Africa and Mexico both head to the polls this week; and the EU is ramping up for its parliamentary elections in June. It is the largest election year in history, with more people online than ever before.
Russia, Iran, China, and Israel have all tried to take advantage of OpenAI's technology to run influence operations, according to a threat report the company released today. The report identifies five different networks that OpenAI detected and shut down over the course of the past two years. Established networks from Russia and China are experimenting with generative AI to automate their operations, the report found. They're also not very good at it.
It's a small relief that these actors haven't mastered generative AI well enough to become unstoppable forces for disinformation, but the fact that they're using it at all should be worrying.
The OpenAI report reveals that influence campaigns are running up against the limits of generative AI, which doesn't reliably produce good copy or code. It struggles with idioms, which make language sound more reliably human and personal, and also sometimes with basic grammar (so much so that OpenAI named one network "Bad Grammar"). The Bad Grammar network was so sloppy that it once revealed its true identity: "As an AI language model, I am here to assist and provide the desired comment," one post read.
The network used ChatGPT to try to automate posts on Telegram, a chat app that has long been favored by extremists and influence networks. Sometimes this worked, but other times the same account posted as two separate characters, giving the game away.
Influence campaigns on social media often out-innovate the employees of the platforms themselves, because they learn from the platforms and game their tools. And while these initial campaigns may be small or ineffective, they appear to still be in an experimental stage, says Jessica Walton, a researcher with the CyberPeace Institute who has studied Doppelganger's use of generative AI. In her research, she found that the network would use real-seeming Facebook profiles to post articles on divisive political topics. The articles themselves are written by generative AI, she says. What the operators are trying to do is see what will fly, what will be hard to catch, and how far they can go before getting caught.
Taken together, the report paints a picture of several relatively ineffective campaigns pushing crude propaganda, seemingly allaying fears that many experts have had about this new technology's potential to spread mis- and disinformation, particularly during a crucial election year.