Extremism and AI in Domestic Media: A Brief History from Tech Against Terrorism, a Project Countering Extremist Use of the Internet
Researchers at the Domestic Terrorism Threat Monitor, a group within the institute that specifically tracks US-based extremists, lay out in stark detail the scale and scope of AI use among domestic actors, including neo-Nazis, white supremacists, and anti-government extremists.
“The biggest trend we’ve noticed [in 2024] is the rise of video,” says Purdue. “Last year, AI-generated video content was very basic. This year, with the release of OpenAI’s Sora and other video generation or manipulation platforms, we’ve seen extremists using these as a means of producing video content. We’ve seen a lot of excitement about this as well; a lot of individuals are talking about how this could allow them to produce feature-length films.”
Extremists have already used this technology to make videos that include a president using racial slurs during a speech and an actress reading aloud from a Nazi book.
Last year, WIRED reported on how extremists linked to Hamas and Hezbollah were leveraging generative AI tools to undermine the hash-sharing database that allows Big Tech platforms to quickly remove terrorist content in a coordinated fashion. There is currently no available solution to this problem.
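To make that mechanism concrete, here is a minimal sketch of how hash-based matching flags known content, and why freshly generated material slips past it. This is not GIFCT’s or any platform’s actual implementation; the database contents are placeholders, and plain SHA-256 stands in for the mix of cryptographic and perceptual hashes used in practice.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a cross-platform hash-sharing database.
# The real database distributes hashes of already-identified terrorist
# content; these bytes are placeholders for illustration only.
KNOWN_CONTENT = [b"previously flagged propaganda video bytes"]
SHARED_HASH_DB = {hashlib.sha256(item).hexdigest() for item in KNOWN_CONTENT}


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_terrorist_content(path: Path) -> bool:
    """Exact-match lookup against the shared hash set.

    A newly AI-generated image or video has never been hashed before,
    so its digest will not appear in any shared database -- which is
    why generative tools undermine this style of coordinated takedown.
    """
    return sha256_of_file(path) in SHARED_HASH_DB
```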
Adam Hadley, the executive director of Tech Against Terrorism, says he and his colleagues have already archived tens of thousands of AI-generated images created by far-right extremists.
There are two ways in which this technology is being used, he says. “Firstly, generative AI is used to create and manage bots that operate fake accounts, and secondly, just as generative AI is revolutionizing productivity, it is also being used to generate text, images, and videos through open-source tools. Both of these uses show how violent content can be produced and disseminated on a large scale.”
Big Tech Is Giving Campaigns Both the Venom and the Antidote for GenAI: How Can We Leverage AI for Democracy Forward Campaigns?
The Biden campaign is facing its first major cheapfake scandal this week. Doctored clips of Biden at the G7 Summit and at a Hollywood fundraiser have spread on platforms such as X, edited to show him wandering off, mumbling unintelligibly, or even pooping his pants. The right-wing media apparatus likes to make Biden look as old as possible, so it also edited clips to make it seem like he was drunk.
And while we’re all starting to get stressed over simple editing and cropping techniques again, Big Tech is training political campaigns on its generative AI tools. Could a little direction help? Maybe. Could it make things worse? Yeah, probably.
Microsoft says it tailored the training sessions to help national campaigns save time and money. In the sessions, the company demonstrates how Copilot, its AI chatbot, can be used to quickly write and edit fundraising emails and text messages.
“Just like any small business could leverage AI, we believe a campaign could too,” Ginny Badanes, general manager for Microsoft’s Democracy Forward program, said in an interview earlier this month.
Source: Big Tech Is Giving Campaigns Both the Venom and the Antidote for GenAI
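Copilot itself is a chat product rather than a developer API, but the drafting workflow described in the trainings maps onto a few lines of code. The sketch below uses the OpenAI Python SDK as a stand-in; the model name, prompts, and word limit are illustrative assumptions, not details from Microsoft’s sessions.

```python
# Illustrative sketch only: Microsoft's trainings use the Copilot chat
# interface, not this API; the model and prompts here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_fundraising_email(candidate: str, ask_amount: str) -> str:
    """Ask the model for a short fundraising email draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You write concise, friendly campaign fundraising emails."},
            {"role": "user",
             "content": f"Draft a 120-word fundraising email for {candidate} "
                        f"asking supporters to chip in {ask_amount}."},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_fundraising_email("a hypothetical House candidate", "$10"))
```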
Microsoft’s Democracy Forward Trainings: 90 Sessions in 20 Countries Across Five Continents, with US Workshops Beginning in February – Information from Microsoft
In a statement to WIRED last week, Microsoft said that it has completed 90 trainings with more than 2,300 participants across 20 countries on five continents: Africa, Asia, Europe, North America, and South America. More than 40 trainings have been held in the US this year, with over 600 participants, the company said. The European workshops began late last year, and the US trainings began in February.