Detecting Artificial Intelligence-Generated Video and Audio in the Run-Up to Elections
Meta is testing large language models trained on its Community Standards to help triage content for its tens of thousands of human reviewers, said Nick Clegg, the company’s president of global affairs. “It appears to be a highly effective and rather precise way of ensuring that what is escalated to our human reviewers really is the kind of edge cases for which you want human judgment.”
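To make that triage pattern concrete, here is a minimal sketch of the approach Clegg describes: a model screens content first, and only low-confidence edge cases reach human reviewers. Every name here is hypothetical, and `classify_with_llm` is a stand-in for a model fine-tuned on a platform's policy text; this is an illustration of the routing logic, not Meta's implementation.

```python
# Sketch of LLM-first content triage: act automatically when the model is
# confident, escalate the edge cases to human reviewers. All names are
# hypothetical; `classify_with_llm` stands in for a policy-tuned model.
from dataclasses import dataclass


@dataclass
class Verdict:
    violates_policy: bool
    confidence: float  # model's confidence in its own judgment, 0.0-1.0


def classify_with_llm(post_text: str) -> Verdict:
    """Placeholder for a call to an LLM trained on the platform's policies."""
    # A real system would prompt the model with the policy text and the post.
    return Verdict(violates_policy=False, confidence=0.55)


CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff for automatic handling


def triage(post_text: str) -> str:
    """Route a post: handle it automatically when confident, else escalate."""
    verdict = classify_with_llm(post_text)
    if verdict.confidence >= CONFIDENCE_THRESHOLD:
        return "remove" if verdict.violates_policy else "allow"
    return "escalate_to_human_review"  # the edge cases needing human judgment
```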
There are already plenty of examples of viral, AI-generated posts of politicians, but Clegg downplayed the chances of the phenomenon overrunning Meta’s platforms in an election year. “I think it’s really unlikely that you’re going to get a video or audio which is entirely synthetic of very significant political importance which we don’t get to see pretty quickly,” he said. “I do not think that it is going to play out that way.”
Users on TikTok and YouTube must disclose when they post realistic AI-generated content. Last fall, TikTok said it would start testing automatically applying labels to content it detects was created or edited with artificial intelligence.
Hany Farid, a professor at the UC Berkeley School of Information, thinks companies should be prepared for bad actors to use whatever method they can to target certain types of content. He suspects that multiple forms of identification might need to be used in concert to robustly identify AI-generated images, for example by combining watermarking with the hash-based technology used to build watch lists of child sexual abuse material. And watermarking is a less developed concept for AI-generated media other than images, such as audio and video. Meta can detect these signals in images, but not yet in audio or video generated at the same scale, because companies haven’t begun including them in their audio and video generation tools. In the meantime, Meta is adding a feature for people to disclose when they share AI-generated video or audio so that it can add a label.
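To illustrate the hash-based side of the combination Farid describes, here is a minimal sketch of matching an image against a watch list of perceptual hashes, using the open-source `imagehash` library. The listed hash value and the distance threshold are placeholders; production systems such as those used for abuse-material watch lists rely on purpose-built hashing and carefully tuned thresholds.

```python
# Minimal sketch of hash-based matching against a watch list of known
# images, using perceptual hashing. The watch-list entry and the
# Hamming-distance threshold below are hypothetical placeholders.
from PIL import Image
import imagehash

# Hypothetical watch list of perceptual hashes of previously flagged images.
WATCH_LIST = [imagehash.hex_to_hash("d1c4a0b2e3f49587")]  # placeholder entry
MAX_DISTANCE = 5  # Hamming-distance threshold; tuning is deployment-specific


def matches_watch_list(path: str) -> bool:
    """Return True if the image's perceptual hash is near a listed hash."""
    h = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(h - listed <= MAX_DISTANCE for listed in WATCH_LIST)


if __name__ == "__main__":
    print(matches_watch_list("example.jpg"))
```

Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or re-encoded, which is what makes watch-list matching robust to trivial edits.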
“For those who are worried about video, audio content being designed to materially deceive the public on a matter of political importance in the run-up to the election, we’re going to be pretty vigilant,” he said. “Do I think that there is a possibility that something may happen where, however quickly it’s detected or quickly labeled, nonetheless we’re somehow accused of having dropped the ball? Yeah, I think that is possible, if not likely.”
The Role of Artificial Intelligence in Social Media Ads: Meta’s View on the Challenges of Digital Deception
Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged that it also has to respond to the technology’s dangers, announcing a new policy of warning labels on images posted to Facebook, Instagram, and Threads.
Political ads that contain digitally created or altered images, video, or audio must include a disclosure.
“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” Clegg said.
The labels will apply only once those companies start including watermarks and other technical metadata in images created by their software. Images created with Meta’s own AI tools are already labeled “Imagined with AI.”
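To show what checking for such markers can look like, here is a minimal sketch that scans a file’s raw bytes for provenance strings associated with C2PA manifests and the IPTC “trained algorithmic media” source type. The marker list is illustrative, and a real pipeline would parse the embedded metadata structures rather than substring-matching.

```python
# Minimal sketch: scan a file's raw bytes for provenance markers of the
# kind the labeling scheme depends on. The marker strings are illustrative;
# real detectors parse the embedded XMP/JUMBF metadata structures.
MARKERS = (
    b"c2pa",                     # label used by C2PA content-credential manifests
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
)


def has_provenance_marker(path: str) -> bool:
    """Return True if any known provenance marker appears in the file bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in MARKERS)


if __name__ == "__main__":
    print(has_provenance_marker("example.jpg"))
```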
There are still gaps: many image generators do not include these kinds of markers at all. Meta said it’s working on tools to automatically detect AI content even when it lacks watermarks or metadata.
Soon, AI-generated images posted on Instagram, Facebook, or Threads may carry a label disclosing they were the product of sophisticated AI tools, which can generate highly plausible images, videos, audio, and text from simple prompts.
Meta, which owns all three platforms, announced on Tuesday that it will start labeling images created with leading artificial intelligence tools. As the technology rapidly advances, companies that build and host AI software are coming under increased pressure to address its potential to mislead people.
How Bad Actors Could Sidestep Watermarks to Mislead Voters: The C2PA Initiative and a Warning from Hany Farid
Those concerns are particularly acute as millions of people vote in high-profile elections around the world this year. Experts and regulators say that deepfakes could be used to amplify efforts to mislead voters.
Farid, who has advised the C2PA initiative, cautions that anyone interested in using generative AI deceptively will likely turn to tools that don’t watermark their output or otherwise betray its nature. For example, the creators of the fake robocall that used President Joe Biden’s voice to target some New Hampshire voters last month added no disclosure of its origins.