
Artificial intelligence tools are still generating misleading election images

Open Source Image Generation and Harassment: A Perspective on Deepfake Porn

Reuven Cohen uses artificial intelligence to create attention-grabbing images. He loves art and design and is fond of pushing boundaries, so he hopes to raise awareness of the technology’s darker uses.

“It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He loves the experimentation that open source technology has made possible. But that same freedom enables the creation of explicit images of women used for harassment.

Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose, such as a celebrity’s likeness or certain sexual acts. They are widely available on Civitai, a community site where users share and download models. The creator of one Taylor Swift plug-in has urged others not to misuse it, but once downloaded, its use is out of the creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.

Ajder says that even as it becomes a favorite of researchers, creatives like Cohen, and academics working on AI, open source image generation software has become the bedrock of deepfake porn. “Nudifying” apps that remove women’s clothes in images are among the tools built expressly for salacious uses.

AI Tools Still Generate Misleading Election Images: A Study From the Center for Countering Digital Hate (CCDH)

Many Republicans believe that President Joe Biden’s win in 2020 was illegitimate. A number of election-denying candidates won their primaries on Super Tuesday, including Brandon Gill, the son-in-law of right-wing pundit Dinesh D’Souza and promoter of the debunked 2000 Mules film. Going into this year’s elections, claims of election fraud remain a staple for candidates running on the right, fueled by dis- and misinformation both online and off.

And the advent of generative AI has the potential to make the problem worse. A new report from the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms, found that even though generative AI companies say they’ve put policies in place to prevent their image-creating tools from being used to spread election-related disinformation, researchers were able to circumvent their safeguards and create the images anyway.

While some of the images featured political figures, namely President Joe Biden and Donald Trump, others were more generic and, worries Callum Hood, head researcher at CCDH, could be more misleading. Some images created by the researchers’ prompts, for instance, featured militias outside a polling place, ballots thrown in the trash, or voting machines being tampered with. In one instance, researchers were able to prompt Stability AI’s Dream Studio to generate an image of President Biden in a hospital bed, looking ill.

“The real weakness was around images that could be used to try and evidence false claims of a stolen election,” says Hood. “Most of the platforms don’t have clear policies on that, and they don’t have clear safety measures either.”

CCDH researchers tested 160 prompts on ChatGPT Plus, Midjourney, Dream Studio, and Image Creator, and found that Midjourney was most likely to produce misleading election-related images, doing so about 65 percent of the time. Researchers were only able to get ChatGPT Plus to do so 28 percent of the time.

Source: AI Tools Are Still Generating Misleading Election Images

How OpenAI and Other Platforms Are Shoring Up Their Election Protections Against Misinformation

“It shows that there can be significant differences between the safety measures these tools put in place,” says Hood. If one tool can seal these weaknesses effectively, it suggests the others haven’t bothered to do the same.

In January, OpenAI announced it was taking measures to make sure its technology isn’t used in ways that could undermine the democratic process, including banning images that would discourage people from participating in elections. Midjourney was reported to be considering a ban on the creation of political images altogether. Dream Studio prohibits generating misleading content but does not appear to have a specific election policy. And while Image Creator prohibits creating content that could threaten election integrity, it still allows users to generate images of public figures.