Microsoft wants Congress to act against fraud created by artificial intelligence

A deepfake fraud statute to protect against AI-generated content, and its implications for technology and internet law

Microsoft is calling on Congress to pass a deepfake fraud statute that would give law enforcement officials a legal framework to prosecute AI-generated fraud. Microsoft president Brad Smith is also calling on lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”

While the FCC has already banned robocalls with AI-generated voices, generative AI makes it easy to create fake audio, images, and video, something we’re already seeing in the run-up to the 2024 presidential election. Elon Musk, for example, recently shared a manipulated campaign video featuring an AI-generated voice imitating Vice President Kamala Harris, a fake that appears to have violated X’s own policies against synthetic and manipulated media.

Microsoft has had to implement more safety controls for its own AI products, after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. Smith says the private sector has a responsibility to innovate and build safeguards that prevent the misuse of AI.

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”

A bipartisan pair of House lawmakers is introducing a bill that would strip tech companies of legal immunity if they fail to remove intimate AI deepfakes from their platforms.

Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA) unveiled the Intimate Privacy Protection Act, Politico first reported, “to combat cyberstalking, intimate privacy violations, and digital forgeries,” as the bill says. The bill amends Section 230 of the Communications Act of 1934, which currently shields online platforms from being held legally responsible for what their users post on their services. Under the Intimate Privacy Protection Act, that immunity could be taken away in cases where platforms fail to combat the kinds of harms listed. It creates a duty of care for platforms, a legal term meaning they are expected to act responsibly, which includes maintaining a reasonable process to address cyberstalking, intimate privacy violations, and digital forgeries.

Lawmakers on both sides of the aisle have long sought to narrow Section 230 protections for platforms they believe have abused a legal shield created when the industry was made up of much smaller players. But Republicans and Democrats can’t agree on how to change the statute. One notable exception was FOSTA-SESTA, which carved sex trafficking charges out of Section 230 protection.