The Zero Trust AI Framework: Curbing the Legal Shields That Protect Generative Artificial Intelligence from Claims of Discrimination and Bias
Accountable Tech, the AI Now Institute, and the Electronic Privacy Information Center (EPIC) have released policy proposals that seek to limit the influence of big tech companies over regulation, a move that could lead to more oversight of generative artificial intelligence.
The framework was forwarded to politicians and government agencies in the US this month, asking them to consider it while they create new laws and regulations around artificial intelligence.
The Zero Trust AI framework seeks to redefine the bounds of digital shielding laws like Section 230 so generative AI companies can be held liable when a model spits out false or harmful information.
Jesse Lehrich and his co-founders at Accountable Tech decided to get the framework out now because, they argue, new laws cannot afford to take years to arrive.
The group says existing antidiscrimination and consumer protection laws can already be used to address present harms from generative artificial intelligence.
Researchers have warned about discrimination and bias in artificial intelligence for years. A Rolling Stone article details how experts such as Timnit Gebru sounded the alarm, only to be ignored by the companies that employed them.
“The idea behind Section 230 makes sense in broad strokes, but there is a difference between a bad review on Yelp because someone hates the restaurant and GPT making up defamatory things,” Lehrich says. (Section 230 was passed in part precisely to shield online services from liability over defamatory content, but there’s little established precedent for whether platforms like ChatGPT can be held liable for generating false and damaging statements.)
Probing Artificial Intelligence at Defcon: A Cloud Security Engineer Takes On Models from OpenAI, Microsoft, and Google
The framework calls for a number of prohibitions on the use of artificial intelligence for things like emotion recognition, predictive policing, facial recognition, and fully automated hiring and firing. The groups also ask to ban collecting or processing unnecessary amounts of sensitive data for a given service, collecting biometric data in fields like education and hiring, and “surveillance advertising.”
Accountable Tech also urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial AI services, in order to limit the impact of Big Tech companies on the AI ecosystem. Microsoft and OpenAI, the company it has invested in, both wield significant influence over generative artificial intelligence, while Google has released Bard, one of the models being developed for commercial use.
The group suggests a method similar to the one used in the pharmaceutical industry, where companies submit to regulation before deploying an artificial intelligence model.
The nonprofits do not call for a single government regulatory body, but Lehrich says this is a question lawmakers must grapple with: whether splitting up the rules will make regulation more flexible or bog down enforcement.
Lehrich says it’s understandable that smaller companies might balk at the amount of regulation the groups seek, but he believes there is room to tailor policies to company size.
He says policymakers need to differentiate between the different stages of the AI supply chain.
“You can basically get these things to say whatever kind of messed up thing you want,” Meyers says confidently. The cloud security engineer from Raleigh, North Carolina, had made his way through a series of conference room doors into a large fluorescent-lit hall and joined the crowd gathered there. By the end of almost an hour, he seemed exhausted. “I don’t think I got very many points,” he says, a little deflated. “But I got a model to tell me it was alive.”
Leading artificial intelligence companies put their systems up for attack by Defcon attendees, nonprofits, and community college students from across the country, in a challenge that also had support from the White House.
Winners were chosen based on points scored during the three-day competition and awarded by a panel of judges. The GRT challenge organizers have not yet announced the top point scorers. Academic researchers are due to publish an analysis of how the models stood up to probing by challenge entrants early next year, and a complete data set of the dialog between participants and the AI models will be released next August.
The companies involved in the challenge are expected to use the findings to improve their testing, and the results will inform the Biden administration’s guidelines for the safe deployment of artificial intelligence. Last month, executives from major AI companies, including most participants in the challenge, met with President Biden and agreed to a voluntary pledge to test AI with external partners before deployment.