The Zero Trust AI Framework: A New Proposal to Redefine Digital Shielding Laws So Generative AI Services Can Be Held Liable for Defamation
Accountable Tech and its partners suggested bright-line rules, or policies that are clearly defined and leave no room for subjectivity, as lawmakers continued to meet with artificial intelligence companies.
The group sent the framework to politicians and government agencies mainly in the US this month, asking them to consider it while crafting new laws and regulations around AI.
The Zero Trust AI framework also seeks to redefine the limits of digital shielding laws like Section 230 so generative AI companies are held liable if the model spits out false or dangerous information.
“We want to get the framework out now since the technology is evolving quickly, but new laws can’t move at that speed,” says Jesse Lehrich, co-founder of Accountable Tech.
As the government continues to figure out how to regulate generative AI, the group said current laws around antidiscrimination, consumer protection, and competition help address present harms.
For years, researchers have warned about bias in artificial intelligence systems. A recent Rolling Stone article showed how experts such as Timnit Gebru were ignored by companies after sounding the alarm on the issue.
“The idea behind Section 230 makes sense in broad strokes, but there is a difference between a bad review on Yelp because someone hates the restaurant and GPT making up defamatory things,” Lehrich says. (Section 230 was passed in part precisely to shield online services from liability over defamatory content, but there’s little established precedent for whether platforms like ChatGPT can be held liable for generating false and damaging statements.)
The AI Ecosystem: Can Regulators Rely on Big Tech to Enforce Its Own Rules? An Expert Comment on “The Challenge of Regulation and Compliance in Artificial Intelligence”
The framework’s proposed bright-line rules include prohibiting AI use for emotion recognition, predictive policing, facial recognition used for mass surveillance in public places, social scoring, and fully automated hiring, firing, and HR management. The groups also ask to ban collecting or processing unnecessary amounts of sensitive data for a given service, collecting biometric data in fields like education and hiring, and “surveillance advertising.”
Accountable Tech also urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial AI services, in order to limit the influence of Big Tech companies in the AI ecosystem. Microsoft has invested in OpenAI, one of the best-known generative AI companies, while Google released its large language model Bard and is developing other AI models for commercial use.
The proposal mirrors the pharmaceutical industry’s approach, in which companies submit to regulation before developing or releasing an AI model.
The nonprofits don’t want a single regulatory body; however, Lehrich says lawmakers must grapple with whether splitting up the rules would make regulation more flexible or bog down enforcement.
Lehrich says it’s understandable that smaller companies might balk at the amount of regulation the groups seek, but he believes there is room to tailor policies to company size. He says regulators need to differentiate between the different stages of the AI supply chain.
Content Self-Determination for the 21st Century: A Wired Opinion Analysis of User Choices in the EU’s Digital Services Act
TikTok recently announced that its users in the European Union will soon be able to switch off its infamously engaging content-selection algorithm. The EU’s Digital Services Act (DSA) is driving this change as part of the region’s broader effort to regulate AI and digital services in accordance with human rights and values.
Nita Farahany is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology and a professor at Duke University.
A well-structured plan requires a combination of regulations, incentives, and commercial redesigns that focus on cognitive liberty. Regulatory standards must govern user engagement models, information sharing, and data privacy. Strong legal safeguards must be in place against interference with mental privacy and against manipulation. Companies must be transparent about how the algorithms they deploy work, and they must have a duty to assess, disclose, and adopt safeguards against undue influence.
Technology companies should adopt design principles that embody cognitive liberty. Options for greater control over notifications on Apple devices are a step in the right direction. Other features that enable self-determination, such as labeling content with “badges” that specify whether it is human- or machine-generated, or asking users to engage critically with an article before resharing it, should become the norm across digital platforms.