
The nonprofit group says AI companies need to show that their AI systems are safe

Zero Trust AI Governance: A Framework for Proving AI Is Safe, Says Nonprofit Group

Nonprofits Accountable Tech, AI Now, and the Electronic Privacy Information Center (EPIC) released policy proposals that seek to limit how much influence big AI companies have over regulation and to expand the power of government agencies to act against some uses of generative AI.

The groups sent the framework to politicians in the US, asking them to consider it while crafting new laws and regulations around artificial intelligence.

The framework, which they call Zero Trust AI Governance, rests on three principles: enforce existing laws; create bold, easily implemented bright-line rules; and place the burden on companies to prove AI systems are not harmful in each phase of the AI lifecycle. Its definition of AI encompasses both generative AI and the foundation models that enable it, along with algorithmic decision-making.

“We wanted to get the framework out now because the technology is evolving quickly, but new laws can’t move at that speed,” Jesse Lehrich, co-founder of Accountable Tech, tells The Verge.

The Federal Trade Commission is investigating OpenAI over potential consumer harm. Other government agencies have also warned AI companies that they will be closely monitoring the use of AI in their respective sectors.

Discrimination and bias in AI are problems researchers have warned about for years. A recent Rolling Stone article showed how Timnit Gebru sounded the alarm for years but was ignored by the companies that employed her.

Source: AI companies must prove their AI is safe, says nonprofit group

Do Online Services Have a Safe AI? AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group

“The idea behind Section 230 makes sense in broad strokes, but there is a difference between a bad review on Yelp because someone hates the restaurant and GPT making up defamatory things,” Lehrich says. Section 230 was passed to shield online services from liability for content posted by their users, but there is no precedent for applying it to platforms that themselves generate false and damaging statements.

The framework’s proposed bright-line rules include prohibiting any use of AI for emotion recognition, predictive policing, facial recognition, and fully automated hiring, firing, and HR management. The groups also want to ban the collection of unnecessary amounts of sensitive data for a given service, as well as the collection and use of “biometrics in fields like education and hiring” and “surveillance advertising.”

Accountable Tech also urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial AI services, in order to limit the impact of Big Tech companies on the AI ecosystem. Microsoft, a major cloud provider, has invested heavily in OpenAI and works with it on generative AI, while Google, another cloud giant, has released its own AI chatbot, Bard.

The groups propose an approach similar to one used in the pharmaceutical industry, in which companies submit to regulatory review before deploying an AI model to the public and to ongoing monitoring after commercial release.

The nonprofits do not call for a single government regulatory body. However, Lehrich says whether to split the rules up across agencies is a question lawmakers must grapple with, since doing so could make regulation more flexible or could bog down enforcement.

Lehrich says it’s understandable that smaller companies might balk at the amount of regulation they seek, but he believes there is room to tailor policies to company size.

“Realistically, we need to differentiate between the different stages of the AI supply chain and design requirements appropriate for each phase,” he says.

Source: AI companies must prove their AI is safe, says nonprofit group

OpenAI Wants GPT-4 to Solve the Content Moderation Dilemma

OpenAI touts the approach as new and revolutionary, but AI-assisted moderation has been used for years. Meta already relies on automated systems to moderate the vast majority of harmful and illegal content, even though the vision of a perfect automated system has not panned out. Smaller companies that lack the resources to develop their own moderation technology might find OpenAI’s offering appealing.

OpenAI also claims GPT-4 can help develop a new content policy within hours, whereas drafting, labeling, gathering feedback, and refining a policy usually takes weeks or several months. And the company notes that exposure to harmful content, such as videos of child abuse, can take a toll on human moderators’ well-being.
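To make that workflow concrete, here is a minimal, illustrative sketch of policy-based labeling with GPT-4 via the OpenAI Python client (v1.x). The policy text, labels, and prompt are placeholder assumptions for illustration, not OpenAI’s actual moderation policies or tooling.

```python
# Illustrative sketch only: a toy policy applied to posts with GPT-4.
# Requires the openai package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Placeholder policy. In the workflow OpenAI describes, policy experts draft
# guidelines like this and refine the wording when the model's labels diverge
# from expert judgments.
POLICY = """Label the post with exactly one category:
- ALLOW: ordinary speech, criticism, or opinion
- REVIEW: potentially misleading or aggressive content that needs a human look
- REMOVE: clear harassment, threats, or instructions for illegal activity"""

def label_post(post: str) -> str:
    """Ask GPT-4 to apply the toy policy to a single post and return its label."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep labeling as deterministic as possible
        messages=[
            {"role": "system", "content": "You are a content moderation assistant.\n" + POLICY},
            {"role": "user", "content": f"Post: {post}\nRespond with only the label."},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(label_post("This restaurant was terrible, never going back."))
```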

It is clear that perfect content moderation at scale is impossible. Both humans and machines make mistakes, and while the error rate might be low, millions of harmful posts still slip through, and just as many harmless posts get hidden or deleted.

Source: OpenAI wants GPT-4 to solve the content moderation dilemma

The Challenge of Labeling Harmful and Ambiguous Content: Clickworkers and Human Workers in the OpenAI Era

Human labor remains important to OpenAI: thousands of clickworkers, many of them in African countries such as Kenya, annotate and label content. The texts can be disturbing, the work is stressful, and the pay is poor.

In particular, the gray area of misleading, false, and aggressive content that isn’t necessarily illegal poses a major challenge for automated systems. Even human experts struggle to label such posts, and machines frequently get them wrong. The same applies to satire and to images that are used to document crimes.