
There is not enough protection for whistleblowers on artificial intelligence safety, say former OpenAI employees

Former OpenAI Employees Speak Out on AI Safety

Several former OpenAI employees warned in an open letter that advanced AI companies like OpenAI stifle criticism and oversight, especially as concerns over AI safety have increased in the past few months.

The letter says that while they believe in AI’s potential to benefit society, they also see risks, such as the entrenchment of inequalities, manipulation and misinformation, and the possibility of human extinction. While there are important concerns about a machine that could take over the planet, today’s generative AI has more down-to-earth problems, such as copyright violations, the inadvertent sharing of problematic and illegal images, and concerns that it can mimic people’s likenesses and mislead the public.

The signatories say that current protections for whistleblowers are insufficient because they focus on illegal activity rather than concerns that are mostly unregulated. The Department of Labor states that workers who report violations involving wages, discrimination, safety, fraud, or withheld time off are protected by whistleblower laws, meaning employers cannot fire, lay off, demote, or reduce the hours of those who come forward. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues,” the letter reads.

A Note for OpenAI Employees

Are you an employee at OpenAI? We’d like to hear from you. Will Knight can be reached using a nonwork phone or computer.

Last November, OpenAI’s board fired CEO Sam Altman, alleging that he had failed to disclose information and deliberately misled them. After a very public tussle, Altman returned to the company and most of the board was ousted.