
Sam Altman has reached an agreement to return to OpenAI

How OpenAI Is Structured, and Why: A Conversation with Ilya Sutskever Before the Ouster of Sam Altman

In June I had a conversation with chief scientist Ilya Sutskever at OpenAI’s headquarters, as I reported out WIRED’s October cover story. The structure of the company was one of the topics we discussed. OpenAI began as a nonprofit research lab whose mission was to develop artificial intelligence at or beyond human level, termed artificial general intelligence or AGI, in a safe way. The company found a path in large language models that produced strikingly fluent text, but developing and deploying that path took a lot of money. So OpenAI created a commercial entity to draw outside investors, and it netted a major partner: Microsoft. Virtually everyone in the company worked for this new for-profit arm. But limits were put on the company’s commercial life: the returns delivered to investors would be capped at 100 times what they had invested, after which OpenAI would revert to being a pure nonprofit. The original nonprofit board, which answered only to the goals of the founding mission (and maybe God), governed the whole shebang.
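To make the cap concrete, here is a minimal sketch of the arithmetic in Python, assuming the simplest reading of the structure described above: an investor keeps at most 100 times the original investment, and anything beyond that point is treated as belonging to the nonprofit. The function name and the dollar figures are illustrative, not OpenAI’s actual deal terms.

```python
def split_capped_return(investment: float, gross_return: float,
                        cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross return between an investor and the controlling nonprofit.

    Illustrative only: the investor receives at most `cap_multiple` times the
    original investment; any excess is assumed to flow to the nonprofit.
    """
    cap = investment * cap_multiple
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)
    return to_investor, to_nonprofit


# Hypothetical example: a $10M stake that somehow returns $1.5B gross.
investor_share, nonprofit_share = split_capped_return(10e6, 1.5e9)
print(f"investor keeps ${investor_share:,.0f}")    # capped at 100x: $1,000,000,000
print(f"nonprofit keeps ${nonprofit_share:,.0f}")  # the remaining $500,000,000
```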

That vacuum has left room for rumors, including that Altman was too deferential to Microsoft, or that he was devoting too much time to side projects. It has also nurtured conspiracy theories, like the idea that OpenAI had created artificial general intelligence (AGI) and that the board had flipped the kill switch on the advice of chief scientist, cofounder, and board member Ilya Sutskever.

There have been two signal events recently. One of them you’ve heard about: Sam Altman, the company’s chief executive, was fired by the nonprofit board that governs OpenAI. The decision was unexpected and largely unexplained. A cryptic statement said only that Altman had not been consistently candid in his communications with the board.

Artificial intelligence researchers have long told a cautionary tale about the machine that can’t be turned off. In the story, a powerful A.I. is developed. The designers are thrilled, then frightened. They go to pull the plug, only to learn the A.I. has copied its code elsewhere, perhaps everywhere.

The Board’s Reversal in the OpenAI / Microsoft Saga: How Altman’s Return Came Together

During the whole saga, the board members who opposed Altman withheld an actual explanation for why they fired him, even under the threat of lawsuits from investors. On Sunday, a key member of the board, Ilya Sutskever, flipped back to Altman’s camp, leaving the remaining three board members more vulnerable.

Helen Toner, who was reportedly the key board member in the move to oust Altman, tweeted “and now, we all get some sleep,” which will be very funny at a later time, we’re sure.

In a statement shared with The Verge, Kelly Sims said that OpenAI has the potential to be a consequential company. “Sam and Greg possess a profound commitment to the company’s integrity, and an unmatched ability to inspire and lead. We couldn’t be more excited for them to come back to the company they founded and helped build into what it is today.”

All key parties have posted about the deal to return, which appears to be a done deal minus some last minute paperwork. Altman said that “everything I’ve done over the past few days has been in service of keeping this team and its mission together.”

A person with direct knowledge of the negotiations says that the sole job of this small, initial board is to vet and appoint a new formal board of up to nine people that will reset the governance of OpenAI. The expanded board will most likely include a seat for Microsoft, as well as one for Altman himself. Microsoft CEO Satya Nadella, during his press tour, made clear that the company didn’t want any more surprises.

OpenAI’s dramatic reversal came late Tuesday night, after five days of turmoil in the artificial intelligence community.

The company, maker of the popular ChatGPT, said it would also create a new board of directors. This comes after the former board voted to fire Altman as CEO late last week.

What Happened to ChatGPT’s CEO? Why OpenAI Has a Capped-Profit Structure and a Board to Watch Over It

I joked that the chart mapping out the relationship looked like something a future GPT might come up with when asked to design a tax dodge. Sutskever said the company was the only one with a capped-profit structure. If you believe, as we do, that if we succeed really well these graphics cards are going to take my job and everyone’s jobs, he reasoned, it makes sense that the company does not make truly unlimited returns. In the meantime, to make sure that the profit-seeking part of the company doesn’t shirk its commitment to keeping the AI from getting out of control, there’s that board, keeping an eye on things.

Even ChatGPT might have struggled to dream up such a convoluted story of corporate intrigue. And the questions about what led to Altman’s dismissal are not over.

“It turns out that they couldn’t fire him, and that was bad,” says Toby Ord, senior research fellow in philosophy at Oxford University, and a prominent voice among people who warn AI could pose an existential risk to humanity.

“What I know with certainty is we don’t have AGI,” says David Shrier, a professor of practice in AI and innovation at Imperial College Business School in London. “I know that there was a huge failure of governance.”