
Sam Altman’s sudden exit is a big deal

Mira Murati: OpenAI's Minister of Truth and the Story of How AI Could Change the World

According to a report by Bloomberg, executives at Microsoft were not told of the exit in advance and were angry about it.

Until the dramatic departure of OpenAI’s cofounder and CEO Sam Altman on Friday, Mira Murati was its chief technology officer—but you could also call her its minister of truth. In addition to heading the teams that develop tools such as ChatGPT and Dall-E, it’s been her job to make sure those products don’t mislead people, show bias, or snuff out humanity altogether.

The surprising capabilities of ChatGPT, such as solving complex puzzles and handling questions that appear to require human-like reasoning, stunned AI researchers, amazed the public, and triggered an arms race among big tech companies to build more powerful AI. The bot's success turned it into a technology celebrity, a status it holds to this day.

Altman appeared yesterday at the Asia-Pacific Economic Cooperation summit in San Francisco, telling hundreds of business and government leaders that AI systems could solve humanity's most pressing problems if their development were pursued responsibly.

He said that the species is on a path to self-destruction, and that if we want to flourish for tens of thousands, hundreds of thousands, even millions of years, we need new technology.

Mira Murati: My background is in engineering, and I worked in aerospace, automotive, VR, and AR. At both Tesla and Leap Motion I was working on applications of AI in the real world. I very quickly came to believe that AGI would be the last and most important major technology we build, and I wanted to be at the heart of it. OpenAI was the only organization at the time that was incentivized both to work on the capabilities of AI technology and to make sure it goes well. When I joined in 2018, I began working on our supercomputing strategy and managing a couple of research teams.

From Nonprofit to For-Profit: What OpenAI Learned About Building AI That Benefits All of Humanity

It is hard to remember the big moments, because we live in the future and see crazy things every day. One was GPT-3 being able to translate. I speak four languages: Italian, Albanian, English, and Maltese. I remember creating paired prompts of English and Italian, and all of a sudden, even though we never trained it to translate into Italian, it could do it fairly well.
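To make the paired-prompt anecdote concrete, here is a minimal sketch in Python, purely illustrative and not a description of her actual workflow: a few English/Italian example pairs are laid out in a repeating pattern, and the Italian slot of the final pair is left blank for a completion-style model such as GPT-3 to fill in. The example sentences and the helper name are invented for illustration.

    # Illustrative only: build a few-shot "paired prompt" of English/Italian
    # examples and leave the final Italian line blank for a model to complete.
    few_shot_pairs = [
        ("The weather is nice today.", "Oggi il tempo è bello."),
        ("Where is the train station?", "Dov'è la stazione dei treni?"),
    ]

    def build_translation_prompt(pairs, new_sentence):
        # Lay the examples out in a repeating English/Italian pattern.
        lines = []
        for english, italian in pairs:
            lines.append(f"English: {english}")
            lines.append(f"Italian: {italian}")
        # End with a new English sentence and an empty Italian slot.
        lines.append(f"English: {new_sentence}")
        lines.append("Italian:")
        return "\n".join(lines)

    prompt = build_translation_prompt(few_shot_pairs, "I would like a coffee, please.")
    print(prompt)
    # The resulting text would be sent to a completions-style model, which
    # tends to continue the pattern with an Italian translation even though
    # it was never explicitly trained as a translator.

The point of the sketch is the shape of the prompt; which model or API the text is sent to is left out deliberately.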

You were at OpenAI when the organization changed from a nonprofit to a for-profit one. How did you feel about that?

It was not something that was done lightly. To really understand how to make our models better and safer, you need to deploy them at scale, and that costs a lot of money. Generous nonprofit donors aren't going to give billions the way investors will, so you have to have a business plan. There isn't any other structure like this. The nonprofit's mission remains the most important thing.

Among those who quit were three researchers working on language models, including a professor at MIT whom Altman had recruited to work on reinforcement learning, a branch of artificial intelligence.

The Information reported that Sutskever told employees at an emergency all-hands meeting on Friday that the board was doing its duty to make sure that OpenAI benefits all of humanity.

OpenAI wouldn’t provide any further comment on the situation. Neither Brockman nor Sutskever responded to requests for comment. Inquiries sent to the three researchers who quit went unanswered.