Sam Altman has been named as the chief executive of OpenAI
Microsoft-backed artificial intelligence (AI) startup OpenAI has said it will establish a new initial board of directors as CEO Sam Altman returns, days after the previous board ousted him. The new board will comprise Bret Taylor as Chair, Larry Summers and Adam D’Angelo. Co-founder Greg Brockman, who had quit in protest of Altman’s removal, will also return to the company.
The mystery of the OpenAI chaos
A board member of the US-based artificial intelligence (AI) startup OpenAI said, “We are the only company in the world which has a capped profit structure.” He added, “If you think… AGI is going to take my job and your job and everyone’s jobs, it seems nice if that company wouldn’t make so much money.” Earlier, it was reported that OpenAI fired its CEO Sam Altman.
95 percent of OpenAI employees threaten to follow Sam Altman
OpenAI’s Co-founder and President Greg Brockman resigned from the company on Friday, hours after CEO Sam Altman was fired and Brockman himself was removed from his position as the company’s board chair. Twitch Co-founder Emmett Shear was later named OpenAI’s interim CEO. OpenAI is a venture capital-backed company that focuses on developing artificial intelligence (AI) technology for real-world applications.
Sam Altman ousted as CEO of OpenAI
US-based artificial intelligence (AI) startup OpenAI’s Co-founder and CEO Sam Altman has been removed from his role, and Greg Brockman is stepping down from his position as the chair of the board. The startup said that Altman was not “consistently candid in his communications with the board”, hindering its ability to exercise its responsibilities. Mira Murati, the startup’s Chief Technology Officer, has been appointed as the interim CEO.
‘I loved my time at OpenAI’: Sam Altman after his ouster
Sam Altman, who was removed as CEO of AI startup OpenAI on Friday, said, “I loved my time at OpenAI.” He has predicted that artificial intelligence will be “the greatest leap forward of any technological revolution we’ve had so far”. Altman co-founded OpenAI in 2015 and had previously served as the President of the startup accelerator Y Combinator.
Powerful computing efforts are being launched to boost research
US President Joe Biden has signed an executive order to ensure the “safe, secure and trustworthy” development of Artificial Intelligence (AI). The order directs agencies that fund life-science research to establish standards to protect against the use of AI to engineer dangerous biological materials. It also calls for the establishment of four artificial intelligence research institutes in the US within the next 1.5 years.
Artists can use new tools to disrupt the systems of artificial intelligence
Singer Casey Stoney has said that artists whose work is used by large artificial intelligence models should have recourse to get paid or credited. “The question for me is…is that truly analogous to…situation where I’m a very popular artist, people love to type my name into Stable Diffusion, you get images…that look like my life’s work, and I get $0 for that?” Stoney said.
Joe Biden has a secret weapon
US President Joe Biden on Monday signed an executive order to ensure the “safe, secure, and trustworthy” development and use of Artificial Intelligence (AI). “We need to ensure that we are taking care of our children and our grandchildren in a way that’s safe, secure, and trustworthy,” he said. The order directs the US Department of Commerce to develop guidance for labelling and watermarking AI-generated content.
Major attacks on Microsoft’s cloud services have prompted it to overhaul its software security
Microsoft has launched the Secure Future Initiative (SFI), which aims to “keep data safe” in the cloud. It plans to use automation and artificial intelligence to improve the security of its cloud services, cut the time it takes to fix vulnerabilities, enable better security settings out of the box, and harden its infrastructure to protect against encryption keys falling into the wrong hands.
Biden wants US government algorithms to be tested to make sure they don’t harm citizens
The White House Office of Management and Budget has released draft rules governing government use of Artificial Intelligence (AI). The draft rules would require testing and evaluation of algorithms to be carried out by people with no direct involvement in a system’s development, and encourage external “red teaming” tests of generative AI models.