OpenAI vs. the Clock: Anthropic's Claude 2.1 Update Arrives Amid the Boardroom Drama
OpenAI — the company behind the blockbuster artificial intelligence (AI) bot ChatGPT — has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company’s board.
Anthropic gave its Claude 2.1 model two key updates: the ability to upload more data to the chatbot at once, and a reduced tendency to make things up. Claude's context window is now 200,000 tokens, roughly the length of a 500-page book. (Sorry, Leo Tolstoy fans: you'll have to wait for future updates to analyze all of War and Peace in a single prompt.) By comparison, the context window of the GPT-4 Turbo model, announced by Altman shortly before his firing, is capped at 128,000 tokens.
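As a rough sanity check on that "500-page book" claim, here is a back-of-the-envelope sketch. The tokens-per-word and words-per-page figures below are common heuristics, not vendor specifications, and the exact token count of any text depends on the model's tokenizer:

```python
# Rough estimate of how many book pages fit in a model's context window.
# Assumptions (heuristics, not vendor specs): ~1.3 tokens per English word,
# ~300 words per printed page.
TOKENS_PER_WORD = 1.3
WORDS_PER_PAGE = 300

def pages_that_fit(token_limit: int,
                   tokens_per_word: float = TOKENS_PER_WORD,
                   words_per_page: int = WORDS_PER_PAGE) -> int:
    """Approximate number of book pages that fit in a context window."""
    words = token_limit / tokens_per_word
    return int(words / words_per_page)

print(pages_that_fit(200_000))  # Claude 2.1: roughly 500 pages
print(pages_that_fit(128_000))  # GPT-4 Turbo: roughly 330 pages
```

Under these assumptions, 200,000 tokens works out to about 512 pages, consistent with the "500-page book" comparison, while 128,000 tokens is closer to 330 pages.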
And Anthropic claims that the new Claude is more likely to admit when it's unsure of an answer, rather than fibbing with the utmost confidence. "We tested Claude 2.1's honesty by curating a large set of complex, factual questions that probe known weaknesses in current models," reads the company's blog post. Confidently stated falsehoods remain a big credibility problem for chatbots.
Multimodal Chatbots: Stable Video Diffusion Launches Alongside Claude 2.1, Plus a Coincidence of Timing
The animations produced by Stability AI's new text-to-video model, Stable Video Diffusion, can be a mixture of the beautiful and the disturbing. In addition to text-to-video capabilities, Stable Video Diffusion can transform still images into videos by adding motion.
While this isn't technically a new feature, OpenAI rolled out ChatGPT's voice capabilities to everyone during the short period while Altman was out as CEO. Previously, only paying subscribers had access to the feature.
It's not quite Spike Jonze's Her yet, but the software developers at OpenAI took another big step towards their goal of "multimodality" by giving the chatbot the ability to hold a spoken conversation with you. The idea is that a chatbot can be even more powerful if it can accept inputs and provide outputs in multiple media, such as voice, text, and images. Who knows when it'll learn how to smell.
It seems like every week there's a new launch or announcement from a major player, but industry commentators suggest that the near-simultaneous launches of Stable Video Diffusion and Claude 2.1 were probably just a coincidence.
Commercial Pressure and the Rise of Artificial Intelligence: What the OpenAI Debacle Highlights
West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. She thinks antitrust regulators need to scrutinize more closely how a small number of companies wield outsized power in this market. For a long time, regulators have taken a very light touch; the laws already on the books need to be enforced.
Commercial forces are acting against the responsible development of artificial-intelligence systems, a tension that the debacle at the company behind ChatGPT highlights.
The push to retain dominance is leading to toxic competition. "It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.
The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he "was not consistently candid in his communications with the board" and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practices".
Ilya Sutskever, OpenAI's chief scientist and a member of the board that fired Altman, had shifted his focus this July to a four-year project to ensure that future superintelligences work for the good of humanity.
After the board fired Altman, Sutskever expressed regret about the effects of his actions and was among the employees who signed a letter threatening to leave unless Altman was reinstated.
Helen Toner, of the Georgetown University Center for Security and Emerging Technology in Washington DC, is no longer a member of OpenAI's board, and Sutskever has also left it. The new board members include an ex-head of the software company Salesforce and a board member of the e-commerce platform Shopify.
Almost a year ago, OpenAI released ChatGPT, which catapulted the company to worldwide fame. The bot is based on a large language model, which uses statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as glimmers of logical reasoning) has astounded and worried scientists and the general public alike.
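The "statistical correlations between words" idea can be illustrated at toy scale with a bigram model: count which word follows which in training text, then sample from those counts. Real large language models use neural networks trained on billions of sentences, so this is only a minimal caricature of the principle, with a made-up one-line corpus:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=5, rng=None):
    """Sample a short word sequence, weighting by observed follower counts."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the model predicts the next word the model generates text"
counts = train_bigrams(corpus)
print(generate(counts, "the"))
```

Every word the toy model emits was seen in training, and frequent pairs ("the model") are sampled more often, which is the correlation-following behaviour that, at vastly greater scale, yields fluent chatbot responses.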
The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.
According to West, even these start-ups depend on the computing resources provided by three companies: Google, Microsoft and Amazon.
AI Safety Summit: Could Artificial General Intelligence Arrive Within 20 Years? Commentary on a Conversation with Hinton
Computer scientist Geoffrey Hinton, at the University of Toronto, is very concerned about the fast pace of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. Nature had not heard from Hinton about the events at OpenAI since 17 November.
OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. Still, some are starting to bet on it. Hinton once thought AGI would arrive on a 30-to-50-year timescale; now he thinks we'll get it in 5 to 20 years.
The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. Some two dozen nations agreed to work together on the problem, but what exactly they will do remains unclear.