Llama 3 Isn’t Open Source Yet, but It’s Definitely Going to Be a Major AI Product
Meta has yet to make the final call on whether to open source the 400-billion-parameter version of Llama 3, since the model is still being trained. Zuckerberg downplays the idea that safety concerns would keep it closed.
He says he doesn’t think anything at the level of what Meta or others in the field are working on in the next year is in the ballpark of those kinds of risks, so he believes the company will be able to open source it.
A key focus for Llama 3 was meaningfully decreasing its false refusals — the number of times the model says it can’t answer a prompt that is actually harmless. One example is a model refusing a benign request simply because of how it’s worded. Another is one I gave him during an interview last year, when the earliest version of Meta AI wouldn’t tell me how to break up with someone.
The jump is a good example of how quickly these models are being scaled up. The biggest version of Llama 2, released last year, had 70 billion parameters, whereas the coming large version of Llama 3 will have over 400 billion, Zuckerberg says. Llama 2 was trained on 2 trillion tokens (essentially the words, or units of basic meaning, that make up a model’s training data), while the big version of Llama 3 was trained on over 15 trillion. OpenAI, by comparison, has yet to publicly confirm the parameter or token counts for its latest models.
“I don’t think that today many people really think about Meta AI when they think about the main AI assistants that people use,” he admits. “But I think that this is the moment where we’re really going to start introducing it to a lot of people, and I expect it to be quite a major product.”
There’s a comparison to be made here to Stories and Reels, two era-defining social media formats that were both pioneered by upstarts — Snapchat and TikTok, respectively — and then tacked onto Meta’s apps in a way that made them even more ubiquitous.
The Meta AI Assistant: A General-Purpose Chatbot with Real-Time Search and Image Generation
While it has only been available in the US to date, Meta AI is now being rolled out in English to Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe, with more countries and languages coming. That’s still a far cry from a truly global AI assistant, but the wider release brings it closer to reality.
The Meta AI assistant is the only chatbot I know of that now integrates real-time search results from both Bing and Google — Meta decides when either search engine is used to answer a prompt. Its image generation has also been upgraded to create animations (essentially GIFs), and high-res images now generate on the fly as you type. The panel of prompt suggestions when you first open a chat window is meant to demystify what a general-purpose chatbot can do.
The evaluation set contains 1,800 prompts covering 12 use cases, including asking for advice, brainstorming, classification, closed question answering, coding, creative writing, interpretation, and summarization.
It should also be noted that benchmark testing of AI models, though helpful for understanding just how powerful they are, is imperfect. There’s always the possibility that a model has already seen the benchmark questions in its training data, which would inflate its scores.