OpenAI, ChatGPT, and Gemini: Text-Only Chatbots as a Transitory Stage in the Future of Artificial Intelligence
That vision of the future of artificial intelligence bears a strong resemblance to the one OpenAI showed off on Monday. OpenAI revealed a new interface that will allow users to converse with ChatGPT by voice and talk about what they see through a phone or computer screen. That version of ChatGPT, powered by a new AI model called GPT-4o, also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness.
OpenAI said the new experience feels wonderful and that it will be giving everyone the ability to use it over the next few weeks.
At another point in the demo, ChatGPT asked OpenAI researcher Barret Zoph how it could make his day better. When Zoph asked the chatbot to look at a selfie of him and say what emotions he was showing, ChatGPT responded, “I’ll put my emotional detective hat on,” and warmly said, “It looks like you’re feeling pretty happy and cheerful … whatever’s going on, it looks like you’re in a great mood.”
OpenAI CEO Sam Altman highlighted the new interface in a Monday post, saying it feels like something from the movies and that it’s still a bit surprising that it’s real. “Getting to human-level response times and expressiveness turns out to be a big change,” he wrote.
Dave Burke, vice president of engineering for Android, said he thinks the technology now exists to build really exciting assistants. “We need to be able to have a computer system that understands what it sees, and I don’t think we had the technology back then to do it well. Now we do.”
In response to spoken commands, Astra was able to make sense of objects and scenes viewed through the devices’ cameras and converse about them in natural language. It remembered where a person had left a pair of glasses, read and analyzed code from a computer screen, answered questions about a computer speaker’s components, and identified the London neighborhood it was shown.
In an interview ahead of the event, Hassabis said he thinks text-only chatbots will prove to be a transitory stage in the march toward more sophisticated artificial intelligence helpers. That was the vision behind Gemini all along, according to Hassabis: “That’s why we made it multimodal.”
Now that the new versions of Gemini and ChatGPT can see, hear, and speak, it’s not clear what place they’ll find in people’s personal and working lives.
Circle to Search: Is Google Assistant Going to the Graveyard? A Conversation with Samat and Meissner at Google’s I/O Developer Conference
Nearly 10 years ago, the company introduced a feature called Now on Tap, which helped you find information related to what you were looking at. Talking about a movie with a friend over text? Now on Tap could pull up details about the title without your leaving the texting app. Looking at a restaurant on Yelp? Just a tap would surface OpenTable recommendations on the phone.
I was fresh out of college, and these improvements felt exciting and magical. The feature’s ability to understand what was on the screen and predict the actions you might want to take felt future-facing, and it was one of my favorite Android features. It slowly morphed into Google Assistant, which was great in its own right, but not quite the same.
Today, at Google’s I/O developer conference in Mountain View, California, the new features Google is touting in its Android operating system feel like the Now on Tap of old, letting you harness contextual information around you to make using your phone a bit easier. These features are powered by years of improvements in large language models.
Circle to Search, which is similar to Now on Tap, represents a new way of approaching Search on mobile. Rather than typing into a search box, Samat says, “You circle what you want to search on the screen.” It’s fun and modern to use, which makes it skew younger as well.
Samat was clear that Gemini isn’t just giving answers; it’s showing how to solve problems. Later this year, Circle to Search will be able to tackle more complex problems involving diagrams and graphs, powered by models fine-tuned for education.
Gemini is increasingly overshadowing Google Assistant. Really—when you fire up Google Assistant on most Android phones these days, there’s an option to replace it with Gemini instead. I asked whether this meant Assistant was going to the graveyard.