OpenAI gave ChatGPT a memory

Personalization is coming to ChatGPT and custom GPTs: learning to remember what you tell it is a powerful strategy, and a fraught one.

It's a feature ChatGPT desperately needs. It's also a total minefield. OpenAI's strategy here sounds a lot like the way other internet services learn about you: they watch you use their services, note what you search for or click on or like, and build a profile of you over time.

But that approach, of course, makes a lot of people uncomfortable. The idea of your ChatGPT conversations being fed back into the system as training data to help personalize the bot is both cool and sleazy, and many users are already wary of it.

Each custom GPT has its own memory. OpenAI uses the Books GPT as an example: with memory turned on, it can automatically remember which books you've already read and which genres you like best. There are lots of other places in the GPT Store where you can imagine memory being useful, for that matter. Tutor Me could build a much better long-term course load once it knows what you know; Kayak could go straight to your favorite airlines and hotels; GymStreak could track your progress over time.

OpenAI says the model was trained to steer away from remembering things proactively, though it thinks there are plenty of cases where remembering is genuinely useful.

OpenAI is also not the first entity to toy with memory in generative AI. Google has emphasized “multi-turn” technology in Gemini 1.0, its own LLM. This means you can interact with Gemini Pro using a single-turn prompt—one back-and-forth between the user and the chatbot—or have a multi-turn, continuous conversation in which the bot “remembers” the context from previous messages.
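The single-turn versus multi-turn distinction comes down to how much context is sent with each request. A minimal sketch, assuming a generic chat API that takes a list of role/content messages (the `fake_model` stand-in and all names here are hypothetical, not Gemini's or OpenAI's actual API):

```python
# Sketch: single-turn vs. multi-turn chat. `fake_model` is a hypothetical
# stand-in for a real model call; it only reports how much context it saw.

def fake_model(messages):
    """Hypothetical model: echoes how many messages of context it received."""
    return f"(model saw {len(messages)} message(s))"

def single_turn(user_message):
    # Single turn: only the latest prompt is sent, with no prior context,
    # so the model cannot "remember" anything from earlier exchanges.
    return fake_model([{"role": "user", "content": user_message}])

class MultiTurnChat:
    """Multi-turn: the full history is resent with every request, which is
    how the bot 'remembers' the context from previous messages."""

    def __init__(self):
        self.history = []

    def send(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = fake_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

In the multi-turn case, a second call to `send` includes the entire first exchange, which is why the conversation feels continuous; the "memory" lives in the resent history, not in the model itself.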

Tech consumers have been told for the past 10 years that the hypervigilant virtual assistant is coming; whether that's a good thing or an unsettling one is an open question. Possibly it's both, though OpenAI might not put it that way. "I think the assistants of the past just didn't have the intelligence," Fedus said, "and now we're getting there."