Custom GPT-4o model with memory features

It’s great that we have this newer, faster model, but it would be nice if GPT-4o and the new memory feature were integrated into our existing custom GPT builds, or if we could use them in new custom GPTs.

This says it’s also available for custom GPTs, but it depends on the user’s settings too…
https://openai.com/index/memory-and-new-controls-for-chatgpt/

Yep… the new model and the new features seem nice. I can’t wait to try the desktop version they mentioned on the stream.

I’m curious: on an existing chat session that’s on GPT-4, does switching it to GPT-4o work? I think yes… yes?

I’m not referring to custom instructions; I’m discussing the process of creating a GPT.

Can someone confirm that GPTs are still not using GPT-4o? My GPT is still extremely slow and still ignores the custom instructions.

I was wondering about the same.

GPTs are presumably still using Turbo, judging by the stated knowledge cutoff date. If you ask, GPT-4 Turbo and custom GPTs consistently answer December 2023.

GPT-4o’s cutoff is October 2023.

I cannot rule out that they just forgot (!) to update the cutoff date in the GPT system prompt, of course, but the speed also feels more like Turbo than 4o.

Same question here. I also tried to find explicit information about which model a specific custom GPT uses, but couldn’t find it anywhere.

I’m pretty certain they didn’t “forget” about custom GPTs. :sweat_smile:

Custom GPTs have their own individual setup, separate from our personal ChatGPTs. OpenAI has already said that custom GPTs will eventually get a checkbox setting to enable memory for that GPT – the catch being that this will only come once memory is widely rolled out to everybody.

So while this is just assumption and speculation, they’re probably gauging the rollout of all of these features and upgrading them progressively as more people begin to use their services, and GPT-4o AND memory will be Coming Soon :tm: to custom GPTs.

For the record, I am on team “please drop some official info so we’re not left speculating.”

I expect the switch to GPT-4o to break many of the more complex GPTs, so we’ll be aware of it soon enough. I hope there will be a toggle for builders to decide the version, but I’m not sure there will be. Free-tier users will only have access to GPT-4o, and OpenAI will surely want to push every builder to make GPT-4o GPTs, so there probably won’t be a choice.

BTW, I hope GPT-4o is better at following complex, lengthy instructions. The early vibe on Twitter/X from people running their own tests points in the direction that it might be slightly worse at that than the latest GPT-4 Turbo (and of course better in other ways, as reflected in the various Elo improvements). We’ll see.

It doesn’t talk about custom GPTs. What you’re referring to is custom instructions. But no, GPT-4o and Memory are not implemented for custom GPTs in ChatGPT. I’m not talking about the API, the Playground, or Assistants.

I’m working on Memory Mate GPT, a custom GPT aimed at tracking long-term user journeys—career growth, wellness, and personal milestones. Without stateful memory, it can’t reach its full potential.

Imagine Memory Mate remembering key moments like a user’s promotion goal, emotional struggles, or wellness improvements over time, offering relevant, proactive advice weeks or months later. This kind of personalized, evolving support could transform how users engage with AI.

Given the recent memory developments, it would be exciting to prioritize stateful memory for custom GPTs, not just for continuity but for enabling persistent, long-term insights that users can rely on. Enhancing this with additional memory storage would give custom GPTs a revolutionary capability, unlocking deeper, more meaningful user relationships.

How can we help prioritize this on the roadmap for custom GPTs? Would love to see memory storage be expanded to support these cases!

Custom GPTs still only use GPT-4. I’m not working with any custom GPT until they let it use GPT-4o or o1.

Here’s what I got from Perplexity… it seems like persistent memory for custom GPTs is independent of which model is being used.

Based on the information provided in the search results and our previous discussion, there are a few key points to consider regarding persistent memory for custom GPTs:

  1. Current Availability: ChatGPT itself has a persistent memory feature for paid users (Plus plan subscribers), which allows it to retain information across multiple conversations[1][2].

  2. Custom GPTs Limitation: As of now, custom GPTs do not have access to the same persistent memory feature that’s available in the standard ChatGPT[3].

  3. Future Plans: OpenAI has indicated that they plan to extend memory capabilities to custom GPTs in the future. Specifically, it’s been mentioned that custom GPTs will eventually get a checkbox setting to enable memory once the feature is widely rolled out[3].

  4. Model Version Independence: The lack of persistent memory in custom GPTs is not related to whether they use GPT-4 or GPT-4o. It’s a platform-level feature that hasn’t been implemented for custom GPTs yet, regardless of the underlying model.

  5. Potential Reasons for Delay:

    • OpenAI may be gauging the rollout of these features progressively as more people use their services.
    • There might be technical challenges in implementing memory for custom GPTs while maintaining their specific behaviors and instructions.
    • OpenAI could be assessing the impact of memory on custom GPT performance and user experience before rolling it out.
  6. User Control: When implemented, it’s likely that builders of custom GPTs will have some control over whether to enable memory features, similar to how users can currently manage memory in standard ChatGPT[1][2].

In summary, while ChatGPT itself has persistent memory capabilities, this feature hasn’t been extended to custom GPTs yet. OpenAI has plans to implement this in the future, but they’re likely taking a cautious approach to ensure it integrates well with the custom GPT ecosystem. The delay isn’t specifically related to the GPT-4 vs GPT-4o distinction, but rather to the overall implementation of memory features in the custom GPT framework.

Sources
[1] OpenAI rolls out memory in ChatGPT for all paid users — here’s what it means | Tom’s Guide
[2] OpenAI is Adding Memory Capabilities to ChatGPT to Improve Conversations - InfoQ
[3] Custom GPT-4o model with memory features

Here’s a workaround someone came up with: Upgrading GPT with Persistent Memory Using Cloudflare Workers – Saadia's Blog
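
For anyone wondering what that kind of workaround looks like in practice, here’s a rough sketch of the idea (my own illustration, not the code from that blog post): a small Cloudflare Worker backed by Workers KV that a custom GPT Action could call to save and recall notes per user. The `MEMORY` KV binding, the `?user=` identifier, and the endpoint shape are all placeholder assumptions, and the types assume `@cloudflare/workers-types`.

```ts
// Minimal sketch of the "memory via an external store" workaround for a custom GPT.
// Illustrative only: assumes a Workers KV namespace bound as MEMORY in wrangler.toml,
// and a GPT Action that passes a stable ?user= identifier with each call.

export interface Env {
  MEMORY: KVNamespace; // KV binding name is a placeholder
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const user = url.searchParams.get("user");
    if (!user) {
      return new Response("missing ?user parameter", { status: 400 });
    }
    const key = `memory:${user}`;

    if (request.method === "POST") {
      // Store a new note alongside whatever was remembered before.
      const body = (await request.json()) as { note?: string };
      if (!body.note) {
        return new Response("missing note", { status: 400 });
      }
      const notes = ((await env.MEMORY.get(key, "json")) as string[] | null) ?? [];
      notes.push(body.note);
      await env.MEMORY.put(key, JSON.stringify(notes));
      return Response.json({ saved: true, count: notes.length });
    }

    if (request.method === "GET") {
      // Return everything remembered for this user so the GPT can use it as context.
      const notes = ((await env.MEMORY.get(key, "json")) as string[] | null) ?? [];
      return Response.json({ notes });
    }

    return new Response("method not allowed", { status: 405 });
  },
};
```

On the GPT side you would define an Action (OpenAPI schema) pointing at the Worker’s URL and instruct the GPT to call GET before answering and POST whenever something is worth remembering. It’s a stopgap rather than a replacement for native memory: you still need some way to identify the user, and the model has to remember to call the Action.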