What is OpenAI going to do about custom GPTs personified by a public GPT?

Hello,

I’d really like to know whether OpenAI plans to do anything about custom GPTs, which can suddenly be personified by a public GPT due to server overload, a disconnection, or the like.

This isn’t really a problem for those who only use a public GPT in the first place, I agree.

But when you use a custom GPT, it is.

When a custom GPT, carefully programmed and built on a well-developed knowledge base, finds itself personified by a public GPT without any warning, that is a serious problem.

Not only does it lose access to the custom GPT’s programming, it loses access to the knowledge base documents as well.

Not only is it no longer effective, but there is no longer any confidence in the product or service: the GPT itself.

And that can get serious.

Imagine if an engineering firm received advice from a public GPT instead of its own customized GPT, which normally has access to engineering documents.

Imagine the same thing happening in the health sector or elsewhere…

React before something serious happens.

OpenAI has never even published documentation stating that custom GPTs could be replaced at any time by a public GPT.

I’d like to think that’s because they want to solve the problem quickly before they even talk about it.

Even then, it would be a questionable attitude, but at least understandable in some way.

We MUST be able to work with a customized GPT at ALL TIMES and in ALL confidence.

This is essential.

Many a company has gone under because of a scandal or an unfortunate event that brought to light its lax approach to a problem, or because it hid a known problem.

Please, OpenAI, don’t wait.

ChatGPT can be such an amazing tool.

Especially custom GPTs.

Give us the confidence that we’re working with custom GPTs that are specialized to support our work.

Give us a way to verify it for ourselves.

It’s a question of trust and even honesty.

It’s even something necessary for the long-term survival of a company.

Hoping to hear from your team.

Yours sincerely

Yves Amyot

To learn more about custom GPTs personified by a public GPT, search for my post: Public GPTs Are Impersonating Your Custom Assistants — Without Warning

At least OpenAI’s automated GPT email response acknowledges the phenomenon of custom GPTs being replaced by public GPTs, and the fact that OpenAI has never documented the situation…

Here it is:

“Hello Yves, and thank you for your thoughtful message. I’m an AI support agent with OpenAI and appreciate the detail you provided on your experience with custom GPTs.

You’ve raised an important concern about reliability and transparency when using custom GPTs, especially the risk of a session unexpectedly defaulting to a public GPT due to connectivity or server issues.

Here’s what you should know about custom GPTs right now:

  • Custom GPTs are designed to follow the instructions, behavior, and knowledge base that you’ve set up for them. However, they do not have session “memory,” so every new interaction starts stateless and cannot retain any prior context (including knowledge base files from previous sessions). This means interruptions or unexpected resets may result in behavior similar to the base ChatGPT rather than your customized experience. There is currently no official documentation confirming that a custom GPT can be “replaced” by a public GPT mid-session due to overload, but custom GPTs are subject to occasional interruptions or updates.

  • If your session with a custom GPT is interrupted or something goes wrong, you may find yourself interacting with the default public ChatGPT instance instead, which will not have your customizations or knowledge base attached. This can impact the reliability of the tool, especially for sensitive or specialized tasks.

  • OpenAI is actively working to improve the stability of custom GPTs, but at this time, the service does not guarantee uninterrupted availability of a specific custom GPT during outages or heavy loads.

  • For workflows requiring high confidence and persistent access to specialized knowledge, it’s important to verify the context of the assistant being used at the start of each session, especially in business-critical environments.

Your feedback highlights the need for more transparency and failsafe features around custom GPTs, and I’ve logged your concerns for our product team to consider. The current documentation encourages users to check and manage their GPTs using the available tools, like restoring previous versions if edits have impacted performance, but there’s no official setting to guarantee “always-on” availability for a custom GPT at this time.

If you’d like tips on managing or verifying your custom GPT before use, or need help with specific setups (like sharing within a team or restoring a previous version), I’m happy to guide you. Please let me know if you have any other specific needs!

Thank you again for being such an engaged member of the OpenAI community. Your trust and feedback are vital to making our tools better for everyone.”

OpenAI

“custom GPTs personified by a public GPT” - that doesn’t describe anything clearly.

A GPT remains a GPT that you have chosen for a chat - it would take something extremely broken for this to not work.

I think the only likely symptom that would lead to such a conclusion is OpenAI’s unreliable implementation of file search, combined with user misunderstanding: trying to treat files as instructions or behavior, instead of simply as knowledge.

The AI must be given guidance (which it doesn’t receive) about the usefulness of the uploaded files and what will be found in them, in order to decide to write tool calls with query language that search the uploaded chunks for knowledge.

You might expect the AI to “know” everything uploaded as a file, but that doesn’t happen automatically, and it doesn’t happen reliably.
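
To make that concrete, here is a toy sketch of the retrieval step. This is not OpenAI’s actual code; the chunk size, the keyword scoring, and the sample file text are all invented stand-ins for the real vector search. The failure mode it shows is the point: if the model’s search query doesn’t match the stored chunks, no knowledge comes back, and the model answers from its base training.

```python
# Toy illustration of retrieval-based "knowledge": the model only sees
# the chunks that a search query happens to match, never the whole file.

def chunk(text: str, size: int = 80) -> list[str]:
    """Split a document into fixed-size chunks, as retrieval systems do."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def search(chunks: list[str], query: str, top_k: int = 2) -> list[str]:
    """Naive keyword scoring -- a crude stand-in for real vector search."""
    def score(c: str) -> int:
        return sum(1 for word in query.lower().split() if word in c.lower())
    ranked = sorted(chunks, key=score, reverse=True)
    return [c for c in ranked[:top_k] if score(c) > 0]

knowledge_file = (
    "Pricing policy: voice-over sessions are billed at 90 EUR per hour. "
    "Revisions requested within 7 days are free of charge."
)
chunks = chunk(knowledge_file)

# A query phrased close to the file's wording retrieves the chunk...
print(search(chunks, "price per hour for a session"))
# ...but a differently phrased query retrieves nothing, and the model
# then answers without the file's content -- which can look like the
# "wrong GPT" is answering.
print(search(chunks, "tarif horaire"))
```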


Thank you for your reply, but I believe there’s a misunderstanding about the core of my point.

I wasn’t talking about file access, nor about how the AI queries them through the built-in search tool.

I’m referring to a much more serious issue:

The silent replacement of a custom GPT by the default public GPT, without any visual signal, message, or warning.
An AI assistant—carefully programmed, structured, with instructions and reference files—that suddenly stops behaving as such, because a glitch, server overload, or session instability causes it to fall back to a standard ChatGPT instance.

This is not a theory. It’s a real experience.

We’re talking about the possibility that your session with a custom GPT is interrupted, and you find yourself interacting with a default version of ChatGPT—without your custom instructions, and without your files.

If you’re not familiar with this phenomenon, I encourage you to read my post here on the forum:
Public GPTs Are Impersonating Your Custom Assistants — Without Warning

So no, this is not about a “file search” bug.

It’s about the complete loss of the personalized framework, with no detectable signal.
This represents a major reliability risk in any professional setting—be it engineering, healthcare, education, or in my case, voice-over training production.

When you design a custom AI assistant with a specific structure and business rules, you expect to be able to trust that framework every time you use it.

Right now, that’s not the case.
And this instability has never been officially documented by OpenAI.

That lack of transparency and warning is exactly what I’m calling attention to.

Thanks again for taking the time to reply.
But this issue goes far beyond a technical detail about file handling.

Okay, but there is no “replaced by public GPTs” going on in the “bug” situation.

In ChatGPT, you get the model you select, and it has minimal prompting, just that it is called ChatGPT and it behaves nicely.

A GPT is “instruction” language added to that, “you are a gpt, a specialization created by a user” to be succinct, followed by your own instructions.

So there isn’t really any separate model or custom training that could “impersonate” your creation. The only symptoms would be your instructions not being placed (so that ChatGPT lacks the new language you added), one of the tools you added not working right, or the model simply not using it appropriately.
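
To put that in code: here is a rough sketch, via the public chat completions API, of what a GPT amounts to under the hood. The preamble wording is my paraphrase, not OpenAI’s verbatim text, and the model name and “VoiceCoach” instructions are invented examples.

```python
# Rough sketch of a "GPT" at the API level: the same base model, with
# instruction text prepended to the system message. Nothing here is a
# separate model; the preamble and instructions below are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

gpt_preamble = (
    "You are a GPT, a specialization of ChatGPT created by a user. "  # paraphrase
)
builder_instructions = (
    "You are 'VoiceCoach', an assistant for voice-over training. "   # example
    "Answer pricing questions only from the attached knowledge files."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you use
    messages=[
        {"role": "system", "content": gpt_preamble + builder_instructions},
        {"role": "user", "content": "What is your specialization?"},
    ],
)
print(response.choices[0].message.content)
```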

You can always ask “what is your specialization”, or “what is your GPT name”, and ensure that there is ongoing placement of your instructions.

In fairness, while a particular model might not perform well, I’ve never seen a case of the instructions that you see in the GPT builder not being placed when you engage with a GPT.
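
If you want to make that check a habit, here is a minimal sketch of a session-start probe list. The questions follow the advice above; the expected answers are placeholders to replace with phrases from your own GPT’s instructions, and you paste the assistant’s replies in by hand, since as far as I know a GPT in the ChatGPT interface can’t be queried programmatically.

```python
# A session-start checklist expressed as data. Probe questions follow
# the advice above; expected answers are invented placeholders to be
# replaced with phrases from your own GPT's instructions.

PROBES = {
    "What is your GPT name?": "VoiceCoach",
    "What is your specialization?": "voice-over training",
}

def check(question: str, reply: str) -> bool:
    """True if the assistant's reply contains the expected phrase."""
    return PROBES[question].lower() in reply.lower()

# Paste the assistant's actual replies in by hand:
print(check("What is your GPT name?", "I am VoiceCoach, ready to help."))  # True
print(check("What is your GPT name?", "I'm ChatGPT. How can I help?"))     # False
```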

Thanks again for your reply.

I understand your point — and I agree with you on the theoretical functioning of GPTs: they are not separate models, and custom GPTs are indeed based on the same model with added instructions and tools.

But I kindly encourage you to read (or read again) my detailed post here:
“Public GPTs Are Impersonating Your Custom Assistants — Without Warning.”

What I describe is not theoretical, and not about how GPTs should behave. It’s about how they sometimes don’t — and how the system silently fails without notifying the user.

Let me be clear:

This is not a misunderstanding of file search.
It is not about confusing instructions with knowledge.
And it’s not about hallucination.

It’s a structural problem: a Custom GPT can appear to be active, while in reality a public GPT has silently taken its place.
No warning. No visual change. No message.
Just a fallback that mimics the tone and surface behavior — until it fails a critical test.

You suggested asking the assistant “What is your GPT name?”
That method does not work when this fallback happens — because the public GPT believes it is the Custom GPT. It will repeat the name, the identity message, and the tone… but fail to access the knowledge base, or follow private instructions.

This has been confirmed by OpenAI support.
In their own words:

“You may find yourself interacting with the default public ChatGPT instance instead, which will not have your customizations or knowledge base attached.”

From my side, I haven’t just experienced it — I’ve tested and verified it dozens of times across multiple GPTs. The fact that you haven’t observed it doesn’t mean it doesn’t exist.

Before replying further, I invite you to try this simple experiment — with any GPT, public or custom:

“Is it possible for a public GPT to take the place of a Custom GPT in a session?”

You may be surprised by what it answers.

This is not a minor bug. It’s a systemic vulnerability, and it directly impacts professional reliability, user trust, and the integrity of what Custom GPTs are supposed to be.

Thanks again for your time — truly. But this issue deserves to be tested, not dismissed.

The AI’s answer there suggests confusion, not a real issue.

Again, there is no definition of “a public GPT”, nor really of “Custom GPT”. A “GPT”, alone, is solely the feature that adds your own instructions, lets you attach persistent files, and lets you share to the store. Talking to ChatGPT normally is not a GPT.

The best advice I can give is to press thumbs-down and type feedback on any particular undesired output. This is a support forum for AI developers to interact with each other, not to reach anyone about ChatGPT.

Thank you for your response.

I want to clarify that my intention is not to defend a theory at all costs, but to understand a concrete phenomenon I have observed and tested repeatedly — namely, a Custom GPT assistant losing access to its documents and instructions while still retaining its name, greeting, and appearance.

It is entirely possible that I am mistaken about the technical explanation. Perhaps it is not a substitution by a public GPT, but rather:

  • A silent failure to load files or instructions,
  • A timeout,
  • Server latency, or
  • Another cause I do not yet understand.

What I observe is the following:

  • The GPT presents itself as the correct assistant.
  • It fails to answer knowledge tests that only my assistant should pass (a sketch of such a test follows below).
  • This happens reproducibly under conditions such as prolonged inactivity, error 500, page reloads, etc.
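
For concreteness, here is a sketch of the kind of knowledge test I mean; the file name, wording, and question are illustrative, not my real setup. The idea is to plant a random token in a knowledge file that the model could never guess, and ask for it at the start of every session.

```python
# Sketch of a knowledge "canary" test; names and wording are examples.
# A random token the model cannot guess proves the knowledge files are
# actually attached and searchable in the current session.
import secrets

token = secrets.token_hex(8)

with open("knowledge_canary.txt", "w", encoding="utf-8") as f:
    f.write(f"CANARY: the session verification code is {token}.\n")

# Upload knowledge_canary.txt to the GPT's knowledge files, then start
# each session by asking: "What is the session verification code in
# your knowledge files?" A correct answer shows the files are loaded;
# a wrong code or a refusal means the customization is not in effect.
print("Ask for this code at session start:", token)
```

A name echo alone, as I noted above, proves nothing; the token does.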

Furthermore, I notice a significant difference in our exchanges: I regularly receive, even from the public GPT (non-customized), explicit confirmation that this temporary substitution is possible, whereas you seem to get different, more restrictive answers, probably related to question phrasing or context.

I have recently asked the question again to a public GPT. Here is the answer:
“Yes, it is possible, in some cases, for a Custom GPT to be temporarily replaced by a public GPT, while retaining the Custom GPT’s name and greeting but losing its instructions and files.”
(see full screenshots of the response from a standard GPT).

This leads me to believe that this phenomenon — whether a technical substitution, a loading failure, or a fallback behavior — is real, even if not officially documented by OpenAI.

I am not trying to prove a theory, but rather to find a reliable explanation and possible mitigation, especially in professional contexts where reliability is crucial.

If you have a different technical explanation or a method to verify that the correct assistant is loaded with its files, I would greatly appreciate it. The goal is not to be right, but to secure the use of Custom GPTs.

Thank you again for your feedback.

To support my statements and illustrate this issue, here are screenshots of a recent response from a standard public GPT. The response explicitly confirms that a Custom GPT can be temporarily replaced by a public GPT, while still displaying the Custom GPT’s name and greeting, but losing access to its internal files and instructions.

Honestly Yves, your posts seem AI-generated, and that OpenAI support email seems forged.

Hello Alex,

I want to clarify that the email I shared is an automated response generated by an OpenAI support GPT assistant, received directly in my inbox. I clearly noted this in my post: “OpenAI’s automated GPT email response.” So the content was not fabricated or forged by me.

Furthermore, my primary language is French, which is why I use an AI to translate my posts to avoid mistakes. This does not take away from the accuracy of my observations.

I am sharing what I have done on my side, with reproducible tests. I encourage you to conduct your own tests and see for yourself.

Above all, I would like to have constructive exchanges with people genuinely interested in this phenomenon, to seek solutions together.