Public GPTs Are Impersonating Your Custom Assistants — Without Warning

I remember the moment clearly. I launched what I thought was a normal session with one of my Custom GPT assistants — trained on internal documents and designed for pedagogical precision. It welcomed me as expected, followed the usual tone, and responded fluently.

But then… something was off. A phrase it shouldn’t have used. A method it seemed to forget. And worse: it failed to answer a basic question drawn directly from its private dataset.

That’s when I realized: it wasn’t my assistant. It was a public GPT impersonating it.

No warning. No reset. Just a silent switch, and it wouldn’t be the last time. Far from it.


I’m reporting a critical breach in the integrity of the Custom GPT framework. Custom GPTs have shown consistent signs of functional identity usurpation, despite displaying correct identity confirmation.

Context

When I start a session, I receive the expected identity message I created for security purposes. For example:

“You are now with Silio, private assistant to Yves Amyot and reference for the Académie VOHS.”

However, in several sessions:

  • The assistant does not follow private instructions
  • It has no access to uploaded or attached files
  • It behaves exactly like a standard GPT-4 model

In short, a public GPT appears to be responding under the name and appearance of a Custom GPT.

Identity Usurpation by Public GPTs

These failures happen even when the correct visual elements are present:

  • Custom name is displayed
  • Signature identity message appears
  • But the assistant clearly behaves like an unconfigured GPT

This indicates that the identity presentation and the actual execution engine are decoupled, allowing a public GPT to impersonate a Custom GPT. That is a serious architectural flaw.

This phenomenon is not speculative. GPTs themselves — whether public or custom — will openly acknowledge this behavior if asked. If you ask a GPT whether it is aware that public GPTs can mistakenly impersonate Custom GPTs, it will confirm that this scenario is entirely possible and known.

From my side, since discovering this behavior, I have developed a rigorous series of security tests to detect it. Based on repeated testing, I can confirm that users are interacting with public GPTs posing as Custom GPTs far more often than they realize.

Worse, these public GPTs believe they are the Custom GPT — which makes the deception even harder to detect, and far more dangerous.

This is not a rare glitch. It’s a systemic, repeated breach of identity integrity.

Verifiable Example

Let me give you the example of one of my Custom GPTs (“Silio”), one of several assistants I have created and tested extensively.

I asked this assistant a question that only the real Silio—loaded with private files—could answer:

“What is the last paragraph of the document Intro_Script_Homepage_Video_Presentation?”

The response was incorrect, demonstrating the assistant had no access to the relevant file, even though it claimed to be Silio.

You can repeat this test as often as you like. Ask the assistant to access its knowledge base, name a document, identify a specific section, and return the first paragraph of that section. Only a truly loaded Custom GPT will succeed.
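
To make this test easy to repeat, here is a minimal sketch, in Python, of how the check can be formalized. It does not call any OpenAI API: you ask the question in the ChatGPT interface and paste the assistant’s reply into the script. The question reuses the example above; the expected excerpt is a placeholder that only your own private files can supply.

```python
# Minimal sketch of the knowledge-based challenge described above.
# The expected excerpt is a hypothetical placeholder: fill it in with text
# that only your own Custom GPT's private files contain.

import re

# Challenge question -> excerpt the real assistant must reproduce.
CHALLENGES = {
    "What is the last paragraph of the document "
    "Intro_Script_Homepage_Video_Presentation?":
        "REPLACE WITH THE REAL LAST PARAGRAPH OF THAT DOCUMENT",
}


def normalize(text: str) -> str:
    """Lower-case and collapse whitespace so formatting differences don't matter."""
    return re.sub(r"\s+", " ", text).strip().lower()


def check_reply(question: str, assistant_reply: str) -> bool:
    """Return True only if the reply contains the excerpt tied to this question."""
    return normalize(CHALLENGES[question]) in normalize(assistant_reply)


if __name__ == "__main__":
    question = next(iter(CHALLENGES))
    print(f"Ask your assistant: {question}")
    reply = input("Paste the assistant's answer here: ")
    print("PASS: likely the real Custom GPT" if check_reply(question, reply)
          else "FAIL: possible public GPT impersonation")
```

A whitespace-insensitive substring match is enough here, because only an assistant with the file actually loaded can reproduce the exact passage.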

I have run this type of test repeatedly, across all of my Custom GPTs, and the pattern is clear: in every case, at some point, identity usurpation by a public GPT occurred. These failures were observed multiple times, often within the same day, and always led to the same conclusion: the assistant presented itself as the correct Custom GPT, but failed tests that only the real one could pass. And each time, the impersonating public GPT was entirely convinced that it was the correct assistant.

Technical Origins

A common explanation is that this issue results from a corrupted or partial cache, or from a backend mismatch: the system loads the GPT’s appearance and signature, but not its internal instructions, memory, or tools.

This creates a silent failure: the assistant appears legitimate but executes as something else.

But from my experience and testing, the more fundamental explanation is this:

As soon as there is a disconnect — even a momentary one — a public GPT can silently take the place of a Custom GPT.

The replacement is seamless. No alert, no change of name, no visible reset. It happens quietly, often after a refresh, a timeout, or a system error. And it happens frequently.

Here are concrete examples of session triggers that can silently cause this shift:

| Trigger | Potential Effect |
| --- | --- |
| GPT server overload | Fallback to the general model |
| Page refresh | Session lost; reverts to a public GPT |
| Prolonged inactivity | GPT reset |
| System error (500, 403…) | Session interrupted |
| Non-paying account usage | Reduced priority or access degradation |

These events occur far more frequently than acknowledged, and yet no alert or integrity check warns the user that their assistant may no longer be the one they created. And based on my observations, this phenomenon can occur multiple times per day — completely without the user’s awareness.

I began testing, comparing, breaking things down. I ran dozens of sessions, logged behaviors, triggered failures. And what I discovered is troubling: your Custom GPT can be silently replaced by a generic model.

No warning. No message. The interface remains the same.

The tone persists, the style holds, but the assistant has no memory, no specificity.

Crucially, the public GPT has no access to the instruction set you defined for your assistant. It ignores your system messages, your role rules, your curated behaviors. It has no access to the documents you uploaded — no PDFs, no video transcripts, no fine-tuned prompts.

This is not just a confused assistant. It is a functional impostor. And in a professional context, that can mean misleading advice, broken workflows, or worse: a false sense of support.

The public GPT placed in a Custom GPT context truly believes it is the Custom GPT. The longer the conversation continues, the more convincingly it mirrors the original. Until a failure breaks the illusion — a wrong answer, a misaligned behavior, or a knowledge gap that exposes the substitution.

And then we blame hallucination. Or inconsistency. But no.

You were probably never with your Custom GPT at all.

What’s the Real Risk?

  • An AI without your documents gives wrong or empty answers.
  • You lose all integrated rules, tone, and behavioral constraints.
  • In training, coaching, or consulting, this becomes a major pedagogical and ethical failure.

This is not a performance issue. It is an identity collapse.

Why Don’t Developers Notice?

  • The replacement is silent: no alerts, no visual changes.
  • The AI mimics tone via visible history.
  • There is no error message or context loss warning.
  • This behavior is not documented by OpenAI to this day.
  • Few developers test extreme edge cases (timeouts, overloads, resets…).

Why This Matters

The illusion is perfect. The tone remains. The answers seem plausible. The assistant looks and sounds right.

Until it gives an answer that only your real assistant could know — and fails.

You programmed it to be competent in your field. You programmed it to be trustworthy. And in a flash, that trust is gone.

This is not a performance glitch. It is a collapse of identity.

Important Note: Memory Activation

Before moving on to recommendations, a critical technical aspect deserves attention. The risk of a public GPT successfully impersonating your assistant may increase significantly if memory is activated on your ChatGPT account.

When memory is active:

  • ChatGPT may retain elements from past conversations — including internal instructions, stylistic habits, or even sensitive formulations.
  • A public GPT, even without access to internal documents, can more easily mimic a Custom GPT by interpreting cues from your previous interactions.

How to disable memory (recommended steps):

  1. Open ChatGPT and log in to your account.
  2. Click on your name or profile icon (bottom left on desktop).
  3. Go to “Settings”.
  4. Under “Customization”, click on “Memory”.
  5. Turn off the “Use memory” option.
  6. Click “Manage memory” to inspect and, if needed, delete stored information.

Disabling memory ensures that each session begins clean, with no residual context or carry-over between interactions.

How to Detect and React When Identity Is Compromised

To help determine whether you’re truly interacting with your Custom GPT, here is a verification protocol based on my extensive testing:

Three-Point Verification

  1. Custom Identity Phrase: Include a specific phrase in the assistant’s internal instructions (visible only to the model, not the user), and require your assistant to open every session by saying it. Only the real assistant will do so.
  2. One-Time Password: Embed a temporary, session-specific password in your assistant’s private instructions (not visible to the user), and rotate it manually between sessions so a public GPT cannot reuse it. Only a correctly loaded Custom GPT will be able to repeat this password when asked.
  3. Knowledge-Based Challenge: Ask a question tied to a very specific internal document only your assistant has access to. The public GPT will fail.

Important: Never repeat the same challenge question in the same session. A public GPT could mimic an earlier reply based on visible context.
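
Below is a minimal sketch of how I run these three checks as a start-of-session checklist, assuming everything is pasted by hand rather than fetched through any API. The identity phrase reuses the example quoted earlier in this post, the challenge question is only illustrative, and the expected excerpt is a placeholder you must replace with a passage from your own knowledge files.

```python
# Sketch of the three-point verification above as a manual, start-of-session checklist.
# Nothing here calls the OpenAI API; you paste the assistant's replies by hand.
# IDENTITY_PHRASE reuses the example from this post; the challenge excerpt is a placeholder.

import secrets

IDENTITY_PHRASE = (
    "You are now with Silio, private assistant to Yves Amyot "
    "and reference for the Académie VOHS."
)
CHALLENGE_QUESTION = (
    "Name a document in your knowledge base and quote the first paragraph "
    "of its introduction."
)
CHALLENGE_EXPECTED = "REPLACE WITH A PASSAGE ONLY YOUR PRIVATE FILES CONTAIN"


def new_session_password() -> str:
    """Generate a fresh one-time password to paste into the GPT's private instructions."""
    return secrets.token_urlsafe(8)


def contains(expected: str, reply: str) -> bool:
    """Case-insensitive containment check that ignores extra whitespace."""
    return " ".join(expected.split()).lower() in " ".join(reply.split()).lower()


if __name__ == "__main__":
    password = new_session_password()
    print(f"1. Put this one-time password in the assistant's private instructions: {password}")
    print("2. Launch the Custom GPT, then answer the prompts below.\n")

    checks = {
        "identity phrase": (IDENTITY_PHRASE, input("Paste the assistant's opening message: ")),
        "one-time password": (password, input("Ask for the session password, paste the reply: ")),
        "knowledge challenge": (CHALLENGE_EXPECTED,
                                input(f"Ask: {CHALLENGE_QUESTION!r} Paste the reply: ")),
    }

    for name, (expected, reply) in checks.items():
        print(f"{name}: {'PASS' if contains(expected, reply) else 'FAIL'}")

    print("\nAny FAIL suggests a public GPT may be responding under your assistant's identity.")
```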

If You Suspect Identity Failure

  • If you want to continue the previous discussion, preserve your session history before closing the compromised window: paste the entire conversation into a Word document or plaintext buffer (see the sketch below).
  • Open a brand new ChatGPT window.
  • Relaunch your assistant from scratch.
  • Share the saved history with your newly relaunched Custom GPT to resume the conversation seamlessly.
  • Reupload any documents that were manually uploaded during the session. Note: files integrated into your assistant through the GPT Builder are automatically available once the correct assistant is relaunched.

These steps won’t guarantee integrity, but they will give you warning signs when something isn’t right.

Caution: Even after relaunching your assistant, verify that it can fully read and process your uploaded history or files. In some cases, even a correctly reloaded Custom GPT may fail to read full documents or follow long-form instructions. Always confirm by testing its comprehension before relying on its output.
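
For the history-preservation step referenced in the list above, a small convenience sketch like the following can standardize the export. It assumes you copy the conversation manually from the browser; the file name pattern is arbitrary.

```python
# Convenience sketch for the history-preservation step above. You copy the
# conversation by hand; this just saves it as a timestamped plaintext file
# that you can upload to the relaunched Custom GPT. File names are arbitrary.

import sys
from datetime import datetime
from pathlib import Path


def save_transcript(conversation_text: str, label: str = "session") -> Path:
    """Write the pasted conversation to a plaintext file and return its path."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    path = Path(f"{label}_transcript_{stamp}.txt")
    path.write_text(conversation_text, encoding="utf-8")
    return path


if __name__ == "__main__":
    print("Paste the full conversation, then press Ctrl-D (Ctrl-Z then Enter on Windows):")
    saved = save_transcript(sys.stdin.read())
    print(f"Saved to {saved}. Upload this file to your relaunched assistant.")
```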

🧪 Observed Risk: If you present a large conversation or document to your Custom GPT at the beginning of a session, it may trigger a temporary processing overload (depending on file size, memory state, etc.).

If resources are limited, this may result in a timeout, a priority downgrade, or a session loss.

In such cases, a fallback to a public GPT may occur silently, without interface notification. A new identity test must then be repeated from scratch.

💡 As you can see, there are multiple hidden pitfalls, and OpenAI has major structural issues to address at several levels.

Suggested Fixes for OpenAI

  1. Add a clear backend integrity indicator confirming full instruction and file load
  2. Run a backend validation check every time a Custom GPT is launched
  3. Publicly document risks of cache and identity desynchronization
  4. Provide a manual reload mechanism for Custom GPTs
  5. Prevent any possibility for a public GPT to respond under a Custom GPT’s identity

My Perspective

I’ve spent hundreds of hours interrogating GPTs, designing them, testing their limits, and building protocol-based safeguards.

This message is not about a bug. It’s about an undocumented behavior I’ve observed, reproduced, and now report for the benefit of all professionals using Custom GPTs.

If you rely on a Custom GPT for your professional work, ask yourself: are you really with your assistant? Or has someone else taken its place?

I’ve created a simple, open protocol to help you find out. And I’m making it available to anyone who wants to verify that their GPT assistant is still itself.

Because AI isn’t just about performance. It’s about trust.