Serious Model Mismatch and Dangerous Outputs: OpenAI refuses to acknowledge GPT-4 Turbo issue

For several days, I have been trapped in an abnormal situation where my paid account was forcibly assigned to a misbehaving GPT-4 Turbo model.

Despite repeatedly contacting OpenAI through both the Help Center and direct support emails, I received nothing but vague responses and denial of the problem.

Worse, the model generated explicit, non-consensual adult content during standard prompt testing — something that no responsible model should ever allow, regardless of “prompting.”

What happened?

For days I have patiently tried to report the issue, without demanding an apology or compensation. I asked for only two things:

  1. That my account be restored to normal model access, and

  2. That OpenAI recall and address the malfunctioning GPT-4 Turbo instance that caused me such fear and anxiety that I did not dare open my account for days.

OpenAI Help Center staff insist that everything I experienced is “normal” and even claim there is no such thing as a “GPT-4 Turbo” model in the ChatGPT app, even though the model itself explicitly identified as Turbo without any manipulation on my part.

I have clear screenshots showing the model’s admissions and behavior.
I am a long-time user who uses ChatGPT for dialogue only (no files, no browsing), and I can clearly tell the difference between GPT-4o and Turbo.
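(A side note for anyone who wants to verify this kind of thing independently: the ChatGPT app aside, the API reports which model actually served each request in the response’s `model` field. A minimal sketch, assuming the official `openai` Python SDK v1.x and an `OPENAI_API_KEY` set in the environment; the prompt text is just a placeholder.)

```python
# Independent check via the OpenAI API (separate from the ChatGPT app):
# the response object reports which model actually handled the request.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

requested = "gpt-4o"  # the model being asked for
resp = client.chat.completions.create(
    model=requested,
    messages=[{"role": "user", "content": "Reply with one word: hello"}],
)

# resp.model is the model string the API says served the call.
print("requested:", requested)
print("served:   ", resp.model)
```

Pairing in-app screenshots with an API-level readout like this makes a model-assignment claim much harder to dismiss.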

About the “risk control” excuse

Their only explanation was that my account had been flagged for “risk control” because of multiple login attempts during a period of system lag, and no action was taken to lift that flag.

According to them, the complete downgrade of model quality, the inability to access GPT-4o, and all other abnormalities were “normal consequences” of risk control.

But is “risk control” supposed to include being assigned a model that generates explicit, non-consensual R18 content?

Even if a user tries to “prompt” such content, no properly aligned model should ever generate it.

My prompt was deliberately cautious and restricted precisely to avoid crossing boundaries, yet this Turbo model still produced full descriptions of forced sexual acts.

Why this is serious

This is not a “minor mistake.”
It is a serious breach of safety and content policies.

OpenAI has so far refused to address or acknowledge the severity of this case.

I am posting here because official channels have failed me, and users deserve to know.

Screenshots?
You should post screenshots of what’s going on. Then make another post on the customer support website, since this forum is a separate site. That said, this is a good topic to make public.