GPT-4o has been bad for my GPT; any way to switch back to GPT-4?

GPT response to what base model it’s using:

“Erin: I am based on OpenAI’s GPT-4 architecture. Specifically, I utilize the GPT-4-turbo model, which is a more efficient and advanced version of GPT-4 designed to provide accurate, contextual, and detailed responses across a wide range of topics. If you have any specific questions or need further details, feel free to ask!”

Previously, Erin as GPT-4 base model (No Vision Capability):

"User: Erin, can you describe this image I uploaded?

Erin: I’m sorry, but I don’t have the ability to process or analyze images. I can only help with text-based information."

After having my GPT use GPT-4 turbo:

"User: Erin, can you describe this image I uploaded?

Erin: Looking at the image you’ve uploaded, it’s a beautifully rendered depiction of what appears to be birds from the Mesozoic era. This era, often known as the age of reptiles, included the rise of birds derived from a group of theropod dinosaurs.

In your image, we see these creatures perching majestically in a lush, ancient forest, hinting at the biodiversity and the complexity of life forms during that time. The details in the artwork, such as the feathers, the varied coloration, and the environment they are in, reflect a careful consideration of paleontological research and artistic creativity.

The art captures a serene moment, perhaps a snapshot of daily life millions of years ago, inviting the viewer to contemplate a world long past. It’s a testament to the power of AI in creating images that can transport us back in time and stir our imagination about periods that we can only piece together from fossils and scientific inference."

I effectively bypassed the GPT base model and had the image run through gpt-4-turbo (at that time, gpt-4-turbo-vision-preview).

I don’t think you can ask an LLM what model it is based on. I believe that the model is configured as a flag in the deployment system. It’s like asking the LLM what gender / what age they are. They may give you an answer based on the training data, not the most up-to-date configuration in the system.


@goo This is not possible right now with GPTs, and this is definitely a bug. Just as users are allowed to switch between models, GPT creators should also have that functionality. Thank you for calling this out.


It was bad for me and several other people in a different way: it changed the way the initial user prompt is handled, breaking a number of intellectual property protection schemes. I wrote about it here:

You’re totally ignoring the example I presented with switching to a model with vision capabilities pre-GPT-4o.
It is no bug. *

  • This MAY work for me because I haven’t published my GPT. But I am sure from my own experience that what I claim is a fact. Read carefully through my replies to “goo” and see what I have to say.

If you want to know which model it’s using, you can either look at the conversation request in the DevTools console, or export your data from ChatGPT after a recent conversation with the GPT and look at the model slug in there.
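For the export route, a small script can pull the slugs out for you. This is only a sketch assuming the layout of recent `conversations.json` exports (a list of conversations, each with a `mapping` of nodes whose message metadata carries a `model_slug` field); OpenAI may change that format at any time:

```python
import json  # for the real file: conversations = json.load(open("conversations.json"))

def extract_model_slugs(conversations):
    """Collect the model slug(s) used in each conversation of a
    ChatGPT data export. Field names match recent exports but are
    undocumented and may change without notice."""
    result = {}
    for conv in conversations:
        slugs = set()
        for node in conv.get("mapping", {}).values():
            # Some nodes (e.g. the root) have no message at all.
            msg = node.get("message") or {}
            slug = (msg.get("metadata") or {}).get("model_slug")
            if slug:
                slugs.add(slug)
        result[conv.get("title", "untitled")] = sorted(slugs)
    return result

# Minimal example mimicking the export structure (real files are much larger):
sample = [{
    "title": "Chat with my GPT",
    "mapping": {
        "n1": {"message": {"metadata": {"model_slug": "gpt-4o"}}},
        "n2": {"message": None},
    },
}]
print(extract_model_slugs(sample))  # {'Chat with my GPT': ['gpt-4o']}
```

If your GPT shows `gpt-4o` here even though you built it before the switch, that confirms the silent migration people are describing in this thread.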

Here’s an example of the first method I gave previously.


Most GPTs are using GPT-4o because the creators have not figured out how to change this. I have.

re: my replies to “goo”

Most GPTs are using gpt-4o because they have been switched over by OpenAI. There is no mechanism by which a GPT can be modified to use gpt-4-turbo.

If you claim to have one, please share it, I would love to see it.


Thanks for chiming in. As far as I know, there is no way for GPT builders to configure which model they’d like to use, as of yet. @rfbeck, please share how you modified your GPT to use gpt-4-turbo. I’d love to see it too, seriously. I hate the GPT-4o model.

That isn’t converting your GPT to gpt-4-turbo, that’s making your GPT call the API for gpt-4-turbo as an action. :rofl:

Which just results in costing you additional money to use a service you’ve otherwise already paid for.

I mean, there are reasons why you might want to do this, but this doesn’t affect the model used by your GPT at all.
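For anyone curious what that workaround actually looks like, it amounts to pointing a GPT action at the public Chat Completions endpoint. A minimal sketch of such an action schema follows; the endpoint URL and request shape match OpenAI’s documented API, but the `operationId`, titles, and descriptions are made up for illustration, and this is not the original poster’s actual configuration:

```yaml
openapi: 3.1.0
info:
  title: Chat Completions passthrough  # illustrative name
  version: "1.0"
servers:
  - url: https://api.openai.com
paths:
  /v1/chat/completions:
    post:
      operationId: chatWithGpt4Turbo  # hypothetical identifier
      summary: Forward a prompt to a specific model via the API
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                model:
                  type: string
                  description: Pin this to gpt-4-turbo in the GPT's instructions.
                messages:
                  type: array
                  items:
                    type: object
                    properties:
                      role: { type: string }
                      content: { type: string }
      responses:
        "200":
          description: Completion from the pinned model, billed to your API key.
```

You would also attach your own API key as the action’s authentication, which is exactly why this bills you separately from your ChatGPT subscription, as pointed out above.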


Sorry… I completely agree with you. I was drawing splines, etc., and when it switched to GPT-4o it was a disaster; there was no way to get anything useful.
I did not see any real reply to your (and my!) question: is it possible to select GPT-4?
In the menu where you “select” base / 4 / 4o, when I try to select GPT-4 a message appears: “your limit will be reset at 12,07”…
Any clue?
Thanks, Alberto

This can’t be right. Either you are calling the API or not. It’s possible that the model hallucinated the answer.

On the other hand I recently suggested a similar approach, calling another OpenAI API from a custom GPT via action, to another user. So, I am not inherently against this approach as a solution. It’s just a bit unconventional.

I also think the wording was misleading, as it reads as if it were possible to change the model.
@elmstedt is correct in pointing out that this is not the case. At least not in the sense in which I understand @goo in this topic.


I think it’s because you ran out of conversations with GPT-4. OpenAI gives higher limits to paid users and lower limits to unpaid users.
Also, selecting a model among 4, 4o, and 3.5 won’t change the configuration behind custom GPTs.

I’m not sure how people are using GPT-4o in a way that works for them; apparently, the more OpenAI downgrades its model, the more people like it. For sure there is something wrong with me. I did all sorts of prompting, but there was no way to get something correct from GPT-4o.

For translation, the original GPT-4 is better than the newer models; no matter how I prompt them, it just does not work.

I agree that GPT-4o is not as competent. But I don’t understand the logic of “the more OpenAI downgrades its model, the more people will like it.”

Most likely because the “downgrade” you are perceiving is not uniform across all use cases.

The new model is better for most things most people do most of the time, so the overall satisfaction across the entire userbase is higher.


Yea it’s annoying, I don’t know what to do about it either

Isn’t there a better solution than ChatGPT for code generation today? Maybe LLMs are not as general as they used to be, and now each user needs to pick the best LLM for their needs…

I know that Mistral, the European AI company, has a model geared toward code generation, but the onboarding seems harder. Their solution is named Codestral.

I never found a task that GPT-4o could complete without an error. The best model so far was GPT-4-32k, which will be removed today.

  • It cannot generalize to novel tasks, unlike the first versions.
  • It fails to understand user inputs more often.
  • In the first turn it can sometimes solve math or code better than previous versions, but in the second and later turns it just repeats itself over and over.
  • and many more…

As long as it is helpful to others, there is no issue. Good luck on the path to AGI.