For me, the 4o update has been available for a few days now.
I assume that existing sessions will continue using the 4T model the conversation began with, but I'm unsure whether 4o kicks in with a new session or whether it would be necessary to recreate the GPT itself.
I ask only because I can't see the current model anywhere, unlike with the general chat…
If you have it, the macOS desktop application also offers the following functionality. By selecting ‘See Details’ from the conversation options dropdown, you can view the custom instructions that were set and the model(s) used during the conversation. This also applies to GPT conversations.
I have no idea if this will be extended to the web interface, but it has proven pretty useful so far. So, I hope they intend to release it more broadly.
Is there a way to switch to the GPT-4 model when using custom GPTs in the browser or in the app? For writing, GPT-4o is useless: it constantly hallucinates, gets stuck in loops, and completely ignores instructions…
When you open the ChatGPT app, start a conversation with any custom GPT, then tap the name of the custom GPT at the top (not in the sidebar) and select See Details.
But the feeling is off: the assistant hallucinates too much, and it states GPT-4 as the running model. What is the answer? I don’t want to spend 200 USD unless I am sure I am getting what I paid for…
Don’t ask ChatGPT what model it uses. Besides the AI not answering correctly, the GPT instructions could tell the AI it actually runs “Anthropic Claude” and provide you with further misinformation.
GPTs will always run gpt-4o now, regardless of the subscription tier.
Whether you get a few GPT trials in a free ChatGPT account, or pay for a $200/mo pro subscription, that same gpt-4o model version is used for GPTs. The model cannot be chosen by you.
It was an instant switch from the gpt-4-turbo ChatGPT was previously using (what ChatGPT still calls “GPT-4”) to gpt-4o that broke the functionality of many GPTs overnight. Similarly, a user switching the model themselves to a better or worse one would only cause more headaches for the GPT developer.
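If pinning the model actually matters for your workflow, the only place you control it is the API, not GPTs. Here is a minimal sketch using the official openai Python SDK (v1.x); it assumes an OPENAI_API_KEY environment variable is set and that your account has access to the model named in the request, neither of which a GPT builder exposes.

```python
# Minimal sketch: unlike GPTs in ChatGPT, the API lets you pin the exact model per request.
# Assumes OPENAI_API_KEY is set in the environment and the model is available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # explicitly chosen; swap for any model your account can access
    messages=[
        {"role": "system", "content": "You are a writing assistant."},
        {"role": "user", "content": "Summarize this paragraph in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

That doesn’t help inside ChatGPT itself, but it is the only place where the model choice is genuinely in your hands.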
Hi, I would like to select o4-mini or o4-mini-high as the base model for my custom GPTs. Why are we stuck on the somewhat outdated 4o model? Don’t get me wrong, it has been good, but progress is looming, and surely OpenAI wants to stay competitive before Google releases AgentSpace, where we could use Gemini 2.5 with custom setups. I like OpenAI, so I want to keep developing here.
I just asked Gemini 2.5 Pro to edit a file in my project; it created another folder and another file to put the non-working code in. So I asked Claude and OpenAI 4.1 to do the same thing, and the work was done in a single prompt.
I just asked in Google’s search bar whether Hans Zimmer composed all of the Da Vinci trilogy scores, and that annoying Google AI bot answered below the search field, “NO, he composed only the first movie’s score,” showing two other unrelated composers for the two sequels, even though Hans is credited on IMDb and on his own official website and discography as the composer for all three movies.
I definitely never make AI-related decisions based on benchmarks; I try the models for each use case. And OpenAI’s models have been much better for most of my many use cases.