What models are my custom GPTs using?

Since the release of GPT-4o, I've been wondering whether my custom GPTs are still using GPT-4 Turbo or have switched to 4o, or maybe it depends on the date I started a session…

For general chat we have a selector to choose the model, but with custom GPTs that is not there, only actions for the GPT editor itself.

It seems we cannot even choose which model our custom GPTs will use anyway.


It should be the Omni model. Perhaps not for everyone yet, but soon.


For me the 4o update has been available for a few days now.

I assume that existing sessions will continue using the GPT-4 Turbo model with which the conversation began, but I was unsure whether GPT-4o would kick in with a new session or whether it would be necessary to recreate the GPT itself.

Just because I can't see the current model anywhere, like I can with general chat… :flushed:


You can if you view the event stream for the response to the call to the conversation endpoint in DevTools.
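If you save the event stream from DevTools, you can pull the model slug out of it with a few lines of code. This is a minimal sketch; the exact JSON shape of the events is an assumption for illustration, so adjust the key path to whatever your captured stream actually contains:

```python
import json

# Hypothetical sample of the SSE response from the conversation endpoint.
# The event structure here is an assumption, not the documented format.
sample_stream = """\
data: {"message": {"id": "abc123", "metadata": {"model_slug": "gpt-4o"}}}
data: [DONE]
"""

def extract_model_slugs(stream: str) -> list[str]:
    """Collect every model_slug found in an SSE-style event stream."""
    slugs = []
    for line in stream.splitlines():
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alives, etc.
        payload = line[len("data: "):]
        if payload == "[DONE]":
            continue  # sentinel marking the end of the stream
        event = json.loads(payload)
        slug = event.get("message", {}).get("metadata", {}).get("model_slug")
        if slug:
            slugs.append(slug)
    return slugs

print(extract_model_slugs(sample_stream))  # ['gpt-4o']
```

The same filtering works on a stream pasted from the Network tab, as long as you keep the `data: ` prefixes intact.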


If you have it, the macOS desktop application also offers this: by selecting 'See Details' from the conversation options dropdown, you can view the custom instructions that were set and the model(s) used during the conversation. This also applies to GPT conversations.

I have no idea whether this will be extended to the web interface, but it has proven pretty useful so far, so I hope they intend to release it more broadly.


I checked, all my custom GPTs are using 4o for new chats. Nice.

  • I started a new chat, selected 4o, and the stream showed:
    "model_slug": "gpt-4o"

  • Then I changed to 4 and:
    "model_slug": "gpt-4"

  • Then changed to 3.5 and:
    "model_slug": "text-davinci-002-render-sha"

Is GPT-3.5 a nickname for the davinci-002 model? :flushed:


Is there a way to switch to the GPT-4 model when using custom GPTs in the browser or in the app? For writing, GPT-4o is useless: it constantly hallucinates, gets stuck in loops, and completely ignores instructions…


No. GPTs use GPT-4o.