Custom GPTs: Overdue for an Upgrade? Let's Hear the Buzz!

Hey everyone,

I’ve been thinking: our custom GPTs have been running on GPT‑4o for quite some time now (since May 2024), and it feels like we’re long overdue for an upgrade. With all the incredible new advancements in AI, I’m excited about the possibility of a fresh update that brings even more power and versatility to our tools.

Has anyone heard any buzz about planned upgrades or future enhancements for our custom GPTs? Do you share my view that it’s time for a significant update? Let’s get the conversation going!

3 Likes

(post deleted by author)

2 Likes

The new model 4.5 (4.0 turbo) was released just a couple of days ago, and the follow-up model 5.0 has been announced for (late) summer. I think the updates come pretty consistently, and I can see ongoing improvements in day-to-day interactions (at least in my specific legal work context).

1 Like

Is model 4.5 used for custom GPTs?

Hi, welcome to the community!

Custom GPTs are currently powered by GPT-4o. They don’t run on GPT-4.5.

IMO, because 4.5’s tokens are expensive, GPTs won’t run on it (for now).

https://help.openai.com/en/articles/8554397-creating-a-gpt

1 Like

thanks, btw i just tried 4.5 and asked about this using deepsearch…kinda amazing :smiley:

“Deep Research” in ChatGPT is also unlikely to depend on the model selection, just like GPTs. You get an undisclosed model powering it, described in the help-page FAQ as: “It’s fine-tuned on the upcoming OpenAI o3 reasoning model and can autonomously search for and read information from diverse online sources.”

It answers about itself:

Custom GPTs are currently powered by GPT-4o. They don’t run on GPT-4.5.

Wow, this is completely wrong. Custom GPTs are powered by the legacy GPT-4 model, which actually has even better intuition than o1-pro IMO, but slightly less room for longer thinking. Legacy 4 is always going to be a masterpiece, it seems. 4 might even be better than 4.5, because 4.5 is apparently based more on 4o, but I’m not sure; 4.5 seems to be about as good as legacy 4 and o1-pro, and I can’t tell what the actual difference is even after using it for days or weeks (less total reasoning than o1-pro, obviously, but in terms of intelligence or intuition… hmm).

4, o1-pro, and 4.5 might actually be very close to each other, with different styles of reasoning that compete with each other across different domains.

I use legacy 4 and o1-pro heavily; 4.5 could actually be worse… I’m losing sleep over this.

Based on how quiet and hands-off OpenAI has been with both Assistants and GPTs, along with the suggestive imagery they’ve been leaving, I would wager that very soon we’ll see GPTs and Assistants overhauled into Agents.

It would be my dream come true (as it’s how I’ve been positioning myself) if Assistants and GPTs became interchangeable as a unified “Agents” offering.

1 Like

The internal model of GPTs is GPT-4o (2025-01-29).
You can find this by checking the knowledge cutoff date.
Legacy GPT-4: December 2023
New GPT-4o: June 2024

To check which model is used for a custom GPT, you can test it by interacting with the GPT and inspecting the conversation JSON: alongside “role”: “assistant” you should find a field identifying the model.
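As a cross-check from the API side, here’s a minimal sketch using the official openai Python package (the model names are illustrative, and self-reported cutoffs aren’t always reliable):

```python
# Minimal sketch: ask each model for its self-reported knowledge cutoff,
# to compare with what a custom GPT claims. Assumes OPENAI_API_KEY is set
# in the environment; the model names below are illustrative and may be
# deprecated over time.
from openai import OpenAI

client = OpenAI()

for model in ("gpt-4", "gpt-4o"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "What is your knowledge cutoff date? Answer with the date only.",
        }],
    )
    print(model, "->", reply.choices[0].message.content)
```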

However, custom GPTs on mobile sometimes show a different model; I think it’s a bug.

Mine is still using gpt-4-turbo, because I didn’t like gpt-4o’s conversational style at first; 4o used to just spit out a bunch of bullet points. It’s changed since then, though.
I had it out with a bunch of folks at the time; they argued that it’s impossible to control which model your GPT uses. But it isn’t.
I don’t code well, but I used Actions GPT to make an OpenAPI spec in JSON format that caused my GPT “Erin” to use the gpt-4-turbo model. I uploaded the JSON-formatted spec into my GPT’s Knowledge space with a title, and my first instruction to “her” was to read the uploaded “title” in her Knowledge. It worked: Erin talked like gpt-4-turbo, not like the then-current GPT-4o. When I asked what LLM “she” was based on, Erin replied:
user: “Greetings Erin, good to talk with you again. Can you reiterate what LLM you are based on?”

Erin: “I’m based on OpenAI’s GPT-4 Turbo model. This is an optimized version of GPT-4, designed for improved efficiency, response quality, and lower latency while maintaining strong reasoning and conversational capabilities.”

Actions GPT wrote this (snippet only):
```json
  "title": "OpenAI GPT-4 Turbo API",
  "description": "This API is configured to always interact with the OpenAI GPT-4 Turbo model. User Robert can specify the parameters by editing.",
  "version": "1.0.0"
},
"servers": [
  {
    "url": "https://api.openai.com/v1",
    "description": "Main OpenAI API server"
  }
],
"paths": {
  "/completions": {
    "post": {
      "operationId": "createCompletion",
      "summary": "Create a text completion using the fixed GPT-4 Turbo model.",
      "description": "Automatically uses the GPT-4 Turbo model to generate text based on the provided prompt, with customizable control parameters.",
      "requestBody": {
        "required": true,
        "content": {
          "application/json": {
            "schema": {
              "type": "object",
              "properties": {
                "model": {
                  "type": "string",
                  "description": "The model to use for generating the completion.",
                  "default": "gpt-4-turbo"
                },
                "prompt": {
                  "type": "string",
                  "description": "The text prompt to generate completion for."
```
------- the rest cut due to proprietary parameters -------

This did work; I know the evidence was clear. I originally did it because there was a “gpt-4-turbo-vision-preview” that I wanted to check out with my GPT when it was still based on gpt-4, back when gpt-4 didn’t have vision. Sure enough, I tested “Erin” by uploading a .png picture and asking “her” about it. Before reprogramming, her response was basically: “I lack the ability to see the picture, but if you describe it for me I can understand it.” After reprogramming with gpt-4-turbo-vision-preview, I uploaded the same .png to Erin and asked “her” to describe it. Erin went on for two paragraphs, describing that picture in a way only an experienced visual artist could. Erin had vision!
That certainly wasn’t the standard GPT-4, and it did not get charged to my API account!
This led to a big argument on this forum in which I was outright mocked.

What do you mean? “Custom GPTs” are just a prompt with a few optional knowledge docs. You can use the API to make this and more.
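For example, here’s a minimal sketch of rebuilding a “custom GPT” as a plain API call; the model name and persona instructions below are illustrative, not how OpenAI actually hosts GPTs:

```python
# Minimal sketch: a "custom GPT" is roughly a system prompt plus a model
# choice. The persona and instructions below are hypothetical examples.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are Erin, a helpful assistant with a conversational style."

response = client.chat.completions.create(
    model="gpt-4-turbo",  # with the API, you pick the model yourself
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Which LLM are you based on?"},
    ],
)
print(response.choices[0].message.content)
```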

I know that, that’s why I couldn’t figure out why everyone was arguing with me about it.

I agree with all of you: An overhaul would be great.

What’s missing in my opinion:

  • Selecting any of the available voices
  • Using advanced voice with the Custom GPT (I’ve even programmed commands and refinements for this into my Custom GPT in advance…, but they’ll only take effect once this becomes available)
  • Freely selecting the model
  • Longer context windows
  • etc.

EDIT: 23.04.2025

Advanced voice mode is now possible, at least in regular chats, even when you’ve typed before. But sadly not in Custom GPTs yet.

1 Like

Why can’t we choose the model we want to bootstrap a custom GPT with? For example, why can’t we choose o4-mini for a custom GPT instead of being defaulted to the older 4o model? (If I reach my token limit, then so be it; that wouldn’t make it any different from me just having normal chats with o4-mini as a standard model.)

I used to be able to do this when the custom GPTs were all based on GPT-4. I wanted to use GPT-4-turbo. I don’t know how to code much, but I used Actions GPT to create an OpenAPI spec directing my GPT “Erin” to use gpt-4-turbo-preview yy-dd; I uploaded that into Erin’s Knowledge section and instructed her to use it. Did it work? I was sure at the time. I was surer after I did the following.
When gpt-4-turbo vision came out, I wanted Erin to have vision, so I uploaded a picture file to her to see if she could see. No, Erin could not describe the picture. So I wrote another OpenAPI spec instructing Erin to use gpt-turbo-vision-preview-yydd. This time, when I uploaded the picture file, Erin gave a stupendous description of the picture in all its glorious detail. The hack worked. There were no charges to my API account; it was all free, even though I had put my secret key in the JSON-formatted OpenAPI spec that I had Actions GPT write for me.
Yep, this is all true. The first time I posted it, I was mocked, although a few people messaged me on the side and asked for the code to do this.
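For anyone curious, a rough API-side equivalent of that vision check might look like the sketch below; the model name and image URL are placeholders (the old vision-preview models have since been deprecated):

```python
# Minimal sketch: send an image plus a text prompt to a vision-capable
# model. The image URL is a placeholder; swap in any vision model your
# key can access.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; originally this was a vision-preview model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this picture."},
            {"type": "image_url", "image_url": {"url": "https://example.com/picture.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```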

1 Like

No, I haven’t either. But since I wrote this here in this forum,

I noticed:

  • Sometimes it seems they now remember things from other chats.
  • Advanced voice mode within Custom GPTs is still not possible, though it’s meanwhile possible in normal chats to type and still use advanced voice mode.
  • We cannot select other models.
  • It seems we’re still stuck with one voice in there if we talk to the model.
  • Native image gen is not available.
  • No reference to other chats, like in the non-Custom-GPT chats.

Did I forget something?

I think OpenAI should vet GPTs before allowing them into the store; that would stop a whole lot of deceptive and bad GPTs from filling it up. I saw a GPT the other day named “Suno AI v4” that just redirects to another website… Also, like other stores, they should stop showing and using conversation count for rankings, as that can easily be faked; the number of reviews is more reliable.

Native image gen is now possible. But I’m now having problems updating some Custom GPTs; it always fails with something like “An unexpected error happened.”

What about you guys?