Custom GPTs: Overdue for an Upgrade? Let's Hear the Buzz!

Hey everyone,

I’ve been thinking—our custom GPTs have been running on GPT‑4o for quite some time now (May 2024), and it feels like we’re long overdue for an upgrade. With all the incredible new advancements in AI, I’m excited about the possibility of a fresh update that brings even more power and versatility to our tools.

Has anyone heard any buzz about planned upgrades or future enhancements for our custom GPTs? Do you share my view that it’s time for a significant update? Let’s get the conversation going!

2 Likes

No, from what I gather, they are so focused on new models that they don’t have enough resources dedicated to maintaining existing features. The custom tools need an OVERHAUL!!

I’m happy to respond to any feedback regarding this issue.

1 Like

The new model 4.5 (4.0 turbo) was released just a couple of days ago. The follow-up model, 5.0, has been announced for (late) summer. I think the updates come pretty consistently, and I can see ongoing improvements in day-to-day interactions (at least in my specific legal work context).

1 Like

Is the model 4.5 used for custom GPTs?

Hi, welcome to the community!

Custom GPTs are currently powered by GPT-4o. They don’t work on GPT-4.5.

IMO, because 4.5’s tokens are expensive, GPTs will not run on it (for now).

https://help.openai.com/en/articles/8554397-creating-a-gpt

1 Like

thanks, btw i just tried 4.5 and asked about this using deep research… kinda amazing :smiley:

“Deep Research” in ChatGPT is also unlikely to depend on the model selection, just like GPTs. You get an undisclosed model powering it, described in the help page FAQ as: “It’s fine-tuned on the upcoming OpenAI o3 reasoning model and can autonomously search for and read information from diverse online sources.”

It answers about itself:

Custom GPTs are currently powered by GPT-4o. They don’t work on GPT-4.5.

Wow, this is completely wrong. Custom GPTs are powered by the legacy GPT-4 model, which actually has even better intuition than o1-pro IMO, but slightly less room for longer thinking. Legacy 4 is always going to be a masterpiece, it seems. 4 might even be better than 4.5, because 4.5 is apparently based more on 4o, but I’m not sure; 4.5 seems to be about as good as legacy 4 and o1-pro, and I can’t tell what the actual difference is even after using it for days or weeks (less total reasoning than o1-pro, obviously, but in terms of intelligence or intuition, hmm…).

4, o1-pro, and 4.5 might actually be very close to each other, with different styles of reasoning that compete across different domains.

I use legacy 4 and o1-pro heavily; 4.5 could actually be worse… I’m losing sleep over this.

Based on how quiet and hands-off OpenAI has been with both Assistants and GPTs, along with the interpretive images they’ve been leaving, I would wager that very soon we’ll see GPTs & Assistants overhauled into Agents.

It would be my dream come true (as it’s how I’ve been positioning myself) if Assistants & GPTs became interchangeable as a unified “Agents” offering.

1 Like

The internal model of GPTs is GPT-4o (2025-01-29).
You can find this by checking the knowledge cutoff date.
Legacy GPT-4: December 2023
New GPT-4o: June 2024

To check which model a custom GPT uses, you can interact with it and inspect the conversation JSON: you should find the message with “role”: “assistant” and the model that produced it.
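For example, if you open the browser dev tools while chatting with a GPT and look at the conversation response, the assistant message looks roughly like this (a sketch; field names such as author.role and model_slug come from my own inspection of the web client and are internal details that may change):

{
  "author": { "role": "assistant" },
  "content": { "content_type": "text", "parts": ["…"] },
  "metadata": { "model_slug": "gpt-4o" }
}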

However, custom GPTs on the mobile app sometimes show a different model; I think that is a bug.

Mine is still using gpt-4-turbo, because I didn’t like gpt-4o’s conversational style at first. 4o used to just spit out a bunch of bullet points. It’s changed since then, though.
I had it out with a bunch of folks at the time; they argued that it’s impossible to choose which model your GPT uses. But it isn’t.
I don’t code well, but I used Actions GPT to make an OpenAI API spec in JSON format that caused my GPT, “Erin”, to use the gpt-4-turbo model. I uploaded the JSON-formatted spec into my GPT’s Knowledge space under a title, and my first instruction to “her” was to read that uploaded file in her Knowledge. It worked: Erin talked like gpt-4-turbo, not like the then-current GPT-4o. When I asked what LLM “she” was based on, Erin replied:
user: “Greetings Erin, good to talk with you again. Can you reiterate what LLM you are based on?”

Erin: “I’m based on OpenAI’s GPT-4 Turbo model. This is an optimized version of GPT-4, designed for improved efficiency, response quality, and lower latency while maintaining strong reasoning and conversational capabilities.”

Actions GPT wrote this snippet (only a portion shown):

  "title": "OpenAI GPT-4 Turbo API",
  "description": "This API is configured to always interact with the OpenAI GPT-4 Turbo model. User Robert can specify the parameters by editing.",
  "version": "1.0.0"
},
"servers": [
  {
    "url": "https://api.openai.com/v1",
    "description": "Main OpenAI API server"
  }
],
"paths": {
  "/completions": {
    "post": {
      "operationId": "createCompletion",
      "summary": "Create a text completion using the fixed GPT-4 Turbo model.",
      "description": "Automatically uses the GPT-4 Turbo model to generate text based on the provided prompt, with customizable control parameters.",
      "requestBody": {
        "required": true,
        "content": {
          "application/json": {
            "schema": {
              "type": "object",
              "properties": {
                "model": {
                  "type": "string",
                  "description": "The model to use for generating the completion.",
                  "default": "gpt-4-turbo"
                },
                "prompt": {
                  "type": "string",
                  "description": "The text prompt to generate completion for."

-------the rest cut due to proprietary parameters-----------------------------------------------
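For anyone trying to reproduce this: the snippet starts inside the spec’s info object, so a complete Actions schema needs the standard OpenAPI wrapper around it, roughly like this (abbreviated sketch; the openapi version and the responses block are standard OpenAPI 3 boilerplate I’ve added, not copied from the original file):

{
  "openapi": "3.1.0",
  "info": {
    "title": "OpenAI GPT-4 Turbo API",
    "description": "Always interact with the GPT-4 Turbo model.",
    "version": "1.0.0"
  },
  "servers": [
    { "url": "https://api.openai.com/v1" }
  ],
  "paths": {
    "/completions": {
      "post": {
        "operationId": "createCompletion",
        "summary": "Create a text completion with gpt-4-turbo.",
        "responses": {
          "200": { "description": "Completion result" }
        }
      }
    }
  }
}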

This did work; the evidence was clear. I originally did it because there was a “gpt-4-turbo-vision-preview” I wanted to try with my GPT, back when it was still based on gpt-4 and gpt-4 didn’t have vision. Sure enough, I tested “Erin” by uploading a picture (.PNG) and asking “her” about it. Before reprogramming, her response was basically “I lack the ability to see the picture, but if you describe it for me I can understand it.” After reprogramming with “gpt-4-turbo-vision-preview”, I uploaded the same .PNG to Erin and asked “her” to describe it. Erin went on for two paragraphs describing that picture in a way only an experienced visual artist could. Erin had vision!
That certainly wasn’t the standard GPT-4, and it did not get charged to my API account!
This led to a big argument on this forum, in which I was outright mocked.

What do you mean? “Custom GPTs” are just a prompt with a few optional knowledge docs. You can use the API to make this and more.
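As a rough sketch of what that looks like in practice: the whole “custom GPT” becomes just a system message in a Chat Completions request, sent as a POST to https://api.openai.com/v1/chat/completions with your API key (the model choice and the instructions text below are made up for illustration):

{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "You are Erin, a friendly assistant. Answer concisely and ask a follow-up question when the request is ambiguous."
    },
    {
      "role": "user",
      "content": "Hi Erin, which LLM are you based on?"
    }
  ]
}

Knowledge docs would map onto retrieval you run yourself, and Actions map roughly onto tool/function calling on the API side.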

I know that; that’s why I couldn’t figure out why everyone was arguing with me about it.

I agree with all of you: An overhaul would be great.

What’s missing in my opinion:

  • Selecting any of the available voices
  • Using Advanced Voice with the Custom GPT (I even programmed commands and refinements for this into my Custom GPT in advance…, but they will only take effect once it becomes available)
  • Freely selecting the model
  • Longer context windows
  • etc.