GPT-4o has been bad for my GPT; any way to switch back to GPT-4?

Let’s admit it: the free GPT-4o is bad for data analysis and hallucinates far more than GPT-4. However, all custom GPTs have been migrated from GPT-4 to GPT-4o.

Is it possible to continue making my GPT talk to GPT-4 instead of GPT-4o? It’s been disastrous.

When I was using my GPTs today, they said they had reached the capacity limit of GPT-4, not GPT-4o. There’s still no option for us to choose the model, but it feels like OpenAI is testing things in the backend.

GPT in itself is fantastic - love it. But it somehow does feel like we’re all part of a large beta program in several ways.

It is beta, and it will be in beta for a long time from now…

In my experience, a custom GPT on GPT-4o with an XLSX database works flawlessly. No mistakes, no hallucinations.

How big was your xlsx file? In my case, my GPT receives real-time xlsx files and it hallucinates 60% of the time.

Actually, mine is very small, only 66KB with 350 rows, but I have worked with larger ones.

Of course we are…this very concept is revolutionary, something that would’ve been impossible a few years ago. So naturally we will end up being the guinea pigs as the users.

That matches what I see too. It lets you choose between the three models when talking to the GPT, but I don’t think there’s any way to set a “default” model that conversations with a custom GPT draw from.

If you are using the API, can you switch the model to gpt-4-turbo instead of gpt-4o? Quite likely this has changed since I checked last week!
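For reference, in the API the model is just a field in the request body, so switching between gpt-4o and gpt-4-turbo is a one-string change. A minimal sketch of what that body looks like (the helper name here is mine, not part of the API):

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble the JSON body for a POST to /v1/chat/completions."""
    return {
        "model": model,  # e.g. "gpt-4-turbo" or "gpt-4o"
        "messages": [{"role": "user", "content": user_message}],
    }

# Same prompt, different model: only the "model" string changes.
request = build_chat_request("gpt-4-turbo", "Analyze this spreadsheet row.")
print(json.dumps(request, indent=2))
```

Custom GPTs in ChatGPT don’t expose this field, which is the whole problem being discussed here.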

No, I am not referring to the API. I was referring to the fact that custom GPTs now use GPT-4o.

I don’t think this is possible. How are you sure your GPT is using GPT-4 Turbo and not GPT-4o? Is your GPT listed in the store?

Yes, this reads like it’s a model hallucination.
Extracting the system prompt should resolve the question.

What do you mean by “extracting the system prompt”? Can you please elaborate?

If @rfbeck claims to have switched the model used for custom GPTs, we should be able to confirm whether that is actually true by extracting the wrapper system prompt for custom GPTs.
I’m not referring to the instructions provided by the builder.

I’ve had the exact same issues. I just presented my custom GPTs to people at my company, started a team workspace, and shared the GPTs for everyone to use, and this was their first impression.

How are organizations expected to build these into production workflows?

Mine’s not listed in the store. But I’m sure this worked for me.

Because when I first used GPT-4 Turbo, GPT-4 did not have vision capabilities, and when I used my method to switch my GPT to GPT-4 Turbo, it suddenly did.

To my understanding, you can only access GPT-4 Turbo through the OpenAI API. How can you build a GPT that uses GPT-4 Turbo? Is it possible there is a misunderstanding of what a GPT is?

What was your colleagues’ first impression? Was it that your custom GPT is not very good?

I just don’t think the LLM is ready for production workflows in its current state. It was very good for two months before the recent update to GPT-4o on 05/29/2024. I have also noticed a quality degradation in GPT-4 recently, even when I am using plain ChatGPT rather than a custom GPT.