Let’s admit it. The free GPT-4o is bad for data analysis and hallucinates tremendously compared to GPT-4. Yet all custom GPTs have been migrated from GPT-4 to GPT-4o.
Is it possible to continue making my GPT talk to GPT-4 instead of GPT-4o? It’s been disastrous.
When I was using my GPTs today, it said I had reached the capacity limit of GPT-4, not 4o. There’s still no option for us to choose the model, but it feels like OpenAI is testing things in the backend.
Of course we are…this very concept is revolutionary, something that would’ve been impossible a few years ago. So naturally we will end up being the guinea pigs as the users.
That appears to be correct to me too. It lets you choose between the three models when talking about the GPT, but I don’t think there’s any way to set a “default” model that conversations with a custom GPT draw from.
If @rfbeck claims to have switched the model used for custom GPTs, we should be able to confirm whether that’s actually true by extracting the wrapper system prompt for custom GPTs.
Not referring to the instructions provided by the builder.
I’ve had the exact same issues. I just presented my custom GPTs to people at my company, started a team workspace, and shared the GPTs for everyone to use, and this was their first impression.
How are organizations expected to build these into production workflows?
To my understanding, you can only access GPT-4 Turbo through the OpenAI API. How can you build a GPT that uses GPT-4 Turbo? Is it possible there’s a misunderstanding of what a GPT is?
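To illustrate the distinction: the API, unlike the ChatGPT GPTs builder, lets the caller pin an exact model on every request. A minimal sketch below, assuming the official `openai` Python SDK (v1+); the payload contents are hypothetical, and actually sending it requires an API key.

```python
# Sketch: the OpenAI API lets you pin a specific model per request,
# which the custom-GPTs builder in ChatGPT does not expose.
import json

payload = {
    "model": "gpt-4-turbo",  # pinned explicitly; not selectable for custom GPTs
    "messages": [
        {"role": "user", "content": "Analyze this sales data for outliers."},
    ],
}

# With the `openai` SDK this would be sent roughly as:
#   client = OpenAI()
#   client.chat.completions.create(**payload)
# Shown here as plain JSON to keep the example self-contained.
print(json.dumps(payload, indent=2))
```

So a “GPT” built in ChatGPT rides on whatever model OpenAI routes it to, while an API integration controls the model itself.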
What was your colleagues’ first impression? Was it that your custom GPT is not very good?
I just don’t think the LLM is up for prod workflows in its current state. It was very good for two months before the recent 05/29/2024 update to GPT-4o. I’ve also noticed quality degradation in GPT-4 itself lately, even when I’m using plain ChatGPT rather than a custom GPT.