Gpt-3.5-turbo-0613 became useless

I’ve built an application with gpt-3.5-turbo-0613 and it was working fine, but today for some reason it became useless; it cannot follow the simplest instructions. Does anyone know anything about that? My application just doesn’t work anymore because of this, and I think OpenAI is really dropping the ball here.

Hi and welcome to the developer forum!

There have been no updates that would cause something like that, so I’m not sure what has happened. Has your application been updated? Can you give an example input and response? Can you also give a code snippet of your API call and any setup code it relies on?

It seems to be working again! :pray: But to give you more information about what was happening:

  1. Call the API in a while loop; the model may request a function call.
  2. Receive the result from that function.
  3. Call the API again with that result and then respond to the user.
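The loop described above can be sketched roughly as follows. This is a minimal illustration, not the poster’s actual code: `call_model` is a stub standing in for a real chat-completions request with the June 2023 `functions` parameter, and `get_weather` is a hypothetical tool.

```python
import json

# Hypothetical tool the model can call.
def get_weather(city):
    return json.dumps({"city": city, "temp_c": 21})

FUNCTIONS = {"get_weather": get_weather}

# Stub standing in for the real API call (e.g. a chat-completions
# request with `functions=[...]`); it simulates two turns.
def call_model(messages):
    if not any(m["role"] == "function" for m in messages):
        # First turn: the model asks to call the function.
        return {"role": "assistant", "content": None,
                "function_call": {"name": "get_weather",
                                  "arguments": json.dumps({"city": "Oslo"})}}
    # Second turn: the model answers using the function result.
    return {"role": "assistant", "content": "It is 21 C in Oslo."}

def run(messages):
    while True:
        reply = call_model(messages)
        messages.append(reply)
        fc = reply.get("function_call")
        if not fc:
            return reply["content"]  # final answer for the user
        result = FUNCTIONS[fc["name"]](**json.loads(fc["arguments"]))
        # Feed the function result back as a `function`-role message.
        messages.append({"role": "function", "name": fc["name"],
                         "content": result})

answer = run([{"role": "user", "content": "Weather in Oslo?"}])
```

The failure mode described would show up in step 3: the model receives the `function`-role message but ignores its content when composing the reply.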

The problem was that it was “dumb” for a while (since yesterday) and it was ignoring the feedback.

The AI was dumb about function responses just a second ago…

Unfortunately, there is currently no way to select a stable AI model from June. You only get the one they continue to revise.

If you use the API, try gpt-3.5-turbo-16k; it’s good.


Just a tip from another developer: you should probably add logging so you can see exactly what you’re sending and getting back. Most likely there was a bug in your code and it was sending a request that was malformed in some way. Yes, I’m guessing, but my hunches are usually right. :slight_smile:
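One way to follow that tip is to log the exact request payload and response around the API call. A minimal sketch, where `echo` is a hypothetical transport standing in for the real API client:

```python
import json
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("api")

def send_request(payload, transport):
    # Log exactly what goes over the wire...
    log.debug("request: %s", json.dumps(payload, indent=2))
    response = transport(payload)  # real code would call the API client here
    # ...and exactly what comes back.
    log.debug("response: %s", json.dumps(response, indent=2))
    return response

# Hypothetical transport used so the snippet runs without a network call.
echo = lambda p: {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
reply = send_request({"model": "gpt-3.5-turbo-0613",
                      "messages": [{"role": "user", "content": "hi"}]}, echo)
```

With the raw payloads in the log, a malformed message list or a truncated function result is immediately visible.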

Subtle changes in how the model processes language could, in theory, render nuances in language unpredictable.

This does happen when it comes to re-training of a model.

Essentially, the user has to revisit their instructions and alter them to be more explanatory.

  • That said, it shouldn’t be necessary.

But what can we do right?

That’s bad advice. gpt-3.5-turbo-16k supports functions. I can’t even get it to stop using them when I want it to…


My bad @b0zal

Documentation could be a bit clearer on this …

It’s good for some things. I reported on my experience with gpt-3.5-turbo-16k over 2 months ago.

I am working with a large collection of legal and regulatory documents. My vector store brings back excellent context on queries, but gpt-3.5-turbo-16k returns mediocre to bad results even with the best context. I use it for some minor things, but even then it’s iffy. I was trying to use it for categorization, but it does weird stuff like categorizing a query/response from a real estate dataset as “Real Estate Regulations”. And yes, before someone responds, the prompt is rather specific. Meanwhile, gpt-4 does the job beautifully.

I mean, come on!