Gpt-3.5-turbo-16k apologizes profusely

I am having a very serious problem with gpt-3.5-turbo-16k, which I have documented here: Gpt-3.5-turbo-16k api not reading context documents - #7 by SomebodySysop

Someone please help!

However, in trying to troubleshoot the problem, I ended up having this conversation with the model in the playground. I swear, it reminded me of one of those Kirk vs. The Computer episodes of Star Trek TOS.

If this output is from yesterday itself, it's worth noting that GPT has been having high error rates recently.

Other than that, prompting it (in the system context) not to provide apologies and the like can reduce the error rate for problems like this.

Using a lower temperature also helps, unless your problem requires it to be higher.
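
For what it's worth, here is a minimal sketch of both suggestions as an API call. This assumes the pre-1.0 `openai` Python client and gpt-3.5-turbo-16k; the exact system-prompt wording and `temperature=0` are just illustrative, not a guaranteed fix:

```python
import openai  # pre-1.0 openai-python client

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    temperature=0,  # lower temperature -> more deterministic answers
    messages=[
        {
            "role": "system",
            "content": (
                "Answer strictly from the provided context. "
                "Do not apologize or add filler. If the answer is not in the "
                "context, say 'not found in context'."
            ),
        },
        {"role": "user", "content": "CONTEXT:\n...\n\nQUESTION:\n..."},
    ],
)

print(response["choices"][0]["message"]["content"])
```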

Thanks. The API is not apologizing. It’s saying information that is right in front of it isn’t there. The Playground is apologizing for making the same mistake.

I guess with this particular prompt and this particular text I have hit the wall on gpt-3.5-turbo-16k. That is, the point where we clearly see the difference between it and gpt-4.

The only workaround I’ve found is to turn off the standalone question (which is how I maintain conversational context) and try to find a prompt it likes. And, as many people have already discovered, what it likes and dislikes changes from day to day.
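
For anyone unfamiliar with the term, the "standalone question" step works roughly like this: the chat history plus the follow-up question are condensed into a single self-contained question before retrieval. This is only a rough sketch; the prompt wording and helper function here are hypothetical, not my exact code:

```python
import openai  # pre-1.0 openai-python client

CONDENSE_PROMPT = """Given the conversation so far and a follow-up question,
rephrase the follow-up as a single standalone question.

Chat history:
{chat_history}

Follow-up question: {question}

Standalone question:"""

def make_standalone_question(chat_history: str, question: str) -> str:
    # One extra model call before retrieval; this is the step I disable
    # as a workaround.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        temperature=0,
        messages=[{
            "role": "user",
            "content": CONDENSE_PROMPT.format(
                chat_history=chat_history, question=question),
        }],
    )
    return response["choices"][0]["message"]["content"]
```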

Frustrating. Very frustrating.

As I think has been mentioned, if you take the above text plus the context it needs to produce answers, and then apply the 2048 token limit, all reference to the original context is lost, so this is the expected result.
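
To make that concrete, here is a rough illustration with tiktoken. The 2048 figure and this particular truncation strategy are assumptions about how the pipeline budgets tokens, not something confirmed above:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def truncate_to_budget(context: str, question: str, budget: int = 2048) -> str:
    """Keep only as much of the retrieved context as fits in the token
    budget once the question itself is accounted for."""
    question_tokens = enc.encode(question)
    remaining = budget - len(question_tokens)
    context_tokens = enc.encode(context)
    # If the retrieved context is longer than the remaining budget,
    # everything past the cut-off (often the part that actually answers
    # the question) never reaches the model at all.
    kept = enc.decode(context_tokens[:max(remaining, 0)])
    return kept + "\n\n" + question
```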