However, in trying to troubleshoot the problem, I ended up having this conversation with the model in the playground. I swear, it reminded me of one of those Kirk vs. The Computer episodes of Star Trek TOS.
Thanks. The API isn't apologizing. It's claiming that information sitting right in front of it isn't there. The Playground is apologizing for making the same mistake.
I guess with this particular prompt and this particular text I've hit the wall on gpt-3.5-turbo-16k: the point where the difference between it and gpt-4 becomes obvious.
The only workaround I’ve found is to turn off the standalone question, which is how I maintain conversational context, and try to find a prompt it likes. And, as many people have already discovered, what it likes and dislikes changes from day to day.
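For anyone unfamiliar with the standalone-question step being referred to, here's a minimal sketch of the idea (the function name, prompt wording, and model choice are my own assumptions, not the poster's actual code): the chat history plus the follow-up question are sent to the model once, and the model rewrites the follow-up as a self-contained question before it goes to retrieval.

```python
# Minimal sketch of a standalone-question step (illustrative only; the
# function name, prompt wording, and model choice are assumptions).
import openai

CONDENSE_PROMPT = (
    "Given the conversation so far and a follow-up question, rewrite the "
    "follow-up as a single standalone question that contains all the "
    "context needed to answer it.\n\n"
    "Conversation:\n{history}\n\n"
    "Follow-up question: {question}\n\n"
    "Standalone question:"
)

def condense_question(history: str, question: str) -> str:
    """Rewrite a follow-up question so it can be answered without the chat history."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        temperature=0,
        messages=[{
            "role": "user",
            "content": CONDENSE_PROMPT.format(history=history, question=question),
        }],
    )
    return response.choices[0].message.content.strip()
```

Turning this step off saves a model call, but it also means each follow-up reaches retrieval without the conversational context it depends on, which is why the choice of prompt then matters so much.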
As I think has been mentioned, if you take the above text and the context it needs to produce answers, and then introduce the 2048-token limit into the equation, all reference to the original context is lost, so this is the expected result.
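As a rough illustration of that arithmetic (a sketch only: the 2048 figure, the trimming strategy, and the helper name are assumptions for the example), once the prompt plus the retrieved text exceed a hard token budget, whatever gets trimmed first simply never reaches the model:

```python
# Illustrative sketch of why context disappears under a hard token budget.
# The 2048 figure and the oldest-first trimming strategy are assumptions.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
TOKEN_BUDGET = 2048

def fit_to_budget(prompt: str, context_chunks: list[str]) -> str:
    """Append context chunks until the budget is exhausted; the rest is silently dropped."""
    used = len(enc.encode(prompt))
    kept = []
    for chunk in context_chunks:
        cost = len(enc.encode(chunk))
        if used + cost > TOKEN_BUDGET:
            break  # everything past this point, including the original context, is lost
        kept.append(chunk)
        used += cost
    return prompt + "\n" + "\n\n".join(kept)
```

If the original context happens to sit past the cutoff, the model never sees it, and answering as though it doesn't exist is exactly the behavior you'd expect.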