The best answer I have ever received from GPT-3.5-Turbo-16K API

Thanks, I appreciate it, but at this point, I’m pretty convinced that gpt-3.5-turbo-16k cannot be used for this particular application. If it’s working for you, that’s great, but on the documents I’m working with, I’m getting mediocre to bad responses pretty consistently.

Here is an example where the model claims the information is non-existent, even though the answer is right there in the document gpt-3.5-turbo is reading — it simply fails to see it. Again, this happens consistently, both in the API and in the playground. GPT-4 and even Claude both respond with the correct information with no changes whatsoever to the prompt or format.

@Foxalabs did assist me in finding the right prompt, but the problem is that I can’t expect end-users to figure this out on their own.

Now, it could very well be the way my data is formatted; I just don’t know. The only thing I know is that the answers, relative to the other models, are consistently poor.
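For what it’s worth, the comparison was done with the identical prompt and document across all three models. A minimal sketch of that kind of side-by-side test is below — the helper name, system prompt, and placeholders are my own illustration, not my actual setup:

```python
import os

# Hypothetical helper: wraps a document and a question into the chat
# message format sent unchanged to each model under test.
def build_messages(document: str, question: str) -> list:
    return [
        {"role": "system",
         "content": ("Answer using only the document provided. "
                     "If the answer is not in the document, say so.")},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: {question}"},
    ]

# Only attempt live calls when an API key is available.
if os.getenv("OPENAI_API_KEY"):
    import openai  # requires the openai package
    for model in ("gpt-3.5-turbo-16k", "gpt-4"):
        resp = openai.ChatCompletion.create(
            model=model,
            messages=build_messages("...", "..."),  # document and question elided
            temperature=0,
        )
        print(model, resp.choices[0].message.content)
```

Since the messages are built once and reused, any difference in answers comes from the model, not the prompt.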