Gpt-3.5-turbo-1106 performance

I have made several posts about this issue. gpt-3.5-turbo-16k is far less capable than gpt-4 and sometimes seems incapable of accurately reading returned documents. I have documented cases where, given the exact same question and context documents, gpt-3.5-turbo-16k fails even to recognize the answer in the documents, while gpt-4 responds perfectly.

I doubt it is the vector search results, since those are produced before the LLM is involved and are not affected by which model consumes them.

This was way back in June 2023: Gpt-3.5-turbo-16k api not reading context documents

However, I was eventually able to get much better results from the gpt-3.5 model by using XML markup in my prompt: API Prompt for gpt-3.5-turbo-16k - #12 by SomebodySysop
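Roughly, the idea is to wrap each retrieved document and the question in explicit XML tags so the model can clearly locate the context. This is just an illustrative sketch of that approach; the tag names and helper function here are my own for the example, not the exact prompt from the linked thread:

```python
def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that delimits each context document with XML tags.

    Tag names (<documents>, <document>, <question>) are illustrative;
    the point is giving the model unambiguous boundaries around context.
    """
    doc_blocks = "\n".join(
        f'<document id="{i}">\n{doc}\n</document>'
        for i, doc in enumerate(documents, start=1)
    )
    return (
        "Answer the question using only the documents below.\n"
        f"<documents>\n{doc_blocks}\n</documents>\n"
        f"<question>{question}</question>"
    )

prompt = build_prompt(
    "What year was the library founded?",
    ["The library was founded in 1898.", "It moved buildings in 1950."],
)
print(prompt)
```

The resulting string would then be sent as the user (or system) message in the chat completion request as usual.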

I know your question was about gpt-3.5-turbo-1106, but my understanding is that gpt-3.5-turbo-16k is an alias for that model. Somebody let me know if that is incorrect.
