GPT-3.5-turbo vs GPT-3.5-turbo-16k

OK, this way it answers, but the answer is totally unreliable. So maybe it's better that it doesn't answer at all :slight_smile:

Interesting indeed. I wonder whether it is language-based or not; it would be an interesting experiment to manually create a translated context and query and see if the system has the same problem in English :thinking:


Hi @gianluca.suzzi

I have the same problem!

The result from gpt-3.5-turbo-16k is not good even when the context is longer.


As far as I can tell, the “16k” version outperforms the shorter version.

I came to the conclusion that even with its larger context window, if the context contains ambiguities the 16k version “prefers” not to answer at all rather than give an unreliable answer. But this is just my personal opinion.


I’ve had the same issue with 16k: no result at all, while 3.5 gives decent results. I will look at my prompt, but much of what I send is customer-created, and it won’t always be well-formed. I stopped coding against 16k! I would rather have a mediocre answer than none.


Hey guys.

I have had this problem, and the fix was honestly just my own mistake.

My error came down to poorly written prompting. The way my prompt was written made the model believe that the end of my prompt was also the end of its completion. If you tweak the prompt and add a declaration of intent at the end, it reduces the rate of incidence to zero.
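For what it’s worth, here is a minimal sketch of what “adding a declaration of intent at the end” might look like. The function name, prompt wording, and trailing `Answer:` cue are my own illustration, not the poster’s actual code:

```python
def build_prompt(context: str, question: str) -> str:
    """Build a prompt that ends with an explicit cue telling the model
    its completion starts here, so it does not mistake the end of the
    prompt for the end of its own answer."""
    return (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        # Declaration of intent: the trailing cue marks where the
        # model's completion should begin.
        "Answer:"
    )

prompt = build_prompt(
    "The sky appears blue due to Rayleigh scattering.",
    "Why is the sky blue?",
)
print(prompt.endswith("Answer:"))  # the prompt now ends with an explicit cue
```

The idea is simply that a prompt ending mid-thought can look “complete” to the model, whereas an explicit trailing cue makes it clear a response is expected.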

Hi, maybe someone can help :slight_smile:

I have the 16K model available in the Playground and it works great, but not via the API. I get this response: { “error”: { “message”: “The model gpt-3.5-turbo-16K does not exist”,

Are you guys able to access it via API? If so what is the model name?
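One thing worth checking: model ids in the API are case-sensitive and lowercase, and the error above shows `gpt-3.5-turbo-16K` with a capital “K”. A minimal sketch of a request payload with the lowercase id (assuming the standard chat completions endpoint; the payload shape here is illustrative):

```python
# Model ids are case-sensitive: "gpt-3.5-turbo-16K" (capital K) does not
# exist, while "gpt-3.5-turbo-16k" (all lowercase) is the published id.
MODEL = "gpt-3.5-turbo-16k"

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello"}],
}

# Sanity check before sending: the id should be entirely lowercase.
print(payload["model"] == payload["model"].lower())
```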


Answered on your other post (please avoid double-posting issues).
