Under token limit, response is cut off regardless


I am at Tier 1 of API usage. I have found that even when my requests come in well under a given model's token limit (in this case gpt-3.5-turbo-0125, but it happens with all models), my responses are getting cut off. Measuring my latest input with the tokenizer gives 290 tokens, and the response I received was 295. The response was cut off mid-sentence.

This has occurred for me regardless of the model, and regardless of whether I set the max_tokens parameter.

Any ideas?

Unfortunately this looks to have been caused by user error. My DB tool has a hover feature that, I just discovered, cuts off text past a certain number of characters. After querying with the Django ORM instead, I found the completion contained the full result.
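For anyone hitting similar symptoms: one way to confirm whether the API itself truncated the output (rather than a tool truncating the display) is to inspect `finish_reason` on the returned choice. In the Chat Completions response, `"stop"` means the model finished on its own, while `"length"` means it hit the max_tokens limit. A minimal sketch (the sample payload below is illustrative, not a real API response):

```python
def was_truncated(response: dict) -> bool:
    """Return True if the completion was cut off by the token limit."""
    return response["choices"][0]["finish_reason"] == "length"

# Illustrative payload shaped like a Chat Completions response.
sample = {
    "choices": [
        {"message": {"content": "Full answer text."}, "finish_reason": "stop"},
    ],
    "usage": {"prompt_tokens": 290, "completion_tokens": 295},
}

print(was_truncated(sample))  # False: "stop" means the model finished naturally
```

If this prints False but the text still looks cut off, the truncation is happening somewhere downstream of the API, as it was in my case.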