Response text too short without max_tokens parameter?

Hello - I am trying to use the following API request:

    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=wText,
        temperature=0.5,
        max_tokens=3000,
    )

With the max_tokens parameter set, I get a complete, long answer, like:

Completion(id='cmpl-8K8n2YRTGhYBowVhMu4k70irnMJdb', choices=[CompletionChoice(finish_reason='stop', index=0, logprobs=None, 
text='\nNo, the text is not about a laundry service. It is about a company that sells various products and services, including food items and shipping services.')], 
created=1699811040, model='gpt-3.5-turbo-instruct', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=31, prompt_tokens=183, total_tokens=214))

But when I leave out this optional "max_tokens" parameter, the answer is cut short, like this:

Completion(id='cmpl-8K8qHxZAw5DNAjtKPEehZpd9pX24w', choices=[CompletionChoice(finish_reason='length', index=0, logprobs=None, 
text='\n\nNo, the text is not about a laundry service. It is about a')], 
created=1699811241, model='gpt-3.5-turbo-instruct', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=16, prompt_tokens=183, total_tokens=199))

It seems the default completion length is very small (note completion_tokens=16 in the response above).
Why is that, and why does the request not use the model's full available token range by default?
Is there any reason for this?

It seems that the defaults on the instruct model follow the legacy completions API settings, where max_tokens defaults to a small fixed value (16 tokens) when not specified.
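Whatever value you choose for max_tokens, you can detect a truncated answer by checking finish_reason, which is "length" when the model hit the limit and "stop" when it finished naturally. A minimal sketch of that check (the Completion objects are simulated here with SimpleNamespace rather than real API calls):

```python
from types import SimpleNamespace

def is_truncated(resp) -> bool:
    # finish_reason == "length" means the completion stopped because
    # it reached the max_tokens limit, not a natural stopping point
    return resp.choices[0].finish_reason == "length"

# Simulated responses mirroring the two cases shown above
full = SimpleNamespace(choices=[SimpleNamespace(finish_reason="stop")])
cut = SimpleNamespace(choices=[SimpleNamespace(finish_reason="length")])

print(is_truncated(full))  # False - answer completed normally
print(is_truncated(cut))   # True  - answer was cut off
```

In a real script you would run this check on the object returned by client.completions.create and, for example, retry with a larger max_tokens when it returns True.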