gpt-3.5-turbo-1106 has a 16k context window but I get a max token error

I make an API call with max_tokens set to 6554 on gpt-3.5-turbo-1106, which has a 16k context window, but I'm told that completion tokens can be at most 4096?

Error code: 400 - {'error': {'message': 'max_tokens is too large: 6554. This model supports at most 4096 completion tokens, whereas you provided 6554.', 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': None}}
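Per the error above, max_tokens caps the *completion* only, and for this model it cannot exceed 4096 even though the full context window is 16k. A minimal sketch of a client-side guard (the helper name and limit constant are my own, the 4096 figure comes from the error message):

```python
# The 4096 ceiling is taken from the 400 error for gpt-3.5-turbo-1106.
COMPLETION_TOKEN_LIMIT = 4096

def clamp_max_tokens(requested: int, limit: int = COMPLETION_TOKEN_LIMIT) -> int:
    """Clamp a requested max_tokens value to what the model will accept."""
    return min(requested, limit)

# The original request asked for 6554 completion tokens:
print(clamp_max_tokens(6554))  # prints 4096, within the supported range
```

The clamped value can then be passed as `max_tokens` in the chat completion request; the remaining context window is still available for the prompt itself.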

Never mind, I didn't read the docs properly.

Will the new gpt-3.5-turbo-1106 model later be updated to allow a completion-token limit larger than 4096? The gpt-3.5-turbo-16k model offers that.