The GPT-4-Turbo model has a 4K token output limit, so you are doing nothing wrong in that regard.
A more suitable model would be GPT-4-32K, but I am unsure whether that is in general release yet.
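To see the cap in practice, here is a minimal sketch using the official Python SDK (openai >= 1.0). The model name and prompt are just placeholders; swap in whichever model your account actually has access to. Even if you pass a large `max_tokens`, the completion itself is still bounded by the model's own output limit:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # placeholder; use a model you have access to
    messages=[{"role": "user", "content": "Summarise the attached notes."}],
    max_tokens=4096,  # GPT-4-Turbo completions top out at 4K tokens regardless of the context window
)

print(response.choices[0].message.content)
print(response.usage.completion_tokens)  # will never exceed the model's output cap
```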
If you go to the playground at https://platform.openai.com/playground?mode=chat, make sure you are in Chat mode, then select Models and Show more models. That should give you a list of everything you have access to.
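You can also check programmatically rather than through the playground. This is a small sketch using the models endpoint of the Python SDK; it simply prints every model ID your API key can see, so you can confirm whether a 32K variant shows up for your account:

```python
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

# List every model available to this API key and print the IDs
for model in client.models.list():
    print(model.id)
```

If a GPT-4-32K model is available to you, it will appear in that list; otherwise you will only see the models your account is currently entitled to use.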