Longer GPT-3.5-turbo Output

Thanks for that valuable point. I switched the model to gpt-3.5-turbo-16k-0613 and the response length increased somewhat. Since I have to process large files, I need to stay on the gpt-3.5-turbo-16k variant for its larger context window.
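For reference, here is a minimal sketch of the change using the pre-1.0 `openai` Python package (the version current when the 0613 models shipped); the file name, system prompt, and `max_tokens` value are placeholder assumptions. One thing worth noting: the larger context window by itself does not produce longer answers; the prompt tokens plus the completion must fit inside the 16,384-token window, and the completion length is capped by `max_tokens`, so raising that parameter (while leaving room for the prompt) is what actually allows a longer response.

```python
import openai

openai.api_key = "sk-..."  # replace with your real API key

# Hypothetical input: the large file whose contents need processing.
with open("large_input.txt", "r", encoding="utf-8") as f:
    long_file_text = f.read()

# gpt-3.5-turbo-16k-0613 quadruples the base model's context window
# (16,384 vs 4,096 tokens). Prompt + completion share that window,
# so max_tokens must leave room for the prompt.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k-0613",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": long_file_text},
    ],
    max_tokens=8000,  # upper bound on the completion, in tokens (placeholder value)
)

print(response["choices"][0]["message"]["content"])
```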