Error: you requested 7409 tokens (1345 in the messages, 64 in the functions, and 6000 in the completion)

I am using “gpt-3.5-turbo-0613”.
The following error occurred:
“Error: This model’s maximum context length is 4097 tokens. However, you requested 7409 tokens (1345 in the messages, 64 in the functions, and 6000 in the completion)”.
I asked the model to respond within 30 words.
How can I solve this problem? This error never happened with “gpt-3.5-turbo”.
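
For reference, a minimal sketch of one likely fix, assuming the legacy openai Python SDK (pre-1.0) and that `max_tokens` was set to 6000 in the failing call: `max_tokens` counts against the same 4097-token window as the messages (1345 tokens) and function definitions (64 tokens), so 1345 + 64 + 6000 = 7409 exceeds the limit. Capping `max_tokens` at the remaining budget avoids the error. The `messages` and `functions` payloads below are hypothetical placeholders.

```python
import openai  # legacy SDK (openai<1.0); reads OPENAI_API_KEY from the environment

# Hypothetical placeholders standing in for the actual request payload.
messages = [{"role": "user", "content": "Answer within 30 words: ..."}]
functions = [
    {
        "name": "noop",
        "description": "placeholder function definition",
        "parameters": {"type": "object", "properties": {}},
    }
]

CONTEXT_LIMIT = 4097       # gpt-3.5-turbo-0613 window, per the error message
PROMPT_TOKENS = 1345 + 64  # messages + functions, per the error message

# max_tokens shares the window with the prompt, so cap it at what is left:
max_tokens = CONTEXT_LIMIT - PROMPT_TOKENS  # 2688, instead of the 6000 that failed

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    max_tokens=max_tokens,
)
print(response["choices"][0]["message"])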