OpenAI API generates incomplete responses

I am trying to run a grammar-error check on a user-input document using the OpenAI API with the GPT-3.5-turbo-instruct model. Below is the prompt that is passed, in which I ask the model to return its output in JSON format. The output generated is an incomplete JSON response.

I need expert assistance to fix this incomplete response.

You will be provided with input_text and your task is to highlight the grammatical errors and provide corrections for issues such as missing punctuation marks, removing repetitive words, replacing confusing words, tense, subject-verb agreement, and sentence structure, etc., and generate a static output every time in the following JSON format:

{
  "replacements": [
    {
      "value": "<>",
      "phrase": "<>",
      "corrective_text": "<>",
      "corrective_text_phrase": "<>",
      "type_of_error": "<<You will need to specify the type of error found (e.g., spelling, punctuation, grammatical)>>",
      "justification": "<>"
    }
  ]
}

input_text=
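To make "incomplete" concrete: the returned text stops partway through the JSON object, so it cannot be parsed. A small standard-library check like the one below (my own illustration, not part of the application code) shows the failure:

import json

def try_parse(chunk_output):
    # A truncated completion usually ends mid-object and raises JSONDecodeError
    try:
        return json.loads(chunk_output)
    except json.JSONDecodeError as e:
        print(f"Incomplete/invalid JSON at position {e.pos}: {e.msg}")
        return None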

Please find below the Python logic used:

def process_text(input_text):
    chunk_size = 1000  # Adjust the chunk size based on your requirements
    # Split the input text into fixed-size character chunks
    chunks = [input_text[i:i + chunk_size] for i in range(0, len(input_text), chunk_size)]
    llm_output = []

    # `prompt` and `llm` are defined elsewhere in the application (not shown here)
    for chunk in chunks:
        # Insert the chunk into the prompt template and collect the model output
        final_prompt = prompt.format(input=chunk)
        chunk_output = llm(final_prompt)
        llm_output.append(chunk_output)

    return llm_output
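The llm() helper is not defined in the snippet above. For context, here is a minimal sketch of how such a helper might call the Completions API, assuming the pre-1.0 openai Python package; the max_tokens value is illustrative, not my real setting, but it is the parameter that caps how long each completion can be:

import openai

openai.api_key = "YOUR_API_KEY"  # assumed: the key is actually loaded from config/env

def llm(final_prompt):
    # Minimal sketch of a Completions call for GPT-3.5-turbo-instruct.
    # max_tokens caps the completion length, so a small value can cut the
    # returned JSON off mid-object.
    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt=final_prompt,
        max_tokens=256,  # illustrative value only
        temperature=0,
    )
    return response.choices[0].text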

from flask import Flask, jsonify, request
import logging

app = Flask(__name__)              # app object, configured as in the full application
root_logger = logging.getLogger()  # root logger used for error reporting


@app.route('/api/<api_name>', methods=['GET'])
def resolve_api(api_name):
    try:
        allowed_apis = ['correct-grammar']
        if api_name not in allowed_apis:
            return jsonify({"error": f"Invalid API route. Allowed routes: {', '.join(allowed_apis)}"}), 400

        input_text = request.args.get('input_text', '')
        llm_output = process_text(input_text)

        response = {
            "input": input_text,
            "output": llm_output
        }
        return jsonify(response)
    except Exception as e:
        # Log any exceptions
        root_logger.exception("An error occurred: %s", e)
        return jsonify({"error": "An error occurred while processing the request."}), 500
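For completeness, the route above is exercised like this (the URL assumes the Flask development server on localhost:5000, and the input sentence is just an illustrative example):

import requests

resp = requests.get(
    "http://localhost:5000/api/correct-grammar",
    params={"input_text": "She go to school everyday and dont like homeworks."},
)
print(resp.json()["output"])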