Why is GPT API giving me a response with lots of spaces and new lines?

First off, I’m sorry that I can’t share the prompt messages I used as input.

For some prompts, the response content consists only of lots of spaces and newlines. It doesn’t happen every time, but it occurs occasionally and unpredictably.

Q1. Why does this happen?
Q2. How can I prevent it?

You probably need to share more details of your prompt so we can reproduce the issue. Also try adjusting the temperature and the system prompt.

When you send blank prompts, the LLM doesn’t have anything to “latch onto” to find what might come next, so it goes a bit nuts.

I would make sure your prompt always includes something. If it’s empty or just newlines, don’t send it at all; show a canned message to the user instead.
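A minimal sketch of that guard, assuming a plain string input from the user (the function name and the whitespace check are illustrative, not from any SDK):

```python
from typing import Optional


def safe_prompt(user_input: str) -> Optional[str]:
    """Return a cleaned prompt, or None if there is nothing worth sending."""
    cleaned = user_input.strip()
    if not cleaned:
        # Caller should show a canned message instead of hitting the API.
        return None
    return cleaned


# Whitespace-only input is rejected; real text passes through trimmed.
assert safe_prompt("  \n\n ") is None
assert safe_prompt(" Hello ") == "Hello"
```

The point is simply to never hand the model an empty or whitespace-only message, since that is when it has nothing to latch onto.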

I have the same question. Here is my prompt; I make sure it includes my question and the agent step logs when I use function calling.

I’m trying to build a Python script to feed information to ChatGPT using the following script:

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a young and intelligent software engineer assigned to the task to identify ambiguities in the system requirements JSON"},
        {"role": "user", "content": "Identify ambiguities in the following software requirement"},
        {"role": "user", "content": "Once borrower clicks on the Payment Request link sent to their email address we need to retrieve the Payment Request Information and return it to the borrower."},
    ],
    temperature=1,
    top_p=1,
)
print(response)

This is what I get:

ChatCompletion(id=‘chatcmpl-8u5VtF1zBTMl55pADtV6R0tCmm2eC’, choices=[Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘\n \n\n \n \n\n \n \n\n \n \n\n \n \n\n \n \n\n \n \n\n \n \n\n \n \n \n\n \n \n\n \n\n \n\n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n’, role=‘assistant’, function_call=None, tool_calls=None))], created=1708378253, model=‘gpt-3.5-turbo-0125’, object=‘chat.completion’, system_fingerprint=‘fp_69829325d0’, usage=CompletionUsage(completion_tokens=278, prompt_tokens=73, total_tokens=351))

Not sure how to address this; the same prompt in the web chat interface produces a reasonable answer.
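One known pitfall with JSON mode (`response_format={"type": "json_object"}`) is that when the messages don’t clearly instruct the model to emit JSON, it can produce whitespace until it hits the token limit, which matches the output above. A hedged sketch of a workaround: make the JSON instruction explicit and retry when the completion comes back blank. The helper names and the exact system prompt wording are my assumptions, not part of the OpenAI SDK:

```python
def is_blank(content):
    """True if the completion is missing or contains only whitespace."""
    return content is None or content.strip() == ""


def get_analysis(client, requirement, retries=1):
    # Explicitly mention JSON and describe the output shape, which JSON mode
    # expects to find somewhere in the messages (wording here is illustrative).
    messages = [
        {"role": "system",
         "content": 'You are a software engineer. Respond in JSON with a key '
                    '"ambiguities" listing ambiguities in the requirement.'},
        {"role": "user", "content": requirement},
    ]
    for _ in range(retries + 1):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-0125",
            response_format={"type": "json_object"},
            messages=messages,
        )
        content = response.choices[0].message.content
        if not is_blank(content):
            return content
    raise RuntimeError("model returned only whitespace")
```

The whitespace check alone is worth keeping even without the retry, so a blank completion fails loudly instead of silently propagating.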

Thanks, Pedro.
