I am currently using gpt-4o for my code and my prompt. I am trying to update my code to use gpt-5-mini, but I am having issues. For example, I noticed that when I switch to gpt-5-mini, the AI responses come back blank.
Can anyone provide insight or assist on how I should proceed?
Solution: Do not send any max_tokens, max_completion_tokens, or max_output_tokens parameter.
Reasoning: Billed generation also includes internal reasoning ("thinking") tokens that you never see but that the model must produce. A low token cap can be consumed entirely by that hidden reasoning, leaving no budget for visible text, which produces exactly the blank responses described.
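A minimal sketch of the point above, assuming the official `openai` Python SDK and its Responses endpoint (the helper function and model name here are illustrative, not verified against a live account):

```python
# Build request parameters for a reasoning model. Note what is ABSENT:
# no max_tokens / max_completion_tokens / max_output_tokens key, so
# hidden reasoning cannot exhaust a token budget before any visible
# text is produced.

def build_request(user_input: str) -> dict:
    """Return kwargs for a Responses API call with no token cap set."""
    params = {
        "model": "gpt-5-mini",  # target model from the question
        "input": user_input,
    }
    # Sanity check: none of the truncating parameters are present.
    for banned in ("max_tokens", "max_completion_tokens", "max_output_tokens"):
        assert banned not in params
    return params

# Usage (uncomment with a configured API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(**build_request("Say hello"))
```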
Solution: Do not use OpenAI's "prompt id" facility to store and replay API parameter settings.
Reasoning: A stored prompt reapplies whatever settings were saved with it, and those saved settings can be inappropriate for the new model, silently overriding what you think you are sending.
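To illustrate the contrast, a sketch assuming the Responses API's stored-prompt parameter shape (the prompt id is a placeholder; both helpers are hypothetical):

```python
# What to avoid: a stored prompt replays whatever parameters were saved
# with it, which you cannot see or control from this call site.
def stored_prompt_request(prompt_id: str) -> dict:
    return {"prompt": {"id": prompt_id}}

# Preferred: every setting is explicit in code, so nothing stale is
# silently reapplied when you migrate models.
def explicit_request(user_input: str) -> dict:
    return {"model": "gpt-5-mini", "input": user_input}
```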
Solution: Look for readable output content in ALL output items, or use the response.output_text helper.
Reasoning: The Responses API returns a list of output items of several types, including a reasoning summary item that appears even when no summary was requested and even when it is blank. Code that reads only the first item will therefore see nothing.
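A sketch of defensively reading text from such an output list. The response shape is modeled here as plain dicts for illustration; with the real SDK you would read `response.output_text` or walk `response.output`:

```python
# Collect text from every "message" item, skipping non-message items
# such as the "reasoning" item that is present even when blank.
def extract_text(output_items: list) -> str:
    chunks = []
    for item in output_items:
        if item.get("type") != "message":
            continue  # e.g. a reasoning summary item
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                chunks.append(part.get("text", ""))
    return "".join(chunks)

# A response whose FIRST item is reasoning: reading only item [0]
# would look blank, matching the symptom in the question.
sample_output = [
    {"type": "reasoning", "summary": []},
    {"type": "message", "content": [{"type": "output_text", "text": "Hello!"}]},
]
print(extract_text(sample_output))  # -> Hello!
```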
the "new response format" that could be referred to is over 2.5 years old and is how gpt-4o would already have been used previously, indicating an AI answer limited by its knowledge cutoff.
the wrong model name or the wrong parsing of output would produce an error rather than a blank response, indicating an AI answer lacking real knowledge.