I am getting this error message repeatedly in a production application that uses the GPT-4o API. This is a new issue: despite some minor recent tweaks to the prompt, and a small increase in our input tokens that is still well within our token limits, we have been getting these errors frequently today.
That is a refusal: a trained response to deny without further explanation, or possibly supervision applied outside the model layers themselves.
The AI is deciding that it doesn't want to comply with the input.
You have to align the AI with that use case in the system message, assuming it falls within OpenAI's policies. Or go further and encourage it to start its output with something other than the refusal text.
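A minimal sketch of that idea. The persona, wording, and the "start with 'Answer:'" nudge are illustrative assumptions, not an official recipe; the point is simply that the use case is declared in the `system` message before the user input arrives.

```python
# Hypothetical sketch: pre-align the model with the deployment's use case in
# the system message, and nudge it to open with something other than refusal
# text. "Example Corp" and the exact wording are assumptions for illustration.
def build_messages(user_input: str) -> list[dict]:
    system = (
        "You are a support assistant for Example Corp's account-recovery "
        "desk. Questions about resetting credentials are in-policy for this "
        "deployment; answer them directly, starting your reply with 'Answer:'."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]

# The returned list is what you would pass as `messages` to
# client.chat.completions.create(model="gpt-4o", messages=...).
```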
Even though I don't use `response_format` at all, I get:
- In the Choice: `finish_reason="stop"`
- In the ChatCompletionMessage: `content="I'm sorry, I can't assist with that."`, `refusal=None`
According to their docs:
https://platform.openai.com/docs/guides/structured-outputs/refusals
Should be:
`refusal="I'm sorry, I can't assist with that."`
when requesting JSON structured outputs.
Maybe it's for security reasons, I don't know. But it's happening as you say.
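Until the `refusal` field behaves as documented, a client can check both places defensively. A sketch under stated assumptions: the prefix match and the attribute handling are my own workaround, not an official API contract, and `SimpleNamespace` stands in for the real `ChatCompletionMessage`.

```python
from types import SimpleNamespace  # stand-in for ChatCompletionMessage in the demo

# Assumed prefix of the stock refusal text; adjust if your deployment sees others.
REFUSAL_PREFIX = "I'm sorry, I can't assist"

def extract_refusal(message):
    """Return refusal text whether it arrives in `refusal` or in `content`.

    Covers the behavior described above: a refusal coming back with
    finish_reason="stop", refusal=None, and the text sitting in `content`.
    """
    refusal = getattr(message, "refusal", None)
    if refusal:
        return refusal
    content = getattr(message, "content", None) or ""
    if content.startswith(REFUSAL_PREFIX):
        return content
    return None

# Demo with a stand-in message object:
msg = SimpleNamespace(refusal=None, content="I'm sorry, I can't assist with that.")
print(extract_refusal(msg))  # prints the refusal text despite refusal=None
```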
It's for "safety". You aren't the only one getting new refusals. This of course requires the AI to make judgements. Just experimenting around, it is making dumb errors, such as "seek mental health" responses to fringe inputs, the way gpt-4o-mini would. Different post-training, or more brain damage.
You have to give a system message that covers the input's use case; when the input doesn't fall outside that scope, success improves and the AI continues.
Are System Instructions the same as Custom Instructions?
There is only one place that particular phrase is used - in ChatGPT.
Custom instructions there are placed in a container essentially informing the AI "here's how a user wants you to act/respond". The AI is still "ChatGPT".
On the API, messages are placed within roles assigned to them. "system" is the most authoritative role, usually placed first to agree with chat training. The API programmer has more control: the entire initial text is the identity and guidance they provide to the AI. It can be "You are PinballBot, a customer agent for Jake's Pinball Machine Company", up to complete subjugation to a closed subdomain (if the model will remember). These are messages placed in chat completions, as discussed here.
Assistants on the API take a field `instructions`, which also becomes the system message, just with less promise of ultimate control, as other messages are placed on your behalf by the agent platform.
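To make the two placements concrete, here are sketch payloads for both. The field names follow the public Chat Completions and Assistants APIs; the model choice, the user question, and the PinballBot wording (taken from the example above) are illustrative.

```python
# Chat Completions: the programmer places the system role directly.
chat_request = {
    "model": "gpt-4o",  # model name assumed for illustration
    "messages": [
        {"role": "system",
         "content": "You are PinballBot, a customer agent for Jake's "
                    "Pinball Machine Company."},
        {"role": "user", "content": "Do you repair 1990s machines?"},
    ],
}

# Assistants: the same identity text goes in the `instructions` field;
# the agent platform then places other messages on your behalf.
assistant_request = {
    "model": "gpt-4o",
    "name": "PinballBot",
    "instructions": "You are PinballBot, a customer agent for Jake's "
                    "Pinball Machine Company.",
}
```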
So on the API, "custom instructions" has little meaning, as all placement of identity and guidance text is custom.