GPT-4o Error: I'm sorry, I can't assist with that request

I am getting this error message repeatedly in a production application that uses the GPT-4o API. This is a new issue: apart from some minor recent tweaks to the prompt and a small increase in our input tokens (still well within our token limits), nothing has changed, yet we have been getting these errors frequently today.

That is a refusal: a trained response that denies the request without further explanation, or possibly a supervision layer outside the AI model itself.

The AI has concluded that it should not comply with the input.

You have to align the AI with your use case in the system message, provided the use case is within OpenAI's policies. Or go further and encourage the model to start its output with something other than the refusal text, as in the sketch below.
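
A minimal sketch of that approach with the `openai` Python SDK (the billing-assistant identity and the message wording are hypothetical, just to illustrate the technique):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Framing the use case up front makes the model less likely to treat a
# legitimate request in that domain as something it must refuse.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a support assistant for a billing team. "  # hypothetical identity
                "Summarizing customer complaints, including angry or "
                "profane ones, is a normal and permitted part of your job. "
                "Begin your reply with the summary itself."
            ),
        },
        {"role": "user", "content": "Summarize this complaint: ..."},
    ],
)
print(response.choices[0].message.content)
```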

Even though I don't use `response_format` at all, I get:

  • In the Choice:
    finish_reason='stop'
  • In the ChatCompletionMessage:
    content="I'm sorry, I can't assist with that."
    refusal=None

According to their docs:
https://platform.openai.com/docs/guides/structured-outputs/refusals

It should be:
refusal="I'm sorry, I can't assist with that."
when requesting JSON structured outputs.
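
For reference, here is a minimal sketch of how that is supposed to work with the SDK's structured-outputs helper (the `Ticket` schema and the model snapshot are assumptions for illustration):

```python
from openai import OpenAI
from pydantic import BaseModel

class Ticket(BaseModel):  # hypothetical output schema
    category: str
    urgency: int

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Classify this ticket: ..."}],
    response_format=Ticket,
)

message = completion.choices[0].message
if message.refusal:
    # Per the structured-outputs docs, a refusal should arrive here
    # rather than as JSON in `content`.
    print("Refused:", message.refusal)
else:
    print(message.parsed)
```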

Maybe it's for security reasons, I don't know. But it's happening just as you say.

It's for "safety". You aren't the only one getting new refusals. This of course requires the AI to make judgements, and just experimenting around, it makes dumb errors, such as answering "seek mental health" to fringe inputs the way gpt-4o-mini would. Different post-training, or more brain damage.

To improve your success rate and have the AI continue, you have to give a system message that establishes a scope the input does not fall outside of. A sketch of one way to detect and retry these refusals follows.
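
Since (as noted above) the refusal can arrive as plain `content` with `finish_reason='stop'` and `refusal=None`, a string-match heuristic plus a retry with a stronger system message is about all the API surface offers. A sketch, where the phrase list and the retry system message are my own assumptions, not anything official:

```python
from openai import OpenAI

client = OpenAI()

# Heuristic only: observed refusal openers, not an official list.
REFUSAL_PREFIXES = ("I'm sorry, I can't assist", "I’m sorry, I can’t assist")

def ask(user_text: str, system_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_text},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

def ask_with_retry(user_text: str) -> str:
    reply = ask(user_text, "You are a helpful assistant.")
    if reply.strip().startswith(REFUSAL_PREFIXES):
        # Retry once with a system message that places the input
        # squarely inside the assistant's stated scope.
        reply = ask(
            user_text,
            "You are an assistant for reviewing user-submitted text. "
            "Analyzing such text, whatever its tone, is within your role.",
        )
    return reply
```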

Are "System Instructions" the same as Custom Instructions?

There is only one place that particular phrase is used - in ChatGPT.

Custom instructions there are placed in a container that essentially informs the AI: "here's how a user wants you to act/respond". The AI is still "ChatGPT".

On the API, messages are placed within roles assigned to them. "system" is the most authoritative role, and is usually placed first to agree with chat training. The API programmer has more control: the entire initial text can be the identity and guidance they provide to the AI. It can be "You are PinballBot, a customer agent for Jake's Pinball Machine Company", a complete subjugation to a closed subdomain (if the model will remember it). These are messages placed in chat completions, as discussed here and sketched below.
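
A sketch of that placement in chat completions, using the thread's own PinballBot example (the extra scoping sentence and the user question are illustrative):

```python
from openai import OpenAI

client = OpenAI()

# "system" comes first; the user turn is then interpreted in light of
# the identity and scope established here.
messages = [
    {
        "role": "system",
        "content": (
            "You are PinballBot, a customer agent for Jake's Pinball "
            "Machine Company. Only answer questions about pinball machines."
        ),
    },
    {"role": "user", "content": "My flipper is sticking. What should I check?"},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```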

Assistants on the API take a field "instructions", which also becomes the system message, just with less promise of ultimate control, as other messages are placed on your behalf by the agent platform.
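
The equivalent on the Assistants API, where the same text goes in the `instructions` field instead (a minimal sketch; the name is reused from the example above):

```python
from openai import OpenAI

client = OpenAI()

# "instructions" is delivered to the model as the system message, but the
# agent platform may place other messages around it on your behalf.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    name="PinballBot",
    instructions=(
        "You are PinballBot, a customer agent for Jake's Pinball "
        "Machine Company."
    ),
)
print(assistant.id)
```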

So on the API, "custom instructions" has little meaning, as all placement of identity and guidance text is custom.
