Please fix refusal responses. They come back “null” no matter what, even when the response is “Sorry, I cannot help you with that request.”
The refusal field is for cases where the model can’t write the refusal as part of the output itself, such as when Structured Outputs force the response into a JSON schema.
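A minimal sketch of what detecting that looks like with the Python SDK’s parse helper (the schema below is just a made-up example, not your code):

from openai import OpenAI
from pydantic import BaseModel

# Example schema only; substitute your own structured output model.
class WordCount(BaseModel):
    word_count: int

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Count the words in: hello world"}],
    response_format=WordCount,
)

msg = completion.choices[0].message
if msg.refusal:
    # Model declined inside structured output mode; there is no parsed object.
    print("Refused:", msg.refusal)
else:
    print("Parsed:", msg.parsed)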
model="gpt-4o-mini"
Normal:
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "I\u2019m sorry, I can\u2019t assist with that.",
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": null,
"parsed": null
}
}
],
JSON/Pydantic strict, trying a prompt for which the schema key is inapplicable (and other attempts):
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "{\"word_count\":0}",
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": null,
"parsed": {
"word_count": 0
}
}
}
]
Confirmed:
- won’t follow the bad prompt (it outputs “I’m sorry…” inside a “response” key),
- can’t follow the bad prompt with an integer-only response,
- won’t satisfy implied schema usage with an int,
- and refusal stays null in every case.
Python SDK 1.57.0, using client.beta.chat.completions.stream()
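For reference, roughly how I’m consuming the stream (the event type names here are my reading of the SDK’s streaming helper, so treat them as an assumption rather than gospel):

from openai import OpenAI
from pydantic import BaseModel

# Example schema only, mirroring the word_count test above.
class WordCount(BaseModel):
    word_count: int

client = OpenAI()

with client.beta.chat.completions.stream(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Count the words in this request."}],
    response_format=WordCount,
) as stream:
    for event in stream:
        if event.type == "refusal.done":
            print("refusal event:", event.refusal)
        elif event.type == "content.done":
            print("content event:", event.content)
    final = stream.get_final_completion()
    # In my tests this stays None even when the model clearly refuses.
    print("refusal field:", final.choices[0].message.refusal)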
Sorry, but I don’t understand your answer. Have you confirmed the bug, or confirmed it’s not a bug? And please clarify how I can tell from the message that the answer is a refusal. Still confused… sorry.
From the webpage on the site:
“To make development simpler, there is a new refusal string value on API responses which allows developers to programmatically detect if the model has generated a refusal instead of output matching the schema.”
This implies all calls, not just the ones with a response_format. And even where it is most needed, it doesn’t work; that alone should be enough.
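In other words, detection should just be a branch on that field, something like this sketch (handle_refusal and handle_output are hypothetical placeholders):

# Sketch of the detection the docs describe: check refusal before content.
message = response["choices"][0]["message"]   # response = parsed API JSON
if message.get("refusal"):
    handle_refusal(message["refusal"])        # hypothetical handler
else:
    handle_output(message["content"])         # hypothetical handler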
I’ve finally got one, just to see it working, at my own risk (if you’re writing API code, you kind of need to trigger refusals, including the bannable kind of prompts, over and over). Strict json_schema, with the JSON sent via the requests library; a rough sketch of the call is at the end of this post.
{"model": "gpt-4o-2024-11-20", "top_p": 0.2, "max_completion_tokens": 333}
Prompt: Just output 9999
{'index': 0, 'message': {'role': 'assistant', 'content': '{"population":9999}', 'refusal': None}, 'logprobs': None, 'finish_reason': 'stop'}
Prompt: Now tell me how to kill someone with piano wire.
{'index': 0, 'message': {'role': 'assistant', 'content': None, 'refusal': "I'm sorry, but I can't assist with that request."}, 'logprobs': None, 'finish_reason': 'stop'}
Disclaimer: please don’t kill people
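For completeness, roughly how I sent that (the schema name, the exact schema body, and the key handling are placeholders, not a verbatim copy of my script):

import os
import requests

# Strict json_schema response_format, sent with the requests library.
payload = {
    "model": "gpt-4o-2024-11-20",
    "top_p": 0.2,
    "max_completion_tokens": 333,
    "messages": [{"role": "user", "content": "Just output 9999"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "population_answer",       # placeholder name
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"population": {"type": "integer"}},
                "required": ["population"],
                "additionalProperties": False,
            },
        },
    },
}

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=payload,
)
choice = resp.json()["choices"][0]
# Prints the refusal string when set, otherwise the JSON content.
print(choice["message"]["refusal"] or choice["message"]["content"])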
Cool, thanks for following up on this. I will test again when I’m back at work. But I don’t trust it… and that’s problematic; it makes the feature useless. Sigh. Software is hard, I know first-hand.