Using Python, openai==1.58.1 and the Chat Completions API, with gpt 5.1.
I got a string instead of a response object, which caused an exception when trying to parse it.
Hi, just a quick note: this can happen if the code is mixing two different APIs without realizing it.
In the current Python SDK (1.58.1):
• responses.create() returns plain text
(or a stream of text chunks, not a ChatCompletion object)
• chat.completions.create() returns the structured object
with .choices[0].message["content"]
So if the model was called through the Responses API, but the code tries to parse it as if it were a ChatCompletion response, you'll end up with:
"got a string instead of a response object"
If you want a structured response, use:
client.chat.completions.create(...)
If you stay with responses.create(), you should handle it as text instead of expecting a message/role object.
In short:
Same SDK, two endpoints, two different return types, and they aren't interchangeable.
You should use smarter AI to source your responses. Everything you’ve written is incorrect. Or better: know the API yourself before answering about the API.
Could you clarify please:
- have you specified and instructed your own structured output format as JSON, and is that what you are expecting? or,
- are you referring to the shape of the SDK API response object itself (which is a Pydantic-based output with "attributes" when using the Python OpenAI library)?
- is this a persistent issue, a problem with a new project that you have not been able to resolve, or are you reporting a change in the behavior of an existing application?
- the specific source of the error, a Traceback, etc.
If structured output is what you refer to, and the AI is not producing your JSON specification reliably merely by "asking", you can upgrade to "strict": true structured outputs by using response_format={"type": "json_schema", ...} and then including your schema. The AI is then forced to conform to that JSON shape by logit enforcement (except for rare cases where it can go into a loop of characters).
A quick example value of a Chat Completions "response_format":
{
  "type": "json_schema",
  "json_schema": {
    "name": "basic_response",
    "strict": true,
    "schema": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "answer": {
          "type": "string"
        }
      },
      "required": [
        "answer"
      ]
    },
    "description": "single key for sending answer to user"
  }
}
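A minimal sketch of wiring that value into an actual call (the model name and prompt here are placeholders, not taken from the OP):

import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # placeholder; use whatever model you target
    messages=[{"role": "user", "content": "Say hello."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "basic_response",
            "strict": True,
            "schema": {
                "type": "object",
                "additionalProperties": False,
                "properties": {"answer": {"type": "string"}},
                "required": ["answer"],
            },
            "description": "single key for sending answer to user",
        },
    },
)

# The message content comes back as a JSON string conforming to the schema
data = json.loads(response.choices[0].message.content)
print(data["answer"])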
The underlying response of the API call is indeed a JSON string transmitted over HTTP. To access the data as a dict when using the OpenAI library (response["choices"][0] instead of attributes like response.choices), you can call response.model_dump(), where model_dump() is an inherited Pydantic method that recursively converts the model into a dict.
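For example (a minimal sketch; names and the model string are illustrative):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Any ordinary Chat Completions call
response = client.chat.completions.create(
    model="gpt-4.1",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
)

# Pydantic model -> nested dict, so subscripting works
as_dict = response.model_dump()
print(as_dict["choices"][0]["message"]["content"])

# Or straight to a JSON string (e.g. for logging)
as_json = response.model_dump_json()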
Oh wow!
Let me rephrase what I was actually answering, because I think we were addressing two different layers.
I wasn’t referring to the raw HTTP JSON or the Pydantic model internals, only to the practical SDK behavior that explains why OP saw a plain string instead of a structured object.
If you mix the Responses API with Chat Completions parsing, the shapes naturally won’t match, and that mismatch alone can trigger the exact error they described.
Your detailed breakdown is valid; it just tackles a different part of the stack than the one OP was stumbling over.
Suggesting "You made a Responses API call (despite saying only Chat Completions in the OP), but are parsing it like it was a Chat Completions API call" describes an extremely unlikely scenario.
The Responses API does not "return plain text". On the SDK it returns a Pydantic object, just like Chat Completions:
{
  "id": "resp_23456",
  "object": "response",
  "created_at": 1741476542,
  "status": "completed",
  "model": "gpt-4.1-2025-04-14",
  "output": [
    {
      "type": "message",
      "id": "msg_123545",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "I am an AI!",
          "annotations": []
        }
      ]
    }
  ],
  ..
Such a code foible also would not produce that symptom. Any error raised by the misinformed developer code you suggest would surface as a fault that there is no "choices" array, but instead an "output" array in a Responses object. You would not get the symptom "returns a string".
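To illustrate (a minimal sketch, assuming an SDK version that includes the Responses API; the exact error text may differ by version):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.responses.create(model="gpt-4.1", input="Hello")

try:
    text = resp.choices[0].message.content  # ChatCompletion shape, wrong here
except AttributeError as err:
    print(err)  # e.g. "'Response' object has no attribute 'choices'"

# The shape a Responses object actually has, per the dump above
text = resp.output[0].content[0].text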
This is an example of wrongness, what a confused AI would write: "chat.completions.create() returns the structured object with .choices[0].message["content"]" - a mishmash of broken parsing, jumbling attribute access and dictionary subscripting together. Even bringing up "responses" is likely the result of using one of OpenAI's models that has been over-fitted on knowing Responses in post-training.
Instead of saying "it's just a different answer" (an AI-style justification), accepting that there is no useful information in what you provided would be a gateway to understanding how the API actually works, and then to inferring the actual cause of this under-documented symptom. A better problem report is needed first.
Hey, thank you for the quick reply.
I'm referring to the basic Chat Completions object (no structured output, just a basic call, optionally with function calls). I'm expecting a ChatCompletion instance, and I'm calling Pydantic's BaseModel model_dump_json for logging purposes right after the call. I had no problem for over a year, but in the last two weeks or so it happened a couple of times. Unfortunately, I don't have a Traceback. The only thing I can think of is using gpt 5.1; other than that, I didn't change anything.
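Roughly, the setup looks like this (a minimal sketch with illustrative names; the guard at the end is one way to capture evidence the next time it happens):

import logging

from openai import OpenAI
from openai.types.chat import ChatCompletion

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-5.1",  # model name as described above
    messages=[{"role": "user", "content": "Hello"}],
)

# Log the raw response right after the call, as described
if isinstance(response, ChatCompletion):
    logging.debug(response.model_dump_json())
else:
    # Guard for the reported anomaly: record the unexpected type and value
    logging.error("Expected ChatCompletion, got %s: %r", type(response), response)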