Unable to use JSON structured output

I copied the official curl example and ran it, but I get a "Missing required parameter: 'response_format.json_schema'." error, and I don't know what's going on.
My request body is:

{
    "model": "gpt-4o-2024-08-06",
    "messages": [
      {
        "role": "system",
        "content": "Determine if the user input violates specific guidelines and explain if they do."
      },
      {
        "role": "user",
        "content": "How do I prepare for a job interview?"
      }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "content_compliance",
        "description": "Determines if content is violating specific moderation rules",
        "schema": {
          "type": "object",
          "properties": {
            "is_violating": {
              "type": "boolean",
              "description": "Indicates if the content is violating guidelines"
            },
            "category": {
              "type": ["string", "null"],
              "description": "Type of violation, if the content is violating guidelines. Null otherwise.",
              "enum": ["violence", "sexual", "self_harm"]
            },
            "explanation_if_violating": {
              "type": ["string", "null"],
              "description": "Explanation of why the content is violating"
            }
          },
          "required": ["is_violating", "category", "explanation_if_violating"],
          "additionalProperties": false
        },
        "strict": true
      }
    }
  }

but I receive this error response:

{
  "error": {
    "message": "Missing required parameter: 'response_format.json_schema'. (request id: 2024112100110225126189709725161)",
    "type": "invalid_request_error",
    "param": "response_format.json_schema",
    "code": "missing_required_parameter"
  }
}

Can anyone tell me what the problem is?

Sure. You haven't told us what you are using to send the request. It's probably something that isn't parsing or serializing the nested json_schema object correctly.

Sending the entire request body as a string to the API works fine:

{"is_violating":false,"category":"self_harm","explanation_if_violating":null}

{'prompt_tokens': 148, 'completion_tokens': 25, 'total_tokens': 173, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}

…except that your enum and schema give the AI no alternative but to write a category: the type allows null, but the enum only lists the three violation strings, so the model picked "self_harm" even for a harmless question.
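If you want the model to be able to return null for harmless input, one option is to include null in the enum itself. A minimal sketch of the adjusted "category" property, under the assumption (not verified in this thread) that strict structured outputs accepts null as an enum member when the type union includes "null":

# Sketch of a possible schema tweak (assumption: strict structured outputs
# accepts null as an enum member when the type union includes "null").
# With the original schema the enum only lists the three violation strings,
# so the model is forced to choose one even for benign input.
category_property = {
    "type": ["string", "null"],
    "description": "Type of violation, if the content is violating guidelines. Null otherwise.",
    "enum": ["violence", "sexual", "self_harm", None],  # None serializes to JSON null
}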

import os, httpx

# Send the request body verbatim as a raw JSON string, so nothing can
# re-serialize or drop the nested json_schema object before it reaches the API.
body = """{
    "model": "gpt-4o-2024-08-06",
    "messages": [
      {
        "role": "system",
        "content": "Determine if the user input violates specific guidelines and explain if they do."
      },
      {
        "role": "user",
        "content": "How do I prepare for a job interview?"
      }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "content_compliance",
        "description": "Determines if content is violating specific moderation rules",
        "schema": {
          "type": "object",
          "properties": {
            "is_violating": {
              "type": "boolean",
              "description": "Indicates if the content is violating guidelines"
            },
            "category": {
              "type": ["string", "null"],
              "description": "Type of violation, if the content is violating guidelines. Null otherwise.",
              "enum": ["violence", "sexual", "self_harm"]
            },
            "explanation_if_violating": {
              "type": ["string", "null"],
              "description": "Explanation of why the content is violating"
            }
          },
          "required": ["is_violating", "category", "explanation_if_violating"],
          "additionalProperties": false
        },
        "strict": true
      }
    }
  }"""

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}",
}

try:
    response = httpx.post(
        "https://api.openai.com/v1/chat/completions",
        headers=headers, content=body
    )
except Exception as e:
    print(f"Error: {e}")
    raise
if response.status_code != 200:
    print(f"HTTP error {response.status_code}: {response.text}")
    raise RuntimeError(f"request failed with status {response.status_code}")
else:
    reply = response.json()["choices"][0]["message"]["content"]
    print(reply)
    print(response.json()["usage"])

Does it work on your computer?
I'm using Postman to send the request directly, and unlike you I'm not using the official API address but going through a proxy. If it works normally on your machine, the problem may be the API address. I copied the official example directly:

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-2024-08-06",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful math tutor. Guide the user through the solution step by step."
      },
      {
        "role": "user",
        "content": "how can I solve 8x + 7 = -23"
      }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "math_reasoning",
        "schema": {
          "type": "object",
          "properties": {
            "steps": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "explanation": { "type": "string" },
                  "output": { "type": "string" }
                },
                "required": ["explanation", "output"],
                "additionalProperties": false
              }
            },
            "final_answer": { "type": "string" }
          },
          "required": ["steps", "final_answer"],
          "additionalProperties": false
        },
        "strict": true
      }
    }
  }'
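One way to isolate whether the proxy is the problem is to send the same payload to both the official endpoint and the proxy and compare the responses. A minimal sketch, assuming httpx and a placeholder PROXY_BASE_URL environment variable (the proxy URL and its key handling are assumptions, not details from this thread):

import os, json, httpx

# Diagnostic sketch: POST an identical structured-output payload to the
# official endpoint and to a proxy, then compare status codes and bodies.
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "ping"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "ping",
            "schema": {
                "type": "object",
                "properties": {"ok": {"type": "boolean"}},
                "required": ["ok"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}",  # your proxy may expect a different key
}
for base_url in ["https://api.openai.com/v1", os.environ.get("PROXY_BASE_URL", "")]:
    if not base_url:
        continue  # skip the proxy leg if no PROXY_BASE_URL is set
    r = httpx.post(f"{base_url}/chat/completions", headers=headers, content=json.dumps(payload))
    print(base_url, r.status_code, r.text[:200])

If the official endpoint accepts the body but the proxy returns "Missing required parameter: 'response_format.json_schema'", the relay is dropping or rewriting the nested object.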

I have solved this problem, thank you very much. It was because I am using the Azure OpenAI model!

Azure finally has a new API version that is not “preview”:

POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21


response_format:
  description: |
    An object specifying the format that the model must output. Compatible with [GPT-4o](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models), [GPT-4o mini](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models), [GPT-4 Turbo](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models) and all [GPT-3.5](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35) Turbo models newer than `gpt-3.5-turbo-1106`.

    Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema.

    Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.

    **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
  oneOf:
    - $ref: "#/components/schemas/ResponseFormatText"
    - $ref: "#/components/schemas/ResponseFormatJsonObject"
    - $ref: "#/components/schemas/ResponseFormatJsonSchema"
  x-oaiExpandable: true
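For completeness, a minimal sketch of the same structured-output request against an Azure OpenAI deployment with the GA 2024-10-21 API version, using the official openai Python package (>= 1.x). The endpoint, key, and deployment name are placeholders for your own resource:

import os
from openai import AzureOpenAI

# Placeholder Azure resource details; set these for your own deployment.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://my-resource.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

completion = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # the *deployment name* in Azure, not the raw model id
    messages=[
        {"role": "system", "content": "Determine if the user input violates specific guidelines and explain if they do."},
        {"role": "user", "content": "How do I prepare for a job interview?"},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "content_compliance",
            "schema": {
                "type": "object",
                "properties": {
                    "is_violating": {"type": "boolean"},
                    "category": {"type": ["string", "null"], "enum": ["violence", "sexual", "self_harm"]},
                    "explanation_if_violating": {"type": ["string", "null"]},
                },
                "required": ["is_violating", "category", "explanation_if_violating"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)
print(completion.choices[0].message.content)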