Structured Output Function Removing ":"

I am using function calling to structure my output. It had been working until today, when I noticed a new bug: the response now removes all ":" from the keys of the JSON. For example:

Expected: "Reference Code Date: Instrument Number": "#",

Actual: "Reference Code Date Instrument Number": "#",

There has been no change to the underlying function, which correctly has all of the colons. I have tried adding additional quotation marks, but nothing seems to work, and it is unclear why this just started happening.
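Not part of the original post, but one quick way to catch this kind of regression in code is to compare the keys the model actually returned in its tool-call arguments against the keys the schema declares. A minimal sketch (the key name and helper are hypothetical):

```python
import json

# Hypothetical: the exact keys declared in the function's strict schema.
EXPECTED_KEYS = {"Reference Code Date: Instrument Number"}

def find_mangled_keys(arguments_json: str) -> list[str]:
    """Return any keys in the tool-call arguments that don't match the schema."""
    parsed = json.loads(arguments_json)
    return [k for k in parsed if k not in EXPECTED_KEYS]

# A response exhibiting the reported bug: the colon was dropped from the key.
buggy = '{"Reference Code Date Instrument Number": "#"}'
print(find_mangled_keys(buggy))  # ['Reference Code Date Instrument Number']
```

Logging the mangled keys alongside the raw arguments string makes it easy to tell whether the model or a downstream parser is rewriting them.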

TL;DR: it is working fine for me, even against attempts to break it.

AI model and version? API endpoint? Is the function specification a “strict” schema?

Then why function calling? A function call will alter the tone and quality of the response, you will have no way to engage in multi-turn responses or to teach by example, and you can't actually use functions.

Colons were produced just fine when using response_format:


{"response part 1: hidden thinking and reasoning": "The explanation provided
by OpenAI clearly delineates the appropriate use of function calling and the
response_format parameter for structured outputs based on the purpose and
context of the application.\n\n1. Function Calling:\n - It should be used
when integrating the model with external tools or functions available within
the system. For instance, if your application needs AI to interact with a
database or UI components, function calling is ideal.\n - It deals with the
interaction between the AI model and system-level functions directly,
allowing real-time data retrieval or actions as specified by the application
through APIs.\n\n2. response_format JSON Schema:\n - This is more suitable
for specifying the structure of the AI's output to be consumed by the
application's front-end or other components.\n - It's used to ensure the
model's response is structured in a particular way for consistent and
predictable output directly to the user, like consolidating UI/UX designs or
organizing responses for further application processes.\n\nRegarding your bug
about JSON keys losing colons, this sounds like a parsing or rendering issue
either with the API response data or the mechanism handling JSON in your
application. This isn't directly linked to whether you use function calling
or response_format, but ensuring your handling and parsing of JSON data
correctly anticipates formats and styles from a chosen method (function calls
or response formats) could resolve such an issue.\n\nBased on your needs:\n-
If your focus is interaction with external data or systems, continue with
function calling.\n- If your priority is to format AI outputs for readability
and application-specific structures, then consider utilizing response_format
more rigorously.", "response part 2: displayed response to user":
{"response": {"recommendation": "Yes, based on what you shared from OpenAI,
you should choose between function calling and response_format based on your
specific needs:\n\n- Function Calling: Continue using it if your goal is to
connect the AI with external tools, databases, or functions. It's meant for
integrating model outputs directly with your system's functionalities.\n-
Structured response_format: Opt for this if your main goal is to format the
output in a specific way to display to users, ensuring that the AI's
response adheres to a consistent schema that fits your application's UI
requirements.\n\nYour issue with JSON keys might be independent of this
decision and could be related to how JSON responses are being parsed in your
application. It would be beneficial to review how JSON data is handled or
consider debugging that part of your implementation."}}}



I switched over to tools, making a strict structured output. The AI cannot write anything except what I specify in this mandatory tool, including the colons.

So it looks like it is not a bug, just a model that wants to write things differently today in a non-strict structured function used the wrong way. It doesn't care that it will break the code that's supposed to handle your function return?

Model: 4o-mini

Can’t use response_format as I am using File Search.

Yes, the two functions I am using both have the strict schema set to true; there has been no change to any of the functions since the bug started occurring.

It occurs both in the playground and in my Streamlit dashboard.

Assistants? gpt-4o-mini?

Colons:

You can see now, of course, that the thread is useless, having emitted a tool_call with nothing to respond with.

The file_search was also useless, not being invoked before responding, and the AI was only able to complete the JSON.

Here is the exact function added (which had just previously been blocking "weather" when it was made the only tool choice by API parameter, to make the output at all reliable).

{
  "name": "structured_response_schema",
  "description": "Mandatory output format to respond to user",
  "strict": true,
  "parameters": {
    "type": "object",
    "properties": {
      "response part 1: hidden thinking and reasoning": {
        "type": "string",
        "description": "A non-displayed output for you to write step-by-step thoughts about how you will approach producing the desired response - and what response will be most fulfilling."
      },
      "response part 2: displayed response to user": {
        "type": "string",
        "description": "Part 2 is for responding to a user, only after you've outlined everything that will be included in the response and have a good idea of a method to produce a factual answer."
      }
    },
    "required": [
      "response part 1: hidden thinking and reasoning",
      "response part 2: displayed response to user"
    ],
    "additionalProperties": false
  }
}

Observe the placement of strict at the top nesting level of the function, alongside name and description, not inside parameters.

Then that function definition has to be placed within this wrapper as part of the tools list:

{
  "type": "function",
  "function": {schema}
}
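Put together, a minimal Python sketch of that wrapping (the schema dict is abbreviated to its skeleton, and the actual API request is omitted):

```python
import json

# The strict function definition from above, abbreviated.
schema = {
    "name": "structured_response_schema",
    "description": "Mandatory output format to respond to user",
    "strict": True,  # strict sits at this level, next to "name"
    "parameters": {
        "type": "object",
        "properties": {
            "response part 1: hidden thinking and reasoning": {"type": "string"},
            "response part 2: displayed response to user": {"type": "string"},
        },
        "required": [
            "response part 1: hidden thinking and reasoning",
            "response part 2: displayed response to user",
        ],
        "additionalProperties": False,
    },
}

# Each function is wrapped in a {"type": "function", "function": ...} object
# inside the tools list that is sent with the API request.
tools = [{"type": "function", "function": schema}]

print(json.dumps(tools, indent=2))
```

With strict set to true, additionalProperties set to false, and every property listed in required, the model is constrained to emit exactly those keys, colons included.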

If the function were strict, the AI would not have any ability to write the function keys differently. If you can talk to the AI and tell it that a key must be removed, added, or renamed, it is not strict.

In Assistants, I can send an error back about the tool output keys having colons, and the AI gets stuck in a loop forever, which will probably bill me a bunch of tokens as the API behind Assistants keeps refusing the bad keys that are then written:

You can also change your tool description or property description to indicate that the colon in the key name is mandatory.

You ultimately aren’t in control of a lot of things when you are a consumer of Assistants, though.

Yes, it's Assistants. The basic structure of it is this:

  1. Upload file or text
  2. Extract info in JSON fields → Save tool output and convert to Excel
  3. Submit JSON Fields
  4. Get Summary of fields
  5. Return Excel and Summary to user
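Not from the original thread, but steps 2 and 3 above could be sketched roughly like this. The field names are made up for illustration, and csv stands in for the Excel conversion (openpyxl or pandas would produce an actual .xlsx):

```python
import csv
import io
import json

# Step 2 (sketch): the tool output is the model's JSON arguments string.
tool_output = (
    '{"Reference Code Date: Instrument Number": "2024-001",'
    ' "Grantor": "Smith"}'
)
fields = json.loads(tool_output)

# Convert the extracted fields to one tabular row with the keys as headers.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(fields))
writer.writeheader()
writer.writerow(fields)

print(buffer.getvalue())
```

This is also where a dropped colon surfaces: the column header in the exported file no longer matches the schema key, which breaks any downstream code that looks fields up by name.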

The schema has always been set to strict; that was never changed, but the model just randomly stopped returning colons.