How can I make sure I always get a consistent output from OpenAI?

I want to generate training plans with the help of OpenAI. I use a very detailed prompt with the model gpt-3.5-turbo, and I want the response returned as JSON. The problem is that OpenAI does not respect this every time, and inconsistencies sometimes occur in its responses. Is there any tool on the Dashboard I could use to make sure I get the desired output format for my request?

gpt-3.5-turbo can only be trained in-context. It has no additional features to enforce a structure on the output, besides logit_bias, which can make certain tokens more likely. A positive logit_bias for an initial {" (5018) or {\n (517) token can encourage a response that begins as JSON, but it can also damage the generation later on.
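For illustration, a minimal sketch of that nudge with the Python SDK, using the two token IDs above (the system and user messages are placeholders, and the small bias value is an assumption on my part to limit the side effects mentioned):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Respond only with a JSON object."},
        {"role": "user", "content": "Create a one-week beginner training plan."},
    ],
    # Nudge the first generated token toward '{"' (5018) or '{\n' (517).
    # Keep the bias modest: it applies to these tokens everywhere in the
    # response, not just at the start, so a large value can hurt the output.
    logit_bias={5018: 2, 517: 2},
    temperature=0,
)
print(response.choices[0].message.content)
```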

Thus, it is by instruction and example that you can improve the quality. Multi-shot prompting is an option, where you provide example user and assistant messages as a past conversation, with the assistant responding in the desired JSON format.
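A sketch of that multi-shot approach (the example plan content and JSON keys here are placeholders, not a required schema):

```python
from openai import OpenAI

client = OpenAI()

# One example user/assistant exchange demonstrating the desired JSON shape,
# followed by the real request. More examples generally strengthen the pattern.
messages = [
    {"role": "system", "content": "You output training plans as JSON only."},
    {"role": "user", "content": "Create a 3-day strength plan."},
    {"role": "assistant", "content": '{"days": [{"day": 1, "focus": "squat", '
                                     '"exercises": ["back squat", "lunge"]}]}'},
    {"role": "user", "content": "Create a 5-day running plan for a beginner."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0,
)
print(response.choices[0].message.content)
```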

The latest GPT-4o models have a feature for structured outputs, where you can enforce a provided schema as the only format the AI can use to respond.
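A sketch of Structured Outputs via Chat Completions, assuming a hypothetical training-plan schema (the model snapshot and field names are illustrative; with "strict": True, the response is constrained to the schema you supply):

```python
from openai import OpenAI

client = OpenAI()

# Illustrative schema; replace the fields with whatever your plans need.
plan_schema = {
    "type": "object",
    "properties": {
        "days": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "day": {"type": "integer"},
                    "focus": {"type": "string"},
                    "exercises": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["day", "focus", "exercises"],
                "additionalProperties": False,
            },
        }
    },
    "required": ["days"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "user", "content": "Create a 5-day beginner training plan."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "training_plan", "strict": True, "schema": plan_schema},
    },
)
print(response.choices[0].message.content)
```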

Each of these is an API parameter that is best suited to, or only available via, Chat Completions.


So you suggest I should switch to gpt-4o and use structured outputs?