Valid JSON every time?

Hi - I'm wondering if it's possible to tell GPT to output valid JSON every time. I've tried a number of prompts but still occasionally get errors where it inserts an invalid element into the JSON. My prompt is like:

Give this item a name and then elaborate on the description. Put the results into a valid JSON object with keys: "item_name", "item_description"

1 Like

I just ended up writing a snippet to extract the JSON (where "output" is a parameter that I send in):

  if (output == 'json') {
    // Grab everything between the first '{' and the last '}', in case the
    // model wrapped the JSON in extra prose.
    let start = text.indexOf('{');
    let end = text.lastIndexOf('}') + 1;
    if (start === -1 || end === 0) {
      throw new Error('No JSON object found in model output');
    }
    let json = text.substring(start, end);
    myresult = JSON.parse(json);
  }
2 Likes

Nice. Thanks for coming back to share.

I'm doing something similar, except I'm adding to the prompt: 'respond in JSON format, surrounded by "```"', then I'm extracting the text between the triple-ticks and running JSON.parse on it. I wrapped the parse in a try/catch and I'm retrying the completion if parsing fails. I still occasionally get invalid JSON, but given the specific nature of the task, I figured this might be a good candidate for fine-tuning once I build up enough data.
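The approach above might look something like this. It's a minimal sketch: `callCompletion` is a placeholder for whatever API call you use, and the retry count is arbitrary.

```javascript
// Pull the JSON out of a triple-tick fenced block; return null on failure.
function extractFencedJson(text) {
  const match = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (!match) return null;
  try {
    return JSON.parse(match[1]);
  } catch (e) {
    return null;
  }
}

// Retry the completion until parsing succeeds, up to a small cap.
async function completeWithRetry(callCompletion, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const text = await callCompletion();
    const parsed = extractFencedJson(text);
    if (parsed !== null) return parsed;
  }
  throw new Error('No valid JSON after ' + maxRetries + ' attempts');
}
```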

1 Like

What you can also try, if feasible: use the edit endpoint. Give it the JSON with your structure as input, and prompt the edit with something like "fill in this JSON". I use this as a universal code parser, for example, where I extract function signatures and parameters.
In my tests it seems really reliable.

Good luck :crossed_fingers:

1 Like

As an alternative, I found that asking for comma-separated CSV gives me the most consistent results, though you definitely need some basic post-processing to remove occasional unwanted characters. Also, CSV formatting is minimal compared to JSON, so it could save a few tokens on every request, which adds up over time.
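For what it's worth, the post-processing can stay quite simple. Here's a rough sketch, assuming one record per line and no commas inside the field values themselves:

```javascript
// Split a CSV-style model response into rows of trimmed fields,
// stripping surrounding quotes the model sometimes adds.
function parseCsvResponse(text) {
  return text
    .trim()
    .split('\n')
    .map(line =>
      line.split(',').map(field => field.trim().replace(/^"|"$/g, ''))
    );
}
```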

You might try changing your prompt to something like this:

Create key, value pairs from the following data and format the key, value pairs using JSON notation with keys: "item_name", "item_description"

If you provide the exact data you need formatted, I can work on engineering a prompt for you.

:slight_smile:

Hi David,

Would you be so kind as to show what your exact prompt looks like? I'm also battling to get JSON output and I don't seem to be able to succeed at this.

Cheers!

1 Like

Having the same issue too.
The OpenAI chatbot suggested I use this in my prompt:
"Prompt(encoding=True, validation=True, sanitation=True)"

The results are better, but I still get invalid JSON from time to time.

1 Like

Will setting the temperature to 0 give stable responses? From time to time the model is inconsistent with generating output in the desired format, in this case JSON. I only get valid JSON in about 1 out of 5 tries. What can improve the reliability and consistency of the output?

This is the basic technique I’m using as well…

:slight_smile:

So I always show the model an example of the JSON I want it to generate and I have yet to see it generate bogus JSON when doing that. I have gotten bogus JSON in the past but it was before I started including an example of the JSON I wanted back in the prompt…

I should also mention that I’m running my prompts with a temperature as high as 0.9 and I can’t recall a single bad JSON output over the last several hundred model calls.
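The example-in-the-prompt technique described above could be assembled like this. The keys come from the original question; the example values and the function name are purely illustrative:

```javascript
// Build a prompt that shows the model exactly what JSON shape is expected.
function buildPrompt(itemText) {
  const example = JSON.stringify(
    {
      item_name: 'Rusty Lantern',
      item_description: 'A dented old lantern that still glows faintly.'
    },
    null,
    2
  );
  return `Give this item a name and elaborate on the description.
Respond with a single valid JSON object shaped exactly like this example:

${example}

Item: ${itemText}`;
}
```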

3 Likes

I need to try this out, since my outputs have been a bit inconsistent.
Thanks

Another addition to this… should the model not return JSON, I convert its response to JSON and then write that to the conversation history. I've found that if the model ever sees, in the conversation history, a response that violated one of the rules you gave it, it becomes more likely to a) always violate that rule, and b) potentially violate other rules…
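That history-repair step could be sketched like this, assuming a chat-style `messages` array and some `extractJson` helper of your own (both names are placeholders):

```javascript
// Repair a bad turn before it enters the history, so the model never
// "sees" its own rule-breaking output in later turns.
function appendRepaired(messages, rawResponse, extractJson) {
  const parsed = extractJson(rawResponse);
  const content = parsed !== null ? JSON.stringify(parsed) : rawResponse;
  messages.push({ role: 'assistant', content });
  return messages;
}
```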

Although it's very appealing to use a language model for object creation, I would really try to avoid doing it unless the model is completely controlled by you and each item can be vetted and reviewed. If it's part of a pipeline, or immediately deposited to a server, it could completely break your system.

The reason I say this is that the model can inherently make adjustments, and depending on how long your object is, it can start to make mistakes. This isn't even considering what happens when the expected data is different. This all assumes you are the sole user. Depending on how automated your pipeline is, it can completely ruin the process and possibly corrupt your database.

I was doing this for a while with a very small error rate (1%–5%). However, I have found better success with token classification and using logic to create my object.

Solved problem in GPT-4: