Getting response data as a fixed, consistent JSON response

Noob Question Alert!
So in the ChatGPT API, the prompt takes the format "messages": [{"role": "user", "content": "Hello!"}]. Within "content", how can I give multiple valid JSON key-value pairs, like in your example above? That solution might work well for the regular completions API, but for the ChatGPT 3.5 model, how do I show the model a JSON example so that it consistently outputs valid JSON?

   {"role": "user", "content": "Hello!"},
   {"role": "user", "content": "Hello Again!"},
   {"role": "user", "content": "Hello Three Times!"},
   {"role": "user", "content": "Hello Forever!"},



I have tweaked my preferred prompt to give a better response, specifically asking for RFC8259 compliant JSON. I have had results as consistent with ChatGPT as with Davinci, although for ChatGPT I also had to add an instruction not to provide an explanation, in order to consistently get only the JSON without any preamble:

system prompt:

Pretend you are an expert language translator

user prompt:

Create a list of three random source phrases and three random translations for each.
Do not include any explanations, only provide an RFC8259 compliant JSON response following this format without deviation.
[{
  "source_language": "language of original phrase",
  "source_phrase": "the phrase to be translated",
  "translations": [{
    "trans_language": "language of the translation",
    "translation": "the translated phrase"
  }]
}]
The JSON response:
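For reference, the system and user prompts above can be sent like this with the openai Python library (a sketch assuming the v0-style ChatCompletion interface; the model name is an assumption, and the live call is left commented so the snippet stands alone):

```python
SYSTEM_PROMPT = "Pretend you are an expert language translator"
USER_PROMPT = (
    "Create a list of three random source phrases and three random "
    "translations for each. Do not include any explanations, only provide "
    "an RFC8259 compliant JSON response following this format without "
    # (the JSON format block from the post would be inserted here)
    "deviation. The JSON response:"
)

def build_messages(system_prompt, user_prompt):
    """Assemble the messages list for the chat completions endpoint."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# import json, openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages(SYSTEM_PROMPT, USER_PROMPT),
# )
# data = json.loads(response["choices"][0]["message"]["content"])
```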

This gives a consistent JSON response along the lines of…

  "source_language": "English",
  "source_phrase": "I love pizza",
  "translations": [{
    "trans_language": "Spanish",
    "translation": "Me encanta la pizza"
    "trans_language": "Italian",
    "translation": "Amo la pizza"
    "trans_language": "French",
    "translation": "J'adore la pizza"
  "source_language": "Mandarin",
  "source_phrase": "你好吗?",
  "translations": [{
    "trans_language": "English",
    "translation": "How are you?"
    "trans_language": "Korean",
    "translation": "어떻게 지내?"
    "trans_language": "Japanese",
    "translation": "お元気ですか?"
  "source_language": "Russian",
  "source_phrase": "Я люблю мороженое",
  "translations": [{
    "trans_language": "German",
    "translation": "Ich liebe Eis"
    "trans_language": "Turkish",
    "translation": "Dondurma seviyorum"
    "trans_language": "Polish",
    "translation": "Kocham lody"

This worked great for me! I'm consistently receiving JSON responses.

Finally, an approach that seems to work, thank you!

I would like to hear if anyone has experience placing instructions in the system role content with chat completions. My experiences so far have not been good.

I just figured out another prompt. My inputs are different, though, and are to be categorized as "high", "medium", "small", or "extra_small".

My prompt (relevant part in bold): “Without any comment, return the result in the following JSON format {"high":[…],"medium":[…],"small":[…],"extra_small":[…]}”
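A quick sketch of consuming that kind of reply in Python. parse_buckets is my own hypothetical helper, and the sample string stands in for the model's actual output:

```python
import json

FORMAT_INSTRUCTION = (
    'Without any comment, return the result in the following JSON format '
    '{"high": [], "medium": [], "small": [], "extra_small": []}'
)

def parse_buckets(raw):
    """Parse the reply and make sure all four buckets are present."""
    data = json.loads(raw)
    for key in ("high", "medium", "small", "extra_small"):
        data.setdefault(key, [])  # tolerate a missing bucket
    return data

# Stand-in for the model's reply:
sample = '{"high": ["A"], "medium": [], "small": ["B", "C"], "extra_small": []}'
buckets = parse_buckets(sample)
```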

My app depends crucially on a pre-formatted JSON structure, which contains not only the reply text but also various system commands and args.

My tips for anyone who needs this:

The POSITION of the JSON-only instruction is the main factor in how consistently GPT will follow it.

– As long as this particular instruction is the VERY LAST part of the entire prompt, you are good to go.
– I place it just under the user input (as a reminder).

This works best for me.
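A minimal sketch of that ordering (the instruction and task text are illustrative):

```python
JSON_ONLY = "Return ONLY valid JSON. No comments, no explanations."

def build_prompt(task, user_input):
    """Keep the JSON-only instruction as the very last part of the prompt,
    placed just under the user input as a reminder."""
    return f"{task}\n\nUser input:\n{user_input}\n\n{JSON_ONLY}"

prompt = build_prompt("Categorize the items below.", "apple, truck, ant")
```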

I thought this approach from Microsoft was really interesting: GitHub - microsoft/guidance: A guidance language for controlling large language models.


Hey, we were having this problem as well. The way we solved it was by adding this “reply in JSON format” instruction in every interaction we had with ChatGPT, not only in the prompt. It seems to be working.

You can also try AlphaWave (pip install alphawave); it solves this problem by validating the response.
If the response JSON is surrounded by other text, as is often the case, it will extract the JSON.
If there is no valid JSON, it uses the JSON validator's error to provide specific failure “feedback” in a retry.
It also manages the conversation history, so that once the failure is corrected, the “feedback” messages are deleted from history and you don't waste context space.
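AlphaWave's actual API differs, but the extract-validate-retry idea it describes can be sketched generically. In this sketch, everything is my own construction, and ask_model is a stand-in for a real chat call (stubbed here so the code runs offline):

```python
import json
import re

def extract_json(text):
    """Pull the first {...} or [...] span out of surrounding prose."""
    match = re.search(r"[\[{].*[\]}]", text, re.DOTALL)
    return match.group(0) if match else text

def get_json_with_retry(ask_model, prompt, max_tries=3):
    """Ask the model; on invalid JSON, feed the parser error back and retry."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_tries):
        reply = ask_model(messages)
        try:
            return json.loads(extract_json(reply))
        except json.JSONDecodeError as err:
            # Append the failure as feedback, AlphaWave-style.
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user",
                             "content": f"That was not valid JSON ({err}). "
                                        "Reply with valid JSON only."})
    raise ValueError(f"no valid JSON after {max_tries} tries")

# Stubbed "model" that fails once (trailing comma), then succeeds.
replies = iter(['Here it is: {"a": 1,}', '{"a": 1}'])
result = get_json_with_retry(lambda msgs: next(replies), "give me JSON")
```

A real implementation would also drop the feedback messages from history once the repair succeeds, as the post describes.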

TypeScript and Python versions are available.


OpenAI recently announced updates to their API that now make it possible to get properly formatted JSON in your response.

Previously, you could do a little “prompt engineering” and get stringified JSON simply by appending “provide your response in JSON format” to the end of the prompt. However, these responses often included incorrect trailing commas or introductory text (“Here is your recipe in JSON format:”) that led to breaking errors.

I’ve written an explanatory post where I go into detail on how you can update your old prompts with the new parameters to get a JSON response. No links allowed here, but you can search for that article on Medium.

Briefly, you first define your JSON Schema object, then pass it to the new functions parameter of the ChatCompletion endpoint:

      model: "gpt-3.5-turbo-0613",
      messages: [
        { role: "system", "content": "You are a helpful recipe assistant." },
        { role: "user", content: prompt }],
      functions: [{ name: "set_recipe", parameters: schema }],
      function_call: {name: "set_recipe"}

Look up JSON Schema to make sure you define the schema correctly. It is a bit verbose.
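Here is a Python sketch of the same function-calling approach (assuming the openai-python v0 interface; "set_recipe" and the recipe fields are illustrative, and the live call is commented out):

```python
# JSON Schema describing the structure we want back from the model.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "ingredients": {"type": "array", "items": {"type": "string"}},
        "steps": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["name", "ingredients", "steps"],
}

# import json, openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0613",
#     messages=[
#         {"role": "system", "content": "You are a helpful recipe assistant."},
#         {"role": "user", "content": prompt},
#     ],
#     functions=[{"name": "set_recipe", "parameters": schema}],
#     function_call={"name": "set_recipe"},
# )
# The arguments come back as a JSON string:
# args = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
```

Forcing function_call to a specific function makes the model fill in that schema rather than deciding on its own whether to call it.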


Fix found!
I have been having the same issue. I even tried three-shot prompting with three examples and had no luck; it just wouldn't generate JSON without text saying ‘here is your JSON format’. I actually asked GPT-4 how to get around this, and it found an easy solution.
You basically define a function that grabs the text between the ‘[’ and ‘]’ brackets and then pass that text off to wherever you need it; in my case, I'm parsing it with json.loads.

Only the standard library's json import is required.

Here is the example code GPT provided me:

import json

def extract_json_from_string(s):
    start = s.find('[')
    end = s.rfind(']') + 1
    return s[start:end]

json_string = extract_json_from_string(response["choices"][0]["message"]["content"])
playlist = json.loads(json_string)

This has worked every time, and I'll be using it going forward!

OpenAI released function calling, which can directly get a JSON object as output without us having to ask the model explicitly in the prompt.

Currently, only gpt-4-0613 and gpt-3.5-turbo-0613 support it.


I would like to thank everyone and try to help others who may need something similar, related to responses and string-to-JSON response transformations.

To @bruce.dambrosio, thank you because it was through your post that I discovered Promptrix and AlphaWave (Alpha Wave’s automatic response repair in action - Community - OpenAI Developer Forum).

Regarding my experiences using and modifying responses (Python + JSON + DB): what kept me going, in most cases, were splits or regexes, and only recently did I decide to change two of the main codebases I created and maintain (one of which I use for commercial purposes).

I am using the new gpt-3.5 and gpt-4 models (OpenAI Platform).

I haven’t been able to update with promptrix + alphawave yet, but I would love to have feedback if anyone is kind enough.

In any case, I revived the post and will soon share my results with promptrix.

@joseicarobc are you using the Python or JS versions of Promptrix+AlphaWave? I've made a number of recent improvements to the JS versions, but the Python version is lagging a little. AlphaWave is still pretty much state of the art (in my opinion) for improving the overall reliability of getting structured data back from the model.
