Getting response data as a fixed and consistent JSON response

Noob question alert!
In the ChatGPT API the prompt is passed as "messages": [{"role": "user", "content": "Hello!"}]. Within "content", how can I give multiple valid JSON key-value pairs, like in your example above? That approach might work well for the regular completions API, but for the gpt-3.5 chat model, how do I show the model a JSON example so it consistently outputs valid JSON?

[
   {"role": "user", "content": "Hello!"},
   {"role": "user", "content": "Hello Again!"},
   {"role": "user", "content": "Hello Three Times!"},
   {"role": "user", "content": "Hello Forever!"},
]
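
If it helps, here is a minimal Python sketch showing how a JSON example can simply be embedded as plain text in the content string of one message (the prompt wording, model name and the legacy openai.ChatCompletion call are assumptions on my part; newer SDK versions use client.chat.completions.create instead):

import json
import openai  # legacy SDK (< 1.0); assumes OPENAI_API_KEY is set in the environment

# A JSON template passed as ordinary text inside a single "content" string
template = '{"greeting": "the greeting", "language": "language of the greeting"}'

messages = [
    {"role": "system", "content": "Reply only with RFC8259 compliant JSON."},
    {"role": "user", "content": "Say hello in French. Use this format without deviation: " + template},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
reply = response["choices"][0]["message"]["content"]
print(json.loads(reply))  # raises ValueError if the model strayed from valid JSON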

HTH


I have tweaked my preferred prompt to give a better response, and I now specifically ask for RFC8259 compliant JSON. I have had results from ChatGPT as consistent as from Davinci, although for ChatGPT I also had to add an instruction not to provide an explanation, so that I consistently get only the JSON without any preamble:

system prompt:

Pretend you are an expert language translator

user prompt:

Create a list of three random source phrases and three random translations for each.
Do not include any explanations, only provide an RFC8259 compliant JSON response following this format without deviation.
[{
  "source_language": "language of original phrase",
  "source_phrase": "the phrase to be translated",
  "translations": [{
    "trans_language": "language of the translation",
    "translation": "the translated phrase"
  }]
}]
The JSON response:

This gives a consistent JSON response along the lines of…

[{
  "source_language": "English",
  "source_phrase": "I love pizza",
  "translations": [{
    "trans_language": "Spanish",
    "translation": "Me encanta la pizza"
  },{
    "trans_language": "Italian",
    "translation": "Amo la pizza"
  },{
    "trans_language": "French",
    "translation": "J'adore la pizza"
  }]
},{
  "source_language": "Mandarin",
  "source_phrase": "你好吗?",
  "translations": [{
    "trans_language": "English",
    "translation": "How are you?"
  },{
    "trans_language": "Korean",
    "translation": "어떻게 지내?"
  },{
    "trans_language": "Japanese",
    "translation": "お元気ですか?"
  }]
},{
  "source_language": "Russian",
  "source_phrase": "Я люблю мороженое",
  "translations": [{
    "trans_language": "German",
    "translation": "Ich liebe Eis"
  },{
    "trans_language": "Turkish",
    "translation": "Dondurma seviyorum"
  },{
    "trans_language": "Polish",
    "translation": "Kocham lody"
  }]
}]
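
For anyone wiring this into code, here is a rough sketch of sending that system/user prompt through the chat completions endpoint and parsing the reply (the model name, temperature setting and legacy openai.ChatCompletion call are my own assumptions, not part of the post above):

import json
import openai  # legacy SDK (< 1.0); assumes OPENAI_API_KEY is set in the environment

system_prompt = "Pretend you are an expert language translator"
user_prompt = (
    "Create a list of three random source phrases and three random translations for each.\n"
    "Do not include any explanations, only provide an RFC8259 compliant JSON response "
    "following this format without deviation.\n"
    '[{"source_language": "language of original phrase", '
    '"source_phrase": "the phrase to be translated", '
    '"translations": [{"trans_language": "language of the translation", '
    '"translation": "the translated phrase"}]}]\n'
    "The JSON response:"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # a lower temperature tends to improve format adherence
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

items = json.loads(response["choices"][0]["message"]["content"])
for item in items:
    print(item["source_phrase"], [t["translation"] for t in item["translations"]])

Wrapping the json.loads call in a try/except and retrying once is a cheap guard against the occasional malformed reply.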

This worked great for me! I'm consistently receiving a JSON response.

Finally, an approach that seems to work, thank you!
Roy

I would like to hear whether anyone has experience placing instructions in the system role content with chat completions? My experiences so far are not too good.

I just figured out another prompt. My inputs are different, though, and need to be categorized as "high", "medium", "small" or "extra_small".

My prompt (relevant part in bold): "Without any comment, return the result in the following JSON format {"high":[…],"medium":[…],"small":[…],"extra_small":[…]}"
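
A small sketch of consuming that shape on the Python side (the raw reply below is a made-up example of what the model might return):

import json

# Hypothetical reply following the instructed format
raw_reply = '{"high": ["urgent bug"], "medium": ["refactor login"], "small": ["rename var"], "extra_small": []}'

buckets = json.loads(raw_reply)
for size in ("high", "medium", "small", "extra_small"):
    # .get() guards against the model omitting an empty category
    print(size, buckets.get(size, []))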

My app depends on a pre-formatted JSON structure, which contains not only the text reply but also various system commands and their args.

My tips for anyone who needs this:

The POSITION of the JSON-only instruction is the MAIN FACTOR in how consistently GPT will follow it.

– As long as this particular instruction is the VERY LAST part of the entire prompt, you are good to go.
– I place it just below the user input (as a reminder); see the sketch below.

This works best for me.
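
To illustrate the tip, a rough sketch of keeping the JSON-only instruction as the very last message, right after the user input (the wording, model name and legacy openai.ChatCompletion call are my own assumptions):

import openai  # legacy SDK (< 1.0); assumes OPENAI_API_KEY is set in the environment

user_input = "Turn off the living room lights"  # hypothetical app input

json_only_reminder = (
    "Respond only with RFC8259 compliant JSON in the agreed format, "
    "containing the reply text plus any system commands and args. Do not add any explanation."
)

messages = [
    {"role": "system", "content": "You are the command interpreter for my app."},
    {"role": "user", "content": user_input},          # the actual request
    {"role": "user", "content": json_only_reminder},  # kept as the very last part of the prompt
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])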

I thought this approach from Microsoft was really interesting: microsoft/guidance on GitHub, a guidance language for controlling large language models.