I want to get a JSON-format response that I can parse, using the gpt-4 model. I also want to supply my own prompt to get the JSON data.

When I send text content to the gpt-4 model, I want to send this data to set the response format.
How can I send this data as a parameter of the openai.chat.completions.create() function?

Hey chatgpt, please analyze this call and give answers against these categories:
1. "What Questions Were Asked"
2. "And How They Were Answered"
3. "Pain Points Of The Customer"
4. "Understand Any Objections"
5. "Any pricing discussed"
6. "Results Of The Call"
7. "Overall Summary Of The Call"
Please give the response in a proper JSON format that will give a director of sales full context by simply glancing at your JSON response.

  Here is the example JSON output:
  {
  "questions": ["1st question", "2nd question"],
  "answers": ["answer 1", "answer 2"],
  "pain_points": ["Pain point 1", "Pain point 2"],
  "objections": ["objection 1", "objection 2"],
  "pricing_discussed": ["pricing 1", "pricing 2"],
  "call_results": "The call was so good",
  "call_summary": "This is the overall summary of the call"
  }



Using a model’s API function-calling simply to format the output the AI generates is somewhat of a misuse of the model’s function training, and works against how it is trained to behave.

Here is typical use:

User: I have a question about 2023
AI: I will call the "headline_news" function with a query to help me answer
Function return: (relevant results)
AI: Here’s information about 2023…

Non-typical use:

User: Analyze this phone call transcript
AI: I will call a function to answer this question

Creating a function specification, where each property you list is an array data type with strings, is relatively straightforward. Making the AI use it like you wish is the hard part.
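For reference, such a function specification could look like the sketch below. The function name and `required` list are illustrative, not from the original post; the property names mirror the example JSON in the question:

```javascript
// A sketch of a function specification where each category is an
// array-of-strings property, except the two free-text summaries.
const callAnalysisFunction = {
  name: "record_call_analysis", // illustrative name
  description: "Record the analysis of a sales call transcript.",
  parameters: {
    type: "object",
    properties: {
      questions: { type: "array", items: { type: "string" } },
      answers: { type: "array", items: { type: "string" } },
      pain_points: { type: "array", items: { type: "string" } },
      objections: { type: "array", items: { type: "string" } },
      pricing_discussed: { type: "array", items: { type: "string" } },
      call_results: { type: "string" },
      call_summary: { type: "string" },
    },
    required: ["questions", "answers", "call_summary"],
  },
};
```

This object would be passed in the `functions` (or newer `tools`) parameter of the chat completions request, but as noted above, getting the model to actually use it this way is the hard part.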


Does GPT-4 need some “function” to produce the response? No.

I rewrote the forum post into a system message to program the AI’s job:

From the text of a phone call transcript, the AI should analyze the call and give answers:
1. "What Questions Were Asked"
2. "And How They Were Answered"
3. "Pain Points Of The Customer"
4. "Understand Any Objections"
5. "Any pricing discussed"
6. "Results Of The Call"
7. "Overall Summary Of The Call"
Please give the response in a proper JSON format that will give a director of sales full context by simply glancing at your JSON response.

Here is the example JSON output:
{
"questions": ["1st question", "2nd question"],
"answers": ["answer 1", "answer 2"],
"pain_points": ["Pain point 1", "Pain point 2"],
"objections": ["objection 1", "objection 2"],
"pricing_discussed": ["pricing 1", "pricing 2"],
"call_results": "The call was so good",
"call_summary": "This is the overall summary of the call"
}

With some simulated transcript pasted into the user role, we see the AI response from GPT-4:

{
"questions": [
"Can you please provide me with your name and the serial number of your lawnmower for reference?",
"Could you describe the issue with the battery in more detail? Is it not holding a charge, or is it not working at all?",
"Have you tried charging the battery overnight?",
"Have you checked for any visible damage on the battery or the connectors?",
"How long will it take to get the replacement battery?"
],
"answers": [
"My name is John, and the serial number is GTELM1234.",
"The battery isn't holding a charge, and it's barely getting through half of my lawn before it dies.",
"Yes, I've tried that, but it's not improving the situation at all.",
"I've checked, and everything seems to be fine physically.",
"Typically, it takes 5-7 business days for the replacement battery to arrive at your address."
],
"pain_points": [
"The customer is frustrated with the short lifespan of the lawnmower's battery.",
"The battery isn't holding a charge and is not lasting long enough to mow the customer's lawn."
],
"objections": [
"The customer is unhappy with the product's performance and the inconvenience caused."
],
"pricing_discussed": [
"No pricing was discussed during the call."
],
"call_results": "The customer service agent offered to send a replacement battery under warranty to resolve the issue.",
"call_summary": "The customer, John, called to express his frustration with the battery of his electric lawnmower. The customer service agent acknowledged his frustration, gathered necessary information, and suggested troubleshooting steps. When these didn't resolve the issue, the agent offered to send a replacement battery under warranty. The call ended with the customer's query about the delivery time for the replacement battery."
}

With the quality of GPT-4, you don’t need to be counter-productive and try to lean on a crutch like function-calling used the wrong way to get the output you want.


I just want to get the response in the JSON format I gave.
How can I get the response in that format?
I don’t want to call the AI API.
If I can send my prompt as a parameter, that would be better for me as well.

Then I don’t know what you expect. You can’t communicate with the AI telepathically.

The chat completions method you discuss is the API, where you submit your input to a pay-per-use AI model and receive the generated output back, requiring a funded account with OpenAI. And “API” stands for Application Programming Interface, as in: you write a program to communicate with OpenAI’s services.

API Reference - go to chat completions for coding examples.

If you just want to chat with an AI and have it do this job for you, you can sign up for ChatGPT at chat.openai.com, and for $20/month upgrade to Plus, you can use the better reasoning of GPT-4 to answer questions like I wrote.


Thanks for your response; my description was wrong.
As you said, to get a response, I need to call the AI API.
I will explain below what I want to get.

This is my prompt.

const basePrompt = `Hey chatgpt, please analyze this call and give answers against these categories:
      1. "What Questions Were Asked"
      2. "And How They Were Answered"
      3. "Pain Points Of The Customer"
      4. "Understand Any Objections"
      5. "Any pricing discussed"
      6. "Results Of The Call"
      7. "Overall Summary Of The Call"
      Please give the response in a proper JSON format that will give a director of sales full context by simply glancing at your JSON response.
      
      Here is the example JSON output:
      {
      "questions": ["1st question", "2nd question"],
      "answers": ["answer 1", "answer 2"],
      "pain_points": ["Pain point 1", "Pain point 2"],
      "objections": ["objection 1", "objection 2"],
      "pricing_discussed": ["pricing 1", "pricing 2"],
      "call_results": "The call was so good",
      "call_summary": "This is the overall summary of the call"
      }`

When I call openai.chat.completions.create() API, I want to get result as above format.

I will share my current codebase and I am not sure whether or not this is correct.

const basePrompt = `Hey chatgpt, please analyze this call and give answers against these categories:
      1. "What Questions Were Asked"
      2. "And How They Were Answered"
      3. "Pain Points Of The Customer"
      4. "Understand Any Objections"
      5. "Any pricing discussed"
      6. "Results Of The Call"
      7. "Overall Summary Of The Call"
      Please give the response in a proper JSON format that will give a director of sales full context by simply glancing at your JSON response.
      
      Here is the example JSON output:
      {
      "questions": ["1st question", "2nd question"],
      "answers": ["answer 1", "answer 2"],
      "pain_points": ["Pain point 1", "Pain point 2"],
      "objections": ["objection 1", "objection 2"],
      "pricing_discussed": ["pricing 1", "pricing 2"],
      "call_results": "The call was so good",
      "call_summary": "This is the overall summary of the call"
      }`

    const completion = await openai.chat.completions.create({
      messages: [
        { role: "system", content: basePrompt },
        {
          role: "user",
          content: transcription.text,
        },
      ],
      model: "gpt-4",
    })

Please let me know. Thanks


API models don’t have a name, so saying “hey chatgpt” or “please” doesn’t make much sense. In a system role you are telling the AI what it is and what its purpose is, not chatting with it.

Here is an example in Node.js from the API documentation:

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "system", content: "You are a helpful assistant." }],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();

Or from node.js v4 docs:


const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{"role": "user", "content": "Hello!"}],
});
console.log(chatCompletion.choices[0].message);

Your construction of an API call looks to be in good form if you have the prerequisites in place.

I still cannot understand.
How can I pass this string to get the desired response?

`Hey chatgpt, please analyze this call and give answers against these categories:
      1. "What Questions Were Asked"
      2. "And How They Were Answered"
      3. "Pain Points Of The Customer"
      4. "Understand Any Objections"
      5. "Any pricing discussed"
      6. "Results Of The Call"
      7. "Overall Summary Of The Call"
      Please give the response in a proper JSON format that will give a director of sales full context by simply glancing at your JSON response.
      
      Here is the example JSON output:
      {
      "questions": ["1st question", "2nd question"],
      "answers": ["answer 1", "answer 2"],
      "pain_points": ["Pain point 1", "Pain point 2"],
      "objections": ["objection 1", "objection 2"],
      "pricing_discussed": ["pricing 1", "pricing 2"],
      "call_results": "The call was so good",
      "call_summary": "This is the overall summary of the call"
      }`

If you really need to do all that in ONE prompt, I’d try something like:

“For this list of questions, please place your answers in JSON format, where each answer is a top level property on the JSON, with property names like answer1, answer2, answer3, etc. For any answers that result in a list of things as the reply, put the list in an array of strings inside the answer.”
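As an illustration of the shape that instruction describes (the values below are invented, not from the thread):

```javascript
// Illustrative only: each answer is a top-level property; answers that are
// lists become arrays of strings, single answers stay plain strings.
const exampleOutput = {
  answer1: ["1st question asked", "2nd question asked"],
  answer2: ["Pain point 1", "Pain point 2"],
  answer3: "A single free-text answer stays a plain string.",
};
```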

Are you confused about the “messages” you wrote out here?

The instruction part I wrote out would be the text of the system role “content”, while the user role content would be "call transcript: " + your phone call log.

The AI will understand its job as defined by the instructions and what it should produce, the data to be analyzed as provided by the user role, and then produce its analysis in the requested format as the response to you.
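Concretely, the role split described above might look like this sketch (the instruction and transcript text here are invented placeholders):

```javascript
// The instruction text goes in the system role; the data to be analyzed
// goes in the user role, prefixed so the AI knows what it is looking at.
const systemInstructions =
  "From the text of a phone call transcript, analyze the call and " +
  "respond only with JSON in the documented format.";
const transcript =
  "Agent: Thanks for calling. Customer: My mower battery died.";

const messages = [
  { role: "system", content: systemInstructions },
  { role: "user", content: "call transcript: " + transcript },
];

// These messages would then be passed to
// openai.chat.completions.create({ model: "gpt-4", messages })
```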

My above code is working.
Thanks.
As I said, I want to know how I can pass my instructions to the gpt-4 model.

How I Fixed It

Setting Expectations with the “System” Role

I noticed that setting an initial message with the role as “system” helped. This way, you can instruct the model to stick to certain types of responses. For example, I used the following setup:

{
  role: "system",
  content: "You are a machine that only returns and replies with valid, iterable RFC8259 compliant JSON in your responses"
}

This helps set a “systemic” instruction for the model to abide by, and I’ve found that it significantly improves consistency.

Precise Prompt Formatting

Be as specific and clear as possible with your prompt. If you need a specific structure, don’t be shy to explicitly spell it out. For instance:

const prompt = `generate a 4 question advanced and very hard quiz about ${subject} - provide the question, one correct answer and 3 wrong answers in a json array format. The objects will be called question1-4, correct_answer, and wrong_answers`;

Sanity Check the Response

After I get the response, I validate and filter it to ensure that it follows the format I want. This can act as a fail-safe in case the model output deviates:

const content = data.choices[0].message.content;
let parsedContent;
try {
  parsedContent = JSON.parse(content);
} catch (err) {
  console.error("Response was not valid JSON:", err.message);
  return null;
}
if (Array.isArray(parsedContent)) {
  // Your logic here...
} else {
  console.error("Invalid format: expected a JSON array.");
  return null;
}

The Big Picture

Here’s a chunk from my own code that incorporates the above principles:

const generateQuizQuestions = async (apiKey, prompt) => {
  const url = 'https://api.openai.com/v1/chat/completions';
  const headers = {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  };
  const body = JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: "system",
        content: "You are a machine that only returns and replies with valid, iterable RFC8259 compliant JSON in your responses"
      },
      {
        role: 'user',
        content: prompt
      }],
    temperature: 0.7
  });

  const response = await fetch(url, { method: 'POST', headers, body });
  const data = await response.json();

  if (!data.choices || !data.choices[0] || !data.choices[0].message || !data.choices[0].message.content) {
    return null;
  }

  const content = data.choices[0].message.content;
  let parsedContent;
  try {
    parsedContent = JSON.parse(content);
  } catch (err) {
    console.error("Response was not valid JSON:", err.message);
    return null;
  }

  if (Array.isArray(parsedContent)) {
    return parsedContent.map(q => {
      return {
        question: q.question,
        correct_answer: q.correct_answer,
        options: [q.correct_answer, ...q.wrong_answers].sort(() => 0.5 - Math.random())
      };
    });
  } else {
    console.error("Invalid format: expected a JSON array.");
    return null;
  }
};

I hope this helps! Even though you ran into this issue last year, I hope this can be of use to you or anyone else who finds themselves in a similar situation. Cheers! :beers:


Pasting the same thing into three different old threads is not necessary for us to see it.

Informing the intelligent AI that it is not chatting with a user, but instead is a data processor interacting with a strict API, can help. That is part of “know your job completely” prompting.

It may help people who stumble in from different Google searches, though.
For me, using OpenAI’s method is too verbose if you want to save tokens, and it hasn’t always complied with my request either; the method above has never failed me.

I actually need this as a system prompt for converting stuff in GitHub.

@seanfox1989 btw, if you replace this:
content: "You are a machine that only returns and replies with valid, iterable RFC8259 compliant JSON in your responses"

with this:
content: "You are a machine that only returns and replies with valid, iterable RFC8259 compliant JSON in your responses with codeblock json."

the response is way better.

model: gpt-3.5-turbo-0613
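One caveat with that tip: the reply then arrives wrapped in a Markdown code fence, which JSON.parse will reject. You would need to strip the fence first; the helper below is a sketch (the function name is made up):

```javascript
// Strip an optional ```json ... ``` fence from a model reply before parsing.
function stripJsonFence(text) {
  const match = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  return match ? match[1].trim() : text.trim();
}

// Example: a fenced reply parses cleanly after stripping.
const fencedReply = '```json\n{"ok": true}\n```';
const parsed = JSON.parse(stripJsonFence(fencedReply));
// parsed.ok === true
```

Unfenced replies pass through unchanged, so the same helper works whether or not the model obeyed the code-block instruction.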
