How to get a JSON-structured response from the API

I am building a chat support assistant, and as a first step of processing a user message I need to evaluate whether it's just a "hello" message or a particular question. Depending on this evaluation I have different scenarios for further processing with the Assistants API. If it is a particular question, I will provide RAG context to my assistant to generate the answer. So how can I return JSON with a boolean value of the message evaluation?


To make sure I understand correctly: you want to first evaluate using a separate model? If so, you have two main options here:

  1. Define tool calls for the different use cases you want to move on to, and then send it to the chat completions endpoint (along with some context in the system message, perhaps).
  2. Instruct the model in the system message to do what you want (without function calling). Something like: "Evaluate whether the user message is a greeting. Return 'true' if it is, 'false' if it isn't. Don't return anything else."
    *I haven't tried it out, so you might need to test this and adjust a bit. You can also ask for a JSON object with more data if you need.
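A minimal sketch of option 2 in Python. The model name, the JSON field name `is_greeting`, and the helper names are all my own assumptions, not anything the thread specifies; the actual API call is shown commented out so only the prompt-building and response-parsing logic runs here:

```python
import json

# System prompt instructing the model to return only a small JSON object.
# The field name "is_greeting" is an assumption; use whatever your code expects.
SYSTEM_PROMPT = (
    "Evaluate whether the user message is a greeting. "
    'Respond with only a JSON object: {"is_greeting": true} or {"is_greeting": false}. '
    "Don't return anything else."
)

def build_messages(user_message: str) -> list[dict]:
    """Build the chat-completions message list for the classifier call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

def parse_reply(reply: str) -> bool:
    """Parse the model's JSON reply into a plain bool."""
    return bool(json.loads(reply)["is_greeting"])

# The real call would look roughly like this (untested sketch, needs an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_messages("hello there!"),
#     response_format={"type": "json_object"},  # ask for JSON-only output
# )
# is_greeting = parse_reply(resp.choices[0].message.content)

print(parse_reply('{"is_greeting": true}'))   # → True
print(parse_reply('{"is_greeting": false}'))  # → False
```

Parsing into a plain bool up front keeps the branching logic in your own code simple: one `if is_greeting: ...` and you're on to the next step.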

The output of “boolean” is very easy to instruct as a permanent behavior. Give examples of those exact two outputs as the only two choices the AI can make.
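One way to give the model "examples of those exact two outputs" is a few-shot message list: two worked exchanges demonstrate that `true` and `false` are the only permitted replies, then the real message goes last. The example texts here are made up for illustration:

```python
# Few-shot message list for the classifier: the two seeded assistant turns
# show the only two outputs the model may produce.
def few_shot_messages(user_message: str) -> list[dict]:
    return [
        {"role": "system", "content": "Classify whether the user message is a greeting. Answer only true or false."},
        {"role": "user", "content": "hi there!"},
        {"role": "assistant", "content": "true"},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "false"},
        {"role": "user", "content": user_message},
    ]

msgs = few_shot_messages("hello, anyone here?")
print(len(msgs))  # 6: system, two worked examples, and the real input
```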


Awesome! So as the simplest scenario: I can set up a separate assistant which will just determine whether the message is "something like hello" or contains a real question, then analyse the JSON from the assistant's response, and based on this response proceed with my following scenario?

Originally that was the plan:

  1. use the Assistants API or chat completions to determine whether the user message was just a "hello" message
  2. if it was a hello, just generate some appropriate answer. If it was a meaningful question:
    2.1. request my RAG context from my separate API endpoint
    2.2. request the Assistants API to answer the user considering its basic knowledge about my product (which I provide in the assistant's instructions), the current user question, and my RAG context.

But you gave me an idea to incorporate functions and do it all in one assistant call (the functions would be used for requesting my RAG context).
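That single-call idea can be sketched as a tool the model calls only when it actually needs product knowledge, so a plain greeting never triggers retrieval. The tool name `fetch_rag_context`, its parameters, and the stub handler below are hypothetical:

```python
# Hypothetical tool definition: the assistant invokes this for real questions;
# greetings don't involve it, so no separate classifier call is needed.
RAG_TOOL = {
    "type": "function",
    "function": {
        "name": "fetch_rag_context",
        "description": "Retrieve product documentation relevant to the user's question.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The user's question, used to search the RAG index.",
                }
            },
            "required": ["query"],
        },
    },
}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a tool call from the model; a stub stands in for the real RAG endpoint."""
    if name == "fetch_rag_context":
        return f"(retrieved context for: {arguments['query']})"
    raise ValueError(f"unknown tool: {name}")

print(handle_tool_call("fetch_rag_context", {"query": "pricing tiers"}))
```

In a real run you would pass `tools=[RAG_TOOL]` on the request, and when the response contains a tool call, execute `handle_tool_call` against your own endpoint and send the result back as a `tool` message.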


You would not use the Assistants endpoint, which is for when you want internal model calling and conversations beyond your control.

What I show in the screenshot is using chat completions to generate a single output based on an input. You can see it works for a single question, but you may want to pass the last several inputs and responses so the classifier AI can understand the context.


In my understanding (IIUC) this is the better plan.

You can in general just define one assistant with function calling that does the work you're trying to do (to be honest I don't fully understand the process there, but it should work for whatever, even just performing another call to that same assistant).
This way you aren't supposed to care whether the user sent a greeting: the assistant will just reply to it, because a greeting doesn't involve any of the tasks defined in the function calls.

Sorry in advance if I’m missing anything.


What do you mean by IIUC?

Is there a possibility in OpenAI chat completions to pass a previous "imaginary" chain of chat between user and bot? I mean (as far as I understood you):

user request 1
chat response 1
user request 2
chat response 2

Yes, you can provide a simulated conversation that will be continued upon, as long as it is within the realm of what the AI would believe.
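That "imaginary" history maps directly onto the chat completions `messages` array: the prior turns go in as alternating `user`/`assistant` messages written by you, and the API treats them as real history when generating the next reply. The turn contents below are placeholders:

```python
# A seeded conversation: the earlier turns are authored by you, not the model,
# but the completions endpoint continues from them as if they had happened.
history = [
    {"role": "user", "content": "user request 1"},
    {"role": "assistant", "content": "chat response 1"},
    {"role": "user", "content": "user request 2"},
    {"role": "assistant", "content": "chat response 2"},
    {"role": "user", "content": "the new, real user message"},
]

# Sanity check: roles alternate and the list ends with the new user turn,
# which is the message the model will actually respond to.
roles = [m["role"] for m in history]
assert roles == ["user", "assistant", "user", "assistant", "user"]
print(roles[-1])  # → user
```

You would pass `messages=history` (usually with a system message prepended) to the chat completions call.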


IIUC = "If I Understand Correctly", sorry about that.
