Question about Function Calling & JSON mode

I’m trying to build a workflow very similar to MemGPT. The main agent runs in JSON mode so it can parse user queries into enriched parameters such as “thought” or “suggested action”.

I have 2 questions:

  1. Is JSON mode compatible with Function Calling? I can’t seem to make an agent with JSON mode enabled execute functions; it always returns an error.

  2. If I remove JSON mode and only keep functions, can I define the same schema for the function and get the enriched/parsed messages (“thought”, “observation”, “suggested action”, etc. types of fields)?

Basically, for the 2nd one, I’m a bit confused whether the function call retains the context of historical user-assistant messages for that session or if it’s isolated.



I’m not sure I understand the whole use-case but I’ll try to add some context that might be useful:

  1. According to the official documentation:

Note that JSON mode is always enabled when the model is generating arguments as part of function calling.

See here

  2. In my experience, if the function makes sense (its name, description, and parameters are all described well), this is generally doable.
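As a rough sketch of that second point: a tool definition whose parameters mirror the enriched fields from the question. The name `emit_enriched_reply` and the exact field set are assumptions for illustration, not anything from the official API.

```python
# Hypothetical tool definition: the field names ("thought",
# "observation", "suggested_action") mirror the enriched-message
# idea from the question; nothing here is an official schema.
enriched_reply_tool = {
    "type": "function",
    "function": {
        "name": "emit_enriched_reply",
        "description": (
            "Return the assistant's reply broken into reasoning "
            "fields instead of free-form text."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "thought": {
                    "type": "string",
                    "description": "Internal reasoning about the user message",
                },
                "observation": {
                    "type": "string",
                    "description": "What was noticed in the user message",
                },
                "suggested_action": {
                    "type": "string",
                    "description": "Next step the system should take",
                },
            },
            "required": ["thought", "suggested_action"],
        },
    },
}
```

Passing something like this in the `tools` list should get you the same enriched fields back as function-call arguments, provided the descriptions make the intent clear to the model.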

Last: looking at your stack trace, if you’re expecting a function call you might be looking for it in the wrong attribute (`message.content`, which is `None`, instead of `message.function_call` or `message.tool_calls`).
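A minimal sketch of that check, using a plain dict in place of the real response object so it runs standalone (the `save_memory` call is a made-up example):

```python
import json

def extract_reply(message: dict):
    """Return ("tool_call", name, args) or ("content", text).

    When the model calls a function, `content` is None and the
    payload lives in `tool_calls` (or the legacy `function_call`),
    so check those attributes before touching `content`.
    """
    if message.get("tool_calls"):
        call = message["tool_calls"][0]["function"]
        return ("tool_call", call["name"], json.loads(call["arguments"]))
    if message.get("function_call"):
        call = message["function_call"]
        return ("tool_call", call["name"], json.loads(call["arguments"]))
    return ("content", message["content"])

# A function-calling response: note content is None.
msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {
            "name": "save_memory",
            "arguments": '{"fact": "likes jazz"}',
        },
    }],
}
kind, name, args = extract_reply(msg)
```

Parsing `msg["content"]` directly here would hand `None` to your JSON parser, which is consistent with the error described in the original post.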

Appreciate the response!

To clarify my use case:
We have an assistant that talks to the user, but the assistant is also connected to another group of agents that execute tasks.

The assistant that talks to the user is using JSON mode. JSON mode lets us know whether we should pass an instruction to the agent group for task execution. We do this by defining a field in the JSON schema as a boolean: if that field is true, we pass an instruction for the task to the agent group.
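For context, a minimal sketch of that routing step. The field names (`reply`, `delegate_task`, `instruction`) are assumptions standing in for whatever schema the assistant is prompted to emit:

```python
import json

def route(raw_json: str):
    """Parse the assistant's JSON-mode reply and decide where it goes.

    If the hypothetical boolean field `delegate_task` is true, hand
    the `instruction` to the agent group; otherwise show `reply`
    to the user.
    """
    data = json.loads(raw_json)
    if data.get("delegate_task"):
        return ("agent_group", data["instruction"])
    return ("user", data["reply"])

dest, payload = route(
    '{"reply": "", "delegate_task": true, "instruction": "book a flight"}'
)
```

The routing logic lives entirely outside the model; JSON mode only guarantees the reply parses as JSON, so the schema itself still has to be enforced via the prompt.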

Now, in the case of something like MemGPT, the assistant has some functions it can call to update its memory and RAG.

I want to apply functions to my user-facing assistant in a similar fashion so that I can update its memory, save chat history, and extract facts about the user in real time.

When I try to add functions to this assistant and run it, I get the error I pasted in the original post.

Hope this adds enough context.

Thanks for clarifying. IIUC, the comments I gave above should still be relevant to your question (right?), and the note under “Last” addresses your actual stack trace error.
Please let me know if I’m missing anything.

There are several things here:

  • If the AI calls a function, there is generally no “content”, so you get an error when trying to parse a content field whose value is empty. That is the error seen in the first message: it comes from parsing the response object.

  • Individual API calls have no memory between them. You must construct a history of user and assistant exchanges and pass as much of that back to the model as you would need for following a conversation.

  • Function calls likewise need their own conversation history, inserted after the most recent user input: the assistant message recording what the AI requested, and the function-return message carrying what was received back. Then, in that second call, the AI can use the information to either answer the user or continue calling functions.
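The second point above can be sketched as follows. Since individual API calls are stateless, the caller owns the transcript and resends it on every request:

```python
# Minimal sketch: the API has no memory between calls, so we keep
# the full exchange ourselves and pass it back each time.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(user_text: str, assistant_text: str) -> None:
    """Record one completed user/assistant exchange."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})

add_turn("My name is Ada.", "Nice to meet you, Ada!")

# The next request must include the prior turns; without them,
# the model has no idea who "Ada" is.
next_request = history + [{"role": "user", "content": "What's my name?"}]
```

In practice you would also trim or summarize `history` once it approaches the model's context window.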

This post shows how to insert the last two when using functions:
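A minimal sketch of those two inserted messages, using a hypothetical `save_memory` function (the names and return payload are made up for illustration):

```python
import json

# After the first API response requests a tool call, two messages
# are appended before the second call:
#   1) the assistant message echoing what the model requested,
#   2) the tool message carrying the function's return value,
#      matched to the request by tool_call_id.
messages = [
    {"role": "user", "content": "Remember that I like jazz."},
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "save_memory",
                "arguments": json.dumps({"fact": "likes jazz"}),
            },
        }],
    },
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": json.dumps({"status": "saved"}),
    },
]
# Sending `messages` in the second API call lets the model confirm
# the save to the user or chain further tool calls.
```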