Function calling in Chat Completions API vs Assistants API

I’m not 100% clear on when to use function calling as part of the Chat Completions API versus the Assistants API. I also see that the `functions` parameter for creating a chat completion is deprecated, yet the deprecation isn’t mentioned in the function-calling section of the docs. And the use cases listed seem very similar.

Can someone help clear this up, and provide a comparison of when function calling using Chat Completions or the Assistant is the more appropriate choice?


The reason for the deprecation is the new “tool call” feature, which offers similar but extensible functionality.

The main “feature” (if you will) of tool calls is that they prevent the developer from using a lone function-role message to usefully place simulated document retrieval into the AI context. You need to put a complete assistant tool call and a tool response that answers only that call, with a matching ID, into the conversation as a pair. Likely soon to come: ID enforcement, so you can’t inject anything the AI didn’t actually request.

With a function, you supply the API with a specification of a capability that you provide. You might give the AI “trigonometry_calculator” or “moon_phase”, for example, to extend its answering abilities.

Then, when the AI sees that utility as useful for fulfilling the user’s question, it will emit the special function-call language, and you receive a differently formatted API response. You handle that in your code, optionally calling external services, databases, etc., and return the function’s output.

After adding the AI’s function request and the function response to the conversation and calling the API again, the AI will answer the user with its new-found information, or alternatively call functions again.
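The round trip described above can be sketched roughly as follows. This is a minimal illustration, not a full client: the `get_moon_phase` tool and its placeholder implementation are invented for the example, and the actual network call is left as a comment.

```python
import json

# 1. The tool specification sent with each request (the "moon phase"
#    example from above; the name and schema are illustrative).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_moon_phase",
            "description": "Return the moon phase for a given date",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {"type": "string", "description": "ISO date"},
                },
                "required": ["date"],
            },
        },
    }
]

def get_moon_phase(date: str) -> str:
    # Placeholder; a real version would compute or look this up.
    return "waxing gibbous"

# 2. Suppose the model's response contained this tool call
#    (the shape of a Chat Completions `tool_calls` entry):
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {"name": "get_moon_phase",
                 "arguments": '{"date": "2024-01-31"}'},
}

# 3. Run the function, then append the assistant's call and the tool
#    result as a pair with the same ID before calling the API again.
args = json.loads(tool_call["function"]["arguments"])
result = get_moon_phase(**args)

messages = [
    {"role": "user", "content": "What phase is the moon in on 2024-01-31?"},
    {"role": "assistant", "content": None, "tool_calls": [tool_call]},
    {"role": "tool", "tool_call_id": tool_call["id"], "content": result},
]
# client.chat.completions.create(model=..., messages=messages, tools=tools)
```

Note the pairing: the `tool` message’s `tool_call_id` must match the ID in the preceding assistant message, which is exactly the constraint discussed above.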

For now, Assistants is not the choice. It is a beta product with many things still to be worked out.


Can you provide a link to an example of using functions in the Assistants API?

@manuelmaccou did you figure it out?
I’m exploring the advantages of using function calling in the ‘Chat Completions API’ vs the ‘Assistants API’.

And even generally speaking, regardless of function calling, is the ‘Instructions’ part of the ‘Assistants API’ equivalent to the ‘system’ message in the ‘Chat Completions API’?

Assuming we don’t need ‘Code Interpreter’, which seems exclusive to the ‘Assistants API’, I don’t fully understand when to use which option. Is there a source that outlines the advantages of the ‘Assistants API’ over the ‘Chat Completions API’?
Is it configured differently behind the scenes?


Assistants is an agent framework. Your instruction is similar to a system prompt, but is just one part of the input, which also includes other messaging and internal functions that are out of your control.

Assistants allows (and encourages) the AI to make multiple calls persistently: calling internal functions to retrieve parts of uploaded documents, writing Python code and emitting it by function to a sandbox that can run it, and finally emitting functions back to you in similar fashion to Chat Completions.

It also keeps a record of user inputs and AI answers that make up a conversation, but only lets you place a user question, run the thread, wait for an answer to be created, check the status, and then download the finished answer. Or you may find that the AI has been waiting for you to run a tool function for it.
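The “run the thread, check the status, download the answer” loop described above can be sketched as a small status dispatcher. The status names follow the documented Assistants run lifecycle; the API calls themselves (which would use the `openai` SDK’s beta threads endpoints) are shown only as comments, so treat this as an outline of the control flow rather than working client code.

```python
# States in which a run is finished and will not change further.
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def next_action(status: str) -> str:
    """Map an Assistants run status to what your code should do next."""
    if status in TERMINAL:
        return "fetch_messages" if status == "completed" else "handle_error"
    if status == "requires_action":
        # The AI is waiting for you to run a tool function and return its output.
        return "submit_tool_outputs"
    return "keep_polling"  # e.g. "queued" or "in_progress"

# The surrounding loop, roughly (openai Python SDK assumed, calls elided):
#
# run = client.beta.threads.runs.create(thread_id=..., assistant_id=...)
# while next_action(run.status) == "keep_polling":
#     time.sleep(1)
#     run = client.beta.threads.runs.retrieve(thread_id=..., run_id=run.id)
# if next_action(run.status) == "submit_tool_outputs":
#     # run your function, then submit its result back to the run
#     ...
```

The `requires_action` branch is the case mentioned above, where you discover the AI has been waiting for you to run a tool function.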

The AI is loaded with the maximum amount of conversation and the maximum amount of documents that will fit in the model. There is no token usage statistic in the API return to show how much you will be billed.

So, the advantage is that some things people were doing themselves can now be done for you, but with coding made more complex just to interact with the API. And the disadvantage is significant: no control.

Here’s a post where I laid out the inputs to tools (as a function API replacement, currently only for preview models), what you receive back, and what you then fulfill. In actual code, much of what you see would not look exactly as illustrated.


Appreciate the reply @_j , much clearer now

Thanks, this is one of the best explanations I have read.

I have been starting to use Assistants to build an agent framework. But I have been struggling to find documentation on how agents can communicate independently with each other and how to ensure that they use the context information from the thread (the official documentation seems to rather focus on the structure of threads, messages, etc).

You write: “Assistants allows (and encourages) the AI to make multiple calls persistently: calling internal functions to retrieve parts of uploaded documents, …”

Does that refer to agents independently communicating with each other, and do you know any good tutorials or documentation of this?