Function calling vs tools vs JSON output

Can someone clarify the difference between function calling, tools, and just asking for JSON mode / getting a JSON output?

Suppose I want gpt-3.5-turbo to read some text and output JSON. Would I use tools for that, function calling, or JSON mode?

How does function calling work then?

You would not need function calling for that - you would simply ask the model to output JSON. Function calling is for when you either want to contribute something external to the conversation, or have GPT trigger something external during the conversation.
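For the plain "read text, return JSON" case, a minimal sketch of the request looks like this. It builds the payload as a dict you would pass to `client.chat.completions.create(**payload)` with the official `openai` Python SDK; the extraction task and the example reply are hypothetical:

```python
import json

# Request payload for JSON mode - pass to client.chat.completions.create(**payload)
# with the official openai SDK (needs OPENAI_API_KEY set in the environment).
payload = {
    "model": "gpt-3.5-turbo",
    "response_format": {"type": "json_object"},  # turns on JSON mode
    "messages": [
        # JSON mode requires the word "JSON" to appear somewhere in the prompt.
        {"role": "system", "content": "Extract the person's name and age as JSON."},
        {"role": "user", "content": "Alice is 30 years old."},
    ],
}

# A typical reply content would be a JSON string such as:
reply = '{"name": "Alice", "age": 30}'
data = json.loads(reply)  # parse it like any other JSON
```

Note that no function schema is involved anywhere - you are just constraining the shape of the assistant's normal text reply.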


Function calling effectively gives you JSON by default, since the arguments are always returned as a JSON string. Plain JSON mode seems to be more sensitive to schema errors and usually needs one- or few-shot examples in the prompt, whereas function calling only requires the function schema, so I would prefer function calling overall.
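To illustrate the function-calling route, here is a sketch of the `tools` schema and request payload; the `get_weather` function, its parameters, and the example arguments are all hypothetical, and the payload would go to `client.chat.completions.create(**payload)` with the official SDK:

```python
import json

# A function schema passed via the `tools` parameter - the model does not
# run this function; it only emits arguments matching the schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "gpt-3.5-turbo",
    "tools": tools,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}

# Instead of plain text, the model replies with a tool call whose
# `arguments` field is a JSON string matching the schema, e.g.:
args = json.loads('{"city": "Paris"}')
```

The key difference from JSON mode is that the structure comes from the schema itself, so you rarely need few-shot examples in the prompt.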