Hi there! I’m new here, so please forgive any poor choices. I have been playing around with the OpenAI API for a few months now, and this is how I previously handled function calls and streaming in Python:
```python
# Old (pre-tool-calls) handler for streamed function_call deltas
if chunk["choices"][0]["delta"].get("function_call"):
    delta = chunk["choices"][0]["delta"]
    # The function name arrives once, in the first delta
    if "name" in delta["function_call"]:
        function_name = delta["function_call"]["name"]
    # Arguments stream in as string fragments; concatenate as they arrive
    function_arguments_chunk = delta["function_call"].get("arguments", "")
    function_arguments += function_arguments_chunk
    print(function_arguments_chunk, end="", flush=True)
    function_called = True
```
However, since function calls are now deprecated in favor of tool calls, I was wondering if anyone had a solution to get something like this working with the new gpt-4-1106-preview model, with streaming and handling multiple parallel tool calls?
I have deduced that a completed tool call is now signalled by `finish_reason == "tool_calls"`; however, I am unsure whether that is also how it surfaces while streaming a response.
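From what I can tell (untested against the new model, so treat this as a guess), while streaming each chunk's `delta.tool_calls` carries partial fragments, and each fragment has an `index` saying which parallel call it belongs to. Here is a rough sketch of the accumulation logic I was planning to try, using plain dicts to simulate the stream rather than real chunk objects:

```python
def accumulate_tool_calls(deltas):
    """Merge streamed tool_call deltas into complete calls, keyed by index.

    Each item in `deltas` stands in for one chunk's delta.tool_calls list:
    dicts with an "index" plus partial "id" / "function" fields.
    """
    calls = {}
    for delta in deltas:
        for tc in delta:
            call = calls.setdefault(
                tc["index"], {"id": None, "name": "", "arguments": ""}
            )
            if tc.get("id"):  # id arrives once, in the first fragment
                call["id"] = tc["id"]
            fn = tc.get("function") or {}
            if fn.get("name"):
                call["name"] += fn["name"]
            # Arguments stream in as JSON string fragments; concatenate them
            if fn.get("arguments"):
                call["arguments"] += fn["arguments"]
    return [calls[i] for i in sorted(calls)]


# Simulated stream: two parallel tool calls, arguments split across chunks
# (the names "get_weather" / "get_time" are just made-up examples)
stream = [
    [{"index": 0, "id": "call_a", "function": {"name": "get_weather", "arguments": ""}}],
    [{"index": 0, "function": {"arguments": '{"city": '}}],
    [{"index": 1, "id": "call_b", "function": {"name": "get_time", "arguments": ""}}],
    [{"index": 0, "function": {"arguments": '"Paris"}'}}],
    [{"index": 1, "function": {"arguments": '{"tz": "UTC"}'}}],
]
calls = accumulate_tool_calls(stream)
```

The idea is that once the stream ends (presumably with `finish_reason == "tool_calls"`), each entry in `calls` holds a complete name and a complete JSON arguments string ready for `json.loads`.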
I’ll have to do some more digging, but any help is appreciated!
Many thanks