Hi,
I'm using the OpenAI API for Chat Completions with function calling (tools).
Using the OpenAI Python SDK (v1.x), I request a chat completion and pass tools to the call, in streaming mode.
The script (sanitised):

# client, messages and tools are defined earlier (removed for brevity)
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    stream=True,
)

for chunk in response:
    print(chunk)
Here is the output:
ChatCompletionChunk(id='', choices=[], created=0, model='', object='', system_fingerprint=None, prompt_filter_results=[{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], usage=None)
ChatCompletionChunk(id='chatcmpl-8fVOkHKSPBG1RgDbHq4y2v5TF2Cex', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, role='assistant', tool_calls=None), finish_reason=None, index=0, logprobs=None, content_filter_results={})], created=1704902834, model='gpt-4', object='chat.completion.chunk', system_fingerprint=None, usage=None)
ChatCompletionChunk(id='chatcmpl-8fVOkHKSPBG1RgDbHq4y2v5TF2Cex', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None, content_filter_results={})], created=1704902834, model='gpt-4', object='chat.completion.chunk', system_fingerprint=None, usage=None)
(the same chunk, with every delta field still None and finish_reason=None, repeats 13 more times)
ChatCompletionChunk(id='chatcmpl-8fVOkHKSPBG1RgDbHq4y2v5TF2Cex', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, role=None, tool_calls=None), finish_reason='tool_calls', index=0, logprobs=None, content_filter_results={})], created=1704902834, model='gpt-4', object='chat.completion.chunk', system_fingerprint=None, usage=None)
Why is the finish_reason in the last chunk tool_calls? The API never returned any tool_calls in the chunks, and the content doesn't look to be filtered either. In this case the model apparently expects me to call a tool, but it never tells me which one?
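For reference, here is roughly the accumulation loop I use instead of the bare print loop above (a simplified sketch; the dict-based accumulation is just my own helper logic, not anything from the SDK). Even with this, tool_calls stays empty for the run shown:

# Sketch: accumulate streamed content and tool call deltas from a fresh stream.
tool_calls = {}        # index -> {"id", "name", "arguments"}
content_parts = []

for chunk in response:
    if not chunk.choices:              # e.g. the prompt_filter_results chunk
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        content_parts.append(delta.content)
    for tc in delta.tool_calls or []:
        call = tool_calls.setdefault(tc.index, {"id": "", "name": "", "arguments": ""})
        if tc.id:
            call["id"] = tc.id
        if tc.function and tc.function.name:
            call["name"] = tc.function.name
        if tc.function and tc.function.arguments:
            call["arguments"] += tc.function.arguments

print("content:", "".join(content_parts))
print("tool_calls:", tool_calls)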
Also, this is not consistent: with the same list of messages, I sometimes get a normal completion or a valid tool call instead.
What are your thoughts on this issue?