Hey folks. I’m about to show how to properly insert your function’s return value back into the AI’s chat history, something far too useful for OpenAI to document properly.
chat_completion_parameters = {
    "model": "gpt-3.5-turbo",
    "top_p": 0.5,
    "messages": [
        {"role": "system", "content": "You are MegaBot, my fine-tune AI identity."},
        # chat history goes here
        {"role": "user", "content": "Who won the 2024 election?"},
        # replay the assistant turn that made the function call, verbatim
        {"role": "assistant",
         "content": assistant_content_if_exist,
         "function_call": {
             "name": called_function_name,
             "arguments": called_function_args_json,
         }},
        # then append your function's return value as a "function" role message
        {"role": "function",
         "name": called_function_name,
         "content": "Rudy Giuliani! LOL."},
    ]}
When the AI emits a finish_reason of “function_call”, it gives you three things (pulled out in the sketch after this list):
- a function name, called_function_name above;
- a function arguments string, called_function_args_json;
- and possibly some chat content too, assistant_content_if_exist.
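Here’s a minimal sketch of pulling those three values out of the first response, assuming the legacy openai-python 0.x response shape:

first_message = first_response["choices"][0]["message"]
called_function_name = first_message["function_call"]["name"]
called_function_args_json = first_message["function_call"]["arguments"]
assistant_content_if_exist = first_message.get("content")  # may be None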
The arguments string might not be JSON, or might not be valid at all, but giving the AI back its own wrong output is part of iterative correction.
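One way to do that, as a sketch; run_function here is a hypothetical dispatcher of your own, not anything OpenAI provides:

import json

try:
    parsed_args = json.loads(called_function_args_json)
except json.JSONDecodeError as err:
    # hand the parse error back as the function result so the AI
    # can correct its own malformed arguments on the next turn
    function_result = f"Error: arguments were not valid JSON ({err})"
else:
    function_result = run_function(called_function_name, parsed_args)  # hypothetical dispatcher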
The AI might not have prefaced the function call with any chat content, but it can. Sending null or an empty string “” there is also OK.
This might help your chatbot where you didn’t give it hundreds or thousands of fine-tune function-call examples like OpenAI did. The chat history you provide here should be in the same format you fine-tuned your function return on, as per the single tutorial.
The entire chat history used before should be passed again in the second API call, now with the function return appended, along with the function definitions, so the AI knows what it called and what it might call again.
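Putting that together, a sketch of the second call using the legacy openai-python 0.x interface; the get_weather schema is a placeholder for whatever functions you actually offer:

import openai

function_definitions = [{
    "name": "get_weather",  # placeholder: use your real function schema
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# same messages as before, now ending with the "function" role return
chat_completion_parameters["functions"] = function_definitions
second_response = openai.ChatCompletion.create(**chat_completion_parameters)
print(second_response["choices"][0]["message"]["content"])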
You should keep including those assistant and function role messages losslessly on each additional turn, until the AI is done with its function usage and only answers the user.
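As a closing sketch, that whole turn loop under the same assumptions (legacy 0.x SDK, the hypothetical run_function from above, which must return a string):

import json
import openai

while True:
    response = openai.ChatCompletion.create(**chat_completion_parameters)
    choice = response["choices"][0]
    message = choice["message"]
    # replay the assistant turn verbatim, function_call and all
    chat_completion_parameters["messages"].append(message)
    if choice["finish_reason"] != "function_call":
        break  # no function call this turn: the AI answered the user
    name = message["function_call"]["name"]
    args_json = message["function_call"]["arguments"]
    # in real code, reuse the JSON validation shown earlier
    result = run_function(name, json.loads(args_json))
    chat_completion_parameters["messages"].append(
        {"role": "function", "name": name, "content": result})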