Chat History with Functions Enabled Example

Hi,

In case you haven’t stumbled on any examples yet, I thought it would be useful to share how I implemented chat history while using the functions parameter of the ChatCompletion method with the gpt-3.5-turbo-0613 or gpt-4-0613 models (current at the time of this posting).

I’m using Python, and I’m very new to programming, so if I didn’t explain something well enough or you still have questions, please reach out; I’m happy to help!

EDIT: After I posted this, I realized that when a function is called, the model sometimes includes a text response alongside the function call. The code below still works, but you’ll also want to extract the content of that initial response to append to the chat history and share with the user. I’m working on that now, so let me know if you need help.
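A minimal sketch of what I mean (assuming `response_message` is the first completion's message dict, as in the code below; the helper name is just for illustration):

```python
# Sketch: capture any text the model returned alongside a function call.
# Assumes response_message looks like the first completion's message, e.g.
# {"role": "assistant", "content": "Let me look that up...", "function_call": {...}}
def extract_initial_content(response_message, chat_history):
    content = response_message.get("content")  # may be None when only a function_call is returned
    if content:
        # store it in the same role/content format used for the rest of the history
        chat_history.append({"role": "assistant", "content": content})
    return content
```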

Thanks,
-Serghei

The code:

#Imports this snippet needs (json for parsing function arguments, plus Flask and openai)
import json

import openai
from flask import Flask, request, jsonify, session as flask_session

app = Flask(__name__)
app.secret_key = "CHANGE_ME"  #Flask sessions require a secret key

#This is a route that takes user input from a chatbox
@app.route('/generate_gpt_response', methods=['POST'])
def generate_gpt_response():

    #this stores the user input as a variable
    user_input = request.form.get('user_input')

    #this pulls the chat history out of the Flask session (empty list on the first request)
    chat_history = flask_session.get('chat_history', [])

    #printing was essential for me to trace the chat history
    print("Initial Chat History:", chat_history)

    #append the user input to the chat history (correct format for the model)
    chat_history.append({"role": "user", "content": user_input})

    #only keep the last 10 chat history elements so we don't exceed the token limit
    chat_history = chat_history[-10:]



    system_message = "INSERT YOUR SYSTEM_MESSAGE HERE"

    messages = [
        {"role": "system", "content": system_message},
    ]

    # Add chat history to the messages array
    messages.extend(chat_history)

    #The trusty print again
    print("Chat history after appending user input:", chat_history)


    # Create the chat completion
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo-0613',
        messages=messages,
        max_tokens=500,
        temperature=0.1,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        n=1,
        functions=functions,
        function_call="auto",
        stop=None
    )

    #this response message holds the entire output from the model
    response_message = response['choices'][0]['message']

    #it's fun to see what it contains
    print("Print response_message #1:", response_message)

    #extract just the string that is the model's classic text "response" to the prompt
    assistant_message = {"role": "assistant", "content": response_message['content']}

    #Print again
    print("print assistant_message #1:", assistant_message)

#check to see if the model chose to use any functions
    #check to see if the model chose to call a function
    if response_message.get("function_call"):
        available_functions = {
            "YOUR_FUNCTION_NAME": YOUR_METHOD_FUNCTION_USES,
        }
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            prompt=function_args.get("prompt")
        )

        #append the assistant message containing the function_call first,
        #then the function's result, so the model sees both on the next call
        messages.append(response_message)
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )
        print("messages after appending function response #2", messages)

        #second chat completion after retrieving the function response from above
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )
        print("Second Response", second_response)

        #extract just the content (the response string) from the response
        content = second_response['choices'][0]['message']['content']
        print("content of second response", content)

        # Wrap the content in the role/content dictionary format the model expects
        content_dict = {"role": "assistant", "content": content}

        # add the above role/assistant/content dictionary to the chat history
        chat_history.append(content_dict)

        # Keep only the last 10 messages
        chat_history = chat_history[-10:]
        flask_session['chat_history'] = chat_history
        print("Chat History After appending the second response:", chat_history)  # Debug print

        return jsonify({'answer': content})



    #if no function was called, store the assistant's reply in the history as well
    chat_history.append(assistant_message)
    chat_history = chat_history[-10:]
    flask_session['chat_history'] = chat_history

    print("response_message after function check", response_message)
    return jsonify({'answer': assistant_message['content']})
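One thing the listing above doesn't show is the `functions` variable it passes to ChatCompletion.create. Here's a minimal sketch of what it could look like; the name `get_example_data`, its description, and its behavior are made-up placeholders, so swap in your own function and schema:

```python
import json

# Hypothetical function the model can call; name and behavior are placeholders.
def get_example_data(prompt):
    return json.dumps({"result": f"data for: {prompt}"})

# The `functions` list passed to ChatCompletion.create: each entry has a name,
# a description, and a JSON Schema describing the arguments the model should produce.
functions = [
    {
        "name": "get_example_data",
        "description": "Fetch example data for the user's request",
        "parameters": {
            "type": "object",
            "properties": {
                "prompt": {
                    "type": "string",
                    "description": "The user's request, passed through to the function",
                }
            },
            "required": ["prompt"],
        },
    }
]

# Mirrors the available_functions lookup in the code above
available_functions = {"get_example_data": get_example_data}
```

The model only ever returns the function name and a JSON string of arguments; your code does the actual lookup and call, which is why the `available_functions` dictionary exists.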