How do you maintain historical context in repeat API calls?

I’ve got this working in a Google Sheet:

Feeding an NLP analysis of the last response back into the follow-up prompt helps sustain conversation flow. GPT is asked to extract keywords, named entities, context and sentiment from the response, and these are prepended to the follow-up interaction. In this way the conversation flow appears to be sustained.

Topic, DoNLP, LastResponse and FollowUp are range names.

DoNLP
analyse the Prompt using NLP and return topic, context, named entities, keywords and sentiment and then respond to the Follow Up question :

FollowUp
Who were the main characters

In A10 is the formula ="On the topic of: "&Topic&" "&DoNLP&CHAR(10)&CHAR(10)&LastResponse&CHAR(10)&"Follow up: "&FollowUp

Where the last response was about Bulgakov’s novel the White Guard the next Prompt becomes:

analyse the Prompt using NLP and return topic, context, named entities, keywords and sentiment and then respond to the Follow Up question : The White Guard was written by the Ukrainian writer Mikhail Bulgakov. It is a novel that depicts the events of the Ukrainian Revolution of 1918 and the subsequent civil war in Ukraine. Bulgakov is also known for his famous novel, The Master and Margarita.

(Source gpt-3.5-turbo Temperature 0.7)
and respond to the follow up question : Who were the main characters
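The same concatenation can be sketched in Python. This is a minimal illustration of the sheet formula, assuming the same four range names; `build_followup_prompt` is an illustrative helper, not part of any API:

```python
def build_followup_prompt(topic, do_nlp, last_response, follow_up):
    """Mirror the A10 formula: topic header and NLP instruction,
    a blank line, the previous response, then the follow-up question."""
    return (
        f"On the topic of: {topic} {do_nlp}\n\n"
        f"{last_response}\n"
        f"Follow up: {follow_up}"
    )
```

Each new turn, you would overwrite `last_response` with the model's latest answer and rebuild the prompt.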


From the docs over here: OpenAI API

# Note: you need to be using OpenAI Python v0.27.0 for the code below to work
import openai

openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

If you use Python, you can wrap this in a function like

import openai

def chat_with_gpt3(history, newprompt):
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    messages += history
    messages.append({"role": "user", "content": newprompt})
    
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    
    return response.choices[0].message.content.strip()

and call it like

history = [
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
]

newprompt = "Where was it played?"

result = chat_with_gpt3(history, newprompt)
print(result)

I’m not sure if this perfectly answers the question above, but hopefully folks coming across this thread can find it useful. You can keep appending previous prompts, and you could also create a ‘summarize’ function (again using GPT) to shorten long conversations, updating the history to simply:

history = [
    {"role": "user", "content": "Summarize our conversation so far"},
    {"role": "assistant", "content": "{summary}."},
]
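A sketch of that summarize step, using the same v0.27.0 `ChatCompletion` API as the snippets above (`compress_history` and `summarize_history` are illustrative names, not part of the OpenAI library):

```python
def compress_history(summary):
    """Rebuild the history as the two-message summary form shown above."""
    return [
        {"role": "user", "content": "Summarize our conversation so far"},
        {"role": "assistant", "content": summary},
    ]

def summarize_history(history, model="gpt-3.5-turbo"):
    """Ask the model itself to produce the summary, then swap it in."""
    import openai  # OpenAI Python v0.27.0, as in the snippets above

    response = openai.ChatCompletion.create(
        model=model,
        messages=history
        + [{"role": "user", "content": "Summarize our conversation so far"}],
    )
    return compress_history(response.choices[0].message.content.strip())
```

You could call `summarize_history` whenever the running history gets long, then continue passing the compressed version into `chat_with_gpt3`.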
