Getting ChatGPT to Remember Previous Chat Messages

For example, I ask a question: “How old is Mark Zuckerberg?”, and it replies with the correct age, but when I immediately ask, “Where was he born?”, it no longer remembers who I’m referring to. I know it has to do with appending the previous chat messages, but I’m not quite sure how it’s implemented.

Here’s what I’m working with:

import openai

# replace YOUR_API_KEY with your actual API key for the ChatGPT service
openai.api_key = "YOUR-API-KEY-GOES-HERE!!!"

prompt = "What are the latest models of Tesla cars?"

# Use the OpenAI API to generate a response
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=1024,
)

# Print the response
print(response.choices[0].text.strip())
You have to feed the conversation history back in before the new prompt
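A minimal sketch of that idea using plain string concatenation (the “You:”/“ChatGPT:” labels and the sample answer are just for illustration, not required by the API):

```python
def build_prompt(history, question):
    """Prepend the running conversation to the new question."""
    return history + f"You: {question}\nChatGPT:"

def record_turn(history, question, answer):
    """Append the completed exchange so the next prompt includes it."""
    return history + f"You: {question}\nChatGPT: {answer}\n"

history = ""
prompt = build_prompt(history, "How old is Mark Zuckerberg?")
# ...send `prompt` to the completion endpoint; suppose it returns an answer:
history = record_turn(history, "How old is Mark Zuckerberg?", "He is 38.")

# The follow-up prompt now carries the earlier exchange, so "he" resolves:
prompt = build_prompt(history, "Where was he born?")
```

Because the follow-up prompt contains the first question and answer, the model can resolve the pronoun.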


Right, as previously stated, I get the concept, just not quite sure how it’s implemented. Do you have an example? Thanks.


Also, I’m not sure if I’ve written the correct syntax for the prompt. Does it need to follow the format “You: then my message goes here.” and “ChatGPT: I can’t tell you that…go look it up yourself, lazy BAS**RD!”?

Here’s one of my basic chatbot implementations. I’ve refactored it since, but it should do what you’re looking for.

import openai

# Set the API key
openai.api_key = "YOUR-API-KEY-GOES-HERE"

# Choose a model
MODEL_ENGINE = "text-davinci-003"

def get_response(prompt):
    """Returns the response for the given prompt using the OpenAI API."""
    completions = openai.Completion.create(
             engine = MODEL_ENGINE,
             prompt = prompt,
         max_tokens = 1024,
        temperature = 0.7,
    )
    return completions.choices[0].text

def handle_input(
               input_str : str,
    conversation_history : str,
                USERNAME : str,
                 AI_NAME : str,
):
    """Updates the conversation history and generates a response using GPT-3."""
    # Update the conversation history
    conversation_history += f"{USERNAME}: {input_str}\n"

    # Generate a response using GPT-3
    message = get_response(conversation_history)

    # Update the conversation history
    conversation_history += f"{AI_NAME}: {message}\n"

    # Print the response
    print(f'{AI_NAME}: {message}')
    return conversation_history

# Set the initial prompt to include a personality and habits
INITIAL_PROMPT = "I am a friendly artificial intelligence."
conversation_history = INITIAL_PROMPT + "\n"

# Names used to label each turn in the transcript
USERNAME = "User"
AI_NAME = "AI"

while True:
    # Get the user's input
    user_input = input(f"{USERNAME}: ")

    # Handle the input
    handle_input(user_input, conversation_history, USERNAME, AI_NAME)

You can edit it however you like, but presumably you wanted some sort of back-and-forth?


Thanks for the response Dent! I’ll take a look over this once I get the chance. I’ll be travelling to another city for vacation today, but I’ll give you an update once I give it a shot. Thanks again. :+1: :+1:


Oh, and to answer your question: I just don’t like having to keep repeating a specific person, place, or thing after asking about it once, when I could simplify the next question to something like “and what is the population there?” (referring to a place already asked about).


Oh, yeah, the code I provided feeds the convo history back in, so it should understand which antecedent you’re referring to when using pronouns in your follow-up question


Awesem-O!!! I’ll give it a shot in a few… I’ve got to travel a few hours, then give it a go tonight.


This explains how to continue a conversation (along with the problems you will face, and possible solutions)



I hope you see this. I’ve been looking around and I haven’t yet found a good solution for it.

Can I add more data to the model and then interact with it? For example: can I feed my entire Git repo into the model and then ask questions about the data? Can I give it an entire chat history and then ask it questions about it? I’ve seen embeddings recommended, but that still doesn’t fully address my requirements. Sure, I can give it a rolling chat history as part of the API request and it will “remember” some things, but what I want is to put my data into the model itself and basically expand its “brain”.

Do you know of a way to do this?


I solved the issue with a variable that stores previous messages and possible context from a file.
The reason I ended up using files is that, for example, a power outage could cut the conversation short; this way it’s preserved in a file. Of course, SQL or other storage solutions could be considered as well…

In my program, I used files for simplicity and mostly as a proof of concept. You can find it on GitHub: GitHub - EkBass/AiConsoleChat: OpenAi Chat Bot with some neat features
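The persistence idea can be sketched in a few lines (the filename and message format here are illustrative, not taken from the repo above):

```python
import json
from pathlib import Path

HISTORY_FILE = Path("conversation_history.json")  # hypothetical filename

def load_history():
    """Restore past messages, e.g. after a crash or power outage."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def save_history(history):
    """Write after every turn so nothing is lost mid-conversation."""
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

history = load_history()
history.append({"role": "user", "content": "Hello!"})
save_history(history)
```

Writing after every turn means a restarted program picks up exactly where the conversation left off.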


This is also a big problem I’m facing, and I’m working hard to solve it. Finding a solution would be very helpful.

This is great, thanks for posting. I’ve tried to implement something similar using deque() to limit how many questions are “remembered”. The problem I’m facing is that ChatGPT re-answers every previous question each time I submit the string with history. Do you have a way to handle this behaviour?

When you send the data to the API, make sure the list elements are in order (system, user, assistant, user, assistant, etc.) and that the last element in the list is the new user prompt. If there is a previous user prompt without an immediately following assistant response, that specific unpaired prompt is the one the assistant’s answer will address.

-system: take on this behavior
-user: prompt goes here
-assistant: (the API response will be in response to the above)

-system: take on this behavior
-user: prompt goes here
-assistant: responds to the above user prompt
-user: prompt goes here
-assistant: (the API response will be in response to the above)

If you’re having trouble, ask daVinci or chatGPT.
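To make the ordering rule concrete, here is a sketch using the chat-completion message format (the helper function is just an illustration of the pairing rule, not part of the API, and the actual call is shown as a comment):

```python
# A well-ordered history: system first, then strictly alternating
# user/assistant pairs, ending with the unanswered user prompt.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How old is Mark Zuckerberg?"},
    {"role": "assistant", "content": "He was born on May 14, 1984."},
    {"role": "user", "content": "Where was he born?"},
]

def is_well_ordered(messages):
    """Check the ordering rule described above."""
    if messages[0]["role"] != "system" or messages[-1]["role"] != "user":
        return False
    # After the system message, roles must alternate user/assistant.
    expected = ["user", "assistant"]
    return all(m["role"] == expected[i % 2]
               for i, m in enumerate(messages[1:]))

# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# messages.append({"role": "assistant", "content": response["choices"][0]["message"]["content"]})
```

After each response, append it as an assistant message so the next user prompt is again the only unpaired element.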

Yes… this happens if you use the completion API method and you send an array of prompts to the API using the prompt param.

If you want specific help, you need to post your code so we can see what your API params are and which API endpoint you are calling.




Thanks for the code, I’m using the same approach for this problem. Now I’m facing a new problem: the maximum number of tokens the API can handle per request (I believe it’s only 4,097 tokens). How can I keep this “memory” feature within that limit?
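One common way to stay under the limit is to drop the oldest turns once the history gets too long. A sketch, assuming a chat-style message list; the chars-per-token heuristic and the budget value are illustrative (a real tokenizer such as tiktoken would be more accurate):

```python
MAX_PROMPT_TOKENS = 3000  # leave headroom under 4,097 for the model's reply

def estimate_tokens(text):
    """Crude estimate: roughly 4 characters per token for English text."""
    return len(text) // 4

def trim_history(messages, budget=MAX_PROMPT_TOKENS):
    """Keep the system message; drop the oldest turns until under budget."""
    system, rest = messages[0], list(messages[1:])
    def total():
        return sum(estimate_tokens(m["content"]) for m in [system] + rest)
    while rest and total() > budget:
        rest.pop(0)  # discard the oldest turn first
    return [system] + rest
```

Instead of discarding old turns outright, some people summarize them into a single message so the gist of the early conversation survives.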


Would love to hear some feedback on this!

Thanks Dent. This is a nice little script for understanding prompt engineering. I think there is a slight issue with your code snippet: the last line should read
conversation_history = handle_input(user_input, conversation_history, USERNAME, AI_NAME)
otherwise it does not retain the history.

Also, it is not clear to me whether there is any benefit in using the openai.ChatCompletion.create interface for this application, building up the messages parameter with roles like system, assistant, and user.
