For example, I ask a question: “How old is Mark Zuckerberg?” It replies with the correct age, but when I immediately ask, “Where was he born?”, it no longer remembers who I’m referring to. I know it has to do with appending the previous chat messages, but I’m not quite sure how it’s implemented.
Here’s what I’m working with:
import openai

# Replace with your actual API key for the OpenAI service
openai.api_key = "YOUR-API-KEY-GOES-HERE"

prompt = "What are the latest models of Tesla cars?"

# Use the OpenAI API to generate a response
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    temperature=0.5,
    max_tokens=1024,
    top_p=1,
    frequency_penalty=1,
    presence_penalty=1,
)

# Print the response
print(response["choices"][0]["text"])
Also, I’m not sure if I’ve written the correct syntax for the prompt. Does it need to follow the “You: then my message goes here.” and “ChatGPT: I can’t tell you that…go look it up yourself, lazy BAS**RD!” format?
Here’s one of my basic chatbot implementations. I’ve refactored it since, but it should do what you’re looking for
import openai

# Set the API key
openai.api_key = <<YOUR PREFERRED METHOD OF FETCHING API KEYS>>

# Choose a model
MODEL_ENGINE = "text-davinci-003"

def get_response(prompt):
    """Returns the response for the given prompt using the OpenAI API."""
    completions = openai.Completion.create(
        engine=MODEL_ENGINE,
        prompt=prompt,
        max_tokens=1024,
        temperature=0.7,
    )
    return completions.choices[0].text

def handle_input(
    input_str: str,
    conversation_history: str,
    USERNAME: str,
    AI_NAME: str,
):
    """Updates the conversation history and generates a response using GPT-3."""
    # Update the conversation history with the user's message
    conversation_history += f"{USERNAME}: {input_str}\n"
    # Generate a response using GPT-3
    message = get_response(conversation_history)
    # Update the conversation history with the model's reply
    conversation_history += f"{AI_NAME}: {message}\n"
    # Print the response
    print(f"{AI_NAME}: {message}")
    return conversation_history

# Set the initial prompt to include a personality and habits
INITIAL_PROMPT = '''
I am a friendly artificial intelligence.
'''

conversation_history = INITIAL_PROMPT + "\n"
USERNAME = "USER"
AI_NAME = "AI"

while True:
    # Get the user's input
    user_input = input(f"{USERNAME}: ")
    # Handle the input
    handle_input(user_input, conversation_history, USERNAME, AI_NAME)
Thanks for the response Dent! I’ll take a look over this once I get the chance. I’ll be travelling to another city for vacation today, but I’ll give you an update once I give it a shot. Thanks again.
Oh, and to answer your question: I just don’t like having to keep repeating a specific person, place, or thing with every question, when I could simplify the next question by saying something along the lines of “and what is the population there?” (referring to a place already asked about).
Oh, yeah, the code I provided feeds the conversation history back in, so it should understand which antecedent you’re referring to when you use pronouns in your follow-up question.
I hope you see this. I’ve been looking around and I haven’t yet found a good solution for it.
Can I add more data to the model and then interact with it? For example: can I feed my entire Git repo into the model and then ask questions about the data? Can I give it an entire chat history and then ask it questions about it? I’ve seen embeddings being recommended, but they still don’t fully address my requirements. Sure, I can give it a rolling chat history as part of the API request and it will “remember” some things, but what I want is to put my data into the model itself and basically expand its “brain”.
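Just to expand on the embeddings suggestion others have made: embeddings don’t add data to the model’s weights; instead you retrieve the most relevant chunk of your data and prepend it to the prompt at question time. A minimal sketch of the retrieval step, with the embedding call stubbed out (in practice the vectors would come from the Embeddings endpoint; all names here are my own):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_chunk(question_vec, chunks):
    """chunks: list of (text, vector) pairs.
    Returns the chunk text most similar to the question vector."""
    return max(chunks, key=lambda c: cosine(question_vec, c[1]))[0]

# The winning chunk is then prepended to the prompt, so the model
# "sees" the relevant slice of your repo or chat log for that question.
```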
I solved the issue with a variable that stores previous messages and possible context, loaded from a file.
The reason I ended up using files is that, for example, a power outage may cut the conversation short; this way it’s preserved on disk. Of course, SQL or other storage solutions could be considered as well…
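For anyone wanting to try the same thing, here’s a minimal sketch of that idea: persist the message list to a JSON file after every turn so a crash doesn’t lose the conversation (the file name and helper names are just placeholders):

```python
import json
import os

HISTORY_FILE = "conversation_history.json"  # hypothetical file name

def load_history():
    """Return the saved message list, or an empty list on first run."""
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE, "r", encoding="utf-8") as f:
            return json.load(f)
    return []

def save_history(history):
    """Write the message list back to disk after every turn."""
    with open(HISTORY_FILE, "w", encoding="utf-8") as f:
        json.dump(history, f)

# Per turn: history = load_history(); history.append(turn); save_history(history)
```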
This is great. Thanks for posting. I’ve tried to implement something similar using deque() to limit how many questions are “remembered”. The problem I’m facing is that ChatGPT will re-answer all of the previous questions every time I submit the string with history. Do you have a way to handle this behaviour?
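One way to keep a deque-limited window while avoiding re-answered questions is to store each element as a *completed* user/assistant pair, so only the final user message ever lacks an answer. A sketch of that idea (the window size and system prompt are arbitrary):

```python
from collections import deque

MAX_TURNS = 5  # arbitrary window size; oldest exchanges fall off first

history = deque(maxlen=MAX_TURNS)  # each element: (user_msg, assistant_msg)

def build_messages(history, new_user_prompt, system_prompt="You are helpful."):
    """Flatten paired turns into a role-tagged list where only the
    final user message is unanswered."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": new_user_prompt})
    return messages

# After receiving a reply: history.append((new_user_prompt, reply))
```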
When you send the data to the API, make sure the list elements are in order (system, user, assistant, user, assistant, etc.) and that the last element in the list is the user prompt. If there is a previous user prompt without an immediately corresponding assistant response, that unpaired user prompt is the one that will receive the assistant’s answer.
-system: take on this behavior
-user: prompt goes here
-assistant: (the API response will be in response to the above)
-system: take on this behavior
-user: prompt goes here
-assistant: responds to the above user prompt
-user: prompt goes here
-assistant: (the API response will be in response to the above)
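The ordering rule above can be turned into a quick sanity check run before each request (a sketch; the role strings are the ones the chat endpoint expects):

```python
def validate_order(messages):
    """Return True if the list ends with a user message and every earlier
    user message is immediately followed by an assistant reply."""
    if not messages or messages[-1]["role"] != "user":
        return False
    roles = [m["role"] for m in messages if m["role"] != "system"]
    for i, role in enumerate(roles[:-1]):
        if role == "user" and roles[i + 1] != "assistant":
            return False  # unpaired user prompt found mid-history
    return True
```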
Thanks for the code, I’m using the same approach for this problem. Now I’m facing a new problem: the maximum number of tokens the API can handle per request (I believe it’s only 4097 tokens). How can I keep this “memory” feature within that limit?
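A common workaround is to drop the oldest turns until the history fits a token budget. Here’s a rough sketch using a crude chars/4 estimate (the tiktoken library would give exact counts; the budget number is arbitrary):

```python
def rough_tokens(text):
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages, budget=3000):
    """Drop the oldest non-system messages until the estimated
    total fits within the token budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(rough_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # discard the oldest turn first
    return system + rest
```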
Thanks Dent. This is a nice little script to understand prompt engineering. I think there is a slight issue with your code snippet … I think that last line should read: conversation_history = handle_input(user_input, conversation_history, USERNAME, AI_NAME) otherwise it does not retain the history …
Also, it is not clear to me whether there is any benefit in using the openai.ChatCompletion.create interface for this application and building up the messages parameter and adding roles like system, assistant, user.
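For what it’s worth, the main benefit of openai.ChatCompletion.create here is that the role structure replaces the hand-rolled “USER:/AI:” labels and the explicit history string. A sketch of the equivalent turn handler (the model name and helper names are my own assumptions, and the API call is not invoked here):

```python
def start_conversation(system_prompt):
    """Seed the message list with a system message
    (this plays the role of the INITIAL_PROMPT string)."""
    return [{"role": "system", "content": system_prompt}]

def chat_turn(conversation, user_input):
    """Append the user turn, call the chat endpoint, append and return the reply."""
    import openai  # deferred import so the sketch reads without the SDK installed
    conversation.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption: a chat-capable model
        messages=conversation,
    )
    reply = response["choices"][0]["message"]["content"]
    conversation.append({"role": "assistant", "content": reply})
    return reply
```

Because the conversation list is mutated and reused each turn, it also sidesteps the unassigned-return-value pitfall mentioned above.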