Thanks for the response Dent! I’ll take a look over this once I get the chance. I’ll be travelling to another city for vacation today, but I’ll give you an update once I give it a shot. Thanks again. :+1: :+1:

2 Likes

Oh, and to answer your question: I just don’t like the idea of asking a question about a specific person, place, or thing, then having to keep repeating that person, place, or thing, when I could simplify the next question by saying something along the lines of “and what is the population there?” (referring to a place already asked about).

1 Like

Oh, yeah, the code I provided feeds the conversation history back in, so it should understand which antecedent you’re referring to when you use pronouns in your follow-up question.
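A minimal sketch of what “feeding the history back in” can look like for a plain completion-style prompt (the function and variable names here are illustrative, not taken from the script in the thread):

```python
def build_prompt(history, user_input, username="User", ai_name="AI"):
    """Join prior turns plus the new question into one prompt string,
    so the model can resolve pronouns like 'there' against earlier turns."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"{username}: {user_input}")
    lines.append(f"{ai_name}:")  # trailing cue for the model to continue
    return "\n".join(lines)

history = [("User", "Where is the Eiffel Tower?"),
           ("AI", "It is in Paris, France.")]
prompt = build_prompt(history, "And what is the population there?")
# The model sees the earlier turns, so "there" resolves to Paris.
```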

1 Like

Awesom-O!!! I’ll give it a shot in a few… I’ve got to travel a few hours, then give it a go tonight.

1 Like

This explains how to continue a conversation (along with the problems you will face, and possible solutions)

https://thoughtblogger.com/continuing-a-conversation-with-a-chatbot-using-gpt/

4 Likes

@Dent

I hope you see this. I’ve been looking around and I haven’t yet found a good solution for it.

Can I add more data to the model and then interact with it? For example: can I feed my entire Git repo into the model and then ask questions about the data? Can I give it an entire chat history and then ask it questions about it? I’ve seen embeddings recommended, but they still don’t fully address my requirements. Sure, I can give it a rolling chat history as part of the API request and it will “remember” some things, but what I want is to put my data into the model itself and basically expand its “brain”.

Do you know of a way to do this?

1 Like

I solved the issue with a variable that stores previous messages and possible context from a file.
The reason I ended up using files is that, for example, a power outage may cut the conversation; this way it’s preserved in the file. Of course, SQL or other storage solutions can be considered as well…

In my program, I used files for simplicity and mostly as a proof of concept. You can find it on GitHub: GitHub - EkBass/AiConsoleChat: OpenAi Chat Bot with some neat features

3 Likes

This is also a big problem I’m facing, and I’m working hard to solve it. Finding a solution would be very helpful.

This is great. Thanks for posting. I’ve tried to implement something similar using deque() to limit how many questions are “remembered”. The problem I’m facing is that ChatGPT answers all of the previous questions every time I submit the string with history. Do you have a way to handle this behaviour?

When you send the data to the API, make sure the list elements are in order (system, user, assistant, user, assistant, etc.) and that the last element in the list is the new user prompt. If there is an earlier user prompt without an immediate corresponding assistant response, that unpaired user prompt is what will receive the assistant’s answer.

- system: take on this behavior
- user: prompt goes here
- assistant: (the API response will be in response to the above)

- system: take on this behavior
- user: prompt goes here
- assistant: responds to the above user prompt
- user: prompt goes here
- assistant: (the API response will be in response to the above)
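The ordered list above maps directly onto the `messages` parameter of the chat completions endpoint, as a list of role/content dicts (the example contents are illustrative):

```python
# Ordered message list: system first, alternating user/assistant,
# with the new, unanswered user prompt as the final element.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Where is the Eiffel Tower?"},
    {"role": "assistant", "content": "It is in Paris, France."},
    {"role": "user", "content": "And what is the population there?"},
]

# The call itself (requires the openai package and an API key):
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                         messages=messages)
# messages.append({"role": "assistant",
#                  "content": response.choices[0].message.content})
```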

If you’re having trouble, ask davinci or ChatGPT.

1 Like

Yes… this happens if you use the completions endpoint and send an array of prompts to the API via the prompt param.

If you want specific help, post your code so we can see your API params and the specific API endpoint you are calling.

Thanks

:slight_smile:

1 Like

Thanks for the code, I’m using the same approach for this problem. Now I’m facing a new problem with the token limit that ChatGPT can handle per API request (I believe it’s only 4,097 tokens maximum). How can I keep this “memory” feature within that limit?
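One common way to stay under the context limit is to drop the oldest turns while keeping the system message. Here is a sketch; the ~4-characters-per-token figure is a crude heuristic (use the tiktoken library for exact counts), and the budget number is just an example:

```python
def trim_history(messages, max_tokens=3000):
    """Drop the oldest user/assistant pair until the rough token
    estimate fits the budget. Assumes messages[0] is the system
    message, followed by alternating user/assistant turns."""
    def estimate(msgs):
        # ~4 characters per token is a rough rule of thumb for English.
        return sum(len(m["content"]) for m in msgs) // 4

    system, rest = messages[:1], messages[1:]
    while rest and estimate(system + rest) > max_tokens:
        rest = rest[2:]  # remove the oldest user+assistant pair
    return system + rest
```

A more sophisticated variant is to have the model summarize the dropped turns and keep the summary in the system message, so old context is compressed rather than lost.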

2 Likes

Would love to hear some feedback on this!

Thanks Dent. This is a nice little script for understanding prompt engineering. I think there is a slight issue with your code snippet… I think that last line should read:
conversation_history = handle_input(user_input, conversation_history, USERNAME, AI_NAME)
otherwise it does not retain the history…

Also, it is not clear to me whether there is any benefit to using the openai.ChatCompletion.create interface for this application, building up the messages parameter and adding roles like system, assistant, and user.

1 Like

Will conversation-history tokens be charged for API usage, or is only the new message counted for charging?

Yes, it will be charged: the API counts every token you send, history included, so the longer the conversation, the more expensive each subsequent message becomes.

1 Like

Doesn’t cover everything, but has a good overview of a full solution as does this video.

2 Likes

To use the ChatGPT API without losing the continuity of the conversation, you will need to implement a persistent session management system on your end. Here’s how you can do it:

  1. When the user starts a conversation, generate a session ID on your side. (The chat completions API is stateless and does not return one.)
  2. Store every user message and assistant reply in a database or cache, keyed by that session ID.
  3. Whenever the user sends a new message, load the stored history for that session and include it, in order, in the messages array ahead of the new message.
  4. When you receive a response from the API, append it to the stored history for that session.
  5. Repeat this process for each subsequent message, trimming the oldest turns if the history approaches the model’s context limit.

By implementing this persistent session management system, you can maintain the continuity of the conversation and ensure that the user’s context is preserved across multiple API requests.
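A minimal sketch of this bookkeeping, using an in-memory dict as the session store and a caller-supplied session ID (your application generates the ID, since the API itself keeps no state; the actual API call is commented out):

```python
# session_id -> list of message dicts; use a database in production
sessions = {}

def handle_message(session_id, user_text):
    """Look up (or create) the session's history, append the new user
    message, get a reply, and persist both sides of the exchange."""
    history = sessions.setdefault(session_id, [
        {"role": "system", "content": "You are a helpful assistant."},
    ])
    history.append({"role": "user", "content": user_text})
    # reply = openai.ChatCompletion.create(
    #     model="gpt-3.5-turbo", messages=history
    # ).choices[0].message.content
    reply = "(model reply)"  # placeholder for the commented-out call
    history.append({"role": "assistant", "content": reply})
    return reply
```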

2 Likes

I have created an alternative API for exactly this. You can use it to add conversation history to your GPT API calls, and you use it just like the GPT API with your own token.

Find it here: https://gptconverse.online/