Dialog memories

What I would like to know:
Can OpenAI remember my name when used in dialog mode over the course of several consecutive sessions? And can it remember the topics discussed in past sessions?
Thanks for sharing your experience with this.

Hi and welcome
No, it can’t. It doesn’t have memory. In fact, not even within the same session: if you delete the prompt and start over, it’s a fresh new AI.

Thanks. Too bad.

In that case, I don’t see why I should open an
account to play with its conversation skills …

Nunodonato: Can you explain more about that? What do you mean by a “fresh new AI”? How long does a session last? When does that hand-off happen?

Is it like a session with online banking, where you are logged out eventually and a new session starts if you come back two hours later?

My reason for wanting to know this is that in my research, I am working with chats and simulated discussions that stretch to the full 2,048-token limit. The final exchanges in such conversations or simulations are therefore quite expensive (I think around 100 times more expensive than a simple prompt of a sentence or two). So it is important to know when I should not bother continuing a chat or simulation because it is no longer the same session.
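The rough arithmetic behind that “around 100 times” estimate can be sketched as follows (the numbers are illustrative; actual API pricing is per 1,000 tokens and varies by model):

```javascript
// Illustrative cost ratio: since the whole prompt is re-sent and billed
// on every exchange, a maxed-out context costs roughly this much more
// than a short one- or two-sentence prompt.
const shortPrompt = 20;    // a sentence or two, roughly 20 tokens (assumption)
const fullContext = 2048;  // a maxed-out 2,048-token context window

const ratio = fullContext / shortPrompt;
console.log(Math.round(ratio)); // roughly 100x, matching the estimate above
```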

AFAIK, there is no “session”: the session is the prompt. If you delete lines from it, the “session” is gone. That’s what I meant by a “fresh new AI”.


So you are saying that GPT-3 as we experience it is a sort of virtual “goldfish” with no memory except what is in the prompts?


About the limit for long chat sessions: I haven’t tried it yet, but what occurred to me was to, at a certain point in the chat, take the oldest “N” lines and ask the AI to summarize them, then use the summary as the beginning of the following prompts.
That way the “history” is kept, and the conversation can continue.
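The rolling-summary idea above can be sketched like this. `summarize` stands in for a real completions-API call asking the model to condense the oldest lines; it is stubbed here so the sketch runs, and the threshold names (`MAX_LINES`, `CHUNK`) are my own, not from the post:

```javascript
// Placeholder for a real API call, e.g. sending
// "Summarize this conversation:\n" + lines.join("\n") to the model.
function summarize(lines) {
  return `Summary of ${lines.length} earlier lines.`;
}

const MAX_LINES = 8; // when the transcript grows past this...
const CHUNK = 4;     // ...fold the oldest N lines into a summary

function addLine(history, line) {
  history.push(line);
  if (history.length > MAX_LINES) {
    const oldest = history.splice(0, CHUNK); // remove the oldest lines
    history.unshift(summarize(oldest));      // summary becomes the "memory"
  }
  return history;
}

// The prompt sent to the model is just the (compressed) history.
function buildPrompt(history) {
  return history.join("\n");
}
```

With this in place the prompt never grows without bound, at the cost of one extra summarization call each time the window overflows.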

I have on occasion used SudoWrite (an app built on the GPT-3 API) for that.

There are some articles that discuss “priming” GPT-3, most of which pertain to programming applications. Do you believe GPT-3 is primable?

I think you must be referring to fine-tuning: OpenAI API

No. Here is an example of an article claiming GPT-3 is primable. The Subtle Art of Priming GPT-3. It’s not clear for many how GPT-3 is… | by Carlos E. Perez | Intuition Machine | Medium

What do the axes and colors represent in your graph?

@m-a.schenk you following the embeddings stuff for this ?

You got the question exactly right :slight_smile:
I’m re-reading the thing before I start hitting enter…

I solved it… Basically, I store in a database the last 4 messages the user sent and the last 4 messages that OpenAI sent. Every time I pass a prompt, I take the past conversation with the user into consideration: `${messageUser1} - ${messageGpt1}`. In this case I just passed the past messages, but you could also add the user’s name or whatever.
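A minimal sketch of that approach, with a plain array standing in for the database. The function names, the `WINDOW` size, and the exact prompt layout are illustrative; only the `${messageUser} - ${messageGpt}` pairing comes from the post:

```javascript
const WINDOW = 4; // keep only the last 4 user/AI exchanges

const store = []; // stand-in for the database: { user, ai } pairs

function saveExchange(userMsg, aiMsg) {
  store.push({ user: userMsg, ai: aiMsg });
  if (store.length > WINDOW) store.shift(); // drop the oldest exchange
}

// Prepend the remembered exchanges to the new prompt, roughly in the
// `${messageUser1} - ${messageGpt1}` style described above.
function buildPrompt(newUserMsg, userName = "User") {
  const history = store
    .map(({ user, ai }) => `${userName}: ${user} - AI: ${ai}`)
    .join("\n");
  return `${history}\n${userName}: ${newUserMsg}\nAI:`;
}
```

Each new API call then sends `buildPrompt(...)`, so the model “remembers” the name and the last few turns purely because they are re-sent in the prompt.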

The other thing you could do is fine-tune your own model; check out: GitHub - Vinithavn/Finetune-GPT-3-for-customer-support-chatbot-: Finetune the OpenAI GPT-3 model for a customer service chatbot application