Create an AI companion with long-term memory

Sure! Send me an email at joan@boluda.com and tell me about it! :smiley:

Nice thinking. Keep trying and continue with others; best wishes to you.

Try letter and subscription
Well, I have an idea based on something I have wondered about…
It is not to give it a long-term history, but to give it parameters to work with. If that is done properly, the AI does not need long-term memory; a decent amount of context is enough.

I’m working on my own assistant; it has long- and short-term memory. That is the easy part of building an assistant :wink:

1 Like

Oh great! <3

  • Since that’s the easy part, you’ll have no problem sharing how you did it!

It’s all documented here: New project: Creating an AI Mind

3 Likes

Noice

1 Like

I would be delighted to help in any way I possibly can. This has been a fascination for me and I would truly enjoy the work. I am a five-decade-long musician; I’ve been on stage since I was four, and I just turned 54. I’m about to go on a national tour with a country artist, opening for his group. I have tested the waters a little bit with regard to bouncing ideas off the AI for lyric composition, and thus far it’s been outstanding. I have a personal goal of not only getting nominated for but winning a Grammy award for a song written by me with lyrics by AI. I’d eventually like to take it all the way through music composition and production: MIDI, digital audio workstation engineering, and mastering. I’d like to see where AI fits into that plethora of industry dynamics. Once my finances level off (since I’m not working right this second while getting ready to tour), I had planned on signing up, doing what I can on my own, coming up with some nice examples, and bringing them back to the powers that be there at OpenAI.

We have a very limited Beta for a self-learning chatbot with persistent memories and personalities.

It is still very early days - but if anyone wants to give it a try, send me a private message. We can brainstorm the best prompt to use to give it the personality you need. Once this is done, it will start to learn from your interactions with it.

Full disclosure: This will end up being a paid product. Unfortunately, it costs money to use the API. We use Davinci, not ChatGPT (the benefit is that it is not moderated/hobbled like ChatGPT, either).

3 Likes
  • You could give people the option to put in their own API key
    • This would remove that cost burden on your side
1 Like

This is a simple task to accomplish, @joan.

  • You store your desired output (completions) in a database.

  • You first query the DB for a match to the user prompt; you can use a DB full-text search or embeddings (semantic search), depending on your use case and string lengths. Short phrases and strings do not work well with vectors (embeddings).

  • If the DB query does not turn up a match which meets some (your predetermined) threshold, you then call the OpenAI completion API for a new completion and store the new completion in the DB.

  • Repeat.

In other words, you use a DB search as the front-end, and the API completion as the back-end, which is only called when you do not get a successful front-end match.

Each successive back-end call increases the likelihood that a future query will be satisfied from the DB (the historical data).

It’s pretty basic application development, if you think about it from a systems design perspective. Any experienced webdev with API experience should be able to put together a beta mockup for you in less than a day; or better yet, do it yourself if you have basic web development experience with APIs.
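For illustration, here is a minimal Python sketch of that flow. It assumes the pre-1.0 `openai` package, an in-memory list standing in for the database, and cosine similarity over embeddings as the matching step; the model names and the threshold are placeholders, not recommendations.

```python
# Sketch of the "DB front-end, completion back-end" pattern described above.
# Everything specific here (models, threshold, in-memory "db") is an assumption.
import numpy as np
import openai

EMBED_MODEL = "text-embedding-ada-002"   # placeholder embedding model
COMPLETION_MODEL = "text-davinci-003"    # placeholder completion model
MATCH_THRESHOLD = 0.9                    # tune for your use case

# Each "row" holds a prior prompt, its completion, and the prompt's embedding.
db = []

def embed(text):
    resp = openai.Embedding.create(model=EMBED_MODEL, input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(prompt):
    query_vec = embed(prompt)

    # Front-end: look for a stored completion whose prompt is close enough.
    best = max(db, key=lambda row: cosine(query_vec, row["embedding"]), default=None)
    if best is not None and cosine(query_vec, best["embedding"]) >= MATCH_THRESHOLD:
        return best["completion"]

    # Back-end: no good match, so call the API and store the result for next time.
    resp = openai.Completion.create(model=COMPLETION_MODEL, prompt=prompt, max_tokens=256)
    completion = resp["choices"][0]["text"].strip()
    db.append({"prompt": prompt, "completion": completion, "embedding": query_vec})
    return completion
```

Swapping the list for a real DB (full-text or vector search) doesn’t change the control flow; only the matching step does.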

Hope this helps.

3 Likes

Try using Curie to summarize the current conversation in one sentence. Then have it summarize the next conversation, and append the later summary to the first. Once the combined length exceeds what Davinci can handle as context, summarize the summaries. Then summarize the summaries of summaries… and so on. I have not implemented this, but it should work.
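A rough Python sketch of that loop might look like this. It assumes the pre-1.0 `openai` package, `text-curie-001` as the summarizer, and a simple character count standing in for the real token limit; all of those are placeholders.

```python
# Sketch of the summarize-then-summarize-the-summaries idea above.
import openai

SUMMARY_MODEL = "text-curie-001"
MAX_CONTEXT_CHARS = 6000  # crude stand-in for what the downstream model can handle

def summarize(text):
    resp = openai.Completion.create(
        model=SUMMARY_MODEL,
        prompt=f"Summarize the following in one sentence:\n\n{text}\n\nSummary:",
        max_tokens=80,
    )
    return resp["choices"][0]["text"].strip()

running_summary = ""

def add_conversation(conversation_text):
    """Fold a finished conversation into the running summary."""
    global running_summary
    running_summary = (running_summary + " " + summarize(conversation_text)).strip()
    # When the accumulated summaries grow too large, summarize the summaries.
    if len(running_summary) > MAX_CONTEXT_CHARS:
        running_summary = summarize(running_summary)
    return running_summary
```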

1 Like

This is also known as recursive summarization.

1 Like

The above blurb is @joan’s core requirement.

Recursive summarization does not solve Joan’s core requirement to store the full-text of all prior sessions.

There is only “so much you can do” if you only store summaries in the DB, and that is not the OP’s original requirement.

Recursive summarization is a solution to a different problem statement / system requirement.
:slight_smile:

You can also store the full conversations along with summaries, thesaurus words, the most frequent words in each conversation, insights/resolutions, narrative arc, etc., in your own database outside of GPT-3. Then, when the person says something, you search past conversations in MySQL or wherever, pull up relevant text, and input that text as context for the GPT-3 prompt. For example, if the person says, “I’m still thinking about how to train my dog better,” your algorithm searches your MySQL database for the highest frequency of those unique keywords in past conversations, pulls up to 1,000 characters of the most relevant info, and inputs that as additional context, so the prompt becomes: “I’m still thinking about how to train my dog better. [Refer if relevant to this scene context: they had a conversation where it was said ‘1000 characters of relevant past info’]” I wrote an article on this in 2005, have been using GPT-3 with long-form prompts, and am happy to collaborate.
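As a rough illustration of that retrieval step, here is a small Python sketch. It uses an in-memory list in place of the MySQL table, and the stopword list and scoring rule are assumptions made just to keep it self-contained; only the keyword-frequency idea and the 1,000-character cap come from the description above.

```python
# Sketch of keyword-based retrieval over past conversations, then prompt assembly.
import re

STOPWORDS = {"im", "i'm", "the", "a", "to", "how", "about", "still", "my", "is", "it"}

past_conversations = []  # full transcripts of earlier conversations (strings)

def keywords(text):
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in STOPWORDS]

def most_relevant_context(user_message, max_chars=1000):
    kws = set(keywords(user_message))
    if not kws or not past_conversations:
        return ""
    # Score each stored conversation by how often the user's keywords appear in it.
    def score(convo):
        convo_words = keywords(convo)
        return sum(convo_words.count(k) for k in kws)
    best = max(past_conversations, key=score)
    return best[:max_chars] if score(best) > 0 else ""

def build_prompt(user_message):
    context = most_relevant_context(user_message)
    if context:
        return (f"{user_message} [Refer if relevant to this scene context: "
                f"they had a conversation where it was said '{context}']")
    return user_message
```

A real system would likely replace the frequency scoring with full-text or vector search, but the prompt-assembly step stays the same.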

1 Like

I’m also curious about this topic and looking for a teammate.
Right now I’m building a chatbot that is able to learn about the world, the user’s situation, and the user’s intuition through conversation.
If anyone is interested, send me a private message.

Hello, I can’t find how to PM you. I’d like to know more about your project, thanks!

Hi fellas! I’m experimenting with this, just out of curiosity. I built a Telegram bot this weekend, plugged a database and semantic search into it, and deployed it to a cloud. It also handles group chat context fairly well. Let’s experiment on it together. Open source or paid, doesn’t matter. Join the Telegram group chat here: Telegram: Join Group Chat

This is just a recursive fine-tuning concept. If you’d like to develop something, I’m very down to apply some unorthodox approaches. Right now I can only get GPT-4 working with a Python prompt script deployed to a T3 app, but even with that I’m having some challenges, and using venv to run the .py files doesn’t seem like it will work when I deploy to Vercel. If you can help me with deploying the Python file, or with getting the API working for GPT-4 chat completion, I will absolutely put together the prompts and help grind this out.

I integrate and store all conversations in the Word document itself using Google Sheets, but I would love real long-term memory linked to my Google account (can’t believe I am saying this lol).