Can OpenAI be used for long-term memory?

I’ve been trying to understand how to implement Retrieval-Augmented Generation (RAG) for my app. I have data from multiple users (objects with fields like skillsList, hobbies, etc.) that I need to store in order to feed a model. From what I understand, at this point I can either create and store the embeddings in OpenAI itself, or use text-embedding-ada-002 to create the embeddings and store them in a vector DB like Pinecone, which gives me back the most relevant vectors so I can pass them to OpenAI to answer. Why would I store the vectors in OpenAI? What are the pros and cons? Thanks
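For reference, the second approach described above (embed the data, store the vectors externally, retrieve the nearest matches, then hand their text to the model) can be sketched roughly like this. The in-memory list stands in for a real vector DB like Pinecone, and the function names here are illustrative, not any fixed API:

```python
from math import sqrt


def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def embed(client, text):
    # Assumes an openai>=1.x client; text-embedding-ada-002 is "ada 2"
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return resp.data[0].embedding


def retrieve(query_vec, store, k=3):
    # store: list of (text, vector) pairs standing in for a vector DB index
    ranked = sorted(
        store,
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]


def answer(client, question, context_chunks):
    # Stuff the retrieved chunks into the prompt and ask the chat model
    prompt = (
        "Answer using only the context below.\n\nContext:\n"
        + "\n".join(context_chunks)
        + "\n\nQuestion: " + question
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

In practice the vector DB does the similarity search server-side; the local `cosine_similarity`/`retrieve` pair is just to make the flow concrete.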

Where did you find that you can store the vectors with OpenAI?

A long time ago there was a search service, but that’s been shuttered for quite a while now.

I think OP is referring to the Retrieval capability on the Assistants API.
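For anyone unfamiliar: with the Assistants API (beta at the time of writing), you upload files and enable the `retrieval` tool, and OpenAI handles chunking, embedding, and vector storage itself. A rough sketch, assuming the openai>=1.x Python client; the instructions string and model name are illustrative, and parameter names may change while the API is in beta:

```python
def retrieval_assistant_params(file_ids, model="gpt-4-1106-preview"):
    """Build kwargs for client.beta.assistants.create(**params).

    Enabling the "retrieval" tool tells OpenAI to chunk, embed, and
    store the attached files itself -- no external vector DB needed.
    """
    return {
        "model": model,
        "instructions": "Answer using the attached user data.",  # illustrative
        "tools": [{"type": "retrieval"}],
        "file_ids": list(file_ids),
    }


def create_retrieval_assistant(client, paths):
    # Upload each file with purpose="assistants", then create the assistant
    file_ids = [
        client.files.create(file=open(p, "rb"), purpose="assistants").id
        for p in paths
    ]
    return client.beta.assistants.create(**retrieval_assistant_params(file_ids))
```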

I just answered the exact same question here: I want to create a continuously improving legal AI - #5 by SomebodySysop

@sp33d

Did you intentionally choose the ‘Prompting’ category when creating this topic? I ask because your question seems better suited to the ‘API’ category. I’m curious whether you’re thinking more about how to store data within a prompt, rather than via the API. 🙂

You’re absolutely right. Thanks for pointing that out.

My recommendation would be to create the embeddings yourself and store them in a vector DB. I have found that approach to be by far the most future-proof and cost-effective. You can also connect these to GPTs via Actions if you have an OpenAPI Schema for the knowledge in the vector DB.
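To illustrate what that Action setup looks like: the GPT builder accepts an OpenAPI schema describing your retrieval endpoint, and the GPT calls it as a tool. Below is a minimal hypothetical schema, assuming you host a `/search` endpoint in front of your vector DB; the URL, path, and field names are all placeholders:

```yaml
openapi: 3.1.0
info:
  title: Knowledge Base Search   # hypothetical service name
  version: 1.0.0
servers:
  - url: https://api.example.com   # placeholder; your own host
paths:
  /search:
    post:
      operationId: searchKnowledgeBase
      summary: Return the most relevant stored chunks for a query
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [query]
              properties:
                query:
                  type: string
                  description: Natural-language search query
                top_k:
                  type: integer
                  default: 3
      responses:
        "200":
          description: Matching text chunks from the vector DB
          content:
            application/json:
              schema:
                type: object
                properties:
                  results:
                    type: array
                    items:
                      type: string
```

Your server would embed `query`, run the similarity search against the vector DB, and return the matching text in `results`.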

Thanks for the info @cass !

Can you share any more details on GPTs via Actions, and an OpenAPI Schema for the knowledge in the vectordb?!