LangChain conversational memory with embeddings

I’m creating a chatbot that answers questions over a large text using embeddings (via LangChain). The functionality works perfectly, but only one way at a time: either embeddings or conversational memory.

The goal is to combine conversational memory with the vector database used for the embeddings, so the chatbot remembers previous user inputs while still answering from the embedded documents.
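To make the intended shape concrete, here is a minimal sketch of the combination in plain Python. The embedding function, vector store, and "LLM" are all toy stand-ins (not LangChain APIs); the point is only that each turn feeds *both* the retrieved context *and* the accumulated chat history into the prompt:

```python
# Toy sketch: retrieval + conversational memory feeding one prompt.
# embed(), TinyVectorStore, and the answer string are stand-ins for
# a real embedding model, vector DB, and LLM call.
from collections import Counter
import math

def embed(text):
    """Stand-in embedding: bag-of-words vector (a real app would call an embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorStore:
    def __init__(self, docs):
        self.docs = [(d, embed(d)) for d in docs]

    def retrieve(self, query, k=1):
        ranked = sorted(self.docs, key=lambda p: cosine(embed(query), p[1]), reverse=True)
        return [d for d, _ in ranked[:k]]

class ChatBot:
    def __init__(self, store):
        self.store = store
        self.history = []  # conversational memory: list of (question, answer) turns

    def ask(self, question):
        context = self.store.retrieve(question)
        # The key step: retrieved context AND chat history go into the same prompt.
        prompt = {
            "history": list(self.history),
            "context": context,
            "question": question,
        }
        answer = f"Based on {context[0]!r}..."  # stand-in for the LLM call
        self.history.append((question, answer))
        return prompt, answer
```

In LangChain itself this wiring is, as far as I can tell, what `ConversationalRetrievalChain` is for: you pass it an LLM, the vector store's retriever, and a memory object (e.g. `ConversationBufferMemory`), and it handles both sides in one chain.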