How to use Redis or another database to store docstores with multi-modal RAG in OpenAI and LangChain?

Hi everyone, I was watching a tutorial about multi-modal RAG, but I could only find examples using `InMemoryStore` in LangChain. I have PDFs with tables and images, which is why I need the multi-modal approach.

Searching the documentation, I only found these three classes:
1 - `EncoderBackedStore`
2 - `LocalFileStore`
3 - `InMemoryStore`

But it seems they are all for local instances, and in production I'm supposed to store the docstore in a remote database, aren't I?

What can I do? Thank you.

from langchain_community.vectorstores import Chroma
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_openai import OpenAIEmbeddings

# The vectorstore to use to index the child chunks
vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())

# The storage layer for the parent documents
store = InMemoryStore()
id_key = "doc_id"

# The retriever (empty to start)
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    docstore=store, # ====> I want to change this to a remote storage
    id_key=id_key,
)