Can a ChatGPT plugin work with a vector DB?

When using LangChain, we can feed documents fetched from a vector DB to the LLM to generate a natural language response. Is it possible to do the same via a plugin? If yes, some links or docs would be very helpful!

On the flip side, if it’s not possible, can anyone help me understand why not?

LangChain code to generate a response from a vector DB query:

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# vector_store is an existing vector DB wrapper (e.g. Chroma, Pinecone) built elsewhere
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
openai = ChatOpenAI(temperature=0.3, max_tokens=200)
qa = ConversationalRetrievalChain.from_llm(openai, vector_store.as_retriever(), memory=memory)
query = "some random question"
result = qa({"question": query})

Yeah, I would just have ChatGPT submit the query to your API like:

{ "query": "How do snails procreate?" }

Then on the backend you just convert the query to an embedding and run the normal Pinecone retrieval process.

The message that your API returns can be something like:

{
    "contextual_information": [
        "Random blurb about invertebrates.",
        "Random blurb about aquatic life cycles."
        "Etc.."
    ]
}

Then ChatGPT can answer based on information from the DB.
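For illustration, here is a minimal sketch of that backend, assuming the 2023-era OpenAI and Pinecone Python clients, an existing Pinecone index (called "my-index" here) with the original text stored under a "text" metadata field, and a FastAPI endpoint at /query. All of those names are placeholders, not anything from the thread:

# Hypothetical plugin backend: embeds the incoming query and returns matching
# chunks from Pinecone as "contextual_information" for ChatGPT to read.
import os

import openai
import pinecone
from fastapi import FastAPI
from pydantic import BaseModel

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"])
index = pinecone.Index("my-index")  # assumed existing index with text stored in metadata

app = FastAPI()

class Query(BaseModel):
    query: str

@app.post("/query")
def query_db(body: Query):
    # Convert the natural-language query to an embedding
    embedding = openai.Embedding.create(
        model="text-embedding-ada-002", input=body.query
    )["data"][0]["embedding"]

    # Standard Pinecone retrieval: nearest neighbours to the query embedding
    results = index.query(vector=embedding, top_k=3, include_metadata=True)

    # Return the stored text chunks for ChatGPT to ground its answer on
    return {
        "contextual_information": [m["metadata"]["text"] for m in results["matches"]]
    }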


I’m using ChromaDB and the Instructor-XL model for embeddings and getting very good results.
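A rough sketch of that kind of setup via LangChain, with an illustrative model name, sample texts, and persist directory (the resulting vector_store could be dropped into the ConversationalRetrievalChain shown above):

from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

# Instructor-XL runs locally through the InstructorEmbedding / sentence-transformers stack
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")

# Build (or reload) a persistent Chroma collection from your documents
vector_store = Chroma.from_texts(
    ["Random blurb about invertebrates.", "Random blurb about aquatic life cycles."],
    embedding=embeddings,
    persist_directory="./chroma_db",
)

# Retrieve the most relevant chunks for a query
docs = vector_store.similarity_search("How do snails procreate?", k=2)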

This official plugin example is basically that.
