Error while reading a PDF file using the OpenAI and ChromaDB modules

I am using the OpenAI API along with LangChain and ChromaDB to build conversational search over a PDF file given as input.

The program below works fine for a 726-page PDF. However, it fails to execute for a PDF with more than 6,000 pages.

Error: "ValueError: Batch size 6691 exceeds maximum batch size 5461"

Please let me know what changes I should make to the "Setup a text splitter" section to get the code working.

Program for conversational PDF search (source code):

#Load the OpenAI API key (expects the OPENAI_API_KEY environment variable)
#os.environ["OPENAI_API_KEY"] = openai.api_key
from openai import OpenAI
#from openai.embeddings_utils import get_embedding
from langchain_community.document_loaders import PyMuPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
#from langchain.embeddings.openai import OpenAIEmbeddings
from langchain_openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI


#Setup Loader - in this case a PDF Loader
loader = PyMuPDFLoader("suse_administration.pdf")

#Load and split the pdf into pages
pages = loader.load_and_split()

# setup a text splitter
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=100,
    length_function=len,
)

#split pages into smaller chunks called docs
docs = text_splitter.split_documents(pages)

#transform to embeddings
embeddings = OpenAIEmbeddings()

#setup and store docs and embeddings into ChromaDB
vectordb = Chroma.from_documents(docs, embedding=embeddings,
                                 persist_directory=".")

#Make the database persistent
vectordb.persist()

#setup memory so it remembers previous questions and answers
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

#Perform the conversational Retrieval Chain
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0.5),vectordb.as_retriever(), memory=memory)

#Run the question
question = "Explain grouping and combining commands in SUSE Linux."
result = qa.run(question)

#print the values to the screen
print(result)
Program execution output (full logs with the error):

balajiraja@Balajis-MacBook-Air openai-1 % python ./openai-internal-kb.py
/Users/balajiraja/openai-1/lib/python3.12/site-packages/langchain/_api/module_import.py:92: LangChainDeprecationWarning: Importing Chroma from langchain.vectorstores is deprecated. Please replace deprecated imports:

>> from langchain.vectorstores import Chroma

with new imports of:

>> from langchain_community.vectorstores import Chroma
You can use the langchain cli to **automatically** upgrade many imports. Please see documentation here 
  warn_deprecated(
/Users/balajiraja/openai-1/lib/python3.12/site-packages/langchain/chat_models/__init__.py:32: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.chat_models import ChatOpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
Traceback (most recent call last):
  File "/Users/balajiraja/openai-1/./openai-internal-kb.py", line 35, in <module>
    vectordb = Chroma.from_documents(docs, embedding=embeddings,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/langchain_community/vectorstores/chroma.py", line 790, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/langchain_community/vectorstores/chroma.py", line 754, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/langchain_community/vectorstores/chroma.py", line 312, in add_texts
    raise e
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/langchain_community/vectorstores/chroma.py", line 298, in add_texts
    self._collection.upsert(
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/chromadb/api/models/Collection.py", line 300, in upsert
    self._client._upsert(
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 146, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/chromadb/api/segment.py", line 449, in _upsert
    validate_batch(
  File "/Users/balajiraja/openai-1/lib/python3.12/site-packages/chromadb/api/types.py", line 525, in validate_batch
    raise ValueError(
ValueError: Batch size 6691 exceeds maximum batch size 5461

I was facing the same error.

This worked for me:

# Split the documents into smaller batches
# (here `docs` is the chunk list produced by the text splitter)
batch_size = 5461  # Chroma's maximum allowed batch size
for i in range(0, len(docs), batch_size):
    batch = docs[i:i + batch_size]
    vectordb = Chroma.from_documents(batch, OpenAIEmbeddings(),
                                     persist_directory="./chroma_db")
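A variant of the same idea is to build the Chroma store once from the first batch and append the rest with `add_documents()`, so only a single store object is created. Below is a minimal sketch: the `batched` helper is plain Python, and the Chroma calls in the comment assume the `docs` list and `OpenAIEmbeddings` setup from the question.

```python
def batched(items, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Usage with the question's `docs` list (requires langchain/Chroma):
#
#     vectordb = None
#     for batch in batched(docs, 5000):  # stay under the 5461 limit
#         if vectordb is None:
#             vectordb = Chroma.from_documents(batch, OpenAIEmbeddings(),
#                                              persist_directory="./chroma_db")
#         else:
#             vectordb.add_documents(batch)
```

Keeping the batch size a little below the reported maximum (5000 instead of 5461) leaves headroom in case the limit differs across Chroma versions.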

Brilliant, @holy_kau. Your fix works for me.

Now I can read a PDF file with 6646+ pages.

Thank you.