Language model usage in a chatbot LangChain project

I am using LangChain for a chatbot project, and 340,823 tokens are charged to my account twice a day. I was wondering what these calls are and what the charge is for.

Find the screenshot attached.

Welcome to the forum!

Those are text-embedding-ada-002 model calls, which are used for embedding datasets of some sort. Do you have LangChain running a Chat-with-PDF app or something similar? If so, those are your PDFs (or other data) being vectorised for later retrieval.
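For a sense of scale, embedding charges are billed per token. A rough back-of-the-envelope sketch (assuming text-embedding-ada-002's list price of $0.0001 per 1K tokens at the time of writing; check the current pricing page):

```javascript
// Rough cost estimate for embedding calls.
// ASSUMPTION: pricePer1kUSD defaults to ada-002's $0.0001 / 1K tokens,
// which may have changed since this was written.
function embeddingCostUSD(tokens, pricePer1kUSD = 0.0001) {
  return (tokens / 1000) * pricePer1kUSD;
}

// The 340,823 tokens from the screenshot would cost roughly:
console.log(embeddingCostUSD(340823).toFixed(4)); // "0.0341"
```

So each of those entries is a few cents, but if the same data is re-embedded on every run, the charges add up.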


Thank you for the reply.

Yes, I am using LangChain with custom data files: PDF, TXT, JSON, etc.

Are those calls supposed to happen even if I am not creating a new vector store every time? I am loading the existing one and querying the chain using the OpenAI API.

Note that a lot of LangChain tutorials re-generate the vectors on every run, including their main Docs article on the topic. Sharing your code would help, but if you followed that, you should change to using a persisted Chroma database on your filesystem.
Or take what you have learned and move beyond LangChain by writing your own code. It is great for proofs of concept, but less so beyond that.

Here is the code related to vector creation if that will help:

if (fs.existsSync(VECTOR_STORE_PATH)) {
  // 14. Load the existing vector store
  console.log("Loading existing vector store...");
  vectorStore = await HNSWLib.load(
    VECTOR_STORE_PATH,
    new OpenAIEmbeddings()
  );
  console.log("Vector store loaded.");
} else {
  // 15. Create a new vector store if one does not exist
  console.log("Creating new vector store...");
  const textSplitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
  });

  const normalizedDocs = normalizeDocuments(docs);
  const splitDocs = await textSplitter.createDocuments(normalizedDocs);

  // 16. Generate the vector store from the documents
  vectorStore = await HNSWLib.fromDocuments(
    splitDocs,
    new OpenAIEmbeddings()
  );

  // 17. Save the vector store to the specified path (saves locally)
  await vectorStore.save(VECTOR_STORE_PATH);

  console.log("Vector store created.");
}

// 18. Create a retrieval chain using the language model and vector store
console.log("Creating retrieval chain...");
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever()); // this is how we query

// 19. Query the retrieval chain with the specified question
console.log("Querying chain...");

I am checking whether the vector store exists and only creating a new one if it does not.
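One way to confirm which branch is actually running in production is to count embedding calls. A hypothetical diagnostic sketch, assuming you can wrap the embeddings object before passing it to the store (the `embedDocuments`/`embedQuery` method names mirror the LangChain JS embeddings interface; the inner object here is a stub so the example is self-contained):

```javascript
// Wrap an embeddings-like object so every call is counted and logged.
function countingEmbeddings(inner) {
  let calls = 0;
  return {
    get calls() { return calls; },
    async embedDocuments(texts) {
      calls += 1;
      console.log(`embedDocuments called with ${texts.length} texts`);
      return inner.embedDocuments(texts);
    },
    async embedQuery(text) {
      calls += 1;
      console.log(`embedQuery called for: ${text.slice(0, 40)}`);
      return inner.embedQuery(text);
    },
  };
}

// Stub standing in for OpenAIEmbeddings (no API calls, no cost).
const stubEmbeddings = {
  async embedDocuments(texts) { return texts.map(() => [0, 0, 0]); },
  async embedQuery() { return [0, 0, 0]; },
};

const counted = countingEmbeddings(stubEmbeddings);
(async () => {
  await counted.embedQuery("sample question");
  await counted.embedDocuments(["chunk one", "chunk two"]);
  console.log(counted.calls); // 2
})();
```

Note that even when the store is loaded from disk, each retrieval query still embeds the question text once, so some small embedding usage per query is expected; large recurring charges suggest the create branch is being hit.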