Questions regarding pricing

Hello,
I am building something using OpenAI assistant and vector storage.
My question is this: say I have one assistant linked to one vector store, and that vector store contains 10 files. When I ask a question that requires information from all 10 files and run the assistant on it, at which points will I be charged for the API? (My setup is roughly like the sketch below.)
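
For reference, this is roughly what I have (a rough sketch with placeholder file paths, using the beta Assistants endpoints from the Python SDK; exact paths may differ across SDK versions):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Create a vector store and upload the 10 files to it (placeholder paths)
vector_store = client.beta.vector_stores.create(name="my-docs")
for path in [f"docs/file_{i}.pdf" for i in range(1, 11)]:
    with open(path, "rb") as f:
        client.beta.vector_stores.files.upload_and_poll(
            vector_store_id=vector_store.id, file=f
        )

# Create the assistant with file_search pointed at that vector store
assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```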

For example, do I pay for every chunk it processes in the vector store, or do I only pay for the tokens consumed in the final input + output?

One thing I do know is that I need to pay for storage, but I am confused about the cost of the processing it does to find the answers across the files.

It would be great if you could give some information about this.
Thank you

You will be billed at two points. The first, as you mentioned, is storing the vector files: as of now the first GB is free, and beyond that I believe it is $0.10/GB/day of vector storage. The second is the tokens for the LLM call, i.e. the context you pass in to get the answer. That includes the system prompt + any past conversation kept for context + the chunks retrieved from the vector store via file search, all of which are passed as input context, plus the output tokens the model generates. As far as my expertise goes, this is it. Lmk if you have any questions. Cheers!
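
To make that concrete, here is a minimal back-of-the-envelope sketch. The storage rate, free tier, and token prices below are assumptions (the token prices are example gpt-4o rates); plug in the current numbers for your model from the pricing page:

```python
def estimate_daily_cost(
    storage_gb: float,
    input_tokens: int,
    output_tokens: int,
    storage_rate_per_gb_day: float = 0.10,  # assumed vector store rate
    free_storage_gb: float = 1.0,           # assumed free tier
    input_price_per_1m: float = 2.50,       # assumed input price per 1M tokens
    output_price_per_1m: float = 10.00,     # assumed output price per 1M tokens
) -> float:
    """Rough daily cost: billable storage plus tokens for one run.

    input_tokens should cover the system prompt, conversation history,
    and the file-search chunks injected into the context; output_tokens
    is whatever the model generates.
    """
    storage_cost = max(storage_gb - free_storage_gb, 0) * storage_rate_per_gb_day
    token_cost = (
        input_tokens / 1_000_000 * input_price_per_1m
        + output_tokens / 1_000_000 * output_price_per_1m
    )
    return storage_cost + token_cost


# Example: 1.5 GB stored, ~20k context tokens (prompt + history + retrieved
# chunks) and ~500 output tokens for one question
print(round(estimate_daily_cost(1.5, 20_000, 500), 4))
```

The key point the sketch tries to show: the retrieved chunks don't have a separate "retrieval" charge; they simply count toward the input tokens of the call.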