I used the Assistants API to upload about 1,500 documents (docx, json, and txt, totaling 29 MB) and attached the vector store to a new assistant. For some reason, I'm now on day 2 of waiting for it to finish indexing. By comparison, a few weeks ago a vector store of about 1,700 files (42 MB) took only a few hours from what I could tell.
Are they cheaping out on us? Are there any plans for expedited indexing in the future? I get that embeddings are expensive, but come on now, 29 MB taking over a day???
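Rather than waiting blindly, you can at least poll the store's file counts to see whether anything is actually moving through the queue, or whether some files failed silently (failed files never complete, which can make a store look permanently "indexing"). A minimal sketch, assuming the OpenAI Python SDK (`openai` >= 1.x) — note the resource lives under `client.beta.vector_stores` in older SDK versions and `client.vector_stores` in newer ones, and the vector store ID below is a placeholder:

```python
import time


def summarize_counts(completed: int, failed: int, total: int) -> str:
    """Format indexing progress, e.g. '1200/1500 indexed (3 failed, 80%)'."""
    pct = 100 * completed // total if total else 0
    return f"{completed}/{total} indexed ({failed} failed, {pct}%)"


def wait_for_indexing(client, vector_store_id: str, interval: float = 30.0):
    """Poll a vector store until no files are left in the processing queue.

    `client` is an openai.OpenAI() instance; swap `client.beta.vector_stores`
    for `client.vector_stores` if your SDK version has promoted it out of beta.
    """
    while True:
        vs = client.beta.vector_stores.retrieve(vector_store_id)
        c = vs.file_counts  # has completed / failed / in_progress / total
        print(summarize_counts(c.completed, c.failed, c.total))
        if c.in_progress == 0:  # nothing left in the queue
            return vs
        time.sleep(interval)


# Usage (requires OPENAI_API_KEY in the environment; ID is a placeholder):
#   from openai import OpenAI
#   wait_for_indexing(OpenAI(), "vs_XXXX")
```

If the `failed` count is climbing, the store will never reach "completed" no matter how long you wait, and re-uploading the failed files is the only fix I've found.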
Client requirements are for OpenAI usage. I would love to do this, and the client might want it in the future. But why offer embedding processing, storage, and retrieval if it takes forever to process? Mine is still "indexing", probably sitting in a queue to save OpenAI some money.
I'm having a similar problem, but just using the Playground (not the API). My vector store is 129 MB, and OpenAI says it can handle 512 MB. But it has spent days and days (tbh, weeks) saying "indexing"… It has never completed. No idea what to do.