Is this hallucination or something else on GPT Turbo?

I've been working for two weeks trying to figure something out. I'm building an order bot: I load my products and their prices from a CSV file with a data loader, embed them with OpenAI embeddings, and send them to Pinecone. The issue is that if I ask about individual products, I get correct answers. But if I order 5 products, only the first 3 come back correct; the rest are made up. I checked my token usage on the OpenAI dashboard, and every query uses under 3,000 tokens, well within my limit. I've tried my best and I'm exhausted; I'm thinking of giving up on this project. Does anyone have an idea how to solve this?
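For context, here is a minimal self-contained sketch of the deterministic half of such a bot: loading the CSV catalogue and summing an order total. The product names, prices, and helper names are invented for illustration; the retrieval step is omitted. The point is that once the right products are retrieved, the total is best computed in code, not by the model:

```python
import csv
import io

# Hypothetical catalogue; in the real bot this is the CSV file read by the
# data loader before embedding. Names and prices are made up.
CSV_DATA = """name,price
milk,1.50
bread,2.00
eggs,3.25
"""

def load_prices(csv_text):
    """Build a name -> price lookup from the CSV catalogue."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["name"]: float(row["price"]) for row in reader}

def order_total(prices, items):
    """Sum the prices of the requested items, skipping unknown products."""
    return sum(prices[item] for item in items if item in prices)

prices = load_prices(CSV_DATA)
print(order_total(prices, ["milk", "eggs"]))  # 4.75
```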


Are you asking for all 5 products at once? You may need to chain them in batches.
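A minimal sketch of what "chaining in batches" could mean here: split the requested products into fixed-size chunks and issue one retrieval query per chunk. The batch size and product names are arbitrary examples:

```python
def batches(items, size):
    """Split the list of requested products into fixed-size chunks,
    so each chunk can be sent as its own retrieval query."""
    return [items[i:i + size] for i in range(0, len(items), size)]

order = ["milk", "bread", "eggs", "rice", "sugar"]
print(batches(order, 3))  # [['milk', 'bread', 'eggs'], ['rice', 'sugar']]
```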

Yes. Since it's an order bot, a customer may require even 10 products at once, and then it should calculate the total.

I’m not familiar with Pinecone, but assuming it has a query language similar to SQL that you can build and send dynamically at runtime, I’d suggest having ChatGPT convert the user’s request into a query in that syntax, which you then use to fetch the products they requested.

If you went this route, you’d need to add some additional checks to ensure it’s a valid query and, more importantly, that nothing that could insert, update, or delete data gets through. Similarly, if it’s a multi-tenant database, you’ll want to be sure users can only see their own data.
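One way to sketch that safety check, assuming the generated query is a SQL-like string: accept only plain SELECT statements and reject anything containing a mutating keyword. This is a simplistic keyword filter for illustration, not a full SQL parser:

```python
import re

# Reject any model-generated query that is not a plain SELECT, or that
# contains a keyword capable of modifying data. Keyword list is illustrative.
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate)\b",
                       re.IGNORECASE)

def is_safe_select(query):
    q = query.strip().rstrip(";")
    return q.lower().startswith("select") and not FORBIDDEN.search(q)

print(is_safe_select("SELECT name, price FROM products WHERE name = 'milk'"))  # True
print(is_safe_select("DELETE FROM products"))                                  # False
```

A real implementation would also enforce tenant scoping, e.g. by appending a `WHERE tenant_id = ?` clause server-side rather than trusting the generated query.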

After some time, I found out Pinecone doesn’t support multi-vector search; each statement is queried once. So if you ask for 10 products in one query, it mostly searches for the first one and returns nine other results that merely resemble it.
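The workaround that follows from this is to issue one vector query per product instead of one query per order. A sketch of that control flow, with `embed` and `index_query` as stand-ins for the real OpenAI embedding call and Pinecone `index.query()` (faked here so it runs locally):

```python
# Fake index contents, standing in for the embedded CSV rows in Pinecone.
FAKE_INDEX = {"milk": "milk 1l $1.50", "eggs": "eggs x12 $3.25"}

def embed(text):
    """Placeholder for the OpenAI embeddings call."""
    return text

def index_query(vector):
    """Placeholder for a single Pinecone index.query() call."""
    return FAKE_INDEX.get(vector, "no match")

def retrieve_products(requested):
    # One query per product, so each item gets its own nearest match
    # instead of all ten sharing one query vector.
    return {item: index_query(embed(item)) for item in requested}

print(retrieve_products(["milk", "eggs"]))
```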

But Elasticsearch allows multi-queries, so instead of Pinecone I decided to use Elasticsearch, and it worked.
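For anyone following the same route: Elasticsearch’s multi-search (`_msearch`) endpoint takes newline-delimited JSON, one header line plus one query line per product. A sketch of building that body; the index name `products` and the match field `name` are assumptions about your mapping:

```python
import json

def build_msearch_body(products, index="products"):
    """Build an _msearch request body: one {"index": ...} header line
    followed by one match query line per requested product (NDJSON)."""
    lines = []
    for name in products:
        lines.append(json.dumps({"index": index}))
        lines.append(json.dumps({"query": {"match": {"name": name}}}))
    return "\n".join(lines) + "\n"  # _msearch bodies must end with a newline

body = build_msearch_body(["milk", "eggs"])
print(body)
```

With the official client this body would be sent via `es.msearch(...)`, and each product gets its own result set in the response, avoiding the first-product-dominates problem.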