How can I use OpenAI to identify my own products from my database?

I am new to AI development. I have a requirement where I need to scan a product image and find the matching product in our database.
With the help of an image-to-text (vision) model, I am able to read the text in the image. Using this extracted text, I want to fetch the exact product from the DB through OpenAI. Please help me if anyone has implemented such a use case.



Hi and welcome to the Developer Forum!

I haven’t tackled a project quite like the one you’re describing. However, with more specifics (such as the types of products, their similarities, and the available product information in your database), I could possibly suggest some ideas for you to experiment with and see if they yield results.

@HenriqueMelo, thanks. If you have any suggestions for this use case, let me know.

You probably don’t need anything fancy here. Convert all the images to embeddings and store them in a vector store. At query time, convert the scanned image into its vector representation and find the nearest neighbors.
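A minimal sketch of the nearest-neighbor step, assuming the product images have already been converted to embedding vectors somehow (the toy 3-D vectors below are placeholders for real image embeddings):

```python
import numpy as np

def nearest_neighbors(query_vec, product_vecs, k=1):
    """Return indices of the k products whose embeddings are most
    similar to the query embedding, ranked by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    p = product_vecs / np.linalg.norm(product_vecs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity to each product
    return np.argsort(-sims)[:k]

# Toy 3-D "embeddings" standing in for real image vectors.
products = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.9, 0.1, 0.0]])
scan = np.array([1.0, 0.05, 0.0])    # embedding of the scanned image
print(nearest_neighbors(scan, products, k=2))  # indices of closest products
```

A dedicated vector store (pgvector, FAISS, etc.) does the same ranking at scale; the brute-force loop above is just the idea in miniature.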


That approach is effective only if there is an image associated with each product in the database, which I’m not certain is the case here.

@vsraman85 can you confirm if you have one image associated with each product in the database?


@HenriqueMelo It is possible, but the DB has more than 100K products, and I would need to scan every dimension of each product image and feed it in. What about storing the text content of the images in a vector database, then finding the match after the input image is scanned?

@vsraman85 I’m uncertain if the text content extracted from the images will suffice, but you could start by experimenting with a smaller dataset instead of applying it to all 100k products.

Also, I’m not clear on the necessity of scanning every dimension of the product images. It might be more effective to convert the images into vectors for greater accuracy. My suggestion is to begin with a smaller sample, store the image text content as vectors, and then assess the accuracy of this approach.


I think the solution here is even simpler. You don’t need a database of scanned images.

When the request comes in, I am assuming it will include an image, or consist entirely of the image. Use the API to retrieve a text description of the image.

If your product database is embedded, you just need to run a cosine similarity search of the text against your product vector store.

If your product database is NOT embedded, then you will need to use the API to construct a SQL request, based upon the image text, to search it.
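For the non-embedded branch, a sketch of turning the extracted image text into a SQL lookup. The `products(id, name, brand, size)` table schema and the model name are assumptions for illustration; the chat call follows the current v1 Python SDK:

```python
def build_sql_prompt(image_text: str) -> str:
    """Build the instruction that asks the model to translate the
    extracted image text into a SELECT against a (hypothetical)
    products(id, name, brand, size) table."""
    return (
        "Given this text extracted from a product photo:\n"
        f"{image_text!r}\n"
        "Write a single SQL SELECT against the table "
        "products(id, name, brand, size) that finds the matching product. "
        "Return only the SQL."
    )

def image_text_to_sql(image_text: str) -> str:
    """Ask the model for the query (assumes OPENAI_API_KEY is set)."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_sql_prompt(image_text)}],
    )
    return resp.choices[0].message.content
```

Treat the generated SQL as untrusted input: validate or parameterize it before running it against the database.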


As per the latest version of the OpenAI Python library, the import
from openai.embeddings_utils import get_embedding, cosine_similarity
fails with the error ModuleNotFoundError: No module named 'openai.embeddings_utils'. I don't see any note about this change on the OpenAI site either. Has anyone faced the same issue and resolved it?
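That module was removed in v1 of the OpenAI Python SDK, so the import fails on current installs. Both helpers are small enough to re-create by hand; a sketch assuming the v1 `OpenAI` client and an `OPENAI_API_KEY` in the environment:

```python
import numpy as np

def get_embedding(text, model="text-embedding-3-small"):
    """Replacement for the removed helper: embed `text` with the
    v1 OpenAI client (requires OPENAI_API_KEY to be set)."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.embeddings.create(model=model, input=text)
    return resp.data[0].embedding

def cosine_similarity(a, b):
    """Replacement for the removed helper: cosine of the angle
    between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

With these two functions defined, existing code that called `get_embedding` and `cosine_similarity` should work unchanged.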