I am building a web application that uses embeddings to find relevant search results across large documents. The embeddings are already precomputed, but when a query is asked it takes at least 5 minutes to get back the search results.
Models used: “text-search-curie-doc-001”, “text-search-curie-query-001”
Reference: Semantic_text_search_using_embeddings.ipynb (openai/openai-python on GitHub)
Backend: Django
Source file size: 1.2 MB
Embeddings CSV file size: 3 GB
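To show what runs per query, here is a stripped-down version of the search step, following the notebook's pattern. This is a sketch, not my exact code: `cosine_similarity` is my own NumPy reimplementation of the helper utility, and the toy DataFrame stands in for the real 3 GB CSV (the curie search models return 4096-dimensional vectors; shrunk to 8 dims here for readability).

```python
import numpy as np
import pandas as pd

# Toy stand-ins for the real data loaded from the embeddings CSV.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "text": [f"document {i}" for i in range(5)],
    "embedding": [rng.standard_normal(8).tolist() for _ in range(5)],
})
query_embedding = rng.standard_normal(8)

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (NumPy version of the helper)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Same per-row pattern as the notebook: score every document against the
# query embedding, sort by similarity, and take the top n results.
df["similarity"] = df["embedding"].apply(lambda e: cosine_similarity(e, query_embedding))
results = df.sort_values("similarity", ascending=False).head(3)
print(results[["text", "similarity"]])
```

In the real app the DataFrame is built from the 3 GB CSV, so most of the 5 minutes may well be spent reading and parsing that file rather than in the similarity math itself.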