Querying a vector store directly

Instead of attaching a vector store containing chunks of a file to an assistant or thread and getting an answer that way (which goes through an LLM), is there any method in the OpenAI API to run a similarity search directly against an OpenAI vector store and retrieve the chunk with the highest similarity?

No
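The usual workaround is to skip the hosted vector store entirely: embed your chunks yourself with the embeddings API, keep the vectors client-side, and rank them by cosine similarity against the query embedding. Below is a minimal sketch of that ranking step; the tiny 3-dimensional vectors and chunk labels are made-up placeholders standing in for real embeddings (e.g. from `text-embedding-3-small`, which returns 1536-dimensional vectors).

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-D embeddings standing in for real ones obtained via
# client.embeddings.create(...) -- purely illustrative values.
chunks = {
    "chunk about billing":  [0.9, 0.1, 0.00],
    "chunk about refunds":  [0.7, 0.6, 0.10],
    "chunk about shipping": [0.0, 0.2, 0.90],
}

# Embedding of the user's query (also a made-up toy vector).
query = [0.8, 0.2, 0.05]

# Return the chunk whose embedding is most similar to the query.
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
print(best)  # → chunk about billing
```

In a real setup you would embed both the file chunks and the query with the same embedding model and store the vectors in a list or a local index; for large corpora a proper vector database is the better fit, but the ranking logic is the same as above.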