Implementing a file upload in my application using the OpenAI API

I have been using the GPT-4 API for a while now. I want to add a file upload option to it: if a file has been uploaded, I want the model to remember the file contents and answer accordingly.

You have two options:

  1. Utilize the Assistants API with Retrieval capability.
  2. Build your own RAG pipeline by storing your own embeddings in a vector store.

The Assistants API is easy to set up, but responses can be slower because it is still in beta, and it is relatively more expensive. Conversely, building your own RAG pipeline gives you complete control over the embedding process and is more cost-effective. However, setting up RAG is more complex than using the Assistants API.

In short: If you want something that works out-of-the-box, opt for the Assistants API. For greater control and cost-efficiency, consider the RAG approach with your own embeddings.
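For illustration, here is a rough sketch of the Assistants API route using the Python SDK. It assumes the beta endpoints available at the time of writing, and the file path, instructions, and model name are placeholders; exact names may change while the API is in beta.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the user's file so the assistant can retrieve from it.
uploaded = client.files.create(
    file=open("product_manual.pdf", "rb"),  # placeholder path
    purpose="assistants",
)

# Create an assistant with the Retrieval tool enabled and the file attached.
assistant = client.beta.assistants.create(
    model="gpt-4-turbo-preview",  # placeholder model choice
    instructions="Answer questions using the attached file when relevant.",
    tools=[{"type": "retrieval"}],
    file_ids=[uploaded.id],
)

# Each conversation lives in a thread; add the user's question to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the weight of the product described in the file?",
)

# Start a run and poll until the assistant has finished responding.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message in the thread is the assistant's answer.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

The main appeal is that chunking, embedding, and retrieval all happen server-side; the trade-off is less control over how that retrieval works.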


Could you explain more clearly?

“What are embeddings” by Cohere: “Embeddings are a way to represent the meaning of text as a list of numbers. This is useful because once text is in this form, it can be compared to other text for similarity, clustering, classification, and other use cases.”

A vector store is a place to store your embeddings and compare them against the embeddings of your files.

RAG stands for “Retrieval Augmented Generation”, which basically means that the system will run embeddings of the user’s prompt against the files you provide.

Imagine the user asks “What is the weight of [imaginary product]?”. The system will then embed the prompt and compare it against the files you provided. The embeddings engine will rank which pieces of information are most likely to answer that question (or at least be related to it), and pass that info to the LLM, which will craft the response.
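To make that concrete, here is a minimal sketch of that retrieval flow with your own embeddings, using the OpenAI Python SDK and numpy. The chunk texts, question, and model choices are made up for illustration.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Chunks pulled from the files you provided (placeholder text).
chunks = [
    "The [imaginary product] weighs 1.2 kg and ships in a 30x20x10 cm box.",
    "Warranty: 24 months from the date of purchase.",
    "The device is powered by a 5000 mAh battery.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vectors = embed(chunks)  # done once, ideally kept in a vector store
question = "What is the weight of [imaginary product]?"
question_vector = embed([question])[0]

# Cosine similarity between the question and every chunk.
scores = chunk_vectors @ question_vector / (
    np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(question_vector)
)
best_chunk = chunks[int(np.argmax(scores))]

# Pass the most relevant chunk to the LLM so it can craft the response.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{best_chunk}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```

In a real app you would store the chunk vectors in a vector store instead of recomputing them on every request, and pass the top few chunks rather than just one.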

Further explanation by Cohere: Retrieval Augmented Generation (RAG)


Doesn’t OpenAI provide any APIs for document uploading?

Will the server incur additional costs if I include RAG in my project and deploy it to the server? My requirement is that it analyse the data from a file, as GPT-4 does, and hold conversations about it when I upload a document in CSV or XLSX format.

How you perform the RAG is up to you. You could use the text-embedding-ada-002 model from OpenAI, which is very cheap, or you can host your own embedding models if you wish.
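As a rough sketch (not a definitive implementation), this is what embedding the rows of a CSV with the Python SDK could look like before writing them to a vector store; the file name, batch size, and row-to-text formatting are assumptions you would adapt to your data.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()

# Read the uploaded spreadsheet; use pd.read_excel for .xlsx files.
df = pd.read_csv("sales_data.csv")  # placeholder file name

# Turn each row into a short text snippet that can be embedded.
rows_as_text = [
    ", ".join(f"{col}: {val}" for col, val in row.items())
    for row in df.to_dict(orient="records")
]

# Embed in batches to stay well under the request size limits.
vectors = []
for start in range(0, len(rows_as_text), 100):
    batch = rows_as_text[start:start + 100]
    resp = client.embeddings.create(model="text-embedding-ada-002", input=batch)
    vectors.extend(d.embedding for d in resp.data)

# `vectors` can now be written to whichever vector store you choose
# (pgvector, Chroma, Pinecone, etc.) alongside the original row text.
print(f"Embedded {len(vectors)} rows")
```

The cost is the embedding calls at indexing time plus one embedding per user query, so for typical spreadsheet sizes it is a small fraction of the chat completion cost.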

Is it recommended to use OpenAI’s Assistants API on a live server, considering its current beta status? What potential difficulties or issues should be taken into account, given the ongoing development and potential updates during this beta phase?