OpenAI implementation vs. Azure OpenAI implementation

Hi,

I am using the Azure AI Service with the GPT-3.5 model to get chat completions.

Here is the procedure in Azure:

  1. Upload the documents to Azure Storage (container)
  2. Create a skillset to generate embeddings with the text-embedding-ada-002 deployment model
  3. Documents uploaded to the container are indexed with the Azure Search service (Cognitive Search)
  4. A GPT-3.5 model deployment is used to make chat completion calls. This model uses the Azure Search service to generate the text responses.
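For step 4, the search index is typically attached to the chat completions call as a data source ("on your data"). A minimal sketch of that request body follows; the endpoint names, index name, and parameter keys are assumptions based on the Azure OpenAI "on your data" feature, so check them against your API version:

```python
import json

# Hypothetical values -- substitute your own resource names and keys.
AZURE_OPENAI_ENDPOINT = "https://my-resource.openai.azure.com"  # assumption
SEARCH_ENDPOINT = "https://my-search.search.windows.net"        # assumption
SEARCH_INDEX = "my-docs-index"                                  # assumption

# Request body for a chat completion grounded in the Azure Search index.
body = {
    "messages": [
        {"role": "user", "content": "Summarize the uploaded documents."}
    ],
    "data_sources": [
        {
            "type": "azure_search",
            "parameters": {
                "endpoint": SEARCH_ENDPOINT,
                "index_name": SEARCH_INDEX,
                "authentication": {"type": "api_key", "key": "<search-key>"},
            },
        }
    ],
}

# This JSON would be POSTed to
# {AZURE_OPENAI_ENDPOINT}/openai/deployments/<gpt-35-deployment>/chat/completions
payload = json.dumps(body)
```

The point of the sketch is the shape of the payload: the search service is declared alongside the messages, so the deployment retrieves from the index before answering.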

I am looking to implement something similar with OpenAI.

Here are my questions:

  1. Does OpenAI have storage to upload documents to?
  2. Does OpenAI have APIs to turn text chunks into embeddings?
  3. Does OpenAI have APIs to let a GPT-3.5 model use the embeddings to generate responses?

And finally:
We have multiple projects, each with a separate set of documents. Does OpenAI have an API where I can pass a projectId so that it looks only at that project's data set?

Can you help me understand whether such an implementation is possible with OpenAI?

  1. Yes, it does.
    Files endpoint
    As the documentation linked above describes, there is a Files endpoint where you can upload files for Assistants, fine-tuning, and Batch API purposes.
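As a sketch of what such an upload involves, here is the request shape (not an executed call). The endpoint path and field names follow the public API reference; the file name is a hypothetical placeholder:

```python
# Sketch of a Files endpoint upload (POST https://api.openai.com/v1/files).
# The request is multipart/form-data with two fields; "file" here is a
# hypothetical document name.
upload_request = {
    "url": "https://api.openai.com/v1/files",
    "headers": {"Authorization": "Bearer <OPENAI_API_KEY>"},
    "form_fields": {
        "purpose": "assistants",     # other purposes: fine-tuning, batch
        "file": "project_docs.pdf",  # hypothetical document
    },
}
```

The `purpose` field is what routes the uploaded file to the right feature; for document Q&A with the Assistants API it would be `"assistants"`.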

  2. Yes, it does.
    Embeddings endpoint
    There is an API to embed text into vectors using the embeddings endpoint.
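A sketch of the request body, plus the cosine-similarity ranking that retrieval pipelines typically apply to the returned vectors (the model name is one of the published embedding models; the toy three-dimensional vectors are illustrative only, as real embeddings have hundreds of dimensions):

```python
import math

# Sketch of an Embeddings endpoint request
# (POST https://api.openai.com/v1/embeddings).
request_body = {
    "model": "text-embedding-ada-002",
    "input": ["First text chunk", "Second text chunk"],
}

# Once the API returns vectors, chunks are usually ranked against a
# query vector by cosine similarity:
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

query_vec = [0.1, 0.9, 0.0]  # toy vectors for illustration
chunk_vec = [0.1, 0.9, 0.0]
print(round(cosine(query_vec, chunk_vec), 3))  # identical vectors -> 1.0
```

The chunks with the highest similarity to the query are the ones handed to the chat model as context.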

  3. Yes, it does.
    File search with the Assistants API

Using the Assistants API, you can embed the data uploaded via the Files endpoint into vectors, and the language model can use them to generate responses.
Please note that the Assistants API is still in beta, but the answer is "yes."
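Regarding the per-project question asked earlier: with the beta Assistants API, file search reads from vector stores, so one way to keep projects separate is to give each project its own vector store and attach only that store to the assistant. A sketch under that assumption (all IDs below are hypothetical placeholders):

```python
# Sketch: scoping each project to its own vector store with the
# (beta) Assistants API. The IDs are hypothetical placeholders.
project_vector_stores = {
    "project-a": "vs_abc123",
    "project-b": "vs_def456",
}

def assistant_request(project_id):
    """Build a create-assistant body whose file_search tool only
    sees the given project's vector store."""
    return {
        "model": "gpt-3.5-turbo",
        "tools": [{"type": "file_search"}],
        "tool_resources": {
            "file_search": {
                "vector_store_ids": [project_vector_stores[project_id]]
            }
        },
    }
```

Because the assistant is only ever given one project's vector store ID, its answers can only draw on that project's documents, which is effectively the "projectId" scoping asked about.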

I hope this helps in some way. 🙂