Is anyone getting different results from an Azure OpenAI embeddings deployment using text-embedding-ada-002 than from OpenAI? Same text, same model, yet the resulting vectors are considerably far apart in the embedding space.
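For reference, this is roughly how I'm comparing them. A minimal sketch using the openai>=1.0 Python SDK; the endpoint, key, and deployment name are placeholders you'd swap for your own:

```python
import numpy as np
from openai import OpenAI, AzureOpenAI

text = "The quick brown fox jumps over the lazy dog."

# Embedding from the OpenAI endpoint
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
oai_vec = openai_client.embeddings.create(
    model="text-embedding-ada-002", input=text
).data[0].embedding

# Embedding from an Azure OpenAI deployment of the same model
azure_client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<azure-api-key>",                                  # placeholder
    api_version="2023-05-15",
)
az_vec = azure_client.embeddings.create(
    model="<your-ada-002-deployment>",  # deployment name, not model name
    input=text,
).data[0].embedding

# Cosine similarity: identical embeddings should give ~1.0
a, b = np.array(oai_vec), np.array(az_vec)
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```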
Did you figure out the reason? I'm seeing the same thing. The Azure API doesn't allow batched embeddings (correct me if I'm wrong), so I was hoping to combine the two services, one for the build step and the other at runtime.
The only explanation I can think of is that this is done deliberately, to prevent embeddings from being reused across the two services.
Azure OpenAI doesn't allow batched embeddings. You need to send multiple simultaneous requests instead. If you hit the maximum requests per minute, you'll need to request a quota increase from Azure.
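Something like this is one way to do it. A rough sketch of the "many single requests" workaround with the openai>=1.0 Python SDK; the endpoint, key, and deployment name are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<azure-api-key>",                                  # placeholder
    api_version="2023-05-15",
)

def embed(text: str) -> list[float]:
    # One text per request, since batching isn't available here
    resp = client.embeddings.create(
        model="<your-ada-002-deployment>", input=text
    )
    return resp.data[0].embedding

texts = ["first document", "second document", "third document"]

# Cap concurrency so you stay under your requests-per-minute quota
with ThreadPoolExecutor(max_workers=5) as pool:
    embeddings = list(pool.map(embed, texts))

print(len(embeddings), len(embeddings[0]))
```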