Azure OpenAI Embeddings vs OpenAI Embeddings


Is anyone getting different results from an Azure OpenAI embeddings deployment using text-embedding-ada-002 than from OpenAI? Same text, same model, yet the results are considerably far apart in vector space.

[-0.01285131648182869, -0.007368265185505152, -0.017670560628175735, -0.02830475941300392, -0.018679548054933548, 0.017391759902238846,....

[-0.005709473, -0.018625768, 0.040557254, -0.023957571, 0.0006636985, -0.010459222, 0.01857245, 0.025699295, ...
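To quantify how far apart the two results are, you can compute the cosine similarity between them. A minimal sketch using just the first six components quoted above (a real comparison would use all 1,536 dimensions, so this is only illustrative):

```python
import math

# First six components of the two vectors quoted above (truncated).
azure = [-0.01285131648182869, -0.007368265185505152, -0.017670560628175735,
         -0.02830475941300392, -0.018679548054933548, 0.017391759902238846]
openai_vec = [-0.005709473, -0.018625768, 0.040557254,
              -0.023957571, 0.0006636985, -0.010459222]

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(azure, openai_vec))
```

Identical embeddings would give a similarity of 1.0; on these prefixes the value is near zero, consistent with the vectors being very different.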


Did you figure out the reason? I’m seeing the same discrepancy. The Azure API doesn’t allow batched embeddings (correct me if I’m wrong), so I was hoping to combine the two for build time and runtime.

The only explanation I can think of: it’s done deliberately to prevent embeddings from being reused between the two services.

Azure OpenAI doesn’t allow batched embeddings. You need to issue multiple simultaneous requests instead. If you hit the maximum requests/min, you will need to ask Azure for a quota increase.
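The "multiple simultaneous requests" workaround can be sketched with a thread pool. Here `embed_one` is a hypothetical stand-in for a single-input embeddings call; in practice you would replace its body with your actual Azure OpenAI client call:

```python
from concurrent.futures import ThreadPoolExecutor

def embed_one(text):
    # Hypothetical placeholder for one Azure embeddings request.
    # Swap in your real client call here; this stub just returns
    # a dummy vector so the sketch is runnable offline.
    return [float(len(text)), 0.0]

def embed_batch(texts, max_workers=8):
    # Fan out one request per text, since the endpoint does not
    # accept a batched list of inputs. pool.map preserves order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed_one, texts))

vectors = embed_batch(["hello", "embedding", "world"])
print(len(vectors))  # one vector per input, in input order
```

Keep `max_workers` low enough that the parallel requests stay under your deployment’s requests-per-minute quota.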


Would you say the output from Azure’s embedding is qualitatively worse than that from OAI’s because of these limitations?

Microsoft has updated its embeddings model now.

It returns the same vectors as the OpenAI endpoint.