My understanding of embedding models is that they are deterministic: the same text should always map to the same numerical vector.
I repeatedly regenerated the embedding for each of two words, about 10-15 times per word. The vectors were identical most of the time, but in two cases I got a different embedding back.
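Roughly what I did (a minimal sketch, assuming the OpenAI Python client and the text-embedding-3-small model; the exact model and word are just placeholders for illustration):

```python
from openai import OpenAI

client = OpenAI()

def get_embedding(text: str, model: str = "text-embedding-3-small") -> list[float]:
    """Request a single embedding vector for `text`."""
    response = client.embeddings.create(model=model, input=text)
    return response.data[0].embedding

# Regenerate the same embedding several times and count distinct results.
word = "hello"
embeddings = [tuple(get_embedding(word)) for _ in range(15)]
print(f"distinct vectors out of {len(embeddings)} calls: {len(set(embeddings))}")
```

If the model were fully deterministic, I'd expect that count to always be 1, but occasionally it isn't.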
Am I misunderstanding how embedding models work, or is there something going on under the hood of the API - a bug or otherwise?