There’s no confirmed update, but small backend changes can affect results. You also can’t pin a version, so for stability, save and reuse your embeddings.
If the model has been stealth-altered, queries embedded with the new version will produce different vectors, so their dot products won’t match expectations when you run semantic comparisons against a database that was built with an earlier, incompatible version of the model.
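To make the mismatch concrete, here is a toy sketch with hypothetical 2-d vectors (real embeddings have hundreds or thousands of dimensions): the same query text, embedded by an unchanged model, reproduces the stored dot product, while a silently changed model drifts away from it.

```python
import numpy as np

# Hypothetical vectors for the same query text from two model versions.
db_vec = np.array([0.6, 0.8])      # stored embedding, built with the old version
query_old = np.array([0.6, 0.8])   # re-embedded with that same version
query_new = np.array([0.8, 0.6])   # same text, silently changed model

print(db_vec @ query_old)  # 1.0  -- matches expectation
print(db_vec @ query_new)  # 0.96 -- drifted, despite identical input text
```

The numbers are made up; the point is only that identical text no longer maps to the same point in embedding space, so every similarity score computed against the old database shifts.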
If you have an embeddings database, you can re-run the same text and check statistically whether the per-dimension values agree to within 1% (they won’t be identical, because of non-determinism that already existed). That would let you answer for others, without any historic 3-small data, whether the model has changed.
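That check could be sketched roughly like this, assuming you have the saved vectors and freshly re-embedded vectors for the same texts as numpy arrays. The 1% per-dimension relative tolerance is one reading of "within under 1%"; the small `denom` floor is there only to avoid dividing by near-zero dimensions.

```python
import numpy as np

def embeddings_match(saved: np.ndarray, fresh: np.ndarray, tol: float = 0.01) -> bool:
    """True if every dimension of every vector agrees within `tol` (1%) relative difference."""
    denom = np.maximum(np.abs(saved), 1e-12)  # guard against division by zero
    rel_diff = np.abs(saved - fresh) / denom
    return bool(np.all(rel_diff < tol))

# Simulated check: tiny jitter (the pre-existing non-determinism) passes,
# while a systematically shifted vector (a changed model) fails.
rng = np.random.default_rng(0)
saved = rng.normal(size=(3, 8))
jittered = saved * (1 + rng.normal(scale=1e-4, size=saved.shape))
shifted = saved + 0.5

print(embeddings_match(saved, jittered))  # True
print(embeddings_match(saved, shifted))   # False
```

In practice you might prefer comparing cosine similarity per vector rather than per-dimension ratios, since dimensions close to zero make relative differences noisy.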
I agree that saving the embeddings isn’t a solution. My app compares candidate keywords suggested by the user against existing keywords to see whether any of them are semantically similar to the new candidate. So if my persisted embeddings predate a stealth change and the candidate’s embedding is calculated after it, my comparison will be less precise. And the only solution would be to recalculate all embeddings before each comparison…
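For reference, a minimal sketch of the comparison step being described, with hypothetical names and a made-up 0.85 threshold: the candidate keyword’s embedding is scored by cosine similarity against the persisted keyword embeddings. The precision loss described above would show up here, because `candidate` and `stored` would come from different model versions.

```python
import numpy as np

def similar_keywords(candidate: np.ndarray, stored: np.ndarray, threshold: float = 0.85) -> np.ndarray:
    """Indices of stored keyword embeddings whose cosine similarity
    to the candidate embedding meets the (hypothetical) threshold."""
    c = candidate / np.linalg.norm(candidate)
    s = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    sims = s @ c  # cosine similarity of each stored row vs. the candidate
    return np.flatnonzero(sims >= threshold)

# Toy data: the candidate is close to the first stored keyword only.
stored = np.array([[1.0, 0.0], [0.0, 1.0]])
candidate = np.array([0.9, 0.1])
print(similar_keywords(candidate, stored))  # [0]
```

Re-embedding everything before each comparison would keep both sides on the same model version, at the cost of an API call per stored keyword every time.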