What is the maximum float precision of OpenAI embeddings?

I’ve been working with the embeddings returned by the OpenAI API, and I noticed that the decimal precision of the values varies quite a bit: some go deep into the decimal places, while others don’t (see the snippet after the questions below for how I’ve been checking). So I’m wondering:

  • What’s the maximum number of significant digits OpenAI actually returns in their embeddings?
  • What is the maximum precision for each current model?
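
For reference, here’s roughly how I’ve been looking at the values. This is just a sketch using the official `openai` Python client; the model name and input are arbitrary examples, and the digit count reflects Python’s parsed float64 repr rather than the raw digits in the JSON response:

```python
# Rough sketch of how I've been checking the values; assumes the
# official openai Python client and an example model name.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input="hello world",
)
embedding = resp.data[0].embedding  # list of Python floats (64-bit after JSON parsing)

def decimal_digits(v: float) -> int:
    """Digits after the decimal point in the shortest round-trip repr."""
    s = repr(v)
    if "e" in s or "." not in s:
        return 0  # skip values printed in scientific notation
    return len(s.split(".")[1])

counts = [decimal_digits(v) for v in embedding]
print("min digits:", min(counts), "max digits:", max(counts))
```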

I’m trying to figure this out so I can choose the most efficient compression method without losing meaningful precision in the process. I haven’t been able to find a clear answer in the docs or anywhere else.
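
To make the compression question concrete, here’s a sketch of the kind of lossy approach I’m weighing: downcasting to float32/float16 with NumPy and measuring how much cosine similarity drifts. The random unit vector is a stand-in for a real embedding (1536 dimensions, as in text-embedding-3-small; OpenAI embeddings come back unit-normalized):

```python
# Downcast a stand-in embedding to float32/float16 and measure the
# cosine drift against the float64 original.
import numpy as np

rng = np.random.default_rng(0)
vec64 = rng.normal(size=1536)
vec64 /= np.linalg.norm(vec64)  # normalize, like the API's embeddings

vec32 = vec64.astype(np.float32)
vec16 = vec64.astype(np.float16)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("float32 cosine vs original:", cosine(vec64, vec32.astype(np.float64)))
print("float16 cosine vs original:", cosine(vec64, vec16.astype(np.float64)))
print("bytes: float64 =", vec64.nbytes, " float32 =", vec32.nbytes, " float16 =", vec16.nbytes)
```

But without knowing the actual precision the API returns, I can’t tell how aggressive it’s safe to be.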

If anyone has insights into the actual float precision (or any best practices for compressing these vectors), I’d really appreciate it!