I understand where they are used, and that their voice samples can be tested in the playground. What I'm wondering is why there are no audio examples for the new voices anywhere in the documentation itself, so the voices could be evaluated for free and without signing in, the way it was done for the TTS API voices: https://platform.openai.com/docs/guides/text-to-speech#voice-options
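
As a stopgap, the only way I see to hear a voice outside the playground is to generate a sample yourself through the TTS endpoint, which requires an API key and so isn't the free, no-authorization option I'm asking about. A minimal sketch with the official Python SDK (the voice name `alloy` is just a placeholder; substitute whichever voice you want to try):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Request a short speech sample for the voice you want to evaluate.
# "alloy" is only a placeholder; swap in the voice name you're curious about.
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello! This is a short sample so you can hear what this voice sounds like.",
)

# Save the returned audio bytes and listen locally.
with open("voice-sample.mp3", "wb") as f:
    f.write(response.content)
```

That works, but it is exactly the kind of friction that embedded audio samples in the docs, like the TTS voice options page has, would remove.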