Integrating OpenAI's Whisper & TTS with FastUI's Web Component Demos

:wave: Hello,

I’m exploring a proposal to integrate OpenAI’s Whisper and text-to-speech (TTS) models into FastUI (pydantic/FastUI), the new Python web framework from the Pydantic (pydantic/pydantic) team. I’ve been working on adding a media-recorder component and an audio-playback component to FastUI. After successfully building an audio recorder prototype, I began creating its demonstration page, and my first thought was to integrate it with OpenAI: record speech in the browser, transcribe it with Whisper, and speak responses back with TTS. I’m willing to build these integrations into the demo if there’s interest from OpenAI. To be clear, I’m not seeking financial compensation for this work; I’m interested in API access for demonstration purposes, to showcase the capabilities of these models within FastUI.
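For concreteness, here’s a minimal sketch of what the demo’s server side could look like, assuming the recorder component POSTs its captured clip as a file upload and the playback component points at a speech endpoint. The endpoint paths and that wiring are assumptions from my prototype, not part of FastUI today; the OpenAI calls are the standard `audio.transcriptions` and `audio.speech` APIs:

```python
from fastapi import FastAPI, Response, UploadFile
from fastui import FastUI, AnyComponent, components as c
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


@app.post("/api/transcribe", response_model=FastUI, response_model_exclude_none=True)
async def transcribe(audio: UploadFile) -> list[AnyComponent]:
    # Whisper leg: my prototype recorder component would POST its clip here.
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=(audio.filename or "clip.webm", await audio.read()),
    )
    # Render the transcript straight back into the page as FastUI components.
    return [c.Paragraph(text=transcript.text)]


@app.get("/api/speak")
def speak(text: str) -> Response:
    # TTS leg: the playback component would use this endpoint as its audio source.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    return Response(content=speech.content, media_type="audio/mpeg")
```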

I’m excited about the potential to collaborate, especially given the current wave of interest in audio-first applications, and would love to hear your thoughts, feedback, or interest in this proposal. Let’s explore how we can showcase the power of these audio capabilities together.

Thank you for considering this opportunity. :clinking_glasses:

P.S. Not to distract from the excitement around audio, but I’d also like to mention that support for video recording is right around the corner.

Best,
Zac