Regarding ChatGPT completions: I already have a platform acting as a gateway, where my authenticated users can call my API endpoint exactly the same way they would call the OpenAI endpoint, but without an OpenAI key. With just one API key, my platform handles the requests from my users and responds to them with the OpenAI response.
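For context, the core of such a gateway is a credential swap: drop the user's gateway token and attach the platform's OpenAI key before forwarding. A minimal sketch (function name and placeholder keys are illustrative, not from any real codebase):

```python
# Sketch of the gateway's key swap. "sk-platform-key" and the
# user token below are placeholders, not real credentials.
def build_upstream_request(user_headers: dict, body: bytes, platform_key: str) -> dict:
    """Replace the user's gateway credential with the platform's OpenAI key."""
    headers = {k: v for k, v in user_headers.items() if k.lower() != "authorization"}
    headers["Authorization"] = f"Bearer {platform_key}"
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": headers,
        "body": body,
    }

req = build_upstream_request(
    {"Authorization": "Bearer user-gateway-token", "Content-Type": "application/json"},
    b'{"model": "gpt-4o", "messages": []}',
    "sk-platform-key",
)
```

This works fine for request/response traffic; the problem described below starts when the upstream connection becomes a long-lived stream.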
I assume that, in the near future, to support GPT-4o audio input from users, OpenAI will accept an incoming audio stream (WebSocket, Protocol Buffers, or something similar). This is a game changer in terms of server resources: to keep my platform's structure as it is now, I would need to create, replicate, and manage those streams myself. Doable, but it would add complexity, latency, and cost.
One simple solution would be a different approach: if I could create an individual API key for each of my users' sessions, I could just give each user the key they need to communicate directly with OpenAI.
For this, it would be amazing to have endpoints in the current OpenAI API to create/delete API keys.
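To make the request concrete, here is an in-memory mock of the desired flow. The create/delete key endpoints do not exist in the OpenAI API today; the class, routes in the comments, and key format are all hypothetical:

```python
import secrets

class MockKeyService:
    """In-memory stand-in for the proposed key-management endpoints.
    These endpoints are a feature request, not part of the real OpenAI API."""
    def __init__(self):
        self.keys = set()

    def create_key(self) -> str:
        # Hypothetical: POST /v1/api_keys -> {"key": "sk-session-..."}
        key = "sk-session-" + secrets.token_hex(8)
        self.keys.add(key)
        return key

    def delete_key(self, key: str) -> None:
        # Hypothetical: DELETE /v1/api_keys/{key}
        self.keys.discard(key)

svc = MockKeyService()
session_key = svc.create_key()   # issued when a user session starts
svc.delete_key(session_key)      # revoked when the session ends
```

With something like this, the gateway would only mint and revoke short-lived keys, and the audio streams themselves would flow directly between each user and OpenAI.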