Something I’d like to see is an official solution to the “bring your own API key” problem.
Not every idea is commercially viable enough to launch a product, but in some cases it would be interesting to safely allow the user to run the service on their own equipment.
It is already happening anyway, so making it safer could allow faster growth of the AI app ecosystem.
The problem, in my view, is that this is not as simple as it seems, which makes it hard for OpenAI to embrace.
Some key points that would need to be addressed would be:
- Auth: Like Google sign-in, the app needs to ask for explicit permission, listing which models it will use and how much usage it will require. The grant can be revoked at any time;
- Guardrails: The user can enforce moderation checks and log persistence to prevent and monitor malicious usage. This would block stealthy use of the app for ulterior purposes;
- Blacklist: If an app is reported or flagged for too many moderation violations, it can be blacklisted (this point needs further refinement);
- Limits: Apps should get a relatively low request rate, perhaps via an ephemeral key that can only be used X times in the next hour, after which new authorization is required. This would prevent the app from going rogue without the user's consent.
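To make the idea concrete, the auth, revocation, and limits points above could be combined into a single grant object that the provider checks on every request. This is only a minimal sketch in Python; `EphemeralGrant`, `max_uses`, and `ttl_seconds` are hypothetical names I made up for illustration, not any existing OpenAI API:

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class EphemeralGrant:
    """Hypothetical user-approved, short-lived, scoped key grant."""
    allowed_models: set       # models the user explicitly approved
    max_uses: int             # "X times in the next hour" cap
    ttl_seconds: int          # grant lifetime, e.g. 3600
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))
    created_at: float = field(default_factory=time.time)
    uses: int = 0
    revoked: bool = False     # user can flip this at any time

    def authorize(self, model: str) -> bool:
        """Check one request against the grant's scope and limits."""
        if self.revoked:
            return False      # user withdrew consent
        if time.time() - self.created_at > self.ttl_seconds:
            return False      # expired: app must ask the user again
        if model not in self.allowed_models:
            return False      # outside the approved model scope
        if self.uses >= self.max_uses:
            return False      # cap hit: forces a fresh auth round-trip
        self.uses += 1
        return True
```

Because the grant carries both the scope and the usage cap, the app never holds the user's long-lived key, and "running rogue" degrades into a failed `authorize` call rather than silent overspending.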