They do, but the issue is this context does not apply here.
Within Google Cloud there are Service Accounts that can be used with a single API key. I, for example, could run up a client’s bill by thousands of dollars with a quick request-bomb attack on something like their Document AI account, using nothing more than that API key.
Protecting Document AI access with something like App Check takes additional work. Could it make sense to just “pass over” the responsibility so someone can perform OCR on their documents? Sure. Does it match current standards? Hell no.
They do, but they don’t. Of all the companies, I respect OpenAI the most in this area because:
A) They will refund credits even if it was clearly the user’s fault
B) They actively search for leaked keys and revoke them
"Ok, here’s what you and everyone else need to do.
You need a server: a Lambda, a Google/Firebase Function, or Vercel Server Actions (easy-mode functions).
You need client software for iOS that sends requests to this server. You can go barebones with plain REST, which is insecure:
Client —REST—> Server
or you can use the client SDKs and do something more secure:
Client —REST under the hood—> Server
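Either way, the naive server is just a proxy. Here is a minimal sketch in TypeScript, assuming Express and the `openai` npm package (the endpoint name and payload shape are my own placeholders). Anyone who discovers this URL can burn your credits:

```ts
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// No auth, no attestation, no rate limit -- this is the problem.
app.post("/chat", async (req, res) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: req.body.messages,
  });
  res.json(completion.choices[0].message);
});

app.listen(3000);
```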
Now you still have a security problem, because your server API is unsecured. How do you secure your client->server calls? If you have your users create an account and log in to use the app:
Have an authenticated API endpoint that requires the user to be logged in to make a request (this is easy with Firebase or AWS Amplify, or ClerkJS + Vercel Functions…though Clerk unfortunately has no iOS SDK). A minimal sketch of that gate follows.
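Sketched here as a Firebase Functions v2 callable in TypeScript; the function name `chat` is a placeholder:

```ts
import { onCall, HttpsError } from "firebase-functions/v2/https";

// Callable that only signed-in Firebase Auth users may invoke.
export const chat = onCall(async (request) => {
  if (!request.auth) {
    throw new HttpsError("unauthenticated", "Sign in to use this endpoint.");
  }
  // ...forward request.data to OpenAI here (full example further down)...
});
```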
If you want your users to use the app without logging in and creating an account: this is harder. You have to ensure that every client->server call is ONLY coming from YOUR iOS app, and is ONLY being made by the app itself (not someone extracting a client token and forging requests).
This is called app attestation: the API call attests that it is indeed coming from your app. iOS supports an older mechanism (DeviceCheck) and a newer one (App Attest), and Firebase App Check builds on both. You can hard-mode implement this yourself or use the higher-level libraries.
And even after attesting the app, a crafty pen tester can still subvert that, so you need to rate limit your API, and also rate limit your server->OpenAI traffic.
If you just want a solution that works, do the following:
1. Set up Firebase Functions.
2. Use Functions v2 callables (secured). Set up Firebase App Check, and secure the functions so they only accept App Check-verified calls (see the sketch after this list).
a. If you have login, great: make the function calls require authentication.
b. If you don’t have login…App Check is better than nothing.
3. Set up the Firebase Functions client on iOS, so your iOS app calls the function.
4. The function makes the API call to OpenAI with your API key.
5. Set up rate limiting on the function, because…you never know.
6. Set a usage limit in OpenAI on your production API key.
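A sketch of steps 1–4 plus 2a in TypeScript, assuming the `openai` npm package and Firebase Functions v2; the function name `chat`, the secret name `OPENAI_API_KEY`, and the payload shape are my own placeholders:

```ts
import { onCall, HttpsError } from "firebase-functions/v2/https";
import { defineSecret } from "firebase-functions/params";
import OpenAI from "openai";

// The key lives in Secret Manager, never in the app bundle.
const openaiKey = defineSecret("OPENAI_API_KEY");

export const chat = onCall(
  {
    enforceAppCheck: true, // reject calls without a valid App Check token
    secrets: [openaiKey],
  },
  async (request) => {
    // Step 2a: if you have login, also require it.
    if (!request.auth) {
      throw new HttpsError("unauthenticated", "Login required.");
    }
    const openai = new OpenAI({ apiKey: openaiKey.value() });
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: request.data.messages,
    });
    return { reply: completion.choices[0].message };
  }
);
```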
Now you have Client → Server API → OpenAI.
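The client-side call, sketched here with the Firebase web SDK in TypeScript; the iOS SDKs (FirebaseFunctions + FirebaseAppCheck) follow the same shape, and all config values below are placeholders:

```ts
import { initializeApp } from "firebase/app";
import { initializeAppCheck, ReCaptchaV3Provider } from "firebase/app-check";
import { getFunctions, httpsCallable } from "firebase/functions";

// Placeholder config; use your real Firebase project settings.
const app = initializeApp({ projectId: "your-project" });

// App Check: the SDK attaches an attestation token to every callable request.
// On iOS you would register AppAttestProvider (or DeviceCheckProvider) instead.
initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider("your-recaptcha-site-key"),
});

async function ask(): Promise<void> {
  const chat = httpsCallable(getFunctions(app), "chat");
  const result = await chat({ messages: [{ role: "user", content: "Hi" }] });
  console.log(result.data);
}
```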
So essentially you get iOS->server API security, API rate limiting, and OpenAI rate limiting.
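For step 5, one naive way to rate limit inside the function: a fixed-window counter per caller in Firestore, wrapped in a transaction so the read-check-increment is atomic. The collection name and limits are made up:

```ts
import { initializeApp } from "firebase-admin/app";
import { getFirestore, FieldValue } from "firebase-admin/firestore";
import { HttpsError } from "firebase-functions/v2/https";

initializeApp();

const LIMIT = 20;         // max calls per caller per window (made up)
const WINDOW_MS = 60_000; // one-minute fixed window

// Call this from the function body with request.auth.uid before talking to OpenAI.
export async function checkRateLimit(uid: string): Promise<void> {
  const ref = getFirestore().collection("rateLimits").doc(uid);
  await getFirestore().runTransaction(async (tx) => {
    const snap = await tx.get(ref);
    const now = Date.now();
    const data = snap.data();
    if (!data || now - data.windowStart > WINDOW_MS) {
      // New window: reset the counter.
      tx.set(ref, { windowStart: now, count: 1 });
    } else if (data.count >= LIMIT) {
      throw new HttpsError("resource-exhausted", "Too many requests, slow down.");
    } else {
      tx.update(ref, { count: FieldValue.increment(1) });
    }
  });
}
```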
You lose streaming, but…you can go through the pain in the ass of setting up your API to stream and then forward the stream to the client. Figure out how to do that with a Function.
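If you do want streaming, one sketch: use a plain `onRequest` function and forward OpenAI’s stream chunk by chunk as server-sent events. Note that you give up the callable’s built-in App Check/auth handling, so you would verify those tokens yourself (omitted here). Newer firebase-functions releases also support streaming from callables; check the current docs.

```ts
import { onRequest } from "firebase-functions/v2/https";
import OpenAI from "openai";

export const chatStream = onRequest(async (req, res) => {
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: req.body.messages,
    stream: true,
  });
  res.setHeader("Content-Type", "text/event-stream");
  // Forward each delta to the client as it arrives from OpenAI.
  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify(chunk.choices[0]?.delta ?? {})}\n\n`);
  }
  res.end();
});
```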
Also, you can swap out Firebase for Vercel, but then you lose the easy login and App Check features (you have to hand-roll them), or you can go AWS Amplify (and you lose App Check and easy Firestore/Storage, etc.). Or host your own Express server on Railway and go through a different nightmare. Pick your nightmare.
The reality is that this takes a lot of setup to make secure, so if you are hand-rolling it yourself (passing JWTs and whatnot), god bless you; you don’t need to read anything I am writing because you are a better dev than me.
Otherwise, you need to use a lot of libraries that take away a ton of the work: authorization checks, authenticating calls, device attestation, etc.
There is a LOT of work to be done to use OpenAI securely from an iOS app. I understand why the authors decided to leave the documentation section regarding security architecture empty. Unfortunately, I don’t have the time to do a full PR and contribute; I only have the time to give you this little snippet to help you along the way."
– end quotation
I totally agree with the Firebase Function implementation; at least for me, that is the way to go.
Here is a tutorial on how to set up a Firebase Function for safe OpenAI communication.
Combined with Firebase Authentication, App Check and DeviceCheck plus rate limits, this is a good solution.
So for all the folks who come after me and are developing iOS apps: please consider this.
The problem is real, and since OpenAI is a very popular API, a key shipped in your app is very likely to get hijacked.