API key compromised, API key security

I have an iOS app using the OpenAI API for GPT-3.5 only. Over the weekend I found out that my API key was used for excessive GPT-4 usage, which I am not using in my app. The only possibility is that my API key was compromised.
I get the API key from a secure HTTPS connection. It's not in my source code and it's not in my GitHub repository. I also encrypted the API key on the server side to make it even more secure. Changing the API key only stops the abuse for a few hours.
My question is whether anyone has had similar experiences and problems with the security of the API keys. I don't think the API key was reverse-engineered from my code; I rather think it is hijacked during the actual API requests, which would be a problem with the OpenAI API architecture.
Any advice would be great, except the usual answers I'm tired of hearing: don't put it in your code, don't commit it to the repository, etc.

The API key should never be in your app.

Your app should never connect to the OpenAI servers directly.

Your architecture is fundamentally unsafe.


What are you talking about? The API key is not in my code. And what do you mean by not communicating with the OpenAI server directly?
Are you a bot? Your answer came just 10 seconds after I posted my question. Not very helpful then…

I’m not a bot, just fast.

It would be really impressive if it was a bot answer though.

You wrote that,

Which reads to me that while the API key is not hard-coded into your app, your app is retrieving it in order to make the API call from the users’ devices.

This is inherently unsafe.

The API key should never leave your control.

```mermaid
sequenceDiagram
    participant Client as iOS App Client
    participant Your_Server as Your Server
    participant OpenAI_API as OpenAI API Server

    Client->>Your_Server: Draft and send message
    Your_Server->>OpenAI_API: Forward message
    OpenAI_API->>Your_Server: Generate and return response
    Your_Server->>Client: Forward response back to client
```

Your API key should only ever be communicated between your server and OpenAI’s server. If you ever send it to a client it will, with near-certainty become compromised.


Ok, not a bot :wink:
So my API key is encrypted on a server and my app gets the encrypted key via HTTPS. (Decryption happens on the client and can still be hacked, but that is not what is happening.) So that is not enough, I found out.
I don't have a proxy yet, which I will eventually need. I was not aware of that. Thank you for that information!!!
Still, that's some poor API architecture when I have to set up my own proxy to access an API just to make sure my API key doesn't get hijacked. And it got hijacked in NO time, meaning within a few days of being in the App Store. Could have been really bad.
Anyhow, that needs some improvement on OpenAI's side.


That’s every API architecture.

You cannot put anything on a user’s device in such a way to prevent access to it—it’s their device.

You can make it harder, yes, but it's their device and their network, and the API key must be sent in a way the endpoint can recognize. If the client is sending the key, the client can capture the key.


Got it.
Are there any recommendations for a self-hosted proxy server that can handle OpenAI requests? Or are people writing their own proxies?

Oh… Oh no… Why is it that I hear this once every other week in this EXACT format. This needs to be made clear: If you are sending an encrypted key to a client for them to use with OpenAI servers your key is not encrypted.

Perform the request yourself and send the results. You get 2 million free invocations a month with Google Cloud Functions.


It's dead simple to set something up in Flask. I have some code at home I can share when I get back. Or you can just ask ChatGPT to whip something up.

Just make sure you secure your endpoint too!
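A minimal sketch of such a Flask proxy, including the "secure your endpoint" part. Everything here is an assumption for illustration: the `/chat` route, the `X-App-Token` header, and the `APP_SHARED_SECRET` / `OPENAI_API_KEY` environment variable names are all made up; adapt them to your setup.

```python
import os

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)


@app.route("/chat", methods=["POST"])
def chat():
    # Protect the proxy itself: reject requests that don't carry the
    # shared secret your app sends (hypothetical header/env var names).
    if request.headers.get("X-App-Token") != os.environ["APP_SHARED_SECRET"]:
        abort(401)

    # The OpenAI key lives only on this server; the client never sees it.
    upstream = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo", "messages": request.json["messages"]},
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code
```

A shared static token like this is only a first layer; per-user authentication (e.g. Firebase Auth, signed tokens) is stronger.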


Thanks for your answer. I looked into Google Cloud Functions, or Firebase Functions in my case. It's a solution, but it seems that Cloud Functions don't support streaming, which is crucial for my app. So I'm looking into other solutions now…

Where did you find this? It certainly does support streaming

here for example: Is there a way to stream OpenAI (chatGPT) responsse when using firebase cloud functions as a backend? - Stack Overflow
Proxy and streaming seem to be an issue for some people; trying to find the best solution…

The context is a little different here.

You can stream with Cloud Functions, 100%. I have used it for years and have been streaming the response (from OpenAI) without any issues.

You can’t stream the response back to the user. This is by design. Typically you would instead update something like the Realtime Database with the chunks and have the user subscribe to it.

If you don’t want a database (because you’re taking advantage of Assistant Threads, for example) you can use Cloud Run. You may be able to use Cloud Functions v2, though; not sure, it is built on Cloud Run.
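The relay pattern described above (consume the stream server-side, publish each chunk somewhere the client subscribes to) can be sketched independently of any particular database. Here `push` is a stand-in for whatever publishes one chunk, e.g. a Realtime Database reference's `push()`; the wiring is hypothetical.

```python
def relay_stream(chunks, push):
    """Forward streamed response chunks to a sink the client subscribes to.

    `chunks` is any iterable of text deltas (e.g. from a streaming chat
    completion); `push` is a callable that publishes one chunk.
    Returns the assembled full response for logging/storage.
    """
    parts = []
    for chunk in chunks:
        parts.append(chunk)
        push(chunk)  # the client sees each chunk as it arrives
    return "".join(parts)
```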

"Are there any recommendations for a self-hosted proxy server that can handle openai requests? Or are people writing their own proxies?"

I’m confused by what you’re talking about when you say a proxy. You would store the API key as an environment variable on your server.

The server receives the content from the user/client and the server makes the OpenAI request using the API key from the environment. The server then processes the OpenAI response into the format your app requires, sending only the information it needs.
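That last step, "sending only the information it needs," can be as simple as projecting the upstream JSON down to a couple of fields. A sketch, assuming the chat completions response shape (`choices[0].message.content`); the output field names are made up:

```python
def trim_response(openai_response: dict) -> dict:
    # Project the full chat completions payload down to the fields the
    # app actually renders; everything else stays server-side.
    choice = openai_response["choices"][0]
    return {
        "text": choice["message"]["content"],
        "finish_reason": choice.get("finish_reason"),
    }
```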

Yes, I understood that now.
Probably another stupid question, but once I have the server making the request, I need to protect the server itself with some kind of API key to make sure it doesn't get used by any other client. So the OpenAI key is safe, but an attacker could instead just make requests to the server directly. So I need to protect the server now, too.


My rule of thumb is to never allow your credentials to float around your application. Don't put your credentials in environment variables; that's a no-no. Expose your clients to API functions running on a server or Lambda (or something similar), and let the server/Lambda fetch your credentials from a vault.

Most of the answers given are correct, but I want to add that a good practice is to store your credentials in a vault. Your server only retrieves them in an encrypted format whenever they are needed (use and discard).

A good example of a vault is AWS Systems Manager Parameter Store: you store your credentials in an encrypted format, read and use them in your server/Lambda functions, and that's it. The chances of your credentials being hijacked with such a design pattern are minimal.

Jonathan Ekwempu

Hi Peter, I encountered a similar problem. The solution involves setting up a server that processes requests from your iOS application and forwards them to the OpenAI API. By doing this, you can securely include your own key in the message header, which the server can verify before responding. This ensures that the server responds exclusively to requests from your app. Note that this is just one additional layer of security on your server.
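One detail worth adding to the header check described above: compare the token in constant time, so the secret can't be recovered byte-by-byte from response timing. A minimal sketch; the `X-App-Token` header and `APP_SHARED_SECRET` environment variable are hypothetical names.

```python
import hmac
import os


def is_authorized(request_headers: dict) -> bool:
    # hmac.compare_digest runs in constant time regardless of where
    # the two strings first differ, unlike a plain == comparison.
    supplied = request_headers.get("X-App-Token", "")
    expected = os.environ.get("APP_SHARED_SECRET", "")
    return hmac.compare_digest(supplied, expected)
```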

First of all, thanks to you all for the answers.

It just really upsets me that we pay for API usage and still have to set up our own server/proxy or cloud functions to make really secure calls. (Talking about apps here, basically.)

Instead, OpenAI could come up with a secure client/server environment out of the box, like Firebase/Google Cloud does with their App Check function, for example:

"With App Check, devices running your app will use an app or device attestation provider that attests to one or both of the following:

  • Requests originate from your authentic app
  • Requests originate from an authentic, untampered device

This attestation is attached to every request your app makes to the APIs you specify"

That just works, and I don't have to build another server, which brings yet another layer of complexity with it.

In the end, OpenAI also gets its money from the compromised/fraudulent usage, which is paid by me, the actual customer.

It's really comical: here's an API for you, but you have to build your own server and your own API first, or you also pay for the hackers using your API.

Anyhow, I'm looking into claude.ai to see what they are doing…

I understand where you’re coming from, but almost all paid-for APIs work in similar ways, especially ones that require the server to manage state for the majority of their endpoints.

Firebase is a unique case because it’s part of the design. Its functionality all depends on Authentication. It would not make sense for OpenAI to also manage authentication on your behalf, because it’s not their responsibility.

App Check (which can use reCAPTCHA as its attestation provider) is a huge PITA as well. Keep in mind that App Check isn’t performing authentication on your behalf; instead it is confirming that the device sending the request is doing it through your application. This is different.

You can use AppCheck if you’d like. It’s not bound to a platform like Firebase.

You are greatly limiting yourself by trying to find a way to just “touch” the request and feed back the exact results.


There is also DeviceCheck, which checks against an Apple private key in combination with App Check. The point is they do a lot to provide a great developer experience, because they know the pain points of client/server applications.

OpenAI should know that too. But they also profit from compromised keys. That's kind of fucked up, in the same way Apple hardly removes overpriced fraud apps because they profit from them.

IMO this disclaimer should not exist in the first place:

:warning: OpenAI strongly recommends developers of client-side applications proxy requests through a separate backend service to keep their API key safe. API keys can access and manipulate customer billing, usage, and organizational data, so it’s a significant risk to them.