So I have the OpenAI API running on a webpage. But I’ve realized that my API key is just there in a .js file in plain text. What’s to keep some guy from stealing the API key and running his GPT traffic on my bill?
Is there some way to encrypt it, keep it server-side, or restrict GPT to only responding to calls from my website’s domain?
So I asked GPT; it said all of these were workable strategies and added one more: CORS. But CORS doesn’t actually apply here, and it then proceeded to hallucinate about how the others would work.
First things first - you should revoke that API key.
There are multiple ways to secure your OpenAI API key, including environment variables and a middleware API. You should use whatever fits your requirements.
Thanks for that. I’ll be pulling it as soon as I have fixed the problem.
Do you have any specific suggestions? “You should use whatever fits your requirements.”
If you’ve deployed on Vercel, you can use environment variables to store the keys. Similarly for Azure.
Those are both cloud systems not web servers.
I’ve now been going round and round with GPT on this. (“You could do X.” “Couldn’t a thief do X+1?” “I apologize for the confusion. You could do Y.” “Couldn’t a thief just do Y+1?” “I apologize for the confusion…”) There is apparently no way to implement any kind of client-side web solution securely.
So unless OpenAI cooperates with us in some way (allowing us to limit keys to certain URLs, or setting up a key-encryption system), you can only use some sort of server-side (cloud) solution, and them telling us we can use Python scripts and make our own client-side apps is a lie. You can’t do any of that without handing your end users your API keys.
My current best guess at a stop-gap solution is to change my API keys every 24 hours.
In that case you can use an environment variable on your web server, or write an entire middleware API with authentication and host it on your web server.
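To make the environment-variable half of this concrete, here is a minimal sketch in Node.js. The variable name `OPENAI_API_KEY` is just a common convention, not something this thread specifies:

```javascript
// Minimal sketch: read the secret from an environment variable on the server.
// The `env` parameter defaults to process.env but can be overridden for tests.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set; export it before starting the server");
  }
  return key;
}

// Because this runs only on the server, the key never appears in any
// file that is shipped to the browser.
```

The important part is *where* this code runs: an environment variable only helps if it is read by server-side code, which is the point the next reply makes.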
But if you inject an environment variable from your web server into a client-side script, you’re still exposing its value to the client.
And having to write an entire middleware API with authentication and host it sure makes them telling us we can use Python scripts and make our own client-side apps a lie.
I’m no security advisor, so take this with a grain of salt, but there should be no need to send the API key to the client.
The API key would be stored on your web-server and used to make API calls to OpenAI’s server. The client would interact with your web-server, which would then make the API calls to OpenAI’s servers on the client’s behalf. This way, the API key is never exposed to the client.
Sure some XxX_Hax0rman_XxX could target your web server, but that has nothing to do with OpenAI’s security.
Well, GPT thinks you can do this on the OpenAI website (you can’t):
I apologize for the confusion. It seems that the OpenAI API Dashboard has changed since my knowledge cutoff date, and the process for creating API keys with restricted permissions has been simplified.
To create a new API key with restricted permissions, follow these steps:
1. Go to the OpenAI API Dashboard and sign in to your account.
2. Click on the "API Keys" tab on the left-hand menu.
3. Click the "Create API Key" button.
4. Enter a name for your API key and click "Create".
5. Under "API Endpoint Restrictions", select the API endpoints you want to restrict access to. You can choose from the following options:
   - Unrestricted: Allows the API key to access all API endpoints.
   - Restricted: Allows the API key to access only the selected API endpoints. You can select specific API endpoints by checking the boxes next to them.
6. Click "Create API Key" to create the API key with the selected API endpoint restrictions.
ChatGPT isn’t always reliable when it comes to facts; it’s better to stick with the documentation. Try having a look at this
Thanks for the link; it pretty much confirms it.
Without OpenAI participating in limiting these API keys in some way (by referring URL, specifically), this is a huge problem.
No one said cybersecurity would be easy; I’m just trying to help. Yes, you can build your own client-side apps in Python and use them on your own devices, but if you want to deploy to a production server or website, you’re going to have to do some server-side scripting that isn’t sent to the client.
It could be easy.
I’ve implemented other web APIs that have control panels where you can designate allowable referring URLs and apply other limits.
OpenAI has to step up to the plate here.
I solved this by using API routes and environment variables in Next.js. I deployed over at Kinsta without any problems, both via the automated webpack build and then via a Dockerfile to optimize the resulting image size and RAM usage.
There’s an OpenAI npm package that lets you configure your API key easily, and the Next.js API route method never serves the API key to the user.
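Roughly what such an API route looks like, as a sketch. The file location (`pages/api/chat.js`) and the plain-`fetch` call are assumptions; the thread mentions the OpenAI npm package, which could be used instead:

```javascript
// Sketch of a Next.js API route. It runs only on the server, so process.env
// is never bundled into the JavaScript sent to the browser.
// The optional `env` parameter is only there to make the handler testable.
async function handler(req, res, env = process.env) {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    res.status(500).json({ error: "Server is missing OPENAI_API_KEY" });
    return;
  }
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
}

module.exports = handler; // in a real Next.js project: export default handler
```

The key lives in an environment variable on the host (Kinsta, Vercel, etc.) and the route forwards requests on the client’s behalf, which is the same proxy idea described earlier in the thread.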
You need to proxy the client requests via your own backend web server, so that it’s your backend that holds the OpenAI key and actually makes the request to OpenAI.
So your frontend sends a request to your backend with no key, and your backend makes a new request with the secret key.
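Seen from the browser’s side, the request under this scheme carries no secret at all. A sketch, where the `/api/chat` path is an assumption about whatever route the backend exposes:

```javascript
// Build the browser-side request to *your* backend. Note there is no
// Authorization header anywhere: the secret key is added server-side.
function buildChatRequest(messages) {
  return {
    url: "/api/chat",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
    },
  };
}

// In the browser you would then call:
//   const { url, options } = buildChatRequest([{ role: "user", content: "Hi" }]);
//   const reply = await fetch(url, options);
```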
But it is worth noting that the pants-down .js code I was using was written by GPT.
And then it wasted another day of my time by telling me it could be secured when it actually can’t.
Yeah, it’s frustrating. I had no issue creating my own NodeJS proxy, until I wanted to stream my responses like ChatGPT does. It works great when using the direct API URL from OpenAI, but when I put a simple (or more advanced) proxy in between, I can’t get it to work, whatever I do. No streaming responses when there’s a proxy server in between🥲. I might figure it out eventually, but a simple “domain restriction” config screen would save me so much trouble.
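One common culprit when streaming dies behind a proxy is that some hop buffers the whole response before forwarding it. A hedged sketch of the response headers that usually help (`X-Accel-Buffering` is nginx-specific, and the exact set needed depends on your stack):

```javascript
// Headers for relaying an OpenAI server-sent-events stream through a proxy,
// asking intermediaries to pass chunks through as they arrive.
function sseProxyHeaders(upstreamStatus = 200) {
  return {
    status: upstreamStatus,
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
      "X-Accel-Buffering": "no", // nginx: do not buffer this response
    },
  };
}

// In a Node proxy, the relay itself is then just a pipe, which forwards each
// chunk without waiting for the full body:
//   const h = sseProxyHeaders(upstreamRes.statusCode);
//   res.writeHead(h.status, h.headers);
//   upstreamRes.pipe(res);
```

If the proxy instead awaits the complete upstream body before responding, the client sees one big chunk at the end, which looks exactly like “streaming doesn’t work.”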
Great topic. I’ve just been searching existing topics for a similar question.
I want to develop a Google Chrome add-on based on ChatGPT, which means it uses an OpenAI API key to access it.
But once I do the packaging, is the background code exposed somewhere online to other viewers?
I can imagine OpenAI supporting a key-hashing mechanism that serverless solutions could use to generate keys for one-time use.
I got my HTML page to call my Flask server, which has the key and calls the OpenAI API. That works, but when I try streaming the response through my server, it takes about 6x longer than direct HTML-to-OpenAI streaming. The problem with direct HTML-to-OpenAI streaming is the exposure of the key. Any ideas? Thanks.