I have an application that allows people to make requests to ChatGPT by bringing their own key. It's a pass-through method: we never store the key, we just pass it along to the API and provide an interface that is useful for authors. It worked for weeks, but starting around 3-4 days ago, after some activity, it begins erroring out with the messages below in the logs:
openai.error.InvalidRequestError: Please set an "HTTP-Referer" header with the URL of your app
INFO:openai:error_code=400 error_message='Please set an "HTTP-Referer" header with the URL of your app' error_param=None error_type=None message='OpenAI API error received' stream_error=False
I am using the Python OpenAI library, and I upgraded to the latest version, which didn't fix it.
When I look at the documentation I don't see examples in the Python library where headers are included, so I'm a bit at a loss as to what to do.
The BYOK app (which is on the cloud server we manage) makes the API calls directly to OpenAI. Update: I have recently added an HTTP-Referer header to the outbound calls, so hopefully that resolves the issue (fingers crossed; I thought I had it solved with an upgrade to the latest version of the Python OpenAI library, and that turned out not to be the solution).
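For reference, this is roughly how I'm attaching the header now with the 0.x Python library (the app URL below is a placeholder, and `user_key` stands in for whatever key the author supplies for that request):

```python
import openai

user_key = "sk-..."  # placeholder: the key the author brings for this request

# BYOK pass-through: the author's key is used per request, never stored.
# The 0.x library accepts a `headers` kwarg on create(); "https://myapp.example"
# is a placeholder for our real app URL.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    api_key=user_key,
    headers={"HTTP-Referer": "https://myapp.example"},
)
```

If you're on the 1.x client instead, I believe the equivalent is passing `default_headers={"HTTP-Referer": ...}` to the `OpenAI(...)` constructor.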
But it's a mystery why these errors pop up after a certain amount of time. I wonder if some flag is tripped by multiple keys coming from the same IP, or perhaps by the volume of calls coming from a single IP address when several users are using the app at the same time. I'm unsure of the best way to isolate why the calls go from working to eventually being denied for lack of a referrer.
You could test your hypothesis about the number of API keys by sending many requests from a single API key during a low-activity period: if the errors still appear, volume from one IP (rather than the key count) is the likelier trigger.
Admittedly, I would dislike paying for a bug hunt, but it could help validate the claim that this is caused by the number of API keys used from a single IP.
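Something like this throwaway script would do it; the model, request count, and pacing are arbitrary choices on my part:

```python
import os
import time
import openai

# Rough test: hammer the API with ONE key from your server's IP during a
# quiet period. If the HTTP-Referer errors reappear, volume from a single
# IP (not the number of keys) is the likelier trigger.
openai.api_key = os.environ["OPENAI_API_KEY"]

errors = 0
for i in range(200):
    try:
        openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "ping"}],
            max_tokens=1,
        )
    except openai.error.OpenAIError as e:
        errors += 1
        print(f"request {i}: {e}")
    time.sleep(0.5)  # pace requests; adjust to mimic real traffic

print(f"{errors} errors out of 200 requests")
```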
OpenAI does not exactly approve of the pass-through BYOK method you are describing.
From the docs:
Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.
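In other words, the documented pattern boils down to something like this (minimal sketch; the environment variable name is just the conventional one):

```python
import os
import openai

# The key lives server-side in an environment variable (or a key
# management service); clients call your backend and never see it.
openai.api_key = os.environ["OPENAI_API_KEY"]
```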
We are like a beefed-up version of the OpenAI Playground with some automations thrown in for authors. They bring their keys and get access to a prompt cookbook with all sorts of author-specific prompts, and when they get their completions, they can have them automatically saved to Notion. We provide a number of convenience functions and act as the middleman tying these systems together to help our club members be more efficient.
So how do you reconcile that it's OK for a developer to use a service like Heroku to store an OpenAI key in an environment variable that is used to call OpenAI? Aren't "Salesforce/Heroku" and "Amazon/AWS" among the "others" the key can't be shared with?
Perhaps "others" doesn't mean machines, when organizations set up safeguards so that data isn't accessible to staff; perhaps it means don't share with other people. Do you disagree?