Unexpected $67 Token Usage Spike with Models I Never Use

Hi there,

My OpenAI token usage exploded to $67 (5.2M tokens) in the last two days without any action on my part. My normal daily usage is only $0.10-1.00.

Key Details

  • The usage page shows costs from these models: chatgpt-4o-latest and gpt-4.5-preview-2025-02-27 (input, output, cached)
  • I don’t use ANY of these models in ANY of my websites or applications
  • I’ve already rotated API keys and deleted the old ones
  • Activity seems to affect only one project (e-kurz)

Questions:

  1. Is this a security breach?
  2. Do I need to recreate my projects too?
  3. How can I restrict API access to specific domains?
  4. Has anyone experienced similar mysterious usage?

Any help would be appreciated. This unexpected cost is significant.

Whenever anything unexpected happens in your billing, I would recommend revoking your API keys.

  1. I would wager that yes, there has most likely been a security breach.
  2. Just your keys.
  3. You can’t without middleware of your own; the key itself can only be restricted by endpoint, not by domain. See the sketch below this list.
  4. Yes, this exact issue pops up every so often on this forum, and in the majority of cases it’s from compromised credentials.
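
To make point 3 concrete, here is a minimal sketch of what such middleware could look like in PHP. The domain is a hypothetical example, and note that Origin/Referer headers can be spoofed by non-browser clients, so this only deters casual abuse; real protection needs sessions or tokens.

```php
<?php
// Sketch: a same-origin gate in front of your own OpenAI proxy script.
// Browsers send an Origin header on cross-site requests; anything not
// on the allowlist is rejected before any paid API call is made.
$allowedOrigins = ['https://example.com']; // hypothetical domain

$origin = $_SERVER['HTTP_ORIGIN'] ?? '';
if (!in_array($origin, $allowedOrigins, true)) {
    http_response_code(403);
    exit('Forbidden');
}

// ... continue with the server-side OpenAI call here ...
```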
1 Like

First of all, thank you for your answer :slight_smile:

I will try to contact support, but I don’t have high hopes.

About the API keys: I had deleted them all and created new ones, then topped the account up again with a few euros. After an hour the same thing happened. A few million tokens had been used up again.

Do I understand correctly that the tokens (in my case, 4o and 4.5 preview) were used from outside? Can I even determine that?

I have also changed the account password, but since other sessions aren’t automatically logged out (which would make sense), this is only half secure.

The keys are in a .env file and are secured with .htaccess. Can I do more here?

Okay, so I will look this up, thanks!

Well, maybe you are passing the key to the client side?
Are your requests to OpenAI made on the server side or the client side?
This might seem like a basic thing, but I see so many projects querying the OpenAI API from the client side, saying they stored the keys in a .env while still using them in client-side code.

It’s very suspicious that this happens even though you revoked old API keys and generated new ones.

Double check your code and see if that is the issue. :hugs:
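
For comparison, a server-side call where the key never reaches the browser looks roughly like this (paths and names are examples, not anyone’s actual code):

```php
<?php
// The key is read from a .env OUTSIDE the web root and used only in
// PHP; the browser receives the JSON response, never the key itself.
$config = parse_ini_file('/var/www/config/.env');

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $config['OPENAI_API_KEY'],
    ],
    CURLOPT_POSTFIELDS     => json_encode([
        'model'    => 'gpt-3.5-turbo-0125',
        'messages' => [['role' => 'user', 'content' => $_POST['prompt'] ?? '']],
    ]),
]);

header('Content-Type: application/json');
echo curl_exec($ch);
curl_close($ch);
```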

3 Likes

Check “legacy user keys” in your account settings:

https://platform.openai.com/settings/profile/api-keys

Generate a new one and delete all others.

The usage page should give a bit more insight into the source, such as a breakdown by project, user, or other criteria.

Change your password used for accessing the account.

Check for additional organization “members”.

If someone has discovered you have an app in the wild that is exploitable (such as client code making any direct call to OpenAI at all, because that means you gave out your keys), it must be shut down until you can find the insecurity.

3 Likes

I use cURL/PHP, so no. Or what do you mean by that? The key from the .env is only used in PHP. No visible variable in JS or anything like that :wink:

It is! :see_no_evil_monkey:

Okay, I deleted the key. There was actually one, but I never used it that way.

The project that caused the accesses is the main/first project, which I cannot delete. However, I also deleted the API key assigned there, unfortunately without success.

I did already.

Yes, this is the point! I have already searched the whole code; there is no API key anywhere, only in the .env. But I’ll keep checking!

I’ll keep you updated, thanks so much for your input!

PS: I discovered something. I downloaded the usage statistics for the last few days, and there are a lot of hits (which was to be expected ;)). The accesses are all from the same user (user-12345abcdef ← redacted). Does that mean that was my user? Can I look up my user ID somewhere?

PHP is a trigger word…and doesn’t deserve more than what I predict an AI predicts:

bot, on PHP

If PHP Goes Down, Can They Get the Key?

  • If .env is properly protected? :cross_mark: No
  • If .env is accessible via a URL? :white_check_mark: Yes, risk of exposure
  • If PHP fails but the web server serves PHP files as text? :white_check_mark: Yes, possible risk
  • If the server has a security vulnerability? :white_check_mark: Yes, risk of RCE

Best Practices to Secure .env Files

:white_check_mark: Store the .env outside the web root (e.g., /var/www/config/.env instead of /var/www/html/.env).
:white_check_mark: Set correct file permissions (chmod 600 .env, so only the file’s owner, typically the web-server/PHP user, can read it).
:white_check_mark: Never expose API keys in JavaScript or client-side code.
:white_check_mark: Use server-level restrictions to deny access to .env (example below).
:white_check_mark: Regularly audit your server security and patch vulnerabilities.
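
For the server-level restriction, a minimal .htaccess rule (Apache 2.4 syntax) would be, for example:

```apache
# Deny all HTTP access to the .env file (Apache 2.4+).
<Files ".env">
    Require all denied
</Files>
```

Though, as the first point says, keeping the file outside the web root entirely is safer still, because then no web-server rule is needed at all.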

Not to mention RCE using discoveries in your code itself.


“user” could mean the usage is associated with a user key (as described before). That would fit, because another account can set your “organization” on their API calls if they are a member of it.

You can make a test call of your own locally with a key in place, such as to embeddings (which is likely not what’s being abused), even using a user (account) key for one model and a project key for another. Then you can see how “user” appears for calling methods you know, which tells you what your own calls look like in the usage export; see the sketch below. It is likely not the optional “user” parameter that can be sent to the safety system.
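
A cheap local test call for that comparison could look like this sketch (the key path and model name are just examples; embeddings cost fractions of a cent):

```php
<?php
// One inexpensive embeddings call made by YOU, so the usage export
// shows how your own traffic is attributed (user vs. project key).
$config = parse_ini_file('/var/www/config/.env'); // example path

$ch = curl_init('https://api.openai.com/v1/embeddings');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $config['OPENAI_API_KEY'],
    ],
    CURLOPT_POSTFIELDS     => json_encode([
        'model' => 'text-embedding-3-small',
        'input' => 'attribution test',
    ]),
]);
echo curl_exec($ch);
curl_close($ch);
```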

3 Likes

This does not mean that it’s not accessible from the client side!
While PHP is a server-side language, it can still deliver information to the client, potentially revealing sensitive data like your API key.
How exactly are you getting the user prompt, how exactly are you querying the OpenAI API, and how exactly are you returning the answer?
Please be detailed so we can rule out any errors on your end. Not saying there are any, but we should make sure so the cost doesn’t go up again. :hugs:

2 Likes

That means the code is most likely being called through your own platform, on the assumption that this user ID is tied to your own key. At the very least it points to a single script causing these problems.

One thing I’ve noticed a lot of startups fail to acknowledge is that they need their own form of protection against malicious actors on their own platform.

You have hidden the OpenAI API key, great. Are you ensuring that users can’t call your PHP script to run inference for free?

3 Likes

Thank you for your input. I’ve identified and fixed the security issue. The problem was not with PHP itself, but with my implementation: I was controlling the GPT model selection through an unvalidated URL parameter.

You were absolutely right about the potential vulnerability. I was using URL parameters to control which GPT model to use without proper validation. I’ve now hardcoded the model to “gpt-3.5-turbo-0125” in my cURL request, removing the ability for anyone to manipulate it through URL parameters.

The API calls were properly structured on the server side via cURL, but the model selection could be manipulated by changing the URL parameters.
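
In case anyone runs into the same thing, the fix boils down to an allowlist check like this (just a sketch; the `model` parameter name is an example):

```php
<?php
// Only accept models from a fixed allowlist; anything else falls
// back to a safe default instead of whatever the URL says.
$allowedModels = ['gpt-3.5-turbo-0125'];

$requested = $_GET['model'] ?? '';
$model = in_array($requested, $allowedModels, true)
    ? $requested
    : 'gpt-3.5-turbo-0125';

// $model is now safe to embed in the cURL request body.
```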

Yes, this is currently an issue we’re still working on a solution for. We’re considering implementing either a login system or a credit-based model, but we haven’t decided on the exact approach yet.

For security, I’ve now protected the account so that all API keys are stored in .env files, which are additionally secured via .htaccess. Even if PHP fails, no code is exposed as plain text. The keys are no longer present in the source code but are only referenced through the .env files.

If anyone has further suggestions on securing inference access, we’d love to hear them!

While I couldn’t find the exact culprit who made these unauthorized API calls (there were over 250 calls to chatgpt-4o-latest alone!), securing the endpoint should prevent further abuse. Thanks for your insights!

2 Likes

A login sounds great for this issue. It should stop unauthenticated people (or bots) from spam-requesting your endpoint. That way you could save on a bunch of costs.
You could also grant a certain amount of requests per account that way.

Also, you mentioned that you now added security.
Did you push or commit any code to any version control like GitHub before this change?
If so, the API Keys might still be in the history of that repository.

Cheers! :blush:

4 Likes

We had our web application updated by external developers, and they used some GitHub repositories. I don’t know whether they secured everything properly because I’m not a web developer myself, which is why I paid them to do it :wink: I hope they did well. The API key was in a .env file, so I don’t think the key was pushed to GitHub.

Anyway, I added all .env entries to .gitignore as you mentioned :wink: And as mentioned before, I changed all keys, added limits to each project, and only allowed the specific models we use, no others.

Thanks for your input :slight_smile:

3 Likes

I’m glad you got it sorted out.

Just to be safe, you may want to consider a third-party audit of the code.

Have you considered using gpt-4o? It’s less expensive and more intelligent than gpt-3.5.

You absolutely need some sort of authorization. I’m not understanding exactly what is going on, but if someone was spamming your server for the lulz then they’ll do it again, regardless of the model choice.

As j.wischnat mentioned, a login platform is necessary for rate limiting. You NEED to consider each user and how much inference cost they can churn through per hour, minute, whatever. Then you need some notification system to ring alarm bells when sketchy stuff is happening (example: someone is managing to hit your limit every hour, or their IP or User-Agent is jumping all over the place). Your team can apply anti-bot protections like Cloudflare to shady users.
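
As a rough illustration of the per-user budget idea (filesystem-backed for brevity; `$userId` would come from your login system, and production code would use Redis or a database):

```php
<?php
// Allow at most $maxPerHour inference calls per user per hour and
// log an alert when somebody keeps slamming into the cap.
function withinBudget(string $userId, int $maxPerHour = 30): bool
{
    $file = sys_get_temp_dir() . '/quota_' . md5($userId . date('YmdH'));
    $used = is_file($file) ? (int) file_get_contents($file) : 0;

    if ($used >= $maxPerHour) {
        error_log("ALERT: user $userId hit the hourly inference budget");
        return false;
    }

    file_put_contents($file, (string) ($used + 1));
    return true;
}

$userId = 'demo-user'; // hypothetical: taken from your session/login
if (!withinBudget($userId)) {
    http_response_code(429);
    exit('Hourly limit reached.');
}
```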

These are all basic security principles.

3 Likes

Glad you got it sorted, @TomL … and great to see you back after 2 years!

It’s tough growing a developer community sometimes because most developers are in Github comment threads… or coding! Haha.

Seriously, though, it’s been great to see this community of ours grow over the last few years. I really liked what I saw in this thread… people helping each other.

3 Likes

Yes, I thought the same. I will contact a different developer just to double-check :wink:

True, we thought about this before, but in the current situation we would need a completely new UX design for it. Meanwhile, I will add an IP-based block that allows only one call per minute (sketch below). An alert and a maximum usage cap are already set in the OpenAI backend. Hope this will fix some issues.
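
A minimal sketch of that one-call-per-minute gate (filesystem-backed for simplicity; a real deployment would rather use Redis or a database):

```php
<?php
// One request per minute per IP, tracked with a timestamp file.
$ip   = $_SERVER['REMOTE_ADDR'];
$file = sys_get_temp_dir() . '/rate_' . md5($ip);

$last = is_file($file) ? (int) file_get_contents($file) : 0;

if (time() - $last < 60) {
    http_response_code(429); // Too Many Requests
    exit('Please wait a minute between requests.');
}

file_put_contents($file, (string) time());
// ... proceed with the OpenAI call ...
```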

The problem is, I am the team :sweat_smile: so I have to decide. But as I said, I have some external developers; they should take care of it now.

Thank you and thank you for the welcome wishes :slight_smile:

I am also positively surprised by how this thread has developed and by the help I have received here! That’s often not the case; elsewhere you tend to be treated arrogantly. But here, with people’s help, I was able to get to the bottom of the problem and can now solve the whole thing programmatically.

Thanks again to everyone! :saluting_face: :slightly_smiling_face:

2 Likes

Want a second opinion? Drop me a mail at jschultz@php.net

I would love to check it out free of charge.

There’s a serious problem if adding a new back-end feature requires a complete front-end re-design.

This is fair. You can also consider using “anonymous accounts”, which use a proven combination of fingerprinting signals to identify users and provide them with keys.

Competent or not, it may make sense to put yourself behind a proven, robust system like Firebase or Supabase. The suggestions being made here are elementary-level cybersecurity.

If you were using one of these, you would be able to thwart bots and script kiddies with less than 5 minutes of effort.