I'm spending a crazy amount of money during testing

I’m developing an app, and even though I only send maybe 20–40 API calls daily, I’m spending about $40 a day. I’ve gone over my prompts with a fine-toothed comb; they’re usually only around 400 tokens each.

I noticed that even though I have gpt-4o-mini specified in my code, my usage dashboard shows 11 different input models, ranging from o1-preview-2024-09-12 to gpt-4-1106-preview and gpt-4o-2024-05-13. Most of my costs come from gpt-4, gpt-4-turbo-2024-04-09, and chatgpt-4o-latest input.

I’ve looked everywhere in my code for a possible second model entry, but the one gpt-4o-mini is the only model specified. One other thing to note: even though I’m on Tier 4, o1-mini and o3-mini have never worked, even though I’ve allowed them in my API model restrictions. I’m not sure if that’s related. Does anyone have any ideas on how to fix this?

Is this app publicly available? It sounds like someone else has your key.

Nope, it’s not live yet. There was a developer who worked on the project a while back, but I verified he’s not using the key. If I don’t use the app for a day, there are no API calls, so it’s definitely coming from my app.

Delete all API keys (make sure you check how many different projects you might have in the dropdown at the top left) and remove them all.
Then create a new one. Given your number of tokens, it should be cents, not dollars.
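To put rough numbers on that: a back-of-the-envelope estimate in Python, assuming gpt-4o-mini’s published rates ($0.15 per 1M input tokens, $0.60 per 1M output tokens) and the volumes described above (40 calls/day, ~400 input tokens each; the output token count is a guess since the thread doesn’t state it):

```python
# Back-of-the-envelope daily cost estimate for gpt-4o-mini.
# Rates are the published per-1M-token prices; the output token
# count per call is an assumption, not stated in the thread.
CALLS_PER_DAY = 40
INPUT_TOKENS_PER_CALL = 400
OUTPUT_TOKENS_PER_CALL = 400  # assumption

INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

daily_input = CALLS_PER_DAY * INPUT_TOKENS_PER_CALL    # 16,000 tokens
daily_output = CALLS_PER_DAY * OUTPUT_TOKENS_PER_CALL  # 16,000 tokens

daily_cost = (daily_input / 1_000_000) * INPUT_PRICE_PER_M \
           + (daily_output / 1_000_000) * OUTPUT_PRICE_PER_M

print(f"${daily_cost:.4f} per day")  # about a penny a day
```

At that volume, $40/day is three orders of magnitude above what gpt-4o-mini should cost, which is why a leaked key or a second caller is the usual suspect.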

There is something seriously wrong with your code if it’s calling random models and you don’t know why.

It would actually take additional effort to do this.

I just thought of something: o1-mini and o3-mini have never worked for whatever reason. While troubleshooting, I allowed all models to see if it was a weird restrictions issue. Is it possible that even though I have gpt-4o-mini specified in my code, for some reason it would use any model I allowed?

I disabled all models today except gpt-4o-mini, and I also added code to log input and output tokens for every API call. I haven’t had any problems today and it’s only using gpt-4o-mini, so the only thing I can think of is that allowing all models somehow let it use any possible model. Very odd.
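For anyone adding the same kind of per-call accounting, a minimal sketch: the chat completions response reports both the model that actually served the request (`model`) and the token counts (`usage`), so you can log what was really used rather than trusting your request parameters. The helper below just parses a response-shaped dict (e.g. from the SDK’s `model_dump()`); the field names follow the chat completions response format, and the example payload is made up.

```python
def log_usage(response: dict) -> tuple[str, int, int]:
    """Extract the served model and token counts from a chat
    completions response in dict form."""
    model = response["model"]
    usage = response.get("usage", {})
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    print(f"model={model} prompt_tokens={prompt} completion_tokens={completion}")
    return model, prompt, completion

# Made-up example payload in the chat completions response shape:
resp = {
    "model": "gpt-4o-mini-2024-07-18",
    "usage": {"prompt_tokens": 400, "completion_tokens": 120},
}
log_usage(resp)
# → model=gpt-4o-mini-2024-07-18 prompt_tokens=400 completion_tokens=120
```

Logging `response["model"]` on every call is the quickest way to catch a mismatch like the one in this thread, since it names the exact dated model snapshot that billed the request.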