Wrong API model is being utilised

Hello, I have been using the GPT-3.5 Turbo model API to serve ChatGPT responses through my application, but since September 10, 2023, some of my API requests have been served by the GPT-4-0613 model instead of GPT-3.5-TURBO-0613, even though I don't use the GPT-4-0613 model anywhere.

The requests being served through GPT-4-0613 are using a huge amount of tokens. This seems to be a bug.

Here is a screenshot of the requests:

Do you have anyone else working on your project, or have you at any time used your API key with a third-party site or application? If so, your key has been leaked and is being used by someone else. It could also be that your API key is embedded in your public application and someone has reverse engineered it. In either case, you need to revoke your current key, generate a new one, and think about API key handling best practices before re-launching, if applicable.

You can also take a look at


No, I'm the only one working on this project. The API key is inside the Flutter application. I don't think reverse engineering a Flutter app is possible at the moment, as it's a new technology by Google (I might be wrong).

Yeah… You should go ahead and delete your API key immediately. It has been compromised.

Do not give clients permission to make paid API calls. You need an intermediary layer to authenticate the user, handle the request, and return the response.

You should always assume that anything and everything you publish can and will be taken advantage of. Reverse engineering isn't even necessary when you are relying on the user to make the API call on your behalf.


I guarantee you, with the certainty of taxes in April, that someone will have reverse engineered the application by some method or other. If the key is out there, it will get decoded.

You must remove your key from the public domain and put it behind an application server that acts as the API-calling relay for your application. Alternatively, you can use one of the API key management services offered by AWS, Azure, Google, etc. No system is ever 100% secure, but you can make attacking yours economically inefficient.
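The relay pattern described above can be sketched roughly like this (a minimal sketch in Python; `valid_session` and `build_upstream_request` are hypothetical names, and the model and URL are simply the ones discussed in this thread). The secret key lives only in a server-side environment variable and is attached only after the server has authenticated the client's own token, so it never appears in anything the client receives.

```python
import os

# Upstream API the relay forwards authenticated requests to.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"


def valid_session(token: str) -> bool:
    # Placeholder: in a real app, look the token up in your own
    # session store (database, cache, etc.).
    return token == "demo-session"


def build_upstream_request(session_token: str, messages: list) -> dict:
    """Authenticate the client's *own* token, then build the
    server-to-OpenAI request. The OpenAI key is read from the server
    environment and is never part of anything returned to the client."""
    if not valid_session(session_token):
        raise PermissionError("unknown session")
    api_key = os.environ.get("OPENAI_API_KEY", "sk-server-side-only")
    return {
        "url": OPENAI_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"model": "gpt-3.5-turbo", "messages": messages},
    }
```

The point of the design is that an unauthenticated caller never triggers a request that carries the secret key at all.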


Apart from the great advice from @Foxalabs and @RonaldGRuckus, I'd also recommend that (once you are finished rotating API keys) you incorporate the Moderation endpoint in your application before making any other API calls.

This’ll help you make sure that your users aren’t abusing your app to break the OpenAI usage policy.
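As a sketch of that gatekeeping step (Python; the field names follow the documented shape of the `/v1/moderations` response, but verify them against the current API reference):

```python
def is_allowed(moderation_response: dict) -> bool:
    """Return True only if no input in the moderation response was flagged.

    The moderations endpoint returns a "results" list with one entry per
    input, each carrying a boolean "flagged" field. Call this on the
    moderation reply before forwarding the user's text to the chat
    completions endpoint, and reject the request if it returns False."""
    results = moderation_response.get("results", [])
    return not any(r.get("flagged", False) for r in results)
```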


That is not how software security works. Anything, and I mean anything, that gets sent to a client will be read by that client.

For many generations, game console vendors tried very hard to keep their signing keys secure. They put them in read-only memory chips and epoxied them to the circuit boards, where they would break if anyone tried to remove them. But in the end, the user has control of the bits and will read the bits, so those keys got broken.

You must be 100% sure that any secret credential you have is only within your control and doesn't get sent to any user, whether “encoded” or “encrypted” or not. If you need to gatekeep usage of your OpenAI token, you need to run a service that your client talks to, and have that service talk to OpenAI in turn. Anything else is insecure by design.


Got it! So the right approach would be placing the API key on a backend server and routing client calls through it: my server would make the API call to OpenAI on behalf of the user instead of giving the user direct control of the API. This way, an intermediary layer is established between the end user and the OpenAI server. Am I getting it right?

Even if it's the right approach, I have one question about it: wouldn't the API endpoint, along with an access token, still be exposed? I would still have to use the server endpoint and an access token to make requests to my server.

You make your own access and authentication method, such as a user account and session between the client application and the serving server. You likewise use transport encryption, and you use your own communication protocol.

Then the only thing the user can hack out of the application is their own password.

Yes. This is why you need some way to authorize your user (some kind of login or OAuth system), some per-user rate limits, and, typically, some overall rate limits.
If it's a mobile app, you may be able to use accounts already on the phone rather than requesting a new sign-up. That used to be really smooth, but because of tracking concerns, I think the mobile phone OSes now prevent that kind of thing in newer versions, and you have to manually ask for a phone number or other sign-in information.
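The per-user rate limit mentioned above can be sketched as a sliding-window counter (Python; the class name and limit numbers are illustrative, not from this thread):

```python
import time
from collections import defaultdict, deque


class PerUserRateLimiter:
    """Allow at most `limit` requests per `window` seconds per user."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # user_id -> recent request timestamps

    def allow(self, user_id, now=None):
        """Record one request attempt; return False if the user is over the limit."""
        now = time.monotonic() if now is None else now
        q = self.hits[user_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

The server calls `allow(user_id)` before relaying each request upstream; an overall limit is just the same structure keyed on a single shared id.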

Any client-side app can and will be reverse engineered. Extracting a simple API key string is trivial, even for beginners. It doesn't matter if the "technology" is new.

For example, I can get the API key by intercepting the web request, or extracting it from the raw binary, or other methods.

Revoke the key immediately and rethink the way you handle sensitive data like tokens and API keys. Always keep all of this on the backend.

Everything a user receives, literally every byte, can be modified (and retrieved) at will if the user wants to. Be wary of that.

It seems like someone has figured it out and is using it for their own purposes. Worst case, they ask the model illegal things and get YOUR account banned.

I am facing the same problem! :frowning: I instantly deleted my API key and generated a new one. It hasn't been used anywhere apart from my Flutter app, and my app uses the GPT-3.5 Turbo model only, but I am still seeing massive GPT-4 usage.

If your API key is exposed in the client-side code, it's bound to happen.

The API key should be stored only server-side, and even there in a key vault, an environment variable, or some other secure storage.
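For the environment-variable option, a fail-fast read on the server keeps a missing key from surfacing as a confusing error at request time (Python; the variable name is just the common convention):

```python
import os


def load_api_key(var="OPENAI_API_KEY"):
    """Read the secret from the server environment; fail fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; configure it on the server only")
    return key
```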

You should revoke this API key.