[RESOLVED] Unexpected usage gap (780x) with gpt-4o-mini-2024-07-18

The dashboard shows an unexpected 70M input tokens for chat completions with the gpt-4o-mini-2024-07-18 model. I manually checked the logs: the usage shown on the dashboard is 780x the total usage in the logs. How can I submit this issue for technical review?

  1. Are you using vision? Input image token cost is multiplied by 33.3x for billing on the gpt-4o-mini model. However, that increased input token usage (such as 2833 tokens for an 85-token “low detail” image) should be reflected everywhere usage is reported. You can verify this yourself from the usage object returned with each response (see the sketch after this list).

  2. For chat completions, the “store” API parameter is disabled by default, so those requests do not show up in the logs, and other usage types, such as batch processing and fine-tuning, never appear there at all. The logging can also simply fail, and there is a central “off” switch for it. “Logs” is not a trustworthy billing audit. (The sketch after this list also shows passing store=True.)

  3. Scoping. Billing may be centralized across the organization, while the logs can be project-scoped and enabled only for certain projects.

  4. Major billing issues have been ongoing for 7+ hours. Check your “billing” section and see whether your credit balance and grant balances have gone negative, a strong indicator that you are also affected.
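
For items 1 and 2, here is a minimal sketch of how you could verify billed input tokens yourself, assuming the official openai Python SDK. The model name, the placeholder image URL, and the ~2833-token figure are illustrative; store=True is what opts the request into the Logs dashboard in the first place:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One "low detail" image request; store=True opts this request
# into the Logs dashboard (it is off by default, see item 2).
response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    store=True,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image briefly."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/sample.jpg",  # placeholder
                        "detail": "low",
                    },
                },
            ],
        }
    ],
)

# usage.prompt_tokens is what billing should see; on gpt-4o-mini a
# single low-detail image reportedly counts as ~2833 input tokens,
# not the ~85 you might expect from gpt-4o.
print(response.usage.prompt_tokens, response.usage.completion_tokens)
print(response.id)  # keep this id for cross-checking the Logs page
```

If the usage fields from your own responses already sum to far less than the dashboard claims, that points at the dashboard (or another key or project) rather than vision pricing.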

The primary thing you should investigate is further in “usage”: select breakdowns by the actual API call count and see whether that matches the number of calls you expect to have made. This can take many clicks and selections to drill down to useful data.
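
If clicking around the dashboard gets tedious, the same numbers should be reachable programmatically. A sketch, assuming the organization Usage API’s completions endpoint (which requires an admin API key, not a normal project key); the exact parameter names are best checked against the current API reference:

```python
import os
import time

import requests

# The org Usage API needs an *admin* key, not a normal project key.
headers = {"Authorization": f"Bearer {os.environ['OPENAI_ADMIN_KEY']}"}

params = {
    "start_time": int(time.time()) - 7 * 24 * 3600,  # last 7 days
    "bucket_width": "1d",
    "group_by": ["model", "project_id"],
    "limit": 7,
}

resp = requests.get(
    "https://api.openai.com/v1/organization/usage/completions",
    headers=headers,
    params=params,
)
resp.raise_for_status()

# Each result carries input_tokens, output_tokens, and
# num_model_requests: the request count to compare against
# your own backend logs.
for bucket in resp.json().get("data", []):
    for r in bucket.get("results", []):
        print(
            r.get("project_id"),
            r.get("model"),
            r.get("input_tokens"),
            r.get("num_model_requests"),
        )
```

Grouping by project_id also covers the scoping concern in item 3: a project you forgot about could be where the 70M tokens are coming from.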

I don’t expect images to be passed via the API, but that’s a good point. I’ll check the calls. Thank you!

Did you enable/allow Deep Research? That will get you going! o3-deep-research: 1 million tokens spent .. no output :(

Got it. I checked my backend logs and found requests with image inputs. But these requests are not displayed in OpenAI’s Logs dashboard (searching by request ID returns nothing).
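
In case it helps anyone else, this is roughly how I’m capturing the IDs on my side to search for them; a sketch assuming the openai Python SDK, where the request ID also arrives in the x-request-id response header:

```python
from openai import OpenAI

client = OpenAI()

# with_raw_response exposes the HTTP response, including headers,
# alongside the parsed completion.
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o-mini-2024-07-18",
    messages=[{"role": "user", "content": "ping"}],
)

completion = raw.parse()
# Two ids worth logging for dashboard searches:
print(completion.id)                    # chatcmpl-... body id
print(raw.headers.get("x-request-id"))  # req_... header id
```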