Is it possible to know the cost of an API call after the call completes?

When you build an app on top of the OpenAI APIs, you may want to charge your users for it — at minimum enough to cover what OpenAI charges you.
Is there any way to get the cost from the API response itself?
Of course, I can see it on my usage dashboard, but I want an automated way to bill users for using my app, so I don't end up covering the costs myself.

I am using an Assistant, and I have seen how tricky the cost calculation is. I was hoping for something automatic in the response where you can see the cost.


You want to know the exact cost incurred by a specific API call.
We cannot know the cost exactly in advance, because the number of output tokens is unknown — and if your app is based on the Assistants API, it may also perform additional actions under the hood.

My idea is to leverage the change in rate limits between just before the API call and just after the response has been received. The rate limit information is returned in the response headers, and you can use that data to calculate costs according to the pricing page.

First you would need to read your current x-ratelimit-remaining-tokens value, then compute the delta after the reply has been received.
Since the rate limits reset periodically, you would need to account for that as well.
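A minimal sketch of that delta idea. The arithmetic is the whole point here; it assumes sequential requests (no concurrency) and no rate-limit reset between the two reads, and `COST_PER_1K_TOKENS` is a made-up placeholder — look up the real rate for your model on the pricing page.

```javascript
// Placeholder rate in USD per 1 000 tokens — NOT a real price.
const COST_PER_1K_TOKENS = 0.002;

// Estimate tokens used (and cost) from the change in the
// x-ratelimit-remaining-tokens header between two responses.
function estimateCostFromDelta(remainingBefore, remainingAfter) {
  // If the limit reset between the two reads, the delta is meaningless.
  if (remainingAfter > remainingBefore) return null;
  const tokensUsed = remainingBefore - remainingAfter;
  return { tokensUsed, cost: (tokensUsed / 1000) * COST_PER_1K_TOKENS };
}

// In a real app the two values would come from the response headers, e.g.:
//   Number(response.headers.get("x-ratelimit-remaining-tokens"))
```

Note the caveat in the code: with concurrent requests, or across a rate-limit reset, the delta no longer reflects a single call's usage.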


It doesn't have to be in advance — knowing the cost right after the call completes would be enough, just like I see it on my dashboard.


Unless you set up the system to always reply with a fixed number of output tokens, nobody knows the exact cost in advance. This is due to the stochastic nature of the model. It is, however, possible to force a certain reply length.

Another, less exact approach is to set max_tokens and always bill for that amount. This requires some experimentation with the prompts, but is doable in general.
You can use OpenAI's official tokenizer library (tiktoken) to count the actual tokens used and calculate the price according to each model's rate for the completion APIs.
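Once you have the token counts (from a tokenizer, or from the response's usage field), the billing arithmetic itself is simple. A sketch — the price table below is purely illustrative, always check the current pricing page:

```javascript
// Illustrative prices in USD per 1 000 tokens — placeholders, not real rates.
const PRICES_PER_1K = {
  "gpt-3.5-turbo": { prompt: 0.0005, completion: 0.0015 },
  "gpt-4": { prompt: 0.03, completion: 0.06 },
};

// Prompt and completion tokens are priced differently, so they must
// be billed separately rather than from total_tokens alone.
function costForCall(model, promptTokens, completionTokens) {
  const p = PRICES_PER_1K[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (
    (promptTokens / 1000) * p.prompt +
    (completionTokens / 1000) * p.completion
  );
}
```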

For the Assistants API, it is not yet clear how to calculate the actual costs, especially for run steps.


Indeed, it seems a mess. For instance, I am not using GPT-4, so my costs should be low — but the Assistant is, I guess, using GPT-4 for generating images. They charge something like: Assistant + model + extras.

That is the challenging part: what happens under the hood. It is not clear, for instance, what they are charging under the name "Assistant".


I absolutely agree.
It's a solution with upside potential.

Regarding your specific question, I think the general issues remain: it is not possible to get an exact price in advance.

For now, you will likely make faster progress if your customers pay in advance and you deduct the costs from their prepaid credits.


Did you get an answer to this? Most of the comments below refer to estimating in ADVANCE, which is not what you asked.

Is there an OpenAI API call that gives us real-time billing info? Then, in theory, you could call it before and after an API call to estimate the cost for a certain type of call. Given the token counts, you could use that as a guide in the future, especially when you have lots of concurrent requests and don't want to make a billing call for every single API call.

Another option might be to generate an API key per customer via an API call, and then use that for billing?


Sadly, not yet. I am still not sure how to do it.

In fact, I have found a solution, at least for gpt-3.5-turbo: you can read it from data.usage.total_tokens, where data is the parsed response from the OpenAI API.

   const output = await fetch(url, options);
   const data = await output.json();
   const totalTokens = data.usage.total_tokens;

I am using Velo (Wix), which is JavaScript-based.
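One refinement on that: since prompt and completion tokens are priced differently on most models, total_tokens alone isn't enough for billing. The same usage object in the chat completions response also carries the breakdown. A small sketch (`parseUsage` is just an illustrative helper name; the field names are the ones the response actually returns):

```javascript
// Pull the token breakdown out of a chat completions response body.
// data is the parsed JSON, as in `const data = await output.json();`
function parseUsage(data) {
  const { prompt_tokens, completion_tokens, total_tokens } = data.usage;
  return { prompt_tokens, completion_tokens, total_tokens };
}
```

With that breakdown you can bill prompt and completion tokens at their respective rates instead of applying one blended rate to the total.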