New fine-tuning billing

This usage ran up a $70 bill within seconds. OpenAI have not answered. Much appreciated if someone can help. Their website says:

|Model|Training|Input usage|Output usage|
|---|---|---|---|
|babbage-002|$0.0004 / 1K tokens|$0.0016 / 1K tokens|$0.0016 / 1K tokens|
|davinci-002|$0.0060 / 1K tokens|$0.0120 / 1K tokens|$0.0120 / 1K tokens|
|GPT-3.5 Turbo|$0.0080 / 1K tokens|$0.0120 / 1K tokens|$0.0160 / 1K tokens|

When a fine-tuned model is created, as implied by "Training", the cost should have been about 72 cents (training with ~450K tokens).

Thanks for any info

- 3:11 PM (local time: Aug xx 202x, 8:11 AM): 463,824 trained tokens
- 3:13 PM (local time: Aug 2x, 202x, 8:13 AM): 463,824 trained tokens
- 3:14 PM (local time: Aug 2x, 202x, 8:14 AM): 463,824 trained tokens
- 3:14 PM (local time: Aug 2x, 202x, 8:14 AM): 463,824 trained tokens
- 3:16 PM (local time: Aug 2x, 202x, 8:16 AM): 463,824 trained tokens

Just to let you know, this is not the correct place for account/payment issues; for that, please use

With that out of the way, let's see if we can get to the bottom of this. Do you have any other usage on this account? i.e., does anyone else use the account for GPT-3.5 or GPT-4? Is there any inference token usage, not just on that day but for the entire month?

How many epochs were used? Fine-tuning costs:

```
<tokens> / 1000 * <basePrice> * <numEpochs>
```
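As a sketch, the formula above in Python (the davinci-002 rate is taken from the pricing table earlier in the thread; the function name is mine, not from any SDK):

```python
def fine_tune_cost(trained_tokens, base_price_per_1k, n_epochs):
    """Estimated cost: tokens / 1000 * base price * epochs, per the formula above."""
    return trained_tokens / 1000 * base_price_per_1k * n_epochs

# One run over 463,824 tokens on davinci-002 ($0.0060 / 1K) at 4 epochs:
print(f"${fine_tune_cost(463_824, 0.0060, 4):.2f}")  # $11.13
```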

I chatted with an OpenAI bot, I think, and never got any support until months later. "Do you have any other usage of this account?" Yes, but as can be seen, for this day these are the only usage entries. That's it. The five times shown are the usage, and each shows the trained tokens. BTW, there is no model available to use. Every time I try to use the model that was output, it says it's not available. Because I am new to the fine-tuning APIs, I didn't know what it did or didn't do, so I tried it repeatedly, five times, and ran up that bill.

Thinking the model didn't get created, I ran the code five times. If there is a default epoch setting, then that was it, because I did not specify an epochs parameter.

I found it used a default of 4 epochs on each attempt, and I ran it five times: a grand total of 20 epochs in that case.

Let me clarify my previous message. Yes, there has been usage by this account in the past, and no use for the last two weeks, until the day before, where it showed correct usage, e.g. $0.001 or something like that. Billing had been $0 until yesterday's use. Yesterday, running the fine-tuning five times resulted in the usage shown in the other email and the billing thereof. So the only usage yesterday was from the five fine-tuning attempts. That was it.

This seems pretty close to the expected cost.

Fine-tuning davinci 5 times with 4 epochs would be:

463,824 / 1,000 × $0.0060 × 4 × 5 = $55.66

At 5 epochs it would be $69.57.
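Checking that arithmetic in Python, with the numbers from the posts above:

```python
# Cost estimate for davinci-002 at $0.0060 per 1K training tokens.
tokens = 463_824
price_per_1k = 0.0060

per_epoch = tokens / 1000 * price_per_1k      # ~ $2.78 per epoch per run
four_epochs_five_runs = per_epoch * 4 * 5
five_epochs_five_runs = per_epoch * 5 * 5

print(f"${four_epochs_five_runs:.2f}")  # $55.66
print(f"${five_epochs_five_runs:.2f}")  # $69.57
```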


Won't make that mistake again, that's for sure. Three things about the results: 1) none of the models are available for use; all of them result in "Does not exist". 2) n_epochs defaults to 4, and being new, I had no idea this hyperparameter could be set, so no chance it would have been 5 for any of the attempts. 3) Is there any way to retrieve all the models using a wildcard? I can only find the names for three of them. Thank you for explaining.


Just found:

```
openai api fine_tunes.list
```


You can use the API to list your fine-tuning jobs. The data should include a `status` attribute to tell you if a job is done (succeeded) and a `fine_tuned_model` attribute, which is the model name you would use.


Python: `openai.FineTuningJob.list()`
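For example, a small helper to pull usable model names out of such a listing. The dict shape below mirrors the `data` list of job objects that `openai.FineTuningJob.list()` returns; the sample jobs themselves are made up:

```python
def finished_models(listing):
    """Return fine_tuned_model names for jobs whose status is 'succeeded'."""
    return [
        job["fine_tuned_model"]
        for job in listing["data"]
        if job.get("status") == "succeeded" and job.get("fine_tuned_model")
    ]

# Fabricated sample response for illustration only:
sample = {
    "data": [
        {"id": "ftjob-1", "status": "succeeded",
         "fine_tuned_model": "ft:gpt-3.5-turbo:my-org::abc123"},
        {"id": "ftjob-2", "status": "running", "fine_tuned_model": None},
    ]
}
print(finished_models(sample))  # ['ft:gpt-3.5-turbo:my-org::abc123']
```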


Thanks. Also, when I first tried to use this:

```
openai.FineTuningJob.create(training_file="file-abc123", model="gpt-3.5-turbo")
```

So the API is `openai.FineTune.list()`; I think the docs may need an update, not sure. Just to update here: it did list all five models.

Ah yes, that’s the older endpoint for Davinci, forgot that’s the model you selected.

So the models were created using FineTune. As I follow the examples, I am unable to use `completion = openai.ChatCompletion.create`; it says: "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?"

P.S. I used davinci only because it said 'gpt-turbo-3.5' was an invalid model for fine-tuning. Would love to use gpt-turbo…

Davinci is not a chat model, so you’ll need to use Completion endpoints. You should see your completed models in the Playground, once you select Mode:Completion.
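As a hedged sketch of what that looks like with the legacy openai-python 0.x SDK (the model name below is a placeholder in the old `davinci:ft-...` naming style, not a real model):

```python
# A davinci fine-tune answers on v1/completions, not v1/chat/completions,
# which is why ChatCompletion rejects it with the error quoted above.
request = {
    "model": "davinci:ft-personal-2023-08-28-00-00-00",  # placeholder fine_tuned_model name
    "prompt": "Say hello ->",
    "max_tokens": 20,
    "temperature": 0,
}

# With the legacy SDK this would be sent as:
#   import openai
#   response = openai.Completion.create(**request)
#   print(response["choices"][0]["text"])
print(request["model"])
```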

If you want to fine-tune GPT-3.5, you'll need to use the new `FineTuningJob` endpoint and a different format for the fine-tuning data. See the new fine-tuning guide (and you can compare with the legacy fine-tuning guide).

This makes sense. I should have tested with small amounts of data instead of jumping in head first. Live and learn. Thank you much for the helpful pointers.


I am using Code Interpreter interactively via GPT-4. How can I use that via the API? There was no obvious way, and that's when I resorted to trying it with fine-tuning. Thanks for any info.

Code Interpreter is a specially trained AI model, plus the sandbox environment where it can run Python code and a user interface for accessing files.

It is a complete solution only available in ChatGPT.

You will get a different AI model that is similarly trained (likely the same one) when you provide a function to the API in your request (an undocumented discovery I put forth).

However, you would need to write your own software bodge solution using information that is not documented, only reverse-engineered, plus high skill and insight.

  1. Use a dummy function that doesn't get called.
  2. Add the extracted ChatGPT Code Interpreter prompt to the system prompt.
  3. Capture Python function calls when the AI wants to run code, and run them (along with any content the AI wants to say at the same time).
  4. To the chat history, add the AI's chat content response and the actual unseen AI-generated function language correctly as the "assistant" role, and the last line of generated variables from Python as the "function" role, then call the API again.
  5. Repeat until you don't get any function calls.
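The steps above can be sketched as a loop. This is not a working Code Interpreter clone: `call_model` stands in for `openai.ChatCompletion.create(..., functions=[dummy_fn])` (stubbed here so the control flow runs without an API key), the `python` function name and the raw-code argument format are assumptions from the reverse-engineering described above, and `run_code` is a toy stand-in for a real sandbox:

```python
def run_code(source):
    """Toy stand-in for a sandbox: execute the model's code, return `result`."""
    scope = {}
    exec(source, scope)  # a real implementation must sandbox this!
    return scope.get("result")

def chat_with_tools(call_model, user_msg):
    messages = [
        {"role": "system", "content": "You can run python via the `python` function."},
        {"role": "user", "content": user_msg},
    ]
    while True:
        reply = call_model(messages)
        messages.append(reply)                    # keep the full history (step 4)
        fc = reply.get("function_call")
        if not fc:                                # step 5: no function call -> done
            return reply["content"]
        output = run_code(fc["arguments"])        # step 3: run the generated code
        messages.append({"role": "function", "name": "python",
                         "content": repr(output)})  # feed the result back

# Scripted fake model: first asks to run code, then answers from the result.
def fake_model(messages):
    if messages[-1]["role"] != "function":
        return {"role": "assistant", "content": None,
                "function_call": {"name": "python", "arguments": "result = 6 * 7"}}
    return {"role": "assistant",
            "content": f"The answer is {messages[-1]['content']}."}

print(chat_with_tools(fake_model, "What is 6 * 7?"))  # The answer is 42.
```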

A complete chat history is required for the iterative calls the AI may make to fulfill a question.

Fine-tuning for functions is not currently available, likely because it would require disclosure of the actual language the AI receives and sends to understand functions.

Wow, what a great response, and at the same time daunting. Can't say I get it at first read; I'll have to work on it a bit to understand what's involved. Thanks.