Seeking Advice: Fine-Tuning GPT Models - Hitting Daily Rate Limit Issue

Hi everyone,

I’m a Master’s student currently finishing my research dissertation. My project explores how adaptable and flexible pre-trained GPT models are when fine-tuned for off-label applications. As a test bed, I’m using the financial stock market: I gather daily news and stock prices for the SPY index, and I’ve orchestrated 4 GPT models to process the data and predict stock price movements:

  1. The first filters news, selects those with potential stock price impact, and performs sentiment analysis.
  2. The second cross-references the news data with stock price fluctuations from previous days and predicts the next day’s index price.
  3. The third model only analyses stock prices and makes predictions without any news influence.
  4. The fourth compares the predictions of the second and third models to issue a final prediction for the next day.

Here’s the core of my process:

  • I run predictions with the non-tuned models (completed).
  • After a 30-trading-day cycle, I fine-tune each of the 4 models.
  • I use the fine-tuned models for the next 30 trading days, then run another fine-tuning cycle, repeating this for 80% of the dataset.
  • I evaluate the performance of the final fine-tuned model using the remaining 20% of the dataset.

The dataset consists of 552 trading days:

  • I’m using 420 days for training and fine-tuning, which involves 14 fine-tuning cycles for each model.
  • In total, this leads to 56 fine-tuning jobs.
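The schedule arithmetic above can be sanity-checked in a few lines. This is just a sketch of the counts stated in the post; the variable names are mine, not part of any tooling:

```python
# Sanity check of the fine-tuning schedule described above (values from the post).
TOTAL_DAYS = 552        # trading days in the dataset
CYCLE_LENGTH = 30       # trading days between fine-tuning runs
NUM_MODELS = 4          # models fine-tuned each cycle

train_days = 420                                # ~80% split, rounded to whole cycles
cycles_per_model = train_days // CYCLE_LENGTH   # 420 / 30 = 14 cycles per model
total_jobs = cycles_per_model * NUM_MODELS      # 14 * 4 = 56 fine-tuning jobs
eval_days = TOTAL_DAYS - train_days             # 132 days held out for evaluation
```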

I’ve recently fixed the bugs in my system and was all set to run the full-scale test, but I encountered this issue:

Error creating fine-tuning job: 429 This fine-tune request has been rate-limited. Your organization has reached the maximum of 16 fine-tuning requests per day for the model ‘gpt-4o-mini-2024-07-18’.

This limitation is blocking me from running the tests on my full dataset. Additionally, I couldn’t find any clear information about whether there are monthly rate limits for fine-tuning on top of the daily cap of 16 jobs.

I have sent an email to finetuning@openai.com regarding this question, but thought I might also ask a broader community of developers for help.

Does anyone know if there’s a monthly cap on fine-tuning jobs for this model? Also, has anyone had success requesting an increase to these limits for a short period, perhaps for research or testing purposes? I’m considering spreading the tests over four days (a 24-hour setTimeout after every 16 fine-tuning jobs), but if there’s a monthly limit, I might be wasting my time.
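For what it’s worth, the pacing idea can be sketched without any OpenAI-specific code: split the pending jobs into daily batches of at most 16 and pause roughly 24 hours between batches. `DAILY_CAP`, `batch_jobs`, and `days_needed` below are illustrative names, not part of any SDK:

```python
import math

DAILY_CAP = 16  # per-model daily fine-tuning job cap reported in the 429 error

def batch_jobs(jobs, cap=DAILY_CAP):
    """Split a list of pending jobs into daily batches of at most `cap` jobs each."""
    return [jobs[i:i + cap] for i in range(0, len(jobs), cap)]

def days_needed(num_jobs, cap=DAILY_CAP):
    """Calendar days required to submit `num_jobs` at `cap` jobs per day."""
    return math.ceil(num_jobs / cap)

# 56 jobs at 16/day -> 4 days: submit one batch, then wait ~24h before the next.
```

In a real run, each batch submission would be followed by something like `time.sleep(24 * 3600)`; note this assumes the daily counter resets on a roughly 24-hour window, which is not documented.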

If anyone needs more details or access to sections of my code, I’d be happy to share in private. Since my research is not yet submitted or published, I need to be mindful of ethics and disclosure beforehand.

I’d appreciate any advice on how to approach this, especially if there’s a workaround or alternative strategies for managing this limitation.

Thanks in advance for any help you can provide!

Best regards!

2 Likes

Hi there and welcome to the Forum!

As for the limits, these are defined by the Tier you are in. You can look them up at the following link: https://platform.openai.com/settings/organization/limits

Specifically, there are daily limits for the maximum number of fine-tuned models, which is in line with the error you are experiencing. I am not aware of a monthly limit though. The most straightforward way to increase your limits would be to add enough funds to your developer account to place you into a higher Tier. For reference, you can find the Tier requirements here: https://platform.openai.com/docs/guides/rate-limits/usage-tiers

In principle it is possible to request an exception to limits, even if you are at a lower tier. You can do so at the bottom of the page under link shared above. However, it may take some time to get approved, if at all.

P.S.: I am not sure if your use case fits with the intention of fine-tuning under the OpenAI fine-tuning endpoint, especially for the steps that involve prediction - although it may depend on how you are approaching the predictive analysis. Anyway, leaving this out for the moment.

1 Like

Thank you very, very much for your reply!

Specifically, there are daily limits for the maximum number of fine-tuned models, which is in line with the error you are experiencing

Oh yes, I am aware of those; as per the screenshot below, the model gpt-4o-mini-2024-07-18 is not listed:

I assumed it would fall under “Other”, and since “Other” has no set limit, I took that to mean there was no limit; but apparently there is a cap of 16 jobs per day (as per the server response), unfortunately.

I am not aware of a monthly limit though

That is actually great to hear. I read that they had monthly limits a couple of years ago, so I wasn’t sure whether those still applied.

In principle it is possible to request an exception to limits, even if you are at a lower tier. You can do so at the bottom of the page under link shared above

I actually did go there before posting to request one, but when clicking the “Request an exception” option I ran into two problems:
1 - there is no option to request an exception for fine-tuning “jobs per day”;
2 - there is no option for the model gpt-4o-mini-2024-07-18, which I assume would fall under “Other”; but when selecting “Other” I receive the message below:

We are not currently accepting requests for other models
This includes GPT4 Turbo preview models. If you feel a model is missing here, you can let us know by reaching out at our help center.

Instead of going through the help center, I decided to email them directly.

P.S.: I am not sure if your use case fits with the intention of fine-tuning under the OpenAI fine-tuning endpoint, especially for the steps that involve prediction - although it may depend on how you are approaching the predictive analysis. Anyway, leaving this out for the moment.

Yes, I am really delving into off-label usage; that is fine. The purpose is to explore that, see the results, and evaluate how far off they are. We hope to see whether the responses can become more accurate at weighting the different predictions, or at weighting positive or negative news against actual stock price outcomes.

Thanks very much again for the answer!

2 Likes

Good luck with the project in any case. Perhaps you can share your insights here once you’ve completed the work. I’m sure Forum members would be interested.

1 Like

Hey
We have a Tier 5 org and are experimenting with fine-tuning. I hit the same 429 error yesterday.
There is no info on fine-tuning limits for the gpt-4o* models, nor are there any response headers indicating when the limit resets.
It would be nice to have more info on this topic in the docs.
P.S.
Great project of yours! Good luck with it

What specific error message are you getting?

{
    "error": {
        "message": "This fine-tune request has been rate-limited. Your organization has reached the maximum of 16 fine-tuning requests per day for the model 'gpt-4o-mini-2024-07-18'.",
        "type": "invalid_request_error",
        "param": null,
        "code": "daily_rate_limit_exceeded"
    }
}
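For anyone scripting around this, the machine-readable `code` field in that payload makes the error easy to detect before deciding to back off. A minimal sketch (the `is_daily_cap_error` helper is illustrative, not part of the OpenAI SDK):

```python
import json

def is_daily_cap_error(payload: dict) -> bool:
    """Return True if an API error payload carries the daily fine-tuning cap code."""
    return payload.get("error", {}).get("code") == "daily_rate_limit_exceeded"

# Example against a body shaped like the response above:
body = ('{"error": {"message": "rate-limited", "type": "invalid_request_error", '
        '"param": null, "code": "daily_rate_limit_exceeded"}}')
hit_cap = is_daily_cap_error(json.loads(body))  # True -> wait until the next day
```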
1 Like

Maybe the request id can help to identify it:
request_id req_59c29afdcd69b8a545ada7415303fa1e

Hey everyone!

First of all, thank you all for answering here and giving me some of your time!
I apologise for my late reply; the past two days have been a bit hectic, and this post slipped my mind.

A few hours after my post, I actually received an email reply from the team at OpenAI. They sympathised with my research and granted me an increase in the number of fine-tuning jobs per day, allowing me to run the entire system in one go.

Very much a “Put her to sea, Mr. Murdoch!”

The team was very accessible and swift in replying, so I recommend that, if anyone else finds themselves in the same position, they just send an email explaining the situation and their intentions.

Shoutout to the team for their very quick reply and attention!

2 Likes