Hi everyone,
I’m a Master’s student currently finishing my research dissertation. My project explores how adaptable and flexible pre-trained GPT models are when fine-tuned for off-label applications. As a test bed, I use the financial stock market, gathering daily news and stock prices for the SPY index. I’ve orchestrated 4 GPT models to process the data and predict stock price movements (a rough sketch of the pipeline follows this list):
- The first filters news, selects those with potential stock price impact, and performs sentiment analysis.
- The second cross-references the news data with stock price fluctuations from previous days and predicts the next day’s index price.
- The third analyses stock prices alone and makes predictions without any news influence.
- The fourth compares the predictions of the second and third models to issue a final prediction for the next day.
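Here’s a simplified sketch of how the four calls chain together using the official openai Node SDK. The prompts, the ask() helper, and predictNextDay() are placeholders for illustration, not my actual code:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Generic helper: one chat call, returning the text of the first choice.
async function ask(model: string, system: string, user: string): Promise<string> {
  const res = await client.chat.completions.create({
    model,
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ],
  });
  return res.choices[0].message.content ?? "";
}

async function predictNextDay(headlines: string[], recentCloses: number[]): Promise<string> {
  const model = "gpt-4o-mini-2024-07-18";

  // Model 1: filter the news and score sentiment.
  const sentiment = await ask(
    model,
    "Keep only headlines likely to move SPY and return a sentiment score per headline.",
    headlines.join("\n")
  );

  // Model 2: sentiment-scored news + recent prices -> prediction.
  const newsPrediction = await ask(
    model,
    "Given sentiment-scored news and recent SPY closes, predict tomorrow's close.",
    `News:\n${sentiment}\nCloses: ${recentCloses.join(", ")}`
  );

  // Model 3: prices only -> prediction, no news influence.
  const pricePrediction = await ask(
    model,
    "Given recent SPY closes only, predict tomorrow's close.",
    recentCloses.join(", ")
  );

  // Model 4: reconcile the two predictions into a final answer.
  return ask(
    model,
    "Compare a news-based and a price-based prediction and issue one final prediction.",
    `News-based: ${newsPrediction}\nPrice-based: ${pricePrediction}`
  );
}
```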
Here’s the core of my process:
- I run predictions with the base, non-fine-tuned models (already completed).
- After each 30-trading-day cycle, I fine-tune each of the 4 models.
- I use the fine-tuned models for the next 30 trading days, then run another fine-tuning cycle, repeating this over 80% of the dataset (see the loop sketch after this list).
- I evaluate the performance of the final fine-tuned model using the remaining 20% of the dataset.
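The fine-tuning step of each cycle looks roughly like this. The JSONL path and the one-minute polling interval are placeholders; the files and fine-tuning calls are the standard SDK endpoints:

```typescript
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

// Fine-tune one model on a 30-day window and return the new model id.
async function fineTuneCycle(currentModel: string, jsonlPath: string): Promise<string> {
  // Upload the window of prompt/completion pairs as a training file.
  const file = await client.files.create({
    file: fs.createReadStream(jsonlPath),
    purpose: "fine-tune",
  });

  // Launch the fine-tuning job on top of the current model snapshot.
  const job = await client.fineTuning.jobs.create({
    training_file: file.id,
    model: currentModel,
  });

  // Poll until the job finishes.
  let latest = job;
  while (latest.status !== "succeeded" && latest.status !== "failed") {
    await new Promise((r) => setTimeout(r, 60_000)); // check once a minute
    latest = await client.fineTuning.jobs.retrieve(job.id);
  }
  if (latest.status === "failed" || !latest.fine_tuned_model) {
    throw new Error(`Fine-tuning job ${job.id} failed`);
  }
  return latest.fine_tuned_model; // used for the next 30 trading days
}
```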
The dataset consists of 552 trading days:
- I’m using 420 days for training and fine-tuning, which works out to 14 fine-tuning cycles (420 / 30) for each model.
- In total, that’s 14 cycles × 4 models = 56 fine-tuning jobs.
I’ve recently fixed the bugs in my system and was all set to run the full-scale test, but I encountered this issue:
Error creating fine-tuning job: 429 This fine-tune request has been rate-limited. Your organisation has reached the maximum of 16 fine-tuning requests per day for the model ‘gpt-4o-mini-2024-07-18’.
This limit is blocking me from running the tests on my full dataset. Additionally, I couldn’t find any clear information about whether there is a monthly rate limit for fine-tuning on top of the daily cap of 16 jobs.
I have emailed finetuning@openai.com with this question, but thought I’d also ask the broader developer community for help.
Does anyone know if there’s a monthly cap on fine-tuning jobs for this model? Also, has anyone had success requesting a temporary increase to these limits, perhaps for research or testing purposes? I’m considering spreading the run over four days (simply sleeping for 24 hours with a setTimeout after every 16 fine-tuning jobs, as in the sketch below), but if there’s a monthly limit as well, I might be wasting my time.
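For reference, the pacing idea would be something like the following, assuming (unconfirmed) that the 16-job counter resets on a rolling 24-hour basis:

```typescript
const DAILY_CAP = 16; // from the 429 error message
const DAY_MS = 24 * 60 * 60 * 1000;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Run the queued fine-tuning jobs in order, pausing 24 hours after
// every 16 so the run stays under the per-day cap.
async function runThrottled(jobs: Array<() => Promise<void>>): Promise<void> {
  for (let i = 0; i < jobs.length; i++) {
    if (i > 0 && i % DAILY_CAP === 0) {
      await sleep(DAY_MS); // wait out the daily fine-tuning quota
    }
    await jobs[i]();
  }
}
```

With 56 jobs at 16 per day, that’s four calendar days end to end, on top of however long the jobs themselves take to complete.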
If anyone needs more details or access to parts of my code, I’d be happy to share privately. Since my research is not yet submitted or published, I need to be mindful of ethics and disclosure.
I’d appreciate any advice on how to approach this, especially if there’s a workaround or an alternative strategy for managing this limitation.
Thanks in advance for any help you can provide!
Best regards!