Recently my attempts to create fine-tune jobs have been failing, and it's not clear where I can find the reason for the failure.
I tried reducing the dataset size and the length of the prompts and completions, but that didn't help.
Later I came across the following information, which suggests I may have hit the monthly limit on fine-tuned models:
OpenAI GPT-3 Fine tuning Guide, with examples - HarishGarg.com
Right now, you can fine-tune up to 10 models per month and each dataset can be up to 2.5M tokens or 80-100MB in size.
Is there really such a limit on the number of fine-tuned models?
And if so, is it possible to apply for an increase?