This is normal: fine-tuning can take many hours, and a job will sit in the “pending” state until it actually runs, sometimes for a very long time.
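If you want to see where the job actually is rather than guessing, here is a minimal sketch using the openai Python SDK (v1+); the job ID is a placeholder, and depending on the endpoint version the intermediate status may read “pending”, “queued”, or “running”:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder job ID -- substitute the ID returned when you created the job.
job_id = "ftjob-XXXXXXXXXXXX"

# Poll the job until it reaches a terminal state; the waiting phase can last hours.
while True:
    job = client.fine_tuning.jobs.retrieve(job_id)
    print(f"status: {job.status}")
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)  # check once a minute rather than hammering the API
```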
Actually, while there’s no need to share fine-tune IDs, I don’t think there is much of a cybersecurity risk in doing so.
Even if someone has your ID, they are unlikely to be able to run the model unless they break into your account, and if they have broken into your account, they can pull the fine-tune IDs out of it anyway.
Next, even if someone were some kind of super hacker who could bypass user authentication to run your model: models are run by model name, not by ID, and a job still in the “pending” state does not have a model name yet.
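To make that concrete, a fine-tuning job object carries both a job ID and a separate fine_tuned_model name, and the latter stays empty until the job succeeds. A rough sketch under the same placeholder-ID assumption:

```python
from openai import OpenAI

client = OpenAI()

# Placeholder job ID -- the kind of value that gets posted in forum threads.
job = client.fine_tuning.jobs.retrieve("ftjob-XXXXXXXXXXXX")

# While the job is still waiting to run, there is no model name to call at all.
if job.fine_tuned_model is None:
    print(f"Job {job.id} has not finished; there is nothing to run from it.")
else:
    # Only the finished model name (e.g. "ft:gpt-3.5-turbo:my-org::abc123")
    # can be passed as the `model` parameter of a completion request.
    response = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
```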
Then, even if someone could get your model name and hack the system to run your model, so what?
OpenAI’s terms of service say not to fine-tune on confidential or private data, so per OpenAI policy your models should not contain any private data in the first place.
So, while I understand your recommendation @humphree, as someone with decades of experience in cybersecurity I honestly fail to see any “practical risk” to a user in posting the IDs of pending fine-tune jobs.
Don’t get me wrong: I am not encouraging people to post the IDs and names of their fine-tuned models; I’m simply pointing out that, on the “risk curve”, I don’t see that @fcasals has done anything a reasonable cybersecurity professional would consider to carry “significant risk”.