Hello,
when I do fine-tuning via the OpenAI API, I can upload the .jsonl
file, create the fine-tuning job, and check its status, all through the API.
However, when I try to use the resulting model for chat completion, the API says it's not available.
I realized that the fine-tuned model is not enabled for chat completion by default,
and I have to activate it manually from the model list in the OpenAI dashboard.
This makes it very rigid when I want to fully automate the process on my platform via the API.
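For reference, the end-to-end flow I'm trying to automate looks roughly like this. This is only a sketch using the documented REST endpoints with stdlib `urllib` (no SDK); the training-file id, base model name, and poll interval are placeholders, and the live calls only run when `OPENAI_API_KEY` is set:

```python
import json
import os
import time
import urllib.request

API = "https://api.openai.com/v1"
KEY = os.environ.get("OPENAI_API_KEY")

def call(path, payload=None):
    """POST JSON (or GET when payload is None) to the OpenAI REST API."""
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode() if payload is not None else None,
        headers={"Authorization": f"Bearer {KEY}",
                 "Content-Type": "application/json"},
        method="POST" if payload is not None else "GET",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Statuses after which a fine-tuning job will not change again.
TERMINAL = {"succeeded", "failed", "cancelled"}

def wait_for_job(job_id, poll_seconds=30):
    """Poll a fine-tuning job until it reaches a terminal status."""
    while True:
        job = call(f"/fine_tuning/jobs/{job_id}")
        if job["status"] in TERMINAL:
            return job
        time.sleep(poll_seconds)

if KEY:  # skip the live API calls when no key is configured
    # 1. Create the job from an already-uploaded .jsonl file
    #    ("file-abc123" and the base model are placeholders).
    job = call("/fine_tuning/jobs",
               {"training_file": "file-abc123",
                "model": "gpt-4o-mini-2024-07-18"})
    # 2. Wait for the job to finish.
    job = wait_for_job(job["id"])
    # 3. Use the resulting model name directly in chat completions --
    #    this is the step that currently fails for me until I enable
    #    the model in the dashboard.
    if job["status"] == "succeeded":
        out = call("/chat/completions",
                   {"model": job["fine_tuned_model"],
                    "messages": [{"role": "user", "content": "Hello"}]})
        print(out["choices"][0]["message"]["content"])
```

Everything up to step 3 works fine for me via the API; it's only the final chat-completion call that fails until the manual dashboard step.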
Is there a way to solve this and have the model enabled for chat completion by default?
Thanks!