I’m curious about the fine-tuning capabilities of ChatGPT models with vision, particularly the new GPT-4o. From my limited testing, this model seems to perform better on vision tasks than GPT-4V.
However, I’ve noticed that the latest fine-tunable model available is GPT-4-0613, whose training data only extends to 2021. Given the advancements and potential improvements in more recent models like GPT-4o, it would be incredibly beneficial to be able to fine-tune these newer versions.
Could someone shed light on why it’s not possible to fine-tune GPT-4o? Are there any plans to enable fine-tuning for these more advanced models in the future?
Thank you for sharing your insights! I still hope that GPT-4o will become fine-tunable soon, though. Even without fine-tuning, it performs drastically better on the task I need it for, and previous models just don’t suffice.
I completely understand and share your anticipation for the fine-tuning capabilities of GPT-4o. I, too, hope that in the future we can apply fine-tuning to even more powerful models to fully leverage their potential. Let’s look forward to further technological advancements and the rollout of new features.
Missing, however, are the active/queued job rate limits you’d actually need in order to create a fine-tune, along with any clue as to the pricing.
The option to request access to gpt-4-0613 fine-tuning is now gone; only gpt-4o remains (by alias). The prerequisite for being able to request access was being an existing user of fine-tuning and fine-tuned models.
(Bonus points to OpenAI for finally making the web fine-tuning UI fully functional; the only thing still missing is Weights & Biases integration.)
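For anyone who wants to check availability from the API side rather than the web UI, here’s a minimal sketch using the openai Python SDK. The snapshot name `gpt-4o-2024-08-06` and the training file path are assumptions/placeholders; substitute whatever fine-tunable gpt-4o snapshot your account actually lists.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# List the models visible to this account and look for gpt-4o snapshots.
available = [m.id for m in client.models.list() if "gpt-4o" in m.id]
print("gpt-4o models visible to this key:", available)

# Upload a JSONL training file (placeholder path) and start a fine-tuning job.
# "gpt-4o-2024-08-06" is an assumed snapshot name, not necessarily the one
# your organization has access to.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print("fine-tuning job:", job.id, job.status)
```

If the job-creation call fails with a model-access error, that would suggest the snapshot isn’t yet enabled for your organization.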
Did you have to make a request or contact OpenAI to get access to it? When did you first notice you had access? Also, what’s the cost per token compared to GPT-4-0613 fine-tuning?