Hey. I would really love to be able to fine-tune the vision-model to read receipts more accurately. Will fine-tuning the visual model be possible in the future?
During the DevDay keynote, Sam Altman said that fine-tuning for GPT-4 models will be available in the future, but that before a general release they would let selected developers who had previously used fine-tuning with the gpt-3.5 models test it first.
We have the same need here as well.
Roughly how long will it take for fine-tuning to become available for the GPT Vision API?
As far as I can remember, no release date was mentioned, and since it will start as a program with selected participants (for fine-tuning GPT-4 Turbo), I'd guess it could take a couple of months, maybe more.
Just a guess, I’m not sure about it!