Since it’s free right now, I’m going to try fine-tuning a model to produce output without ASCII characters for a table I’m working on…
But fine-tuning won’t help the model retrieve the data I want more accurately. It will only help format the output better, and it’s something to try after everything else has been attempted and you’re still getting errors.
Also, another thread mentioned the possibility of visual fine-tuning, which sounds like it shouldn’t be too far away, with multi-modal 4o already tunable.
Hello guys, how are you?
Does anyone have any techniques for preparing large datasets for fine-tuning? I find that this is where we spend the most time.
Out of curiosity, does fine-tuning always have to be done at the code level, or can it be done through a web interface within an assistant? If it has to be done via code, could someone post an example?
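For what it’s worth, here is a minimal sketch of what the code path usually looks like: you prepare a JSONL file of chat-formatted examples, upload it, and start a job. The example conversations below are made up, and the API calls at the end are shown as comments since they need an API key and the `openai` package.

```python
import json

# Hypothetical training rows: each line of the JSONL file is one chat
# conversation in the format the fine-tuning endpoint expects.
examples = [
    {"messages": [
        {"role": "system", "content": "Format answers as a plain table."},
        {"role": "user", "content": "List two fruits and their colors."},
        {"role": "assistant", "content": "apple: red\nbanana: yellow"},
    ]},
    {"messages": [
        {"role": "system", "content": "Format answers as a plain table."},
        {"role": "user", "content": "List two metals and their symbols."},
        {"role": "assistant", "content": "iron: Fe\ncopper: Cu"},
    ]},
]

# Write one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# With the file prepared, the job itself is a couple of API calls
# (requires `pip install openai` and an OPENAI_API_KEY set):
#
#   from openai import OpenAI
#   client = OpenAI()
#   upload = client.files.create(file=open("train.jsonl", "rb"),
#                                purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=upload.id,
#                                        model="gpt-4o-mini-2024-07-18")
#   print(job.id)
```

There is also a web-based fine-tuning UI in the platform dashboard where you can upload the same JSONL file without writing code, so the code route is not strictly required.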
Great example! Thank you for providing it! I know this was just to show it’s possible, but I’m curious what the benefits vs. cost would be of fine-tuning with 10 examples vs. including the 10 examples in the initial prompt to the model. Generally, the model can provide a formatted response just using prompts with instructions and examples. Are there any benefits to fine-tuning vs. including examples in the prompt when the sample size is small?
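To make the comparison concrete, here is a sketch of the in-prompt alternative: packing the same examples into the `messages` list as few-shot user/assistant turns. All the contents below are invented for illustration.

```python
# Hypothetical few-shot pairs; in the fine-tuning route these would
# instead be rows in the training JSONL.
few_shot = [
    ("List two fruits and their colors.", "apple: red\nbanana: yellow"),
    ("List two metals and their symbols.", "iron: Fe\ncopper: Cu"),
]

messages = [{"role": "system", "content": "Format answers as a plain table."}]
for user_text, assistant_text in few_shot:
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})

# The actual question goes last.
messages.append({"role": "user", "content": "List two planets and their moons."})

# `messages` is now ready to send as a normal chat completion request.
# Trade-off: every request pays for the example tokens (and uses context
# window), whereas a fine-tuned model bakes the behavior in up front.
```

With only ~10 examples, prompting is usually the cheaper and faster experiment; fine-tuning starts to pay off when the examples would otherwise be resent on every call or no longer fit comfortably in the prompt.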