GPT-4o-mini fine-tuning with only 10 lines of code

Yeah.

Since it’s free right now, I’m going to try fine-tuning a model to produce output without ASCII characters for a table I’m working on…

But fine-tuning won’t help the model retrieve the data I want more accurately. It will only help format the output better, after everything else has been tried and you’re still getting errors.

Also, another thread mentioned the possibility of visual fine-tuning, which shouldn’t be too far away now that the multi-modal 4o is already tunable.


Hello guys, how are you?
Does anyone have any techniques for preparing large datasets for fine-tuning? I find that this is where we spend the most time: preparing the dataset.


Out of curiosity, does fine-tuning always have to be done at the code level, or can it be done through a web interface within an assistant? If it has to be done via code, could someone post an example?

1k examples just to train on one message??

Training pairs, i.e. a prompt input and an expected output. Q and A.

If you want to train the AI, you should show it lots of examples of a typical input and an expected output; the more, the better.
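As a sketch of what such training pairs can look like in practice, here is a minimal script that turns a list of input/output pairs into the JSONL chat format the fine-tuning API expects. The example pairs, system message, and filename are all made-up placeholders:

```python
import json

# Hypothetical example pairs: each maps a typical input to the expected output.
pairs = [
    ("List the planets as a table", "Planet: Mercury\nPlanet: Venus"),
    ("Show server stats as a table", "CPU: 42%\nRAM: 63%"),
]

# Each fine-tuning example is one JSON object per line, in chat-message format.
with open("training_data.jsonl", "w") as f:
    for question, answer in pairs:
        record = {
            "messages": [
                {"role": "system", "content": "Format tables without ASCII box-drawing characters."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

The point is simply that every line pairs a prompt with the exact answer you want the model to learn to produce.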

@Foxalabs you seem quite knowledgeable,

Can you please have a look at my fine tuning questions?

I’d really appreciate it!

community.openai.com/t/fine-tuning-gpt-4o-or-4o-mini-on-our-codebase/920438/2

Hello, welcome!

Fine-tuning a model is code only, to my knowledge, right now.

You have to create a very specific type of .jsonl file, using a specified question/answer format (or a longer multi-turn question/answer/question/answer exchange).
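To illustrate that shape, here is a rough sketch of a small validator for such a .jsonl file, followed by (commented-out) calls that would upload it and start a job with the openai Python SDK. The filename and model snapshot name are assumptions, not prescriptions:

```python
import json

def validate_jsonl(path):
    """Check that every line is a chat-format example with valid roles."""
    allowed = {"system", "user", "assistant"}
    with open(path) as f:
        for n, line in enumerate(f, start=1):
            record = json.loads(line)  # raises if the line is not valid JSON
            messages = record["messages"]
            assert all(m["role"] in allowed for m in messages), f"bad role on line {n}"
            # The model needs at least one assistant turn to learn from.
            assert any(m["role"] == "assistant" for m in messages), f"no assistant turn on line {n}"
    return True

# Once the file validates, a job can be started via the SDK, roughly:
# from openai import OpenAI
# client = OpenAI()
# upload = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
# client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-4o-mini-2024-07-18")
```

Validating locally first is worthwhile, since a malformed line will fail the whole training job.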

Great example! Thank you for providing it! I know this was just to show it’s possible, but I’m curious what the benefits vs. cost would be of fine-tuning with 10 examples vs. including the 10 examples in the initial prompt to the model. Generally, the model can produce a formatted response just from a prompt with instructions and examples. Are there any benefits to fine-tuning vs. including examples in the prompt when the sample size is small?
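For comparison, the in-prompt alternative described here can be sketched as packing the same example pairs into a few-shot message list instead of a training file. The example data and system message are hypothetical:

```python
# Hypothetical few-shot examples, the same pairs a fine-tune would use.
examples = [
    ("List the planets as a table", "Planet: Mercury\nPlanet: Venus"),
]

def build_few_shot_messages(examples, new_question):
    """Pack example pairs into the prompt as alternating user/assistant turns."""
    messages = [{"role": "system", "content": "Format tables without ASCII box-drawing characters."}]
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    # The real question goes last, so the model imitates the examples above it.
    messages.append({"role": "user", "content": new_question})
    return messages

msgs = build_few_shot_messages(examples, "Show the moons of Jupiter as a table")
```

The trade-off the question raises is then concrete: the few-shot version re-sends every example on each request (paying for those tokens every time), while fine-tuning pays a one-time training cost and keeps the prompt short.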