DPO fine-tuning not possible with gpt-4o-mini

Hello,

I get an error while trying to fine-tune gpt-4o-mini with DPO. It seems the model is not available for DPO specifically. Is this true? Are you working on adding it?

In the fine-tuning interface where you choose a model for DPO, there is only one choice.

OpenAI hasn't documented any particular model limitations for DPO, so this may simply be the current state of support. And as far as poking someone for an official answer goes, there's nobody here to poke.
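
If you're hitting this through the API rather than the UI, here's a minimal sketch of a DPO (preference) fine-tuning job using the OpenAI Python SDK. The `method` block shape follows OpenAI's preference fine-tuning docs; the file ID and the `beta` value are placeholders, and the model shown is the one the interface currently offers:

```python
from openai import OpenAI

client = OpenAI()

# Create a DPO fine-tuning job. The training file must be a JSONL of
# preference pairs (preferred vs. non-preferred completions) uploaded
# with purpose="fine-tune". "file-abc123" is a placeholder ID.
# Swapping in a gpt-4o-mini snapshot here is what produces the
# "model not available" error described above.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-4o-2024-08-06",
    method={
        "type": "dpo",
        "dpo": {
            # beta controls how strongly the model is pulled toward
            # the preferred responses; 0.1 is just an example value.
            "hyperparameters": {"beta": 0.1},
        },
    },
)
print(job.id, job.status)
```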
