@john.allard yeah, this should not be marked as resolved; it's still happening (for me personally, only on a gpt-4o fine-tune, not mini).
@arvy.kubilius @arsenii Do you guys use response_format={"type": "json_object"} at inference time? If you remove it, does it work? Could you get by without this JSON format restriction?
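To be concrete, here is a minimal sketch of the comparison I mean (the fine-tuned model ID is a placeholder; substitute your own):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder fine-tuned model ID -- replace with yours
MODEL = "ft:gpt-4o-2024-08-06:my-org::abc123"

messages = [
    {"role": "system", "content": "Return a JSON object mapping fruit names to colors."},
    {"role": "user", "content": "List two fruits."},
]

# With JSON mode enabled
constrained = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    response_format={"type": "json_object"},
)

# Same request without the restriction -- the question is whether
# only this variant produces sensible output from the fine-tune
unconstrained = client.chat.completions.create(
    model=MODEL,
    messages=messages,
)

print(constrained.choices[0].message.content)
print(unconstrained.choices[0].message.content)
```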
In our experience the models are quite reliable at responding in JSON format if we ask them explicitly. We write this in our system prompt:
The response must be a valid JSON array that contains …[describe the expected structure here]… Only provide an RFC 8259 compliant JSON response.
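A minimal sketch of that approach (model ID, prompt wording, and the expected structure are just illustrative), skipping response_format and validating the output ourselves:

```python
import json

from openai import OpenAI

client = OpenAI()

# Illustrative prompt -- describe your actual expected structure here
SYSTEM_PROMPT = (
    "The response must be a valid JSON array of objects with keys "
    "'name' and 'color'. Only provide an RFC 8259 compliant JSON response."
)

response = client.chat.completions.create(
    model="gpt-4o",  # or a fine-tuned model ID
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "List two fruits."},
    ],
)

raw = response.choices[0].message.content
try:
    data = json.loads(raw)  # verify the model actually returned valid JSON
except json.JSONDecodeError:
    # In our experience this branch is rarely hit; retry or log if it is
    raise
print(data)
```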
Yes, that is what I use. Actually, there are more details in the post titled "Fine-tuning and nonsensical JSON output (tons of extra keys)"; even the Jupyter Notebook and the data are shared there.
It is still strange that it does not work, i.e. something breaks when I do the fine-tuning. @VSZM @john.allard