Is there a way to fine-tune a model for use with the Responses API (specifically, for an application that uses function calling and file search)? I was hoping to take real user queries and fine-tune the model to call the correct function tool, and also to take function tool responses and fine-tune the resulting message back to the user.
The fine-tuning guides all reference the Chat Completions API, and the required .jsonl training format is Chat Completions-style. I could try to convert all my Responses API history into the Chat Completions format and build a .jsonl file from that, but I'm skeptical as to whether the resulting fine-tuned model would then work when I use it in a Responses API setting. Plus, Chat Completions doesn't support file search, so I guess I'd have to drop any of the queries where file search was the appropriate response.
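For reference, the conversion I had in mind is roughly the sketch below: map each Responses API output item (message, function_call, function_call_output) onto a Chat Completions-style training message, skipping anything Chat Completions can't represent. The `get_weather` tool, the sample transcript, and the exact item field names here are my own illustration based on what I see in my stored transcripts, so treat them as assumptions rather than a vetted converter:

```python
import json

def responses_items_to_chat_messages(items):
    """Map a list of Responses API items to Chat Completions-style messages.

    Field names (type, role, content, call_id, name, arguments, output) are
    my reading of the Responses output format -- verify against real data.
    """
    messages = []
    for item in items:
        kind = item.get("type", "message")
        if kind == "message":
            messages.append({"role": item["role"], "content": item["content"]})
        elif kind == "function_call":
            # Chat Completions represents a tool invocation as an assistant
            # message carrying a tool_calls array.
            messages.append({
                "role": "assistant",
                "tool_calls": [{
                    "id": item["call_id"],
                    "type": "function",
                    "function": {"name": item["name"],
                                 "arguments": item["arguments"]},
                }],
            })
        elif kind == "function_call_output":
            messages.append({"role": "tool",
                             "tool_call_id": item["call_id"],
                             "content": item["output"]})
        # Items like file_search_call have no Chat Completions equivalent,
        # so transcripts containing them would be filtered out upstream.
    return messages

# Hypothetical transcript of one function-calling turn.
items = [
    {"type": "message", "role": "user",
     "content": "What's the weather in Oslo?"},
    {"type": "function_call", "call_id": "call_1",
     "name": "get_weather", "arguments": '{"city": "Oslo"}'},
    {"type": "function_call_output", "call_id": "call_1",
     "output": '{"temp_c": 4}'},
    {"type": "message", "role": "assistant",
     "content": "It's about 4\u00b0C in Oslo right now."},
]

line = json.dumps({"messages": responses_items_to_chat_messages(items)})
print(line)  # one training row for the fine-tuning .jsonl file
```

Even if this round-trips cleanly, my question stands: would the fine-tuned model's behavior carry over once it's driven by the Responses API again?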
If fine-tuning and the Responses API are just incompatible right now, any idea whether it will become an option in the future?
And yes, I’ve done a lot of prompt engineering already and will continue to do so!