Fine-tuning for LangChain structured-output usage

Hello everyone,

I’m currently using the API through LangChain with structured output. From what I understand, this essentially configures a function-calling setup: the Pydantic model defines the output schema, the model is forced to call the corresponding tool, and the assistant’s textual response is often left empty.
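To make that concrete, here is a sketch (stdlib only) of the kind of request body this setup effectively produces. The `Answer` tool name and its `sentiment`/`confidence` fields are placeholders I made up, standing in for whatever Pydantic model you pass to LangChain's `with_structured_output`:

```python
import json

# Tool definition mirroring a hypothetical Pydantic model like:
#   class Answer(BaseModel):
#       sentiment: str
#       confidence: float
answer_tool = {
    "type": "function",
    "function": {
        "name": "Answer",
        "description": "Structured answer extracted from the user message.",
        "parameters": {
            "type": "object",
            "properties": {
                "sentiment": {"type": "string"},
                "confidence": {"type": "number"},
            },
            "required": ["sentiment", "confidence"],
        },
    },
}

# Roughly the request sent under the hood: tool_choice forces the model to
# call this one tool, so the payload lands in tool_calls and the assistant's
# textual content stays empty.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "I love this product!"}],
    "tools": [answer_tool],
    "tool_choice": {"type": "function", "function": {"name": "Answer"}},
}
print(json.dumps(request_body, indent=2))
```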

I’d now like to fine-tune gpt-4o-mini on this use case but haven’t found much related content or examples to guide me. I’d appreciate any advice from experts here on how to approach this, or any example workflows that could help clarify the process.

My initial thought was to define the output schema as a tool in each training example and put the tool call in the assistant turn, but I’m a bit unclear on how to structure this properly and how to make sure I get all the details right.
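To illustrate what I mean, here is a sketch of one training example in the chat fine-tuning JSONL format, again using a made-up `Answer` tool. The assistant turn carries the structured output as a tool call with no textual content, which is what I believe LangChain produces at inference time (please correct me if the format is off):

```python
import json

# Hypothetical tool definition, same shape as an OpenAI function tool.
answer_tool = {
    "type": "function",
    "function": {
        "name": "Answer",
        "parameters": {
            "type": "object",
            "properties": {
                "sentiment": {"type": "string"},
                "confidence": {"type": "number"},
            },
            "required": ["sentiment", "confidence"],
        },
    },
}

# One training example: the tools are declared alongside the messages, and
# the target assistant message is a tool call rather than plain text.
example = {
    "messages": [
        {"role": "system", "content": "Extract sentiment from the user message."},
        {"role": "user", "content": "The delivery was late and the box was damaged."},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "Answer",
                        # Arguments are a JSON-encoded string, as in API responses.
                        "arguments": json.dumps(
                            {"sentiment": "negative", "confidence": 0.9}
                        ),
                    },
                }
            ],
        },
    ],
    "tools": [answer_tool],
}

# Each training example becomes one JSON object per line of the .jsonl file.
jsonl_line = json.dumps(example)
print(jsonl_line[:80])
```

Does this look like the right shape for the training file, or is there a better way to pair the schema with the expected call?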

Thanks in advance for any guidance!