Fine-tuning a QnA LangChain model

Hi,
How to fine tune a q n a model using langchain? I need to make the answers generate in a particular way. I’ve the json file to fine tune but unsure where to add it in the code since it’s not a completion model.

If you want to fine-tune a model using LangChain, you’ll likely have better luck on a forum dedicated to LangChain.

We can help you here with OpenAI APIs and fine-tuning OpenAI models, but LangChain wouldn’t come into it at all.
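To give you a rough idea of the OpenAI side of it, here is a minimal sketch using the OpenAI Python SDK, assuming your training data is already a JSONL file of chat-format examples (the file name `training_data.jsonl` is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of the JSONL file should be a {"messages": [...]} chat example.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of the base chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",
)

# Track progress here or in the dashboard at platform.openai.com/finetune.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

When the job finishes, you get a model id starting with `ft:`, which you then use in place of the base model wherever you generate answers.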


Why not go to the fine-tuning section of your account and do the job there? Especially if you only need one model.

It’s the most straightforward solution.

https://platform.openai.com/finetune

Result generation is done using OpenAI only. The model I’m using is gpt-3.5-turbo-1106.
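Roughly how I’m generating answers now (the prompts below are just examples, not my real ones); my understanding is that after fine-tuning I would only swap the model name for the `ft:` id returned by the job:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # would become the "ft:gpt-3.5-turbo-1106:..." id after fine-tuning
    messages=[
        {"role": "system", "content": "Answer questions concisely."},   # example system prompt
        {"role": "user", "content": "What is the return policy?"},      # example question
    ],
)
print(response.choices[0].message.content)
```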