Can GPT-3.5 be trained on prompt and completion pairs that have longer texts?

I have created a database of summaries of some articles. These summaries are written by me, in my own writing style. Is there any way I can train the GPT-3.5 model to imitate that style by using the original articles as the ‘prompt’ and marking the summaries I’ve written as the ‘completion’?

Something to note: these articles and summaries are really long. I just haven’t seen any training being done with longer texts.
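For reference, the legacy fine-tuning endpoint expects training data as JSONL, one prompt/completion pair per line. A minimal sketch of preparing such a file with only the Python standard library (the article texts, file name, and the `\n\n###\n\n` / `END` separators below are placeholders; the separator and leading space in the completion follow OpenAI's data-preparation guidelines):

```python
import json

# Hypothetical article/summary pairs; replace with your own data.
pairs = [
    {"prompt": "Full text of article one...\n\n###\n\n",
     "completion": " Your hand-written summary of article one. END"},
    {"prompt": "Full text of article two...\n\n###\n\n",
     "completion": " Your hand-written summary of article two. END"},
]

# Write one JSON object per line, as the fine-tuning endpoint expects.
with open("summaries.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

You can then validate and upload the file with the `openai` CLI, which also reports formatting problems before you start a job.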

At the moment, only the base GPT-3 series of models can be used for fine-tuning. Maybe we will be able to use other models like gpt-3.5-turbo in the future; we can only wait for OpenAI.

Is it possible to fine-tune on longer texts with gpt-3?
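One practical constraint to check: base GPT-3 models such as davinci have a context window of roughly 2,049 tokens shared between prompt and completion, so very long article/summary pairs won't fit. A rough stdlib-only sketch of pre-screening your pairs (the 4-characters-per-token ratio is only a heuristic for English text; the `openai` CLI's data-preparation tool gives exact token counts):

```python
def approx_tokens(text: str) -> int:
    # Heuristic: English text averages roughly 4 characters per token.
    return len(text) // 4

def fits_context(prompt: str, completion: str, limit: int = 2049) -> bool:
    # Prompt and completion share the model's context window,
    # so their combined length must stay under the limit.
    return approx_tokens(prompt) + approx_tokens(completion) <= limit

print(fits_context("A short article.", " A short summary."))  # True
```

Pairs that fail this check would need to be truncated, split into sections, or summarized in stages before fine-tuning.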

It depends on the model itself. I am facing the same problem now. :melting_face: