Fine-tune model problem


What I would like to do is, for example, transform one string into another:

prompt: ABC
completion: (A(B(C)))

base model: davinci

It works well in the Playground.
I would like to train my own model so that I do not have to tell OpenAI the format in every prompt.



This is just an easy example; in my real-world problem I have too many formatting options, and I'm limited to 4,000 tokens (input and output combined).

I tried to train the model, but it always gives me output from davinci, not from my custom fine-tuned model. :confused:

It would be nice if someone could help me solve this problem.


This can be achieved by code. Why use AI?

Even better, ask ChatGPT to write code for you that enables you to achieve this.
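For reference, the toy transformation in this thread really is a one-liner in plain code. A minimal sketch (the function name `nest` is my own):

```python
def nest(s: str) -> str:
    """Wrap each character in an opening paren, then close them all:
    "ABC" -> "(A(B(C)))"
    """
    return "".join("(" + ch for ch in s) + ")" * len(s)

print(nest("ABC"))  # -> (A(B(C)))
```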

This is just an easy example to learn how this works. I'm going to use more complex transformations later.
Thanks for the reply!

OK, I found it out myself. @sps, thanks for the reply.
I just need a lot more training data, the prompt needs to end with a separator like " ->", and the completion needs to end with a suffix like " end".

Here is the JSONL for others who might want to train their own models:

{"prompt":"abc ->","completion":" (a(b(c))) end"}
{"prompt":"bcd ->","completion":" (b(c(d))) end"}
{"prompt":"cde ->","completion":" (c(d(e))) end"}
{"prompt":"def ->","completion":" (d(e(f))) end"}
{"prompt":"efg ->","completion":" (e(f(g))) end"}
{"prompt":"fgh ->","completion":" (f(g(h))) end"}
{"prompt":"ghi ->","completion":" (g(h(i))) end"}
{"prompt":"hij ->","completion":" (h(i(j))) end"}
{"prompt":"ijk ->","completion":" (i(j(k))) end"}
{"prompt":"jkl ->","completion":" (j(k(l))) end"}
{"prompt":"klm ->","completion":" (k(l(m))) end"}
{"prompt":"lmn ->","completion":" (l(m(n))) end"}
{"prompt":"mno ->","completion":" (m(n(o))) end"}
{"prompt":"nop ->","completion":" (n(o(p))) end"}
{"prompt":"opq ->","completion":" (o(p(q))) end"}
{"prompt":"pqr ->","completion":" (p(q(r))) end"}
{"prompt":"qrs ->","completion":" (q(r(s))) end"}
{"prompt":"rst ->","completion":" (r(s(t))) end"}
{"prompt":"stu ->","completion":" (s(t(u))) end"}
{"prompt":"tuv ->","completion":" (t(u(v))) end"}
{"prompt":"uvw ->","completion":" (u(v(w))) end"}
{"prompt":"vwx ->","completion":" (v(w(x))) end"}
{"prompt":"wxy ->","completion":" (w(x(y))) end"}
{"prompt":"xyz ->","completion":" (x(y(z))) end"}

As I wrote, the " ->" after the prompt and the " end" after the completion are necessary to make this little example work. And of course, we need a lot of, or at least sufficient, training data.
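If you would rather generate this training file than type it out, something like the following works (the file name `train.jsonl` and the helper name `make_example` are my own):

```python
import json
import string

def make_example(chars: str) -> dict:
    # "abc" -> {"prompt": "abc ->", "completion": " (a(b(c))) end"}
    nested = "".join("(" + c for c in chars) + ")" * len(chars)
    return {"prompt": f"{chars} ->", "completion": f" {nested} end"}

# sliding windows of three letters: abc, bcd, ..., xyz
rows = [make_example(string.ascii_lowercase[i:i + 3]) for i in range(24)]

with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

Using `json.dumps` also guarantees plain ASCII double quotes, which the fine-tuning endpoint expects; hand-typed "smart quotes" will make the file invalid JSONL.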

I’m having the same issue as you originally were.

My simple example works great in Playground as a prompt, but when I try to turn the prompt into a JSONL file and fine-tune a model based on it, it doesn’t work. It just returns output from the base model.

Were you able to get this to work? If so, for your simple example, how many prompts/completions did you need?


Actually, there is an OpenAI CLI tool (`openai tools fine_tunes.prepare_data`) to prepare your data, and it adds " ->" and "\n" at the end of the prompt and the completion respectively. Yet it did not solve the problem for me; I still get answers from the base model.
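Two things worth double-checking on the inference side: that you pass the full fine-tune ID (something like `davinci:ft-...`, reported when the job finishes) as the model rather than plain `davinci`, and that your request uses the same separator and stop sequence the model was trained on. A minimal sketch of assembling the request parameters, assuming the legacy Completions endpoint (the helper name `build_request` and the example model ID are my own):

```python
def build_request(fine_tuned_model: str, raw_prompt: str) -> dict:
    """Kwargs for the legacy openai.Completion.create call.

    fine_tuned_model must be the full fine-tune ID (e.g. "davinci:ft-..."),
    not the base model name -- passing "davinci" silently queries the base model.
    """
    return {
        "model": fine_tuned_model,
        "prompt": raw_prompt + " ->",  # same separator as the training prompts
        "stop": [" end"],              # suffix the completions were trained to emit
        "temperature": 0,
        "max_tokens": 60,
    }

# usage (hypothetical model ID):
# openai.Completion.create(**build_request("davinci:ft-personal-2023-01-01", "abc"))
```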