Fine tuning - REST request HTTP 400 with model attached

Hi all,

I’m at the final stage of the process, but can’t get the final completions request to work. With valid authentication, when I make a request to https://api.openai.com/v1/completions that looks like this:

{"prompt": "Valid Text some stuff", "stream": false, "model": "ft-14ueCfFQ68Zl8FDeeYgq9gco"}

I get:
StatusCode: 400, ReasonPhrase: 'Bad Request', Version: 1.1

If I take out "model" from the request, it works fine. The model appears in my list of models, looks fine, and works in the Playground.

Any ideas?
Thank you!


Hey Kevin,

Looks like your model parameter is referring to the ID of a fine-tuning run (ft-XXXXX), rather than the name of a fine-tuned model. If you used the command line interface to fine-tune the model, it should print out the model name at the end of it (ada:XXXXX, babbage:XXXXX, or curie:XXXXX). Feel free to email support@openai.com if you can’t find it.
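To make the distinction concrete, here is a rough sketch of a completions request using the model name rather than the run ID. The model name below is hypothetical (yours is printed at the end of the fine-tune run), and the commented-out curl call assumes `OPENAI_API_KEY` is set:

```shell
# Hypothetical fine-tuned model name -- use the name printed at the end of the
# fine-tune run (ada:XXXX, babbage:XXXX, or curie:XXXX), not the ft-XXXX run ID.
MODEL="curie:ft-user-xxxx-2021-07-30-06-40-45"

# Build the request body with the model *name* in the "model" field.
PAYLOAD=$(printf '{"prompt": "Valid Text some stuff", "stream": false, "model": "%s"}' "$MODEL")
printf '%s\n' "$PAYLOAD"

# Then POST it to the completions endpoint:
# curl https://api.openai.com/v1/completions \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```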


Thank you so much, Rachel!

You may want to update a few places in the docs; this was ambiguous, at least to me:
https://api.openai.com/v1/fine-tunes

-d '{"prompt": YOUR_PROMPT, "model": FINE_TUNED_MODEL}'
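For anyone else who hits this: the fine-tune objects returned by the API carry both the run ID and the model name, so one way to recover the name is to list your fine-tunes and read the `fine_tuned_model` field. A rough sketch (the response shown is illustrative, not real data):

```shell
# Requires OPENAI_API_KEY; lists your fine-tune runs:
# curl https://api.openai.com/v1/fine-tunes \
#   -H "Authorization: Bearer $OPENAI_API_KEY"
#
# Each run in the response includes the run ID *and* the model name,
# e.g. (illustrative values):
RESPONSE='{"id": "ft-14ueCfFQ68Zl8FDeeYgq9gco", "fine_tuned_model": "curie:ft-user-xxxx-2021-07-30-06-40-45"}'

# Pull out the value to use as "model" in completions requests:
MODEL=$(printf '%s' "$RESPONSE" | sed 's/.*"fine_tuned_model": "\([^"]*\)".*/\1/')
printf '%s\n' "$MODEL"
```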

I love this tip - lol 6 months and I didn’t know that existed. Thank you for everything :wink:


I thought I had this, but there is something different with these endpoints, or I am going insane :wink: Both are possibilities :wink:

A POST to here:
https://api.openai.com/v1/engines/curie/completions

Of this:

{"prompt": "Text: Lorem ipsum lots of it blah blah\n\n ", "max_tokens": 100, "temperature": 0.75, "top_p": 1.0, "presence_penalty": 0.5, "frequency_penalty": 0.5, "n": 1, "stream": false, "logprobs": 1, "stop": "\n\n"}

Works fine.

A POST to here: https://api.openai.com/v1/completions
Of the same payload results in an HTTP 400:

{"prompt": "Text:Lorem ipsum lots of it blah blah\n\n ", "max_tokens": 100, "temperature": 0.75, "top_p": 1.0, "presence_penalty": 0.5, "frequency_penalty": 0.5, "n": 1, "stream": false, "logprobs": 1, "stop": "\n\n"}
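One difference worth noting between the two endpoints: the older /v1/engines/{engine}/completions path carries the engine in the URL, while /v1/completions expects a "model" field in the JSON body instead. A minimal sketch of the second form, assuming the base curie model just for illustration:

```shell
# /v1/completions has no engine in the URL, so the body must name the model.
# (Base "curie" here is only an illustrative choice.)
PAYLOAD='{"prompt": "Text: Lorem ipsum lots of it blah blah\n\n", "max_tokens": 100, "model": "curie"}'
printf '%s\n' "$PAYLOAD"

# curl https://api.openai.com/v1/completions \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```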


Thank you for the kind words and support along the way. I must confess I’m at a bit of a loss on what to try next. This doesn’t work for me:

{"prompt": "Test test hello yo\n\n ", "max_tokens": 100, "temperature": 0.75, "top_p": 1.0, "presence_penalty": 0.5, "frequency_penalty": 0.5, "n": 1, "stream": false, "logprobs": 1, "stop": "\n\n", "model": "curie:ft-user-mmuxbljf5hokaxayweacscpj-2021-07-30-06-40-45"}

If I remove "model" and use the old endpoint instead, it works fine.


What error message are you getting? Does the model work for you on the playground?

Rachel, you can find a detailed description of the problem and repro steps here:
