Davinci fine-tune model cannot return results, error code 500


and the Playground reported an internal error.
Is anyone else seeing this too?

Updated GPT-3 models

In July, we announced that the original GPT-3 base models (ada, babbage, curie, and davinci) would be turned off on January 4th, 2024. Today, we are making babbage-002 and davinci-002 available as replacements for these models, either as base or fine-tuned models. Customers can access those models by querying the Completions API.

These models can be fine-tuned with our new API endpoint /v1/fine_tuning/jobs. This new endpoint offers pagination and more extensibility to support the future evolution of the fine-tuning API. Transitioning from /v1/fine-tunes to the updated endpoint is straightforward and more details can be found in our new fine-tuning guide. This deprecates the old /v1/fine-tunes endpoint, which will be turned off on January 4th, 2024.
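
In the current Python SDK the new endpoint is exposed as client.fine_tuning.jobs; here is a rough sketch of creating and listing jobs (the training file ID is a placeholder, not a real file):

from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

# Start a fine-tune of davinci-002 on the new /v1/fine_tuning/jobs endpoint.
# "file-abc123" is a placeholder for a training file you have already uploaded.
job = client.fine_tuning.jobs.create(
    model="davinci-002",
    training_file="file-abc123",
)
print(job.id, job.status)

# The new endpoint supports pagination when listing jobs.
for j in client.fine_tuning.jobs.list(limit=10):
    print(j.id, j.status, j.fine_tuned_model)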


I didn’t understand this announcement to mean that fine-tunes of these models would be going away as well. Yikes!

My fine-tunes based on davinci-002 are also not loading, which makes me suspect something is wrong with how fine-tuned models are loading at the moment (it’s happened before).


Yeah, it could be. Worth reaching out to help.openai.com if you haven’t yet.

I believe the latest is gpt-3.5-turbo-instruct…


None of my davinci-002 fine-tunes are loading currently. For that matter, neither are my fine-tunes of the earlier models.

I’m assuming this is some kind of temporary outage. That said, I understand that the earlier davinci has been deprecated, but does that mean we won’t be able to call our older fine-tunes in the API?

thanks, Michael


Merged your thread here to keep things tidy. Hope you don’t mind.

OK, thanks for your reply. I have converted davinci to davinci-002 and it was successful :heartbeat:


It’s really strange: davinci-002 is available in the Playground, but still none of my davinci-002 fine-tunes are. I’ve had bad luck getting any reply from “help.”

Fine-tunes based on the original GPT-3 models should be considered gone for good. The models have been shut off. The models endpoint still lists ada and curie, but calls to them fail with a 404 model_not_found error.
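
You can check this yourself; a quick sketch (whether ada still appears in your listing may vary):

from openai import OpenAI, NotFoundError

client = OpenAI()

# The shut-off base model may still show up in the listing...
print(any(m.id == "ada" for m in client.models.list()))

# ...but actually calling it now fails with a 404 model_not_found error.
try:
    client.completions.create(model="ada", prompt="Hello", max_tokens=5)
except NotFoundError as e:
    print(e)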

The error in the initial screenshot is a 500, usually an internal server error. This can happen when the API fails to parse the AI output.

A reproducible way to get a 500 is to stimulate a function call with max_tokens=5 or so, so the output gets cut off mid-call. A bad fine-tune whose prompt gets the model to emit a malformed function-call trigger sequence could also cause this, as could a function call emitted when no function specification exists in the API request.
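
For illustration, a call along these lines can truncate the model in the middle of a function call (a sketch; the get_weather tool is invented for the example, and whether you actually get a 500 back can vary):

from openai import OpenAI

client = OpenAI()

# Ask for a function call but cap the output at 5 tokens, so the
# model gets cut off mid-call.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    max_tokens=5,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
print(response.choices[0].finish_reason)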

Here’s where I’m confused—is it not still possible currently to fine-tune davinci-002? If so, then why would earlier fine-tunes of 002 be gone?

They should not be gone. They should still be available as models named ft:davinci-002:yourorg:1234567

The Playground may not be the right place to expect to “play”; other working models have been omitted from it by OpenAI previously.

See if they are listed in the models endpoint results:

from openai import OpenAI

starts_with = "ft:"  # fine-tuned model IDs are prefixed with "ft:"

client = OpenAI()  # uses OPENAI_API_KEY from environment
model_obj = client.models.list()  # API call to the models endpoint

# Pull out the model IDs, sort them, and keep only the fine-tunes
model_data = model_obj.model_dump().get('data', [])
model_list = sorted(model['id'] for model in model_data)
fine_tunes = [model for model in model_list
              if model.startswith(starts_with)]

print("\n".join(fine_tunes))
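
If a fine-tune shows up there, you can also try calling it directly through the Completions API and see what error comes back; a minimal sketch, reusing the placeholder model name from above:

from openai import OpenAI

client = OpenAI()

try:
    completion = client.completions.create(
        model="ft:davinci-002:yourorg:1234567",  # substitute one of your listed fine-tunes
        prompt="Say hello:",
        max_tokens=20,
    )
    print(completion.choices[0].text)
except Exception as e:
    print(f"API error: {e}")  # a 404 versus a 500 here narrows down the problem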

Thanks for your reply. Yes, the fine-tunes still show up in the list, but they no longer function. I think it’s because even the davinci-002 fine-tunes were done with the earlier fine-tuning framework.

Davinci-002 fine-tunes were only possible with the new fine-tuning endpoint and should still be accessible.


Thank you for your reply.

That’s interesting. Not sure how to square that with my experience so far. I have fine-tunes that I made in December using davinci. davinci-002 was the only davinci available then, yes? At least in the Playground, when I select those fine-tunes from the menu, they do not work (though they did until this week). I haven’t had a chance yet to call them from the API and see what kind of error they give.