Fine-tuned model API returns 404

Hi all!
This is my first attempt at working with fine-tuned models. Please help me.
After training the model in the Playground, the model works and responds. But when I make API requests, I get an error. Can you tell me what I am doing wrong? A simple request with 'gpt-3.5-turbo' works fine:

```js
const response = await this.openai.createChatCompletion({
  model: 'gpt-3.5-turbo', // Specify the model to use
  messages,               // Provide the array of messages
});

// Return the generated message from the API response
return response.data.choices[0].message;
```

But as soon as I insert the name of the fine-tuned model, I get a 404 error:
```js
const response = await this.openai.createChatCompletion({
  model: 'ft-6I8RjswBp8gcP1jC26QPC2Vk', // Specify the fine-tuned model to use
  messages,                             // Provide the array of messages
});

// Return the generated message from the API response
return response.data.choices[0].message;
```

Hi @i.moseich, I believe you're requesting the model via the Chat API. However, fine-tuned models should be used with the Completions API (see the OpenAI API reference).
Is this the case?
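
If so, something along these lines should get you going. This is just a rough sketch (untested), assuming the same v3 openai-node client from your snippet; the model name and prompt are placeholders:

```js
// Rough sketch (untested): call the fine-tuned model via the Completions
// endpoint instead of Chat Completions, using the same v3 openai-node client.
const response = await this.openai.createCompletion({
  model: 'your-fine-tuned-model-name', // placeholder: the fine_tuned_model name
  prompt: 'Your prompt here',          // Completions take a prompt string, not messages
  max_tokens: 256,
});

// Completions return plain text rather than a message object
return response.data.choices[0].text;
```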


Welcome to the OpenAI community, @i.moseich

There are a couple of discrepancies in your understanding of fine-tuned models:

  1. As of now, fine-tuned models can only be accessed through the Completions endpoint, not the Chat Completions endpoint.
  2. Use the `fine_tuned_model` name, not the `id`, to consume your fine-tuned model.

e.g.

```
"fine_tuned_model": "curie:ft-acmeco-2021-03-03-21-44-20"
"id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F"
```

Read the API reference for fine-tunes to learn more.

You can use the retrieve fine-tune request with the fine-tune job `id` to get detailed information about the job, where you'll find the corresponding `fine_tuned_model`.
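
For example, something like this (a rough sketch, not tested, using the v3 Node SDK from your snippet and the job id from your question):

```js
// Rough sketch (untested): look up the fine_tuned_model name
// from the fine-tune job id with the v3 openai-node client.
const fineTune = await this.openai.retrieveFineTune('ft-6I8RjswBp8gcP1jC26QPC2Vk');

// This is the model name to pass to createCompletion,
// e.g. "curie:ft-acmeco-2021-03-03-21-44-20"
console.log(fineTune.data.fine_tuned_model);
```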


Hi @Enrico, you're right. I changed the method to `createCompletion` and everything works now. Thank you!


Hi @sps! I did not expect that the model `id` is not what goes in the request; the model name to use is exactly what is displayed in the Playground. Everything worked out! Thank you!