Fine-tuning of GPT Models

I have fine-tuned the davinci model with additional prompt/completion pairs. How do I now find out whether a given response was generated using the additional training of the fine-tuned model? Is there some metadata that I can return?


You need to select your model when making the completion request. Each fine-tune results in a new model with its own ID.
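As a minimal sketch with the legacy `openai` Python SDK: the model name below is a placeholder, not a real ID — substitute the `fine_tuned_model` value your own job produced.

```python
# Sketch of a completion request against a fine-tuned model, using the
# legacy `openai` Python SDK. The model name is hypothetical -- replace
# it with the `fine_tuned_model` value from your own fine-tune job.
request = {
    "model": "davinci:ft-your-org-2023-01-01-00-00-00",  # placeholder ID
    "prompt": "Tell me about widgets ->",
    "max_tokens": 64,
}

# The actual call (requires an API key) would be:
#   import openai
#   response = openai.Completion.create(**request)
#
# Because `model` names the fine-tuned model explicitly, every response
# to this request is generated by that model and no other.
print(request["model"])
```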

Hi @rajibdeb76

I’d recommend making a call to the list fine-tunes endpoint. It’ll list all your fine-tune jobs.

You’ll receive a response like:

  {
    "object": "list",
    "data": [
      {
        "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
        "object": "fine-tune",
        "model": "curie",
        "created_at": 1614807352,
        "fine_tuned_model": null,
        "hyperparams": { ... },
        "organization_id": "org-...",
        "result_files": [],
        "status": "pending",
        "validation_files": [],
        "training_files": [ { ... } ],
        "updated_at": 1614807352
      },
      { ... },
      { ... }
    ]
  }

If the status is succeeded for a given job, the response will include a fine_tuned_model name for that job, which you can use to call the completions endpoint.
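As a sketch of that step — assuming a response shaped like the listing above (the job IDs and model name here are hypothetical) — picking out the usable model names is just a filter on status:

```python
# Sketch: given a parsed response from the list fine-tunes endpoint,
# collect the model names of the jobs that finished. The sample data
# below is hypothetical -- a real response comes from the API.
jobs = {
    "object": "list",
    "data": [
        {"id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F", "status": "pending",
         "fine_tuned_model": None},       # still training: no model name yet
        {"id": "ft-hypothetical123", "status": "succeeded",
         "fine_tuned_model": "curie:ft-acme-2021-03-03-21-44-20"},
    ],
}

# Only succeeded jobs carry a usable fine_tuned_model name.
ready = [job["fine_tuned_model"]
         for job in jobs["data"]
         if job["status"] == "succeeded"]
print(ready)
```

Any name in `ready` can then be passed as the `model` parameter of a completions request.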

You should definitely take a look at the docs.


Hi, I know about the list. I am not asking a question about the list function. I am trying to find out, once I create a fine-tuned model, how I know that the response is from my fine-tuned model and not from the residual knowledge of the initial model.


Because every time you call the completions endpoint you specify the name of the model.

Since you are specifying the name of your fine-tuned model, the API uses that model.

There is no “residual” model. There is only the name of the model you send to the API, which is a required param.
