Prompting and Fine-tuning Together


I am trying to use prompting and fine-tuning together on an existing Assistant. The Assistant was created via OpenAI's UI, and the JSONL files required to train a particular model were also uploaded via the UI. How can I check whether the trained model is being used by this Assistant? Is it enough to just set the Assistant to use the corresponding base model, i.e. gpt-3.5-turbo-1106?

On this occasion I have not created the Assistant programmatically but via the UI on the OpenAI platform, which is why I am asking.

Thank you


Only fine-tuned models based on gpt-3.5-turbo-0125 are supported in Assistants.

The model name you wish to use would start with ft:, and of course you could have several fine-tunes with different behaviors, requiring you to choose the correct one when you create or modify an Assistant ID.
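One way to see which fine-tune names are available to your key is to list your models (GET https://api.openai.com/v1/models) and keep the IDs that start with ft:. A minimal sketch of the filtering step; the sample IDs below are made up, not real models:

```python
import json

def fine_tune_model_ids(models_response: dict) -> list:
    """Keep only model IDs that belong to fine-tunes (they start with 'ft:')."""
    return [m["id"] for m in models_response.get("data", [])
            if m["id"].startswith("ft:")]

# Shape of a response from GET https://api.openai.com/v1/models
# (the fine-tune ID below is a made-up placeholder)
sample = {
    "object": "list",
    "data": [
        {"id": "gpt-3.5-turbo-0125", "object": "model"},
        {"id": "ft:gpt-3.5-turbo-0125:my-org::abc123", "object": "model"},
    ],
}
print(fine_tune_model_ids(sample))
```

Any ID that survives the filter can then be tried as the "model" field when creating or modifying an Assistant.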

Attempt it, and this is what you get:

{
   "error": {
      "message": "The requested model 'ft:gpt-3.5-turbo-1106:orgorg::83852n3r' cannot be used with the Assistants API.",
      "type": "invalid_request_error",
      "param": "model",
      "code": "unsupported_model"
   }
}
Thanks for letting me know about the model version that is currently supported.

How can I check, via the UI, that an existing Assistant is using this model?
Thanks again.

I expect the fine-tuned model name would appear in the drop-down, just as it does in the chat completions playground.

Unfortunately, this is not the case.

You can create your Assistants via the API then, as the API is ultimately for making API calls, not chatting in a playground. Python:

import os, json
import urllib3

apikey = os.environ.get("OPENAI_API_KEY")
headers = {
    "OpenAI-Beta": "assistants=v2",
    "Authorization": f"Bearer {apikey}",
    "Content-Type": "application/json",
}
assistants_base_url = "https://api.openai.com/v1/assistants"

body_dict_create = {
    "name": "Assistant 0125 xx",
    "description": "Demonstration of fine tuning in Assistants",
    "model": "gpt-3.5-turbo-0125",  # your fine-tune model name
    "instructions": "You are an AI assistant",
    "tools": [
        {"type": "code_interpreter"}
    ],
    "top_p": 0.8,
    "temperature": 0.8,
    "tool_resources": {
        "code_interpreter": {
            "file_ids": []
        }
    },
    "metadata": {},
    "response_format": "auto",
}

try:
    http = urllib3.PoolManager()
    encoded_body = json.dumps(body_dict_create).encode('utf-8')
    response = http.request('POST', assistants_base_url,
                            headers=headers, body=encoded_body)
    response = json.loads(response.data.decode('utf-8'))
    print(json.dumps(response, indent=3))
    assistant_id = response['id']
except Exception as e:
    print(f"Request failed: {e}")

Thanks a lot for your detailed reply.

The above will create a new Assistant. What if we would like to edit an existing Assistant ID (created via the UI) and change the model it is using to a specific fine-tuned model?

You can include the playground's handling of fine-tuned models in the bug report that you send through your account's Help -> Messages; API -> Feedback.
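As for switching the model of an existing Assistant: the API has a modify endpoint (POST https://api.openai.com/v1/assistants/{assistant_id}) that accepts the same fields as create, so you only need to send the field you want to change. A sketch using urllib3 as in the earlier example; the Assistant ID and fine-tune name shown are placeholders, not real objects:

```python
import json, os
import urllib3

def build_modify_body(new_model: str) -> bytes:
    """JSON body for the Modify Assistant endpoint: only the fields to change."""
    return json.dumps({"model": new_model}).encode("utf-8")

def modify_assistant(assistant_id: str, new_model: str) -> dict:
    """Point an existing Assistant at a different (e.g. fine-tuned) model."""
    http = urllib3.PoolManager()
    resp = http.request(
        "POST",
        f"https://api.openai.com/v1/assistants/{assistant_id}",
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "OpenAI-Beta": "assistants=v2",
            "Content-Type": "application/json",
        },
        body=build_modify_body(new_model),
    )
    return json.loads(resp.data.decode("utf-8"))

# Placeholder IDs -- substitute your own:
# modify_assistant("asst_abc123", "ft:gpt-3.5-turbo-0125:my-org::abc123")
```

The same call works for any other single field (instructions, tools, and so on); fields you omit are left unchanged.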

When one selects to fine-tune a model, say gpt-3.5-turbo-0125, by uploading the files from the UI, the corresponding model does appear in the drop-down list when editing the Assistant. Sorry for not confirming earlier.

However, in my case, running the Assistant gives an error.

I think my training data is structured well. What could be the cause, and how can I debug this?