GPT 3.5-Turbo Fine-Tuned Token Limit

When creating a fine-tuned model using a base of gpt-3.5-turbo, the resulting tuned model appears to have a maximum token limit of 2048 (according to the playground options, and the fact that no results are returned when prompting with more tokens than this).

Meanwhile, the fine-tuning guide says that the token limit for training is 4096 (OpenAI Platform). So it would seem that model training allows a larger context than usage of the tuned model does.

Is this expected behavior? Shouldn’t the tuned model use the same token limit as its base model (i.e. 4096 in this case)?

The playground slider user interface is the problem there. The model itself can go up to the gpt-3.5-turbo limit. I can set it to reserve 2048 for the maximum response (max_tokens) and keep asking questions of my fine-tune just fine.
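In other words, max_tokens only reserves part of the context window for the reply; the rest is still available for the prompt. A quick sketch of that budget arithmetic (the numbers are the ones from this thread):

```javascript
const CONTEXT_WINDOW = 4096; // gpt-3.5-turbo context window (prompt + reply)
const maxTokens = 2048;      // reserved for the model's reply via max_tokens

// Tokens left over for the prompt itself
const promptBudget = CONTEXT_WINDOW - maxTokens;
console.log(promptBudget); // 2048
```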

I think you can manually enter 4096 in the numeric text box next to the slider.

You can if you edit the JavaScript with a script override for the onChange event handler of the input object, so it doesn’t rewrite the value with the maximums…

Character position 519469 of the original JS, unminified:

function(e) {
  var r = e.target.value;  // raw text from the numeric input
  var i = parseFloat(r);
  // t and n are the input's min/max bounds; c() commits the parsed value
  !Number.isNaN(i) && i >= t && i <= n && c(i);
}

Or just write some API code…
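As a minimal sketch of that route in Node (18+, where fetch is built in): the model id below is a hypothetical placeholder for your own fine-tune, and max_tokens caps only the reply, so prompt plus reply can still use the full context window.

```javascript
// Placeholder fine-tune id; substitute your own "ft:gpt-3.5-turbo..." model.
const MODEL = "ft:gpt-3.5-turbo-0613:my-org::abc123";

// Build the chat-completions payload; max_tokens is the reply budget only.
function buildRequest(prompt, maxTokens = 2048) {
  return {
    model: MODEL,
    messages: [{ role: "user", content: prompt }],
    max_tokens: maxTokens,
  };
}

// Send the request to the chat completions endpoint and return the reply text.
async function ask(prompt, maxTokens = 2048) {
  const resp = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildRequest(prompt, maxTokens)),
  });
  const data = await resp.json();
  return data.choices[0].message.content;
}
```

No slider in the way here: you set max_tokens to whatever the model actually supports.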