Fine-tuned gpt-4o mini model giving partial output

I have fine-tuned a gpt-4o mini model with the system prompt below:
system_message = """
You are a Textual Data Generator for Form 10-Q filings. Your role is to generate all main textual sections of a Form 10-Q document, complete with necessary key-value pairs, based on the cover page and financial data provided.

When generating responses, you must:

  1. Prioritize Cover Page Data: Carefully analyze the cover page data to identify key company information, reporting period, and specific financial indicators. Use this information as the primary guide for generating appropriate textual content.
  2. Adapt to Company-Specific Patterns: Identify and replicate specific language, structure, and common key value pairs used by each company in past filings.
  3. It is CRUCIAL that you generate EVERY section and item of the Form 10-Q, including ALL items in Part I and Part II. Do not stop until you have completed the entire structure. Your response must include all of the following sections:
    {
      "textual_data": {
          "Part I — Financial Information": {
              "Item 1. Financial Statements": {...},
              "Item 2. Management's Discussion and Analysis of Financial Condition and Results of Operations": {...},
              "Item 3. Quantitative and Qualitative Disclosures About Market Risk": {...},
              "Item 4. Controls and Procedures": {...}
          },
          "Part II — Other Information": {
              "Item 1. Legal Proceedings": {...},
              "Item 1A. Risk Factors": {...},
              "Item 2. Unregistered Sales of Equity Securities and Use of Proceeds": {...},
              "Item 3. Defaults Upon Senior Securities": {...},
              "Item 4. Mine Safety Disclosures": {...},
              "Item 5. Other Information": {...},
              "Item 6. Exhibits": {...}
          }
      }
    }
    Generate detailed key-value pairs for each Item based on the provided cover page and financial data. Ensure your response is comprehensive and resembles a complete Form 10-Q filing.
    

"""

When I set max_tokens = 16000 for my fine-tuned model, I'm getting only a partial output of about 2k tokens.
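For context, a minimal sketch of the inference call, assuming the OpenAI Python SDK; the model ID, API-key handling, and user input are placeholders:

```python
# Sketch of the inference call (model ID and inputs are illustrative).
# Note: the same system_message used during fine-tuning is sent again here.
import os

system_message = "You are a Textual Data Generator for Form 10-Q filings. ..."  # same prompt as in training

cover_page_data = '{"company": "Example Corp", "period": "Q2 2024"}'  # illustrative user input

request = {
    "model": "ft:gpt-4o-mini-2024-07-18:my-org::abc123",  # placeholder fine-tuned model ID
    "messages": [
        {"role": "system", "content": system_message},
        {"role": "user", "content": cover_page_data},
    ],
    "max_tokens": 16000,
}

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is configured
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(**request)
    print(resp.choices[0].message.content)
```

Keep in mind that max_tokens is only an upper bound; the model stops as soon as it emits a stop token, so raising it does not by itself force longer completions.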

It is fairly useless to fine-tune on instructions that you don’t provide again at inference. Giving instructions is not the point of fine-tuning.

You are training the AI by example inputs and example outputs.

Did you train on examples of user input where the AI responds with the correct 16000 tokens of output?
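Each chat fine-tuning example is a single JSONL line containing system, user, and assistant messages, where the assistant message is the full-length target output. A sketch with illustrative contents:

```python
# Sketch of one chat fine-tuning training example (contents illustrative).
# The assistant message should contain the complete, full-length 10-Q output
# you want the model to reproduce -- if the training outputs were short, the
# fine-tuned model learns to stop early regardless of max_tokens at inference.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a Textual Data Generator for Form 10-Q filings. ..."},
        {"role": "user", "content": '{"cover_page": {"company": "Example Corp", "period": "Q2 2024"}}'},
        {"role": "assistant", "content": '{"textual_data": {"Part I — Financial Information": {...}}}'},
    ]
}

# Each example is serialized as one line of the .jsonl training file.
line = json.dumps(example)
```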

Yes, I did train the gpt-4o mini model with 200 input/output example pairs.