How can I improve the performance of a fine-tuned model?

I’m working with text narratives related to mining accidents, assigning an accident classification and an accident type to each narrative.

Example → "Narrative: he is a trained miner operator with 3 years experience. he was wearing all of his personal protective equipment. as he was moving the boom of the miner, a piece of roof material measuring 2 1/2 feet by 1 1/2 feet by 6 inches thick fell out between two straps above his head and struck his hardhat and neck.
Accident Classification: fall of roof or back
Accident Type: struck by falling object"
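For context, a record like the example above would typically be serialized as a JSONL prompt/completion pair. The field names follow the standard fine-tuning data format; the `"\n\n###\n\n"` separator and `" END"` stop token are conventions from OpenAI's fine-tuning guide, not details from this post:

```python
import json

# One training example in the prompt/completion JSONL format.
# The separator and stop token below are assumptions (common
# fine-tuning conventions), not part of the original setup.
SEPARATOR = "\n\n###\n\n"   # marks the end of the prompt
STOP = " END"               # marks the end of the completion

narrative = (
    "he is a trained miner operator with 3 years experience. "
    "as he was moving the boom of the miner, a piece of roof material "
    "fell out between two straps above his head and struck his hardhat and neck."
)

record = {
    "prompt": narrative + SEPARATOR,
    "completion": (
        " Accident Classification: fall of roof or back\n"
        "Accident Type: struck by falling object" + STOP
    ),
}

# Each line of the training file is one such JSON object.
print(json.dumps(record))
```

Using a consistent separator and stop sequence (and passing the stop sequence at inference time) is one common way to make the model's output format more reliable.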

  1. I used the 'Narrative' as the prompt and 'Accident Classification: \n Accident Type: ' as the completion, and fine-tuned a Curie model.
  2. I used 40,000 narratives for training and tested the fine-tuned model on another 40,000 narratives.
  3. Each narrative is assigned a classification and an accident type. There are 42 unique accident types and 28 unique classifications.
  4. During testing, I gave only the narrative as the prompt and asked the model to assign the correct Accident Classification and Accident Type.
  5. Testing gave an accuracy of 76% for Classification and 58% for Accident Type, for an average of 67%. (For accuracy, I simply compared the Classification and Accident Type columns of the test set against the fine-tuned model's predictions.)
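The exact-match evaluation described in step 5 can be sketched as below, with a tiny made-up sample standing in for the real 40,000-row test set:

```python
# Exact-match accuracy per column, averaged across the two tasks.
# The example rows are invented for illustration only.
predictions = [
    ("fall of roof or back", "struck by falling object"),
    ("machinery", "caught in machinery"),
    ("slip or fall of person", "fall to the walkway"),
]
ground_truth = [
    ("fall of roof or back", "struck by falling object"),
    ("machinery", "struck by moving object"),       # accident type wrong
    ("handling material", "fall to the walkway"),   # classification wrong
]

def exact_match_accuracy(pred, gold, col):
    """Fraction of rows where column `col` matches after normalization."""
    matches = sum(
        p[col].strip().lower() == g[col].strip().lower()
        for p, g in zip(pred, gold)
    )
    return matches / len(gold)

cls_acc = exact_match_accuracy(predictions, ground_truth, 0)
typ_acc = exact_match_accuracy(predictions, ground_truth, 1)
print(f"classification accuracy: {cls_acc:.2f}")
print(f"accident-type accuracy:  {typ_acc:.2f}")
print(f"average:                 {(cls_acc + typ_acc) / 2:.2f}")
```

Normalizing whitespace and case before comparing (as above) matters for this kind of scoring, since generated completions often differ from the labels only by a stray space or capitalization.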

What mistakes did I make in this process?
Are there better steps I should follow?
How can I improve the fine-tuned model's performance?

Could anybody please help me with these questions? Thanks in advance!