Suitability of fine-tuning a GPT model


I am not sure whether, in my case, fine-tuning a GPT model is a suitable way to achieve the objectives below.

We have thousands of scripts.
Each script is of a similar nature, with different variations and ideas.

We would like to feed the content of all these scripts into a GPT model, to train it on the nature and style of the scripts we would like to have.

We hope to achieve the following with the trained model:

  1. We hope to use natural language to query the relevant content of all the past scripts fed into the model, and hopefully it can make some simple logical deductions to produce answers.

  2. We hope to have the trained model generate ideas for producing similar scripts in different styles.

Is it feasible to train a GPT model to achieve these objectives?

If yes, what format of training data should we produce to train the model?
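For context, here is my current understanding of the data format: OpenAI's fine-tuning API expects a JSONL file where each line is a JSON object with a "messages" list in the chat format. A minimal sketch of how we might convert our scripts into that format is below; the system prompt, example requests, and script excerpts are all hypothetical placeholders, not our real data.

```python
import json

# Hypothetical examples: each training record pairs a request (user message)
# with one of our existing scripts (assistant message). The real data would
# be extracted from our thousands of scripts.
examples = [
    {
        "request": "Write a short script about a reunion between old friends.",
        "script": "INT. CAFE - DAY\n\nTwo old friends spot each other across the room...",
    },
    {
        "request": "Write a short script about a missed train.",
        "script": "EXT. STATION PLATFORM - MORNING\n\nThe train doors close just as...",
    },
]

# Write one JSON object per line (JSONL), in the chat fine-tuning format:
# a system message setting the style, a user request, and the target script.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You write scripts in our house style."},
                {"role": "user", "content": ex["request"]},
                {"role": "assistant", "content": ex["script"]},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Is this roughly the right shape of training data for our two objectives, or would objective 1 (querying past script content) need a different approach?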

Your help is much appreciated.