
rex.vanhorn
My name is Rex VanHorn, and I am in the final semester(s) of my AI degree at the University of Georgia. My master’s thesis focuses on the opportunities and limitations of fine-tuning GPT-3. Specifically, I want to find out whether, and to what degree, we can fine-tune GPT-3 to provide answers that trend in the direction of the fine-tuning data. Taken to the extreme, can GPT-3 generate text that substantially mirrors the style and content of an author?
For example, if you fine-tuned GPT-3 on the complete works of Shakespeare and then asked questions in the domain of his works, to what degree would the answers gravitate toward Shakespeare’s (theoretical) answers? Moreover, would it be possible to fine-tune GPT-3 so that its responses would be semantically (and factually) similar to the kinds of responses we would expect from Shakespeare if we could ask him the same questions now?
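For anyone curious what such an experiment might look like in practice, here is a minimal sketch using the legacy OpenAI Python SDK (pre-1.0) fine-tuning endpoints that were offered for GPT-3 base models. The file name, prompt format, and hyperparameters are illustrative assumptions on my part, not my actual thesis setup.

```python
# Sketch: fine-tuning a GPT-3 base model on prompt/completion pairs
# built from a corpus (e.g., Shakespeare's works). Assumes the legacy
# openai Python SDK (<1.0); "shakespeare.jsonl" is a hypothetical file.
import openai

openai.api_key = "sk-..."  # your API key

# Each line of shakespeare.jsonl is a JSON object such as:
# {"prompt": "What is love?\n\n###\n\n", "completion": " Love is ... END"}
upload = openai.File.create(
    file=open("shakespeare.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune job against a GPT-3 base model (e.g., davinci).
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
    n_epochs=4,  # illustrative hyperparameter
)
print(job["id"])

# After the job completes, query the resulting model and compare its
# answers with the base model's to gauge how far responses drift
# toward the style and content of the fine-tuning corpus.
response = openai.Completion.create(
    model=job["fine_tuned_model"],  # populated once the job succeeds
    prompt="What is love?\n\n###\n\n",
    max_tokens=100,
    stop=[" END"],
)
print(response["choices"][0]["text"])
```

The interesting measurement, for my purposes, is the comparison step at the end: how much the fine-tuned model's answers diverge from the base model's, and in what direction.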