I recently received access to GPT-3 and have been experimenting with different prompts for generating fiction. My goal is to generate very short (500-1,000 word) stories, or, if that doesn't work, longer stories that I can split into separate short "chapters".
The actual generation of fiction is working amazingly well: I have created prompts that produce some genuinely excellent prose. Some of it is so scarily close to real writing, imitating particular styles, writers, and genre conventions, that at first I dismissed it as existing work being regurgitated by the model (I had to do some fairly extensive digging to convince myself that GPT-3 wasn't just repeating pre-written text, haha).
But there are several issues I am running into:
I find it very difficult to generate fiction that ends. What GPT-3 generates would be perfect in a long book with many chapters, but I just haven't been able to structure it in a way that can be cut off to "end" a story; the writing just continues and continues. I have tried crafting prompt data to force an ending using a bunch of different tricks, but no luck.
I tried using the Instruct models to explicitly give it a limit, but no luck there either. I even tried creating a sort of story structure, breaking things down into specific sections with explicit storytelling functions to build a story arc, but that didn't work very well.
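To give a concrete idea of the kind of trick I mean, here is roughly the shape of one of them: a few-shot prompt built from complete miniature stories that each finish with an explicit end marker, with that marker used as the stop sequence so the API cuts generation off once the model writes it. This is a minimal sketch, not my exact prompts; the engine name and the `openai.Completion.create` call with the older `engine=` parameter are just what I have been using, and the example stories are placeholders.

```python
import openai

openai.api_key = "YOUR_API_KEY"

END_MARKER = "\n***THE END***\n"

# Two or three complete miniature stories, each finishing with the end marker,
# are meant to teach the model that stories in this prompt are short and conclude.
few_shot_stories = [
    "Title: The Lighthouse Keeper\n...a complete ~300-word story...\n" + END_MARKER,
    "Title: Rust and Rain\n...another complete ~300-word story...\n" + END_MARKER,
]

new_story_prompt = "Title: The Cartographer's Daughter\n"

prompt = "".join(few_shot_stories) + new_story_prompt

response = openai.Completion.create(
    engine="davinci",               # or one of the instruct engines
    prompt=prompt,
    max_tokens=900,                 # hard ceiling on story length
    temperature=0.8,
    stop=["***THE END***"],         # cut generation once the model "ends" the story
)

print(new_story_prompt + response.choices[0].text)
```

In practice the model often hits `max_tokens` before it ever writes the end marker, which is the "continues and continues" behaviour I am describing.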
Ah well, I decided that if I can't limit a story, why not just keep generating interesting fiction and break it into chapters? The trouble I ran into there was the 2,500-token limit. I tried several workarounds: deleting a paragraph from the beginning of the story, generating the next paragraph, and so on. But the story seemed to slowly degenerate over time, as more and more of the established "history" was lost along with those deleted paragraphs.
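For concreteness, the rolling-window workaround looks roughly like this. It is a minimal sketch: the token count is a crude character-based estimate, and the engine name and parameters are just what I have been using with the older `Completion.create` call.

```python
import openai

openai.api_key = "YOUR_API_KEY"

PROMPT_BUDGET = 1800    # rough token budget reserved for the prompt
RESPONSE_TOKENS = 250   # tokens reserved for each newly generated chunk


def rough_tokens(text):
    # Very crude estimate: roughly 4 characters per token for English prose.
    return len(text) // 4


def continue_story(paragraphs):
    # Drop paragraphs from the beginning until the prompt fits the budget.
    while len(paragraphs) > 1 and rough_tokens("\n\n".join(paragraphs)) > PROMPT_BUDGET:
        paragraphs.pop(0)

    prompt = "\n\n".join(paragraphs) + "\n\n"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=RESPONSE_TOKENS,
        temperature=0.8,
        stop=["\n\n"],          # stop after roughly one new paragraph
    )
    paragraphs.append(response.choices[0].text.strip())
    return paragraphs


story = ["Opening paragraph of the story goes here..."]
for _ in range(10):
    story = continue_story(story)
```

The degeneration I mentioned comes from exactly that `pop(0)`: whatever history falls out of the window is gone for good.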
I also ran an experiment where I used a separate Instruct prompt to create a short summary of the portion of the story I was about to remove, then deleted those paragraphs and replaced them with the summary in the prompt for the next round of generation, but that didn't work either.
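Roughly, that experiment looked like this (again a sketch: `davinci-instruct-beta` is just the instruct engine I was using, and the summarisation instruction is illustrative rather than my exact wording):

```python
import openai

openai.api_key = "YOUR_API_KEY"


def summarize(paragraphs):
    # Ask an Instruct engine to compress the paragraphs we are about to drop.
    instruction = (
        "Summarize the following part of a story in three or four sentences, "
        "keeping character names and important plot facts:\n\n"
        + "\n\n".join(paragraphs)
        + "\n\nSummary:"
    )
    response = openai.Completion.create(
        engine="davinci-instruct-beta",
        prompt=instruction,
        max_tokens=120,
        temperature=0.3,
    )
    return response.choices[0].text.strip()


def compress_context(paragraphs, keep_last=8):
    # Replace everything except the most recent paragraphs with a summary,
    # so the prompt retains some "history" without carrying the full text.
    if len(paragraphs) <= keep_last:
        return paragraphs
    summary = summarize(paragraphs[:-keep_last])
    return ["[Story so far: " + summary + "]"] + paragraphs[-keep_last:]
```

That bracketed "story so far" block is the setup I mean when I say it didn't work.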
I have heard that AI Dungeon does something where it appends a "Game State" and re-appends certain fixed world data to the prompt in order to keep the world more consistent, but I wasn't able to find much information about it (and AI Dungeon seems to have many of the same problems I have been having, so I am not sure that approach really solves them).
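From what I can tell, the idea is roughly this (my guess at the structure, not AI Dungeon's actual implementation): keep a small block of fixed facts about the world and characters and prepend it to every prompt, alongside a running memory and the most recent paragraphs, so the fixed facts never scroll out of the context window. Something like:

```python
def build_prompt(world_info, memory, recent_paragraphs):
    # Fixed facts are re-sent on every request so they never scroll out of context;
    # only the running story text gets trimmed or summarized.
    parts = [
        "World facts:\n" + "\n".join("- " + fact for fact in world_info),
        "Story so far:\n" + memory,
        "\n\n".join(recent_paragraphs),
    ]
    return "\n\n".join(parts) + "\n\n"


prompt = build_prompt(
    world_info=[
        "The city of Vell is built on the back of a sleeping giant.",
        "Mara is a cartographer; her brother Edan is missing.",
    ],
    memory="Mara has discovered a map that seems to redraw itself at night.",
    recent_paragraphs=["...the last few generated paragraphs go here..."],
)
```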
Has anyone had any luck with these sorts of issues? I was hoping to find some more advanced articles on prompt engineering, but I couldn't really find anything. Would love to hear about others' experiences.