Generative Fiction - How to limit it to a certain size?

I have recently received access to GPT-3 and have been testing different prompts for generating fiction. My goal is to generate very short (500–1,000 word) stories, or, if that doesn’t work, longer stories that I can split up into separate short “chapters”.

The actual generation of the fiction is working amazingly - I have created prompts that generate some genuinely excellent prose. Some of it is so scarily close to real writing - imitating particular styles, writers, and genre conventions - that at first I dismissed it as existing work being regurgitated by the model (I had to do some fairly extensive research to convince myself that GPT-3 wasn’t just parroting pre-written text, haha).

But there are several issues that I am having:

I find it very difficult to generate fiction that ends. While what GPT-3 is generating would be perfect in a long book with many chapters, I just haven’t been able to structure it in a way that could be cut to “end” a story. The writing just continues and continues. I have tried creating prompt data to force a limit using a bunch of different tricks, but no luck.

I tried using the Instruct models to give it an explicit limit, but no luck there. I even tried to create a sort of story structure, breaking things down into specific sections with explicit storytelling functions to create some sort of story arc, but that didn’t work very well.

Ah well - I decided that if I can’t limit a story, why not just keep generating interesting fiction and break it into chapters? The trouble I ran into was the 2500 token limit. I tried several workarounds… deleting a paragraph from the beginning of the story, generating the next paragraph, and so on… but the story seemed to slowly degenerate over time (as more and more of the established “history” was lost with those deleted paragraphs).
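For concreteness, a minimal sketch of that rolling-window workaround (the engine name, token budget, and word-to-token heuristic below are rough assumptions, not exact numbers):

```python
# Minimal sketch of the rolling-window approach: drop paragraphs from the
# front of the prompt until it fits, then generate the next paragraph.
# Engine name, token budget, and the word-to-token heuristic are assumptions.
import openai

PROMPT_BUDGET_TOKENS = 1800        # leave headroom for the completion itself
WORDS_PER_TOKEN = 0.75             # very rough heuristic, not exact

def rough_token_count(text):
    return int(len(text.split()) / WORDS_PER_TOKEN)

def next_paragraph(prompt):
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt,
        max_tokens=200, temperature=0.8, stop=["\n\n\n"]
    )
    return resp.choices[0].text.strip()

def continue_story(paragraphs, rounds=5):
    for _ in range(rounds):
        # Drop paragraphs from the front until the prompt fits the budget;
        # this is exactly where the established "history" gets lost.
        while rough_token_count("\n\n".join(paragraphs)) > PROMPT_BUDGET_TOKENS:
            paragraphs.pop(0)
        paragraphs.append(next_paragraph("\n\n".join(paragraphs) + "\n\n"))
    return paragraphs
```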

I also ran an experiment where I used a separate Instruct prompt to create a short summary of the paragraphs I was about to remove, then removed those paragraphs and replaced them with that summary in the prompt for the next round of text generation - but that didn’t work either.
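A sketch of that summarise-and-replace idea, in case it helps anyone else experiment (the engine name, prompt wording, and token counts are assumptions):

```python
# Sketch of summarise-and-replace: compress the paragraphs about to be
# dropped with an Instruct-style prompt and keep a running summary at the
# top of the next prompt. Engine name and prompt wording are assumptions.
import openai

def summarize(text):
    prompt = ("Summarize the following part of a story in a few sentences, "
              "keeping every character and plot point:\n\n" + text + "\n\nSummary:")
    resp = openai.Completion.create(
        engine="davinci-instruct-beta", prompt=prompt,
        max_tokens=150, temperature=0.3
    )
    return resp.choices[0].text.strip()

def build_next_prompt(running_summary, dropped_paragraphs, recent_paragraphs):
    # Fold the newly dropped text into the running summary, then put the
    # summary ahead of the paragraphs that still fit in the window.
    running_summary = summarize(
        running_summary + "\n\n" + "\n\n".join(dropped_paragraphs)
    )
    prompt = running_summary + "\n\n" + "\n\n".join(recent_paragraphs) + "\n\n"
    return running_summary, prompt
```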

I have heard that AI Dungeon does something where it appends a “Game State”, re-appending certain fixed world data to the prompt in order to keep the world more consistent, but I wasn’t able to find much information about that (and AI Dungeon seems to have many of the same problems I have been having, so I am not sure they have solved it either).

Has anyone had any luck with these sorts of issues? I was looking for maybe some more advanced articles on engineering prompts, but I couldn’t really find anything. Would love to hear about others’ experiences.

1 Like

I did an experiment where I set the token limit to about 200 and ran it repeatedly, feeding the output back into the next prompt. I’d make edits here and there to keep it on track. Ultimately it was a really cool piece of flash fiction about a trans-Martian cycler that had been hit by a solar flare. Very exciting. I wish I had saved it. I’ll see if it ended up in another GitHub repo…
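A rough sketch of that kind of loop (the opening line and engine name are just placeholders):

```python
# Rough sketch of the loop: generate ~200 tokens, feed the result back in,
# and hand-edit between rounds to keep it on track. The opening line and
# engine name are placeholders.
import openai

story = "The trans-Martian cycler shuddered as the solar flare hit.\n"

for _ in range(10):
    resp = openai.Completion.create(
        engine="davinci", prompt=story, max_tokens=200, temperature=0.8
    )
    story += resp.choices[0].text
    # ...pause here and edit `story` by hand before the next round...

print(story)
```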

1 Like

I’m doing something like this now. Here are some approaches:

  1. You can start sample stories with Once upon a time… and end them with …and they lived happily ever after (or some similar ending).

  2. You can split it up into parts. I recommend the Three-Act Structure or something from this list: The Seven Basic Plots (Literature) - TV Tropes

You can feed part of the output as the input of the next “chapter”, but that can get expensive. My preferred approach is to build the skeleton of it without the AI, then use GPT-3 to generate the “flesh”.
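A hedged sketch of what I mean by skeleton-then-flesh (the beats and the closing line are placeholders):

```python
# Sketch of the skeleton-then-flesh approach: write the act/beat outline
# yourself, then ask GPT-3 to expand one beat at a time. The beats and the
# closing line are placeholders.
skeleton = [
    ("Act I",   "Once upon a time, a lighthouse keeper found a message in a bottle."),
    ("Act II",  "She sails out to find the sender and is caught in a storm."),
    ("Act III", "She is rescued by the sender and they sail home together."),
]

def beat_prompt(beat, is_last):
    prompt = ("Write two paragraphs of a short story covering this beat:\n"
              + beat + "\n\n")
    if is_last:
        prompt += ("Finish the story here, ending with: "
                   "...and they lived happily ever after.\n\n")
    return prompt

prompts = [beat_prompt(beat, i == len(skeleton) - 1)
           for i, (_, beat) in enumerate(skeleton)]
```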

3 Likes

@smuzani - Where I am going next is using something like Tracery (GitHub - galaxykate/tracery: Tracery: a story-grammar generation library for javascript) to generate story structures, and then “flesh” out the writing via GPT-3, as you recommend.
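For the structure-generation side, something like this toy Tracery-style expander (not the real library, and the rules are made up) shows the kind of event lines I want to generate and then hand to GPT-3:

```python
# Toy Tracery-style grammar expander (not the real library) for generating
# event lines to feed into GPT-3. All rules here are made up.
import random
import re

grammar = {
    "event":     ["#character# walks into #place#",
                  "#character# finds #object# in #place#"],
    "character": ["Larry", "Mira"],
    "place":     ["the old house on the hill", "the abandoned lighthouse"],
    "object":    ["a locked diary", "a broken compass"],
}

def expand(symbol):
    text = random.choice(grammar[symbol])
    # Recursively expand any #symbol# references in the chosen rule.
    return re.sub(r"#(\w+)#", lambda m: expand(m.group(1)), text)

print(expand("event"))  # e.g. "Larry walks into the old house on the hill"
```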

The part I haven’t been able to work out yet (or find examples of) is how to structure a prompt to “fill in” that prose - it tends to wander or improvise way too much. For example, if my narrative-generating script produces the event “Larry walks into the old house on the hill” and I structure a davinci-instruct-beta query as Narrate Larry walking into the old house on the hill in the style of a horror story, I get the results below:

Narrate Larry walking into the old house on the hill in the style of a horror story.

Larry walked into the old house on the hill. He knew he shouldn’t have come, but he had to find out what was wrong with his son. The house was abandoned and no one had lived in it for years, but Larry felt drawn there. Suddenly a voice echoed through the empty rooms. "Come in, come in." It said again and again until Larry finally found its source. A man sat at a desk with his head down, writing something. 
The man looked up from his work and stared straight at Larry with eyes that burned like fire.
“You came,” said the man as he began to stand up from his chair, but paused as if seized by some unseen force before resuming an upright position. “I told you not to come!” he yelled. “But now you are here and I cannot help myself - you will be my next victim!”

That is definitely some fun narration, and this is clearly powerful technology - but it is inventing its own story elements entirely (Larry’s missing son, the man who warned him not to come, etc.). I have been experimenting with ways to constrain it, with little success. I suspect that I need some sort of “World State” specified in the prompt to constrain the writing (and, like I said, from my research AI Dungeon does something like that), but I haven’t cracked it yet.
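A sketch of the kind of “World State” prompt I have in mind - the facts listed are invented for the example, and whether the model actually respects them is very hit-and-miss:

```python
# Sketch of a "World State" prompt: re-state the allowed characters and facts
# on every call and explicitly forbid new ones. The facts below are invented
# for the example; how well the model obeys them is hit-and-miss.
world_state = (
    "Characters: Larry (a retired carpenter who lives alone).\n"
    "Setting: the old house on the hill, empty for decades.\n"
    "Known facts: Larry has no children. No one else is in the house.\n"
)

instruction = (
    "Narrate Larry walking into the old house on the hill in the style of a "
    "horror story. Use only the characters and facts listed above, and do not "
    "introduce new characters or back-story."
)

prompt = world_state + "\n" + instruction + "\n\n"
```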

If you or anyone else has any insights on generating more constrained writing, it would certainly be a big help to me and the community!

3 Likes

Maybe I should rephrase that. It’s like you’re a scriptwriter and the AI is the actor-director. You can give them a lot of room for creativity, but they’re limited to the scene. Actual movie scripts have great prompts too, something like this:

The Heavyweight Champion of the World, Apollo Creed. He is twenty-eight years old. He is a tall, smooth-muscled Black man with barely a scar on his light coffee-colored face.

It doesn’t give much more detail than that. But they know they have to cast a young, muscular Black man. Is he cheerful? Stoic? Cocky? It doesn’t say. The actor interprets it with their own creativity. In this case, the actor modeled Creed after Ali. But if the role were given to Eddie Murphy, it might be different.

If you wanted to generate a full story, it probably wouldn’t be through GPT-3. It’s mindless; it has no sense of structure, only patterns. The prompt you gave doesn’t say how the story ends. Even real writers have trouble with endings. You tell it to generate Larry walking into the house, but you also have to tell it what happens to Larry.

An alternative is to give it a sample of a complete story. I give it a full example paragraph with an ending. Here’s one of the simpler ones (bold is input, non-bold is output):

Here are three stories. Each story is described with a title, and a summary paragraph, and a moral:

The Two Goats
Two Goats met on opposite sides of a chasm. The trunk of a fallen tree was the only means of crossing. The goats wouldn’t let each other pass. They put a foot on the log each. Then they met horn to horn. Neither would give way, and so they both fell.
Moral: It is better to yield than to come to misfortune through stubbornness.
##

The Fox & The Rabbit
A Fox saw a Hare and made an attempt to catch him. The Hare was too quick for the Fox and ran away. After a while, the Hare stopped to rest. The Fox then caught up with the Hare, grabbed it by its ears, and threw it across his back. “You thought you could escape,” said the Fox, “but now I’ve got you.” “Indeed,” replied the Hare, “you have me in your power but all is not lost yet.” Moved by these words, the Fox set him free again.
Moral: It does no good to be angry when someone has already gained the advantage over you

A longer one:

Here are synopses for three different children stories. Each plot is described with a title and a summary paragraph.

The Cotton Rabbit
A stuffed rabbit sewn from cotton is given as a present to a small boy. The boy plays with the more modern and mechanical toys and forgets the cotton rabbit for a time. The oldest toy in the nursery tells the rabbit about toys magically becoming real due to love from children. One night, the boy’s nanny gives the rabbit to the boy to sleep with. The rabbit becomes the boy’s favorite, enjoying picnics with him in the spring, and the boy regards the rabbit as real. One day, the boy comes down with fever, and the rabbit sits with him as he recovers. The doctor orders that the boy should be taken to the seaside and that his room should be disinfected — all his belongings burnt, including the cotton rabbit. The rabbit is bundled into a sack and left out. A fairy appears and comforts the cotton rabbit. The fairy gives the cotton rabbit a kiss. The cotton rabbit changes into a real rabbit and joins the other rabbits in the forest. The next spring, the rabbit returns to look at the boy, and the boy sees a resemblance to his old cotton rabbit. He ends up playing with the rabbit out in the wild.
##
The Little Mermaid
A mermaid is washed ashore near a fisherman’s house. The boy who lives there takes her in and nurses her back to health. The mermaid learns how to walk on land, and she falls in love with the boy — but he only sees her as a friend. One day, the mermaid asks for his help in getting into the ocean, but he refuses to help her leave. The mermaid becomes very sad and leaves the house one night so that she can be with her prince under the sea. She returns home many years later with a baby child of her own — but it wasn’t a daughter; it was a son and heir to the throne of saltwater kingdoms. In time, this heir grows up to become king himself, ruling over all three seas: saltwater, fresh water, and underground rivers.

The AI mimics the input prompt. If you give it an ending, it gives an ending. I’ve had the most creative answers with a prompt based on The Firebird (the version with theft and murder). But unfortunately, it also gave answers with cannibalism and torture, so I tossed that prompt. This cotton rabbit prompt had the worst replies.
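Mechanically, the wiring might look like this (the example block is the text quoted above, truncated here; the new title is made up, and the “##” separator doubles as a stop sequence so the model stops after one story):

```python
# Wiring up the few-shot prompt: complete example stories (with endings),
# then a bare new title for the model to finish. "##" doubles as a stop
# sequence. The examples are truncated here; the new title is made up.
import openai

examples = """Here are three stories. Each story is described with a title, and a summary paragraph, and a moral:

The Two Goats
Two Goats met on opposite sides of a chasm. [...]
Moral: It is better to yield than to come to misfortune through stubbornness.
##
"""

prompt = examples + "\nThe Clever Crow\n"

resp = openai.Completion.create(
    engine="davinci", prompt=prompt,
    max_tokens=300, temperature=0.7, stop=["##"]
)
print(resp.choices[0].text)
```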

2 Likes

This is fascinating stuff. I’m just trying to keep a chat assistant from going off “script” with my app users, and I’m having similar issues to those described above. I need to keep it from wandering off on its own and have it follow the rough structure I’ve laid out.

2 Likes

Although I haven’t tried it, one thought I had for chat was to check the user’s input before sending it to the generation endpoint. Maybe filter out anything off-topic, or even use a faster model to figure out whether it’s on topic or not?
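Untested, but the check might look something like this (the engine choice, topic string, and Yes/No format are all assumptions):

```python
# Untested sketch of the pre-check idea: ask a smaller, faster model whether
# the user's message is on topic before it ever reaches the main prompt.
# Engine choice, topic string, and the Yes/No format are assumptions.
import openai

def is_on_topic(user_message, topic="booking a table at the restaurant"):
    prompt = ("Decide whether the following message is about " + topic + ".\n"
              "Message: " + user_message + "\n"
              "Answer Yes or No:")
    resp = openai.Completion.create(
        engine="curie-instruct-beta", prompt=prompt,
        max_tokens=1, temperature=0
    )
    return resp.choices[0].text.strip().lower().startswith("y")
```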

1 Like

I recommend you first brainstorm with ChatGPT for a theme and a plot. Next, work with it to construct several endings, pick the best one, and probably rework it. When you have those, ask it to abbreviate them as much as possible without losing anything critical, and copy/paste the result into a separate doc. Now, don’t get it to write the whole story, or even whole chapters - get it to write one scene at a time for you to discuss (for the final scene, tell it to work from the ending you’ve already done), and tell it to put “The End” at the end so you know. As it produces scenes, work with it to improve each one until it is acceptable, then tell it to summarise the scene as briefly as possible without losing anything critical and paste that summary into your separate doc. Keep another doc just for the full scenes.

As you progress, ChatGPT will start to forget earlier scenes and even lose the plot! Mr Johnson may reappear even though he died ten scenes ago! That’s when you paste the first doc back into ChatGPT to refresh its memory with the abbreviated theme, plot, and scenes so far. Repeat as needed. The End. In your scenes doc, link related scenes into chapters (just as you link related sentences into paragraphs). Rework and polish the finished story if needed, with or without ChatGPT’s help.
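If you’d rather script this loop than copy/paste by hand, a rough chat-API sketch of the same workflow might look like this (the summary doc, scene brief, and prompt wording are all placeholders):

```python
# Rough scripted version of the workflow above (pre-1.0 openai-python chat
# API). The running summary doc, scene brief, and prompt wording are all
# placeholders, not a tested recipe.
import openai

summary_doc = ("Theme: ...\nPlot: ...\nEndings considered: ...\n"
               "Scene summaries so far: ...\n")
scene_brief = "Scene 7: the detective finally confronts Mr Johnson's business partner."

messages = [
    {"role": "system",
     "content": "You are helping me write a short story one scene at a time."},
    {"role": "user",
     "content": "Here is the story so far, abbreviated:\n" + summary_doc},
    {"role": "user",
     "content": ("Write this scene in full. Put 'The End' after it only if it "
                 "is the final scene:\n" + scene_brief)},
]

resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
scene_text = resp.choices[0].message["content"]
```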

Note: this rudely-bumped thread started a year and a half before ChatGPT even existed.