A theoretical novel writing tool

Some thoughts on how GPT-3 could be used to write a novel.

For starters, embrace the idea that writing a novel is an iterative process involving the production of a sequence of discrete temporal units (chapters) that accumulate over time. I’m way less interested in an insta-novel tool than one that takes into account how a novel actually grows, piece by piece, over time.

Here’s how I imagine it working. I write a chapter through an iterative process of prompt/completion/revision/resubmission. Once I am happy with chapter 1, the AI stashes it in a sort of ongoing directory, let’s call it the Log.

As I continue on to Chapter 2, the AI remains aware of what I have stashed in the log so far. This way, if I continue to write about a character I introduced in chapter 1, it can refer back to that chapter for consistency’s sake.

As chapters accumulate in the log, I can return to previous chapters and make any edits I see fit. Once I make these edits to previous chapters, the AI notices whether they affect subsequent chapters. For instance, say I go back to chapter 1 and change a character’s name from Sam to Steve. Every subsequent instance of “Sam” would get flagged by the AI. While a reader’s experience of reading a novel is sequential, the writer experiences it, in revision, as a simultaneity.
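To make the flagging idea concrete, here is a minimal sketch of how it might work, assuming chapters are stashed as plain text files; the folder layout and the helper function are hypothetical, not part of any existing tool:

```python
# Hypothetical sketch of the Log: chapters accumulate as text files, and a
# rename in an earlier chapter flags every later occurrence of the old name.
from pathlib import Path

LOG_DIR = Path("log")  # assumed location of stashed chapters: chapter_01.txt, chapter_02.txt, ...

def flag_renamed_character(old_name: str, new_name: str, edited_chapter: int) -> None:
    """After renaming a character in one chapter, flag stale uses in later chapters."""
    for path in sorted(LOG_DIR.glob("chapter_*.txt")):
        chapter_num = int(path.stem.split("_")[1])
        if chapter_num <= edited_chapter:
            continue
        for line_no, line in enumerate(path.read_text().splitlines(), start=1):
            if old_name in line:
                print(f"{path.name}, line {line_no}: still says '{old_name}' (now '{new_name}')")

# e.g. after changing Sam to Steve in chapter 1:
flag_renamed_character("Sam", "Steve", edited_chapter=1)
```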

One of the satisfactions we derive from narratives depends on the concept of the arc. Events that are set in motion in the early part of the novel are resolved through character action and transformation in the second half. We tend to feel ripped off when writers use deus ex machina, new elements introduced later in the narrative, to resolve conflicts. By making the AI aware of the novel’s dynamic, growing data set, I imagine it can help generate the sorts of satisfying climaxes and denouements that are a structurally engrained element of the novel as art form.

8 Likes

I have some thoughts on an iterative process for novel writing.
Books like Larry Brooks’ “Story Engineering” or Syd Field’s “Screenplay” break down the structure of a novel and a screenplay respectively, down to the percentage of the story at which key events need to happen.

I imagine there are a few ways to write it iteratively by tackling the novel from a structure perspective.

This is all speculative, but it’s the approach I would try first.

I’m going to use Larry Brooks’ example in this case.

STEP 1: THE OVERALL STRUCTURE
You’ll need a few-shot set of example stories summarized according to one specific structure, hitting all the key points of the story (inciting incident, plot points, etc.).

In the case of Story Engineering, that structure would be the four parts of the book (SETUP, RESPONSE, ATTACK, RESOLUTION) and the milestones between them (hook, plot points, midpoint, etc.).

From there, I feel that you could start generating original summaries that follow the same structure.

I think this summary will need to be part of the prompt of every completion from then on, in order to keep the following summaries on topic.
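As a very rough sketch of what Step 1 could look like with the pre-1.0 openai Python client (the beat labels and example summaries below are placeholders, not Brooks’ actual beat sheet):

```python
# Rough sketch of Step 1: a few-shot prompt built from hand-written example
# summaries that all follow one structure, followed by a new premise to complete.
# The beat labels and example text are placeholders.
import openai  # pre-1.0 client; assumes openai.api_key is set

FEW_SHOT_EXAMPLES = """PREMISE: <premise of example story 1>
HOOK: <...>
FIRST PLOT POINT: <...>
MIDPOINT: <...>
SECOND PLOT POINT: <...>
RESOLUTION: <...>
###
PREMISE: <premise of example story 2>
HOOK: <...>
...
###
"""

def generate_structure(premise: str) -> str:
    prompt = FEW_SHOT_EXAMPLES + f"PREMISE: {premise}\nHOOK:"
    resp = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=400,
        temperature=0.7,
        stop=["###"],
    )
    return "HOOK:" + resp["choices"][0]["text"]
```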

STEP 2: LONG-FORM SUMMARIES

As in the first step, we now need to get into more detail for each section of the book (SETUP, RESPONSE, ATTACK, RESOLUTION).

I think at this point it is still feasible to “few-shot” it, but I suppose a better approach would be to fine-tune models to generate each part from the Summary:

e.g. “Write the introduction for the following story:” + [structure generated in Step 1]

In his books, Larry Brooks talks about the approach of laying out the foundations, or skeleton, of your novel: each scene in the book has a purpose, which is to lead up to and set up the milestones. Once you know all of those requirements, the scenes practically “write themselves”, as they have a purpose. I think that’s a pretty good way to look at it.

The shape of this Summary would have to be something like a list of scenes that occur in the book.

SCENE 1: “Jasmine arrives at the airport of a new country”
SCENE 2: “Jasmine drops her bag; a handsome man, Steve, picks it up and returns it, and they exchange smiles.”
SCENE 3: “Jasmine arrives at passport control, can’t find her passport.”

I speculate that you could get summaries as long as 1000 tokens here.
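Speculatively, Step 2 might be wired up something like this; the prompt wording and per-section call are just guesses:

```python
# Sketch of Step 2: ask for a scene list for one section of the book,
# keeping the Step 1 structure in the prompt. The wording is only a guess.
import openai  # pre-1.0 client; assumes openai.api_key is set

def generate_scene_list(structure: str, section: str) -> str:
    prompt = (
        structure
        + f"\n\nList the scenes of the {section} section of this story:\n\nSCENE 1:"
    )
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=1000, temperature=0.7
    )
    return "SCENE 1:" + resp["choices"][0]["text"]

# e.g. generate_scene_list(structure, "SETUP")
```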

STEP 3: Expand the Scenes

I guess here is where things start getting complicated. It will be hard to keep the book “on topic” because of token limitations, and because of how connected things are in a book: how do you set up things you’ll pay off later on, and so forth?

But theoretically, you could, with the summary of the book, create prompts to expand scenes.

But with each prompt you create, you’ll need to devise mechanisms to keep GPT-3 on topic.
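For example, a scene-expansion prompt could be assembled roughly like this, with the book summary prepended to keep the generation on topic (the wording is only illustrative):

```python
# Sketch of Step 3: expand one scene, keeping GPT-3 on topic by prepending
# the book-level summary. Prompt wording here is just one guess.
import openai  # pre-1.0 client; assumes openai.api_key is set

def expand_scene(book_summary: str, scene_line: str) -> str:
    prompt = (
        "BOOK SUMMARY:\n" + book_summary + "\n\n"
        "SCENE: " + scene_line + "\n\n"
        "Write this scene in full prose:\n"
    )
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=600, temperature=0.8
    )
    return resp["choices"][0]["text"]
```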

Ok, I rambled enough, but I hope that this was useful or at least some food for thought.

1 Like

Lots of food for thought here, indeed. There are so many different ways to write a novel and I can see how GPT-3 could be used with this approach. Roughly speaking, I think writers fall into two broad categories, plotters and seekers. Plotters start out by developing a structure, an outline, a pre-determined set of themes. Seekers blindly plow forward and see where the characters and language take them and let the structure rise subconsciously as a byproduct of the process. For what it’s worth, I’m the latter sort of writer, so I’m looking at GPT-3 thinking about how to incorporate it into my process.

I never work with an outline ahead of time, though in later drafts I make endless lists of scenes and chapters to try to detect the latent structure that’s already there. I’m a fan of the late-draft outline rather than the first draft outline. I’ve had numerous occasions when I only start realizing what a story or novel really “means” after it’s been published. I find it impossible to engineer subtext; for me it has to appear as a byproduct.

I can see how GPT-3 could be integrated into either sort of approach, which is part of what makes it so cool.

2 Likes

GPT-3 does better if you give it a concept or theme around which to organize the story. For many authors, the theme emerges as we draft because we have to excavate our own consciousness to see what’s resonating. In some cases, such as Andy Weir’s The Martian, the theme is simple and also up front. He wanted to do a thought experiment about how an astronaut could survive on Mars, and do a deep exploration of the technology, the science, and the psychology required. In other cases, such as Tolkien’s Lord of the Rings, the central theme that he was conscious of was his desire to explore language. But what truly emerged was a sword-and-sorcery fictionalization of his trauma from World War I. That being said, I wonder if or how GPT-3 (or future technologies) could imbue a novel with these deep psychological currents? Or if it would just emerge by virtue of the fact that these machines are trained on hundreds of lifetimes worth of text.

2 Likes

I guess a different approach would be alternating between generation and summarization to keep the AI on topic.

GENERATE:

Use a prompt starting with:

  • Overall summary - The story so far
  • Medium Term Summary - A summary of recent events
  • Short Term Summary - A summary of the last completion

It gets harder as you move forward if you don’t want to blow through the token limitations.

You’d have to set budgets for prompts/completions. I guess the limit for davinci is 2048 tokens, right?

For example:

  • 500 tokens for the overall summary
  • 150 tokens for the medium-term summary (optional)
  • 398 tokens for the short-term summary
  • 200 tokens for any structural wrapping needed (e.g. character list, setting description)
  • 800 tokens for the completion (that should give you about 2 pages of a book? maybe?)

This budget will likely have to be a little flexible, because the overall summary will grow as you progress, so I’d keep a buffer for it; but I think in the early stages you could get away with longer completions.

Store the full completion as book pages.

SUMMARIZATION

  1. Summarize the returned completion and add it to the “Short Term Summary”.
  2. Summarize the “Short Term Summary”, add it to the bottom of the “Medium Term Summary” (removing the top), and add it to the overall summary.

This would be a fun and relatively quick python script to write.
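A minimal sketch of what that script might look like, using the token budgets above; the prompt wording, helper names, and summary settings are my own guesses:

```python
# Minimal sketch of the generate/summarize loop described above.
# Token budgets are the rough numbers from this post; prompt wording,
# helper names and summary settings are made up for illustration.
import openai  # pre-1.0 client; assumes openai.api_key is set

BUDGET = {"overall": 500, "medium": 150, "short": 398, "wrapping": 200, "completion": 800}

def complete(prompt: str, max_tokens: int, temperature: float = 0.8) -> str:
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=max_tokens, temperature=temperature
    )
    return resp["choices"][0]["text"]

def summarize(text: str) -> str:
    prompt = f"Summarize the following passage in a few sentences:\n\n{text}\n\nSummary:"
    return complete(prompt, max_tokens=120, temperature=0.3).strip()

overall, medium, short = "", "", ""
pages = []

for _ in range(10):  # however many pages you want
    prompt = (
        "THE STORY SO FAR:\n" + overall + "\n\n"
        "RECENT EVENTS:\n" + medium + "\n\n"
        "LAST PASSAGE:\n" + short + "\n\n"
        "Continue the story:\n"
    )
    page = complete(prompt, max_tokens=BUDGET["completion"])
    pages.append(page)                                   # store the full completion as book pages

    short = summarize(page)                              # 1. summarize the returned completion
    condensed = summarize(short)                         # 2. summarize the short-term summary...
    medium = "\n".join((medium.splitlines() + [condensed])[-5:])  # ...add to the bottom, drop the top
    overall = (overall + " " + condensed).strip()        # ...and fold it into the overall summary
```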

5 Likes

Fascinating. I’d love to play around with such a tool if you were to code it. Definitely let me know how that goes.

One thing I’m trying to be mindful of is using GPT-3 to write things the way they’re “supposed” to be. I think of it kind of like playing an electric guitar plugged into an amp after playing an acoustic guitar. Screeches of feedback are technically “wrong” but guitarists quickly incorporated them into their playing on purpose. I actually like the things that GPT-3 does that seem “off” or “incorrect.” The form of the novel is already amazingly diverse, from realist 19th century novels, through the stream of consciousness experiments of modernism, postmodern weirdness à la Pynchon, magical realism, and now the almost feed-like work of younger writers like Patricia Lockwood.

I guess the thing I’m looking to do is create an ongoing repository of chapters so that the AI can at least be aware of them, and it would be ideal to not have to summarize “our story so far,” although I can already see how that actually might be fun. The structure of the novel as we know it is still largely beholden to Charles Dickens, who serialized his work in newspapers, the cutting edge technology of his day. All of this is to say, I’m looking at GPT-3 to blaze a trail into the future of literature rather than replicate what has come before. Glad to connect with other minds puzzling over the same questions.

2 Likes

I started doing some investigation this morning for fun; the rabbit hole gets deep quickly.

This is all in the playground, before starting on the scripting tool, just to gather requirements for the code.

BACK COVER BLURB
I started with generating a back-cover blurb for a book, with a prompt and without a prompt. This step was easy and works either way (davinci-instruct-beta was really good at the task).

FIRST PAGE
From the back cover blurb, I started on the first page. I could have simply started from scratch, but I wanted to see how closely I could stick to what was in the blurb (this will be important for keeping track of the book as a whole without being prescriptive about what happens).

The first page differs from the following pages, as it doesn’t have anything other than the back-cover blurb to start from, so I started with

BACK-COVER-BLURB: +[whatever I generated previously]+CHAPTER 1:

as prompt, using davinci, returning 400 tokens to get about a page of a book.

If the page ends in a period, I assume it’s the end of a paragraph and store that entire page.

If the page ends in an incomplete paragraph, I trim it and store the trimmed fragment to use as part of the prompt for the next page.
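A tiny helper like this could handle that check; the function name is made up:

```python
def split_trailing_fragment(page_text: str):
    """If the page ends mid-paragraph, split off the incomplete fragment so it
    can seed the next prompt; otherwise keep the whole page."""
    text = page_text.rstrip()
    if text.endswith("."):
        return text, ""                                  # page ends on a full sentence
    cut = text.rfind(".") + 1
    return text[:cut].rstrip(), text[cut:].lstrip()      # (complete part, leftover fragment)
```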

SUMMARIZING THE PAGE (this is a function that will get repeated with every page generation)
The next step was to summarize the first page’s output. This needs to be done with a lower temperature; I tried instruct, but the best results I got were using davinci with a prompt like this:

After reading the following text, the teacher asked the student to summarize it.

[PAGE CONTENTS]

The student provided the following summary:
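In code, that summarization step might look roughly like this; the function name and settings are my own, only the prompt text comes from above:

```python
# Sketch of the page-summarization step, reusing the teacher/student prompt above.
import openai  # pre-1.0 client; assumes openai.api_key is set

def summarize_page(page_contents: str) -> str:
    prompt = (
        "After reading the following text, the teacher asked the student to summarize it.\n\n"
        + page_contents
        + "\n\nThe student provided the following summary:"
    )
    resp = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=120,
        temperature=0.3,   # lower temperature, as noted above
        stop=["\n\n"],
    )
    return resp["choices"][0]["text"].strip()
```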

SUBSEQUENT PAGES
My plan here is to loop through this step 200 times; this will most likely not lead to any recognizable narrative arc, but it will spit out 200 pages. A rough sketch of that loop follows the list below.

The prompt will contain:

  • The back-cover blurb
  • The recent-events summary
  • The last complete OR incomplete paragraph (as the starting point)
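Roughly, each pass of that loop would look like this (helper name and prompt formatting are placeholders; trimming and summarization happen between passes as described above):

```python
# Rough skeleton of the 200-page loop: blurb + recent-events summary + last
# (possibly incomplete) paragraph in, one ~400-token page out.
# Helper names and prompt formatting are placeholders.
import openai  # pre-1.0 client; assumes openai.api_key is set

def next_page(blurb: str, recent_summary: str, last_paragraph: str) -> str:
    prompt = (
        "BACK-COVER BLURB:\n" + blurb + "\n\n"
        "THE STORY SO FAR:\n" + recent_summary + "\n\n"
        + last_paragraph
    )
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=400, temperature=0.8
    )
    return resp["choices"][0]["text"]

# for _ in range(200):
#     page = next_page(blurb, recent_summary, last_paragraph)
#     ... trim, store, and re-summarize as described above ...
```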

FINDINGS

I feel like I will have to run grammar-correction steps after every generation, as I have noticed that the engine can start dropping the ball if your text contains grammatical or spelling mistakes; it will incorporate them into its styling.

If your frequency penalty is high, the engine will start penalizing punctuation in the later parts of the text. I don’t quite know how to stop that from happening other than keeping the generations short.

Is there a mechanism where I can exclude tokens from being penalized? I think I’m going to open a thread on this.

3 Likes

Fascinating. I didn’t realize that about the high frequency penalty penalizing punctuation. I’ve noticed that a number of my completions end with strings of words that fractal out into word salad. At first I just lopped these off and discarded them, but lately I’ve been sifting through those parts of the completion looking for interesting word combinations that I can then incorporate into the next iteration of the prompt.

1 Like

Good thread. I’ve been too busy with other stuff to read a lot here, but thanks for this… I’ve been experimenting with a few things over the last year… everything from children’s stories (chapter books) to never ending narratives based off character backstories generated by GPT-3 at LitRPG Adventures…

One of the things on my todo list is to fine-tune for blurb generation… and some other stuff… oh for more time and money!

ETA: I’d love a bigger context window in the next version…

3 Likes

@PeculiarPath I believe that with good prompt design and preparatory work (I like your idea), the current version of the OpenAI interface might suffice to write ‘a novel’ (I think of approaches like the snowflake method), but the lack of an overarching ‘log’ (as @boudinot calls it), the actual writer’s “memory”, will almost certainly result in less interesting fiction (and almost certainly not in “literature”).

1 Like

There are several methods out there for fiction writing, I like the structured approach (regardless of what method) because the groundwork of codifying the requirements has already been done for the most part.

Choose a method, turn it into an algorithm. Of course, as with any creative work, a lot of the “formulas” out there contain a great deal of subjectivity and ambiguity, and are very often vague and open to interpretation. Even with Larry Brooks’ Story Engineering and Story Physics, titles that suggest a more exact approach, creativity is hardly an exact science, and humans cannot be beaten when it comes to that.

The processes I describe above are mere attempts to use those methods to deal with the limitations of the system (memory, planning, plotting) in order to mimic the structure present in those books.

My crude understanding of the nuts and bolts behind GPT-3 is that it is a highly trained prediction engine that calculates the most probable word (token, actually) that will come next, based on what has come before, with “before” being your immediate, limited prompt plus the whole corpus used for pre-training, which produced the 175B parameters that make up davinci.

Predicting the next token is hardly planning ahead, let alone being able to set up things to pay off in later chapters, keep track of character traits and personality, or know how a character is likely to behave.

I keep going back to the analogy of synthesizers and drum machines. When I started using GPT-3 I started listening to Nine Inch Nails again. No one could ever accuse Trent Reznor of making music that wasn’t emotional or expressed some sort of fundamentally human spirit.

In my opinion, if you look to GPT-3 and AI in general as a means of replacing the process of writing, then you’ll end up disappointed. Writing to me is a process that can never and maybe should never be completely understood to the point that it can be reverse engineered. It exists in the same category as dreaming and sex, an experiential rather than theoretical activity. Susan Sontag in Against Interpretation said “In place of a hermeneutics we need an erotics of art.” My understanding of this is that reducing the writing process to merely an algorithm negates the very reason it exists in the first place, as something to be experienced subjectively, emotionally, viscerally.

It’s the collision of algorithms and the subjective nature of art that excites me most. I love all this thinking around how to bend the tool to produce a certain kind of output, while recognizing that it allows for the creation of entirely new art forms and genres, like electronic music, that wouldn’t exist otherwise.

1 Like

Kinda had the same idea when I saw all those articles about GPT-3 and journalists using the AI to write their pieces.
While waiting to be granted permission, I tried experimenting with GPT-2. With the latest version, I was able to feed my novels and short stories to the AI (also because I’m in Québec and I write in French, so I had no idea how it would react). I wanted to see if an AI could write in my style.
Could it be used to write a short story just like it did with those articles?
Short answer: no.
Long answer: hell no!
Even though I could recognise names or words from my novels, the phrases made no sense (at all!). I would basically have to rewrite everything myself, which is not the point.
GPT-3 gave me more hope for about a second. The writing is better, but it goes nowhere, says nothing, doesn’t have substance.
The thing is, part of the stories we write isn’t even on paper. It’s in between the lines.
Plus, the AI works with rules. You teach it phrase or story structure, or grammar, and such, and it will reproduce that. But artists are constantly breaking the rules. Can you write a rule for the AI to break the rules and be original?
I have to agree with @Romolo. You can get lots of words, but it won’t be literature. Not yet.

Literature is a slippery concept: it contains a judgement, and fiction written by AI has no such pretensions.

However, I’m more hopeful than you, @py.villeneuve! The creative powers of GPT-3 for fiction writing are insane. It can lead to literature, but not in a push-the-magic-button way. A writer is needed, a writer who understands how fiction works and how prompts work, a writer who controls the rodeo bull. Any non-writer will be tossed off immediately.

I believe AI-enriched fiction will be a thing in a few years to come. It will produce novels that are literary and revolutionary. They will be written by human and AI together.

I think it comes down to whether or not we consider GPT-3 a “labor-saving device.” If so, then it’s going to disappoint us by not autonomously producing works of coherent literature. But if we think of it as an instrument that requires the same amount, but a different kind, of work from the writer, then I’m with @Romolo in feeling it can spark a whole new kind of fiction.

DALL·E could be used to provide a graphical overlay for the novel, using heat maps for comparative analysis alongside thousands of other successful books.

I am not anticipating that any one model will do all of the work of story writing. I see potential for a set of task-specific fine-tuned models to help with stuff like: paraphrase this / finish the scene / or something like make an analogy of the highlighted text.

I’ve found that a “reverse summarizer” works very well for expanding on a block of text, which can then be expanded on again. Now I’m fiddling with controlled generation that has something equivalent to the “log” you mentioned: basically, if you create a fine-tuned model that is trained on all sections of a story, and there is one tying element that connects the sections, it keeps the output coherent.

1 Like

I’m super curious about your reverse summarizer and controlled generation. Can you share any examples?

Yeah sure, there are three of them. The first is just the reverse summarizer, the others are attempts at controlled generation. The third one’s… interpretive, lol.

3 Likes

Hey everyone! New to the chat! On the topic of Transformers being able to write a cohesive “long term” novel, I think Transformers must have some sort of differentiable memory attached.

Another issue is recurrence slowing things down. I.e. when you train GPTm (m for memory, I made it up!), you don’t want to individually feed the output of the old prediction into the next training step; i.e. you don’t want any correlation between training steps 1 and N. In GPTx/BERT/variants, they just assume the output is right (teacher forcing).

So the issues are:

  1. How to add differentiable memory to GPT?
  2. How to do this without a recurrence relation?

I first thought of something boring (haven’t tested it). You make a matrix M (called memory) of size (m, f). f is the original embedding dimension (like 768); m can be massive, say 20,000. For every batch of text, you pass through the MH attention layers and dense layers all the way to the final softmax layer, then somehow copy BERT’s CLS approach, “extract” the 768-dim CLS vector, and compute v = M * CLS, which will get you a tall skinny vector (20,000 by 1).

Then apply a long-tailed sigmoid, i.e. 1/(1+exp(-0.5v)), to v. Then element-wise multiply (broadcast) the sigmoid output with CLS^T. You’ll get a (20,000 by 768) matrix the size of M.

Then M(t+1) = M + 1/(1+exp(-0.5v)) * CLS^T. Then append M onto X (which can be very problematic), or somehow “summarise” M (i.e. say via a Clarkson-Woodruff Transform, shrinking M from (20000, 768) to say (500, 768)). You can even train the summarisation weight matrix S so we get:

M(t+1) = M + 1/(1+exp(-0.5v)) * CLS^T
X(t+1) = concat[ X(t) , S * M(t+1) ]

The CW Transform will just “freeze” S as a hash table.
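In numpy, that update would look something like the toy sketch below; the sizes are illustrative, the random S stands in for the frozen Clarkson-Woodruff sketch, and the gated write broadcasts the sigmoid output against CLS^T so the result has shape (m, f):

```python
# Toy sketch of the proposed memory update. Shapes and values are illustrative;
# the write broadcasts the sigmoid gate (m, 1) against CLS^T (1, f).
import numpy as np

m, f, k = 20_000, 768, 500
M = np.zeros((m, f))                      # persistent "memory" matrix
S = np.random.randn(k, m) / np.sqrt(m)    # frozen summarisation matrix (stand-in for the CW sketch)

def memory_step(M, cls_vec, X):
    """One update: v = M @ CLS, gated by a long-tailed sigmoid, written back against CLS^T."""
    v = M @ cls_vec                                   # (m,)   similarity of each memory row to CLS
    gate = 1.0 / (1.0 + np.exp(-0.5 * v))             # (m,)   long-tailed sigmoid
    M_next = M + gate[:, None] * cls_vec[None, :]     # (m, f) M(t+1) = M + sigmoid(0.5 v) * CLS^T
    X_next = np.concatenate([X, S @ M_next], axis=0)  # X(t+1) = concat[ X(t), S * M(t+1) ]
    return M_next, X_next

cls_vec = np.random.randn(f)              # stand-in for the extracted CLS vector
X = np.random.randn(1024, f)              # stand-in for the batch's token embeddings
M, X = memory_step(M, cls_vec, X)
```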

This has 2 benefits:

  1. Incorporates long-term attention, i.e. the dot product makes similar memories get recalled even more often, and discounts less important memories.
  2. Fixes catastrophic forgetting. The use of a long-tailed sigmoid allows long-term memories to stay in place and not vanish.

However, there is a clear issue with this approach:

  1. Recurrence comes back! Batches must now be sequential… I.e. previously you could have 1,000,000 books and scramble every page, and GPT would be fine. Now GPTm needs to train ONLY on book 10, then 103, then 12039, with page orderings intact.
2 Likes