Echoes of Oblivion: a 33k word novel written 100% by o1

What I really want is a tool like the one they made in Best Seller Code…

You upload your novel and it tells you whether it will sell or not… Or edits it to make it a best seller…

1 Like

Worth noting that Echoes of Oblivion had no edits. The output is a single, forward-only pass. If you made even one additional editing pass, I think you could dramatically improve the quality of the final result.

2 Likes

Yeah, what I’ve noticed is that if I edit as I go, it results in better output, because the model bases new content on what it’s written already… so if you correct it early, it’s smoother all around…

I know what you mean about LLMs only wanting to write about 1200 words at a time. o1 is willing to push past that; one time I got o1-preview to output 4500 words in one go, but it was reluctant and noted that its guidelines suggest a max output of 500 words.

In the novella project I’m working on, we’ve simply subdivided the chapter and fed in the previous chunks as part of the context window when writing part 2 or 3. That’s worked well, since then we’re not fighting its impulse to write in 1200 word blocks. Similar to the outline you provided, we don’t at this point give the system the entirety of the novella as context, just a detailed plan of where the story has been and where it’s going. That fits into your comment about breaking up the novel and writing parts separately - that’s working well for us!
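For anyone curious, here’s a rough sketch of what that chunked approach looks like in Python. The model name, prompt wording, and chunk count are placeholders for illustration, not our exact pipeline:

```python
from openai import OpenAI

client = OpenAI()

def write_chapter_in_chunks(plan: str, chapter_outline: str, n_chunks: int = 3) -> str:
    """Generate one chapter in sequential chunks, feeding each finished
    chunk back in so the model continues instead of restarting."""
    parts: list[str] = []
    for i in range(n_chunks):
        prompt = (
            f"Detailed plan of where the story has been and where it's going:\n{plan}\n\n"
            f"Outline for this chapter:\n{chapter_outline}\n\n"
            f"Chapter so far:\n{'\n\n'.join(parts) or '(nothing yet)'}\n\n"
            f"Write part {i + 1} of {n_chunks} of this chapter, picking up "
            "exactly where the text above leaves off. Stop at a natural scene break."
        )
        resp = client.chat.completions.create(
            model="o1-preview",  # placeholder; any chat model works here
            messages=[{"role": "user", "content": prompt}],
        )
        parts.append(resp.choices[0].message.content)
    return "\n\n".join(parts)
```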

1 Like

I’m assuming your outline has to drill into the specific scenes that should play out within a chapter then… Do you see cohesion issues when stitching the subparts of a chapter together? Or does the model naturally pick up the scene where it left off? I’m assuming you still have to stop at logical break points within the chapter.

1 Like

With technical writing it’s a bit more difficult to break mid-section (chapter). The audience we’re targeting is product/project managers, and we’d be doing well to get them to think about the sections they want in a given document. They’re not going to want to drop down to the sub-section level.

With Echoes of Oblivion I had o1 generate a detailed summary of each chapter and told it to include details about characters, plot points, story arcs etc… I include the last chapter in full and then a summary of the chapters before that along with the outline. I saw around a 50% compression using this technique but in retrospect I wouldn’t recommend it…
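In code terms, the context assembly looked roughly like this. A minimal sketch; the variable names and prompt wording are illustrative, not my actual script:

```python
def build_context(outline: str, summaries: list[str], last_chapter: str) -> str:
    """Assemble the prompt context: the full outline, summaries of
    earlier chapters, and the most recent chapter in full."""
    earlier = "\n\n".join(
        f"Chapter {i + 1} summary:\n{s}" for i, s in enumerate(summaries)
    )
    return (
        f"Novel outline:\n{outline}\n\n"
        f"Earlier chapters (summarized):\n{earlier}\n\n"
        f"Previous chapter (full text):\n{last_chapter}\n\n"
        "Write the next chapter."
    )
```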

The issue is that, as the story progresses, you’ll notice formatting from the summaries start to creep into the output. These models are just pattern matchers, and they naturally want to mimic the patterns they’re shown. So when you reach a tipping point where you have more cliff notes than content, any new content starts to reflect the structure of the cliff notes.

I suspect that if you’re willing to spend the tokens, the best approach is to show the model the outline plus everything it’s written so far. The outline helps it know where it’s going (you could limit it to just what’s coming), and seeing the full text of everything it’s written gives you better cohesion across chapters. A full novella is under 64k tokens, so you can easily fit it plus the outline in a context window.
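A sketch of that full-context version, for comparison (again, the names are just for illustration):

```python
def build_full_context(remaining_outline: str, chapters: list[str]) -> str:
    """Assemble context from everything written so far plus the
    'what's coming' slice of the outline."""
    written = "\n\n".join(chapters)
    return (
        f"Everything written so far:\n{written}\n\n"
        f"Outline of what's coming:\n{remaining_outline}\n\n"
        "Write the next chapter, staying consistent with everything above."
    )
```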

1 Like

Could I share your science fiction novel in my forum?

Of course. It’s open source. I’ll generate a v2 after I finish my tweaks to the generation algorithm. I think I’ve got the cohesion issues ironed out.

1 Like

o1-mini ChatGPT is definitely not the model for this.

I gave it a complete outline and instructions, only to get about 60% of the way to where a “continue” would kick in from max_tokens (though that’s not much different from other models’ need to wrap up). However, it just goes off the rails into AI cliché, and then, after the premature closure full of ambiguity and joyfulness, it appends a bizarre “about the author” that has absolutely nothing to do with the prompt.

ChatGPT’s model might have decided there was no need for reasoning: with only one step shown, it just turned the output over to the final AI.

A dump of the footnote of nonsense it tacked on:

…In the grand tapestry of the universe, our stories might seem like mere threads, but together, they weave a narrative of resilience, exploration, and the relentless pursuit of understanding. The Draco Tavern stood as a microcosm of that larger journey, a place where the extraordinary met the ordinary, and where, amidst the chaos and the calm, the true essence of our collective existence was revealed.

As I prepared to leave, the first light of Neo-Orion’s moons began to filter through the windows, casting a serene glow over the tavern. I took one last look around, committing the scene to memory—the friends who stood beside me, the remnants of chaos now subdued, and the silent promise that, come what may, the Draco Tavern would continue to be a haven for those seeking both solace and adventure in the endless expanse of space.

With that thought, I stepped out into the cool night, the echoes of the evening’s events lingering like the fading notes of a well-played melody. The Draco Tavern awaited my return, ready to welcome the next chapter in its storied existence.


Rick Jansen is a freelance reporter and chronicler of interstellar events, often found at the heart of Neo-Orion’s most intriguing locales. His tales capture the essence of life on the fringes of the galaxy, where every night holds the promise of the extraordinary.

For some reason, it reminds me of Mixtral 46B, where you get little imagination beyond what you input.

1 Like

Earlier this year, I found Goliath to be quite creative for fiction but it was “clunky” and slow…

I’ve not tried any models from the last few months, but I might be back to writing fiction soon. Someone should do a write-up on the creativity of all the models when it comes to fiction…

Sharing my progress on Echoes of Oblivion v2:

This is just the prologue and the first 3 chapters but I wanted to share the progress because it’s really good… I mean really good…

I fixed the cohesion issues, but beyond that I had it define its own writing style based on the great science fiction writers. I also had it generate the first few chapters, think about what was working and what wasn’t, rework its outline to include detailed plot points, and then we started over. What’s being generated now is night-and-day better than v1.
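The loop itself is simple. Roughly like this, where the prompts and model name are placeholders, not my exact pipeline:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="o1-preview",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

outline = open("outline_v1.md").read()  # the original outline

# 1. Have the model define its own writing style.
style = ask("Define a prose style for this novel, drawing on the great science fiction writers.")

# 2. Draft the opening chapters in that style.
draft = ask(f"Style guide:\n{style}\n\nOutline:\n{outline}\n\nWrite the prologue and first three chapters.")

# 3. Self-critique, then rework the outline with detailed plot points.
critique = ask(f"Draft:\n{draft}\n\nWhat's working and what isn't?")
outline = ask(f"Critique:\n{critique}\n\nRework the outline to include detailed plot points.")
# ...then start the generation over from chapter one with the new outline.
```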

Get beyond 20k tokens and o1 really starts to struggle. It starts rushing through plot points and essentially blurring chapters together. I’ve seen this with gpt-4o as well. It’s definitely improved with o1, but it’s still there.

I’m trying everything I can think of to get o1 to just slow down its thinking… That’s what it’s supposed to be good at, right? It’s just a difficult ask…

I spent several hours (and about $50) working on this last night, and o1’s reasoning just starts breaking down when the input context gets above 20k tokens. It’s subtle, but the model starts getting confused about facts and struggles to follow instructions. It’s probably task-specific, and it’s not like it doesn’t try to do what it’s asked. There’s just a clear loss of focus.

Every model I’ve tested has the same issue but it’s more disappointing with o1 because it’s just so damn good up to 20k. :frowning:
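If you want to know when you’re crossing that line, a quick token count before each call helps. A minimal sketch with tiktoken; “o200k_base” is my assumption for the o1 family’s encoding:

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # assumed encoding for the o1 family

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

context = open("context.txt").read()  # whatever you're about to send
if count_tokens(context) > 20_000:
    print("Warning: past the point where I start seeing quality degrade.")
```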

2 Likes