Seeking advice on improving narrative continuity with custom GPT

Hello everyone,

I’m reaching out to this community for some advice regarding an issue I’ve encountered with my custom GPT, designed for a unique storytelling project. In this project, I write narrative fragments from the perspectives of various characters, and the GPT responds from the perspective of a single character. The setup includes general instructions for the GPT, character traits, basic plot rules (like the season, time, and setting), and language and style guidelines. In the Knowledge section, I’ve uploaded three PDFs containing descriptions of other characters, settings, and a summary of what has been created in previous conversations to give the GPT some form of “memory.”

Lately, I’ve noticed a challenge with maintaining narrative continuity. For example, after describing a farewell scene between characters in the evening, the next scene involves characters meeting at work in the morning. Despite this, the GPT sometimes continues as if it’s still in the previous scene, disrupting the flow of the story. I find myself having to constantly remind the GPT of the current scene and what it should focus on, which is quite disruptive as I aim to guide the overall narrative.

This issue wasn’t present initially, and even with ChatGPT 3.5, the continuity was more manageable despite its more limited memory capabilities.

My question is: How can I improve the situation so that the GPT can more smoothly continue the story without needing constant reminders of the context? I’d like it to follow the narrative flow naturally, based on the ongoing story, without getting lost in the transition of scenes.

Thank you in advance for any suggestions or advice you can offer. I’m looking forward to making my storytelling project as cohesive and engaging as possible with your help.


One really fun technique I’ve used when I’ve run into that issue is creating a logging system for all of the interactions your assistant goes through. Each new run then carries the previous run’s input and output forward in natural language, which makes for a better long-running narrative. And if you use a framework like guidance.js to lay out a structured flow of how you’d like the story to proceed, it can help keep the output in a consistent format and help the model keep its context.
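To make the logging idea concrete, here is a minimal sketch of what such a carry-forward loop could look like through the plain API rather than guidance.js. Everything specific in it is an assumption for illustration: the model name, the log file name, and the recap prompt are placeholders, not part of any confirmed setup.

```python
# Hypothetical sketch: log each exchange, then prepend a recap of recent
# exchanges to the next request so the story keeps its context across runs.
import json
from openai import OpenAI

client = OpenAI()              # reads OPENAI_API_KEY from the environment
LOG_FILE = "story_log.jsonl"   # placeholder file name

def load_recap(max_entries: int = 5) -> str:
    """Rebuild a short recap from the most recent logged exchanges."""
    try:
        with open(LOG_FILE, encoding="utf-8") as f:
            entries = [json.loads(line) for line in f]
    except FileNotFoundError:
        return "The story has not started yet."
    recent = entries[-max_entries:]
    return "\n".join(
        f"You wrote: {e['user']}\nCharacter replied: {e['assistant']}" for e in recent
    )

def next_fragment(user_fragment: str) -> str:
    """Send the new fragment plus a recap of previous runs, then log the result."""
    response = client.chat.completions.create(
        model="gpt-4o",        # placeholder model name
        messages=[
            {"role": "system",
             "content": "You reply in character. Recap of the story so far:\n" + load_recap()},
            {"role": "user", "content": user_fragment},
        ],
    )
    reply = response.choices[0].message.content
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps({"user": user_fragment, "assistant": reply}) + "\n")
    return reply

print(next_fragment("The next morning, the characters meet again at work."))
```

The point is less the specific code than the shape: every turn writes to a log, and every new turn re-reads that log, so scene changes arrive with an explicit recap instead of relying on the model’s own memory.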

Hello,

Thank you very much for your swift response and for sharing your insights. I must admit, the technical aspects of your suggestion are a bit beyond my expertise. I’m not a tech-savvy person; my interactions with my Custom GPT are primarily through inputting instructions directly, and I do not use the API, which I believe your solution might require.

In my instructions to the GPT, I’ve specified that it should continue the narrative based on the ongoing story. However, I lack the tools or knowledge to implement a system where the GPT could remember previous sessions or interactions, as you’ve described.

Is there a more straightforward way to address the issue of narrative continuity without diving deep into technical solutions? Perhaps something that can be managed directly through the instructions I provide to the GPT or any other non-technical means?

I truly appreciate your help and would greatly value any further advice you could offer that suits my limited technical capabilities.

Thank you once again!

If you’ve been granted access to the new @ command for calling GPTs, I’m pretty sure you could use it to have the first GPT output the story so far to a second GPT, and have that receiving GPT write the start of a continuation. You could then feed that back into the first GPT through the second one in a loop, keeping the context going by essentially pre-prompting it between each generation, agent-style, and when the run is finished you’d just compile the second GPT’s outputs into the story. Without using the API and additional resources, it can be difficult in a situation like yours. Can you tell me a little bit more about the project? Maybe I could help you with the API implementation. Sorry for the improper formatting, misspellings, and grammar; this was dictated with speech-to-text.
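Since the API route keeps coming up, here is one rough way the relay loop could be approximated with the API instead of the @ command: one call continues the story, while a second, differently prompted call compresses it into a recap before each turn. The model name, prompts, and scene list are illustrative assumptions, not a confirmed workflow.

```python
# Hypothetical two-role relay: a "recap" call summarizes the story so far,
# and a "writer" call continues it from that recap plus the next scene cue.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def ask(system_prompt: str, user_content: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

story_so_far = ""
scene_cues = [
    "Evening: the characters say farewell outside.",
    "The next morning: the characters meet again at work.",
]

for cue in scene_cues:
    # Second GPT's role: compress everything generated so far into a short recap.
    recap = ask("Summarize the story so far in a few sentences.",
                story_so_far or "(nothing yet)")
    # First GPT's role: continue the story from the recap plus the new scene cue.
    fragment = ask(
        "You write the next scene of an ongoing story. Stay consistent with the "
        "recap and with the new scene's time and place.",
        f"Recap: {recap}\n\nNew scene to write: {cue}",
    )
    story_so_far += "\n\n" + fragment

print(story_so_far.strip())
```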

Thank you so much for your detailed suggestions and your willingness to assist further. I must admit, the technical aspects of using commands to call between GPTs or integrating APIs are quite beyond my current understanding and capabilities. I only have one GPT for my project, and I use the ChatGPT version available through a web browser interface, not engaging with API or similar technologies.

I’ve noticed the issue of narrative discontinuity becoming more pronounced in recent weeks, and it seems that I’m not alone, as several users on this forum have shared similar concerns. There’s speculation that OpenAI might have reduced computational power, which could be contributing to my GPT’s difficulties in maintaining narrative continuity. Furthermore, the problem seems to escalate with longer conversations. I also suspect that the files in the Knowledge section might be impacting response generation, even though these three PDFs are only a few pages each.

I wonder if converting these Knowledge section files from PDF to DOC format might lighten the load, although I’m unsure if that would significantly affect performance. It’s possible that my instructions might not be precise enough, but it’s curious because, for a long time, everything worked smoothly, even when my instructions were less detailed than they are now.

Could the issue indeed be related to the format of the Knowledge files, or might it be something else? I’m searching for a solution that doesn’t involve deep technical adjustments, given my limited tech background.

Thank you once again for your patience and for offering your help. I really appreciate it.

By the way, please don’t worry about any spelling or grammatical errors in your messages. To be completely honest, if it weren’t for ChatGPT, I wouldn’t be able to write my posts in English at all, since it’s not my native language. So, in a way, we’re both leaning on a bit of tech support to communicate. Isn’t technology wonderful? :wink:

I’m struggling with some similar issues, and it can take a while to home in on them. I don’t want to manage this via the API, even though I probably have the skills to do it; I just don’t want to spend the time, because the work ends up tied to a specific platform, or I have to spend more on self-hosted solutions, which adds complexity.

Here are some things I’ve done to attempt to REDUCE the problem.

Instruct it to always give a version number to each response. Partial success: I can sometimes refer back to an outline version if it’s not too far back.

Tell it to always repeat/restate any corrections and clarifications when revising, AND why they matter AND how they change things. For iterative improvement on scenes, this is really important to combat the AI slipping back into details I rejected. It’s not foolproof, but it works very well.

Here’s something I’ve been working with that seems promising so far: “show the main and sub plots in a mermaidjs diagram.” This gives it motivation to be concise without saying so, while also creating separate, adjustable flows for subplots. It seems to want all subplots to start at chapter 1, but I think I could tell it to fill in diagram blocks with “not started yet” or something.

Things I want to experiment with:
“At the start of each chapter/section, ask the user to paste the outline again”

“Restate the remaining outline of this and the next chapter after each response”

“After each response, show how it might not follow the outline”

“State the scene, location, and time elapsed since the last scene, with one sentence describing where each character went between the last scene and this one” - might force it to keep scene separation in context. But it could backfire.


Just wanted to share a couple of things that I found during story writing.

  • creating the settings, story outline, and story-chapter expansion helps significantly in creating and maintaining coherence in every chapter
  • ignoring other chapters when the focus is on specific edits in one chapter helps
  • deleting previous edits of a chapter also helps

The techniques are better illustrated in a video here (https://youtu.be/MAn3_eGuyfM)

After six months, I can say that I’ve solved the problem like this: in the instructions for my custom GPT, I wrote:

“Your responses should be based on the current context and develop the story. Always focus on describing new scenes so the plot moves forward.”

The key is not to give the GPT negative instructions such as “Don’t repeat previous scenes”; AI doesn’t respond well to them. If you really need to add one, it’s better to say “avoid”, but I still recommend focusing on what the GPT should do rather than what it shouldn’t.
