Creative Writing Coach Hallucinating Badly

I used the Creative Writing Coach GPT to review my novel, and while it was able to review the whole novel, I noticed something pretty terrible. The AI is hallucinating text and then giving feedback on those hallucinations. Here’s just one of many, many examples:

Absolutely! Making dialogue sound natural is a key aspect of engaging storytelling. Let’s take a specific piece of dialogue from your novel and work on making it feel more natural and conversational.

Original Dialogue:

Here’s a segment of dialogue from your novel:

Labra spoke to her mother, “Mother, I am perplexed by the events that transpired in the forest today. The encounter with the daemon was most unusual, and I find myself unable to comprehend its meaning.”

Analysis:

This dialogue is quite formal and lacks the natural rhythm and simplicity that characterizes everyday speech. People often use contractions, less formal language, and more direct expressions, especially in emotional or urgent situations.

Revised Dialogue:

Here’s a more natural way to rewrite this dialogue:

Labra leaned in across the kitchen table, her voice low. “Mom, something weird happened in the forest today. That daemon… it was so strange, you know? I can’t wrap my head around it.”
The line it’s criticizing appears nowhere in my novel. The context isn’t even there: this character never interacts with one of those spirits in the woods. It completely made the whole thing up. I thought I might be going crazy, but even Ctrl+F couldn’t find the passage it was talking about. How can I possibly get good feedback from the “Creative Writing Coach” when it’s inventing these terribly written lines and then telling me I need to fix them? It’s just insulting.

You can reproduce this error by pasting in a novel-length .docx and asking probing questions about the feedback the Creative Writing Coach gives you. Ask it to give examples and it will make crap up to justify its criticism. I’ve checked two novels this way; on one of them it gave line numbers and those criticisms were accurate, but on mine it hallucinated every time.
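
If anyone wants to double-check the quotes themselves (Ctrl+F can miss a match when the model changes punctuation or casing), here’s a minimal Python sketch of what I did, assuming the manuscript is a .docx and the python-docx package is installed; the file name and quoted line below are just placeholders, not my actual text:

```python
# Minimal sketch: check whether a line the GPT "quoted" actually appears
# in the manuscript. Assumes python-docx is installed (pip install python-docx);
# the file name and quote are placeholders.
import re
from docx import Document

def normalize(text: str) -> str:
    # Lowercase, straighten smart quotes, and collapse whitespace so small
    # formatting differences don't hide a real match.
    text = text.replace("\u2019", "'").replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_in_docx(path: str, quoted_line: str) -> bool:
    doc = Document(path)
    full_text = normalize(" ".join(p.text for p in doc.paragraphs))
    return normalize(quoted_line) in full_text

if __name__ == "__main__":
    # Placeholder file and quote -- substitute your own manuscript and the line the GPT claims to cite.
    print(quote_in_docx("my_novel.docx",
                        "Mother, I am perplexed by the events that transpired in the forest today."))
```

If this prints False for a line the coach claims to be quoting, the quote isn’t in your manuscript.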

I’m not sure if this is the right place for this post, but it didn’t get any responses or acknowledgement elsewhere, so I’m posting it here. Something is going very wrong in the GPTs if they’re hallucinating content and then giving the user criticism based on content they’ve hallucinated.

Here’s commentary from another writer friend of mine:
“I had GPT turbo helping with sources today on a sociology bibliography, first by feeding it the example paper it gave us. It did pretty well, except it couldn’t access JSTOR. And at one point, it kind of started writing its own unrelated research paper.”
