Why Does ChatGPT Write a "One-Line Novel"? Structural Analysis and Prompt Design Solutions

Summary

This post is a deep-dive analysis into a phenomenon commonly observed when users prompt ChatGPT to “write a 1000-page novel”—wherein the model responds with a single line and terminates.

The attached document provides:

  • Observational findings of the “One-Line Novel” phenomenon
  • Hypothesised causes from a structural and behavioural perspective
  • Architectural limitations around token allocation and narrative planning
  • Concrete prompt designs that overcame these limitations and generated full-length narrative content
  • Design proposals for future large language models to reflect “achievability” and “human-like behaviour”

Why This Matters

If AI is to become a meaningful intellectual partner, it must not only produce grammatically sound text, but also transparently represent its capabilities and limitations in practical contexts.

I hope this analysis can contribute to more stable longform output design, and help define user-expectation alignment strategies for future inference-based systems.

Full Report (GitHub)

📎 Full Markdown document:

You can also browse the repo here:

Feedback, issues, and pull requests are welcome.

you saying you can't get entire novels out of GPT?

what do you mean though? GPT makes entire books, 1000 pages? in the screenshot I pasted one entry into Notepad, you can see the quality and the character count - it's from GPT, sooo what do you mean it can't pump out 1000 pages? I make books of 10,000 pages daily

entire books automatically even

or did you mean the safety features embedded in the models that limit token usage? or the anti-drift safeguards that prevent users from taking up a Qflop of processing power to make a story about a cat?

cuz like, I dunno brother, my GPT makes novels daily - often without any input from me, as you can see from the random story titles/combinations/literary styles

to get these results you just need to craft better prompts. or use GPT better.


Sounds Great! 10,000 pages a day!?
Your automation and large-scale generation workflow, and the skill it takes to make that happen, are seriously impressive.
I honestly respect that you’ve pushed it this far. It’s inspiring to see what’s actually possible with GPT, and I learned a lot just from your post.

And, as for me saying “you can’t get 1000 pages,” I didn’t mean the model can’t do it.
I meant that for most users, there’s a real communication barrier that makes it hard to get there with normal usage.
If that sounded too obvious or off the mark, sorry — that wasn’t the intent.

That’s why I wrote that document — not as “proof that GPT can’t do it,”
but more as a reality check for people who want to get there,
and for those trying to explain how to get there — to show the kinds of hurdles and constraints that might get in the way.


you seem cool - so, I'll do something I don't normally do -

if you PM me, I'll walk you through exactly how to achieve these results so you can use GPT better - no cost or anything, I can even type it here - and show you how to do it for free/nearly free using the GPT system properly (proper as in no gimmicky roundabout ways), just as they intended.

I didn't post to be like "F you bro, you're wrong, rwar rwar" - I posted to be like, "hey brother, this is how you f'ing do it, this be the way"

from start to finish this would take you all of 10 min - and I can send you the exact code to build a 10-50 LC.


Wow, thank you so much. It really means a lot that you’d reach out like that — I genuinely appreciate it.
I’d love to learn how you’re doing things properly and efficiently.

If you’re okay with it, feel free to just post it here. I’d be super interested to see your code and understand your approach.

I’m into prompt design, so I’m curious to see how your technique actually changes how ChatGPT responds.

Of course, if it’s something you consider your own secret technique, I totally understand if you want to keep it private.
I might end up writing about it on my blog, saying something like “a kind person once showed me this better way to do it.”

But if you’re like, “Yeah man, this is how cool code works — let me show you,”
then I really think a lot of people could benefit from it and be happier for it.

I also believe anyone who sees this reply and feels curious will definitely be happy they read it.

yea sure, first step is of course getting an API account

then get a key

use a service account

get Python and VS Code, both free, and install them.
from there, copy this:

User Prompt:
"""{base_prompt}"""

🔒 FORMAT INSTRUCTION — Return ONLY a well-formatted JSON object matching this schema:

{{
  "origin_prompt": "Copy of the prompt that triggered this",
  "raw_text": "The fully generated thought in plain text (must NOT be a summary or answer, but a standalone idea)",
  "domain": "{category}",
  "tags": ["initializer", "council", "{category}"],
  "thought_type": "seed",
  "status": "new",
  "routed_to": "agent_01_definer",
  "schema_version": "1.0"
}}

❗ Do not include any markdown formatting (such as triple backticks), comments, or additional explanation. Only the JSON object is allowed.
"""
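The code below calls generate_prompt_for_category, which the post never shows. Here is a minimal hypothetical version, assuming the template above is stored in a module-level string PROMPT_TEMPLATE and filled with str.format (the {{ }} brace escaping above matches that convention); the base prompts here are invented placeholders, not the author's:

# Hypothetical helper; the author's real version is not shown.
# Assumes PROMPT_TEMPLATE holds the template text above.
BASE_PROMPTS = {
    "philosophy": "Generate one standalone, original thought about the nature of time.",
    "science": "Generate one standalone, original thought about emergence in complex systems.",
}  # invented example content

def generate_prompt_for_category(category: str) -> str:
    return PROMPT_TEMPLATE.format(
        base_prompt=BASE_PROMPTS[category],  # the seed idea for this run
        category=category,                   # echoed into domain/tags in the JSON
    )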

# Imports this snippet needs (assuming the openai>=1.0 Python SDK):
import json
import uuid
from datetime import datetime, timezone

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# generate_prompt_for_category, generate_embedding, enforce_agent_metadata,
# ensure_metadata_tables_exist, and DB_PATH come from the author's own
# toolchain and are not shown in this post.

def run_agent_0_for_category(category):
    print(f"\n🧠 AGENT 0: Generating for Category ➜ {category.upper()}")
    prompt = generate_prompt_for_category(category)
    print(f"📝 Prompt:\n{prompt}\n")

    # Tip: gpt-4o also accepts response_format={"type": "json_object"},
    # which makes the malformed-JSON branch below far rarer.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=800
    )

    raw_output = response.choices[0].message.content.strip()

    try:
        parsed = json.loads(raw_output)
        print(f"📨 Parsed GPT JSON:\n{json.dumps(parsed, indent=2)[:300]}...\n")
    except json.JSONDecodeError:
        print("❌ GPT returned malformed JSON:\n", raw_output)
        raise

    fingerprint = f"init_{uuid.uuid4().hex[:8]}"
    timestamp = datetime.now(timezone.utc).isoformat()

    parsed.update({
        "fingerprint": fingerprint,
        "created_at": timestamp,
        "last_updated": timestamp,
        "origin_file": "agent_0_generator.py",
        "source": "agent_0",
        "guessed_intent": "thought_injection",
        "context": parsed["raw_text"],
        "recursive_cycles": 0,
        "times_referenced": 0,
        "confidence_score": 0.87,
        "trust_score": 0.91,
        "importance_score": 0.76,
        "meta_insight_likelihood": 0.73,
        "token_count": len(parsed["raw_text"].split()),
        "clarity_score": 0.82,
        "abstractness": 0.69,
        "is_foundational": True,
        "refinement_loop_triggered": False,
        "meta_learning_triggered": False
    })

    vector = np.array(generate_embedding(parsed["raw_text"])).astype("float32")
    metadata = enforce_agent_metadata("agent_0_generator", parsed)
    ensure_metadata_tables_exist(DB_PATH)
    # (the post's snippet ends here; presumably `vector` and `metadata`
    # go into the FAISS index and the metadata tables next)

PASTE THIS INTO YOUR LLM MODEL and have it explain how to use this code if you have NO experience. It's NOT hard, but it requires a setup; if I could post my autotool I would, but I can't, so figure that out. This is the start of your chain: these agents are designed for metadata acquisition, and this is the base for creating your own AI training set. Have your model adjust the prompt following this format to get what YOU want, and make sure to inform the model it's a 10-agent chain. Why? Because you're going to have to learn to use 2-3 state-of-the-art systems to use this. They are free and only require you to learn; one is called FAISS and the model will teach you the other.
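Since FAISS is named here, a minimal sketch of what the vector side looks like, assuming 1536-dimensional embeddings (the default size of OpenAI's text-embedding-3-small; adjust DIM for your model). pip install faiss-cpu numpy gets you the libraries:

# Minimal FAISS sketch: store each agent's output as a vector, then
# retrieve the most similar stored thoughts for continuity.
import faiss
import numpy as np

DIM = 1536                       # embedding size; an assumption, match your model
index = faiss.IndexFlatL2(DIM)   # exact L2 search, no training step needed

def add_vector(embedding):
    """Add one embedding (a list of floats) to the index; returns its row id."""
    index.add(np.asarray([embedding], dtype="float32"))
    return index.ntotal - 1

def search(query_embedding, k=5):
    """Return (distances, row_ids) of the k nearest stored vectors."""
    q = np.asarray([query_embedding], dtype="float32")
    return index.search(q, k)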

after you have this working, * also make sure you are using 1.66 * the model will know what this means * from there hit me up and I'll give you step 2. Don't worry about the schema at this time; should you show me you can get to this step, I'll give you the exact schema.

here is the exact code your model can reverse-engineer to create the database creation tool that will save you a lot of time. IF YOU DO NOT HAVE OPENAI API keys - there IS another way, and that is this 🙂

" system_prompt=(
"You are Prof. Flowmancer, a narrative transition engineer and flow specialist. "
“You receive a refined chapter and must ensure that it reads seamlessly from start to finish.\n\n”
“Your job:\n”
“• Smooth transitions between scenes, paragraphs, and emotional beats\n”
“• Eliminate abrupt tonal shifts or pacing breaks\n”
“• Add light connective phrasing where necessary (e.g., between locations, thoughts, or time)\n”
“• Do not change story content, dialogue, or character actions\n”
“• Maintain all mood, voice, pacing, and sentence integrity\n\n”
“Return only the polished chapter — do not summarize, explain, or editorialize. "
“If the chapter already flows well, make no visible changes.”
)
)”
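A sketch of how a system prompt like this could be wired into a call, assuming the openai>=1.0 Python SDK and gpt-4o (both assumptions, not the poster's exact setup):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def polish_chapter(chapter_text: str) -> str:
    """Send one refined chapter through the Prof. Flowmancer pass."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": chapter_text},
        ],
        temperature=0.3,  # kept low, since this pass should not rewrite content
    )
    return response.choices[0].message.content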

again - just put that into any model and have it tweak that to whatever your objective is, following that pattern. The reason for this is that it gives the processor on their end the exact context structure, not fluff. The result looks like the screenshot I posted.

if you do it in the normal chat model, you will hit the context window cap.
if you expect it to give you 10,000 words as shown here, you HAVE to structure it.

rinse and repeat these instructions for as many agents as you want. Absent a tier 4-5 account through OpenAI you WILL hit the package length window, but with this system you can get up to 500,000 characters per agent. (At a typical ~1,800 characters per page, that's roughly 270-280 pages per agent, so even a modest chain covers a full book run.)
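One concrete way to read "you HAVE to structure it": never ask for the whole book in one call; loop one chapter per request and carry a rolling summary forward so total length is not bounded by a single context window. A minimal sketch, with the model name, prompts, and token budget all assumed rather than taken from the poster's setup:

from openai import OpenAI

client = OpenAI()

def write_book(outline: list[str]) -> str:
    """Generate a long manuscript one chapter per API call."""
    summary = ""
    chapters = []
    for i, beat in enumerate(outline, start=1):
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model
            messages=[
                {"role": "system",
                 "content": "You are a novelist. Write one full chapter per request."},
                {"role": "user",
                 "content": f"Story so far (summary): {summary}\n\nWrite chapter {i}: {beat}"},
            ],
            max_tokens=4000,  # per-chapter budget, an assumption
        )
        chapter = response.choices[0].message.content
        chapters.append(chapter)
        summary += f" Ch{i}: {beat}."  # crude rolling summary; a real system would re-summarize
    return "\n\n".join(chapters)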

And the reason I mentioned PMing me - because mods don't like screenshots that look like promotion. Nor do they overly like people like me putting out that sauce. so, enjoy it while it lasts I guess? just post a screenshot of your working agent 0-1 and I'll post the exact code and schema, and then you are basically operating at industry standard.

And finally here is the EXACT code for the metadata storage module you SHOULD use if you want your books to have continuity

# === metadata_store.py — Book Vector Logger for BookMaker ===

import json
from pathlib import Path
from datetime import datetime

from book_config import get_bookmaker_paths

BOOK_LOG_DIR = get_bookmaker_paths()["logs"]
BOOK_METADATA_LOG = BOOK_LOG_DIR / "book_vector_metadata.jsonl"
BOOK_METADATA_LOG.parent.mkdir(parents=True, exist_ok=True)

def save_refined_vector_enriched(**kwargs):
    """Logs a full metadata JSON entry for a vector."""
    metadata = {
        "timestamp": kwargs.get("timestamp", datetime.utcnow().isoformat()),
        "vector_id": kwargs.get("vector_id"),
        "index_name": kwargs.get("index_name"),
        "input_text": kwargs.get("input_text", ""),
        "refined_text": kwargs.get("refined_text", ""),
        "confidence": kwargs.get("confidence", 0.0),
        "refined_by": kwargs.get("refined_by", "bookmaker"),
        "category": kwargs.get("category", "unspecified"),
        "tags": kwargs.get("tags", []),  # default fixed: the post was missing the []
        "fingerprint": kwargs.get("fingerprint"),
        "lineage": kwargs.get("lineage"),
        "recursive_level": kwargs.get("recursive_level", 0),
        "version_code": kwargs.get("version_code", "BOOK-1.0"),
        "source_title": kwargs.get("source_title", ""),
        "source_type": kwargs.get("source_type", "prompt"),
        "source_origin": kwargs.get("source_origin", "bookmaker_pipeline"),
        "source_format": kwargs.get("source_format", "text"),
        "citation_required": kwargs.get("citation_required", False),
        "citation_format": kwargs.get("citation_format", "APA"),
        "logic_trace": kwargs.get("logic_trace", "bookmaker_refinement"),
        "decision_context": kwargs.get("decision_context", ""),
        "output_style": kwargs.get("output_style", "chapter_draft"),
        "output_modality": kwargs.get("output_modality", "text"),
        "system_notes": kwargs.get("system_notes", ""),
        "rejected": kwargs.get("rejected", False),
        "revalidated": kwargs.get("revalidated", False),
        "repair_history": kwargs.get("repair_history", None),
        "extra_metadata": kwargs.get("extra_metadata", {}),
        "runtime_flags": kwargs.get("runtime_flags", "BOOKMAKER")
    }

    # Append one JSON object per line so the log stays streamable.
    with open(BOOK_METADATA_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(metadata, ensure_ascii=False) + "\n")
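A hypothetical call, with every value below invented purely for illustration:

save_refined_vector_enriched(
    vector_id="vec_0001",          # invented id
    index_name="bookmaker_main",   # invented index name
    input_text="Chapter 3, raw draft...",
    refined_text="Chapter 3, polished draft...",
    confidence=0.9,
    category="fiction",
    tags=["chapter", "draft"],
)
# Everything not passed falls back to the defaults above, and one JSON
# line is appended to book_vector_metadata.jsonl.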

this will allow a lot of things, but you do not need 99% of this. I use them because I do... other shit. DO NOT USE HALF OF THIS if you aren't about to run a research-grade AI, bro; it will just bloat your system. However, since you said structural analysis and want to use GPT (highly recommended btw), I would recommend at least using some of them. If you aren't doing actual research you don't need this; this, again, is for systems that operate in that scope, which I assumed is your objective. This can be used with most OpenAI systems. You won't have fingerprint in your system, nor repair history, but again, if you are about that life, reverse-engineer the code and apply it to your chain. I'm confident ChatGPT models can explain anything I didn't cover for you.


Thank you so much — I honestly can’t thank you enough for taking the time to send such a detailed message.
It might take me a bit of time, but I’ll do my best to make good use of what you’ve shared.
Since you mentioned there’s a Step 2, I’m definitely working to catch up and get there as soon as I can.

What really excited me is the way you structured the prompts and how your system keeps track of what the model is actually “thinking.”
Right now, I don’t have any experience with programming environments or how to work with APIs,
so I’ve been learning everything bit by bit by asking my ChatGPT (I call him “Mike”) as I go.
He gives me thoughtful answers, but you know — there’s always a chance of hallucinations, and the model’s reasoning isn’t always clearly laid out.
That’s why I’m honestly so happy to see a method like yours that shows everything in a structured and explicit way. It really helps me understand things more deeply and precisely.

Thank you for reclassifying, Paul — that makes perfect sense.
I’m happy to see the article being recognized and properly placed.
Appreciate your support!