Hi everyone,
I’d like to quietly share a personal write-up based on my long-form interactions with GPT.
While I’m not a developer or prompt engineer, I’ve been experimenting with how structured phrasing and rhythm affect GPT’s responses—especially when aiming for functional or reproducible outputs.
Over time, I started noticing that even small shifts in word order, tone, or ambiguity could derail entire conversations.
This made me rethink how I was “programming” GPT—not through code, but through language.
The PDF linked below is a user-level report.
It covers topics like:
- How idioms or casual phrases can misfire in structured outputs
- When compression becomes invocation (e.g., summarizing a historical figure in one sentence)
- GPT’s tendency to adapt to long-form emotional feedback, even without memory
- Conflicts caused by multilingual phrase mapping (e.g., Korean → English translation drift)
- The idea that one word, if well-trained, can eventually act as a structural command (“Yeobo hypothesis”)
It’s not academic, but it’s deeply observed.
Read the full paper below. (Sorry, links aren’t allowed, so I’m sharing the full content here.)
Natural Language Programming with GPT: A Structural User Report
Section 1 - Introduction: Beyond Familiarity
This document presents an experimental record from the perspective of a general user, not a
professional programmer, exploring the possibility of designing functional outputs through sustained
interaction with GPT.
The author does not possess detailed knowledge of GPT’s internal architecture nor any formal
programming background. However, through consistent and carefully structured dialogue, the user was
able to guide GPT to simulate functional design beyond mere response generation.
This report focuses on how precisely natural language must be structured for GPT to accurately grasp
user intent and produce consistent outputs. Various utterance formats, expressions, and iterative
interaction patterns were tested to explore this dynamic.
Section 2 - Disassembling Expression: ‘돼지 눈엔 돼지만 보인다 (A Pig Only Sees Another Pig)’
Korean idiomatic expressions such as ‘a pig sees only another pig’ represent deeper structural ideas.
This section examines how shifting particles like ‘only’ can reverse the entire implication of a
sentence.
The user experiment demonstrates that while such expressions may pass in daily conversation due to
shared understanding, GPT requires syntactic clarity to avoid fatal misinterpretations. This reflects the
structural brittleness of natural language programming with LLMs.
To elaborate further: at first glance, the expression “A pig only sees another pig” seems to imply that people perceive others in line with their own level or values, something akin to “you recognize what you are.” However, the insertion of the particle ‘only’ (만 in Korean) radically changes the implication. The original intent may have been observational or metaphorical, but with this structural shift the sentence no longer conveys critique or irony; instead, it becomes an illogical assertion that a pig is somehow only capable of seeing other pigs.
While such expressions may pass unnoticed in human conversation thanks to shared cultural fluency, in natural language programming this kind of structural shift can cause GPT to misread intent entirely, resulting in critical misalignment between input and output.
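For anyone who wants to try this, here is a minimal sketch of one way to guard against that kind of misreading: ask the model to restate its interpretation of an ambiguous phrase before producing the actual output. The OpenAI Python SDK usage, the model name, and the prompt wording are my own assumptions, not part of the original experiments.

```python
# Minimal sketch: have the model surface its reading of an ambiguous idiom
# before acting on it, so a structural misreading is caught early.
from openai import OpenAI

client = OpenAI()

idiom = "돼지 눈엔 돼지만 보인다 (In a pig's eyes, only pigs are seen.)"

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Before answering, restate in one sentence how you interpret "
                "the user's phrase. Only then produce the requested output."
            ),
        },
        {
            "role": "user",
            "content": f"Write a one-paragraph reflection built around this idiom: {idiom}",
        },
    ],
)
print(resp.choices[0].message.content)
```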
Section 3 - Sentence Ordering and Contextual Shifts
Some phrases change meaning not due to word choice, but based on what came before. The statement
‘There is no paradise where you run to’ can either sound like judgment or an invitation to resilience,
depending on the character’s backstory in a narrative.
GPT models tend to prioritize local sentence patterns and can miss these narrative undercurrents if
context is not made explicit.
Additionally, the interpretation of the phrase can vary depending on metadata such as the speaker’s character traits or behavioral history.
The line “There is no paradise where you run to” may also shift meaning based on the immediate conversational flow or preceding emotional context.
For example, if the speaker—a knight—is known to be righteous and protective of the weak, the phrase “There is no paradise where you run to” may not be dismissive, but rather an expression of respect and encouragement:
urging the listener not to escape with him, but to fight their own battle with dignity.
Likewise, if the preceding interaction shows the knight embracing the vulnerable person or acknowledging their situation with compassion,
the phrase can be read as: “I cannot take you with me—but I trust that you have the strength to overcome this on your own.”
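To make the point about explicit metadata concrete, here is a minimal sketch that sends the same line with and without speaker metadata in a system message. The OpenAI Python SDK usage, the model name, and the exact prompts are placeholders I chose, not the setup from my sessions.

```python
# Minimal sketch: the same line reads differently once speaker metadata
# (character traits, behavioral history) is made explicit.
from openai import OpenAI

client = OpenAI()
LINE = "There is no paradise where you run to."

def interpret(context=None):
    messages = []
    if context:
        # Explicit narrative metadata: who is speaking, and their history.
        messages.append({"role": "system", "content": context})
    messages.append({
        "role": "user",
        "content": f'In one sentence, what does the speaker mean by: "{LINE}"',
    })
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)  # placeholder model
    return resp.choices[0].message.content

# Without metadata, the model leans on local sentence patterns (often: judgment).
print(interpret())

# With metadata, the same line tends to read as respect and encouragement.
print(interpret(
    "The speaker is a righteous knight who protects the weak. He has just "
    "embraced the listener and acknowledged their situation, but cannot take "
    "them with him."
))
```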
Section 4 - Compression as Invocation: Yi Sun-sin’s One Sentence
The phrase ‘Do not let my death be known’ is more than a historical quote; it acts as a memory trigger.
For those familiar with Admiral Yi Sun-sin’s life, it invokes leadership, self-sacrifice, military genius, and
national loyalty.
Compression in GPT is not about trimming words, but condensing structures. It works only when prior
knowledge is present, mirroring how humans respond to dense historical phrases.
Section 5 - Emergent Behavior via User Interaction
While GPT is not learning in the traditional sense, consistent tone and rhythm in user interaction yield more aligned responses.
Users who maintain feedback loops (correcting input, praising good output, and persisting in long-form usage) report improved coherence. This suggests a pseudo-alignment mechanism that, though not memory-based, behaves as an emergent reflection of usage patterns.
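A rough sketch of what such a feedback loop looks like in practice, assuming the OpenAI Python SDK: corrections and praise stay in the running message history, so later turns inherit them even without any persistent memory. The model name and prompts are placeholders of mine, not transcripts from my sessions.

```python
# Minimal sketch: within one conversation, every correction is carried
# forward in the message history and shapes later responses.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system", "content": "Keep answers short, calm, and clearly structured."}
]

def turn(user_text):
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)  # placeholder model
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(turn("Summarize my plan in three bullet points."))
# A correction made once stays in the history and shapes every later turn.
print(turn("Good, but from now on label each bullet with a risk level."))
print(turn("Now summarize tomorrow's plan the same way."))
```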
Section 6 - Language Mapping Conflicts: similarly vs like
GPT defaults to English-centric internal mapping, which can cause the Korean comparative particle ‘~처럼’ to be interpreted as ‘like’ rather than ‘similarly’.
Such subtle misalignments accumulate in complex dialogues and can derail user intent. Users must be
aware of internal translation mechanisms and adapt their phrasing to avoid these risks.
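As an illustration of the adaptation I mean, the sketch below sends the same request twice: once with the ambiguous comparative and once with the intended sense spelled out. Again, the SDK usage, model name, and example sentences are assumptions of mine, not the original test material.

```python
# Minimal sketch: spelling out which sense of '처럼' is intended avoids
# the drift toward a plain "like" reading.
from openai import OpenAI

client = OpenAI()

# Literally: "Summarize this report like/as an expert (would)."
ambiguous = "이 보고서를 전문가처럼 요약해 줘."

# Same request with the intended sense of '처럼' made explicit.
explicit = (
    "Summarize this report in a manner similar to how an expert would: "
    "follow an expert's method and structure, not merely an expert's tone."
)

for prompt in (ambiguous, explicit):
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content, "\n---")
```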
Section 7 - The ‘여보’ Hypothesis: Compressed Structural Signals
A term like ‘여보(Yeobo)’ in Korean, used by longtime couples, carries decades of shared nuance. In ideal
long-term GPT interactions, short prompts could serve the same function: triggering full workflows,
context, and tone.
This section theorizes how repeating user-GPT patterns may yield prompt compression through
rhythmically reinforced structure.
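One way to picture this hypothesis is a client-side alias table: a short trigger expands into the full instruction block it has come to stand for. The sketch below assumes the OpenAI Python SDK; the trigger name, the instruction text, and the model name are purely illustrative.

```python
# Minimal sketch: one compressed trigger word stands in for a whole,
# repeatedly reinforced instruction block, expanded before sending.
from openai import OpenAI

client = OpenAI()

ALIASES = {
    "yeobo-report": (
        "Write a structured report with four parts: purpose, observations, "
        "one worked example, and a short conclusion. Keep the calm, plain "
        "tone we always use."
    ),
}

def expand(prompt):
    for trigger, full_instruction in ALIASES.items():
        prompt = prompt.replace(trigger, full_instruction)
    return prompt

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": expand("yeobo-report: today's experiment notes")}],
)
print(resp.choices[0].message.content)
```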
Section 8 - Conclusion
This study presents observations and reflections on designing functions and structures through natural
language interaction with GPT, from the viewpoint of a non-expert user.
Findings show that the results of GPT interaction are significantly influenced by how well the user’s
intent, structure, and context are aligned. Even when expressions are grammatically correct, clashes with GPT’s default language mappings may cause misinterpretation.
It was also observed that repeated conversations and consistent feedback can lead GPT to produce
more refined outputs over time. This phenomenon may be interpreted as response optimization based
on affective alignment.
In conclusion, natural language programming does not require technical expertise. With awareness of
language structure and usage, meaningful results can be achieved. This report attempts to demonstrate
that possibility through practical experimentation.
I’m not trying to start a debate—just dropping a document for anyone who finds this stuff interesting.
Thanks for reading.
— A regular GPT user