@include-style Prompt Presets & Memory Modularity for Better UX

Hi OpenAI team and community,

I’d like to share a feature idea that could significantly improve ChatGPT’s usability, transparency, and efficiency—with very low implementation cost.

Proposal: @include("style") Prompt Snippets

Allow users to insert prompt presets like @include("concise") to guide behavior clearly and consistently.

Examples:

  • @include("concise") → short answers, avoid deep dives
  • @include("polite") → polite tone in Japanese
  • @include("no_deep_dive") → prevent unsolicited long responses

These would map to system prompts behind the scenes. This approach is minimal, transparent, and easy to maintain.
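To make the mapping concrete, here is a minimal sketch of how a client could expand these directives before sending a request. The preset names and their wording are illustrative assumptions, not an actual OpenAI API:

```python
import re

# Hypothetical presets; in practice these would be user- or platform-defined.
PRESETS = {
    "concise": "Keep answers short; avoid unsolicited deep dives.",
    "polite": "Respond in polite Japanese (keigo).",
    "no_deep_dive": "Do not expand beyond what was explicitly asked.",
}

INCLUDE_RE = re.compile(r'@include\("([^"]+)"\)')

def expand_prompt(user_prompt: str) -> tuple[str, str]:
    """Strip @include("...") directives from the user prompt and
    return (system_prompt, cleaned_user_prompt)."""
    names = INCLUDE_RE.findall(user_prompt)
    system_parts = [PRESETS[n] for n in names if n in PRESETS]
    cleaned = INCLUDE_RE.sub("", user_prompt).strip()
    return "\n".join(system_parts), cleaned

sys_prompt, prompt = expand_prompt('@include("concise") Explain monads.')
# sys_prompt -> "Keep answers short; avoid unsolicited deep dives."
# prompt     -> "Explain monads."
```

Because the expansion is a pure text transformation, it could live entirely in the UI layer, with no model changes needed.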

Extend to Memory: Modular and Transparent

Memory is powerful but often feels opaque. Why not allow users to explicitly include memory modules?

Example:
@include("my_profile")
@include("preferred_tone")

Benefits:

  • Greater trust and user control
  • Easy to enable/disable
  • Prevents undesired recall
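The opt-in behavior could be sketched like this. The module names and contents are invented for illustration; the point is that nothing stored is ever injected into context unless the user explicitly includes it:

```python
# Hypothetical store of named memory modules. Modules not listed in an
# @include(...) are never added to the model's context.
MEMORY_MODULES = {
    "my_profile": "User is a Japanese-speaking developer.",
    "preferred_tone": "Prefers a casual, friendly tone.",
    "old_project_notes": "Notes kept in storage, recalled only on request.",
}

def build_context(included: list[str]) -> str:
    """Return only the memory modules the user explicitly included.
    Unknown names fail loudly instead of silently recalling nothing."""
    missing = [n for n in included if n not in MEMORY_MODULES]
    if missing:
        raise KeyError(f"Unknown memory modules: {missing}")
    return "\n".join(MEMORY_MODULES[n] for n in included)

context = build_context(["my_profile", "preferred_tone"])
```

Failing on unknown names (rather than ignoring them) keeps the contract transparent: the user always knows exactly which memory was, or was not, sent.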

Design Philosophy

As we move beyond “just making LLMs talk,” I believe the next frontier is user experience and intentional control.

Jim Keller and GeoHot have emphasized elegant, minimalistic, structure-first design in systems architecture. This idea is inspired by that same spirit—solving things by structure, not brute force.

“Instead of burning GPU on controlling behavior, why not let the user guide it?”

Summary of Key Suggestions

  1. Add @include-style prompt presets
  2. Modularize Memory with explicit includes
  3. Offer UI-based system prompt defaults
  4. Clarify output structure, as TeX does (input → processing → output)
  5. Invest more in usability features than brute-scale

Thanks for considering.

keep_robot

Just a small follow-up thought that might help illustrate the intent behind this idea.

It occurred to me that what I’m trying to propose with @include("...") is conceptually quite similar to how TeX handles document structure. In TeX, we use \include{...} or \input{...} to insert modular parts of a document—encouraging clarity, reuse, and structure.

Likewise, in prompt design, a symbolic @include(...) could serve as a way to insert reusable behavioral instructions—concise responses, tone preferences, memory modules, etc.

TeX and GPT both process human language, but in different domains:

  • TeX transforms language into visual structure
  • GPT transforms language into semantic output

Both rely on hidden engines to “make sense” of our intent.
And both benefit tremendously from user-guided structure.

Would love to hear your thoughts on this analogy!