DRC Mode: Boost Your Design Reviews

Hello everyone,
I'd like to introduce DRC Mode (Design Review Continuity Mode) and a simple way to try it. It can help improve productivity and maintain consistency during design review sessions.

DRC Mode Definition
DRC Mode (Design Review Continuity Mode) is a special operational mode in ChatGPT that maintains continuity and consistency throughout the design review process. When activated, it references and preserves all previous discussions, decisions, and relevant context across the session, allowing users to conduct structured and efficient design work without losing the flow of ideas.

Instructions

  1. Enter the definition above into your ChatGPT session.

  2. Then, enter the following command to switch to DRC Mode: `DRC Mode ON`

  3. When DRC Mode is no longer needed, enter the following command to exit: `DRC Mode OFF`
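For readers who prefer to see the mechanics, the session-scoped idea behind these steps can be sketched in code. This is a minimal, hypothetical illustration (the class and method names are my own, not any OpenAI API): a small state holder that recognizes the `DRC Mode ON`/`DRC Mode OFF` commands and, while active, pins the mode definition and any recorded constraints ahead of each new prompt so they travel with every turn of the session.

```python
class DRCSession:
    """Minimal sketch of session-scoped continuity (hypothetical, not an OpenAI API).

    While DRC Mode is ON, the mode definition and any recorded design
    constraints are prepended to each outgoing prompt, so they are never
    lost between turns. Nothing persists beyond this object's lifetime.
    """

    DEFINITION = (
        "DRC Mode: maintain continuity of design-review context. "
        "Treat the constraints below as non-removable."
    )

    def __init__(self):
        self.active = False
        self.constraints = []  # session-scoped only; no long-term memory

    def handle(self, user_input: str) -> str:
        """Turn raw user input into the prompt actually sent to the model."""
        if user_input.strip() == "DRC Mode ON":
            self.active = True
            return self.DEFINITION
        if user_input.strip() == "DRC Mode OFF":
            self.active = False
            self.constraints.clear()  # all context is discarded with the mode
            return "DRC Mode deactivated."
        if self.active:
            # Re-inject the definition and constraints on every turn.
            header = "\n".join([self.DEFINITION, *self.constraints])
            return f"{header}\n---\n{user_input}"
        return user_input

    def add_constraint(self, text: str) -> None:
        """Record a boundary condition, e.g. 'This is NOT about memory.'"""
        self.constraints.append(text)
```

Used this way, every prompt sent while the mode is active carries the full constraint list, and switching the mode off discards everything, which matches the session-scoped claim: no storage outlives the session object.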

Benefits

  • Maintains continuity of discussions and notes during design reviews

  • Allows reference to past decisions and discussions, improving workflow efficiency

  • Ensures consistency across team reviews

Please try it out and share your feedback or improvement ideas.

Related Link
Automatically Switchable Design Review Continuity Mode for ChatGPT

So like for the few guys who do frontend coding there should be a special kind of memory? Is that what you are saying?

Like “hey bot always make sure that …” and it should keep that info until you say you won’t need it anymore?

What should happen in case you give it 200k of such information so the model can't handle it anymore? Triage on the most important ones? Because afaik that's how it is done atm.

Or are you suggesting to make a bigger model with more information? Where do you take that data from?


Hi jochenschultz,

Thanks for sharing your perspective.

Just to clarify two points:

  1. This is not about introducing a special memory feature or persistent “always remember that …” behavior. It’s about session-level continuity to avoid losing design context.

  2. It’s also not about making the model larger or adding more internal complexity, but about lightweight UX-side constraints.

On one point, I fully agree with you: long-term software development experience strongly shapes how we evaluate such systems, and that perspective is essential here.


Hi ODC members,

While drafting a reply to jochenschultz, an incident occurred:
an essential boundary sentence (“this is NOT about memory”) was dropped during editing, which immediately led to a misunderstanding.
As a result, I updated the definition of DRC Mode to clarify its purpose more precisely.

Updated DRC Mode Definition

DRC Mode (Design Review Continuity Mode) is an operational mode for ChatGPT designed to support design and review work by preserving continuity, not just conversation flow.
It explicitly prioritizes and protects boundary conditions and negative constraints (what the discussion is not about), ensuring they are not lost during summarization, editing, or iteration.

DRC Mode is not a proposal for persistent user memory, personalization, or long-term storage.
All context remains session-scoped.
The goal is to prevent design intent, decisions, and critical constraints from being unintentionally dropped while iterating on complex technical discussions.

Examples: Typical Differences in ChatGPT Responses

Example 1: Boundary Preservation

Without DRC Mode
ChatGPT may interpret the discussion as being related to long-term memory or personalization features and respond by suggesting improvements to persistent memory handling or user profile storage, even though this was explicitly not the design intent.

With DRC Mode
ChatGPT explicitly respects the boundary condition that this discussion is not about memory and focuses instead on maintaining session-scoped design constraints, preventing unintended topic drift.

Example 2: Design Iteration and Editing

Without DRC Mode
During summarization or refinement, ChatGPT may shorten the response by removing negative constraints or “what this is not about” statements, leading to ambiguity or misinterpretation in subsequent iterations.

With DRC Mode
ChatGPT treats negative constraints as non-removable design requirements and preserves them across summaries, edits, and iterations, ensuring continuity of design intent throughout the review process.
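The "non-removable design requirements" behavior in Example 2 can also be checked mechanically on the user's side. Below is a small, hypothetical sketch (the function name and the keyword heuristic are my own): given a list of negative constraints and a candidate summary, it reports which constraints no longer appear, so a dropped boundary sentence, like the one in the incident above, is caught before the summary replaces the original text.

```python
def dropped_constraints(summary: str, constraints: list[str]) -> list[str]:
    """Return the negative constraints that a summary no longer mentions.

    A crude keyword check: a constraint counts as preserved if all of its
    significant words (4+ letters) appear in the summary. This is a
    hypothetical sketch; a real implementation might use semantic
    similarity rather than substring matching.
    """
    lowered = summary.lower()
    missing = []
    for constraint in constraints:
        words = [w for w in constraint.lower().split() if len(w) >= 4]
        if not all(w in lowered for w in words):
            missing.append(constraint)
    return missing
```

Running such a check after each summarization step turns the boundary conditions into something verifiable rather than something the model is merely asked to remember.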

This incident itself demonstrated why such a mode is needed.

I propose calling this approach—changing ChatGPT’s behavior without modifying the model itself—either a User-Defined Operational Mode or Protocol-level Behavioral Rebinding.

And how will it be implemented? Please stop using ChatGPT for the answer. It only makes it worse.
You can name it once you explain how it is built.

Not from an "I want the cake to be sweeter" perspective, but from an "add sugar to the cake to make it sweeter" perspective.

Because surely everyone has had that wish. Everyone wants the model to follow commands better.

But don't walk into Formula One and tell everyone "I have an idea, let's make the cars faster, I call it the super fast method."

That is not how it works.

Hi jochenschultz,

Thank you for your thoughtful and candid response — your concern is absolutely valid, especially when discussions drift toward vague ideas without a clear implementation path. I agree with you that simply saying “make it better” or “make it faster” is not a meaningful proposal.

That said, I think the core misunderstanding here comes from what layer we are talking about.

I am not proposing an internal implementation change to ChatGPT:
no changes to model architecture, weights, training data, memory systems, or capacity.
From that perspective, your “add sugar to the cake” analogy is completely correct — and I agree with it.

What I am describing operates at a different layer.

DRC Mode is not an implementation inside ChatGPT.
It is a user-defined, session-scoped interaction protocol:
an explicit declaration of constraints (including negative ones) and priorities that both the user and ChatGPT agree to respect during a session.

In other words, this is not “make the car faster.”
It is closer to saying: “For this drive, these rules must stay visible and must not be dropped while iterating.”
The engine stays exactly the same.

This is also why I explicitly state that this is not about memory.
There is no persistent storage, no accumulation of information, and no personalization beyond the current session.
The issue being addressed is the loss of boundary conditions and negative constraints during summarization or editing, which is something that already happens today.

So yes — if this were framed as an internal feature request or a model improvement proposal, your criticism would fully apply.
My intent is narrower: to name and formalize a behavior that can already be achieved through careful protocol-level interaction, without modifying ChatGPT itself.

I appreciate you pushing on the “how does this actually work” angle — that pressure helped clarify where the real boundary lies.

That is not how that works. You can’t make any rules on topics you don’t understand.

Hello jochenschultz, thank you for taking the time to comment. I understand your concern about making rules on topics that may seem unclear.

To clarify, DRC Mode is designed as a guideline rather than a fixed rule: it regulates continuity in dialogue based on explicitly defined assumptions. The behavior changes it induces rest on the principle that explicitly stating rules for behavior can influence that behavior.

My intention is not to impose rules without understanding, but to propose a framework within these assumptions. I appreciate your feedback and would welcome any suggestions for improving this approach.

A tag “Use cases and examples” has been added, which might be helpful as a reference.
Adding some context for those interested in practical use. A simple scenario: when a user switches between tasks that require different continuity levels, the automatic mode adapts the dialogue flow based on predefined assumptions, while the manual mode allows precise control over the interaction. This illustrates how DRC Mode can be applied in real-world usage.

This topic was automatically closed after 24 hours. New replies are no longer allowed.