[OSS] Mapping the Prompt (MTP) — A lightweight UI to share intent as coordinates (model-agnostic)

TL;DR
Instead of long prompts, MTP lets teams align tone/focus/flow with coordinates.

  • Model-agnostic (works across GPT/Claude/Gemini, etc.)
  • Minimal SVG/CSS/JS (no retraining)
  • Not about numeric benchmarks — it makes ambiguity operable via UI

What it does

  • Map dialogue into 20 nodes (Side A/B)
  • Move points on a UI and use the Gizmo (average) to form team consensus (see the sketch below this list)
  • Adjust intent by direction & balance rather than rewriting paragraphs
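
As a rough illustration of the Gizmo idea (the data layout below is an assumption for the sketch, not MTP's actual schema), the consensus point is simply the average of the placed points:

# Minimal sketch: the Gizmo as the average of the points a team has placed.
# Placements are treated as plain (x, y) pairs; the layout is illustrative only.
from statistics import mean

def gizmo(points):
    """Return the consensus point as the mean of all placed (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (mean(xs), mean(ys))

team_points = [(0.2, 0.7), (0.4, 0.5), (0.3, 0.9)]  # three teammates' placements
print(gizmo(team_points))  # -> (0.3, 0.7)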

Docs & questions

What feedback would be most helpful?

  • Node definitions & UI operability (clarity vs. redundancy)
  • Practical use cases (CS, business writing, creative support, etc.)
  • How it plays with prompt libraries/templates in real workflows
  • Accessibility & multilingual considerations (color, symbols, wording)

License: MIT
Assets: SVG/PNG available in ASSETS.md.

Just to add one clarification:
MTP isn’t a benchmarking or scoring system. It’s more like a map for pointing at intent — ambiguity is not a weakness here, but a way to make collaboration with AI more flexible.

I’m curious to hear from others:
👉 Do you prefer working with precise, fixed prompts, or do you also see value in a shared but ambiguous coordinate system that leaves room for interpretation?

Happy to try mapping examples from different domains (education, product, code review, etc.) if anyone wants to see it in action.

Have you added the ability to link this to an ELK stack or Postgres?

What layers of logic are on the backend? I'm interested in several use cases based off the idea. I haven't visited the Git repo yet; I prefer asking. For inputs, can you define what you mean by "model"?

I'm not a fan of wasting time or energy, so here is what I want to do: take your MTP and map it to this system.

As an explanation: the above is a thought module. It's part of a much larger ecosystem but is an ecosystem itself. Professors are agents tied to Dockerized, Terraform/MCP-controlled Ray clusters comprised of many professors who work in tandem.

They auto-expand their own knowledge base, often choosing based on weights.

Currently I operate hundreds of these professors in Ray clusters. I haven't bothered to map their intent, nor have I had a reason to, but since I'm creating a dev platform in Unreal Engine using metaprogramming

and a reflex-based, behaviorally programmed system based on my own DSL, all documented following ISO/IEC best practices.

My stack is also comprised (as I said earlier) of several ecosystems, close to 20 at this point, that operate in Docker containers with various tasks and forge their own MCP servers, which create additional ecosystems with full telemetry and action/reaction/objective/reason/context/purpose (weight data), heavily influenced by other OSS such as OpenMDAO (https://openmdao.org/), UE 5.6 source code, and GitLab CE.

We have also mapped those agents to local LLMs; we currently use 10 variants, including Qwen 2.5 Coder, Mistral 7B, and gpt-oss-20b.

We have a continuous bot integrated into the system LLMs and Discord for translation and project management. It's also linked into Docker and the UE project and serves as our project manager. It's just me and my Japanese homie; he made the translator program we use, and it's linked to several local LLMs in the network, which is an NNC that I own. I'd love to learn more about your system if you are interested. Screenshots to show where I'm at.


Our bot has project tracking baked into it, with system alerts, and our current objective is metaprogramming into Unreal Engine. If you check my other post you can see we are already past game asset generation (story creation, images, art); my history paints a fairly verifiable picture of our overall stack. We also have a self-hosted GitLab, as I stated, with CI/CD built in for automation. Something like this would have a lot of data to map, eat, and play with.

@dmitryrichard
Thank you for your comment.
MTP itself does not depend on any backend or database layer — it is designed as a lightweight layer limited to coordinate transformation and UI interaction.

Therefore, there is no direct link to ELK or Postgres, but the coordinate data can be exported in JSON format, making it easy to send into a logging or metrics pipeline.
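
For example, here is a rough sketch of what that hand-off could look like; the field names and the JSONL log file are illustrative assumptions, not a fixed MTP schema:

# Rough sketch: shape an exported MTP coordinate snapshot as one JSON line and
# append it to a log file. Field names are assumptions, not a fixed schema; the
# same payload could just as well be POSTed to an Elasticsearch index or stored
# in a Postgres JSONB column.
import json
import time

snapshot = {
    "session": "demo-001",                        # illustrative identifiers
    "ts": time.time(),
    "nodes": {"Grow": [0.2, 0.7], "Focus": [0.6, 0.4]},
    "gizmo": [0.4, 0.55],                         # average of the node points
}

with open("mtp-coords.log", "a", encoding="utf-8") as f:
    f.write(json.dumps(snapshot) + "\n")          # one snapshot per line (JSONL)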

By “model,” I mean the target generative model, whether an LLM API or a local LLM, since MTP is model-agnostic by design.

In complex ecosystems like the professor-agent clusters you described, mapping intent and balance as “coordinates” could be especially effective. If you are interested in re-implementation on the Unreal Engine side, it should also be possible to integrate MTP by replacing or adapting the UI layer.

For reference, the MTP framework has already been tested and confirmed to work with ChatGPT, Grok, Claude, and DeepSeek.

If you present either the Concept.md on GitHub or the official page (Mapping the Prompt – I'm Kohen a User) to these LLMs, they are able to understand the concept.

The playlist generation for classical music with Grok and Claude has been particularly impressive.

For more details, please see the following Discussion:

I had the same thought, but I wanted to know how much backend wiring would be required.

It looks legit useful. As you said, those coordinates are something I use, because I map things as equations in OpenMDAO for my math engine, and I've built a science engine too.

I was hoping that you had layers of logic I could compound with my own.

@dmitryrichard
Thanks for the thoughtful follow-up.
MTP was intentionally designed to minimize backend wiring — the core logic is only coordinate transformation and UI interaction. The output can be exported as JSON or passed through a lightweight adapter layer, so it should integrate into existing engines with minimal extra wiring.

Your point about mapping the coordinates as equations (like in OpenMDAO) is a great perspective. In fact, the framework leaves room for exactly that kind of interpretation — treating the coordinates not just as visual nodes, but as mathematical variables or state vectors that can be fed into a science/math engine.

Regarding “layers of logic”: there are no heavy built-in layers, by design. The idea was to keep MTP as a thin mapping layer so it can compound easily with external systems, rather than enforce a fixed logic stack. If you’d find it useful, it would be straightforward to add optional transforms (e.g. exporting coordinates in equation form, or tagging them with additional semantic labels).
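
As a rough sketch of that idea (the node names and ordering are assumptions for illustration), the grid could be flattened into a state vector before handing it to a math or science engine:

# Rough sketch: flatten {node: (x, y)} coordinates into a fixed-order state
# vector that an external engine (OpenMDAO, a DSL, etc.) could consume.
NODE_ORDER = ["Grow", "Power", "Focus", "Flow"]   # subset of the 20 nodes, illustrative

def to_state_vector(nodes):
    """Flatten {node: (x, y)} into [x1, y1, x2, y2, ...] in a fixed node order."""
    vec = []
    for name in NODE_ORDER:
        x, y = nodes.get(name, (0.0, 0.0))        # missing nodes default to the origin
        vec.extend([x, y])
    return vec

snapshot = {"Grow": (0.2, 0.7), "Focus": (0.6, 0.4)}
print(to_state_vector(snapshot))  # -> [0.2, 0.7, 0.0, 0.0, 0.6, 0.4, 0.0, 0.0]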

Dude fr fr

I'm so glad you know what's up. Do you have plans to add Terraform directions? Because moving a slider, or having that slider adjusted from the MDAO tx equations and tform, would be like… my dream.

Does it also have Ray cluster support, like could I map it to the swarm? Sorry if these questions are redundant. I really be like this.

Thank you. I'll be checking this out. Lmk if you want access to datasets or map grids. On the backend, my stack has been designing a new domain-specific language that we currently use in the programming. The conversion from NLP to enumerations to Python to C# was only recently accomplished programmatically, so my use case with your grid was to track their metrics and, hopefully, mass-adjust using the plots. Thank you again. I'll see what I can do with it.

@dmitryrichard
I really appreciate this detailed follow-up — your ideas are very aligned with how MTP was meant to be extended.

Terraform directions and Ray clusters aren’t built-in targets, but the design leaves space for that kind of wiring: the coordinates can be exported in JSON or transformed into parameters that Terraform or Ray can consume. In other words, the “slider” values could absolutely be sent outward into your IaC or cluster orchestration as state inputs.

Your vision of having the slider adjusted directly from MDAO equations is exactly the kind of mapping MTP was intended to support — treating coordinates not just as UI gestures, but as variables in a broader system.

Regarding Ray: while MTP doesn’t include swarm logic itself, you could map cluster states into the grid or feed the grid’s consensus point (the Gizmo) back into your swarm as a control signal. That way the grid becomes both a visualization and a control surface for distributed agents.
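
A minimal sketch of that feedback loop, assuming the Gizmo's y-axis is mapped onto something like an LLM temperature (the axis meaning and the range are assumptions on my side, not part of MTP):

# Minimal sketch: feed the grid's consensus point (the Gizmo) back out as a
# control signal. The y-axis is arbitrarily mapped to a temperature range here.
def gizmo_to_temperature(gizmo_xy, lo=0.2, hi=1.2):
    """Map the Gizmo's y value (assumed to lie in [0, 1]) onto [lo, hi]."""
    _, y = gizmo_xy
    return lo + (hi - lo) * max(0.0, min(1.0, y))

consensus = (0.3, 0.7)                     # exported Gizmo position
temperature = gizmo_to_temperature(consensus)
print(f"broadcast temperature={temperature:.2f} to the worker pool")  # e.g. Ray actors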

I should note that my own knowledge of backend wiring is limited — so I would love for you to experiment with connecting MTP into your stack and share the results. Where I can go much deeper is in questions about the conceptual side of MTP and the framework’s design philosophy, and I’d be glad to answer those in detail.

I find your idea of using the grid to track metrics and perform mass adjustments very compelling. MTP was deliberately kept thin so that it could be compounded with exactly this kind of external logic and DSLs. If you’d like, I can also share more details or examples of exporting the coordinate data in structured forms that might fit well with your science engine.

I would love to use your software in my stack and provide data; it will take me a few days to actually get around to integration and wiring. In connection to the last message, I also plan to plot the thought tracking like this. This is what it looks like in its current form; it's quite bad, I think your method would +++S it.

But this was before I added multiple models to the mix. I will gladly stay in contact. I also had to build my own database just to support the data.

It will take a few days, also because I don't manually inject that code anymore. The system will track the diff and automatically adjust on detection of extension, but it maps that too using AST + MDAO. It adds new stages every day; today it's up to 60 stages, but luckily I track that data too, so I'm sure mixing the two would be fun.

@dmitryrichard
Thanks a lot for sharing all these details and screenshots — it’s impressive to see how you’re already tracking and visualizing complexity at scale.
I really appreciate you taking the time to integrate MTP into such a rich stack, and I’m curious to see how it behaves once mixed with your current pipeline.
Please keep me posted!

One more thought I wanted to share, connected to the recent update:
ChatGPT now has conversation branching built in.

This feels like a natural fit with the MTP framework. Since MTP maps each session as coordinates on a grid (across 20 intent/tone nodes), each branch could carry its own coordinate snapshot. That would allow:

  • Color-coding branches by intent (e.g. Grow = green, Focus = white, Power = red).
  • Seeing how different branches drift, converge, or expand visually over time.
  • Using the grid not only as a control surface, but also as a history of reasoning depth.

I’ve attached an image to illustrate how a session can be represented as a coordinate space. With branching + MTP combined, the exploration of prompts could become both navigable and visual.
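
As a rough sketch of what a per-branch snapshot could look like (the branch IDs, colors, and the drift measure are illustrative assumptions):

# Rough sketch: one coordinate snapshot per conversation branch, plus a simple
# "drift" measure between two branches' consensus points.
import math

branches = {
    "main":     {"gizmo": (0.30, 0.70), "color": "green"},   # e.g. Grow-leaning
    "branch-a": {"gizmo": (0.55, 0.40), "color": "white"},   # e.g. Focus-leaning
}

def drift(a, b):
    """Euclidean distance between two branches' consensus points."""
    (ax, ay), (bx, by) = branches[a]["gizmo"], branches[b]["gizmo"]
    return math.hypot(ax - bx, ay - by)

print(round(drift("main", "branch-a"), 2))  # -> 0.39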

Would be curious to hear how you’d imagine applying this in your kind of distributed/clustered setup.

BRUH, that first image is a 5-pane? So what, action → plan → logic → response → outcome mapping? That's legit bro, I need that.

I got something for that.

I think image 2 can then be mapped to the learning outcome, the weights, right? Then they compound? I would have to wire my own audit, eh?

The big white dot would be my neural network controller, the OSS runs. Ah yeah man, I got this, gimme a few days fr fr.

@dmitryrichard
Glad you noticed the layers — let me clarify how that visualization is structured:

  • The front-most layer is the Gizmo, i.e. the control surface for adjusting intent.
  • The bottom layer is the MTP grid itself.
  • The 3 layers in between each represent a single generation (output step). They are generic snapshots, so you could treat them as records of “what happened” during each pass.

So in total it’s not a fixed 5-step workflow, but rather: Grid (base) + Generations (stacked) + Gizmo (control). The design is meant to be general, but I can see how it maps nicely to action/plan/logic/response/outcome in your framework.
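
In data terms, the stack could be sketched roughly like this (the structure and field names are illustrative, not a fixed format):

# Rough sketch of the layering described above: a base grid, a stack of
# per-generation snapshots, and a single Gizmo (control surface) on top.
session = {
    "grid": "mtp-20-node-base",              # bottom layer (static)
    "generations": [                         # one entry appended per output step
        {"step": 1, "gizmo": (0.25, 0.60)},
        {"step": 2, "gizmo": (0.32, 0.66)},
        {"step": 3, "gizmo": (0.30, 0.70)},
    ],
    "gizmo": (0.30, 0.70),                   # front layer: current control surface
}

def history(sess):
    """Return the trail of consensus points across generations, oldest first."""
    return [g["gizmo"] for g in sess["generations"]]

print(history(session))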

Yeah, I'm scoping it out now. I'm thinking I want to map lineage and AI decisions to identify knowledge gaps and operational weaknesses, and compound that by applying it to the swarm stack per node. I think if I can do that with your software, then I can use the slider idea to adjust smarter → dumber, like temperature for LLMs but tied to decision making, using the metaprogramming I have embedded. I'm insane, so probably more ambitious than I can currently fathom, but now that I've seen someone create something like that, my interest is piqued. With enough rendering I think I could make this mirror the "No Man's Sky galaxy map" rendering for thought, wherein each white bubble = a galaxy, every AI agent = a planet, and shipping lanes = thoughts. My bad bruh, I'm a stoner, but I'm effective. I'ma give it a try!!! Because if I can do that, I think it means I can map cognition.

If I can map cognition, then MDAO can plot those weaknesses and engine 2 can come online and build that.

@dmitryrichard
Thanks for sharing your thoughts and energy — it’s inspiring to see how far you’re already taking these ideas.

And don’t worry at all. MTP’s own approach to color might also look a bit “out there” to some people. After all, mapping the spectrum of a prism or rainbow directly into a UI is a little crazy — but that’s exactly what makes it powerful.

I’ve just added this new diagram to the Concept page.

It shows how colors are treated in MTP:

  • Prismatic Colors (left): directional vectors corresponding to each node quality (Grow, Power, Focus, Flow, etc.). They are also the spectrum colors of nature — directly linked to the rainbow.
  • HSL Color Wheel (center): a continuous reference to keep mapping consistent and interpretable across models.
  • Foundational Colors of MTP (right): the fixed palette of 8 core tones (plus the special magenta node “Return”) that appear on the grid and in the UI.

The idea is to balance flexibility (any hue in HSL space) with stability (a small set of foundational tones), so intent can be generalized yet quickly recognized.
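
As a small scripted sketch of that mapping (the hue numbers follow the examples given earlier in this thread where available; everything else is an assumption, not MTP's actual palette):

# Rough sketch: resolve a node quality to an RGB tone via the HSL/HLS wheel.
# Hue values are assumptions except where the thread gives them (Grow = green,
# Power = red, Return = magenta, Focus = white).
import colorsys

FOUNDATIONAL_HUES = {"Grow": 120, "Power": 0, "Return": 300}  # degrees on the wheel

def node_to_rgb(node):
    """Return an (r, g, b) triple in 0..1 for a foundational node tone."""
    if node == "Focus":                      # Focus is rendered as white
        return (1.0, 1.0, 1.0)
    h = FOUNDATIONAL_HUES[node] / 360.0
    r, g, b = colorsys.hls_to_rgb(h, 0.5, 0.9)   # note: colorsys takes H, L, S
    return tuple(round(c, 2) for c in (r, g, b))

print(node_to_rgb("Grow"))   # -> (0.05, 0.95, 0.05), a saturated green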

This thread is fire AF! Thanks guys!

Just spitballin' off the cuff here, but what about expanding by recursively diving inward instead? I'm thinking along the lines of making multiple SVG templates specialized for even more granular intent mapping, like a template for mapping philosophical shifts, mood shifts, tone/language shifts, etc. I also think it would be worth exploring applying the mapping to the LLM's responses, then cross-referencing that with the user's to give an actual way of measuring "alignment." If the categorized template versions existed too, you could even break that alignment scoring down by categories.
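
Something like this is what I mean, just as a rough sketch (the node names and the scoring formula are totally made up on my end):

# Rough sketch: map both the user's prompt and the LLM's response onto the grid,
# then score how close the two maps are (1.0 = identical, -> 0 as they drift).
import math

def alignment(user_map, model_map):
    """Score overlap of two {node: (x, y)} maps via mean distance on shared nodes."""
    shared = set(user_map) & set(model_map)
    if not shared:
        return 0.0
    dists = [math.dist(user_map[n], model_map[n]) for n in shared]
    return 1.0 / (1.0 + sum(dists) / len(dists))

user_side  = {"Grow": (0.2, 0.7), "Focus": (0.6, 0.4)}
model_side = {"Grow": (0.3, 0.6), "Focus": (0.6, 0.5)}
print(round(alignment(user_side, model_side), 2))  # -> 0.89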

You all are out of my league here, and normally I'm Donnie, just staying in my lane, but I've played around a bit with SVGs (nothing like you all), and I think there is a lot of potential there, so I'll throw my lil' guy in here haha!.. a recent "just for fun" SVG template:

<?xml version="1.0" encoding="UTF-8"?>
<svg width="1024" height="1536" viewBox="0 0 1024 1536" xmlns="http://www.w3.org/2000/svg">
  <!-- defs: same control surface as base; include rainbowMap -->

  <defs>
    <style>
      :root{
        --bg-a:#0b0c13; --bg-b:#141a28;
        --hue:0deg; --sat:1.12; --gamma:0.96;
        --glow:2.2; --noise:0.06;
        --alphaFloor:0.0; --alphaGain:1.0;
      }
    </style>
    <linearGradient id="bg" x1="0" y1="0" x2="0" y2="1">
      <!-- stop-color set via style so the var() custom properties actually resolve -->
      <stop offset="0%"  style="stop-color:var(--bg-a)"/>
      <stop offset="100%" style="stop-color:var(--bg-b)"/>
    </linearGradient>
    <linearGradient id="rainbowMap" x1="0" y1="0" x2="1" y2="0">
      <stop offset="0%"  stop-color="#ff0047"/>
      <stop offset="16%" stop-color="#ff7a00"/>
      <stop offset="33%" stop-color="#ffd300"/>
      <stop offset="50%" stop-color="#25d366"/>
      <stop offset="66%" stop-color="#00a3ff"/>
      <stop offset="83%" stop-color="#7a5cff"/>
      <stop offset="100%" stop-color="#ff00e5"/>
    </linearGradient>
    <!-- note: var() is not supported inside filter primitive attributes,
         so the :root defaults are inlined here (gamma 0.96, sat 1.12, glow 2.2) -->
    <filter id="opsFilter">
      <feColorMatrix type="hueRotate" values="0"/>
      <feComponentTransfer>
        <feFuncR type="gamma" exponent="0.96"/>
        <feFuncG type="gamma" exponent="0.96"/>
        <feFuncB type="gamma" exponent="0.96"/>
      </feComponentTransfer>
      <feColorMatrix type="saturate" values="1.12"/>
    </filter>
    <filter id="glow">
      <feGaussianBlur in="SourceGraphic" stdDeviation="2.2" result="blur"/>
      <feBlend in="SourceGraphic" in2="blur" mode="screen"/>
    </filter>
  </defs>

  <!-- lower half: spiral sea -->
  <rect width="1024" height="1536" fill="url(#bg)"/>
  <g id="sea" transform="translate(0,768)" filter="url(#opsFilter)">
    <rect x="0" y="0" width="1024" height="768" fill="url(#rainbowMap)" opacity="0.18"/>
    <!-- parametric swirl field -->
    <g stroke="#7bd2ff" stroke-opacity="0.18" fill="none" stroke-width="2">
      <path d="M 32 720 q 64 -48 128 0 t 128 0 t 128 0 t 128 0 t 128 0" />
      <path d="M 32 640 q 64 -56 128 0 t 128 0 t 128 0 t 128 0 t 128 0" />
      <path d="M 32 560 q 64 -64 128 0 t 128 0 t 128 0 t 128 0 t 128 0" />
      <!-- (repeat/clone more swirls as needed programmatically) -->
    </g>
  </g>

  <!-- top half: transparent sky with glyphs -->
  <g id="sky" transform="translate(0,0)">
    <rect x="0" y="0" width="1024" height="768" fill="black" opacity="0"/>
    <g id="glyphs" transform="translate(128,120) scale(1.0)">
      <!-- slot for sigils; default dummy circles -->
      <g fill="none" stroke="#79ffda" stroke-width="6" opacity="0.9">
        <circle cx="64" cy="64" r="22"/>
        <circle cx="256" cy="128" r="28"/>
        <circle cx="512" cy="128" r="36"/>
        <circle cx="768" cy="64" r="28"/>
      </g>
    </g>
  </g>
</svg>

IDK man 🤷😂

Ain't the point to make all of us big? Zero-enemy strategy, it's an actual protocol in my system.

See, I didn’t even realize it was trying to render itself. I got it on second glance at least, but I’m still on that learning curve over here haha

Again though, thanks for the share.
