Experimental naturalization work has started…
And it’s a real nightmare ^.^
If this completes, version 6 will only need polish before it's good enough to present without shame ^.^
Version 5 just dropped…
We're a long way from final, but it's good enough to show yer mum now.
This is a full image prompting suite without the nice wrapper and tooltips. (Windows Native)
Updates will be weekly.
Full feature list (very long)
Prompt Forge
Prompt Forge is a desktop prompt-crafting studio built for people who want more than a blank text box and a pile of vague sliders. It turns prompt building into a guided creative workflow, giving you structured control over style, composition, mood, rendering language, artist influence, output formatting, and negative constraints without making the process feel technical or brittle.
It is designed for creators who want prompts that feel intentional, legible, and reusable. Instead of guessing at wording, Prompt Forge helps shape the language for you, with lane-aware prompt generation that adapts to different visual intents like Anime, Cinematic, Photography, Comic Book, Children’s Book, Watercolor, Concept Art, 3D Render, Pixel Art, Architecture / Archviz, Product Photography, Food Photography, Vintage Bend, Custom, and Experimental modes.
Core Experience
Prompt Forge gives you a live prompt preview as you build. You define subject, action, relationship, visual language, and output goals, and the app assembles a polished positive prompt in real time. When enabled, it also builds a matching negative prompt to help steer away from common failures like muddy detail, bad anatomy, clutter, weak materials, or text artifacts.
The app is structured around semantic controls rather than raw parameter dumps. Instead of forcing users to think in disconnected prompt fragments, it organizes creative direction into meaningful categories:
- style
- composition
- mood
- lighting and color
- image finish
- lane-specific semantic packs
- artist influence
- output handling
- negative constraints
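Since Prompt Forge's source isn't public, here is a minimal hypothetical sketch (every name here is invented, not the app's actual internals) of how structured category selections could be assembled into a single positive prompt in a stable order:

```python
# Hypothetical sketch: assembling a positive prompt from semantic categories.
# CATEGORY_ORDER and build_prompt are invented names for illustration only.

CATEGORY_ORDER = ["subject", "style", "composition", "mood", "lighting", "finish"]

def build_prompt(selections: dict) -> str:
    """Join the filled-in categories in a stable, readable order."""
    parts = [selections[c] for c in CATEGORY_ORDER if selections.get(c)]
    return ", ".join(parts)

choices = {
    "subject": "a lighthouse on a cliff",
    "style": "watercolor illustration",
    "mood": "quiet, contemplative",
    "lighting": "soft morning light",
}
print(build_prompt(choices))
# -> a lighthouse on a cliff, watercolor illustration, quiet, contemplative, soft morning light
```

The point of the fixed ordering is that the prompt reads the same way every time, which is what makes results reusable rather than accidental.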
Intent Modes and Semantic Lanes
One of Prompt Forge’s strongest features is its intent-driven language system. Each intent is not just a cosmetic preset. It changes how prompt language is phrased, which descriptors are preferred, what defaults feel natural, and how modifiers are interpreted.
Examples include:
- Anime for stylized illustration language, era tuning, and anime-specific modifiers
- Cinematic for film-still logic, framing, practical lighting, haze, anamorphic accents, and dramatic image language
- Photography for observational realism and camera-based phrasing
- Comic Book for panel logic, bold ink, halftone shading, speed lines, and dynamic pose language
- Children’s Book for soft story-first illustration phrasing and accent-based rendering choices
- Watercolor for wash behavior, paper texture, ink interplay, and painted atmosphere
- 3D Render and Concept Art for presentation language, production framing, material breakdown, and render-focused cues
- Pixel Art for sprite-scale readability, palette constraints, dithering, and pixel-scene phrasing
- Architecture / Archviz, Product Photography, and Food Photography for domain-specific image language
- Vintage Bend for period-documentary realism and analog texture language
- Custom and Experimental for users who want less constrained control
These intent lanes can include their own lane cards with subtype selectors and modifier accents, letting each visual family expose targeted controls without losing the shared global prompt system.
Smart Slider System
Prompt Forge includes a broad semantic slider suite that shapes the output language in ways creators actually care about. These controls include:
- Stylization
- Realism
- Texture Depth
- Narrative Density
- Symbolism
- Surface Age
- Framing
- Camera Distance
- Camera Angle
- Background Complexity
- Motion Energy
- Atmospheric Depth
- Chaos
- Focus / Depth of Field
- Image Cleanliness
- Detail Density
- Whimsy
- Tension
- Awe
- Temperature
- Lighting Intensity
- Saturation
- Contrast
What makes these sliders valuable is that they do not just dump labels into the prompt. They are interpreted through the active intent, so the same slider can generate different phrasing for Anime, Photography, Cinematic, or Watercolor. That means the prompts stay stylistically coherent instead of feeling like generic settings pasted together.
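Conceptually, intent-aware slider interpretation can be sketched as a lookup from an (intent, slider) pair to a "phrase ladder" that the slider value indexes into. This is a hypothetical illustration of the idea, not the app's actual mechanism:

```python
# Hypothetical sketch of intent-aware slider interpretation.
# The same slider value yields different phrasing per intent lane.
# PHRASES and interpret() are invented names for illustration only.

PHRASES = {
    ("anime", "texture_depth"): [
        "flat cel shading", "light screentone texture", "dense hatched detail"],
    ("photography", "texture_depth"): [
        "smooth surfaces", "visible grain and pores", "hyper-detailed micro texture"],
}

def interpret(intent: str, slider: str, value: float) -> str:
    """Map a 0..1 slider value onto the intent's phrase ladder."""
    ladder = PHRASES[(intent, slider)]
    idx = min(int(value * len(ladder)), len(ladder) - 1)
    return ladder[idx]

print(interpret("anime", "texture_depth", 0.9))        # dense hatched detail
print(interpret("photography", "texture_depth", 0.9))  # hyper-detailed micro texture
```

Because the ladder belongs to the intent, the resulting wording stays native to the lane instead of sounding like a generic setting pasted in.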
Lane Panels and Modifier Cards
Prompt Forge supports shared lane panels and custom lane cards that attach semantic packs directly to intents. These lane panels can include:
- subtype dropdowns
- style family selectors
- era selectors
- modifier checkboxes
- weight-group logic
- defaults
- prompt descriptors
- sidecar behavior
This allows each visual domain to feel purpose-built. An Anime lane, for example, can expose cel shading, clean line art, expressive eyes, dynamic action, cinematic lighting, stylized hair, and atmospheric effects. A Children’s Book lane can focus on soft palette, textured paper, ink linework, decorative details, and gentle lighting. The result is domain-specific control without forcing users to hand-write every prompt nuance themselves.
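As a hypothetical illustration of the lane-card idea (none of these names come from the app), a lane card can be modeled as a declarative dict of subtypes, modifier descriptors, and defaults that the intent attaches to the shared prompt system:

```python
# Hypothetical shape of a lane card: a declarative pack an intent registers.
# ANIME_LANE and lane_descriptors() are invented for illustration only.

ANIME_LANE = {
    "intent": "anime",
    "subtypes": ["shonen action", "slice of life", "90s OVA"],
    "modifiers": {
        "cel shading": "crisp cel shading",
        "clean line art": "clean confident line art",
        "expressive eyes": "large expressive eyes",
    },
    "defaults": {"subtype": "slice of life"},
}

def lane_descriptors(lane: dict, enabled: list) -> list:
    """Collect prompt descriptors for the checked modifier boxes."""
    return [lane["modifiers"][m] for m in enabled if m in lane["modifiers"]]

print(lane_descriptors(ANIME_LANE, ["cel shading", "expressive eyes"]))
# -> ['crisp cel shading', 'large expressive eyes']
```

Keeping the card declarative is what lets each visual domain feel purpose-built without touching the shared engine.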
Artist Influence Tools
Prompt Forge includes artist influence handling with primary and secondary influence slots. You can control how strongly each artist shapes the image language, and the system can generate artist-aware phrasing without making the prompt collapse into repetitive or clumsy citation.
Supporting features include:
- adjustable influence strength
- dual-artist blending
- artist phrase override support
- artist pair guidance
- matrix-aware artist combination help
- structured phrase generation rather than naive name stacking
This makes it much more useful for creators who want style blending with actual control.
Prompt Compression
Prompt Forge includes a multi-stage compression system designed to shorten prompts without destroying their meaning. Compression is not just a single blunt toggle. It supports staged tightening so users can progressively reduce repetition and prompt bulk while preserving the core signal.
The compression system can:
- remove weak meta phrasing
- reduce repeated lane-root wording
- trim repeated long-word clutter
- preserve high-value prompt anchors
- keep the prompt readable while making it more efficient
This is especially useful when working within prompt length limits or when trying to clean up layered prompts that have become verbose.
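A toy sketch of the staged idea, with invented stage names and a made-up filler list, assuming each stage is a small pass the user can opt into progressively:

```python
# Hypothetical sketch of staged prompt compression. Stage names, the filler
# list, and compress() are invented for illustration; they are not the
# app's actual compression passes.

def strip_meta(prompt: str) -> str:
    """Stage 1: drop weak meta phrasing that adds no visual signal."""
    for filler in ("masterpiece, ", "best quality, ", "highly detailed, "):
        prompt = prompt.replace(filler, "")
    return prompt

def dedupe_terms(prompt: str) -> str:
    """Stage 2: remove exact repeated comma-separated terms, keeping order."""
    seen, kept = set(), []
    for term in (t.strip() for t in prompt.split(",")):
        if term and term.lower() not in seen:
            seen.add(term.lower())
            kept.append(term)
    return ", ".join(kept)

def compress(prompt: str, stage: int = 2) -> str:
    """Apply the first `stage` passes; deeper stages tighten further."""
    for fn in [strip_meta, dedupe_terms][:stage]:
        prompt = fn(prompt)
    return prompt

p = "masterpiece, soft light, watercolor wash, soft light, paper texture"
print(compress(p))  # soft light, watercolor wash, paper texture
```

Ordering matters here: filler removal runs before deduplication so the later passes operate on already-clean text, which is the same reason a staged system beats one blunt toggle.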
Manual Output and Negative Prompt Control
Prompt Forge includes manual output controls for aspect ratio, print readiness, transparency handling, and output-focused prompt clauses. It also supports negative prompt generation, with manual negative constraints that can be toggled on when needed.
Negative prompt behavior is built to feel deliberate rather than always-on. Users can decide whether to include negative prompt output, and when enabled, add tailored manual exclusions beneath the main output workflow.
Prompt Preview and Export
Everything flows into a live prompt preview card so users can see the result immediately. Prompt Forge is built around fast iteration:
- adjust a control
- watch the prompt regenerate
- copy the result
- refine again
This makes it useful both as a creative drafting environment and as a production tool for repeatable prompt design.
Preset and Workflow Support
Prompt Forge also supports saving and reusing configurations, making it practical for recurring workflows rather than one-off experiments. Users can build repeatable prompt systems around favorite looks, lane combinations, artist blends, and output settings.
That makes it a real prompt workstation, not just a prompt toy.
Licensing and Demo Flow
The app supports both demo and unlocked usage states. In demo mode, users can explore the product with a controlled export allowance. In unlocked mode, the full workflow opens up for unrestricted use. This gives the product a structured onboarding path while keeping the full app available for serious users.
Why It Stands Out
Prompt Forge is not just a prompt generator. It is a structured authoring environment for visual prompting. Its value comes from combining:
- intent-aware phrasing
- domain-specific semantic packs
- artist blending
- camera and composition controls
- mood and finish shaping
- prompt compression
- negative prompt tooling
- live preview and export workflow
In practice, that means users can move faster, get cleaner prompts, and spend less time manually rewriting tangled prompt language.
Short Sales Version
Prompt Forge helps creators build stronger image prompts with less guesswork. It combines live prompt generation, intent-specific semantic lanes, artist blending, composition and mood controls, prompt compression, and negative prompt tooling into a single desktop workflow. Instead of wrestling with raw prompt text, users shape prompts through meaningful creative controls and get polished, reusable results in real time.
Updates as I posted them on the forum
^.^
Current Road Map for planned upgrades.
It’s got Product Photography at a level that could easily put product photo people into a job…
Or out of a job.
ArchViz also added… need a graphic designer to test.
Will probably gate those types of semantic packs behind an additional commercial license…
Currently looking for a safe place to host this, as I'm not risking my source code out on Git after the switching around…
Anime styles got locked in…
Implemented new architecture to take it to the commercial level.
The intent drop-down has fully migrated away from the original 'slider presets' that came stock with Codex's original writes, to 8 full semantic packs covering the prompt types shown above.
If I understand things correctly, I can keep adding semantic packs up to about 30 or 40 before I start to bloat my lane structuring and need to re-evaluate.
It now keeps a registry to streamline the implementation of new semantic packs, and the door is left open for locale/language swaps at the end.
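The registry pattern mentioned above can be sketched like this (hypothetical names, not the app's code): each new pack registers itself once, so adding a lane means adding one declaration rather than editing the UI plumbing.

```python
# Hypothetical sketch of a semantic-pack registry.
# PACK_REGISTRY and register_pack() are invented names for illustration.

PACK_REGISTRY = {}

def register_pack(name: str, pack: dict) -> None:
    """Make a pack discoverable by the intent drop-down."""
    PACK_REGISTRY[name] = pack

register_pack("watercolor", {"root": "watercolor painting",
                             "accents": ["wet-on-wet wash"]})
register_pack("pixel_art", {"root": "pixel art sprite",
                            "accents": ["limited palette"]})

print(sorted(PACK_REGISTRY))  # ['pixel_art', 'watercolor']
```

A registry like this is also what leaves the door open for locale swaps: a translated pack is just another registration under the same name.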
Complete Architectural Overhaul - live in dev version
- Building the framework for dozens of semantic packs…
We're just going to go for the gold and get ALL the major asks, and niche asks, in the AI world for image creation, and house it all under one roof before anyone else can even hustle it together.
Implemented successfully in the dev environment.
Expanded architecture to hold another 18-20 semantic packs/styles safely (long and boring)
3D toy-like / CGI render styles, Anime/manga character art, Watercolor / children’s-book illustration, Pixel art / retro game style, Comic-book / graphic-novel paneling
put on pause
Flat vector branding/logo/iconography
Cartoon/sticker/mascot styles
Social-media portrait beautification styles
Architectural/interior visualization
Clean commercial product photography
So far it's been centered around art/fine art… but the basic engine is built, and it's time to start absorbing all the main asks of image creation out there in the world.
If you want a specific style of image generation included in this app so you don't have to manually prompt, or would like to speed up your image-generation prompting (up to 10x faster than waiting on the model), post a few pictures in this thread and I'll add your style to the semantics database.
It will be found under the intent drop-down for this new tool.
Make sure your images capture the essence of the style you want, and all the images stick to that lane.
What you’ll get is that lane, added to the UI, with adjustable sliders enabling you to bend your style to the typical prompt additions we all love and abuse in the image threads.
Happy Prompt Forge’n !!!
VB’s Cold War Era style is live
It can be found under Vintage Bend
Original Product Introduction
Let me introduce you to Prompt Forge, a (Windows native) app that makes some very important statements to AI communities in general. (Ports are planned after all features are added.)
This app is a tool that very thinly veils more nuanced abilities. I'm going to give the fuller tool description, and its intended use, here to the community rather than forcing all of it into the documented release notes.
I’ll explain the philosophy behind that as briefly as I can:
The last time I searched the internet for a similar app and found none, a sobering realization started surfacing in my mind: humans are becoming very accustomed to relying on AI-governed compute to handle every little thing that needs computing. I do not think that is the best way forward, philosophically, structurally, or productively.
This app stands in quiet opposition to that drift. It is a reminder that we can share the load with AI instead of letting more and more of our own computing capacity sit idle while adding unnecessary burden to the frontier models already carrying an expanding share of humanity's requests. It also generates meaningful prompts faster than an AI chat can complete a similar output.
Recent remarks from Sam Altman on just how aggressively AI-use costs have fallen only reinforced something I had already been thinking about: users can do their part too. Better tooling should not only make prompting faster; it should also reduce avoidable inference waste by moving stable prompt structure into governed systems rather than asking a live model to rebuild it from scratch each time. Sam Altman wrote in February 2025 that the cost to use a given level of AI fell about "1000x in 18 months." If developers spent a bit more time thinking about local governance, we could help compound that cost reduction by avoiding redundancy in our computational asks.
In my view, one of the quietest wastes in current AI use is the repeated act of asking a frontier model to reconstruct prompt language from minimal human input, over and over again. What looks like a small convenience to the user often forces the model to infer intent, invent missing structure, resolve ambiguity, and package the result into language that feels complete. That burden adds up quickly at scale.
In this use case, some of the best image prompts were written by the AI itself, which made the next realization harder to ignore: once enough of that language had been surfaced, studied, and broken into stable parts, much of the burden no longer needed to remain on the model. A governed in-app system can carry much of that load locally instead, reserving live model compute for work that actually requires open-ended intelligence rather than repeated reconstruction.
That is part of what Prompt Forge is really testing.
That said, the current live version of the app does not yet carry that reduced friction intuitively in its design. If a user blindly clicks the randomize feature in Custom mode, it will usually produce a prompt that is harder on the machine because of contradiction. That is not an accident. It is also a gate.
Custom mode is the true mode of the Forge. Each slider affects the image, and when you adjust them manually, you are already working from some kind of blueprint in your mind. Even if that blueprint is vague, it's not likely to contradict itself, and the sliders are arranged to teach the user better prompt awareness while building. Users who lazily click randomize, without the follow-up experimental layer, will usually produce worse prompts than those who have learned, or are learning, what makes a prompt work.
Users who want to explore more styles quickly while using the randomize button should switch the intent dropdown to Experimental after the prompt has been randomized.
Experimental mode collapses the prompt into stronger semantic winners instead of leaving every parameter in whatever contradictory position it landed. These produce less detailed prompts but allow the model to more easily explore the artist-blend feature included in the Forge. Remember to return the dropdown to Custom after copying and using your prompt.
As I just mentioned, this 'quick and easy' modality exists partly for testing the artist pairings baked into the release. It's gated so that ChatGPT can learn these pairings first in the AI world. Many artist pairings are difficult for models to resolve because most people never ask for those combinations in the first place. That sort of workload is acceptable friction. It pushes into areas that can actually teach models something, instead of wasting compute on preventable prompt chaos.
There is also a more adversarial reason for some of this design. Someone who steals the software and uses AI to break around the license without understanding the system will not get the best results from it. In practice, careless use produces more burden, more contradiction, and worse prompts. I consider that an appropriate consequence.
The negative prompt feature is kept separate out of respect for personal preference, and is partially hidden by default. The app is also meant to remain usable across a wide range of LLM-plus-image workflows by relying on the most common semantic variables in image generation.
The current working version on my computer streamlines the flow as I’ve described to you, has some more quality of life features/tooling and will be released soon.
It is intended for use with ChatGPT, while the other models are still being used in less advantageous ways.
I hope to better apply my understanding of embedded metaphor density and clustering in the future, but this is a pretty decent step forward in understanding the basics of clustering as-is.
Here is the current release on Git, which requires that you at least read the 'secrets to its workflow' as detailed here.
It provides 100 free prompt exports to test out the various combos before asking you to buy me lunch. If OpenAI takes an interest in the vector I'm working down, the structure of the project is open to change.
Thank you for taking the time to read this, and thank you all for sharing space with me on your community forum…
I’ve learned so much from you all, already, and wish to keep doing so into the future.










