My current iteration of prompt engineering: Mermaid logic

When you write a set of instructions for a large language model, it is capable of contextualizing those instructions. It is not only capable of generating the next word, as plain text-completion models are; its attention mechanisms allow it to complete text based on segregated context as well as individual words, and from different perspectives via patterns.

So these models see patterns, in patterns, in patterns…

Mermaid is a markdown-based diagramming syntax that can, among other things, create flow charts.
GPT can perceive each step of these charts, which makes it useful for checking logic and looking for missing components.
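For instance, here is a minimal, hypothetical flow chart (the steps are placeholder labels of my own, not a prescribed method) that gives the model discrete steps it can walk and verify:

```mermaid
flowchart TD
    A[Receive user input] --> B{Input valid?}
    B -- Yes --> C[Process request]
    B -- No --> D[Ask for clarification]
    C --> E[Return answer]
    D --> A
```

Because every node and edge is explicit, you can ask the model whether any branch is unreachable or any case is unhandled.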

You can also use mermaid to algorithmically introduce and emulate more complex cognitive processes.

All you need to do is contextualize the science of cognitive psychology and formulate a continuum accordingly.
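As a rough illustration (the stage names below are my own placeholders, not an established continuum), such a process could be expressed as a chart the model steps through:

```mermaid
flowchart LR
    P[Perceive the problem] --> R[Recall relevant knowledge]
    R --> H[Form a hypothesis]
    H --> E[Evaluate against the evidence]
    E -->|Contradiction found| H
    E -->|Consistent| C[Commit to an answer]
```

The loop from evaluation back to hypothesis is what emulates the iterative, self-checking part of the cognitive process.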

Otherwise, you can just have it check its logic in Mermaid, and it will be a lot better at giving answers.


So, I tried to explain this to someone earlier, and they insisted they did not get it, so I would like to re-explain it here, just in case I'm the “bad guy.”

(not really)

But Mermaid is an easy way to get your GPT to use a flow chart to visualize logic.
What do I mean by visualize?
Mermaid is perfect for cause-and-effect expressions.
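For example (a toy scenario of my own invention), cause and effect reads directly off the arrows:

```mermaid
flowchart TD
    A[Server overloaded] --> B[Requests time out]
    B --> C[Client retries]
    C --> A
```

Asking the model to render its reasoning this way makes circular or missing causal links easy to spot.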

So with a model containing attention mechanisms, it's obvious.

I mean that truly.