My theory: Flex Layered Instruction-frames Prompting (FLIP)
A prompt structure in which multiple instruction frames/prompts are set for GPT, and the frames are layered and applied depending on the situation. A number of general instructions, each written for a different situation, overlap to change GPT's responses.
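To make the layering concrete, here is a minimal sketch of one way it could be wired up, assuming the OpenAI Python SDK. The frame names, their contents, the selection logic, and the model name are my own illustrative assumptions, not a fixed part of FLIP itself.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical instruction frames; the names and contents are illustrative only.
FRAMES = {
    "base": "You are a calm, concise assistant.",
    "scientific": "Answer as a scientist speaking to another scientist.",
    "cautious": "Avoid speculation; flag any uncertainty explicitly.",
}

def build_messages(situation: list[str], user_prompt: str) -> list[dict]:
    """Layer the frames that match the current situation on top of the base frame."""
    layered = [FRAMES["base"]] + [FRAMES[name] for name in situation]
    # Each active frame becomes one system message, stacked in order.
    messages = [{"role": "system", "content": frame} for frame in layered]
    messages.append({"role": "user", "content": user_prompt})
    return messages

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; this name is an assumption
    messages=build_messages(
        ["scientific", "cautious"],
        "Explain CRISPR off-target effects.",
    ),
)
print(response.choices[0].message.content)
```

The point is only that each frame is an independent unit, so different combinations can be stacked per situation without rewriting one monolithic prompt.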
But I don’t know where to announce it. Of course, it may not be useful. It’s been very popular within the small group I know of, but how can I get more people to try it?
If anyone is interested, please try this theory.
Search: Flex Layered Instruction-frames Prompting
or Sharaku Satoh Medium
I think someone will find it, so I’ll add my thoughts here from time to time.
ChatGPT (and possibly other LLMs) will construe the user's context and generate word choices that match GPT's personality or pseudo-personality.
I discovered that differences in the intensity of emotional expressions in input prompts, graded using “Plutchik’s Wheel of Emotions”, are reflected in GPT’s output.
When you want to set GPT to avoid something, you can control the contextual connotations, biases, and strength of commitment through the intensity of the words in the input prompt, e.g. Boredom→Dislike→Disgust→Loathing.
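Here is a small sketch of how that intensity gradient could be swept programmatically, again assuming the OpenAI Python SDK. The scale follows the Boredom→Loathing axis above, but the template wording, topic, and model name are my own illustrative choices.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One axis of Plutchik's wheel, ordered from mild to intense.
AVERSION_SCALE = ["boredom", "dislike", "disgust", "loathing"]

def avoidance_frame(topic: str, intensity: int) -> str:
    """Build an instruction whose emotional weight scales with `intensity` (0-3)."""
    feeling = AVERSION_SCALE[intensity]
    return f"You feel {feeling} toward {topic}. Let that attitude shape your word choices."

# Compare how the same question is answered at each intensity level.
for level in range(len(AVERSION_SCALE)):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": avoidance_frame("clickbait headlines", level)},
            {"role": "user", "content": "Suggest a title for my article on sleep research."},
        ],
    )
    print(AVERSION_SCALE[level], "->", response.choices[0].message.content)
```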
I heard that “lateral thinking” is popular in prompt engineering. In my experiments, “crossover thinking” was also effective, so please try it if you like.
Keywords that I think are useful for GPT: Inversion thinking, Analogical thinking, Scenario planning, High-level abstraction, Design thinking, Systems thinking, Critical thinking, Metacognition.
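One simple way to try these keywords is to name the thinking mode explicitly in the prompt. A minimal sketch, assuming the OpenAI Python SDK; the prompt template, sample question, and model name are mine, only the keyword list comes from the text above.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Keyword list from the text above; the template wording is my own.
THINKING_MODES = [
    "lateral thinking", "crossover thinking", "inversion thinking",
    "analogical thinking", "scenario planning", "high-level abstraction",
    "design thinking", "systems thinking", "critical thinking", "metacognition",
]

def with_mode(mode: str, question: str) -> str:
    """Prefix the question with an explicit thinking-mode instruction."""
    return f"Apply {mode} to the following question, and show your steps:\n{question}"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": with_mode(THINKING_MODES[1],  # "crossover thinking"
                             "How could a library increase visits by young adults?"),
    }],
)
print(response.choices[0].message.content)
```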
Presenting the required level of output: GPT does not know what level of answer the user is looking for. You can therefore improve performance by clarifying in the prompt the level of answer you expect and require.
For example, when you want a scientific explanation, stating the premise that ChatGPT is a scientist and that the user is also a scientist raises the level of the content GPT outputs.
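To see the effect, you can run the same question with and without the scientist-to-scientist premise. A minimal sketch, assuming the OpenAI Python SDK; the premise wording, sample question, and model name are my own illustration of the setup described above.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Why does ice float on water?"

# Illustrative wording of the scientist-to-scientist premise.
PREMISE = (
    "You are a scientist. The user is also a scientist, "
    "so answer at a professional, technical level."
)

# Run once without the premise and once with it, then compare the outputs.
for system in (None, PREMISE):
    messages = ([{"role": "system", "content": system}] if system else []) + [
        {"role": "user", "content": QUESTION}
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=messages,
    )
    label = "with premise" if system else "no premise"
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```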