When I started writing prompts, I found k-shot examples to be a great helper.
The task was nuanced (not difficult) and much easier to explain with a few examples than with words.
Now I am in a position where:
- The output format is extremely clear & logical
- The task is hard
- Adding examples is unhelpful (wasted cognitive effort & a distraction).
So I decided to create a chart of the many issues one may encounter and how they relate to the value of k.
This is by no means exhaustive, especially within the much vaster field of Prompt Design.
flowchart TD
%% Styles for the K-levels
classDef kZero fill:#ffebec,stroke:#d62728,stroke-width:2px,color:#000;
classDef kOne fill:#fff6d5,stroke:#ff7f0e,stroke-width:2px,color:#000;
classDef kMany fill:#e2f7e2,stroke:#2ca02c,stroke-width:2px,color:#000;
classDef context fill:#f4f4f4,stroke:#333,stroke-dasharray: 5 5;
%% ENTRY POINT
Start((Start: Analyze Task)) --> Decision{Reasoning vs Pattern?}
%% ==========================================
%% BRANCH 1: COMPLEX REASONING (The "Logic" Path)
%% ==========================================
Decision -- "Deep Reasoning / Analysis<br/>(e.g., Semantic Overspecification)" --> LogicPath[<b>Logic Priority</b><br/>Value is in the 'Thinking']
LogicPath --> InputCheck{Input Characteristics}
InputCheck -- "High Variance / Dense Text<br/>(Requires 100% Attention)" --> AttentionIssue(<b>Issue: Attention Dilution</b><br/>Context Clutter reduces focus)
InputCheck -- "Nuanced / Abstract<br/>(Edge Cases likely)" --> AnchorIssue(<b>Issue: The False Anchor</b><br/>Model ignores 'Sad Path' anomalies)
LogicPath --> FailCheck{Observed Failures}
FailCheck -- "Model copies tone/bias<br/>instead of rules" --> Syco(<b>Issue: Sycophancy</b><br/>Content Overfitting)
FailCheck -- "Model skips logic steps<br/>to mimic example length" --> Shortcut(<b>Issue: Reasoning Shortcut</b><br/>Logic Degradation)
%% All these issues point to k=0
AttentionIssue --> K0
AnchorIssue --> K0
Syco --> K0
Shortcut --> K0
%% ==========================================
%% THE SYNTAX EXCEPTION (The "Structure" Path)
%% ==========================================
LogicPath --> SyntaxCheck{<b>Exception Check</b><br/>Is Output Format Rigid?}
SyntaxCheck -- "Yes (e.g., YAML/JSON)" --> SyntaxFail{Is Syntax Failing?}
SyntaxFail -- "Yes (Lists/Nesting broken)" --> K1
SyntaxFail -- "No (Syntax perfect, Logic weak)" --> K0
%% ==========================================
%% BRANCH 2: GENERATIVE TASKS (The "Style" Path)
%% ==========================================
Decision -- "Generative / Pattern<br/>(e.g., Paraphrasing, Grammar)" --> StylePath[<b>Style Priority</b><br/>Value is in the 'Format']
StylePath --> CalibNeed{Need Calibration?}
CalibNeed -- "Unsure of Verbosity/Register" --> K1_Candidate(Initial k=1)
K1_Candidate --> BiasCheck{<b>Issue: The False Pattern</b><br/>Is 1 example creating bias?}
BiasCheck -- "Yes: Model mimics sentence type<br/>(e.g., Only asks questions)" --> KMany
BiasCheck -- "Yes: Model copies specific words<br/>(e.g., Lexical Anchor)" --> KMany
BiasCheck -- "No: Output is stable" --> K1
%% ==========================================
%% OUTCOMES
%% ==========================================
%% K=0 NODE
K0[<b>k=0 : Zero-Shot</b><br/><i>Use Instructions & System Prompt</i>]:::kZero
K0 --- K0_Note["<b>Strategy:</b><br/>• Use Structured Chain-of-Thought<br/>• Define Schema in System Prompt<br/>• Avoid semantic contamination"]:::context
%% K=1 NODE (The Cognitive Firewall)
K1[<b>k=1 : The 'Cognitive Firewall'</b><br/><i>Hybrid / Sanitized Example</i>]:::kOne
K1 --- K1_Note["<b>Strategy:</b><br/>• Use 'Inline Schema Examples'<br/>• <b>CRITICAL:</b> Use Placeholders for Logic<br/>• e.g., reasoning: '[Detailed analysis here]'<br/>• Prevents 'Effort Anchoring'"]:::context
%% K=MANY NODE
KMany[<b>k = 3 to 5 : Few-Shot</b><br/><i>Diversity / Calibration</i>]:::kMany
KMany --- KMany_Note["<b>Strategy:</b><br/>• Ensure High Variance in examples<br/>• 1 Question, 1 Statement, 1 Negative<br/>• Dilutes 'Lexical Anchors'"]:::context
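The three outcomes above can be sketched as prompt templates. This is a minimal illustration in Python; the schema, the placeholder text, and the example sentences are all hypothetical, not a prescribed implementation:

```python
# Hypothetical prompt templates for the three k-levels in the chart.

# k=0: pure instruction -- the schema lives in the system prompt, no examples.
K0_SYSTEM = (
    "Analyze the sentence and respond in YAML with the keys:\n"
    "  reasoning: <free text>\n"
    "  is_anomaly: <true|false>\n"
    "Think step by step before deciding."
)

# k=1: the 'Cognitive Firewall' -- one sanitized example that shows the rigid
# structure but uses a placeholder for the logic, so the model cannot anchor
# on the example's content or its level of effort.
K1_EXAMPLE = (
    "reasoning: '[Detailed analysis of semantic boundaries here]'\n"
    "is_anomaly: false"
)

# k=3..5: diverse few-shot -- one question, one statement, one negative,
# to dilute lexical anchors and avoid 'all outputs mimic example 1'.
KMANY_EXAMPLES = [
    ("Do you have a pen?", "question"),
    ("She has two brothers.", "statement"),
    ("He does not have time.", "negative"),
]


def build_prompt(k: int, task_input: str) -> str:
    """Assemble a prompt for the chosen k-level (illustrative only)."""
    parts = [K0_SYSTEM]
    if k == 1:
        parts.append("Example output:\n" + K1_EXAMPLE)
    elif k >= 3:
        for sentence, kind in KMANY_EXAMPLES[:k]:
            parts.append(f"Example ({kind}): {sentence}")
    parts.append("Input: " + task_input)
    return "\n\n".join(parts)
```

The point of the `k == 1` branch is that the structure is real but the reasoning field is a placeholder, which is exactly what keeps it a firewall rather than a content anchor.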
sequenceDiagram
participant Z as k=0<br/>(Pure Instruction)
participant O as k=1<br/>(Hybrid / Anchor)
participant F as k=3 to 5<br/>(Pattern / Diversity)
rect rgb(255, 245, 245)
Note left of Z: CONTEXT: Deep Reasoning<br/>Input Variance: High<br/>Example Variance: Low<br/>Task: Why > What
Note over Z: ISSUE: Sycophancy (Content Overfitting)<br/>Mechanism: Variance Mismatch<br/>Detail: Model adopts example tone or bias<br/>instead of applying abstract rules.
Note over Z: ISSUE: Reasoning Shortcut (Logic Degradation)<br/>Mechanism: Skipping Inference Compute<br/>Detail: Model mimics Input-to-Output jump<br/>ignoring Input-to-Thinking-to-Output chain.
Note over Z: ISSUE: Attention Dilution (Context Clutter)<br/>Mechanism: Token Weight reduction<br/>Detail: K-shots steal focus from Dense Input<br/>Subtle contradictions in input are missed.
Note over Z: ISSUE: The False Anchor (Edge Case Blindness)<br/>Mechanism: Expectation Bias<br/>Detail: Example shows Happy Path (Standard)<br/>Model fails to flag Sad Path (Anomalies).
Z->>O: EXCEPTION: Syntax Failure<br/>Logic is variable but Schema YAML is rigid<br/>Model consistently breaks format or indentation.
end
rect rgb(255, 252, 240)
Note over O: DANGER ZONE: The Effort Anchor<br/>Mechanism: Cognitive Ceiling<br/>Detail: Example is simple (two birds)<br/>Input is abstract (freedom)<br/>Model truncates reasoning to match example brevity.
Note over O: DANGER ZONE: Semantic Contamination<br/>Mechanism: Domain Bleed<br/>Detail: Example is about Numerals. Input is Verbs<br/>Model hallucinates Numeral rules for Verbs.
Note over O: DANGER ZONE: Instruction Conflict<br/>Mechanism: Example Authority<br/>Detail: LLMs trust examples more than comments<br/>If example list is short model ignores<br/>list all instruction.
Note over O: SOLUTION: The Cognitive Firewall (Sanitized 1-Shot)<br/>Technique: Inline Schema Example<br/>Detail 1: Keep Structure (Boolean or Lists)<br/>Detail 2: Use Placeholder for Logic<br/>reasoning: [Detailed analysis of semantic boundaries...]<br/>Prevents Implicit Instruction (blank = optional).
end
rect rgb(240, 245, 255)
Note over O: CONTEXT: Generative or Calibration<br/>Task: How much? > Why?<br/>Need to set Verbosity and Register.
Note over O: BENEFIT: Calibration Vectors<br/>Detail: Examples define Exhaustiveness<br/>(e.g. How many variations? Slang allowed?)
O->>F: FAILURE: The False Pattern (Syntax Overfitting)<br/>Mechanism: Hidden Rule Inference<br/>Detail: Example was a Question<br/>Model thinks ALL outputs must be Questions.
O->>F: FAILURE: The Lexical Anchor (The Parrot)<br/>Mechanism: Low Semantic Distance<br/>Detail: Example used Tener (To have)<br/>Input is Ser (To be)<br/>Model copies Tener explanation text.
end
rect rgb(235, 255, 235)
Note right of F: SOLUTION: High Variance Diversity<br/>Strategy: Dilute Attention Weights<br/>Detail: Provide 1 Question 1 Statement 1 Negative<br/>Forces model to generalize structure<br/>not copy specific words.
end
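The "High Variance Diversity" fix at the end can be sketched as a small validation helper. The structural categories and the classification heuristics below are illustrative assumptions, not a robust classifier:

```python
# Minimal sketch: check that a few-shot set is diverse enough to dilute
# lexical and structural anchors. Heuristics are deliberately crude.

def classify(sentence: str) -> str:
    """Crude structural classifier: question, negative, or statement."""
    s = sentence.strip().lower()
    if s.endswith("?"):
        return "question"
    words = s.replace(".", "").split()
    if any(w in {"not", "no", "never"} for w in words) or "n't" in s:
        return "negative"
    return "statement"


def is_high_variance(examples: list[str]) -> bool:
    """A diverse set covers at least one question, statement, and negative."""
    kinds = {classify(e) for e in examples}
    return {"question", "statement", "negative"} <= kinds
```

Running such a check before shipping a few-shot prompt is a cheap guard against the "False Pattern" failure, where the model infers a hidden rule (e.g., "all outputs must be questions") from an accidentally uniform example set.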