I am confused by the difference between multi-shot and Chain of Thought prompting.
Does Chain of Thought prompting involve supplying the prompt with many examples that work through the problem logically, step by step? How would this be different from multi-shot prompting?
For example, in this chain of thought paper example, the blue text is where the chain of thought comes in. In practice, does each of my prompts need to contain an example of the blue text? Or is the blue text the output of the LLM? Thanks
The provided example is one-shot, in-context learning. What makes it a chain-of-thought prompt is that it gives the model the “thought process” it should use.
In essence, this is telling the model it should break the problem down into simpler steps, what those steps are, and how to solve them.
One-shot and few-shot prompts, by contrast, generally just show input/output pairs and don't give the model a specific thought process to follow.
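To make that concrete, here is a rough sketch of the two prompt styles as Python strings. The exemplar is a paraphrase of the arithmetic example commonly shown in the chain-of-thought paper's figure, so treat the exact wording as illustrative rather than authoritative:

```python
# One-shot prompt: the exemplar shows only the final answer.
one_shot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

# Chain-of-thought version of the same one-shot prompt: the exemplar also
# spells out the intermediate reasoning (the "blue text" in the paper),
# nudging the model to produce similar step-by-step reasoning in its answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
```

So to answer the original question: the blue text lives in the exemplar you supply, and the hope is that the model then emits its own version of it for the new question.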
Can someone suggest prompts to solve this: "Suppose I have a cabbage, a goat and a lion, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the cabbage and lion alone together, and I am not allowed to leave the lion and goat alone together. How can I safely get all three across?"
Search for "Theory of Mind", "LLM", and "prompt"; that should put you in the ballpark and help you understand why such tasks are currently more likely to fail than succeed with LLMs.
Side note, you really should have started a new topic to ask that question.
It seems even GPT-3.5 can answer it just fine, which stands to reason, as it's a known problem that was almost certainly represented in the training data numerous times.
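If you want to check that yourself, something along these lines should work. This is a minimal sketch using the OpenAI Python client; the model name and the "think step by step" system instruction are my assumptions here, not something established in this thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

puzzle = (
    "Suppose I have a cabbage, a goat and a lion, and I need to get them "
    "across a river. I have a boat that can only carry myself and a single "
    "other item. I am not allowed to leave the cabbage and lion alone "
    "together, and I am not allowed to leave the lion and goat alone "
    "together. How can I safely get all three across?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # A zero-shot chain-of-thought nudge; without it the model is more
        # likely to pattern-match the classic wolf/goat/cabbage solution
        # instead of respecting this variant's constraints.
        {"role": "system", "content": "Work through the constraints step by step before giving the final plan."},
        {"role": "user", "content": puzzle},
    ],
)
print(response.choices[0].message.content)
```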