Is it possible to adapt Graph of Thought as a prompt engineering technique, without coding? The original paper: [2305.16582] Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
See, questions like this one are exactly why I don’t like how we’ve categorized prompt engineering, and our vocabulary for these subjects in general.
CoT and GoT reasoning appear to be two fundamentally different things, but I don’t blame you for getting confused; even I had to double-check what was going on. The short answer is no: the information presented in this paper cannot be turned directly into a prompting technique, although it would be quite nice if it could. However, it may be possible to build a program that programmatically constructs queries based on these frameworks. Note, though, that doing so would not be the same thing as what this paper describes; you would likely just be recreating Tree-of-Thought reasoning instead (see here: Advanced Prompt Engineering Techniques: Tree-of-Thoughts Prompting | Deepgram). ToT is actually pretty good, though, and may be close enough to what you’d like; I’m not sure.
To help distinguish things: CoT is a prompting technique that’s basically a simple, linear logic flow. If this, then this, then this, etc. It is easily demonstrable because chat formats are linear: if logic can be followed sequentially, it can be represented in language, and thus easily understood by the LLM. Conversations inherently follow this same linear flow, which is why CoT works as a technique in conversation formats. Because of this, it is also easy to train and fine-tune models to say “hey, see this conversation? Do more of this,” enhancing their ability to follow along with this kind of logic pattern.
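Since CoT really is just prompt text, assembling one can be sketched in a few lines. Everything here (the helper name, the exemplar wording) is my own illustration, not anything from the paper:

```python
def build_cot_prompt(exemplars, question):
    """Assemble a linear chain-of-thought prompt: worked, step-by-step
    examples first, then the new question for the model to continue."""
    parts = [f"Q: {q}\nA: {reasoning}" for q, reasoning in exemplars]
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

# One worked exemplar showing the sequential "if this, then this" flow.
exemplars = [
    ("A shop has 23 apples, sells 9, then receives 12 more. How many now?",
     "23 - 9 = 14 apples remain; 14 + 12 = 26. The answer is 26."),
]
prompt = build_cot_prompt(
    exemplars,
    "A bus has 15 riders; 6 get off and 4 get on. How many now?",
)
print(prompt)
```

The whole technique lives in the string: the exemplar demonstrates the linear reasoning pattern, and the model is nudged to imitate it.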
GoT, on the other hand, at least as the paper presents it, is different. It’s more like an alternative model for AI reasoning: a framework implemented into an LLM more directly to help it reason better. Basically, it’s a way to represent, inside an AI model, the more subconscious processes present in human cognition. Essentially, the authors built a model capable of this kind of reasoning; they needed to fine-tune a small model for 50 epochs to exemplify the reasoning they proposed. LLMs are not inherently capable of this kind of reasoning, so they need to be trained to understand this way of thinking.
It’s not really a technique like CoT; it’s more like a methodology that maps out an alternative way for a model to reason, one that better mirrors human cognition. Philosophically speaking, turning this into a prompting technique would be extremely difficult and time-consuming, if it’s even possible. While it mirrors cognition well, that doesn’t mean it mirrors the way we express our reasoning well. Now, you could construct your own “thought graph” to conglomerate prompts/queries from different nodes, but that creates something complex, hard to follow, and difficult to use. Tree-of-Thought would be a much better technique to engineer prompts with, mostly because it achieves much of the same thing without the complexity and confusion. You get the best of both worlds, basically.
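To make the complexity concrete, here is a toy sketch of what a hand-built “thought graph” would look like (all node names and contents are hypothetical, and this is not what the paper implements). Nodes hold partial prompts, edges say which thoughts feed into which, and flattening the graph back into one query already hints at why this gets hard to follow by hand:

```python
from graphlib import TopologicalSorter

# Hypothetical thought nodes: each holds a fragment of the eventual query.
nodes = {
    "premise_a": "Sales dropped 12% in Q3.",
    "premise_b": "Marketing spend was flat over the same period.",
    "merge":     "Considering both facts above, list plausible causes.",
    "conclude":  "Pick the single most likely cause and justify it.",
}
# Edges map each node to the thoughts it depends on (its predecessors).
edges = {"merge": {"premise_a", "premise_b"}, "conclude": {"merge"}}

# Flatten the graph into a linear query by topological order,
# so every thought appears after the thoughts it builds on.
order = list(TopologicalSorter(edges).static_order())
query = "\n".join(nodes[n] for n in order)
print(query)
```

Even this four-node example needs a dependency solver just to produce a usable prompt; a realistic graph would be far messier, which is why a tree (ToT) is the more practical shape for prompting.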
In short, this is great for AI reasoning and terrible as a prompting technique. In fact, if implemented well inside a model, GoT should be able to remove the need for complex prompt engineering, because the model would “subconsciously” reason better with far less instruction, much like a human does.
I haven’t read the paper, but you can use RAG to lock onto specific chunks of text, traverse up and down the nodes if the corpus is organized like a graph, and then present the full set of relations (and the upstream/downstream chunks) to the LLM in the prompt to draw out a “reasoned response.”
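A toy sketch of that graph-flavored RAG idea (all chunk contents, link tables, and the `neighborhood` helper are hypothetical, and the retrieval hit is hard-coded rather than computed): given a chunk matched by retrieval, walk one hop upstream and downstream and pack the whole neighborhood into the prompt so the model sees the relations, not just the single hit.

```python
# Hypothetical chunk store with explicit graph structure.
chunks = {
    "c1": "Intro: the billing service owns invoices.",
    "c2": "Invoices are generated nightly by the cron worker.",
    "c3": "Failed invoices are retried three times, then flagged.",
}
parents = {"c2": "c1", "c3": "c2"}        # upstream links
children = {"c1": ["c2"], "c2": ["c3"]}   # downstream links

def neighborhood(hit):
    """Collect the hit chunk plus its one-hop upstream and downstream chunks."""
    ids = [parents[hit]] if hit in parents else []
    ids.append(hit)
    ids.extend(children.get(hit, []))
    return ids

hit = "c2"  # pretend the retriever matched this chunk
context = "\n".join(chunks[i] for i in neighborhood(hit))
prompt = (
    f"Context:\n{context}\n\n"
    "Using the relations above, answer: when are failed invoices flagged?"
)
print(prompt)
```

In a real system the hit would come from an embedding search and the links from however the corpus is structured, but the shape of the trick is the same: the graph lives in the retrieval layer, and the LLM just reads the flattened neighborhood.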