Tree of Thoughts — a prompting method that outperforms chain-of-thought on planning tasks

The code was released today: GitHub - ysymyth/tree-of-thought-llm: Tree of Thoughts: Deliberate Problem Solving with Large Language Models

In their paper (linked from the GitHub repo) they write:
“Our experiments show that ToT significantly enhances language models’ problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.”


Interesting, thanks. Can this be applied with a specific prompting technique, or is it an algorithm leveraging the OpenAI API? Have you tried it already?

It is a general technique that you can use with ChatGPT, but you can also try to automate it via API calls. Their Python repo uses the openai lib.
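For anyone wanting to automate it, the search skeleton is roughly: propose several candidate "thoughts" per state, score them, and keep only the best few before expanding again (a breadth-first/beam search). Here is a minimal offline sketch of that loop; `generate_thoughts` and `score_thought` are hypothetical stand-ins for the model calls the repo makes via the openai lib, stubbed out so the skeleton runs without an API key:

```python
def generate_thoughts(state, k=3):
    # In practice: prompt the model to propose k candidate next reasoning
    # steps for this partial solution. Stubbed here for illustration.
    return [f"{state} -> step{i}" for i in range(k)]

def score_thought(state):
    # In practice: ask the model to rate the partial solution
    # (e.g. "sure / maybe / impossible" in the Game of 24 task).
    return len(state)  # stub: longer chains score higher

def tree_of_thoughts(root, depth=2, beam_width=2, k=3):
    """Beam-search over LLM-proposed thoughts, `depth` levels deep."""
    frontier = [root]
    for _ in range(depth):
        # Expand every surviving state into k candidate continuations.
        candidates = [t for s in frontier for t in generate_thoughts(s, k)]
        # Keep only the best `beam_width` partial solutions.
        frontier = sorted(candidates, key=score_thought, reverse=True)[:beam_width]
    return frontier

print(tree_of_thoughts("24 from 4 9 10 13"))
```

With real API calls plugged in, each `generate_thoughts` and `score_thought` invocation is a separate completion request, which is where the token-use question below comes in.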

I don’t have time to read the paper now; do they speak to how much this inflates token use?


This is key, imho, especially for LLMs like GPT-4 that charge double for output tokens.
I’ve found (see Prompts as psuedo-code - where are the limits? - #21 by bruce.dambrosio) that you can have an LLM do a lot of ‘reasoning’ without necessarily requiring it to report all the details in the output.
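As a back-of-envelope check on that trade-off, here is a tiny cost sketch. The `cost` helper, the token counts, and the per-1K-token prices (input at half the output price, matching the "double for output tokens" point) are all illustrative assumptions, not measurements:

```python
def cost(prompt_tokens, completion_tokens, price_in=0.03, price_out=0.06):
    """Dollar cost of one call at the given per-1K-token prices."""
    return prompt_tokens / 1000 * price_in + completion_tokens / 1000 * price_out

# One chain-of-thought call with a long, fully spelled-out answer:
cot = cost(500, 800)

# A small ToT run: many separate calls, but each asked to keep its
# output brief, so the doubled output price bites less per call.
tot = sum(cost(500, 120) for _ in range(14))

print(f"CoT: ${cot:.3f}  ToT: ${tot:.3f}")
```

Even with brief outputs per node, the many proposal/evaluation calls make the ToT run several times more expensive here, so keeping intermediate reasoning out of the output helps but does not eliminate the inflation.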
