In their paper (linked from the GitHub repo) they write:
“Our experiments show that ToT significantly enhances language models’ problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.”
Interesting, thanks. Can this be applied with a specific prompting technique, or is it an algorithm that leverages the OpenAI API? Have you tried it yet?
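For context: as described in the paper, ToT is more than a single prompt. It is a search procedure wrapped around many model calls: the model proposes candidate intermediate "thoughts", the model (or a heuristic) scores them, and a search strategy such as breadth-first search keeps only the best few at each depth. A minimal sketch of that loop, with hypothetical `propose` and `evaluate` stand-ins where a real implementation would make API calls:

```python
# Sketch of a Tree-of-Thoughts-style breadth-first search.
# In a real implementation, propose() and evaluate() would each be an
# LLM call (e.g. via the OpenAI API); here they are toy stand-ins so
# the control flow is runnable on its own.

def propose(state):
    # Hypothetical stand-in: expand a partial solution into candidates.
    return [state + [step] for step in ("a", "b", "c")]

def evaluate(state):
    # Hypothetical stand-in: score a candidate; an LLM would rate how
    # promising the partial solution looks. Here: penalize "c" steps.
    return -sum(1 for step in state if step == "c")

def tot_bfs(root, depth=3, beam=2):
    """Expand each frontier state, keep the `beam` best, repeat."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam]  # prune to the best `beam` states
    return frontier[0]

print(tot_bfs([]))
```

So it is an algorithm leveraging the API rather than a pure prompting trick, although the individual propose/evaluate steps are themselves implemented as prompts.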