Tree of Thoughts (ToT) treats problem-solving as a search problem. Instead of committing to a single chain of thought, the agent generates multiple candidate "thoughts" at each depth level, scores them with a judge LLM, and expands only the top-K highest-scoring branches. This is breadth-first search over reasoning steps, with top-K pruning at each level that makes it a beam search in practice.
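The search loop can be sketched in a few lines of plain Python. Here `propose` and `judge` are hypothetical stubs standing in for the generator and judge LLM calls; the constants `BRANCHES`, `TOP_K`, and `MAX_DEPTH` are illustrative values, not fixed parameters:

```python
# Minimal ToT sketch: breadth-first expansion with top-K pruning (beam search).
# propose() and judge() are stand-ins for LLM calls, not a real model API.
BRANCHES, TOP_K, MAX_DEPTH = 3, 2, 2

def propose(thought: str, i: int) -> str:
    """Stub generator: derive one candidate next thought from a parent."""
    return f"{thought}->step{i}"

def judge(thought: str) -> float:
    """Stub judge: score a thought on a 1-10 scale (here, just by length)."""
    return min(10.0, len(thought) / 4)

def tree_of_thoughts(problem: str) -> str:
    frontier = [problem]                      # active nodes at this depth
    for _ in range(MAX_DEPTH):
        # Expand every active node into BRANCHES candidate thoughts
        candidates = [propose(t, i) for t in frontier for i in range(BRANCHES)]
        # Score all candidates and keep only the top-K as the new frontier
        frontier = sorted(candidates, key=judge, reverse=True)[:TOP_K]
    return max(frontier, key=judge)           # best-scoring leaf is the answer

print(tree_of_thoughts("goal"))
```

A single chain of thought is the degenerate case `BRANCHES = TOP_K = 1`; everything above that trades LLM calls for wider exploration.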
In LangGraph, each expansion cycle is: `expand` (generate `BRANCHES=3` new thoughts from each active node) → `score` (judge LLM rates each thought 1–10) → conditional routing. If `depth < MAX_DEPTH`, the top-K thoughts become the seeds for the next round. Otherwise, the best-scoring leaf becomes the answer.
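The routing step can be sketched as an ordinary Python predicate. In a real LangGraph build this function would be passed to `add_conditional_edges`; the state shape below (a dict with hypothetical `depth` and `scored` keys) is an assumption for illustration, not a fixed LangGraph schema:

```python
# Sketch of the conditional routing and top-K seed selection, assuming a
# state dict {"depth": int, "scored": [(thought, score), ...]}.
# These keys are hypothetical; LangGraph lets you define your own state.
MAX_DEPTH, TOP_K = 3, 2

def route(state: dict) -> str:
    """Conditional edge: loop back to `expand` until MAX_DEPTH, then finish."""
    return "expand" if state["depth"] < MAX_DEPTH else "finish"

def seeds_for_next_round(state: dict) -> list[str]:
    """The top-K highest-scoring thoughts seed the next expansion round."""
    ranked = sorted(state["scored"], key=lambda ts: ts[1], reverse=True)
    return [thought for thought, _ in ranked[:TOP_K]]

state = {"depth": 1, "scored": [("A", 7.0), ("B", 9.5), ("C", 4.0)]}
print(route(state), seeds_for_next_round(state))  # routes back to expand
```

When `route` returns `"finish"`, the same ranking logic picks the single best leaf as the final answer instead of seeding another round.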
ToT can significantly outperform single-chain reasoning on tasks that require exploration and backtracking: complex puzzles, creative writing, or multi-step planning where early decisions constrain later options. The trade-off is cost. Every round spends one generation call per new thought in `expand` plus one judge call per thought in `score`, so you are running many times more LLM calls than a single chain.
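The cost multiplier is easy to estimate. A back-of-envelope sketch, assuming each new thought costs one generation call plus one judge call and the frontier is pruned to top-K each round (illustrative constants, not prescribed values):

```python
# Rough LLM-call count for ToT vs. a single chain of thought.
# Assumes one generation call + one judge call per candidate thought.
BRANCHES, TOP_K, MAX_DEPTH = 3, 2, 3

def tot_llm_calls() -> int:
    calls, active = 0, 1            # search starts from the root problem
    for _ in range(MAX_DEPTH):
        new = active * BRANCHES     # `expand`: one generation call per thought
        calls += new + new          # `score`: one judge call per thought
        active = min(new, TOP_K)    # prune the frontier to top-K
    return calls

print(tot_llm_calls())  # vs. 1 generation call for a single chain
```

With these settings the bill is a few dozen calls for a three-level search, which is why ToT is usually reserved for tasks where a single chain demonstrably fails.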