agentic-ai-patterns

Reflection

Generate → Critique → Revise

The agent generates an answer, a separate critic evaluates it, and the agent revises until the critic is satisfied or the maximum iteration count is reached.

```mermaid
flowchart TD
    S([__start__]) --> G[generate]
    G --> R[reflect]
    R -->|approved| E([__end__])
    R -->|revise| V[revise]
    V --> R
```

Reflection adds a quality feedback loop to single-shot generation. After the agent produces a draft answer, a dedicated "critic" LLM evaluates it against explicit criteria — completeness, accuracy, relevance. The critic outputs either an approval or a structured critique that the agent uses to revise.
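A structured critique can be as small as a verdict plus a list of concrete issues. The shape below is a sketch, not a prescribed schema; the field names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Critique:
    """What the critic returns to the generator each round."""
    approved: bool                                    # True -> exit the loop
    issues: list[str] = field(default_factory=list)   # concrete revision points

# An approval carries no issues; a rejection lists what to fix.
rejection = Critique(approved=False, issues=["Claim in paragraph 2 lacks supporting evidence."])
```

Keeping the critique machine-readable (rather than free-form prose) lets the revise step address each issue explicitly and lets the loop condition stay a simple boolean check.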

In LangGraph, this is a three-node loop: `generate` → `reflect` → `revise` → back to `reflect`. A conditional edge after `reflect` either routes to `revise` (if critique found issues) or exits to `__end__` (if approved). A `MAX_ITERATIONS` guard prevents infinite loops.
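The control flow can be sketched without the framework. Here `generate`, `reflect`, and `revise` are stand-in functions (real nodes would call an LLM), and the names and stub logic are illustrative assumptions, not the LangGraph API; in LangGraph itself the routing would be a conditional edge on a `StateGraph`:

```python
MAX_ITERATIONS = 3  # guard against an endless generate/critique cycle

def generate(question: str) -> str:
    # stand-in for the generator LLM call
    return f"draft answer to: {question}"

def reflect(answer: str) -> list[str]:
    # stand-in critic: returns a list of issues; empty list means approved
    return [] if "revised" in answer else ["add supporting evidence"]

def revise(answer: str, issues: list[str]) -> str:
    # stand-in for the revision LLM call, conditioned on the critique
    return f"revised ({'; '.join(issues)}): {answer}"

def run(question: str) -> str:
    answer = generate(question)
    for _ in range(MAX_ITERATIONS):
        issues = reflect(answer)
        if not issues:            # critic approved -> route to __end__
            break
        answer = revise(answer, issues)
    return answer
```

The `for` loop plays the role of the `reflect` → `revise` → `reflect` cycle, and the range bound is the `MAX_ITERATIONS` guard: even if the critic never approves, the loop exits with the best draft so far.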

Reflection is the right choice when answer quality matters more than latency and you can articulate what "good" looks like. It is commonly combined with tool access — the generator queries a database, and the critic checks that claims are supported by the retrieved evidence.