Chain of Verification

Systematic claim-by-claim fact-checking

The LLM generates a draft, then systematically extracts verification questions, answers each independently from a ground-truth source, and revises only the claims that were wrong.

flowchart TD
    S([__start__]) --> D[draft]
    D --> CK[make_checklist]
    CK --> V[verify_each]
    V --> CR[compare_and_revise]
    CR --> E([__end__])

Chain of Verification (CoVe) is a structured hallucination-reduction technique. After generating a draft answer, the model extracts a checklist of factual claims as verification questions ("What is the price of X?", "Who authored Y?"). Each question is then answered independently, without the draft in context, by querying a ground-truth database.
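The verify-and-revise core of these stages can be sketched in plain Python. This is a minimal illustration, not a library API: it assumes the checklist has already been extracted into `{verification_question: claimed_answer}` pairs, and the function names (`verify_each`, `compare_and_revise`, `chain_of_verification`) are hypothetical.

```python
# Sketch of the verify-and-revise core of CoVe. In a real system the
# draft and checklist come from LLM calls; here the checklist is
# assumed to already be structured as {question: claimed_answer}.

def verify_each(questions, lookup):
    # Answer every question independently: the draft is never passed in,
    # so the verifier cannot be anchored to the draft's own claims.
    answers = {}
    for question in questions:
        truth = lookup(question)
        if truth is not None:   # skip questions the source cannot answer
            answers[question] = truth
    return answers

def compare_and_revise(claims, verified):
    # Rewrite only the claims that disagree with a verified answer.
    revised, corrected = {}, []
    for question, claimed in claims.items():
        truth = verified.get(question, claimed)
        if truth != claimed:
            corrected.append(question)
        revised[question] = truth
    return revised, corrected

def chain_of_verification(claims, ground_truth):
    # ground_truth stands in for a real database query layer.
    verified = verify_each(list(claims), ground_truth.get)
    return compare_and_revise(claims, verified)
```

Running this with one wrong claim ("Who wrote Hamlet?" answered with the wrong author) returns the corrected claim set plus the list of questions that were revised, while claims that check out pass through untouched.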

The independent verification step is critical: by answering each question without seeing the draft, the model cannot be anchored to its own prior claims. The verified answers are then compared to the draft's claims. Any discrepancy triggers a targeted revision of just that claim, preserving the rest of the draft.
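At the text level, a targeted revision can be as simple as patching only the disputed value and leaving the surrounding draft verbatim. This is a deliberate simplification (a production system would typically ask the model to rewrite the affected sentence), and `revise_claim` is an illustrative name:

```python
def revise_claim(draft, claimed, verified):
    # Patch only the first occurrence of the wrong value; the rest of
    # the draft text is preserved verbatim.
    if claimed == verified:
        return draft   # claim checked out: no edit needed
    return draft.replace(claimed, verified, 1)

draft = "Hamlet was written by Christopher Marlowe around 1600."
fixed = revise_claim(draft, "Christopher Marlowe", "William Shakespeare")
```

A claim that matches its verified answer leaves the draft unchanged, which is the point: only discrepancies trigger edits.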

CoVe is especially valuable for domain-specific question answering, where the model might confabulate plausible-sounding but incorrect details. The structured checklist makes the fact-checking process transparent and auditable: you can inspect exactly which claims were verified and which were corrected.
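The audit trail this enables can be as simple as one record per checklist question. The field names below are illustrative, not a prescribed schema:

```python
# Illustrative audit trail: one record per verification question, so a
# reviewer can see exactly what was checked and what was changed.
audit_log = [
    {"question": "Who wrote Hamlet?",
     "claimed": "Christopher Marlowe",
     "verified": "William Shakespeare",
     "revised": True},
    {"question": "What is the capital of France?",
     "claimed": "Paris",
     "verified": "Paris",
     "revised": False},
]

# Which claims did the pipeline actually change?
corrected_questions = [r["question"] for r in audit_log if r["revised"]]
```

Persisting records like these alongside the final answer is what makes the pattern auditable after the fact.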