
Adversarial Review

Red-teaming stories and designs before implementation

Adversarial Review is a BMAD grooming pattern where one persona is explicitly activated to challenge, break, and find failure modes in an artifact produced by another. It is the structured equivalent of a red team and prevents the optimism bias that single-persona review produces.

```mermaid
flowchart TD
    AR([Artifact<br>PRD, Story, or Design]) --> ADP[Adversarial Reviewer Persona<br>activated with critic prompt]
    ADP --> Q1[Under what conditions<br>does this fail?]
    ADP --> Q2[What assumptions<br>could be wrong?]
    ADP --> Q3[What edge case<br>was not considered?]
    Q1 --> FML[Failure Mode List]
    Q2 --> FML
    Q3 --> FML
    FML --> OA[Original Author<br>resolves each item]
    OA --> RQ{All resolved?}
    RQ -->|Yes| RD([Hardened artifact<br>ready for grooming])
    RQ -->|No| HR([Escalate unresolved<br>items to human review])
```

Every persona in BMAD has an optimism bias for its own artifacts. The PM wants the story to be complete. The Architect wants the design to be sound. These personas find what they look for. Adversarial Review deliberately breaks this pattern by activating a dedicated critic whose job is to find holes — not to be constructive, not to suggest fixes, but specifically to identify every way the artifact could fail, be misunderstood, or produce a bad outcome.

The Adversarial Reviewer persona operates with a different prompt from the standard review personas. It asks: "Under what conditions does this fail?" "What assumption does this make that could be wrong?" "If someone implemented this exactly as written, what would go wrong?" "What edge case was not considered?" The output is a list of failure modes, not a list of improvements.
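A minimal sketch of how such a critic prompt could be assembled, assuming a simple prompt-composition helper (the function name and prompt wording are illustrative, not part of BMAD itself). The question list is taken directly from the pattern above; the instruction to report failure modes only, with no fixes, is the defining constraint:

```python
# Hypothetical helper for activating the Adversarial Reviewer persona.
# The questions come from the pattern description; the wrapper text is
# an assumption about how one might phrase the critic instruction.
ADVERSARIAL_QUESTIONS = [
    "Under what conditions does this fail?",
    "What assumption does this make that could be wrong?",
    "If someone implemented this exactly as written, what would go wrong?",
    "What edge case was not considered?",
]


def build_critic_prompt(artifact_text: str) -> str:
    """Compose a critic prompt that asks for failure modes, not improvements."""
    header = (
        "You are the Adversarial Reviewer. Your only job is to find "
        "failure modes in the artifact below. Do not suggest fixes. "
        "Do not be constructive. Output a list of failure modes.\n"
    )
    questions = "\n".join(f"- {q}" for q in ADVERSARIAL_QUESTIONS)
    return f"{header}\nQuestions to answer:\n{questions}\n\nArtifact:\n{artifact_text}"
```

The key design point is that the prompt inverts the usual review framing: it forbids suggestions and demands failure modes, which is what separates this persona from a standard reviewer.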

The failure mode list then goes back to the original author for resolution. Each item must be resolved — either by modifying the artifact or by providing a documented argument for why the failure mode is not a real risk. Items that cannot be resolved and cannot be dismissed go to human review. Adversarial Review is most valuable for stories touching security, data integrity, or irreversible operations where silent failures are expensive.