bmad-method

QA as a Cross-Cutting Concern

Quality at every stage, not the end

BMAD has no explicit QA persona. Instead, each persona applies a quality gate to its own output before handing off. Quality in AI-driven development is a prevention model, not a post-implementation catch model.

```mermaid
flowchart LR
    BA[BA Phase<br>QA gate:<br>requirements<br>testable?] --> PM[PM Phase<br>QA gate:<br>failure paths<br>covered?]
    PM --> AR[Architect Phase<br>QA gate:<br>interfaces testable<br>in isolation?]
    AR --> DV[Developer Phase<br>QA gate:<br>tests against AC,<br>not from code]
```

Traditional QA sits at the end of the development pipeline, catching problems after the code is written. BMAD shifts this to a prevention model: each persona is accountable for the verifiability of its own artifact before handing off to the next. The result is that most issues QA would have caught are eliminated upstream, at the stage where they are least expensive to fix.

The BA's quality gate: are the requirements complete enough to be testable? Every acceptance criterion originates in the BA or PM phase, not in a testing pass after the fact. The PM's gate: do stories cover failure paths? The Architect's gate: are interfaces defined clearly enough to be tested in isolation? The Developer's gate: does every acceptance criterion have a test, written against the spec rather than inferred from the implementation?
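BMAD prescribes no code for this, but the per-persona gates can be modeled as a handoff checklist: a persona may pass work downstream only when every one of its own gates is closed. A minimal sketch, with all names invented for illustration:

```python
# Hypothetical sketch of BMAD's per-persona quality gates as a handoff
# checklist. QualityGate and ready_to_hand_off are illustrative names,
# not part of the BMAD method itself.
from dataclasses import dataclass


@dataclass
class QualityGate:
    persona: str
    question: str
    passed: bool = False


def ready_to_hand_off(gates: list[QualityGate], persona: str) -> bool:
    """A persona may hand off only when all of its own gates have passed."""
    own = [g for g in gates if g.persona == persona]
    return bool(own) and all(g.passed for g in own)


gates = [
    QualityGate("BA", "Are requirements complete enough to be testable?"),
    QualityGate("PM", "Do stories cover failure paths?"),
    QualityGate("Architect", "Can interfaces be tested in isolation?"),
    QualityGate("Developer", "Is every acceptance criterion covered by a "
                             "test written against the spec?"),
]

gates[0].passed = True                       # BA closes its gate
assert ready_to_hand_off(gates, "BA")        # BA may hand off to PM
assert not ready_to_hand_off(gates, "PM")    # PM's gate is still open
```

The point of the sketch is the ordering constraint: each handoff is blocked on the upstream persona's own gate, so no phase can defer verification to a later one.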

The specific risk in AI-driven development is that the AI writes tests confirming what it implemented, not what the requirement asked for. The safeguard is acceptance criteria written before any code exists — in the BA or PM phase — that become the independent benchmark for test writing. Tests that can only be explained by reading the implementation are not acceptance tests.
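To make the distinction concrete, here is an invented example of a test derived from an acceptance criterion alone. The AC text, `validate_discount`, and its signature are all hypothetical, not part of BMAD; what matters is that the test names only the outcome the spec promises, never the branch structure of the implementation:

```python
# Illustrative only: an acceptance test written from a criterion that
# existed before the code. validate_discount is an invented example.

# AC (written in the PM phase, before implementation):
# "An expired discount code is rejected with the reason 'expired'."

from datetime import date


def validate_discount(expiry: date, today: date) -> tuple[bool, str]:
    # Implementation detail; the test below must not depend on it.
    if today > expiry:
        return (False, "expired")
    return (True, "")


def test_expired_code_is_rejected():
    # Derived from the AC text alone: asserts the promised outcome and
    # reason, not how the implementation computes them.
    ok, reason = validate_discount(date(2024, 1, 1), date(2024, 6, 1))
    assert not ok
    assert reason == "expired"


test_expired_code_is_rejected()
```

A test like this can be written, and explained, without ever opening the implementation file; by the section's criterion, that is what qualifies it as an acceptance test.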