BMAD structures development around a chain of personas — Business Analyst, Product Manager, Architect, Developer — each narrowing the scope of what the next step needs to decide. There's no explicit QA persona in the model. That's not an oversight. It's a clue about where quality actually lives in an AI-driven workflow.
Why QA isn't a separate persona
In a traditional development process, QA is a checkpoint after the code is written. A human developer interprets requirements, writes something, and QA verifies it against expectations, catching the gap between what was asked and what was built.
BMAD changes the structure of that gap. When each persona produces a documented artefact — requirements, user stories, architecture, implementation — the decisions are explicit and reviewable at each stage. QA work moves upstream. By the time code exists, a lot of what QA would have caught should already be resolved.
This doesn't mean QA disappears. It means the role shifts from "find the problems after the fact" to "prevent them from reaching the next stage."
Where quality work actually happens in BMAD
At the Business Analyst stage, quality looks like: are the requirements complete enough to be testable? Acceptance criteria belong here, not at the end of the pipeline. If a BA artefact can't be verified — if there's no way to know when a feature is done — that's a quality problem, and it should be caught before the PM starts writing stories.
At the Product Manager stage, quality looks like: do the user stories cover the failure paths, not just the happy path? AI tools are very good at implementing what they're told. If a story doesn't mention what happens when a user submits an empty form, the AI will make something up. The PM persona is the right place to close those gaps.
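To make that concrete, here's a minimal sketch of the empty-form case expressed as an executable acceptance test. Everything in it is hypothetical: `submit_form`, `ValidationError`, and the required `email` field are illustrative names, not part of BMAD or any real codebase. The point is that the failure behaviour gets decided in the story, before the AI has a chance to improvise it.

```python
# Minimal sketch (hypothetical names): a failure-path acceptance
# criterion from a PM story, expressed as a test before implementation.

import pytest


class ValidationError(Exception):
    """Raised when submitted form data fails validation."""


def submit_form(data: dict) -> dict:
    """Placeholder implementation; the real one arrives at the Developer stage."""
    if not data.get("email", "").strip():
        raise ValidationError("email is required")
    return {"status": "accepted"}


def test_empty_form_is_rejected():
    # From the story: an empty submission must be rejected, not silently accepted.
    with pytest.raises(ValidationError):
        submit_form({})


def test_valid_form_is_accepted():
    # The happy path the story already covers.
    assert submit_form({"email": "user@example.com"})["status"] == "accepted"
```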
At the Architect stage, quality looks like: are the interfaces defined clearly enough to be tested in isolation? Component boundaries, API contracts, data validation rules — these are testing prerequisites. An architecture that's hard to test is a design problem, not a QA problem.
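One way to see what "testable in isolation" means in practice is a boundary expressed as an explicit contract. The sketch below is an illustration under invented names (`PaymentGateway`, `FakeGateway`, `checkout`), not BMAD output. Because the interface is pinned down, the component behind it can be replaced with a test double without touching a real service.

```python
# Minimal sketch (hypothetical names): an architecture-stage boundary
# defined as a contract, so components can be tested in isolation.

from typing import Protocol


class PaymentGateway(Protocol):
    """The contract the Architect defines; every implementation must honour it."""

    def charge(self, amount_cents: int, token: str) -> str:
        """Charge the card and return a transaction id.
        Raises ValueError if amount_cents is not positive."""
        ...


class FakeGateway:
    """Test double that satisfies the contract; no network involved."""

    def charge(self, amount_cents: int, token: str) -> str:
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        return "txn-test-001"


def checkout(gateway: PaymentGateway, amount_cents: int, token: str) -> str:
    # Depends only on the contract, so any conforming gateway works here.
    return gateway.charge(amount_cents, token)


assert checkout(FakeGateway(), 1999, "tok_visa") == "txn-test-001"
```

An architecture document that names a contract like this has already done most of the QA work for that boundary.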
At the Developer stage, quality looks like: does the implementation match the spec, and is there test coverage for it? When AI writes the code, this is where a human needs to be most deliberate. AI-generated code tends to look plausible while quietly missing edge cases or introducing assumptions that aren't in the spec. Code review backed by tests is the check.
The QA mindset as a cross-cutting concern
Rather than a persona that sits at the end of the chain, QA in BMAD is better understood as a question that each persona applies to their own output: how would someone verify this?
That shift matters because it changes incentives. When QA is a separate stage, the incentive is to pass the QA gate. When each persona is accountable for the verifiability of their own artefact, the incentive is to produce something that actually holds up — because the next persona inherits it.
What changes when AI writes the tests too
AI can write tests as easily as it writes implementation code. That creates a specific risk: tests that pass because they test what the AI implemented, not what the requirement actually needed. The AI writes a function, writes tests for that function, all tests pass, and the feature is still wrong.
The safeguard is having acceptance criteria that exist independently of the implementation — written during the BA or PM stage, before any code exists. Those criteria become the benchmark the tests are written against, not a summary of what the code does.
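Here's a small sketch of how that difference plays out, with invented names throughout. The first test mirrors the implementation and will always pass; the second encodes an acceptance criterion written at the PM stage, and it fails against this implementation, which is exactly the signal you want it to surface.

```python
# Minimal sketch (hypothetical names): two tests for the same function,
# one derived from the code, one derived from an acceptance criterion.

import pytest


def apply_discount(price: float, code: str) -> float:
    # AI-written implementation: silently ignores unknown codes.
    return price * 0.9 if code == "SAVE10" else price


def test_unknown_code_is_ignored():
    # Implementation-derived: restates what the code does, so it always
    # passes, even though this behaviour was never specified anywhere.
    assert apply_discount(100.0, "BOGUS") == 100.0


def test_unknown_code_is_rejected():
    # Criteria-derived, from a criterion written before the code existed:
    # "an invalid code must be rejected, not silently ignored."
    # This test fails against the implementation above, and that failure
    # is the quality signal.
    with pytest.raises(ValueError):
        apply_discount(100.0, "BOGUS")
```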
This is one of the strongest arguments for doing the BMAD persona work properly. Requirements written after implementation — or shaped by it — don't give you quality assurance. They give you documentation of what you already have.
Practical setup
For teams running BMAD, the most straightforward way to handle QA is to add a verification step to each persona's definition of done (a sketch of automating one such check follows the list):
- BA artefacts must include acceptance criteria for every requirement.
- PM stories must cover both success and failure paths.
- Architecture documents must identify testability constraints.
- Developer output must include test coverage for the implemented scope, with tests written against acceptance criteria — not inferred from the code.
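Parts of that checklist can be enforced mechanically. As one hedged example, here's a small CI-style gate that fails the build when a story file lacks an acceptance-criteria section. The `stories/` directory and the `## Acceptance Criteria` heading are assumptions about how a team might lay out its artefacts, not a BMAD convention.

```python
# Minimal sketch (assumed file layout): fail CI when a story file
# has no acceptance-criteria section.

import sys
from pathlib import Path


def stories_missing_criteria(stories_dir: str = "stories") -> list[Path]:
    """Return story files that lack an '## Acceptance Criteria' heading."""
    return [
        story
        for story in sorted(Path(stories_dir).glob("*.md"))
        if "## Acceptance Criteria" not in story.read_text(encoding="utf-8")
    ]


if __name__ == "__main__":
    missing = stories_missing_criteria()
    for story in missing:
        print(f"missing acceptance criteria: {story}")
    sys.exit(1 if missing else 0)
```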
No separate QA persona needed. The work is distributed across every stage of the chain, which is where it was always most effective.
---
If you're building a team process around AI-assisted development and want a second opinion on where it's working and where it isn't, book a call — it's a short conversation and usually there's a clear path forward.