Corrective RAG (CRAG) addresses the most common failure mode of standard RAG: generating answers based on retrieved documents that are irrelevant to the question. A grader LLM evaluates each retrieved chunk and produces a YES/NO relevance decision. If the documents don't pass the bar, a fallback retrieval path is triggered before generation.
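The grading step can be sketched in a few lines. This is a minimal illustration, not the reference implementation: `grade_fn` stands in for an LLM call that returns "YES" or "NO" for a (question, chunk) pair, and `keyword_grader` is a deliberately naive stub used here only so the example runs without a model.

```python
from typing import Callable, List

def grade_documents(question: str, docs: List[str],
                    grade_fn: Callable[[str, str], str]) -> List[str]:
    """Keep only the chunks the grader marks relevant.

    grade_fn wraps the grader LLM; anything other than "YES" is
    treated as a NO, so a flaky model fails closed.
    """
    return [d for d in docs if grade_fn(question, d).strip().upper() == "YES"]

# Stub grader: relevant iff the chunk shares a word with the question.
# A real grader would be an LLM prompted for a binary relevance decision.
def keyword_grader(question: str, chunk: str) -> str:
    q_words = set(question.lower().split())
    return "YES" if q_words & set(chunk.lower().split()) else "NO"

docs = ["jazz albums recorded in 1959", "recipe for sourdough bread"]
kept = grade_documents("famous jazz albums", docs, keyword_grader)
# kept == ["jazz albums recorded in 1959"]
```

Treating any non-"YES" output as a rejection is a small but useful design choice: the grader routes to fallback retrieval on uncertainty rather than letting a malformed response pass irrelevant context through.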
In this implementation, the fallback path issues a broader, genre-level query: when the initial keyword search misses, the system retries retrieval at the domain level so the generator always has some relevant context to work from. This prevents confident-sounding hallucinations built on irrelevant evidence.
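A sketch of that fallback path, under stated assumptions: `CORPUS`, `retrieve`, `broaden`, and `grade` are all toy stand-ins invented for this example (a real system would use a vector store for retrieval and an LLM for both genre mapping and grading).

```python
# Toy corpus keyed by genre tag; stands in for a real document index.
CORPUS = {
    "jazz": ["Kind of Blue was recorded in 1959.",
             "Jazz improvisation relies on modal scales."],
    "cooking": ["Sourdough needs a long fermentation."],
}

def retrieve(query: str) -> list:
    # Naive retrieval: return docs whose genre tag appears in the query.
    return [d for g, ds in CORPUS.items() if g in query.lower() for d in ds]

def broaden(question: str) -> str:
    # Hypothetical question-to-genre mapping; a real system might
    # ask an LLM which domain the question belongs to.
    return "jazz" if "album" in question.lower() else question

def grade(question: str, chunk: str) -> str:
    q = set(question.lower().split())
    return "YES" if q & set(chunk.lower().replace(".", "").split()) else "NO"

def retrieve_with_fallback(question: str) -> list:
    docs = retrieve(question)
    relevant = [d for d in docs if grade(question, d) == "YES"]
    if relevant:
        return relevant
    # Fallback: the first pass produced nothing usable, so re-retrieve
    # with the broader, genre-level query before generating.
    return retrieve(broaden(question))

ctx = retrieve_with_fallback("Which Miles Davis album sold best?")
# The keyword pass misses, so the fallback returns the jazz documents.
```

The key property is that the fallback fires only when the graded set is empty, so the broader (and noisier) retrieval never displaces precise hits from the first pass.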
CRAG is a practical improvement over naive RAG that requires minimal additional complexity: one grader node and one conditional edge. It significantly improves answer quality in domains where retrieval precision is imperfect — which is most real-world use cases.
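The "one grader node and one conditional edge" claim can be made concrete with a hand-rolled graph runner. All four node functions below are stubs invented for illustration; frameworks such as LangGraph provide this wiring natively, but the control flow is small enough to show directly.

```python
def retrieve_node(state: dict) -> str:
    state["docs"] = ["chunk about the topic", "unrelated chunk"]
    return "grade"

def grade_node(state: dict) -> str:
    # Grader node: keep chunks judged relevant (stubbed; an LLM
    # would emit YES/NO per chunk in a real pipeline).
    state["docs"] = [d for d in state["docs"] if "topic" in d]
    # Conditional edge: route on whether any chunk survived grading.
    return "generate" if state["docs"] else "fallback"

def fallback_node(state: dict) -> str:
    state["docs"] = ["broader domain-level chunk"]
    return "generate"

def generate_node(state: dict):
    state["answer"] = f"Answer based on {len(state['docs'])} chunk(s)."
    return None  # terminal node

GRAPH = {"retrieve": retrieve_node, "grade": grade_node,
         "fallback": fallback_node, "generate": generate_node}

def run(question: str) -> str:
    state, node = {"question": question, "docs": []}, "retrieve"
    while node:                 # follow edges until a node returns None
        node = GRAPH[node](state)
    return state["answer"]

answer = run("What is the topic?")
```

Relative to naive RAG, the only additions are `grade_node` and the branch it returns, which matches the claim above that CRAG costs one node and one conditional edge.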