Code Generation with Self-Repair closes the feedback loop that raw code generation leaves open. Instead of generating code and hoping it runs, the system executes the code in a sandboxed subprocess and feeds any error output directly back to the model as revision context. This mirrors how a developer iterates in a REPL.
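The execution step can be sketched as a small helper that runs the generated code in a fresh Python subprocess and captures the result. The name `run_code` and the returned dict shape are illustrative assumptions, not a fixed API:

```python
import subprocess
import sys


def run_code(code: str, timeout: float = 10.0) -> dict:
    """Execute a code string in a fresh Python subprocess.

    Returns stdout, stderr, and an `ok` flag derived from the exit code.
    A timeout guards against generated code that hangs (subprocess.run
    raises TimeoutExpired if it fires).
    """
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "ok": proc.returncode == 0,
    }
```

Running the code in a separate process (rather than `exec` in the agent's own interpreter) is what makes the sandbox boundary real: a crash, infinite loop, or `sys.exit` in generated code cannot take down the agent.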
The execution node captures both stdout and stderr, detects success (exit code 0) or failure, and routes accordingly. If the code fails, the `repair` node receives the broken code, the error traceback, and the original task description, and rewrites the code with the error in mind. A cap of `MAX_REPAIR = 3` attempts prevents infinite loops when the model cannot fix the error.
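The generate-execute-repair loop above can be sketched as follows. Here `generate`, `repair`, and `execute` are placeholder callables standing in for the model-backed nodes; their signatures are assumptions for illustration, not a specific framework's API:

```python
MAX_REPAIR = 3


def generate_with_repair(task: str, generate, repair, execute):
    """Generate code for `task`, execute it, and repair on failure.

    `generate(task)` returns a code string; `execute(code)` returns a dict
    with `ok`, `stdout`, and `stderr`; `repair(...)` returns revised code.
    Stops on the first success or after MAX_REPAIR repair attempts.
    """
    code = generate(task)
    result = execute(code)
    for _ in range(MAX_REPAIR):
        if result["ok"]:
            break
        # The repair node sees the broken code, the traceback, and the task.
        code = repair(task=task, code=code, error=result["stderr"])
        result = execute(code)
    return code, result
```

The key design point is that `repair` receives all three pieces of context: without the original task it may "fix" the error by deleting the functionality, and without the traceback it is just regenerating blind.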
A final `summarise` node translates the raw execution output into a human-readable answer. This pattern is the foundation for code interpreters, data analysis agents, and any system where the agent needs to "run and see" rather than reason purely in text.
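A minimal sketch of the `summarise` node, assuming a hypothetical `llm` callable for the model-backed phrasing; the plain string fallback below is purely illustrative:

```python
def summarise(task: str, result: dict, llm=None) -> str:
    """Translate raw execution output into a human-readable answer.

    In a real agent an LLM phrases the answer from the task and the raw
    output; `llm` here is an assumed callable taking a prompt string.
    """
    if llm is not None:
        prompt = (
            f"Task: {task}\n"
            f"Execution output:\n{result['stdout']}\n"
            "Summarise the result for the user."
        )
        return llm(prompt)
    # Fallback without a model: report success or the final error plainly.
    if result["ok"]:
        return f"The code ran successfully and printed: {result['stdout'].strip()}"
    return f"The code still failed after repair attempts: {result['stderr'].strip()}"
```

Separating summarisation from execution keeps raw tracebacks and debug prints out of the user-facing answer while leaving them available in the agent's state for inspection.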