agentic-ai-patterns

Code Generation + Self-Repair

Iterative synthesis with error feedback

The LLM generates Python code, executes it in a subprocess, reads the traceback if the run fails, and rewrites the code, looping until the run succeeds or the retry budget is exhausted.

flowchart TD
    S([__start__]) --> G[generate]
    G --> X[execute]
    X -->|success| Z[summarise]
    X -->|error| R[repair]
    R --> X
    X -->|max retries| Z
    Z --> E([__end__])

Code Generation with Self-Repair closes the feedback loop that raw code generation leaves open. Instead of generating code and hoping it runs, the system executes the code in a sandboxed subprocess and feeds any error output directly back to the model as revision context. This mirrors how a developer iterates in a REPL.
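The loop can be sketched in a few lines of plain Python. This is a minimal illustration, not the article's implementation: `llm` is a hypothetical callable that maps a prompt string to a code string (any model client would fit), and the prompts are placeholders.

```python
import subprocess
import sys

MAX_REPAIR = 3  # retry budget, as in the text

def run_code(code: str) -> tuple[bool, str]:
    """Execute `code` in a fresh interpreter subprocess.

    Returns (success, combined stdout+stderr) so the caller can feed
    the traceback back to the model on failure.
    """
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=30,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

def solve(task: str, llm) -> str:
    """Generate -> execute -> repair loop; `llm` maps a prompt to code."""
    code = llm(f"Write Python code that solves: {task}")
    output = ""
    # One initial attempt plus up to MAX_REPAIR repairs.
    for attempt in range(1 + MAX_REPAIR):
        ok, output = run_code(code)
        if ok:
            return output
        if attempt < MAX_REPAIR:
            code = llm(
                f"Task: {task}\n"
                f"This code failed:\n{code}\n"
                f"Traceback:\n{output}\n"
                "Rewrite the code so it runs."
            )
    return output  # retries exhausted; surface the last error
```

With a stub model that first returns broken code and then a fix, `solve` recovers on the second attempt, which is exactly the REPL-style iteration described above.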

The execution node captures both stdout and stderr, detects success (exit code 0) or failure, and routes accordingly. If the code fails, the `repair` node receives the broken code, the error traceback, and the original task description — and rewrites the code with the error in mind. `MAX_REPAIR = 3` prevents infinite loops.
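The execution node and its routing decision might look like the following sketch, assuming a plain-dict state (the field names `code`, `exit_code`, `attempts`, and the node names are illustrative, not from the article):

```python
import subprocess
import sys

MAX_REPAIR = 3

def execute(state: dict) -> dict:
    """Execution node: run the candidate code, record exit code and output."""
    proc = subprocess.run(
        [sys.executable, "-c", state["code"]],
        capture_output=True,
        text=True,
        timeout=30,
    )
    return {
        **state,
        "exit_code": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "attempts": state.get("attempts", 0) + 1,
    }

def route_after_execute(state: dict) -> str:
    """Conditional edge: success or an exhausted budget ends the loop."""
    if state["exit_code"] == 0 or state["attempts"] > MAX_REPAIR:
        return "summarise"
    return "repair"
```

The router mirrors the flowchart: `success` and `max retries` both lead to `summarise`; only a failure with budget remaining goes back through `repair`.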

A final `summarise` node translates the raw execution output into a human-readable answer. This pattern is the foundation for code interpreters, data analysis agents, and any system where the agent needs to "run and see" rather than reason purely in text.
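One possible shape for the summarise node, continuing the hypothetical dict-state sketch: it branches on whether execution ultimately succeeded and hands either the output or the final error to the model for a plain-language answer.

```python
def summarise(state: dict, llm) -> str:
    """Summarise node: turn raw execution output into a user-facing answer.

    `llm` is a hypothetical prompt -> text callable; the prompts are
    placeholders, not the article's actual wording.
    """
    if state["exit_code"] == 0:
        prompt = (
            f"Task: {state['task']}\n"
            f"The code printed:\n{state['stdout']}\n"
            "Answer the task in plain English."
        )
    else:
        prompt = (
            f"Task: {state['task']}\n"
            f"The code still failed after all retries:\n{state['stderr']}\n"
            "Explain the failure briefly."
        )
    return llm(prompt)
```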