agentic-ai-patterns

Parallelization

Static and dynamic parallel branches

Multiple independent analyst nodes run concurrently on the same input. Both the static variant (branches wired at build time) and the dynamic variant (branches created at runtime with `Send`) are demonstrated.

```mermaid
flowchart TD
    S([__start__]) --> A[analyst_price]
    S --> B[analyst_authors]
    S --> C[analyst_recency]
    A --> M[merger]
    B --> M
    C --> M
    M --> E([__end__])
```

Parallelization reduces end-to-end latency by running independent operations concurrently. In the static variant, three analyst nodes (price, authors, recency) are all connected from `START` at graph build time — LangGraph schedules them to run simultaneously and waits for all to finish before proceeding.

The dynamic variant uses the `Send` primitive to create parallel branches at runtime. A `fan_out` node examines the query and dispatches one `Send` per relevant topic, creating a variable number of concurrent workers. Each worker's result is merged using the `operator.add` reducer.

Use static parallelization when you know exactly which analyses to run. Use dynamic parallelization when the number of parallel tasks depends on the input data — for example, one analysis per document in a batch, or one query per data source in a federation.