Parallelization reduces end-to-end latency by running independent operations concurrently. In the static variant, three analyst nodes (price, authors, recency) each receive an edge from `START` at graph build time; LangGraph schedules all three to run simultaneously and waits for every branch to finish before proceeding.
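The scheduling behavior described above can be sketched without LangGraph itself. This is a framework-free analogy using `concurrent.futures`: the three analyst functions and their return shapes are illustrative assumptions, not the document's actual node implementations, but the pattern is the same, a fixed set of workers launched together, with a join before the next step.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the three analyst nodes named in the text.
def price_analyst(query: str) -> dict:
    return {"price": f"price analysis for {query!r}"}

def authors_analyst(query: str) -> dict:
    return {"authors": f"author analysis for {query!r}"}

def recency_analyst(query: str) -> dict:
    return {"recency": f"recency analysis for {query!r}"}

def run_static(query: str) -> dict:
    """Run a fixed, build-time-known set of analysts concurrently."""
    analysts = [price_analyst, authors_analyst, recency_analyst]
    merged: dict = {}
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(a, query) for a in analysts]
        # Join: wait for every branch before proceeding, as LangGraph does.
        for f in futures:
            merged.update(f.result())
    return merged
```

In LangGraph the join is implicit: a downstream node with edges from all three analysts only runs once each has written its state update.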
The dynamic variant uses the `Send` primitive to create parallel branches at runtime: a `fan_out` node examines the query and dispatches one `Send` per relevant topic, spawning a variable number of concurrent workers. Each worker's partial result is merged into shared state by the `operator.add` reducer, which concatenates the workers' lists.
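The same fan-out-then-merge shape can be sketched in plain Python. This is an analogy, not LangGraph's `Send` API: the keyword routing in `fan_out` and the worker's return value are invented for illustration, and the `+=` on lists plays the role of the `operator.add` reducer.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical topic set; in the real graph, fan_out would inspect the
# query and emit one Send(...) per relevant topic.
TOPICS = ("price", "authors", "recency")

def fan_out(query: str) -> list[str]:
    """Decide at runtime how many parallel branches to launch."""
    return [t for t in TOPICS if t in query]

def worker(topic: str, query: str) -> list[str]:
    # Each worker returns a partial result list, matching a state key
    # whose reducer is operator.add (list concatenation).
    return [f"{topic} result for {query!r}"]

def run_dynamic(query: str) -> list[str]:
    topics = fan_out(query)
    results: list[str] = []
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(lambda t: worker(t, query), topics):
            results += partial  # mirrors the operator.add reducer
    return results
```

The key property this preserves is that the branch count is a runtime decision: two topics in the query yield two workers, five yield five, with no change to the graph definition.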
Use static parallelization when you know exactly which analyses to run. Use dynamic parallelization when the number of parallel tasks depends on the input data — for example, one analysis per document in a batch, or one query per data source in a federation.