# How It Works
Architecture deep-dive into parallel execution — task graph analysis, worktree isolation, and sequential merge.
Parallel execution uses a wrapper architecture. The `ParallelExecutor` wraps multiple `ExecutionEngine` instances; the existing sequential engine stays completely untouched. Each worker runs in its own git worktree with a dedicated branch.
## Architecture Overview

## Execution Flow
### 1. Task Graph Analysis

Ralph fetches all tasks from your tracker and builds a dependency graph from the `dependsOn` and `blocks` fields. Tasks are sorted topologically (Kahn's algorithm) and grouped by depth level; tasks at the same depth with no mutual dependencies form a parallel group.
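The grouping step can be sketched as a level-by-level Kahn's algorithm. This is a minimal illustration, not Ralph's actual implementation: the `Task` shape is simplified to just `id` and `dependsOn`, and the real graph also consumes the `blocks` field.

```typescript
// Simplified task shape for illustration; the real tracker task has more fields.
interface Task {
  id: string;
  dependsOn: string[];
}

// Kahn's algorithm, collecting tasks level by level instead of into a flat order.
// groups[d] holds every task at depth d; tasks in one group can run in parallel.
function buildParallelGroups(tasks: Task[]): string[][] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>(); // dep id -> ids that wait on it
  for (const t of tasks) {
    indegree.set(t.id, t.dependsOn.length);
    for (const dep of t.dependsOn) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(t.id);
    }
  }
  const groups: string[][] = [];
  // Depth 0: every task with no unresolved dependencies.
  let frontier = tasks.filter((t) => t.dependsOn.length === 0).map((t) => t.id);
  while (frontier.length > 0) {
    groups.push(frontier);
    const next: string[] = [];
    for (const id of frontier) {
      for (const child of dependents.get(id) ?? []) {
        const remaining = indegree.get(child)! - 1;
        indegree.set(child, remaining);
        if (remaining === 0) next.push(child); // all deps resolved -> next depth
      }
    }
    frontier = next;
  }
  return groups;
}
```

With tasks `a`, `b` (independent) and `c` depending on both, this yields two groups: `[a, b]` at depth 0 and `[c]` at depth 1.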
### 2. Worktree Creation

For each task in the current group, the `WorktreeManager` creates a git worktree at `.ralph-tui/worktrees/worker-N/` with a dedicated branch `ralph-parallel/{taskId}`. Each worktree is a full working copy of your repository, branched from the current HEAD.
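In git terms, creating one worktree per worker boils down to a single `git worktree add -b` invocation. The sketch below only builds the command string from the layout described above; `worktreeCommand` is a hypothetical helper, not the actual `WorktreeManager` API.

```typescript
// Build the git command for one worker's isolated worktree.
// `git worktree add -b <branch> <path>` creates both the directory and the
// branch in one step, branching from the current HEAD.
function worktreeCommand(workerIndex: number, taskId: string): string {
  const path = `.ralph-tui/worktrees/worker-${workerIndex}`;
  const branch = `ralph-parallel/${taskId}`;
  return `git worktree add -b ${branch} ${path}`;
}
```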
### 3. Parallel Execution

Workers start simultaneously, each running the standard `ExecutionEngine` in its isolated worktree. Workers do not call `tracker.getNextTask()`; their task is pre-assigned by the `ParallelExecutor`. The tracker runs only in the main process to prevent concurrent writes.
### 4. Sequential Merge

As workers complete, their branches enter a merge queue. Merges happen one at a time: fast-forward first (if no prior merge has moved HEAD), then fall back to a merge commit. A backup tag is created before each merge for rollback safety.
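The per-branch merge steps can be sketched as an ordered list of git commands. This is a simplified sketch: the backup tag name `ralph-backup/{taskId}` is an assumption for illustration, and the real queue shells out to git and branches on exit codes rather than always running all three commands.

```typescript
// Ordered command plan for merging one worker branch.
// The real implementation runs the fast-forward attempt first and only falls
// back to a merge commit if git exits non-zero.
function mergePlan(branch: string, taskId: string): string[] {
  return [
    // Backup tag: lets a failed or conflicted merge roll back cleanly.
    `git tag ralph-backup/${taskId}`,
    // Fast-forward succeeds only if HEAD has not moved since the branch forked.
    `git merge --ff-only ${branch}`,
    // Fallback when an earlier merge in the queue already advanced HEAD.
    `git merge --no-ff ${branch}`,
  ];
}
```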
### 5. Conflict Resolution

If a merge produces conflicts, the `ConflictResolver` extracts three-way merge data (ours/theirs/base) from git's index stages and sends it to an AI agent for resolution. If AI resolution fails, the merge is aborted and rolled back to the backup tag.
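During a conflicted merge, git keeps all three versions of a file in the index: stage 1 is the common base, stage 2 is "ours", stage 3 is "theirs", each readable with `git show :N:path`. A minimal sketch of building those extraction commands (the helper name is illustrative, not the `ConflictResolver` API):

```typescript
// Commands to extract the three sides of a conflicted file from the index.
// Stage 1 = merge base, stage 2 = ours (current HEAD), stage 3 = theirs.
function stageCommands(file: string): { base: string; ours: string; theirs: string } {
  return {
    base: `git show :1:${file}`,
    ours: `git show :2:${file}`,
    theirs: `git show :3:${file}`,
  };
}
```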
### 6. Group Progression

After all tasks in a group complete and merge, the next group begins. Groups execute in topological order, ensuring that upstream dependencies are always resolved before downstream tasks start.
## Wrapper, Not Fork

A critical design decision: the existing `ExecutionEngine` is wrapped, not modified. The `Worker` class creates an `ExecutionEngine` with a modified `cwd` (pointing to the worktree) and a pre-assigned task. All engine events are forwarded to the `ParallelExecutor` with a worker ID prefix.
This means:
- All existing features (rate limiting, error handling, session resumption) work inside workers
- The sequential execution path is completely unchanged
- Worker isolation is purely filesystem-based (separate worktree directories)
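The event-forwarding half of the wrapper pattern can be sketched with Node's `EventEmitter`. The event names here are illustrative, not the real `ExecutionEngine` event set:

```typescript
import { EventEmitter } from "node:events";

// Forward every listed engine event to the parent emitter, namespaced with a
// worker ID prefix so the ParallelExecutor can tell workers apart.
function forwardEvents(
  engine: EventEmitter,
  parent: EventEmitter,
  workerId: string,
  events: string[],
): void {
  for (const name of events) {
    engine.on(name, (...args) => parent.emit(`${workerId}:${name}`, ...args));
  }
}
```

Because the engine is unaware of the wrapper, it keeps emitting its normal events; only the subscription side changes.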
## Tracker Isolation

The tracker plugin runs only in the main process. Workers signal task status changes back to the `ParallelExecutor`, which calls tracker methods on the worker's behalf. This prevents:
- Concurrent writes to the beads database
- Race conditions in JSON file updates
- Inconsistent task state across workers
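The single-writer pattern behind this can be sketched as a small proxy that queues worker status messages and applies them one at a time. All names here (`TrackerProxy`, `StatusMessage`, the status values) are illustrative, not ralph-tui's actual API:

```typescript
// A status update posted by a worker instead of a direct tracker call.
type StatusMessage = { taskId: string; status: "in_progress" | "done" | "failed" };

// Serializes all tracker writes in the main process: workers post messages,
// and a single drain loop applies them in arrival order.
class TrackerProxy {
  private queue: StatusMessage[] = [];
  private draining = false;
  public applied: StatusMessage[] = []; // stands in for real tracker writes

  // Called from worker event handlers; never touches the tracker directly.
  post(msg: StatusMessage): void {
    this.queue.push(msg);
    this.drain();
  }

  private drain(): void {
    if (this.draining) return; // enforce a single writer
    this.draining = true;
    while (this.queue.length > 0) {
      // Real code would call something like tracker.setStatus(...) here.
      this.applied.push(this.queue.shift()!);
    }
    this.draining = false;
  }
}
```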
## Failure Handling
| Scenario | Behavior |
|---|---|
| Single worker fails | Other workers continue; failed task marked as failed |
| Merge conflict, AI resolves | Resolved content committed, merge completes |
| Merge conflict, AI fails | Merge aborted, rolled back to backup tag, task re-queued (max 1 retry) |
| All workers fail | Switch to sequential mode for remaining tasks |
| Ctrl+C | Stop all workers, wait for current agent calls, merge completed work, cleanup |
| Ctrl+C ×2 | Force kill immediately |
| Crash/restart | Session state persisted; orphaned worktrees detected and cleaned up |