# run
Start the Ralph TUI execution loop to autonomously process tasks from your tracker.
## Synopsis
The `run` command starts the autonomous execution loop. Ralph will select tasks, build prompts, execute your AI agent, detect completion, and repeat until all tasks are done or the iteration limit is reached.

Running `ralph-tui` without any command also starts the TUI interface, allowing you to navigate and start execution interactively.
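The invocation shape, in brief (a sketch; the full flag set is listed under Options below):

```shell
ralph-tui run [options]
```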
## Options

### Task Source Options

| Option | Description |
|---|---|
| `--prd <path>` | PRD file path (auto-switches to the json tracker) |
| `--epic <id>` | Epic ID for the beads tracker |
### Agent & Model Options

| Option | Description |
|---|---|
| `--agent <name>` | Override agent plugin (e.g., claude, opencode) |
| `--model <name>` | Override model (see Model Options below) |
| `--tracker <name>` | Override tracker plugin (e.g., beads, beads-bv, json) |
### Execution Control

| Option | Description |
|---|---|
| `--iterations <n>` | Maximum iterations (0 = unlimited) |
| `--delay <ms>` | Delay between iterations in milliseconds |
| `--serial`, `--sequential` | Force sequential execution (disable parallel) |
| `--parallel [N]` | Force parallel execution, optionally with N workers |
| `--task-range <range>` | Filter tasks by index (e.g., `1-5`, `3-`, `-10`) |
### Output & State

| Option | Description |
|---|---|
| `--prompt <path>` | Custom prompt template file path |
| `--output-dir <path>` | Directory for iteration logs (default: `.ralph-tui/iterations`) |
| `--progress-file <path>` | Progress file for cross-iteration context (default: `.ralph-tui/progress.md`) |
### Display Options

| Option | Description |
|---|---|
| `--headless` | Run without the TUI (alias: `--no-tui`) |
| `--no-setup` | Skip interactive setup even if no config exists |
## Model Options

The `--model` flag accepts different values depending on which agent you're using.
### Claude Agent

| Model | Description |
|---|---|
| `sonnet` | Claude Sonnet - balanced performance and cost |
| `opus` | Claude Opus - most capable, higher cost |
| `haiku` | Claude Haiku - fastest, lowest cost |
### OpenCode Agent

Models use the `provider/model` format. Valid providers:

| Provider | Example Models |
|---|---|
| anthropic | `anthropic/claude-3-5-sonnet`, `anthropic/claude-3-opus` |
| openai | `openai/gpt-4o`, `openai/gpt-4-turbo` |
| google | `google/gemini-pro`, `google/gemini-1.5-pro` |
| xai | `xai/grok-1` |
| ollama | `ollama/llama3`, `ollama/codellama` |
Model names within each provider are validated by the provider's API. If you specify an invalid model name, you'll see an error from the underlying agent CLI.
## Examples
### Basic Usage with JSON Tracker
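A minimal sketch using the documented flags; the PRD file path is illustrative:

```shell
# Run against a PRD file; --prd auto-switches to the json tracker
ralph-tui run --prd ./prd.md

# Cap the loop at 10 iterations
ralph-tui run --prd ./prd.md --iterations 10
```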
### Using Beads Tracker
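A sketch with the documented tracker flags; the epic ID is a placeholder:

```shell
# Explicitly select the beads tracker
ralph-tui run --tracker beads

# Restrict the run to a single epic
ralph-tui run --tracker beads --epic my-epic-1
```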
### Agent & Model Override
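Using the model values from the tables above:

```shell
# Claude agent with the opus model
ralph-tui run --agent claude --model opus

# OpenCode agent with a provider/model identifier
ralph-tui run --agent opencode --model anthropic/claude-3-5-sonnet
```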
### Custom Prompt Template
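A sketch; the template path is illustrative:

```shell
# Use a custom Handlebars prompt template
ralph-tui run --prompt ./templates/my-prompt.hbs
```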
### Headless Mode (CI/Scripts)
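Typical non-interactive invocations built from the documented flags:

```shell
# No TUI; output streams to stdout (suited to CI)
ralph-tui run --headless --iterations 5

# Also skip interactive setup when no config exists
ralph-tui run --headless --no-setup
```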
### Development Workflow
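One possible local workflow, composed from the documented flags:

```shell
# Work through a slice of the backlog with a pause between iterations
ralph-tui run --task-range 1-5 --delay 2000

# Force sequential execution while debugging
ralph-tui run --serial
```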
## Execution Flow

When you run this command, Ralph:

1. Loads configuration from `.ralph-tui/config.toml`
2. Connects to the tracker (json, beads, or beads-bv)
3. Selects the next task based on priority and dependencies
4. Builds the prompt using the Handlebars template
5. Spawns the agent with the prompt
6. Streams output to the TUI (or stdout in headless mode)
7. Detects completion via the `<promise>COMPLETE</promise>` token
8. Marks the task done and proceeds to the next task
9. Repeats until no tasks remain or the maximum number of iterations is reached
## Session Persistence

Ralph automatically saves state to `.ralph-tui/session.json`:
- Current iteration number
- Task statuses
- Iteration history
- Active task IDs (for crash recovery)
If interrupted, use `ralph-tui resume` to continue.
## Remote Listener

Enable remote monitoring and control by adding the `--listen` flag. This starts a WebSocket server alongside the execution engine, allowing you to connect from another machine.
### Remote Options

| Option | Description |
|---|---|
| `--listen` | Enable remote listener (WebSocket server) |
| `--listen-port <n>` | Port for remote listener (default: 7890) |
| `--rotate-token` | Rotate the server token before starting |
### First Run: Token Generation

On first use of `--listen`, a secure authentication token is generated.
Save this token securely! You'll need it to configure remote clients.
### Examples
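Starting the loop with the remote listener, using the flags documented above:

```shell
# Remote listener on the default port (7890)
ralph-tui run --listen

# Custom port, rotating the server token first
ralph-tui run --listen --listen-port 9000 --rotate-token
```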
### Connecting from Remote

After starting with `--listen`, you can connect from another machine.
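The client command is not shown here; as a generic illustration only, any WebSocket client can reach the listener endpoint (host is a placeholder, and how the token is presented during authentication is not documented in this section):

```shell
# Generic WebSocket probe from another machine; expect the server to
# reject the session until token authentication completes
wscat -c ws://my-host:7890
```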
### Security Model
Ralph uses a two-tier token system:
| Token Type | Lifetime | Purpose |
|---|---|---|
| Server Token | 90 days | Initial authentication, stored on disk |
| Connection Token | 24 hours | Session authentication, auto-refreshed |
Host binding:

- Without `--listen`: binds to `127.0.0.1` (local only)
- With `--listen`: binds to `0.0.0.0` (network accessible, requires token auth)
### Audit Logging

All remote actions are logged to `~/.config/ralph-tui/audit.log`.
## Parallel Execution

By default, `ralph-tui` runs sequentially. Parallel execution is opt-in via `--parallel` or configuration (`parallel.mode = "auto"` or `parallel.mode = "always"`). Each parallel worker runs in its own git worktree for full isolation.

To enable automatic parallel detection on normal `run` invocations, set `parallel.mode = "auto"` in your config.
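In `.ralph-tui/config.toml`, that setting might look like the following (a sketch; the surrounding structure of the config file is an assumption):

```toml
# Enable automatic parallel detection on normal `run` invocations
[parallel]
mode = "auto"   # or "always" to force parallel execution
```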
When parallel mode is active, the TUI shows additional views:

- Press `w` to toggle the workers view
- Press `m` to toggle the merge progress view
- Press `Enter` on a worker to see its detail output
See the Parallel Execution guide for full documentation.
## Task Range Filtering

The `--task-range` flag lets you filter tasks by their position in the task list. Task indices are 1-indexed for user friendliness.
### Range Formats

| Format | Meaning |
|---|---|
| `1-5` | Tasks 1 through 5 (inclusive) |
| `3-` | Tasks 3 to the end |
| `-10` | Tasks 1 through 10 |
| `5` | Only task 5 |
### Examples
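Each range format from the table above, as a command:

```shell
# Tasks 1 through 5
ralph-tui run --task-range 1-5

# Task 3 to the end, distributed across 4 parallel workers
ralph-tui run --task-range 3- --parallel 4

# Only task 5
ralph-tui run --task-range 5
```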
Task range filtering works with both sequential and parallel execution. When combined with `--parallel`, only the filtered tasks are distributed across workers.