# run
Start the Ralph TUI execution loop to autonomously process tasks from your tracker.
## Synopsis
The `run` command starts the autonomous execution loop. Ralph will select tasks, build prompts, execute your AI agent, detect completion, and repeat until all tasks are done or the iteration limit is reached.

Running `ralph-tui` without any command also starts the TUI, allowing you to navigate and start execution interactively.
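A minimal sketch of typical invocations, using only the flags documented on this page (the example flag values are illustrative):

```bash
# Start the execution loop with the saved configuration
ralph-tui run

# Any documented flag overrides the configuration for this run, e.g.:
ralph-tui run --agent claude --iterations 10
```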
## Options

### Task Source Options
| Option | Description |
|---|---|
| `--prd <path>` | PRD file path (auto-switches to `json` tracker) |
| `--epic <id>` | Epic ID for `beads` tracker |
### Agent & Model Options
| Option | Description |
|---|---|
| `--agent <name>` | Override agent plugin (e.g., `claude`, `opencode`) |
| `--model <name>` | Override model (see Model Options below) |
| `--tracker <name>` | Override tracker plugin (e.g., `beads`, `beads-bv`, `json`) |
### Execution Control
| Option | Description |
|---|---|
| `--iterations <n>` | Maximum iterations (0 = unlimited) |
| `--delay <ms>` | Delay between iterations in milliseconds |
### Output & State
| Option | Description |
|---|---|
| `--prompt <path>` | Custom prompt template file path |
| `--output-dir <path>` | Directory for iteration logs (default: `.ralph-tui/iterations`) |
| `--progress-file <path>` | Progress file for cross-iteration context (default: `.ralph-tui/progress.md`) |
### Display Options
| Option | Description |
|---|---|
| `--headless` | Run without TUI (alias: `--no-tui`) |
| `--no-setup` | Skip interactive setup even if no config exists |
## Model Options

The `--model` flag accepts different values depending on which agent you're using.

### Claude Agent
| Model | Description |
|---|---|
| `sonnet` | Claude Sonnet - balanced performance and cost |
| `opus` | Claude Opus - most capable, higher cost |
| `haiku` | Claude Haiku - fastest, lowest cost |
### OpenCode Agent

Models use the `provider/model` format. Valid providers:
| Provider | Example Models |
|---|---|
| `anthropic` | `anthropic/claude-3-5-sonnet`, `anthropic/claude-3-opus` |
| `openai` | `openai/gpt-4o`, `openai/gpt-4-turbo` |
| `google` | `google/gemini-pro`, `google/gemini-1.5-pro` |
| `xai` | `xai/grok-1` |
| `ollama` | `ollama/llama3`, `ollama/codellama` |
Model names within each provider are validated by the provider's API. If you specify an invalid model name, you'll see an error from the underlying agent CLI.
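For example, a sketch of an OpenCode run pinned to one of the provider/model pairs listed above:

```bash
# Override both the agent and the model for this run
ralph-tui run --agent opencode --model anthropic/claude-3-5-sonnet
```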
## Examples
### Basic Usage with JSON Tracker
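A minimal sketch; the PRD path is illustrative:

```bash
# Explicitly use the json tracker
ralph-tui run --tracker json

# Or point at a PRD file, which auto-switches to the json tracker
ralph-tui run --prd ./prd.md
```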
### Using Beads Tracker
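A sketch using the documented `--tracker` and `--epic` flags; the epic ID is a placeholder:

```bash
# Process tasks from a specific epic in the beads tracker
ralph-tui run --tracker beads --epic my-epic-id
```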
### Agent & Model Override
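For instance, forcing the Claude agent with the Opus model (both values come from the tables above):

```bash
ralph-tui run --agent claude --model opus
```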
### Custom Prompt Template
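A sketch assuming a Handlebars template at `./prompts/custom.hbs`; the path is illustrative:

```bash
ralph-tui run --prompt ./prompts/custom.hbs
```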
### Headless Mode (CI/Scripts)
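A sketch of a CI-style run; the iteration cap and log directory are illustrative choices:

```bash
# No TUI, no interactive setup, bounded number of iterations
ralph-tui run --headless --no-setup --iterations 10 --output-dir ./ci-logs
```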
### Development Workflow
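One possible local loop, keeping runs short and pausing between iterations (the flag values are illustrative):

```bash
ralph-tui run --iterations 5 --delay 2000
```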
## Execution Flow
When you run this command, Ralph:

- Loads configuration from `.ralph-tui/config.toml`
- Connects to tracker (`json`, `beads`, or `beads-bv`)
- Selects next task based on priority and dependencies
- Builds prompt using the Handlebars template
- Spawns agent with the prompt
- Streams output to the TUI (or stdout in headless mode)
- Detects completion via the `<promise>COMPLETE</promise>` token
- Marks task done and proceeds to next task
- Repeats until no tasks remain or max iterations reached
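The artifacts from this loop land in the default locations documented under Output & State (assuming `--output-dir` and `--progress-file` were not overridden):

```bash
# Per-iteration logs
ls .ralph-tui/iterations/

# Cross-iteration context carried into later prompts
cat .ralph-tui/progress.md
```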
## Session Persistence

Ralph automatically saves state to `.ralph-tui/session.json`:
- Current iteration number
- Task statuses
- Iteration history
- Active task IDs (for crash recovery)
If interrupted, use `ralph-tui resume` to continue.
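For example, after a crash or Ctrl+C:

```bash
# Picks up from the state saved in .ralph-tui/session.json
ralph-tui resume
```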