OpenCode Agent
Use OpenCode CLI with multiple AI providers in Ralph TUI.
The OpenCode agent plugin integrates with the opencode CLI, an open-source AI coding assistant that supports multiple providers including Anthropic, OpenAI, Google, xAI, and Ollama.
OpenCode lets you use any supported AI provider with Ralph TUI, including local models via Ollama.
Prerequisites
Install OpenCode CLI:
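One common installation path is via npm (package name shown as commonly distributed; check the OpenCode documentation for alternatives such as Homebrew or an install script):

```bash
# Install the OpenCode CLI globally via npm
npm install -g opencode-ai
```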
Verify installation:
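A quick version check confirms the binary is on your PATH:

```bash
# Should print the installed OpenCode version
opencode --version
```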
Basic Usage
Run with OpenCode
Use the `--agent opencode` flag:
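For example (the `ralph` command name is an assumption here; use however you normally launch Ralph TUI):

```bash
ralph --agent opencode
```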
Select a Model
Specify provider and model with `--model`:
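Models use the `provider/model` format. A sketch, assuming a `ralph` launcher command (the model name is illustrative; check your provider's current model list):

```bash
ralph --agent opencode --model anthropic/claude-sonnet-4
```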
Select a Variant
Control reasoning effort with `--variant`:
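Variant values are model-specific. An illustrative invocation (command and model names are assumptions):

```bash
ralph --agent opencode --model google/gemini-2.5-pro --variant high
```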
Configuration
Shorthand Config
Basic configuration:
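A minimal sketch, assuming Ralph TUI reads a TOML config file; the exact schema depends on your Ralph TUI version, so treat the key layout as illustrative:

```toml
# Shorthand: pick the agent and a provider/model in one place
agent = "opencode"
model = "anthropic/claude-sonnet-4"  # illustrative model name
```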
Full Config
For advanced control:
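A fuller sketch using the `[[agents]]` array, with option names taken from the Options Reference below (the model name is illustrative):

```toml
[[agents]]
provider = "anthropic"
model = "claude-sonnet-4"   # model name within the provider
variant = "high"            # optional, model-specific
agent = "general"           # general, build, or plan
format = "default"          # default or json
timeout = 0                 # ms; 0 = no timeout
command = "opencode"        # path to the OpenCode CLI executable
```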
Options Reference
| Option | Type | Default | Description |
|---|---|---|---|
| `provider` | string | - | AI provider: `anthropic`, `openai`, `google`, `xai`, `ollama` |
| `model` | string | - | Model name within the provider |
| `variant` | string | - | Model reasoning effort (model-specific, e.g., `minimal`, `high`, `max` for Gemini) |
| `agent` | string | `"general"` | Agent type: `general`, `build`, or `plan` |
| `format` | string | `"default"` | Output format: `default` or `json` |
| `timeout` | number | `0` | Execution timeout in ms (`0` = no timeout) |
| `command` | string | `"opencode"` | Path to OpenCode CLI executable |
For simple configs, you can use the top-level `command` option instead of the `[[agents]]` array:
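For example, pointing at a non-default binary location (the path is hypothetical):

```toml
# Top-level command option, no [[agents]] array needed
command = "/usr/local/bin/opencode"
```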
See Custom Command for details.
Providers and Models
OpenCode supports multiple AI providers. Models are specified in `provider/model` format.
Anthropic
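Example model strings (names are illustrative; check Anthropic's current model list):

```
anthropic/claude-sonnet-4
anthropic/claude-opus-4
```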
OpenAI
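Example model strings (illustrative; verify against OpenAI's current lineup):

```
openai/gpt-4o
openai/o3
```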
xAI
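Example model string (illustrative; check xAI's current model names):

```
xai/grok-3
```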
Ollama (Local)
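Example model strings for locally pulled models (names are illustrative; any model available in your Ollama install works):

```
ollama/llama3.3
ollama/qwen2.5-coder
```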
Model names are validated by the provider's API. Invalid model names result in errors from OpenCode, not Ralph TUI.
Model Variants
You can control the reasoning effort for supported models using the `variant` option:
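A config sketch (model name and variant value are illustrative):

```toml
[[agents]]
provider = "google"
model = "gemini-2.5-pro"   # illustrative model name
variant = "high"           # model-specific; see your provider's docs
```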
Variant values are model-specific and validated by the provider's API. Check your model's documentation for supported variants.
Agent Types
OpenCode offers specialized agent personalities:
| Type | Description | Use Case |
|---|---|---|
| `general` | General-purpose coding agent | Most tasks (default) |
| `build` | Focused on building and implementing | Implementation work |
| `plan` | Planning and architecture focus | Design discussions |
Configure in options:
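For example (model name is illustrative):

```toml
[[agents]]
provider = "anthropic"
model = "claude-sonnet-4"  # illustrative model name
agent = "plan"             # general (default), build, or plan
```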
Using non-default agent types may trigger warning messages in OpenCode output. Ralph TUI filters these automatically, but you may see them if running OpenCode directly.
File Context
OpenCode supports attaching files via the `--file` flag. Ralph TUI extracts file paths from your task and adds them automatically:
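For example, a task mentioning concrete paths (the paths here are hypothetical) would conceptually produce:

```
Task: "Fix the null check in src/parser.ts and update tests/parser.test.ts"

Resulting arguments:
  --file src/parser.ts --file tests/parser.test.ts
```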
Multiple files can be attached; each gets its own `--file` argument.
How It Works
When Ralph TUI executes a task with OpenCode:
- Build command: Constructs `opencode run [options]`
- Pass prompt via stdin: Avoids shell escaping issues
- Stream output: Captures stdout/stderr in real-time
- Filter metadata: Removes OpenCode status lines from display
- Detect completion: Watches for `<promise>COMPLETE</promise>` token
- Handle exit: Reports success, failure, or timeout
CLI Arguments
Ralph TUI builds these arguments:
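A conceptual sketch of the constructed command (flag names follow the options documented above; the exact set and order may vary by version, and the model name is illustrative):

```bash
# The prompt itself is piped via stdin, not passed as an argument
opencode run --model anthropic/claude-sonnet-4 --agent general --file src/parser.ts
```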
Output Filtering
OpenCode emits metadata lines during execution (tool calls, progress indicators). Ralph TUI filters these for cleaner output:
- Lines starting with `|` or `!` (status lines)
- Progress indicators like `[1/3]`
- JSON event objects
- Grep-style file matches
The filtered output shows only the agent's actual responses.
Subagent Tracing
OpenCode supports subagent tracing! Ralph TUI automatically detects and displays Task tool invocations in the subagent tree panel.
When OpenCode spawns subagents (using the Task tool), Ralph TUI:
- Detects subagent spawns: Parses OpenCode's JSONL output for Task tool invocations
- Shows hierarchy: Displays subagents in the tree panel (toggle with `T`)
- Captures results: Shows each subagent's prompt and result when selected
This works with any provider (Anthropic, OpenAI, Google, etc.) as long as the underlying model supports tool use.
Rate Limit Handling
Configure fallback when providers hit rate limits:
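A sketch of what such a config could look like; the `[[agents.fallback]]` table here is hypothetical, so check the Configuration reference for the actual option names your version supports:

```toml
[[agents]]
provider = "anthropic"
model = "claude-sonnet-4"   # illustrative model name

# Hypothetical fallback tried when the primary provider is rate-limited
[[agents.fallback]]
provider = "ollama"
model = "qwen2.5-coder"
```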
Using Local Models
With Ollama, you can run models locally:
Install Ollama
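On Linux and macOS, Ollama ships an official install script (see ollama.com for other platforms):

```bash
curl -fsSL https://ollama.com/install.sh | sh
```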
Pull a Model
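Download a model to run locally (the model name is illustrative; any model in the Ollama library works):

```bash
ollama pull qwen2.5-coder
```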
Configure Ralph TUI
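Point the agent config at the local model (key names follow the Options Reference above):

```toml
[[agents]]
provider = "ollama"
model = "qwen2.5-coder"   # must match the model you pulled
```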
Run
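Then launch as usual (the `ralph` command name is an assumption; adjust to your setup):

```bash
ralph --agent opencode
```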
Local models avoid API rate limits and costs, but may be slower and less capable for complex coding tasks.
Troubleshooting
"OpenCode CLI not found"
Ensure OpenCode is installed and in your PATH:
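Both of these should succeed if the CLI is installed and on your PATH:

```bash
command -v opencode
opencode --version
```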
"Invalid provider in model"
Ensure you're using the provider/model format:
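For example (model name is illustrative):

```
anthropic/claude-sonnet-4   # correct: provider/model
claude-sonnet-4             # wrong: missing provider prefix
```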
Valid providers: `anthropic`, `openai`, `google`, `xai`, `ollama`
"Task not completing"
Ensure your prompt template includes instructions to output the completion token:
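For example, a prompt template might end with something like this (the wording is illustrative; only the token itself is required):

```
When the task is fully complete, output exactly:
<promise>COMPLETE</promise>
```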
"Agent warning messages"
If you see warnings about agent types, either:
- Use `agent = "general"` (default, no warning)
- Ignore the warnings (they're filtered from TUI display)
Next Steps
- Claude Code Agent - Anthropic-specific CLI integration
- Configuration - Full options reference
- Prompt Templates - Customize task prompts