OpenCode Agent

Use OpenCode CLI with multiple AI providers in Ralph TUI.

The OpenCode agent plugin integrates with the opencode CLI, an open-source AI coding assistant that supports multiple providers including Anthropic, OpenAI, Google, xAI, and Ollama.

INFO

OpenCode lets you use any supported AI provider with Ralph TUI, including local models via Ollama.

Prerequisites

Install OpenCode CLI:

Bash
curl -fsSL https://opencode.ai/install | bash

Verify installation:

Bash
opencode --version

Basic Usage

Run with OpenCode

Use the --agent opencode flag:

Bash
ralph-tui run --prd ./prd.json --agent opencode

Select a Model

Specify provider and model with --model:

Bash
ralph-tui run --prd ./prd.json --agent opencode --model anthropic/claude-3-5-sonnet

Select a Variant

Control reasoning effort with --variant:

Bash
ralph-tui run --prd ./prd.json --agent opencode --model google/gemini-2.5-pro --variant high

Configuration

Shorthand Config

Basic configuration:

TOML
# .ralph-tui/config.toml
agent = "opencode"
 
[agentOptions]
provider = "anthropic"
model = "claude-3-5-sonnet"
variant = "high"

Full Config

For advanced control:

TOML
[[agents]]
name = "my-opencode"
plugin = "opencode"
default = true
timeout = 300000
 
[agents.options]
provider = "anthropic"
model = "claude-3-5-sonnet"
variant = "high"
agent = "general"
format = "default"

Options Reference

Option     Type     Default      Description
provider   string   -            AI provider: anthropic, openai, google, xai, ollama
model      string   -            Model name within the provider
variant    string   -            Model reasoning effort (model-specific, e.g., minimal, high, max for Gemini)
agent      string   "general"    Agent type: general, build, or plan
format     string   "default"    Output format: default or json
timeout    number   0            Execution timeout in ms (0 = no timeout)
command    string   "opencode"   Path to OpenCode CLI executable
INFO

For simple configs, you can use the top-level command option instead of the [[agents]] array:

TOML
agent = "opencode"
command = "/custom/path/opencode"

See Custom Command for details.

Providers and Models

OpenCode supports multiple AI providers. Models are specified in provider/model format.

Anthropic

Bash
--model anthropic/claude-3-5-sonnet
--model anthropic/claude-3-opus

OpenAI

Bash
--model openai/gpt-4o
--model openai/gpt-4-turbo

Google

Bash
--model google/gemini-pro
--model google/gemini-1.5-pro

xAI

Bash
--model xai/grok-1

Ollama (Local)

Bash
--model ollama/llama3
--model ollama/codellama
INFO

Model names are validated by the provider's API. Invalid model names result in errors from OpenCode, not Ralph TUI.

Model Variants

You can control the reasoning effort for supported models using the variant option:

Bash
ralph-tui run --agent opencode --model google/gemini-2.5-pro --variant max
INFO

Variant values are model-specific and validated by the provider's API. Check your model's documentation for supported variants.

Agent Types

OpenCode offers specialized agent personalities:

Type      Description                             Use Case
general   General-purpose coding agent            Most tasks (default)
build     Focused on building and implementing    Implementation work
plan      Planning and architecture focus         Design discussions

Configure in options:

TOML
[agentOptions]
agent = "build"
INFO

Using non-default agent types may trigger warning messages in OpenCode output. Ralph TUI filters these automatically, but you may see them if running OpenCode directly.

File Context

OpenCode supports attaching files via the --file flag. Ralph TUI extracts file paths from your task and adds them automatically:

Bash
opencode run --file /path/to/context.ts

Multiple files can be attached; each gets its own --file argument, as shown below.
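
For example, a task that references two files might produce an invocation like this (the paths are illustrative):

Bash
opencode run --file /path/to/context.ts --file /path/to/helper.ts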

How It Works

When Ralph TUI executes a task with OpenCode:

  1. Build command: Constructs opencode run [options]
  2. Pass prompt via stdin: Avoids shell escaping issues
  3. Stream output: Captures stdout/stderr in real-time
  4. Filter metadata: Removes OpenCode status lines from display
  5. Detect completion: Watches for <promise>COMPLETE</promise> token
  6. Handle exit: Reports success, failure, or timeout
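
To see what Ralph TUI automates, you can approximate one iteration manually. The sketch below pipes a prompt via stdin and checks for the completion token; it is illustrative, not the actual implementation, and prompt.txt is a placeholder:

Bash
# Rough manual equivalent of one iteration (illustrative, not Ralph TUI's code)
output=$(opencode run --model anthropic/claude-3-5-sonnet < prompt.txt)
echo "$output"
if echo "$output" | grep -qF '<promise>COMPLETE</promise>'; then
  echo "Task complete"
else
  echo "No completion token: failure, timeout, or another iteration needed"
fi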

CLI Arguments

Ralph TUI builds these arguments:

Bash
# Flags are included conditionally:
#   --model   only if a model is specified
#   --agent   only if not the default
#   --format  only if not the default
#   --file    one per file from file context
opencode run \
  --model anthropic/claude-3-5-sonnet \
  --agent general \
  --format default \
  --file /path/to/context.ts \
  < prompt.txt   # Prompt is passed via stdin

Output Filtering

OpenCode emits metadata lines during execution (tool calls, progress indicators). Ralph TUI filters these for cleaner output:

  • Lines starting with | or ! (status lines)
  • Progress indicators like [1/3]
  • JSON event objects
  • Grep-style file matches

The filtered output shows only the agent's actual responses.
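
If you want a similar filter outside the TUI, a rough approximation with grep is shown below; the exact rules Ralph TUI applies may differ, and this only covers status lines and progress indicators:

Bash
# Drop lines starting with | or ! and [n/m] progress markers (approximation only)
opencode run < prompt.txt | grep -vE '^(\||!)|\[[0-9]+/[0-9]+\]'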

Subagent Tracing

INFO

OpenCode supports subagent tracing! Ralph TUI automatically detects and displays Task tool invocations in the subagent tree panel.

When OpenCode spawns subagents (using the Task tool), Ralph TUI:

  1. Detects subagent spawns: Parses OpenCode's JSONL output for Task tool invocations
  2. Shows hierarchy: Displays subagents in the tree panel (toggle with T)
  3. Captures results: Shows each subagent's prompt and result when selected

This works with any provider (Anthropic, OpenAI, Google, etc.) as long as the underlying model supports tool use.
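
If you want to inspect these events yourself, a sketch with jq follows. The field names (.type, .tool) are hypothetical assumptions, not OpenCode's documented schema, so treat this as a starting point only:

Bash
# Hypothetical event fields: .type and .tool are assumptions, not a documented schema
opencode run --format json < prompt.txt \
  | jq -c 'select(.type? == "tool" and .tool? == "task")'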

Rate Limit Handling

Configure fallback when providers hit rate limits:

TOML
agent = "opencode"
fallbackAgents = ["claude"]
 
[agentOptions]
provider = "anthropic"
model = "claude-3-5-sonnet"
 
[rateLimitHandling]
enabled = true
maxRetries = 3

Using Local Models

With Ollama, you can run models locally:

Install Ollama

Bash
# macOS
brew install ollama
 
# Linux
curl -fsSL https://ollama.ai/install.sh | sh

Pull a Model

Bash
ollama pull llama3
# or
ollama pull codellama

Configure Ralph TUI

TOML
agent = "opencode"
 
[agentOptions]
provider = "ollama"
model = "llama3"

Run

Bash
ralph-tui run --prd ./prd.json
INFO

Local models avoid API rate limits and costs, but may be slower and less capable for complex coding tasks.

Troubleshooting

"OpenCode CLI not found"

Ensure OpenCode is installed and in your PATH:

Bash
which opencode
# Should output: /path/to/opencode
 
# If not found, install:
curl -fsSL https://opencode.ai/install | bash

"Invalid provider in model"

Ensure you're using the provider/model format:

Bash
# Wrong
--model claude-3-5-sonnet
 
# Correct
--model anthropic/claude-3-5-sonnet

Valid providers: anthropic, openai, google, xai, ollama

"Task not completing"

Ensure your prompt template includes instructions to output the completion token:

HANDLEBARS
When finished (or if already complete), signal completion with:
<promise>COMPLETE</promise>

"Agent warning messages"

If you see warnings about agent types, either:

  • Use agent = "general" (default, no warning)
  • Ignore the warnings (they're filtered from TUI display)

Next Steps