Pi Agent

Integrate Pi Coding Agent with Ralph TUI for AI-assisted coding.

The Pi agent plugin integrates with the Pi Coding Agent (pi) to execute AI coding tasks. Pi is a minimal, extensible terminal coding agent with a tiny core that supports multiple providers and models.

INFO

Pi outputs structured JSONL which enables detailed subagent tracing with tool call breakdowns.

Prerequisites

Install Pi CLI:

Bash
npm install -g @mariozechner/pi-coding-agent

For other platforms, visit the Pi GitHub repository.

Verify installation:

Bash
pi --version
# Should output version like: 0.55.0

You'll also need an API key for at least one supported provider:

  • Anthropic: ANTHROPIC_API_KEY
  • OpenAI: OPENAI_API_KEY
  • Google: GEMINI_API_KEY

Basic Usage

Run with Pi

Use the --agent pi flag:

Bash
ralph-tui run --prd ./prd.json --agent pi

Select a Model

Specify a model via --agent-options:

Bash
ralph-tui run --prd ./prd.json --agent pi --agent-options 'model=sonnet'

Or configure in TOML:

TOML
[agentOptions]
model = "sonnet"

Configure Thinking Level

Set extended thinking level:

TOML
[agentOptions]
thinking = "high"

Configuration

Shorthand Config

The simplest configuration:

TOML
# .ralph-tui/config.toml
agent = "pi"

Full Config

For advanced control:

TOML
[[agents]]
name = "my-pi"
plugin = "pi"
default = true
command = "pi"
timeout = 300000
 
[agents.options]
mode = "json"
model = "sonnet"
thinking = "high"

Options Reference

| Option | Type | Default | Description |
|---|---|---|---|
| mode | string | "json" | Output mode: "json" for structured JSONL or "text" for plain text |
| model | string | - | Model to use (e.g., sonnet, openai/gpt-4o, anthropic/claude-sonnet) |
| thinking | string | - | Thinking level: off, minimal, low, medium, high, xhigh |
| timeout | number | 0 | Execution timeout in ms (0 = no timeout) |
| command | string | "pi" | Path to Pi CLI executable |
INFO

A timeout of 0 means no timeout (unlimited execution time). For production or autonomous runs, consider setting a reasonable timeout like 300000 (5 minutes) or 600000 (10 minutes).
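The timeout semantics can be sketched with Python's subprocess module (run_with_timeout is a hypothetical helper, not Ralph TUI's actual code):

```python
import subprocess

def run_with_timeout(cmd, prompt, timeout_ms=300_000):
    """Run a command with the prompt on stdin.

    timeout_ms=0 means no timeout (wait forever), mirroring the
    semantics described above.
    """
    timeout = timeout_ms / 1000 if timeout_ms else None
    return subprocess.run(
        cmd, input=prompt, capture_output=True, text=True, timeout=timeout
    )
```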

Model Selection

Pi supports multiple providers and model patterns:

Shorthand Models

TOML
[agentOptions]
# Choose one -- TOML does not allow the same key twice:
model = "sonnet"     # Claude Sonnet
# model = "haiku"    # Claude Haiku
# model = "opus"     # Claude Opus

Provider/Model Format

TOML
[agentOptions]
# Choose one -- TOML does not allow the same key twice:
model = "openai/gpt-4o"
# model = "anthropic/claude-sonnet"
# model = "google/gemini-2.5-pro"

With Thinking Suffix

TOML
[agentOptions]
model = "sonnet:high"  # Sonnet with high thinking

List available models:

Bash
pi --list-models

Thinking Levels

Pi supports extended thinking on models that offer it:

| Level | Description |
|---|---|
| (empty) | Use model defaults |
| off | No extended thinking |
| minimal | Minimal thinking budget |
| low | Low thinking budget |
| medium | Medium thinking budget |
| high | High thinking budget |
| xhigh | Maximum thinking |

Output Modes

JSON Mode (Default)

TOML
[agentOptions]
mode = "json"

Enables structured JSONL output with:

  • Detailed tool call tracking
  • Subagent tracing
  • Token usage reporting
  • Cost information
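Consuming JSONL amounts to one json.loads per non-empty line; a minimal reader could look like this (the exact event fields Pi emits are not specified here, so treat any field names in downstream code as assumptions):

```python
import json

def read_jsonl_events(lines):
    """Yield one parsed event per non-empty JSONL line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)
```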

Text Mode

TOML
[agentOptions]
mode = "text"

Plain text output without structured data.

Skills Support

Pi CLI supports skills:

| Location | Description |
|---|---|
| ~/.pi/skills/ | Personal skills (user-specific) |
| .pi/skills/ | Repository skills (project-specific) |

How It Works

When Ralph TUI executes a task with Pi:

  1. Build command: Constructs pi --print --mode json [options]
  2. Pass prompt via stdin: Avoids shell escaping issues with special characters
  3. Parse JSONL output: Extracts structured events for display
  4. Stream in real-time: Shows thinking and tool calls as they happen
  5. Handle completion: Reports final results with usage stats
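Steps 1-2 above can be sketched as follows (a simplified illustration, not Ralph TUI's actual implementation):

```python
def build_pi_args(model=None, thinking=None, command="pi"):
    """Build the pi invocation described above: --print --mode json,
    plus --model/--thinking only when they are configured."""
    args = [command, "--print", "--mode", "json"]
    if model:
        args += ["--model", model]
    if thinking:
        args += ["--thinking", thinking]
    return args
```

The prompt itself is then written to the process's stdin rather than placed in the argument list, which sidesteps shell-escaping issues.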

CLI Arguments

Ralph TUI builds these arguments:

Bash
# --print: non-interactive mode; --mode json: JSONL output (for subagent
# tracing); --model / --thinking: included only when configured; the
# prompt is passed via stdin. (Comments are kept off the continuation
# lines -- a "#" after "\" would break the command.)
pi \
  --print \
  --mode json \
  --model sonnet \
  --thinking high \
  < prompt.txt

Subagent Tracing

Pi's JSON mode enables rich subagent tracing:

  • Thinking: Shows reasoning process
  • Tool Calls: Detailed tool invocations with inputs
  • Tool Results: Tool outputs with errors
  • Token Usage: Input/output tokens and cost
  • Turn History: Multiple conversation turns

Troubleshooting

"Pi not found in PATH"

Ensure Pi is installed and in your PATH:

Bash
which pi
# Should output: /path/to/pi
 
# If not found, install via:
# npm install -g @mariozechner/pi-coding-agent

"No API key configured"

Set your provider API key:

Bash
# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
 
# OpenAI
export OPENAI_API_KEY=sk-...
 
# Google
export GEMINI_API_KEY=...

"Invalid model"

Check available models:

Bash
pi --list-models

"Execution timeout"

Increase the timeout for complex tasks:

TOML
[[agents]]
name = "pi"
plugin = "pi"
timeout = 600000  # 10 minutes

Next Steps