Gemini Agent

Integrate Google's Gemini CLI with Ralph TUI for AI-assisted coding.

The Gemini agent plugin integrates with Google's gemini CLI to execute AI coding tasks. It supports YOLO mode for autonomous operation and streaming JSONL output for subagent tracing.

INFO

Gemini supports subagent tracing via stream-json output, so Ralph TUI can show tool calls in real time as Gemini works.

Prerequisites

Install Gemini CLI:

Bash
npm install -g @google/gemini-cli

Verify installation:

Bash
gemini --version

Basic Usage

Run with Gemini

Use the --agent gemini flag:

Bash
ralph-tui run --prd ./prd.json --agent gemini

Select a Model

Override the model with --model:

Bash
ralph-tui run --prd ./prd.json --agent gemini --model gemini-2.5-pro

Enable YOLO Mode

Configure auto-approval in config (enabled by default):

TOML
[agentOptions]
yoloMode = true

Configuration

Shorthand Config

The simplest configuration:

TOML
# .ralph-tui/config.toml
agent = "gemini"
 
[agentOptions]
model = "gemini-2.5-pro"

Full Config

For advanced control:

TOML
[[agents]]
name = "my-gemini"
plugin = "gemini"
default = true
command = "gemini"
timeout = 300000
 
[agents.options]
model = "gemini-2.5-pro"
yoloMode = true

Options Reference

Option    Type     Default    Description
model     string   -          Gemini model: gemini-2.5-pro, gemini-2.5-flash
yoloMode  boolean  true       Skip approval prompts for autonomous operation
timeout   number   0          Execution timeout in ms (0 = no timeout)
command   string   "gemini"   Path to Gemini CLI executable

Models

Gemini CLI supports these model variants:

Model             Description         Use Case
gemini-2.5-pro    Most capable        Complex tasks, architecture decisions
gemini-2.5-flash  Fast and efficient  Quick tasks, rapid iteration

Select via CLI or config:

Bash
# CLI
ralph-tui run --prd ./prd.json --agent gemini --model gemini-2.5-flash

TOML
# Config
[agentOptions]
model = "gemini-2.5-pro"

YOLO Mode

When yoloMode is enabled (default), Gemini will skip approval prompts and auto-approve all actions. This is required for Ralph TUI's autonomous operation since it cannot relay interactive prompts.

TOML
[agentOptions]
yoloMode = true

INFO

YOLO mode is named after Gemini CLI's --yolo flag. Despite the name, it's essential for automated workflows.

Subagent Tracing

Gemini emits structured JSONL via --output-format stream-json (always enabled). Ralph TUI parses this to display:

  • Tool invocations and their output
  • Duration and status of each operation
  • Nested operations and their hierarchy
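
As an illustration of how such a stream can be consumed, here is a minimal TypeScript sketch that reads JSONL lines and picks out events that look like tool calls. The event field names (type, name, status) are assumptions for illustration only; the actual stream-json schema emitted by Gemini CLI may differ.

TypeScript
import * as readline from "node:readline";
import { Readable } from "node:stream";

// Hypothetical shape of a tool-call event; field names are assumptions,
// not the documented stream-json schema.
interface ToolCallEvent {
  type: string;
  name?: string;
  status?: string;
}

// Read a JSONL stream line by line and report lines that look like tool calls.
async function traceToolCalls(stdout: Readable): Promise<void> {
  const rl = readline.createInterface({ input: stdout });
  for await (const line of rl) {
    if (!line.trim()) continue;
    let event: ToolCallEvent;
    try {
      event = JSON.parse(line);
    } catch {
      continue; // not JSON (e.g. plain log output) - skip
    }
    if (event.type === "tool_call") {
      console.log(`tool: ${event.name ?? "unknown"} (${event.status ?? "running"})`);
    }
  }
}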

Enabling Tracing

TOML
subagentTracingDetail = "full"

Or toggle in TUI:

  • Press t to cycle through detail levels
  • Press T (Shift+T) to toggle the subagent tree panel
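
Putting the pieces together, a config that selects Gemini and enables full tracing might look like the following sketch (assuming subagentTracingDetail sits at the top level of .ralph-tui/config.toml, as shown above):

TOML
# .ralph-tui/config.toml
agent = "gemini"
subagentTracingDetail = "full"

[agentOptions]
model = "gemini-2.5-pro"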

How It Works

When Ralph TUI executes a task with Gemini:

  1. Build command: Constructs gemini [options]
  2. Pass prompt via stdin: Avoids shell escaping issues with special characters
  3. Stream output: Captures stdout/stderr in real-time
  4. Parse JSONL: Extracts structured tool call data (always enabled)
  5. Detect completion: Watches for completion token
  6. Handle exit: Reports success, failure, or timeout
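
As a rough sketch of this flow (not Ralph TUI's actual implementation), the steps above could look like the following in TypeScript using Node's child_process. The buildGeminiArgs helper and the option names are illustrative; only the CLI flags come from the "CLI Arguments" section below.

TypeScript
import { spawn } from "node:child_process";

// Illustrative helper: assemble CLI arguments from the agent options.
function buildGeminiArgs(opts: { model?: string; yoloMode?: boolean }): string[] {
  const args = ["--output-format", "stream-json"]; // always used for structured output
  if (opts.model) args.push("-m", opts.model);     // only if a model is specified
  if (opts.yoloMode) args.push("--yolo");          // only when yoloMode is enabled
  return args;
}

// Spawn the CLI, pass the prompt via stdin, and stream its output.
function runGemini(prompt: string, opts: { model?: string; yoloMode?: boolean }) {
  const child = spawn("gemini", buildGeminiArgs(opts), { stdio: ["pipe", "pipe", "pipe"] });
  child.stdin.write(prompt); // prompt via stdin avoids shell escaping issues
  child.stdin.end();
  child.stdout.on("data", (chunk) => process.stdout.write(chunk)); // stream output
  child.on("close", (code) => {
    console.log(code === 0 ? "success" : `failed with exit code ${code}`);
  });
}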

CLI Arguments

Ralph TUI builds these arguments:

Bash
# --output-format stream-json : always used for structured output
# -m <model>                  : included if a model is specified
# --yolo                      : included when yoloMode is enabled
# the prompt is passed via stdin
gemini \
  --output-format stream-json \
  -m gemini-2.5-pro \
  --yolo \
  < prompt.txt

Model Validation

Gemini agent validates that model names start with gemini-:

TypeScript
// Valid models
"gemini-2.5-pro"
"gemini-2.5-flash"
 
// Invalid - will show error
"gpt-4"
"claude-3"

Troubleshooting

"Gemini CLI not found"

Ensure Gemini is installed and in your PATH:

Bash
which gemini
# Should output: /path/to/gemini
 
# If not found, install:
npm install -g @google/gemini-cli

"Invalid model"

Ensure your model name starts with gemini-:

TOML
[agentOptions]
model = "gemini-2.5-pro"  # Correct
# model = "gpt-4"         # Wrong - not a Gemini model

"Execution timeout"

Increase the timeout for complex tasks:

TOML
[[agents]]
name = "gemini"
plugin = "gemini"
timeout = 600000  # 10 minutes

Next Steps