A complete, hands-on guide to installing, configuring, and using OpenCode to ship code faster. Run tasks from plain English, wire up any LLM provider, and stay in the flow — right from your shell.
What you'll learn in this guide
OpenCode is an open-source AI coding agent that lives in your terminal. Unlike closed, IDE-locked tools, OpenCode is provider-agnostic, fully scriptable, and designed around a keyboard-first TUI workflow.
Designed as a TUI (terminal user interface). No browser, no Electron, no context-switching away from your shell.
Use Anthropic, OpenAI, Google, Groq, local models via Ollama, or any OpenAI-compatible endpoint. Switch anytime.
Plans, edits, runs commands, reads files, and iterates — you approve each step with a clear diff view.
Every conversation is saved locally. Resume, branch, or share sessions between projects and teams.
Licensed under MIT. Read the code, audit it, fork it, or contribute back. No vendor lock-in, ever.
macOS, Linux, Windows (WSL). Ships as a single binary — no runtime, no dependencies to install.
Pick the method that suits your platform. All commands are copy-ready.
curl -fsSL https://opencode.ai/install | bash
Installs the latest release into ~/.opencode/bin and adds it to your PATH.
npm install -g opencode-ai
Requires Node.js 18+. Great if you already manage CLIs through npm.
brew install sst/tap/opencode
For macOS and Linux users who prefer Homebrew.
scoop bucket add sst https://github.com/sst/scoop-bucket.git
scoop install opencode
Native Windows install via Scoop — no WSL required.
opencode --version
You should see a version string like opencode 0.x.x. If the command is not found, restart your terminal or add the install path to your shell profile.
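If the binary was installed but your shell still can't find it, a minimal sketch of the fix, assuming the curl installer's default ~/.opencode/bin directory mentioned above:

```shell
# Assumes the default install directory used by the curl installer;
# adjust the path if you installed via npm, Homebrew, or Scoop.
export PATH="$HOME/.opencode/bin:$PATH"

# To make this permanent, add the same export line to your shell
# profile (~/.bashrc, ~/.zshrc, or similar) and open a new terminal.
```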
From zero to your first AI-generated pull request.
OpenCode stores credentials locally. Use the built-in auth helper to save a key for any supported provider:
opencode auth login
You'll be prompted to pick a provider (Anthropic, OpenAI, Google, Groq, OpenRouter, Ollama, …) and paste a key. Keys are encrypted in ~/.local/share/opencode/auth.json.
Navigate to any repository and launch the TUI. OpenCode indexes the repo and opens an interactive chat:
cd ~/projects/my-app
opencode
On first launch, a lightweight context index is created at .opencode/ so the agent understands your codebase.
Type a task in plain English. OpenCode proposes a plan, shows a diff, and applies it after you approve:
› Add input validation to the signup form and write unit tests for it.
Use y to accept edits, n to reject, or e to edit the patch yourself before applying.
A quick tour of the capabilities that make OpenCode feel like a senior teammate.
OpenCode runs a reasoning loop: plan → edit → run → observe → iterate. You stay in control by approving tool calls, file writes, and shell commands one at a time — or all at once with auto-approve mode.
--- a/src/auth/signup.ts
+++ b/src/auth/signup.ts
@@ -12,4 +12,10 @@
 export async function signup(input: SignupInput) {
+  if (!input.email?.includes("@")) {
+    throw new ValidationError("Invalid email");
+  }
+  if (!input.password || input.password.length < 8) {
+    throw new ValidationError("Password too short");
+  }
   const user = await db.users.create(input);
   return user;
 }
Start with a fast, cheap model for exploration, then switch to a flagship model for the hard parts — without leaving your session. OpenCode preserves context across model changes.
Run /model to pick from any configured provider:
› /model
● anthropic/claude-sonnet-4 $$ fast, smart
anthropic/claude-opus-4 $$$ best quality
openai/gpt-4.1 $$ reliable
google/gemini-2.5-pro $$ huge context
groq/llama-3.3-70b $ ultra fast
ollama/qwen2.5-coder:14b — local, free
OpenCode speaks the Model Context Protocol, so you can plug in your own tools — database clients, internal APIs, browser automation, or project-specific scripts — and make them first-class capabilities of the agent.
{
"mcp": {
"postgres": {
"type": "local",
"command": ["npx", "-y", "@mcp/postgres"],
"env": { "DATABASE_URL": "postgres://..." }
},
"playwright": {
"type": "local",
"command": ["npx", "-y", "@mcp/playwright"]
}
}
}
Muscle memory that will save you hours each week.
/help: list all commands
/model: switch models mid-session
/new: start a fresh session
/sessions: browse and resume saved sessions
/diff: view pending changes
/undo: revert the last applied change
/share: share the current session
/init: scaffold AGENTS.md with project conventions

Prefix any message with ! to run it as a shell command and have the output piped back into the conversation. For example: !npm test lets the agent read the failing tests and fix them on the next turn.
Dial in behavior per project with a simple JSON file.
OpenCode reads configuration from opencode.json in your project root (falling back to ~/.config/opencode/config.json). The file defines your default model, tool permissions, MCP servers, and custom commands.
{
"$schema": "https://opencode.ai/config.json",
"model": "anthropic/claude-sonnet-4",
"theme": "white",
"autoshare": false,
"autoupdate": true,
"provider": {
"anthropic": {
"options": { "temperature": 0.2 }
}
},
"tools": {
"bash": { "allow": ["npm", "pnpm", "git", "node"] },
"write": { "deny": ["**/.env*", "**/secrets/**"] }
},
"commands": {
"review": "Review the staged git changes and suggest improvements.",
"tests": "Generate missing unit tests for changed files."
}
}
Drop an AGENTS.md at the repo root to teach the agent your conventions — preferred libraries, code style, test commands, and things to avoid. Run /init to scaffold it.
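The exact contents are up to you; a minimal sketch might look like the following (every rule here is illustrative, not an OpenCode requirement):

```markdown
# AGENTS.md

- Package manager: pnpm, never npm or yarn.
- Run `pnpm test` after every change; all tests must pass.
- Validation: prefer zod over hand-rolled regex checks.
- Do not modify files under `migrations/` without asking first.
```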
Any provider key can also be set via environment variables, e.g. ANTHROPIC_API_KEY, OPENAI_API_KEY, GROQ_API_KEY. Useful for CI and containerized workflows.
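In a container or CI job, that might look like the snippet below; the key value is a placeholder, and in practice you would inject it from your CI's secret store:

```shell
# Placeholder value for illustration only; never commit real keys.
# In CI, pull this from your secret manager instead.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
```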
Run one-shot tasks from scripts with opencode run "your prompt". Exit code 0 means success; stdout contains the agent's response.
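A hedged sketch of how a script might branch on that exit code; the prompt is an example, and it assumes `opencode` is on PATH with a provider key configured:

```shell
#!/usr/bin/env sh
# Illustrative one-shot invocation; adapt the prompt to your pipeline.
prompt="Generate release notes for the last 10 commits"
if notes=$(opencode run "$prompt" 2>/dev/null); then
  status="ok"       # exit code 0: $notes holds the agent's response from stdout
else
  status="failed"   # non-zero exit: fall back, retry, or fail the job
fi
echo "$status"
```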
Small habits that compound into much better output.
"Add JWT auth because sessions don't scale to our serverless runtime" produces better code than "add auth". Context shapes the plan.
One session per feature. Use /new when switching tasks; a shorter context means cheaper tokens and sharper answers.
Permit npm test (or equivalent). The agent will loop until green, which catches regressions before you even review.
Start with opencode --plan to get a written plan before any file is touched. Approve the plan, then switch to edit mode.
Make small commits between agent steps. Easy to bisect, easy to revert, and the agent can read your git log for context.
Every correction you repeat is a line that belongs in AGENTS.md. Invest 10 minutes; save 10 hours.
Everything new users usually ask in the first week.
Yes. OpenCode is MIT-licensed. You pay only for the LLM API calls to your chosen provider — or nothing at all if you run a local model via Ollama.
Copilot and Cursor are tied to specific IDEs and models. OpenCode runs in any terminal, on any OS, with any model. It's designed for engineers who live in the shell and want full control over their stack.
Only to the LLM provider you configure, and only the context needed for your request. There's no OpenCode-owned server in the loop. For maximum privacy, run a local model.
Absolutely. Use opencode run "prompt" in headless mode. It's ideal for automated code reviews, test generation, and release-note drafting.
Every change is staged and shown as a diff before it's written. You can reject it, edit it inline, or apply it and then run /undo to revert.
Yes. OpenCode respects .gitignore and can be scoped to a subdirectory. For large repos, add an AGENTS.md per package to keep context tight.
You've learned the fundamentals. Now install OpenCode and ship something today.