I used Claude Code for most of last year. It was good. Then I hit the moment that eventually catches every heavy user of a vendor-locked tool: I needed to run a task with a different model, and I couldn’t. The model I wanted — better suited for a specific type of reasoning — was available from a different provider. Claude Code does Claude. That’s it.
That’s the moment I started looking at opencode seriously. I’ve been using it daily since, with the same Anthropic subscription, the same models, and the same quality. But now I’m not locked in. When a better model ships, I change one line in a config file. I don’t miss Claude Code at all.
This article is about how I actually use opencode — the setup, the workflows, and the features that have genuinely changed how I work with AI coding agents.
What is opencode?
opencode is an open-source AI coding agent available as a terminal UI, desktop app, and IDE extension. It has 127k GitHub stars, 800 contributors, and ships updates fast — the project hit v1.2.27 in under a year. Under the hood it’s a client/server architecture, which means the TUI is just one possible frontend: the same agent can be driven from a mobile app, a web UI, or a remote terminal.
The three properties that matter to me:
- Vendor agnostic — 75+ providers through Models.dev. Switch models with one config change.
- LSP enabled — automatically loads language server context so the model knows about your types, your errors, and your imports before you explain them.
- Fully configurable — agents, commands, MCP servers, permissions, keybinds. Everything is a file.
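The one-line switch the first bullet promises is literal: changing provider and model is a single edit to the model field in opencode.json. The model IDs below are illustrative; check /models for what your configured providers actually expose.

```diff
 {
   "$schema": "https://opencode.ai/config.json",
-  "model": "anthropic/claude-sonnet-4-20250514"
+  "model": "openai/gpt-5"
 }
```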
Using it with your Claude Pro/Max subscription
If you already have a Claude Pro or Max subscription, no API key is needed. opencode supports the same device auth flow you might know from Claude Code. Run /connect inside the TUI, select Anthropic, then choose Claude Pro/Max:
```
/connect

┌ Select provider
│ Anthropic
└

┌ Select auth method
│ Claude Pro/Max          ← pick this
│ Manually enter API Key
└
```
Your browser opens and asks you to log in to your Anthropic account. Once you authorize, opencode stores the access tokens in ~/.local/share/opencode/auth.json and you’re done. No API key, no environment variables, no config file changes. Every Anthropic model available on your subscription plan is immediately available via /models.
This was one of the small friction-reducers that made switching feel easy. Same login I already had, works exactly the same way as in Claude Code, and the tokens are managed for me.
If you want to pick a default model so you don’t have to select it every session, add one line to ~/.config/opencode/opencode.json:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514"
}
```
opencode also supports Zen — a curated list of models the opencode team has tested specifically for coding agents, if you want to explore beyond your current subscription.
Install
```bash
# macOS (recommended — always up to date)
brew install anomalyco/tap/opencode

# Or via the install script
curl -fsSL https://opencode.ai/install | bash

# Or npm/bun/pnpm
npm install -g opencode-ai
```
AGENTS.md: teaching opencode your project
The first thing I do in any project is run /init. opencode scans the codebase and generates an AGENTS.md file at the project root. This file is included in the LLM context at the start of every session — it’s how opencode understands your project before you type a single message.
Commit this file to git. Your whole team benefits from it, and it persists across model switches. When you move from Claude to GPT-5 or Gemini, the project context travels with you.
Here’s what a good AGENTS.md looks like for a TypeScript fullstack project:
```markdown
# Project: backend API + Astro blog

## Stack
- NestJS (hexagonal architecture) for the backend API
- Astro + Tailwind for the frontend blog
- PostgreSQL via TypeORM, Redis for caching
- Deployed on Railway

## Architecture rules
- Domain layer (`src/domain/`) has zero infrastructure imports
- Repositories are interfaces in the domain, implemented in adapters
- Controllers only call use cases — never repositories directly
- All public methods have JSDoc comments
- Error handling uses domain exceptions, not HTTP status codes

## Code conventions
- TypeScript strict mode, no `any`
- Named exports only — no default exports
- Utility functions go in `src/shared/`, not inlined
- Prefer `const` arrow functions for handlers

## Testing
- Unit tests for all domain logic using stub repositories
- Integration tests for all repository implementations
- Jest with `--runInBand` for database tests

## Do not
- Add console.log to production code
- Use `@ts-ignore` without a comment explaining why
- Commit `.env` files or hardcoded credentials
```
The `## Do not` section is underrated. It prevents the AI from introducing patterns you’ve explicitly decided against — and it works across every model you ever switch to.
Plan → Build mode: the discipline that stops costly mistakes
opencode has two primary agents you switch between with Tab:
- Build — full tool access. Can read, write, edit, run bash commands. This is the default.
- Plan — read-only. Cannot modify files. Asks permission before running bash commands.
My rule: for anything beyond a single-file trivial change, I start in Plan mode.
The workflow looks like this:
```
[Tab] → switch to Plan

"I want to add rate limiting to the /orders endpoint.
Look at how auth middleware works in src/middleware/auth.ts
and design the same pattern for rate limiting."

→ opencode analyzes, proposes a plan, lists files to create/modify

"Add Redis-based sliding window, 100 req/min per user ID.
Make sure it hooks into the existing exception filter."

→ iterate on the plan until it's right

[Tab] → switch to Build

"Sounds good. Go ahead."
```
This costs five extra minutes. It saves hours of untangling changes that went in the wrong direction. The Plan agent can’t accidentally delete a file or run a migration while it’s figuring out an approach.
Custom commands: my favourite feature
This is the feature that made opencode feel like a tool I built for myself rather than a generic assistant. Custom commands let you define reusable prompts — triggered by typing /command-name in the TUI — that can inject shell output, file content, and arguments into the prompt automatically.
If you want to see a complete working setup, I’ve open-sourced my entire opencode configuration at github.com/ridakaddir/opencode.config — it includes the review agent and custom commands I describe below, ready to clone and use.
Commands live in .opencode/commands/ (per-project) or ~/.config/opencode/commands/ (global). Each is a markdown file with frontmatter.
/review-changes — my daily driver
Every morning before standup I run this:
```markdown
---
description: Review recent git changes and summarise what was done
agent: plan
---

Here are the recent commits on this branch:
!`git log --oneline -15`

And the full diff:
!`git diff main...HEAD --stat`

Summarise what changed, flag anything that looks incomplete or risky,
and suggest anything I should verify before opening a PR.
```
The !`command` syntax runs the shell command and injects its output into the prompt. The Plan agent reads everything and gives me a clear picture of the branch state before I write a single word of the PR description.
/pr-summary — instant PR descriptions
```markdown
---
description: Generate a PR description from the current branch diff
agent: plan
subtask: true
---

Branch: !`git branch --show-current`

Commits:
!`git log --oneline main...HEAD`

Diff summary:
!`git diff main...HEAD --stat`

Write a pull request description with:
- A one-sentence summary of what this PR does
- A bullet list of the main changes
- Any breaking changes or migration steps required
- Testing notes

Use plain markdown, no HTML.
```
The `subtask: true` flag runs this as a subagent so it doesn’t pollute my main conversation context. I copy the output directly into GitHub.
/check-file — targeted review with an argument
```markdown
---
description: Review a specific file for issues and improvements
agent: plan
---

Review the file @$ARGUMENTS

Check for:
- Logic errors or edge cases not handled
- Missing error handling
- Violations of the patterns in AGENTS.md
- Anything that would fail in a code review

Be specific about line numbers and explain the reasoning behind each issue.
```
Usage: `/check-file src/domain/ports/ticket.service.ts`

The `@$ARGUMENTS` syntax includes the file content in the prompt automatically. I run this on any file I’m about to push.
/fix-types — TypeScript cleanup
```markdown
---
description: Fix TypeScript errors in the current project
---

Current TypeScript errors:
!`npx tsc --noEmit 2>&1 | head -50`

Fix these type errors one by one. Start with the most fundamental
errors first (types that other files depend on). Do not use `any`
or `@ts-ignore` — find the correct type.
```
The shell command runs tsc and injects the first 50 lines of output. opencode reads the real errors and fixes them in context.
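To make concrete what “find the correct type” means, here is the shape of fix I expect this command to produce. This is a hypothetical, self-contained example (the User type and helpers are mine, not from any real project): narrow from unknown with a type guard instead of reaching for any.

```typescript
// Hypothetical fix: instead of `JSON.parse(raw) as any`, narrow from
// `unknown` with a user-defined type guard so downstream code stays typed.
interface User {
  id: string;
  name: string;
}

// Type guard: checks the runtime shape and tells the compiler about it.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "string" && typeof v.name === "string";
}

function parseUser(raw: string): User {
  const data: unknown = JSON.parse(raw);
  if (!isUser(data)) {
    throw new Error("invalid user payload");
  }
  return data; // narrowed to User: no `any`, no `@ts-ignore`
}

console.log(parseUser('{"id":"u1","name":"Ada"}').name); // prints "Ada"
```

The guard fixes the root cause rather than silencing the compiler, which is exactly what the “most fundamental errors first” instruction is pushing the model toward.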
Custom agents
Beyond the built-in Build and Plan agents, I define a code-reviewer subagent that I invoke on any significant PR. You can find the complete version in my opencode.config repository along with setup instructions. It lives in .opencode/agents/review.md:
```markdown
---
description: Reviews code for correctness, security, and maintainability. Read-only.
mode: subagent
temperature: 0.1
permission:
  edit: deny
  bash:
    "*": ask
    "git diff*": allow
    "git log*": allow
---

You are a senior software engineer doing a code review. You do not make changes.

Focus on:
- Logic errors and edge cases not handled
- Security issues (injection, auth bypass, data exposure)
- Missing tests for new behaviour
- Violations of the project's architectural patterns (read AGENTS.md)
- Performance problems that will matter at scale

For each issue: state the file and line, explain the problem, suggest a fix.
Do not comment on style or formatting — only correctness and architecture.
```
I invoke it with @review in a message, or it gets called automatically when I ask opencode to “review this before I commit.” The `temperature: 0.1` setting keeps the output deterministic — I want consistent, focused findings, not creative suggestions.
MCP servers
Model Context Protocol servers add external tools to opencode. My approach is situational: I keep a small set of always-on servers, and I enable riskier or heavier ones only when I need them for a specific task.
Always on — low risk, high value:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "enabled": true
    },
    "playwright": {
      "type": "local",
      "command": ["npx", "-y", "@playwright/mcp@latest"],
      "enabled": true
    }
  }
}
```
Context7 searches library documentation in real time. When I’m working with a new framework version, I add use context7 to my message and opencode pulls the actual docs rather than guessing an API that changed two versions ago.
Playwright lets opencode interact with browsers — useful for testing UI flows, scraping a page for reference, or having the agent verify that something actually renders correctly after a change.
Enabled on demand — when I know the risk is acceptable:
I also use the Figma MCP when building UI from a design file. I enable it for the session, let opencode read component specs and styles directly from Figma, then disable it when done. The same goes for other task-specific servers.
The mental model I use for deciding: would I give this tool unsupervised access to my system? A docs search server — yes, always on. A database MCP that can run queries — only when I’m actively working on something that needs it, and with "bash" permissions set to "ask" so I see every command before it runs.
One important rule regardless of which servers you add: don’t pile them all in at once. Every MCP server registers its tools in the model’s context window, and context has a finite size. A server you enable but never invoke in a session is just wasted tokens on every message.
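In config terms, “enabled on demand” just means the server stays defined but switched off until a session needs it. A sketch, using a hypothetical database MCP server (the postgres name and package are made up for illustration):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "postgres": {
      "type": "local",
      "command": ["npx", "-y", "@example/postgres-mcp"],
      "enabled": false
    }
  }
}
```

Flip enabled to true for the session that needs it, and back to false when you're done; while it's off, its tool definitions cost no context.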
/undo is your safety net — use it
One of the things that held me back from trusting AI agents with significant changes was the fear of unrecoverable state. opencode’s /undo removed that fear.
When a change goes wrong, I type /undo. The files revert to exactly what they were before opencode touched them. Run it multiple times to walk back multiple steps. /redo goes forward again.
This means I can run the Build agent on a complex refactor, let it go, and if the result isn’t right — undo it and adjust the prompt rather than manually reversing fifteen file changes. The safety net changes how boldly I prompt. I ask for more, because reversing costs nothing.
Multi-session for parallel work
opencode’s client/server architecture means multiple sessions can run against the same project simultaneously. I use this constantly:
```bash
# Terminal 1 — main development session
cd my-project && opencode

# Terminal 2 — running a parallel task
cd my-project && opencode
```
A typical pattern: one session writing unit tests while the other implements the feature those tests are for. Because both sessions read the same AGENTS.md, they share the same understanding of the project’s conventions without me re-explaining anything.
The sessions are independent — they don’t share conversation history — but they share the filesystem. Changes made by one session are immediately visible to the other. This requires a little discipline (don’t have both sessions editing the same file simultaneously) but the productivity gain is real.
The full config
Here’s the opencode.json that pulls all of this together. It lives at ~/.config/opencode/opencode.json for global settings, or at .opencode/opencode.json for per-project overrides:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514",
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "enabled": true
    },
    "playwright": {
      "type": "local",
      "command": ["npx", "-y", "@playwright/mcp@latest"],
      "enabled": true
    }
  },
  "agent": {
    "build": {
      "permission": {
        "bash": {
          "*": "ask",
          "git status": "allow",
          "git diff*": "allow",
          "git log*": "allow",
          "npm run*": "allow",
          "npx tsc*": "allow"
        }
      }
    }
  }
}
```
The `permission.bash` block is worth highlighting: I allow read-only git commands and build commands automatically, but anything that mutates state (git commit, git push, database migrations) requires my explicit approval. It threads the needle between autonomy and control.
Why I don’t miss Claude Code
After months of daily opencode use, the honest answer is: I don’t. The things I was worried about losing — quality of output, tool reliability, context understanding — all come from the model, not the agent. I still run Claude — Sonnet for most tasks, Opus when the problem needs deeper reasoning — and switching between them is one line in the config or a /models command away.
What I gained:
- I can switch to GPT-5 for tasks where it outperforms Claude, and back again in seconds
- Custom commands mean my most-used workflows are a slash command away rather than re-typed every session
- The AGENTS.md approach makes every new model feel immediately at home in my codebase
- /undo means I prompt more boldly and waste less time on manual reversions
- The project is open source, actively maintained, and has 10k+ commits — it’s not going anywhere
The vendor-agnostic model is what gets you in the door. The commands, agents, and AGENTS.md are what keep you there.
More info: opencode.ai — github.com/anomalyco/opencode — my configuration