I wrote about Spec Kit a few months ago. I genuinely liked the idea — write a spec before you code, give your AI assistant real context instead of vague prompts, get better output. The principle is sound. But after using Spec Kit on several real projects, I kept running into the same friction. Then I tried OpenSpec, and the difference was immediate.
This is not a takedown of Spec Kit. It’s a clear-eyed look at why OpenSpec fits better for the kind of work most of us actually do.
## The problem with Spec Kit in practice
Spec Kit’s seven-step workflow — constitution, specify, clarify, plan, analyze, tasks, implement — is thorough. Maybe too thorough.
For a greenfield feature with complex requirements, that ceremony makes sense. But most of my work isn’t greenfield. It’s adding a field to an API, reworking how a service handles retries, swapping out a dependency. Brownfield work. Iterative changes to an existing codebase.
Running seven sequential commands to change how a retry policy works felt like filling out a permit application to move a chair. The overhead wasn’t proportional to the task.
Three things kept bothering me:
- Setup cost. Spec Kit requires Python and uv. On a team where everyone runs Node, that’s an extra dependency to manage and explain.
- Output volume. A typical Spec Kit run produces around 800 lines of spec artifacts. That’s a lot of context to review before you even start coding.
- Greenfield bias. The workflow assumes you’re building something new. It doesn’t have a clean way to express “here’s what exists, here’s what’s changing.”
## What OpenSpec does differently
OpenSpec is a TypeScript CLI. Install it with npm. No Python, no extra toolchain. Five-minute setup.
```shell
npm install -g openspec-dev
```
Initialize it in your project:
```shell
openspec init
```
That’s it. You’re ready.
The workflow has three phases instead of seven:
1. Propose — describe what you want to change.
```shell
openspec propose
```
This creates a proposal folder with four files: proposal.md (intent and scope), delta specs in specs/ (what’s being added, modified, or removed), design.md (technical approach), and tasks.md (implementation steps).
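As a rough sketch, a freshly generated proposal folder might look something like this. The directory names here (the `changes/` parent, the `add-refresh-tokens` slug, the `auth` capability) are hypothetical placeholders; the exact layout depends on your OpenSpec version and configuration:

```
openspec/
  changes/
    add-refresh-tokens/
      proposal.md        # intent and scope
      design.md          # technical approach
      tasks.md           # implementation steps
      specs/
        auth/
          spec.md        # delta spec: ADDED / MODIFIED / REMOVED requirements
```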
2. Apply — implement the tasks.
```shell
openspec apply
```
The AI works through the task list against the spec. Same idea as Spec Kit’s implement step, but with less preamble.
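To make that concrete, the task list is just a markdown checklist the assistant works through and checks off. The format below is an illustrative sketch, not the tool’s guaranteed output:

```markdown
## 1. Shorten access-token expiry
- [ ] 1.1 Change the token TTL from 24 hours to 12 hours in the auth config
- [ ] 1.2 Update the affected unit tests

## 2. Issue refresh tokens
- [ ] 2.1 Generate a refresh token alongside the access token on login
- [ ] 2.2 Add an endpoint that exchanges a refresh token for a new access token
```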
3. Archive — merge the changes into your project’s source of truth.
```shell
openspec archive
```
The delta specs get folded back into your main spec. The proposal folder is cleaned up. Your spec stays current without manual bookkeeping.
## Delta specs are the key difference
This is the feature that sold me. OpenSpec uses delta markers — ADDED, MODIFIED, REMOVED — to express changes relative to what already exists.
```markdown
## User Authentication

### Login
- Users log in with email and password
- A JWT token is returned on success
- [MODIFIED] Token expiry changed from 24 hours to 12 hours
- [ADDED] Refresh token issued alongside access token
- [REMOVED] Session cookie fallback for legacy clients
```
This is how real work looks. You’re not rewriting the entire spec for a feature — you’re expressing a delta. Spec Kit doesn’t have this concept. Every spec is a standalone document, which means either you rewrite context that hasn’t changed, or you lose track of what’s actually different.
For brownfield projects, delta specs are a natural fit. For greenfield, everything is ADDED and it works fine too.
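To show what the archive step produces, here is what the same hypothetical auth spec would look like after the deltas are folded into the main spec: the markers disappear, modified lines take their new form, added lines become plain requirements, and removed lines are gone. A sketch of the merged source of truth:

```markdown
## User Authentication

### Login
- Users log in with email and password
- A JWT token is returned on success
- Token expiry is 12 hours
- A refresh token is issued alongside the access token
```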
## A direct comparison
| | Spec Kit | OpenSpec |
|---|---|---|
| Language | Python | TypeScript |
| Install | `uv tool install spec-kit` | `npm install -g openspec-dev` |
| Setup time | ~30 minutes | ~5 minutes |
| Workflow steps | 7 sequential phases | 3 phases (propose, apply, archive) |
| Output size | ~800 lines | ~250 lines |
| Brownfield support | Limited | First-class (delta specs) |
| AI tool support | 18+ assistants | 20+ assistants |
| API keys needed | No | No |
| Git branch management | Automatic | Manual (you control strategy) |
## When I still reach for Spec Kit
Spec Kit isn’t wrong; it’s specific. If I were starting a new system from scratch with complex, multi-component requirements and wanted a detailed planning phase with cross-referencing and gap analysis, Spec Kit’s analyze step would add real value.
But that’s maybe 10% of my work. The other 90% is iterative — and OpenSpec handles that without getting in the way.
## Why this matters
The whole point of spec-driven development is to give AI better context so it produces better output. But if the spec workflow itself is heavy enough that you skip it for “small” changes, you lose the benefit exactly where you need it most. Small changes with unclear scope are where AI hallucinates the most.
OpenSpec’s lightweight approach means I actually use it. For everything. A three-command workflow with 250 lines of output is something I’ll run for a two-endpoint change. A seven-step workflow with 800 lines of output, I won’t.
The best tool is the one you actually use.
## Using it in practice
I’ve already started using OpenSpec in my open-source project mockr — a CLI for mocking REST and gRPC APIs so frontend teams aren’t blocked waiting on backend work. It’s exactly the kind of brownfield project where OpenSpec shines: an existing codebase with real users, where every change is a delta on top of what’s already there. Each new feature or tweak becomes a clean propose → apply → archive cycle, and the spec stays in sync with the code without me having to think about it.
## Getting started
```shell
npm install -g openspec-dev
cd your-project
openspec init
```
From there, run `openspec propose` and describe what you’re building. Review the generated spec, adjust if needed, and run `openspec apply`. When you’re done, run `openspec archive` to keep your specs in sync.
More info: github.com/Fission-AI/OpenSpec
