What Is Spec-Driven Development?
Spec-Driven Development (SDD) is a structured approach to software development that treats specifications as executable sources of truth rather than throwaway planning documents. In traditional development, code is king — specifications serve code, often becoming outdated as implementation evolves. SDD inverts this relationship: specifications become the primary artifact, and code serves specifications.
This paradigm shift is what makes AI-assisted development reliable at scale. Four core principles define SDD:
- Specifications as the primary artifact. The spec is the central source of truth. Code becomes its expression in a particular language and framework. Maintaining software means evolving specifications, not just patching code.
- Executable specifications. Specs must be precise, complete, and unambiguous enough to generate working systems. This precision eliminates the gap between intent and implementation.
- Living documentation. Debugging means fixing specifications that generate incorrect code. Refactoring means restructuring specs for clarity. Specifications remain synchronized with implementation.
- AI-human collaboration. AI transforms specs to code, but raw generation without structure produces chaos. SDD provides that structure through well-defined specifications and implementation plans.
Every SDD workflow moves through four phases (Specify, Plan, Tasks, and Implement), each producing artifacts that feed into the next and creating a traceable path from requirements to working code.
Why SDD Matters for Enterprise Teams
Three converging trends make SDD essential for enterprise teams today:
- AI capabilities. Natural language specifications can now reliably generate working code, automating the mechanical translation from specification to implementation. The bottleneck has shifted from writing code to defining intent clearly.
- Software complexity. Modern systems integrate dozens of services, frameworks, and dependencies. SDD provides systematic alignment through specification-driven generation, preventing architectural drift across long projects or multiple developers.
- Pace of change. Requirements change rapidly. SDD transforms requirement changes from obstacles into a normal workflow — update the spec, and affected artifacts regenerate systematically rather than being patched into code without updating documentation.
For enterprise developers specifically, SDD delivers three concrete benefits:
- Consistent alignment with organizational standards — security policies, cloud platform requirements, and compliance obligations are written once in the constitution and enforced in every generated artifact.
- Auditable documentation — every requirement, architectural decision, and implementation step exists as a versioned markdown file alongside the source code.
- Systematic compliance enforcement — rather than relying on developers to remember every organizational constraint while prompting, those constraints are embedded in the specification process.
While SDD excels at greenfield development, most enterprise work involves existing codebases. The constitution documents existing architectural patterns and constraints. New feature specifications reference those established patterns, so generated plans integrate with current architecture rather than proposing isolated reimplementations.
GitHub Spec Kit: Core Components and Commands
GitHub Spec Kit is an open-source toolkit that enables SDD by integrating structured workflows, persistent artifacts, and reusable AI command patterns. It addresses a fundamental challenge in AI-assisted development: maintaining context and consistency across multiple interactions with coding assistants.
The toolkit delivers three essential capabilities:
- Persistent artifacts. Specifications, plans, and tasks are stored as plain markdown files in your repository. Git handles versioning, branching, and review.
- Standardized workflow. A defined process guides you through the four SDD phases, ensuring nothing is skipped and each phase receives proper review before the next begins.
- Reusable commands. Built-in slash commands encapsulate best-practice prompting patterns so the workflow is consistent across team members and projects.
Core components
| Component | Purpose |
|---|---|
| specify-cli | Initializes and manages spec-driven projects; scaffolds prompt templates and artifact directories. |
| Markdown artifact files | constitution.md, spec.md, plan.md, and tasks.md drive each phase of development. |
| Core slash commands | /speckit.constitution, /speckit.specify, /speckit.plan, /speckit.tasks, /speckit.implement |
| Optional enhancement commands | /speckit.clarify, /speckit.analyze, /speckit.checklist |
Slash command syntax varies by agent
After running specify init, your AI coding agent will have access to these structured
development commands. The invocation syntax differs depending on the agent:
| Agent | Command syntax | Notes |
|---|---|---|
| GitHub Copilot | /speckit.constitution, /speckit.specify, … | Templates are installed in .github/prompts/ |
| Claude Code | /speckit-constitution, /speckit-specify, … | Installs as skills in .claude/skills/ |
| Codex CLI | $speckit-constitution, $speckit-specify, … | Requires --ai-skills flag during specify init |
| Cursor, Windsurf, others | /speckit.constitution, /speckit.specify, … | Standard dotted slash commands |
GitHub Spec Kit supports 20+ agents at the time of writing, including GitHub Copilot,
Claude Code, Cursor, Windsurf, Gemini CLI, Codex CLI, Amazon Q Developer, Kiro, Amp,
Roo Code, Junie, and more. A generic mode (--ai generic) lets you bring
any unsupported agent by pointing --ai-commands-dir at your agent's command directory.
See the spec-kit repository for the current full list.
Beyond the core workflow, Spec Kit has a growing community ecosystem.
Extensions add new commands and workflows — for example, Jira integration,
post-implementation code review, Azure DevOps sync, or V-Model test traceability.
Presets override templates and terminology without changing tooling — useful
for enforcing organizational spec formats or regulatory standards.
Both are managed with specify extension add <name> and specify preset add <name>.
Enhancement Commands
Beyond the core five commands, Spec Kit provides three optional commands that improve artifact quality before implementation starts. Used together, they form a quality gate between spec generation and coding.
/speckit.clarify — gap analysis
Run /speckit.clarify after generating an initial spec to surface ambiguities,
missing details, and underspecified edge cases. The AI reviews your spec and generates
targeted questions — for example:
- "The spec mentions file upload but doesn't specify maximum concurrent uploads. Should there be a limit?"
- "Error handling for network failures isn't specified. What should happen if the upload connection is lost?"
For each question, the AI often provides multiple-choice options. You select or provide a custom answer, and the spec is updated accordingly. Run it multiple times — a first pass surfaces major gaps, a second covers edge case details, a third fine-tunes nonfunctional requirements. Stop when Copilot has no more questions or only raises features you intend to defer.
/speckit.analyze — cross-artifact consistency
Run /speckit.analyze after generating plan.md and
tasks.md but before starting implementation. It performs cross-artifact
consistency checking: does the plan implement all spec requirements? Do tasks cover all
plan elements? Does everything align with the constitution? Typical findings include:
- "Plan proposes PostgreSQL, but constitution requires Azure SQL Database."
- "Specification requires audit logging, but plan doesn't describe logging implementation."
- "Task list omits database migration scripts mentioned in plan."
Each identified inconsistency is an issue that would otherwise surface during implementation or code review. Catching them during analysis prevents costly rework.
/speckit.checklist — quality validation
/speckit.checklist generates a custom checklist based on your specification —
essentially "unit tests for English prose." The AI produces verification questions such as:
- "Does every user story have corresponding acceptance criteria?"
- "Are all error scenarios documented with specific error messages?"
- "Do nonfunctional requirements include measurable success criteria?"
Any "no" answers reveal gaps to close before sharing with stakeholders or proceeding to implementation.
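The output is an ordinary markdown checklist you can work through and commit alongside the spec. A hypothetical excerpt for a document upload spec might look like this (all items are illustrative):

```markdown
## Spec Quality Checklist: Document Upload (illustrative)

- [x] Every user story has corresponding acceptance criteria
- [ ] All error scenarios documented with specific error messages
- [x] Nonfunctional requirements include measurable success criteria
- [ ] Maximum file size and allowed file types stated explicitly
```

Unchecked items become concrete follow-up work before the spec moves to planning.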
Git Integration and Project Organization
All Spec Kit artifacts are plain markdown files stored in your Git repository alongside source code. This gives you change tracking, branch-based development, and comprehensive pull request reviews — reviewers see both what you built (spec) and how you built it (code).
Project structure
```text
my-project/
├── .github/
│   ├── agents/
│   └── prompts/          ← Spec Kit slash command templates
├── .specify/
│   ├── memory/
│   │   └── constitution.md
│   ├── scripts/
│   └── templates/
├── SourceCode/
│   └── ...
└── specs/
    └── 001-document-upload-feature/
        ├── spec.md
        ├── plan.md
        └── tasks.md
```
Features are numbered sequentially (001, 002) to track development
order. For teams working on multiple features concurrently, each feature has its own
directory containing its complete specification, plan, and tasks — preventing confusion and
enabling parallel work without conflicts.
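Because specs are just files in the repository, the branch-per-feature flow needs no special tooling. A minimal sketch, with repository, branch, and feature names purely illustrative (in a real project, /speckit.specify generates the spec file rather than echo):

```shell
# Create a demo repo and a feature branch; the spec travels with the code.
git init -q demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
git checkout -qb feature/002-document-search

# Stand-in for the spec that /speckit.specify would normally generate.
mkdir -p specs/002-document-search
echo "# Feature Specification: Document Search" > specs/002-document-search/spec.md

# Commit the spec so reviewers see it alongside later code changes.
git add specs/ && git commit -qm "spec: document search feature"
git log --format=%s
```

A pull request from this branch then shows the specification and the implementation in a single review.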
Feature tracking with SPECIFY_FEATURE
In Git-based workflows, Spec Kit infers the active feature from your branch name. If you're
on branch feature/document-upload, Spec Kit automatically reads and writes
artifacts in specs/document-upload/. For non-Git workflows or manual override,
set the environment variable explicitly:
```shell
# PowerShell
$env:SPECIFY_FEATURE = "001-document-upload"

# Bash / zsh
export SPECIFY_FEATURE="001-document-upload"
```
SPECIFY_FEATURE must be set in the context of the agent you're working with
before invoking /speckit.plan or follow-up commands. Without it, the
AI may read the wrong spec.md if multiple features are in progress.
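A quick sanity check before invoking /speckit.plan is to confirm the variable actually points at an existing feature directory. A sketch (feature name and layout are illustrative; the mkdir stands in for what /speckit.specify would have created):

```shell
export SPECIFY_FEATURE="001-document-upload"
mkdir -p "specs/$SPECIFY_FEATURE"   # normally created by /speckit.specify

# Fail fast if the agent would otherwise target a nonexistent feature.
if [ -d "specs/$SPECIFY_FEATURE" ]; then
  echo "active feature: $SPECIFY_FEATURE"
else
  echo "missing specs/$SPECIFY_FEATURE" >&2
  exit 1
fi
```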
Continuous workflow: command chaining
Spec Kit supports iterative development through progressive command chaining. If requirements change at any point, return to the relevant phase, update the artifact, and regenerate downstream artifacts:
/speckit.specify → /speckit.clarify → /speckit.plan → /speckit.analyze → /speckit.tasks → /speckit.implement
Development Scenarios
Spec Kit supports four development patterns, each with distinct usage approaches:
Greenfield (0-to-1)
The primary use case — transforming a high-level product vision into a concrete, structured
implementation path from scratch. Start with /speckit.constitution to establish
project principles, then use /speckit.specify for each feature as you build
the application iteratively.
Brownfield enhancement
For existing applications, your constitution documents existing architectural patterns and constraints. New feature specifications reference those established patterns. When adding a document upload feature to an existing employee portal, the plan shows how the new feature integrates with the current React front end, .NET back end, and Azure infrastructure — rather than proposing a separate implementation.
Refactoring and modernization
Treat the desired end state as the specification, create a plan for the refactoring approach, and generate incremental tasks. This structured approach prevents the common problem of starting a refactor and getting lost mid-process with partially working code.
Exploratory development
Generate multiple plans from the same specification — for example, one using Azure Blob Storage and another using Azure Files. Implement both, compare results, and choose the better approach based on actual experience rather than assumptions.
The Constitution File
The constitution (constitution.md) captures the non-negotiable principles,
constraints, and standards that govern your project. It acts as a standing guardrail: every
generated spec, plan, task, and code output is checked against it automatically. Write a
principle once; Spec Kit enforces it throughout every phase.
Key benefits in enterprise settings:
- Consistency enforcement — prevents architectural drift across long projects or multiple developers.
- Compliance documentation — makes security policies and regulatory requirements explicit and auditable.
- Institutional knowledge capture — preserves hard-won architectural lessons in a form AI can reference.
- Reduced cognitive load — developers don't need to remember every organizational standard while prompting.
Constitution structure
| Section | What to define |
|---|---|
| Technology standards | Approved cloud platform, frameworks, databases, secret management tooling |
| Security requirements | Authentication, authorization, encryption standards, PII handling rules |
| Performance and scalability | Response time targets (e.g., 200 ms at p95), concurrency limits, caching policy, async thresholds |
| Coding standards | Language conventions, minimum test coverage, logging interfaces, documentation requirements |
| Compliance and governance | Regulatory obligations, accessibility (WCAG 2.1 AA), audit log retention periods |
Keep every principle specific and measurable — "API responses complete within 200 ms for 95% of requests" is more useful than "the system should be fast."
Creating the constitution
Run /speckit.constitution in GitHub Copilot Chat with a natural language
description of your project constraints. For a new project, Copilot generates a structured
constitution.md from scratch. For an existing project, it reviews the codebase
and infers principles from what it finds.
After generation, review the output critically:
- Add missing requirements (e.g., specific logging formats your organization mandates).
- Remove generic boilerplate that provides no concrete guidance.
- Replace vague statements with measurable criteria.
- Align with internal standards documents where they exist.
- Validate with security, compliance, and architecture teams before treating it as authoritative.
How the constitution integrates into the workflow
Every core command respects the constitution automatically:
- /speckit.specify — flags spec requirements that conflict with constitution principles before you proceed.
- /speckit.plan — generates plans with explicit sections demonstrating compliance with each relevant principle.
- /speckit.analyze — compares spec, plan, and tasks against the constitution and surfaces deviations.
- /speckit.implement — produces code that honors constitution constraints automatically (e.g., using encrypted storage when mandated).
Enterprise constitution example
```markdown
## Azure Platform Standards
- Host all services on Azure App Service or Azure Container Apps.
- Use Azure Blob Storage for document storage.
- Secrets stored exclusively in Azure Key Vault.

## Identity Integration
- Authenticate via Microsoft Entra ID using OAuth 2.0 / OpenID Connect.
- Implement RBAC using Microsoft Entra ID groups. No custom authentication.

## Corporate Compliance
- Audit logging per enterprise retention policies (minimum 90 days).
- Accessibility: WCAG 2.1 Level AA minimum.
- Scan all dependencies for known vulnerabilities before deployment.

## Development Standards
- Back end: .NET 10 following organizational coding conventions.
- Minimum 80% unit test coverage.
- All APIs documented with OpenAPI/Swagger.
```
Writing the Spec File
The spec file (spec.md) is the single source of truth for what a feature
should do. Every implementation decision traces back to it — if something isn't in the spec,
it doesn't get built unless the spec is updated first.
Spec structure
| Section | What to include |
|---|---|
| Summary | One or two sentences describing the feature from an end-user perspective |
| User stories | Brief narratives of how users interact with the feature |
| Acceptance criteria | Specific, testable conditions that mark the feature as complete |
| Functional requirements | Detailed descriptions of system behavior, broken into sub-areas |
| Nonfunctional requirements | Performance, security, scalability, and compliance attributes with measurable thresholds |
| Edge cases | Error conditions, boundary behaviors, and unusual scenarios with explicit handling descriptions |
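To make the structure concrete, here is a hypothetical spec.md excerpt for the document upload feature used as a running example. Every detail (file types, limits, messages) is illustrative, not a prescribed format:

```markdown
# Feature Specification: Document Upload

## Summary
Employees upload supporting documents (PDF, DOCX) to their profile in the
employee portal.

## Acceptance criteria
- Files up to 50 MB upload successfully; larger files are rejected with the
  message "File exceeds the 50 MB limit."
- Upload progress is visible for files larger than 5 MB.

## Edge cases
- Network failure mid-upload: show "Upload interrupted. Try again." and allow
  retry without data loss.
```

Note how each criterion is testable as written: a reviewer can verify it without guessing at thresholds or wording.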
Generating a spec with /speckit.specify
Run /speckit.specify in GitHub Copilot Chat with a natural language description
covering: what the feature does, who uses it, where it lives in the system, any constraints,
and expected error handling. Copilot generates a structured spec.md covering
all six sections.
After generation, review for completeness, accuracy, and consistency. The initial draft is
a strong starting point but rarely final — use /speckit.clarify to close gaps
before moving to planning.
Spec writing best practices
- Be specific and measurable. Replace "support large files" with "support files up to 50 MB."
- Use consistent terminology. If you call them "documents" in the summary, don't switch to "files" later.
- Cover error handling explicitly. Specify the exact message and behavior for each failure mode.
- Define what, not how. The spec states requirements; implementation decisions belong in the plan.
- Keep scope manageable. If a spec exceeds roughly 300 lines, split it into separate feature specs.
- Validate against the constitution. Any conflict caught here is far cheaper to fix than after implementation.
Creating the Plan File
The plan file (plan.md) bridges the gap between what the spec defines and the
concrete tasks that follow. The spec answers what to build; the plan answers
how to build it. This separation is intentional — if you switch from Azure Blob
Storage to Azure Files, plan.md changes while spec.md remains
largely untouched.
Plan structure
| Section | What to include |
|---|---|
| Architecture overview | High-level description of how components interact |
| Technology stack and key decisions | Explicit technology choices with rationales |
| Implementation sequence | Logical order of implementation steps, from setup to completion |
| Constitution verification | Explicit confirmation that proposed solutions comply with constitution principles |
| Assumptions and open questions | Documented assumptions and unresolved decisions before coding starts |
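A hypothetical plan.md excerpt for the same document upload feature shows how the sections translate the spec into technical decisions. The endpoint name and container strategy are illustrative; the Azure choices echo the constitution example earlier:

```markdown
# Implementation Plan: Document Upload

## Technology stack and key decisions
- Storage: Azure Blob Storage (constitution mandate); container strategy TBD.
- API: .NET back end exposing a POST /api/documents endpoint.

## Constitution verification
- Storage connection secrets resolved via Azure Key Vault.
- An audit log entry is written for every upload.

## Assumptions and open questions
- Assumes virus scanning is handled by an existing platform service. (Open)
```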
Generating a plan with /speckit.plan
Run /speckit.plan in GitHub Copilot Chat. Before invoking it, provide context
about your existing stack — frameworks, infrastructure, authentication setup — so Copilot
produces a plan that fits your environment rather than a greenfield solution. After
generation, review the output against three questions:
- Does every spec requirement map to an implementation approach in the plan?
- Do all technology choices align with the constitution and organizational standards?
- Are assumptions and open questions documented so stakeholders can address them before implementation starts?
Common planning pitfalls
- Skipping planning entirely. Jumping straight from spec to code increases the risk of architectural mistakes that are costly to undo.
- Accepting the first draft without review. AI-generated plans are starting points, not final designs.
- Over-constraining implementation. The plan should guide, not dictate every line; leave tactical decisions to developers.
- Ignoring constitution conflicts. Address them immediately in the plan rather than discovering them during code review.
- Letting the plan go stale. When implementation reveals a better approach, update plan.md so it remains a useful reference.
Generating Tasks and Implementing Code
The tasks file (tasks.md) converts architectural decisions in
plan.md into specific, actionable work items. Each task is the smallest unit
of work that can be implemented, tested, and verified independently. Well-scoped tasks
share four characteristics:
- Actionable — clearly states what needs to be done.
- Testable — completion can be verified objectively.
- Independent — can be completed without waiting for unrelated work.
- Time-bounded — completable in hours to a day, not weeks.
Generating tasks with /speckit.tasks
Run /speckit.tasks in GitHub Copilot Chat. The AI reads both
spec.md and plan.md and produces a numbered, phase-organized task
list. A typical phase structure for a complex feature looks like:
- Phase 1 — Foundation: database schema, configuration, service class scaffolding.
- Phase 2 — Core functionality: API endpoints, storage integration, metadata persistence.
- Phase 3 — Front end: UI components, client-side validation, progress indicators.
- Phase 4 — Security: authentication checks, server-side validation, audit logging.
- Phase 5 — Testing and docs: unit tests, integration tests, OpenAPI documentation.
Each phase creates a natural milestone: after Phase 2 the back end works; after Phase 3 users can interact with it; after Phase 4 it is production-ready. After generation, verify that every plan element maps to at least one task and that sequencing is logical (schema before API, API before front end).
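A hypothetical tasks.md excerpt illustrates the shape of the output: numbered, checkbox-tracked tasks grouped by phase. Task IDs, table columns, and endpoint names are illustrative:

```markdown
## Phase 1: Foundation
- [ ] T001 Create Documents table (Id, FileName, SizeBytes, UploadedBy, UploadedAtUtc)
- [ ] T002 Bind blob storage configuration to Key Vault secrets

## Phase 2: Core functionality
- [ ] T003 Implement POST /api/documents endpoint with 50 MB limit
- [ ] T004 Persist document metadata after successful blob upload
```

Each line meets the four characteristics above: it names a concrete action, has an objectively verifiable result, and is small enough to finish in hours.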
Implementing with /speckit.implement
Run /speckit.implement with a specific task number, a range, or a description
taken from tasks.md. The AI works through tasks sequentially, referencing
spec.md, plan.md, and tasks.md to keep code aligned
with the overall architecture:
```text
/speckit.implement Implement the MVP first strategy (Tasks: T001 - T027)
```
After each batch, verify the results before proceeding — run the application and tests to confirm the task objective is met. If the AI flags ambiguity or needs confirmation to build or run, respond promptly to keep the session moving.
Managing tasks during implementation
- Scope growth. When a task reveals unexpected complexity, break it into smaller tasks and update tasks.md before proceeding.
- Blocked tasks. Mark them explicitly with the reason and a tracking reference so they are not forgotten.
- Changing priorities. Reorder, add, or defer tasks in tasks.md as business needs evolve.
- Discovered ambiguity. Pause, trace back to spec.md and plan.md for the original intent, update the task description, then continue.
Setting Up the Environment
A working Spec Kit environment requires five components: the Specify CLI,
the uv package manager used to install it, a
code editor with AI integration (VS Code + GitHub Copilot is the primary
configuration), a Git repository, and whatever
programming runtime your tech stack requires.
Prerequisites
| Requirement | Notes |
|---|---|
| Python 3.11+ | Required by the specify-cli package |
| uv | Fast Python package manager; used to install and manage specify-cli |
| Git | Required for branch-based feature tracking |
| Supported AI agent | GitHub Copilot in VS Code is the primary target; 20+ agents are supported |
Installation options
Option 1: Persistent installation (recommended) — installs once and makes the specify command available system-wide. Pin a stable release tag (see releases for the latest):
```shell
# Persistent install — replace vX.Y.Z with the latest release tag
uv tool install specify-cli --from git+https://github.com/github/spec-kit.git@vX.Y.Z

# Or install latest from main (may include unreleased changes)
uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
```
Option 2: One-time usage (no install) — runs directly without installing:
```shell
uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <PROJECT_NAME>
```
Option 3: Enterprise / air-gapped — if your environment blocks access to PyPI or GitHub, use pip download to create portable OS-specific wheel bundles on a connected machine. See the Enterprise Installation guide for step-by-step instructions.
Initializing a project
Run specify init in your project directory to scaffold the spec-driven structure:
```shell
# Initialize a new project with GitHub Copilot
specify init my-project --ai copilot

# Initialize in the current directory
specify init --here --ai copilot

# Windows — generate PowerShell scripts instead of bash
specify init --here --ai copilot --script ps

# Codex CLI / Antigravity — requires skills mode
specify init my-project --ai codex --ai-skills
```
This generates the .github/prompts/ directory containing prompt templates for every /speckit.* command, artifact template files, and any agent-specific configuration.
Key init flags
| Flag | Purpose |
|---|---|
| --ai <agent> | Select the AI agent (copilot, claude, gemini, cursor-agent, windsurf, codex, etc.) |
| --ai-skills | Install templates as agent skills (required for Codex CLI and Antigravity) |
| --script ps | Generate PowerShell scripts instead of bash (Windows / cross-platform) |
| --here / --force | Initialize into an existing directory; --force skips confirmation |
| --no-git | Skip Git repository initialization |
| --github-token | Supply a token for corporate GitHub environments |
| --skip-tls | Bypass TLS verification behind a proxy (not recommended for production) |
| --branch-numbering | sequential (default) or timestamp — useful for distributed teams to avoid numbering conflicts |
| --debug | Enable verbose output for troubleshooting |
After init, verify your environment is ready:
```shell
specify check
```
This confirms Git is configured and the selected AI agent is accessible before you start.
- Corporate proxies: configure proxy settings and custom certificate authorities before installing via uv.
- Extension approval: VS Code extension installation may require a security review; plan accordingly.
- Azure DevOps / GitHub Enterprise Server: artifacts integrate with Azure Repos and pull requests; tasks can reference Azure Boards work items via the Azure DevOps community extension.
Summary
Without a shared source of truth, requirements drift and AI-generated code misses the mark. SDD keeps intent and implementation aligned from the first line.
GitHub Spec Kit structures the four SDD phases — Specify, Plan, Tasks, Implement — into slash commands backed by versioned markdown artifacts that live in your repository.
The result is production-ready code that aligns with your requirements, organizational standards, and security constraints, generated by GitHub Copilot from a clear, constitution-governed specification.
References
- GitHub Spec Kit — official repository
- MS Learn: Implement spec-driven development using GitHub Spec Kit
- MS Learn: Generate a clear, precise, and effective spec
- MS Learn: Create a detailed technical plan
- MS Learn: Generate tasks and implement code
- MS Learn: Examine the GitHub Spec Kit development environment
- Spec Kit: Enterprise and air-gapped installation guide