## Description

### Problem Statement

#### Background
In the current Spec-Kit workflow, the typical structure generated during `specify` and `plan` includes multiple specification files such as:
- Feature specification
- Architecture / technical plan
- Implementation tasks
This works well for human-driven development, where developers manually implement the code, run tests, and decide whether the implementation satisfies the requirements.
However, in AI-driven development workflows, especially those using autonomous coding agents, there is an important missing component: a structured verification specification that defines how the implementation should be validated.
#### Problem
Without a machine-readable or structured verification specification, AI agents face several challenges:

- **No clear completion criteria.** The agent cannot reliably determine when the task is finished.
- **No standardized validation process.** Validation steps (tests, linting, build, etc.) are implicit rather than explicitly defined.
- **Weak support for iterative agent loops.** Modern AI coding workflows rely on iterative execution loops (e.g., implement → verify → fix → verify) that repeat until success. Without a verification spec, this loop becomes unreliable.
This becomes particularly important for agent-based workflows such as autonomous coding systems, where the agent needs deterministic rules for validating progress.
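The implement → verify → fix loop can be sketched in a few lines. This is a minimal illustration, not Spec-Kit code: `implement` and `fix` are hypothetical stand-ins for the agent's actions, and the verification commands would normally be read from the proposed spec file rather than passed in directly.

```python
import subprocess

def run_step(cmd):
    """Run one verification command; True iff it exits with code 0."""
    return subprocess.run(cmd).returncode == 0

def agent_loop(implement, fix, verify_steps, max_rounds=5):
    """implement -> verify -> fix -> verify, until every step passes.

    `implement` and `fix` stand in for the agent's code-writing actions;
    `verify_steps` is a list of argv-style commands that would come from
    the proposed verification spec.
    """
    implement()
    for _ in range(max_rounds):
        failed = [cmd for cmd in verify_steps if not run_step(cmd)]
        if not failed:
            return True       # deterministic success condition
        fix(failed)           # agent repairs only the failing checks
    return False              # give up after max_rounds attempts
```

The point is that the loop's exit condition is defined entirely by the spec's checks, not by the agent's own judgment of "done".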
#### Proposal

Introduce an additional spec file, for example `verification-spec.md` or `acceptance-spec.md`. This file would explicitly define how the implementation should be verified.
Example structure:

```
/specs
  feature-spec.md
  plan.md
  tasks.md
  verification-spec.md
```
#### Example Verification Spec

Example content for `verification-spec.md`:

```markdown
# Verification Specification

## Build

The project must build successfully:

    make build

## Unit Tests

All tests must pass:

    go test ./...

## Lint

The code must pass lint checks:

    golangci-lint run

## Integration Tests

Start the service:

    docker compose up

Run API tests:

    scripts/test_api.sh

Expected result:

- HTTP 200 responses
- JSON schema matches specification

## Performance (optional)

Benchmark must reach:

    TPS >= 5000

Run:

    scripts/benchmark.sh
```
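An agent could treat each section of such a spec as a pass/fail check. The sketch below is illustrative only: the `VerificationStep` model and the hard-coded step list are assumptions for this example, not Spec-Kit structures, and a real agent would parse the steps out of `verification-spec.md` instead.

```python
import subprocess
from dataclasses import dataclass

@dataclass
class VerificationStep:
    """One check from the spec: a section name plus a shell command."""
    section: str
    command: str
    required: bool = True   # optional sections (e.g. Performance) may fail

# Hypothetical in-memory form of the example spec above.
STEPS = [
    VerificationStep("Build", "make build"),
    VerificationStep("Unit Tests", "go test ./..."),
    VerificationStep("Lint", "golangci-lint run"),
    VerificationStep("Performance", "scripts/benchmark.sh", required=False),
]

def verify(steps):
    """Run each step; return the sections whose required checks failed."""
    failures = []
    for step in steps:
        result = subprocess.run(step.command, shell=True)
        if result.returncode != 0 and step.required:
            failures.append(step.section)
    return failures
```

An empty return value is the agent's signal that the task is complete; a non-empty one tells it exactly which sections to fix next.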
#### Benefits
Adding a verification spec would significantly improve Spec-Kit's compatibility with AI-assisted and autonomous development workflows, including:
- AI coding agents
- iterative execution loops
- automated implementation pipelines
It enables a clear workflow like:
`spec → plan → tasks → implement → verify → fix → verify`
This kind of loop is increasingly common in modern AI coding environments and helps agents reliably determine success conditions.
#### Optional Future Direction

A more advanced version could support a machine-executable verification format, such as:

```yaml
verify:
  - cmd: make build
  - cmd: go test ./...
  - cmd: golangci-lint run
```
This would allow agents to directly execute the verification steps without manual interpretation.
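A minimal executor for this format is short. The sketch below hand-parses only the `- cmd:` entries shown above to stay dependency-free; a real implementation would presumably use a YAML parser and support more fields (working directory, environment, timeouts).

```python
import shlex
import subprocess

def parse_verify_spec(text):
    """Extract '- cmd: ...' entries into argv-style command lists."""
    cmds = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("- cmd:"):
            # shlex respects shell-style quoting in the command string
            cmds.append(shlex.split(line[len("- cmd:"):].strip()))
    return cmds

def run_verification(text):
    """Execute each step in order; stop at the first failure.

    Returns (True, None) on success, or (False, failed_cmd) so the
    agent knows exactly which step to fix.
    """
    for cmd in parse_verify_spec(text):
        if subprocess.run(cmd).returncode != 0:
            return False, cmd
    return True, None
```

Failing fast on the first non-zero exit keeps the agent's feedback loop tight: each fix iteration targets one concrete failing command.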
#### Summary

Adding a verification spec would:
- provide explicit acceptance criteria
- improve automation compatibility
- support modern AI-driven development loops
This could make Spec-Kit significantly more powerful for agent-based development workflows.
Would love to hear thoughts from maintainers and the community.
### Proposed Solution

Introduce a `verification-spec.md` file, as described above.

### Alternatives Considered

No response

### Component

Specify CLI (initialization, commands)

### AI Agent (if applicable)

None
### Use Cases

No response

### Acceptance Criteria

No response

### Additional Context

No response