Replies: 19 comments 41 replies
-
You should not change the master spec; if you create a feature, you just add a spec for the new feature. If you change existing behavior, you state in the new spec what you changed. Ex. spec1 - Todo list with local storage
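For illustration, here is roughly how that additive layout could look on disk, assuming spec-kit's numbered specs/ folders (all names are hypothetical):

```
specs/
├── 001-todo-local-storage/    # spec1: todo list with local storage
│   ├── spec.md
│   ├── plan.md
│   └── tasks.md
└── 002-todo-remote-sync/      # spec2: states that it changes 001's storage behavior
    └── spec.md
```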
-
Thank you @simongartz for your explanation. What I personally have a hard time understanding is that one git branch is created for each spec. Based on your example: spec1 is the todo list with local storage, so the high-level tasks would be something like: create the project structure, create tests, implement the todo list with local storage.
-
The way I am thinking about it, for an existing mono repo project, is to have a spec project per broad feature; each feature will then have a trail of specs that define its truth. I'm not sure yet whether speckit in its current form supports that, because I'd essentially want many spec projects at the src root (one for each broad feature). Not sure if this is "correct" or not, but it's currently the only way I can think of to make it work for a large existing brownfield project.
-
What is missing is a snapshot of the project specification in the memory folder, alongside the constitution. This master spec should be updated when a feature is complete; I think you could even ask the AI to do that and review the output. This master spec should then be the starting point for the next feature.
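As a rough sketch of what that snapshot step could be, assuming spec-kit's specs/&lt;feature&gt;/spec.md layout and a hypothetical memory/master-spec.md target (an AI-assisted version would replace the naive concatenation with a real merge you then review):

```python
from pathlib import Path

SPECS_DIR = Path("specs")                  # spec-kit's per-feature spec folders
SNAPSHOT = Path("memory/master-spec.md")   # hypothetical snapshot location

def rebuild_snapshot() -> None:
    """Concatenate every feature spec into a single master-spec snapshot."""
    sections = []
    for spec in sorted(SPECS_DIR.glob("*/spec.md")):
        feature = spec.parent.name         # e.g. "001-todo-local-storage"
        sections.append(f"## {feature}\n\n{spec.read_text()}")
    SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
    SNAPSHOT.write_text("# Master Spec (generated)\n\n" + "\n\n".join(sections))

if __name__ == "__main__":
    rebuild_snapshot()
```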
-
I agree, there should always be a single, consolidated specification that reflects the most current features. Without this, it's inevitable that, over time, the model will accidentally reference outdated information. Whenever a feature is updated, the root specification should be updated as well. This ensures there's always a reliable, up-to-date source. If the specification grows too large, there should also be a logical way to reorganize or structure it for easier navigation.
-
I think it would have to be a combination of reviewing the specs and code together. The specs will give the motivation, and the code will give what has actually been implemented.

On Thu, 18 Sept 2025, 09:35, euroblaze wrote:

> 100% agree with the master-spec idea. But I wonder whether the LLM is effectively capable of considering all truth, requirements, and situational facts (from existing code), and of producing new code only accordingly.
-
Hello! I'm sorry I'm late to the party - I'd like to chime in with some of my own thoughts around this issue.

One way of thinking about all of this is that the spec or specs are really the source code for the implementation. Something Sean Grove said (paraphrasing) at the AI Engineer Summit was that we spend all this time versioning the source code, which today is generated by agents, yet we don't really do any version-control management of the specs, updating them and ensuring they stay consistent as the source of truth, which seems odd. It would be like spending all of our time carefully versioning the binaries produced by our compilers and throwing away the source code.

So what becomes really important is not so much whether there is a single master spec, a set of feature specs, or anything else. It's that those things must be kept in sync with the source code, and that requires a certain amount of discipline to make it effective. I don't think that any reasonably sized project could have just one spec; I think some kind of natural hierarchy evolves in your projects.

What I have taken to doing recently is writing a unified spec per feature: the motivations as well as the implementation details, the research, and other things are captured inside a single document that I can iterate on with the AI. But then I spend cycles in the agentic loop to close the gap between the code that was actually written and the spec, because these things naturally diverge; it's pretty rare for the agent to one-shot the code and call it a day. Oftentimes there are follow-ups, right? Because you underspecified something, there's a bug, etc. And those changes ultimately become material. So unfortunately, right now it's on you to fold those changes and updates back into the spec, or to ask the agent to do it.
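For example, the kind of loop-closing prompt I mean might look something like this (the path is hypothetical):

```
Compare specs/003-auth/spec.md against the code changed on this branch.
List every behavior that exists in the code but is missing from or
contradicted by the spec, then update the spec to match what was built.
```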
-
I'm glad I saw this discussion. I started to evaluate Spec-Kit this week, and I would like to understand the vision for how the current model will scale over time in projects with multiple iterations. I've been researching and evaluating different Spec Driven Development tools and leveraging them to build greenfield applications. The way I'm using Spec Driven Development so far, which is working for me, is:
If we focus on the feature specs: the way I'm using them is very different from how SpecKit is designed to work. The model I'm using works extremely well; however, it does have limitations for teams who are developing multiple feature branches in parallel. That's not an issue for me at the moment, though. The way SpecKit handles features, by aligning them with feature branches, is great for incremental work, but it seems to go against Spec Driven Development principles (at least based on my understanding). I fully agree with @simongartz; the middle ground would be a command to "compact" multiple specs into "target state specs". A very simple use case: The question then is: should the flow be:
I prefer the first option, as it gives more control to the developer and the ability to review consistency in near real time, rather than compacting everything at the end. I prefer to interact with the target state and review the incremental plan, rather than the other way around. Keen to get others' views on this.
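To make the "compact" idea concrete, here is a minimal sketch of how such a command might assemble its input for the agent; the /compact command and the file layout are hypothetical, and the actual merging would be done by the model:

```python
from pathlib import Path

def build_compact_prompt(feature_dirs: list[Path]) -> str:
    """Assemble a prompt asking the agent to merge per-feature delta specs
    into one target-state spec (later specs win on conflict)."""
    parts = [
        "Merge the following feature specs, oldest first, into a single "
        "target-state spec. Where they conflict, the later spec wins.\n"
    ]
    for d in sorted(feature_dirs):
        parts.append(f"--- {d.name} ---\n{(d / 'spec.md').read_text()}")
    return "\n".join(parts)
```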
-
I'm a couple of weeks into seriously trying to use this on a prototype project. I expected there would be a repo-level spec, with feature specs subordinate to it. For now, the first feature is essentially the project spec, if I understand it correctly. I think that the specs (beyond the repo-spec) might follow common branching scenarios:
Each of these developer activities really needs a spec or a mini-spec, using a different template.
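As a sketch, the template selection could be as simple as a mapping like the one below; all of these template files are hypothetical, and none exist in spec-kit today:

```python
# Hypothetical mapping from branching scenario to spec template.
TEMPLATES = {
    "feature":  "templates/spec-template.md",       # full spec, as today
    "bugfix":   "templates/mini-spec-bugfix.md",    # symptom, expected behavior, scope
    "refactor": "templates/mini-spec-refactor.md",  # invariants that must not change
    "hotfix":   "templates/mini-spec-hotfix.md",    # minimal: defect + fix boundary
}
```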
-
I have been exploring a related angle, specifically the OpenAPI side of evolving specs (fragmentation across feature folders and keeping a unified root spec for frontend–backend alignment). I have shared details in #443, along with links to my write-up and repo. Would love feedback from anyone who's thought about this.
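For anyone curious what the mechanical part of that merge could look like, here is a naive sketch (the file layout and last-wins conflict policy are assumptions of mine, not necessarily what #443 proposes):

```python
import yaml                     # PyYAML, assumed available
from pathlib import Path

def merge_openapi(fragments: list[Path], root: Path) -> None:
    """Fold per-feature OpenAPI fragments into the root spec.
    Naive on purpose: the last fragment wins on any conflict."""
    spec = yaml.safe_load(root.read_text())
    for frag in fragments:
        delta = yaml.safe_load(frag.read_text())
        spec.setdefault("paths", {}).update(delta.get("paths", {}))
        spec.setdefault("components", {}).setdefault("schemas", {}).update(
            delta.get("components", {}).get("schemas", {})
        )
    root.write_text(yaml.safe_dump(spec, sort_keys=False))
```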
-
I understand the concept of SDD and that each feature or change is captured in a new spec file.
-
I would think it would be beneficial to know the most up-to-date requirements for a particular feature. We run into this all the time with QA: QA will log a bug based on an outdated AC (acceptance criterion).
-
If you're building a full end-to-end application, it's not entirely clear which structure the authors intend. The discussion seems to hinge on whether the whole system should live under a single spec subdirectory, or whether it should be broken down into separate capabilities, each with its own plan, spec, and data model.

Option A – single application spec:
Option B – capability-based specs:

I'd love to hear from the maintainers or other users which pattern aligns better with SpecKit's intended workflow for complex, multi-feature systems.
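To make the two options concrete, here is how I picture the layouts (directory names are hypothetical):

```
# Option A – single application spec
specs/
└── 001-application/
    ├── spec.md
    ├── plan.md
    └── data-model.md

# Option B – capability-based specs
specs/
├── 001-user-accounts/
│   ├── spec.md
│   ├── plan.md
│   └── data-model.md
└── 002-billing/
    ├── spec.md
    ├── plan.md
    └── data-model.md
```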
-
It would be great if spec-kit could support the following workflow. What I usually do is add tests afterward for already-confirmed features, to support future iterations and refactoring. Here's my proposed approach: allow non-constitutional specification documents (such as the spec, plan, and tasks documents) to be adjusted during the manual end-to-end testing and acceptance phase, based on reviewers' feedback.

## Relationship Between Acceptance and Implementation
### Acceptance-Driven Specification Iteration
During the manual end-to-end testing and acceptance stage after feature implementation, specification documents (spec, plan, tasks) are **allowed and encouraged** to be adjusted, as guidance, based on feedback from acceptance reviewers, enabling agile iteration of requirements.
**Core Principles**:
- **Alignment between idea and implementation** is the only acceptance criterion:
Functional code is considered production-ready only when it meets the actual needs and expectations of the acceptance reviewers.
- **Specifications guide the code**:
Code implementation and iteration must strictly follow the specification documents and must not deviate from them.
- **Acceptance feedback guides the specifications**:
Specification documents must be revised according to the reviewers’ feedback to accurately reflect real business needs.
**Workflow**:
1. **Initial Specification Drafting**: Create `spec.md`, `plan.md`, and `tasks.md` based on the initial understanding of requirements.
2. **Implementation**: Develop strictly according to the specification documents without deviation.
3. **End-to-End Acceptance**: Reviewers conduct manual testing and provide feedback.
4. **Specification Iteration**: Update the specification documents (`spec/plan/tasks`) according to the feedback, clearly recording the reasons for each change.
5. **Code Adjustment**: Adjust code implementation based on the updated specifications.
6. **Repeat Steps 3–5** until the reviewers confirm the feature meets expectations.
7. **Acceptance Approval**: Once idea and implementation are aligned, the feature is ready for production release.
**Documentation Revision Rules**:
- Each specification revision must include a change log at the top of the document (date, reason, and summary of changes).
- The version number of the specification should increment (by date or sequence).
- Acceptance feedback must serve as the explicit basis for each revision and be referenced within the document.
- Commit messages for code changes must reference the corresponding specification document version.
**Rationale**:
Traditional “frozen specification” approaches assume requirements can be fully defined before development.
In reality, users can only accurately evaluate whether a feature meets expectations after seeing it in action.
By allowing iterative updates to specification documents during the acceptance phase, we establish a **two-way feedback loop**:
code faithfully implements specifications, and specifications faithfully reflect acceptance needs, ensuring the final deliverable truly satisfies user expectations.

```mermaid
flowchart TD
    A[Initial Spec] --> B[Implementation]
    B --> C[Manual E2E Acceptance]
    C --> D{Accepted?}
    D -- No --> E[Revise Spec per Feedback]
    E --> F[Adjust Code]
    F --> C
    D -- Yes --> G[Final Acceptance]
    G --> H[Production Release]
```
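A sketch of the change-log header that the revision rules above imply; the columns and format are just a suggestion:

```markdown
<!-- Change Log -->
| Date       | Version | Basis (acceptance feedback)  | Summary of changes |
|------------|---------|------------------------------|--------------------|
| 2025-10-08 | v2      | Reviewer round 2, comment #3 | Relaxed export AC  |
| 2025-10-01 | v1      | Initial draft                | n/a                |
```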
-
Ok, so bear with me on this, because I had a slightly different thought that's a touch outside of the existing implemented flows, possibly a future-exploration sort of thought. What if this "last step/snapshot" in question isn't a true spec at this point at all? Instead, turn the entire concept into a customized "suite" of gen-AI documentation. Then your truth becomes a reflection of the spec: an inverted flow, from the code itself back to the truth of the system in question. IMO, that's close enough to count as a master spec (assuming you've thrown in diagrams) while also serving several different purposes overall:
Cons: brand-new design/setup/prompt, longer to research/implement, potential for misalignment with brand, higher complexity, some models can't handle this sort of prompt in one go (requires Claude/GPT-5), it will take the AI longer to implement, customization options (maybe)?
-
I don't want to make this look like a low-effort post, but to consolidate the main points of the discussion so far, I've made a ChatGPT summary.

Summary

Here's a summary + analysis of the key points and trade-offs from the "Evolving specs" discussion in the Spec Kit repo (Discussion #152).

Context & core problem

Major viewpoints & themes
Here are the main positions, trade-offs, and suggestions that emerged:

Recommended practices / emerging consensus (as per the discussion)
From reading through the back-and-forth, these seem to be the pragmatic suggestions people gravitated toward:

Challenges & open questions
My point of view: not having a source of truth is what leads to the "slog". If spec-driven development is about giving context to LLMs and fast team sync, then having a consolidated spec that everyone on the team can reference definitely helps. I'm in data science, where we have data teams and ML teams that improve on the data; I've worked as a backend engineer too. It seems reasonable to have a "microservice"-like architecture: a spec per microservice. If the project is a microservice where everything fits into one, then let it simply be a single microservice with a single spec. In addition, recent services like Bugbot from Cursor or Lifeboat from Windsurf work well because they don't need huge contexts. This also fits well into CI/CD: make spec-driven-development merges every [Tuesday] before the weekly meetings. Perhaps it also makes sense to make things togglable? E.g., during project definition, "choose how you want to evolve the spec", and, using scripts, adjust the prompts for what the LLM's tasks are. It's well known that startups develop very differently from enterprises. Edit, after a week of using spec-kit: I think this is done fine by running Perhaps
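If it were togglable, it could be as small as a project-level setting; this is purely hypothetical, spec-kit has no such option today:

```yaml
# Hypothetical spec-kit project setting (does not exist today)
spec_evolution: consolidated   # or: per-feature | hybrid
compact_cadence: weekly        # when feature specs get folded into the master spec
```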
-
I see that even having spec-first development in small feature cycles is already beneficial, and from my exploration this is where spec-kit currently fits best. We create specs for each "system feature/change", and if in the future we need to evolve a feature, it will be another spec-kit feature with another spec->plan->tasks->implementation cycle. That is already a great step. But it seems to me that spec-kit is not trying to solve the "Master PRD" or a global spec doc (yet); looking at how the spec-kit flow works, I'm not sure if this is the end goal. @localden, it would be great to have your input on this topic.
-
My experience with spec-kit on a real greenfield project is that it works much better when creating a spec per feature of the app/service. The artefacts produced with the spec-per-feature approach are more manageable to review and, if needed, to update during implementation. I tried to cover all app capabilities in one spec, but it was really huge, and a lot of effort was needed to review it for consistency and gaps, even with the /clarify command available. I can't imagine how hard it would be to change it if needed.
-
I think we should also check and borrow ideas from how OpenSpec does it. I think it has two folders: it archives changes after completing a spec and maybe updates the master spec. OpenSpec vs Speckit
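Roughly, as I understand OpenSpec's layout (worth verifying against their repo before borrowing):

```
openspec/
├── specs/                 # current source of truth, updated as changes complete
└── changes/
    ├── <active-change>/   # proposed delta: why, what changes, spec updates
    └── archive/           # completed changes moved here after specs/ is updated
```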
-
Using the spec kit, I create a spec and implement it.
What is the process when I now have a change request? Spec-driven development would seem to indicate that the spec is the source of truth for the system, but spec kit leads me to create a new spec with the variation. That doesn't seem to be in keeping with spec-driven development, since now, to know what the system does, I need to read both specs.
Or should I be getting the AI to update the master spec as well?