Vibe coding is now part of daily engineering work: you describe intent, your assistant generates code, you iterate quickly, and you ship. The speed is real.
But there’s a predictable failure mode: unstructured acceleration.
When a repo has weak boundaries, AI‑assisted changes tend to:
- land in random places (“wherever it compiled”),
- mix business rules with vendor SDK calls,
- duplicate patterns (three config styles, two data access styles),
- hard‑code provider choices as strings,
- become harder to test as the codebase grows.
It works—until it doesn’t. Then every new feature costs more than it should, and the assistant becomes less reliable because the codebase has no consistent “shape”.
This post is a practical playbook for getting better results from vibe coding—without slowing down.
You want a codebase where:
- AI can add features without guessing where code should go,
- swapping databases/services/providers is a configuration change, not a rewrite,
- tests are easy to generate and remain stable,
- refactors are localized because dependencies are controlled.
That outcome comes from structure + contracts + explicit choices + repo rules.
Your stack will change—DBs, queues, observability, AI providers, auth, hosting. If business logic imports those choices directly, your “core” becomes fragile.
A simple rule helps:
Business logic should not know which database, SDK, or vendor you picked.
Keep business rules stable and treat infrastructure as replaceable edges.
Most vibe‑coding mistakes happen because the assistant doesn’t know where a change belongs.
Use a predictable layout with clear boundaries:
- Domain: business concepts and rules (no SDKs)
- Application: use cases and contracts (interfaces)
- Infrastructure: concrete implementations (DBs, vendors, SDKs)
- Presentation: API/CLI/UI entrypoints and adapters
And enforce one rule:
Dependencies point inward.
Infrastructure depends on application/domain, not the other way around.
- Domain = “what the business is”
- Application = “what the system does”
- Infrastructure = “how it connects to the world”
- Presentation = “how users call it”
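Concretely, the layout can be as small as four packages plus an entrypoint (names illustrative; adapt to your stack):

```
project/
├── domain/            # entities and business rules (no SDK imports)
├── application/       # use cases + interfaces (contracts)
├── infrastructure/    # DB adapters, vendor clients, observability
├── presentation/      # HTTP handlers, CLI commands
└── run.py             # composition root: config + wiring
```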
If you ask an assistant “add storage” or “add DB calls”, the shortest path is often “do it inline”. That’s how business logic gets coupled to a vendor SDK.
Instead:
Define an interface (contract) in the application layer first.
Then implement it in infrastructure.
Examples of contracts:
- `UserRepository`
- `AlertStore`
- `NotificationClient`
- `ObjectStorage`
- `LLMProvider`
This forces a clean separation and gives the assistant a clear target: implement this contract here.
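As a minimal sketch (assuming a `Note` entity lives in `domain/note.py`), a contract can be a small abstract class:

```python
# application/interfaces/note_repository.py
from abc import ABC, abstractmethod

from domain.note import Note

class NoteRepository(ABC):
    """Contract the use cases depend on; concrete DBs live in infrastructure."""

    @abstractmethod
    def add(self, note: Note) -> None: ...

    @abstractmethod
    def list_all(self) -> list[Note]: ...
```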
Magic strings drift over time: `"postgres"`, `"pg"`, `"postgresql"`, and `"openai"` vs `"OpenAI"`.
Enums give you a finite, validated set of choices: `DatabaseProvider.SQLITE`, `DatabaseProvider.MEMORY`, `LLMProvider.OPENAI`.
In practice, enums:
- prevent config ambiguity,
- improve code generation quality (the assistant can see valid options),
- make provider switches safer and more reviewable.
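A sketch of what that might look like (module path is illustrative):

```python
# application/config.py (path illustrative)
from enum import Enum

class DatabaseProvider(Enum):
    SQLITE = "sqlite"
    MEMORY = "memory"

def parse_db_provider(raw: str) -> DatabaseProvider:
    """Fail loudly on unknown values instead of silently accepting any string."""
    try:
        return DatabaseProvider(raw.strip().lower())
    except ValueError:
        valid = ", ".join(p.value for p in DatabaseProvider)
        raise ValueError(f"unknown DB_PROVIDER {raw!r}; expected one of: {valid}") from None
```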
Humans learn conventions from context. Assistants don’t—unless you provide them.
`agent.md` is a short, repo-local instruction file that tells any assistant:
- architecture boundaries and “dependency direction”,
- where to add new code,
- naming conventions,
- what not to do (no vendor imports in domain/application),
- testing expectations,
- which examples to copy.
If you adopt only one habit for long‑term vibe coding quality, make it this:
Every repo should include `agent.md`.
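A skeleton to start from (contents illustrative; tailor it to your repo):

```markdown
# agent.md

## Architecture
- Layers: domain, application, infrastructure, presentation. Dependencies point inward.
- Never import vendor SDKs in `domain/` or `application/`.

## Adding code
- New use case: `application/use_cases/`, depending only on interfaces.
- New adapter: `infrastructure/`, implementing an existing interface.
- Provider selection: extend the enum and the factory; no magic strings.

## Testing
- Unit tests mock interfaces; integration tests cover adapters.
```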
When structure is present, a provider change looks like this:
- Add/extend an enum (e.g., `DatabaseProvider`)
- Define/extend an interface (e.g., `NoteRepository`)
- Implement the interface (e.g., `SqliteNoteRepository`)
- Wire the selection in one place (composition root / factory; sketched after this list)
- Keep use cases unchanged (they depend on the interface)
- Add tests:
  - unit tests mock interfaces (fast, stable)
  - integration tests validate adapters (focused)
This is what makes “fast changes” stay cheap over time.
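The wiring step might look like this (a sketch; module paths and class names are assumptions following the naming above):

```python
# run.py (composition root; names and paths are illustrative)
import os

from application.config import DatabaseProvider  # the enum sketched earlier
from application.interfaces.note_repository import NoteRepository
from infrastructure.persistence.memory_repo import InMemoryNoteRepository  # assumed module
from infrastructure.persistence.sqlite_repo import SqliteNoteRepository

def build_note_repository() -> NoteRepository:
    # The only place that reads config and picks a concrete adapter.
    provider = DatabaseProvider(os.environ.get("DB_PROVIDER", "memory"))
    if provider is DatabaseProvider.SQLITE:
        return SqliteNoteRepository("notes.db")
    return InMemoryNoteRepository()
```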
A handler imports a vendor SDK directly and mixes concerns:
```python
# presentation/http_handlers.py (or worse: everywhere)
import sqlite3

def create_note(text: str) -> None:
    # business rule + persistence detail mixed together
    if not text.strip():
        raise ValueError("text is required")
    conn = sqlite3.connect("notes.db")
    # even the schema leaks into the handler
    conn.execute("CREATE TABLE IF NOT EXISTS notes (text TEXT NOT NULL)")
    conn.execute("INSERT INTO notes(text) VALUES (?)", (text,))
    conn.commit()
    conn.close()
```

Problems:
- persistence is welded into your business path,
- hard to test without a real DB,
- switching DB means touching business logic everywhere.
Use case contains business logic + calls a contract:
```python
# application/use_cases/create_note.py
from domain.note import Note
from application.interfaces.note_repository import NoteRepository

class CreateNote:
    def __init__(self, repo: NoteRepository):
        self.repo = repo

    def execute(self, text: str) -> Note:
        if not text.strip():
            raise ValueError("text is required")
        note = Note.create(text=text)
        self.repo.add(note)
        return note
```

DB choice lives in infrastructure:
```python
# infrastructure/persistence/sqlite_repo.py
import sqlite3

from application.interfaces.note_repository import NoteRepository
from domain.note import Note
```

Now:
- use cases are easy to unit test (mock the repo),
- infrastructure can change without rewriting business logic,
- assistants have a clear pattern to follow.
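For completeness, the adapter itself might look like the sketch below, continuing `sqlite_repo.py`. It assumes `Note` exposes a `text` attribute and a `Note.create` factory, matching the use case above; the template's actual implementation may differ.

```python
# infrastructure/persistence/sqlite_repo.py (continued)
class SqliteNoteRepository(NoteRepository):
    def __init__(self, path: str = "notes.db") -> None:
        self._path = path
        with sqlite3.connect(self._path) as conn:  # the with-block commits on success
            conn.execute("CREATE TABLE IF NOT EXISTS notes (text TEXT NOT NULL)")

    def add(self, note: Note) -> None:
        # SQL lives here only; use cases see nothing but the NoteRepository contract
        with sqlite3.connect(self._path) as conn:
            conn.execute("INSERT INTO notes(text) VALUES (?)", (note.text,))

    def list_all(self) -> list[Note]:
        with sqlite3.connect(self._path) as conn:
            rows = conn.execute("SELECT text FROM notes").fetchall()
        # re-hydrate through the same factory the use case relies on (assumed API)
        return [Note.create(text=row[0]) for row in rows]
```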
- Don’t import vendor SDKs in domain/application
- Add interfaces before implementations
- Use enums for provider selection (no magic strings)
- Keep use cases pure (orchestration + business rules)
- Infrastructure implements interfaces
- Presentation adapts inputs/outputs, doesn’t hold business logic
- Unit tests mock interfaces; integration tests validate adapters (see the test sketch after this list)
- Update `agent.md` when patterns evolve
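For example, a use-case unit test needs no real database (a sketch using stdlib `unittest` and a hand-rolled fake for the contract):

```python
# tests/test_create_note.py
import unittest

from application.use_cases.create_note import CreateNote

class FakeNoteRepository:
    """In-memory stand-in for the NoteRepository contract."""
    def __init__(self):
        self.added = []

    def add(self, note):
        self.added.append(note)

class CreateNoteTest(unittest.TestCase):
    def test_rejects_blank_text(self):
        with self.assertRaises(ValueError):
            CreateNote(FakeNoteRepository()).execute("   ")

    def test_persists_valid_note(self):
        repo = FakeNoteRepository()
        note = CreateNote(repo).execute("hello")
        self.assertEqual(repo.added, [note])

if __name__ == "__main__":
    unittest.main()
```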
This repository includes a runnable template under `template/`:
- clean architecture folder layout,
- enums + interfaces + factories,
- `agent.md` you can copy into any repo,
- minimal CLI entrypoint,
- unit tests (stdlib `unittest`).
```bash
cd template
python -m venv .venv
source .venv/bin/activate
python run.py create "hello"
python run.py list
```

Switch providers:
```bash
# in-memory
export DB_PROVIDER=memory
python run.py create "stored in memory"

# sqlite
export DB_PROVIDER=sqlite
python run.py create "stored in sqlite"
```

To publish this as a GitHub repo:
- Create a new repo (e.g., `vibe-coding-playbook`)
- Copy everything from this folder
- Keep this `README.md` as your blog post
- Keep `template/` as the starter kit users can clone
If you want to expand the template later:
- add a FastAPI presentation layer,
- add OpenTelemetry wiring in `infrastructure/observability/`,
- add a second provider type (e.g., `LLMProvider`) using the same patterns,
- add an ADR folder (`docs/decisions/`) to document why patterns exist.
If you keep the boundaries consistent, your assistants will keep producing consistent code as the repo grows.