There are moments when a prototype feels like a spark and the deadline feels like a storm. A developer knows that rapid ideation can save a day, yet a sloppy layout steals hours later. This introduction addresses that tension: how to move fast while keeping the project sane for Day 1 and beyond.
Vibe coding—a term Andrej Karpathy popularized in early 2025—shifts the role of the developer toward guiding AI assistants while retaining ownership of tests, reviews, and long-term quality. Practical workflows pair editors like Cursor with Gemini 2.5 Pro, and tools such as Google AI Studio and Firebase Studio help move an app from prompt to Cloud Run with guardrails.
This guide shows a repeatable process for organizing code and folders so the workflow scales. We map a project layout to Day 0 → Day 1+ lifecycles, so teams avoid costly rework. Along the way, readers will see how context—rules, PRDs, and plans—belongs next to the relevant code paths.
Key Takeaways
- Design the folder layout to support fast prototyping and long-term maintenance.
- Keep context discoverable: rules, PRDs, and plans should sit near code.
- Pair editors and AI tools—Cursor, Gemini, Google AI Studio—for efficient iteration.
- Align Day 0 outputs with Day 1 responsibilities to reduce rework.
- A clear layout saves hours and shortens onboarding for new contributors.
- For deeper reading, see the companion guide on how to move fast and still ship clean.
Why folder structure matters for vibe coding today
A predictable folder layout turns rapid ideas into reliable code that scales beyond the first prototype.
Engineers report that many tools shine at Day 0 but struggle once a project grows. Sourcegraph, Claude Code, and Continue are improving support for large codebases. Cursor, Windsurf, and Trae have added rules and context files to reduce repeated errors.
Why this matters: clear locations for PRDs, rules, and design notes let both humans and AI find context without long prompts. That reduces time spent re-explaining intent and shortens the loop between idea and delivery.
Teams that place CI configs, env files, and tests in expected spots cut onboarding time. Tempo Labs, Bolt.new, and Lovable.dev show how quick app builds pair well with Supabase for auth and CRUD—but continuity depends on where artifacts live.
- Predictability: consistent paths make writing code and reviews straightforward.
- Continuity: Day 0 gains survive Day 1+ when context is discoverable.
- Scale: clear seams reduce cross-feature interference as apps and databases expand.
Core principles of a scalable project layout
A resilient project layout makes rapid prototypes survive the long haul. This approach balances Day 0 speed with Day 1+ maintainability. It supports teams and AI agents so work scales without repeated rewrites.
Plan for Day 0 prototyping and Day 1+ maintenance
Enable quick, working code during Day 0. Ship small, testable slices that demonstrate value fast.
For Day 1+, codify decisions and isolate responsibilities. That reduces friction when adding features or handing work to new contributors.
Encapsulate features end-to-end with vertical slices
Vertical slices mean one feature holds schema, operations, server logic, routes, and UI. A change flows from DB to UI in a single path—no cross-folder thrash.
Wasp- and Prisma-style configs work well as an example: define schema, central ops, then co-locate pages and components.
Prioritize AI context clarity and human readability
Keep rules, PRDs, and per-feature READMEs next to the code. .cursor/rules and living plans cut the need for long prompts and speed review cycles.
- Choose tools that complement the layout, not fight it.
- Name consistently so agents and humans find context fast.
- Test per slice to prevent regressions as projects grow.
A canonical vibe coding file structure you can start with
Start with a predictable top-level layout that separates runtime code, AI context, and docs. That separation makes it faster to find what matters when the project grows and teams onboard.
At the top level, keep: app, server, client, ai, docs, and scripts. This split isolates backend concerns from UI and from the context packs agents use.
Under src/features, adopt a feature-first approach. Group vertical slices—auth, payments, analytics—so each feature holds UI, server logic, and data access.
- Keep database schema and operations colocated. For example, include *.operations.ts next to schema files so changes follow a clear DB → UI path.
- Maintain an ai/ directory for rules, PRDs, and plans to make context discoverable to both people and tools.
- Docs and scripts host architecture notes, ADRs, seeds, and repeatable commands for building and testing the app.
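As an illustration, the split above might look like the tree below. Folder names follow the text; the exact nesting is an assumption, not a requirement:

```
repo/
├── app/          # or a single src/ that contains client/ and server/
├── server/       # backend handlers and jobs
├── client/       # UI entry points
├── src/features/ # vertical slices: auth/, billing/, analytics/
├── ai/           # rules, PRDs, plans, per-feature docs
├── docs/         # architecture notes, ADRs
└── scripts/      # seeds, repeatable build and test commands
```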
Wasp-style central configs can serve as the source of truth for routes and auth, while Supabase often handles auth and CRUD in real examples. That way, teams working from Google AI Studio or Firebase Studio blueprints can deploy to Cloud Run with fewer surprises.
Mapping vertical slices into folders for end-to-end development
Map each vertical slice so a single change flows from schema to UI without hunting across the repo. That predictability speeds review and reduces regressions.
DB to UI path: schema, operations, routes, components
Follow a fixed process: define the database schema, declare operations, implement backend handlers, expose routes, then build components. In Wasp/Prisma workflows this looks like schema.prisma → main operations → server functions → pages → UI hooks.
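A minimal sketch of the operations step in that chain, assuming a Prisma-like data-access handle is injected as a dependency. The names (`createInvoice`, the `InvoiceDb` shape) are illustrative, not part of any real framework API:

```typescript
// src/features/billing/operations.ts (illustrative)
// Operations sit between the schema and the route handlers: they take
// validated input plus a data-access handle and return plain data.

type Invoice = { id: string; amountCents: number; status: "draft" | "paid" };

// Minimal data-access interface so the operation is testable without a real DB.
interface InvoiceDb {
  insert(inv: Invoice): Invoice;
  byId(id: string): Invoice | undefined;
}

export function createInvoice(db: InvoiceDb, amountCents: number): Invoice {
  if (!Number.isInteger(amountCents) || amountCents <= 0) {
    throw new Error("amountCents must be a positive integer");
  }
  const invoice: Invoice = {
    id: `inv_${Date.now()}`,
    amountCents,
    status: "draft",
  };
  return db.insert(invoice);
}

// In-memory stand-in for the schema-backed table.
export function memoryDb(): InvoiceDb {
  const rows = new Map<string, Invoice>();
  return {
    insert(inv) { rows.set(inv.id, inv); return inv; },
    byId(id) { return rows.get(id); },
  };
}
```

Because the operation depends only on the interface, the same file works against Prisma in production and an in-memory map in tests, keeping the DB → UI path traceable within the slice.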
Where authentication, billing, and dashboards live
Place auth under src/features/auth with forms, providers, and server guards. Put billing in src/features/billing and dashboards in src/features/analytics (or src/features/dashboards). Each folder owns its migrations, seeds, and README.
Incrementally adding complexity without reorganizing
Start minimal—simple auth, basic invoices—then add MFA and billing tiers inside the same slice. Encapsulate provider SDKs behind service interfaces so swapping Stripe or other adapters does not force repo-wide changes.
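One way to sketch that service-interface idea. The `PaymentProvider` interface and `FakeProvider` below are assumptions for illustration, not a real Stripe API; a `StripeProvider` would implement the same interface using the actual SDK:

```typescript
// src/features/billing/server/paymentProvider.ts (illustrative)
// The slice depends only on this interface; Stripe (or any other SDK)
// is wired in behind it, so swapping providers never touches the UI.

interface ChargeResult { ok: boolean; providerRef: string }

interface PaymentProvider {
  charge(amountCents: number, customerId: string): Promise<ChargeResult>;
}

// Test double used in development and CI.
class FakeProvider implements PaymentProvider {
  async charge(amountCents: number, customerId: string): Promise<ChargeResult> {
    return { ok: amountCents > 0, providerRef: `fake_${customerId}` };
  }
}

// Slice-level logic only ever sees the interface.
async function chargeCustomer(
  provider: PaymentProvider,
  amountCents: number,
  customerId: string,
): Promise<ChargeResult> {
  const result = await provider.charge(amountCents, customerId);
  if (!result.ok) throw new Error("charge declined");
  return result;
}
```

Swapping Stripe for another processor then means adding one adapter class, with no repo-wide changes.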

- Keep validations and shared types close to the feature boundary.
- Document intent in a per-feature README: what exists now and what’s next.
- Example: invoice schema → create/charge operations → Stripe server call → confirmation UI.
Environment, configuration, and secrets management
Environment and secrets deserve a predictable home so teams and agents run the same app everywhere.
Config files: app-level, per-feature, and per-environment
Centralize variables with a clear .env pattern: .env.local, .env.staging, and .env.production. This reduces context drift and saves time during handoffs.
Keep app-level config explicit: feature flags, API endpoints, and models should live in a versioned config file. Per-feature files clarify contracts—auth providers or billing settings stay at the slice boundary.
Local, staging, and production parity for faster AI iterations
Enforce parity with a template env file and scripts for seeding, migrations, and health checks. Firebase Studio and Cursor users benefit when blueprints map to real environments and tools behave predictably.
- Document secrets and require secure tooling and rotation to avoid leaks during development.
- Validate early: lint and schema-check env values to catch errors before CI.
- Record choices: capture model and provider selections (example: Gemini 2.5 Pro) so agents know targets.
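A hand-rolled validator is enough to catch missing or malformed values before CI. The variable names below are examples of the pattern, not a required set:

```typescript
// scripts/check-env.ts (illustrative)
// Fails fast when required variables are missing or malformed,
// so broken configs surface locally instead of in a deploy.

type EnvSpec = { name: string; validate: (v: string) => boolean };

const REQUIRED: EnvSpec[] = [
  { name: "DATABASE_URL", validate: (v) => v.startsWith("postgres://") },
  { name: "MODEL_NAME", validate: (v) => v.length > 0 }, // e.g. a pinned Gemini model id
  { name: "PORT", validate: (v) => /^\d+$/.test(v) },
];

function checkEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  for (const spec of REQUIRED) {
    const value = env[spec.name];
    if (value === undefined) errors.push(`${spec.name}: missing`);
    else if (!spec.validate(value)) errors.push(`${spec.name}: malformed`);
  }
  return errors;
}
```

Run it as a pre-commit or CI step and exit non-zero when `checkEnv(process.env)` returns errors; the same check runs identically against .env.local, .env.staging, and .env.production.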
Embedding AI rules and context for reliable assistance
Concise rules next to project context make AI help predictable and useful.
Place a .cursor/rules directory at the repo root and split it by domain: style, conventions, auth, and operations. Keep each file short and focused so the agent reads high-signal content first.
Package a context pack that includes the PRD, a short Plan, per-feature READMEs, and template references. This pack frames each task and reduces back-and-forth prompts during review.
Guardrails and naming conventions
Use ordered names to prioritize reading: 01-style.mdc, 02-architecture.mdc, 03-auth.mdc. Add remediation steps for common mistakes—dependency misconfig, bad imports—so the agent can suggest exact fixes instead of vague advice.
Closed-loop improvements
Include prompt templates for common tasks such as a “vertical-slice plan” or a “self-critique pass.” After a session, have the agent critique the rules and update them. This shortens the loop and keeps rules current.
- Keep examples of good responses so the model has a concrete standard.
- Record decision history near the rules to avoid repeated changes and to capture rationale.
- Standardize a prompt for “review for breadth and clarity” to automate quality checks.
| Item | Purpose | Recommended Name |
|---|---|---|
| Style rules | Enforce formatting and naming | 01-style.mdc |
| Architecture notes | Scope and trade-offs for features | 02-architecture.mdc |
| Auth & Ops | Guards for providers and imports | 03-auth.mdc |
| Prompt templates | Consistent starts for tasks | 04-prompts.mdc |
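As an illustration, a rules file in this scheme might read as follows; the content is a sketch of the high-signal, remediation-oriented style described above, not a required format:

```
# 01-style.mdc (illustrative)
- Use named exports; avoid default exports.
- Feature code lives under src/features/<slice>; never import across
  slices except through src/lib.
- Remediation: if an import crosses a slice boundary, move the shared
  type into src/lib and re-import from there.
```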
Documentation that closes the loop for humans and AI
Recording what changed after a feature slice turns short-term decisions into lasting context. Teams that capture design notes and decision records save time and reduce rework. Good docs make onboarding faster and let agents give better suggestions.
ai/docs for per-feature design notes and rationale
Store per-feature notes in ai/docs so humans and agents retrieve focused context quickly.
- Keep a short README per feature: intent, tradeoffs, and links to key files and tests.
- Use a standardized checklist that lists what changed—schema, operations, server handlers, components—and links to the exact file or files.
- Include a short example of usage or sequence list when it clarifies data flow without heavy maintenance.
- Make onboarding pages that reference completed slices so new contributors mirror quality fast.
Auto-generated change logs and decision records
After merging, run an automated doc pass. A prompt to the AI can summarize diffs and create a decision record while context is still fresh.
Store records immediately: who decided, why, tradeoffs, and links to code. This reduces time spent on future reviews and supports cross-app changes for projects with multiple services.
| Artifact | Purpose | Where |
|---|---|---|
| Per-feature README | Design notes, intent, quick start | ai/docs/features/feature-name/README.md |
| Change checklist | List of changed items and linked files | ai/docs/features/feature-name/CHANGES.md |
| Decision record | Captured tradeoffs and rationale | ai/docs/decisions/YYYY-MM-DD-slug.md |
| Onboarding example | Concrete sample slice to mirror | ai/docs/onboarding/example.md |
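A decision record in this layout can stay very short; the example below is a sketch of the fields named above (who, why, tradeoffs, links), with placeholder values:

```
# ai/docs/decisions/2025-06-01-billing-provider.md (illustrative)
Decision: put billing behind a PaymentProvider service interface.
Who: billing slice owners.
Why: keep the payment SDK swappable without repo-wide changes.
Tradeoffs: one extra indirection vs. direct SDK coupling everywhere.
Links: src/features/billing/server/, the merging PR.
```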
Make docs a living tool: require a doc update in the same PR as code changes. Encourage the team to request a doc prompt after each merge so documentation becomes an accelerator—not a chore.
Collaboration, versioning, and CI/CD considerations
When branches mirror vertical slices, reviewers and tools can reason about changes with less context. This keeps reviews focused and reduces cross-cutting surprises during merges.
Adopt a simple, repeatable process: branch from main, implement the task inside a slice, open a small PR with docs updated, and run automated checks. Small PRs help the team and AI assistants understand intent quickly.
Branching and code reviews aligned to vertical slices
Align branches to features so each PR contains localized changes. Use code ownership rules to route reviews to the right maintainers.
- Run CI on every branch: validate schema, build, tests, and lint.
- Integrate static analysis and secret scanning to block risky code early.
- Enable preview environments per branch so stakeholders test a feature in isolation.
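A per-branch pipeline along those lines might be sketched as a GitHub Actions workflow. The script names (`lint`, `schema:validate`) are assumptions about your package.json, not standard commands:

```yaml
# .github/workflows/ci.yml (illustrative)
name: ci
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint
      - run: npm run schema:validate  # assumed script: checks the Prisma/SQL schema
      - run: npm test
      - run: npm run build
```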
Leverage tools like Sourcegraph for cross-repo insight and Continue or Cline in VS Code to iterate agentically on tasks. Document pipelines in docs/ci and adopt a clear versioning plan so releases match meaningful changes.
Tool-aware structure: from Google AI Studio to Firebase Studio and Gemini Code Assist
Treat blueprints and generated previews as inputs to a disciplined integration workflow that preserves repo integrity. Studio outputs must live beside code, not in a separate sandbox. This makes reviews, tests, and deploys repeatable.
Placing blueprints, prototypes, and deploy artifacts
Create dedicated folders: tools/ai-studio for prompt histories, generated artifacts, and Cloud Run deploy configs; and tools/firebase for blueprints, prototype notes, and environment-specific publish outputs.
Keep a short list of model versions and constraints so contributors know generation limits and cost profiles. Store reusable prompts and chat transcripts to speed iteration and keep context consistent.
- Step 1: record the prompt, attach the preview, run tests.
- Step 2: convert generated code into a vetted branch and add tests.
- Step 3: publish artifacts to the deployment space (Dockerfile, manifest).
| Tool | Best for | Repo home |
|---|---|---|
| Google AI Studio | Rapid prompt-driven prototypes, live preview, Deploy to Cloud Run | tools/ai-studio/ |
| Firebase Studio | Blueprint → review → prototype → publish workflows | tools/firebase/ |
| Gemini Code Assist | IDE edits: refactor, tests, precise code generation | tools/gemini/ |
Govern changes: track sessions, evaluate structural impact, and use short feedback cycles between Studio prototypes and IDE refinement. We recommend evaluating all Studio-introduced changes before merging to preserve a feature-first repo and clean stack alignment.
Real-world examples and patterns inspired by modern stacks
A few focused patterns make it easier to map configs, auth, and feature code across a growing backend and frontend.
Wasp/Prisma as a central contract: keep a main config that lists routes, auth, and operations. Let that contract name routes and models while feature folders hold real implementations. This balances a single source of truth with per-feature ownership.
Supabase-based auth and database: place provider configs and Prisma/SQL schemas inside the auth slice. Put backend adapters under src/features/auth/adapters, and keep UI components in src/features/auth/ui. That separation reduces coupling and speeds development.
- schema → src/features/billing/schema.prisma
- operations → src/features/billing/operations.ts
- server handlers → src/features/billing/server/
- UI components → src/features/billing/ui/
Keep database access behind repositories or services so the backend stays testable and portable. Add a runbook file with migrations, seeding, and rollback steps. Record model conventions—names, relations, indices—to avoid drift.
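The repository idea can be sketched as follows; the `UserRepo` interface and in-memory variant are illustrative names, assuming a Supabase-backed implementation exists alongside them in production:

```typescript
// src/features/auth/server/userRepo.ts (illustrative)
// Handlers depend on this interface, so the same backend code runs
// against Supabase in production and an in-memory map in tests.

type User = { id: string; email: string };

interface UserRepo {
  findByEmail(email: string): Promise<User | null>;
  create(email: string): Promise<User>;
}

class InMemoryUserRepo implements UserRepo {
  private users = new Map<string, User>();
  async findByEmail(email: string): Promise<User | null> {
    return this.users.get(email) ?? null;
  }
  async create(email: string): Promise<User> {
    const user = { id: `u_${this.users.size + 1}`, email };
    this.users.set(email, user);
    return user;
  }
}

// Backend service logic stays portable: no SQL or SDK calls here.
async function registerUser(repo: UserRepo, email: string): Promise<User> {
  const existing = await repo.findByEmail(email);
  if (existing) throw new Error("email already registered");
  return repo.create(email);
}
```

Because `registerUser` never touches SQL or an SDK, swapping the storage backend or testing the handler requires no changes to feature logic.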
| What | Do | Avoid |
|---|---|---|
| Upgrades | Use Sourcegraph for broad changes | Mass edits without search |
| PRs | Record good examples | Large, mixed changes |
| Runbooks | Include migrations & rollbacks | Unclear one-off steps |
Checklist: keep the central contract small, co-locate schemas, separate adapters from UI, and document runbooks and PR examples. Apply these patterns so teams maintain velocity even with a lot of concurrent work.
Conclusion
When prototypes must become products, predictable practices save time and reduce risk.
Teams that pair vertical slices, in-repo rules, and living docs turn a quick idea into durable software in hours. This approach makes it easier to ship features and manage changes across versions.
Use tool-aware placement: keep AI Studio and Firebase artifacts near reviewed code so prototypes move to production with intent intact. Good documentation and a clear way to version things reduce hidden costs over time.
In short, strong conventions are the reliable way to translate vibe coding speed into maintainable code. Revisit the approach periodically—small refinements compound and keep projects ready for the next feature or release.
FAQ
What does "Folder Structure for Scalable and Clean Vibe Projects" mean in practice?
It means organizing a project so teams can prototype fast and maintain code over time. Start with clear top-level directories—app, server, client, ai, docs, scripts—and group work by feature rather than by technical layer. This reduces friction when adding features like authentication, billing, or analytics and makes CI/CD, reviews, and deployments more predictable.
Why does folder structure matter for modern development and AI-assisted workflows?
A consistent layout speeds onboarding, clarifies responsibilities, and improves AI context accuracy. When files follow predictable patterns, tools like code assistants and deployment platforms find the right configuration and prompts, which reduces mistakes and shortens iteration cycles across Day 0 prototypes and Day 1+ maintenance.
What are the core principles of a scalable project layout?
Prioritize feature-first organization, vertical slices (end-to-end feature folders), separation of UI, business logic, and data, and explicit documentation for AI context. These principles make incremental changes safer and let teams scale complexity without frequent refactors.
How should teams plan for Day 0 prototyping versus Day 1 maintenance?
On Day 0, focus on minimal viable paths: clear entry points, a small server, and mock data. For Day 1 and beyond, transition to feature folders, add tests, extract configs, and formalize secrets and environment parity. This staged approach preserves speed while enabling robust maintenance.
What does "encapsulate features end-to-end with vertical slices" look like?
Each feature folder contains schema, operations, routes, components, tests, and docs. For example, an “auth” slice holds DB migrations, API handlers, UI components, and a README. That keeps ownership clear and makes CI, code review, and rollbacks straightforward.
How do you prioritize AI context clarity and human readability simultaneously?
Use concise READMEs, PRDs, plan files, and context packs adjacent to code. Name rules and templates predictably (e.g., .cursor/rules), and keep human comments that explain intent. This dual focus improves model outputs and helps developers understand trade-offs.
What is a canonical file layout to start with for a full-stack app?
Begin with top-level directories—app, server, client, ai, docs, scripts—and under src/features place feature-first folders. Within each feature, separate UI, logic, and data. Include infra config and example deploy scripts so prototypes can evolve into production with minimal restructuring.
How should a sample tree handle auth, payments, and analytics?
Create feature slices like src/features/auth, src/features/billing, src/features/analytics. Each slice includes DB schema, operations, routes, UI components, and tests. Keep shared utilities in a small src/lib or shared folder to avoid duplication while preserving slice autonomy.
How do you map vertical slices from DB to UI?
Define a clear DB-to-UI path: schema → operations (CRUD) → API routes → services → UI components. Keep each step near the feature to trace behavior end-to-end. This mapping simplifies debugging and enables focused code reviews tied to user outcomes.
Where should authentication, billing, and dashboards live in the repository?
They belong in their respective feature folders under src/features. Cross-cutting concerns—like auth hooks or billing helpers—can live in a shared lib with strict interfaces. That prevents leaks across slices while making integration explicit.
How can teams incrementally add complexity without reorganizing?
Start with stable conventions: agreed folder names, file locations, and config patterns. When a feature grows, add subfolders (migrations, tests, docs) inside the same slice. This keeps history intact and avoids large refactors that slow delivery.
How should environment, configuration, and secrets be managed?
Use layered config: app-level defaults, per-feature overrides, and per-environment values. Store secrets in a secure vault or platform-managed store, not in repo. Maintain local, staging, and production parity to speed testing and AI-driven simulations.
What config files are essential across environments?
Include an app-level config (app.config.json or similar), per-feature config files, and environment-specific overlays (.env.local, .env.staging, .env.production). Keep deploy scripts and CI config in scripts/ and docs explaining how values map to services.
How do you embed AI rules and context for reliable assistance?
Create a rules directory (e.g., .cursor/rules) with naming conventions, store context packs like PRDs and README templates, and add guardrails that encode common constraints. This reduces repeated model errors and makes automated suggestions actionable.
What are context packs and how should they be organized?
Context packs include PRDs, plans, READMEs, and template references placed near features. Keep them concise and versioned; use them as the primary source for prompt context so assistants produce consistent, explainable results.
How do guardrails reduce repeated AI mistakes?
Guardrails enforce rules—type checks, allowed APIs, and response formats—that models must follow. Embedding them in repository patterns and CI checks prevents regressions and maintains safety as models evolve.
Where should per-feature design notes and rationale live?
Place them in ai/docs or within the feature folder as design.md files. These notes should capture trade-offs, alternatives considered, and decision records so future contributors and models understand the why behind the code.
How do auto-generated change logs and decision records help teams?
They provide an audit trail of what changed, why, and who approved it. Auto-generation from commits or PR metadata keeps records up-to-date and supports compliance, postmortems, and better AI-assisted summaries.
What are best practices for branching and code reviews with vertical slices?
Align branches to feature slices and scope PRs to single vertical changes. Reviewers should validate schema, API, and UI together. This reduces missed integration issues and streamlines CI by limiting blast radius.
How should tool-specific artifacts (Google AI Studio, Firebase, Gemini) be organized?
Reserve an ai/tools or integrations folder for blueprints, prototypes, and deploy artifacts. Keep platform-specific configs (Firebase.json, studio manifests) near deploy scripts and document intended use to avoid confusion during handoffs.
Where to place AI Studio or Firebase blueprints and deploy artifacts?
Store them under scripts/ or ai/deploy with clear versioning and README instructions. Include example credentials (masked) and steps for local emulation to make reproducing production behavior easier for developers and AI tools.
What real-world patterns help when using central config like Wasp/Prisma with feature folders?
Keep central config files for schema and database connections at the repo root, and reference them from feature folders. Use a small adapter layer in src/lib so features consume a stable interface while central tools manage global concerns.
How do Supabase-based auth and database folders scale?
Treat Supabase as a platform: store migrations and table schemas under db/, create feature-specific access rules, and keep client SDK wrappers in feature slices. This keeps platform specifics contained and makes migrations and rollbacks predictable.


