# Add a Database Table

This recipe walks through adding a new database table, from schema definition to querying it in the API.

## 1. Define the schema

Create a file in `db/schema/` with your table definition:

```ts
// db/schema/project.ts
import { relations } from "drizzle-orm";
import { index, pgTable, text, timestamp } from "drizzle-orm/pg-core";
import { generateId } from "./id";
import { organization } from "./organization";

export const project = pgTable(
  "project",
  {
    id: text()
      .primaryKey()
      .$defaultFn(() => generateId("prj")),
    name: text().notNull(),
    description: text(),
    organizationId: text()
      .notNull()
      .references(() => organization.id, { onDelete: "cascade" }),
    createdAt: timestamp({ withTimezone: true, mode: "date" })
      .defaultNow()
      .notNull(),
    updatedAt: timestamp({ withTimezone: true, mode: "date" })
      .defaultNow()
      .$onUpdate(() => new Date())
      .notNull(),
  },
  (table) => [index("project_organization_id_idx").on(table.organizationId)],
);

export const projectRelations = relations(project, ({ one }) => ({
  organization: one(organization, {
    fields: [project.organizationId],
    references: [organization.id],
  }),
}));

export type Project = typeof project.$inferSelect;
export type NewProject = typeof project.$inferInsert;
```

Key conventions:

* **IDs** – use `generateId("xxx")` with a unique 3-letter prefix (see [Schema](/database/schema) for existing prefixes)
* **Timestamps** – always include `createdAt` and `updatedAt` with timezone
* **Foreign keys** – use `onDelete: "cascade"` for owned resources
* **Indexes** – add indexes on columns used in `WHERE` or `JOIN` clauses
* **Casing** – write TypeScript in camelCase; Drizzle converts to snake\_case automatically

## 2. Export from the barrel file

```ts
// db/schema/index.ts
export * from "./project"; // [!code ++]
```

## 3.
Generate and apply the migration

```bash
bun db:generate   # Creates a new SQL migration file in db/migrations/
bun db:push       # Applies it to your local database
```

Review the generated SQL in `db/migrations/` before applying to staging or production.

## 4. Add seed data (optional)

Create a seed function:

```ts
// db/seeds/projects.ts
import type { PostgresJsDatabase } from "drizzle-orm/postgres-js";
import type * as schema from "../schema";
import { project } from "../schema";

export async function seedProjects(db: PostgresJsDatabase<typeof schema>) {
  const projects = [
    { name: "Acme Dashboard", organizationId: "org_..." },
    { name: "Mobile App", organizationId: "org_..." },
  ];
  for (const p of projects) {
    await db.insert(project).values(p).onConflictDoNothing();
  }
  console.log(`Seeded ${projects.length} projects`);
}
```

Call it from `db/scripts/seed.ts`:

```ts
import { seedProjects } from "../seeds/projects";

await seedProjects(db);
```

## 5. Query in a tRPC procedure

```ts
// apps/api/routers/project.ts
import { protectedProcedure, router } from "../lib/trpc.js";

export const projectRouter = router({
  list: protectedProcedure.query(async ({ ctx }) => {
    return ctx.db.query.project.findMany({
      where: (p, { eq }) => eq(p.organizationId, ctx.session.activeOrganizationId!),
      orderBy: (p, { desc }) => desc(p.createdAt),
    });
  }),
});
```

See [Add a tRPC Procedure](/recipes/new-procedure) for the full frontend wiring.

## Reference

* [Schema](/database/schema) – column conventions, ID prefixes, entity reference
* [Migrations](/database/migrations) – migration workflow and best practices
* [Query Patterns](/database/queries) – multi-tenant queries, joins, transactions

---

---
url: /recipes/new-page.md
---

# Add a Page

This recipe walks through adding a new route to the app. All routes live in `apps/app/routes/` and are auto-discovered by [TanStack Router](https://tanstack.com/router/latest).

## 1.
Create the route file

Add a file under the `(app)` layout group so it inherits the auth guard and shell layout:

```
apps/app/routes/(app)/projects.tsx
```

```tsx
import { createFileRoute } from "@tanstack/react-router";

export const Route = createFileRoute("/(app)/projects")({
  component: Projects,
});

function Projects() {
  return (

    <div>
      <h1>Projects</h1>
      <p>Your projects will appear here.</p>
    </div>
  );
}
```

Run `bun app:dev` – TanStack Router regenerates `lib/routeTree.gen.ts` automatically and the page is available at `/projects`.

## 2. Add navigation

Open the sidebar or header component and add a link:

```tsx
import { Link } from "@tanstack/react-router";

<Link to="/projects">Projects</Link>;
```

`<Link>` is type-safe – TypeScript will error if the route doesn't exist.

## 3. Fetch data

Use a tRPC query hook inside the component:

```tsx
import { queryOptions, useSuspenseQuery } from "@tanstack/react-query";
import { trpcClient } from "@/lib/trpc";

function projectsQueryOptions() {
  return queryOptions({
    queryKey: ["projects"],
    queryFn: () => trpcClient.project.list.query(),
  });
}

function Projects() {
  const { data } = useSuspenseQuery(projectsQueryOptions());
  return (

    <div>
      <h1>Projects</h1>
    </div>
); } ``` See [State & Data Fetching](/frontend/state) for more patterns. ## 4. Add search params (optional) Validate query string parameters with Zod: ```tsx import { z } from "zod"; const searchSchema = z.object({ page: z.number().default(1), q: z.string().optional(), }); export const Route = createFileRoute("/(app)/projects")({ validateSearch: searchSchema, component: Projects, }); function Projects() { const { page, q } = Route.useSearch(); // ... } ``` ## 5. Add a public page (optional) To create a page that doesn't require authentication, place it under the `(auth)` layout group or directly in `routes/`: ``` apps/app/routes/(auth)/pricing.tsx ``` Pages outside `(app)/` skip the auth guard and don't render the app shell layout. ## Reference * [Routing](/frontend/routing) – file conventions, layouts, and route guards * [TanStack Router docs](https://tanstack.com/router/latest/docs/framework/react/guide/file-based-routing) --- --- url: /recipes/new-procedure.md --- # Add a tRPC Procedure This recipe adds a new tRPC procedure with input validation and wires it up from the API to the frontend. ## 1. 
Create the router file Add a new router in `apps/api/routers/`: ```ts // apps/api/routers/project.ts import { z } from "zod"; import { schema } from "@repo/db"; import { protectedProcedure, router } from "../lib/trpc.js"; export const projectRouter = router({ list: protectedProcedure.query(async ({ ctx }) => { const projects = await ctx.db.query.project.findMany({ where: (p, { eq }) => eq(p.organizationId, ctx.session.activeOrganizationId!), orderBy: (p, { desc }) => desc(p.createdAt), }); return { projects }; }), create: protectedProcedure .input( z.object({ name: z.string().min(1).max(100), description: z.string().max(500).optional(), }), ) .mutation(async ({ ctx, input }) => { const [project] = await ctx.db .insert(schema.project) .values({ ...input, organizationId: ctx.session.activeOrganizationId!, }) .returning(); return project; }), }); ``` Use `protectedProcedure` for authenticated endpoints and `publicProcedure` for unauthenticated ones. Protected procedures guarantee `ctx.session` and `ctx.user` are non-null. ## 2. Register the router Import and add it to the app router in `apps/api/lib/app.ts`: ```ts import { projectRouter } from "../routers/project.js"; const appRouter = router({ billing: billingRouter, user: userRouter, organization: organizationRouter, project: projectRouter, // [!code ++] }); ``` The procedure is now callable at `/api/trpc/project.list` and `/api/trpc/project.create`. ## 3. 
Call from the frontend

Create a query options helper in `apps/app/lib/queries/`:

```ts
// apps/app/lib/queries/project.ts
import { queryOptions, useQuery } from "@tanstack/react-query";
import { trpcClient } from "../trpc";

export function projectListOptions() {
  return queryOptions({
    queryKey: ["projects"],
    queryFn: () => trpcClient.project.list.query(),
  });
}

export function useProjectList() {
  return useQuery(projectListOptions());
}
```

Use in a component:

```tsx
import { useProjectList } from "@/lib/queries/project";

function ProjectList() {
  const { data, isLoading } = useProjectList();
  if (isLoading) return

    <div>Loading...</div>
;
  return (
    <ul>
      {data?.projects.map((project) => (
        <li key={project.id}>{project.name}</li>
      ))}
    </ul>
  );
}
```

## 4. Call a mutation

```tsx
import { useQueryClient } from "@tanstack/react-query";
import { trpcClient } from "@/lib/trpc";

function CreateProjectButton() {
  const queryClient = useQueryClient();

  async function handleCreate() {
    await trpcClient.project.create.mutate({
      name: "New Project",
    });
    // Invalidate the list so it refetches
    await queryClient.invalidateQueries({ queryKey: ["projects"] });
  }

  return <button onClick={handleCreate}>Create project</button>;
}
```

## Reference

* [Procedures](/api/procedures) – query vs mutation, public vs protected
* [Validation & Errors](/api/validation-errors) – Zod input schemas and error handling
* [State & Data Fetching](/frontend/state) – TanStack Query patterns

---

---
url: /recipes/teams.md
---

# Add Teams

Teams let you create subgroups within organizations. This recipe enables Better Auth's [teams feature](https://www.better-auth.com/docs/plugins/organization#teams) and wires it into the existing schema.

## 1. Add the schema

Create `db/schema/team.ts`: ```typescript import { relations } from "drizzle-orm"; import { index, pgTable, text, timestamp, unique } from "drizzle-orm/pg-core"; import { generateId } from "./id"; import { organization } from "./organization"; import { user } from "./user"; export const team = pgTable( "team", { id: text() .primaryKey() .$defaultFn(() => generateId("tea")), name: text().notNull(), organizationId: text() .notNull() .references(() => organization.id, { onDelete: "cascade" }), createdAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .notNull(), updatedAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .$onUpdate(() => new Date()) .notNull(), }, (table) => [index("team_organization_id_idx").on(table.organizationId)], ); export const teamMember = pgTable( "team_member", { id: text() .primaryKey() .$defaultFn(() => generateId("tmb")), teamId: text() .notNull() .references(() => team.id, { onDelete: "cascade" }), userId: text() .notNull() .references(() => user.id, { onDelete: "cascade" }),
createdAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .notNull(), updatedAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .$onUpdate(() => new Date()) .notNull(), }, (table) => [ unique("team_member_team_user_unique").on(table.teamId, table.userId), index("team_member_team_id_idx").on(table.teamId), index("team_member_user_id_idx").on(table.userId), ], ); export const teamRelations = relations(team, ({ one, many }) => ({ organization: one(organization, { fields: [team.organizationId], references: [organization.id], }), members: many(teamMember), })); export const teamMemberRelations = relations(teamMember, ({ one }) => ({ team: one(team, { fields: [teamMember.teamId], references: [team.id], }), user: one(user, { fields: [teamMember.userId], references: [user.id], }), })); ``` Export it from `db/schema/index.ts`: ```typescript export * from "./team"; // [!code ++] ``` ## 2. Extend session and invitation tables Add `activeTeamId` to the session table in `db/schema/user.ts`: ```typescript export const session = pgTable( "session", { // ...existing columns activeOrganizationId: text(), activeTeamId: text(), // [!code ++] }, // ... ); ``` Add `teamId` to the invitation table in `db/schema/invitation.ts` for team-scoped invitations: ```typescript export const invitation = pgTable( "invitation", { // ...existing columns teamId: text().references(() => team.id, { onDelete: "cascade" }), // [!code ++] }, // ... ); ``` ## 3. Enable the teams plugin In `apps/api/lib/auth.ts`, add the new tables to the Drizzle adapter schema and enable teams in the organization plugin: ```typescript database: drizzleAdapter(db, { provider: "pg", schema: { // ...existing mappings team: Db.team, // [!code ++] teamMember: Db.teamMember, // [!code ++] }, }), // ... 
plugins: [ organization({ allowUserToCreateOrganization: true, organizationLimit: 5, creatorRole: "owner", teams: { enabled: true }, // [!code ++] }), ], ``` In `apps/app/lib/auth.ts`, enable teams on the client: ```typescript export const auth = createAuthClient({ // ... plugins: [ organizationClient({ teams: { enabled: true }, // [!code ++] }), // ...other plugins ], }); ``` ## 4. Apply the migration ```bash bun db:generate bun db:push ``` ## 5. Use the teams API Create a team within the active organization: ```ts await auth.organization.createTeam({ name: "Engineering", }); ``` Set the active team for the current session: ```ts await auth.organization.setActiveTeam({ teamId: "tea_...", }); ``` List teams and manage members: ```ts // List teams in the active organization const { data: teams } = await auth.organization.listTeams(); // Add a member to a team await auth.organization.addTeamMember({ teamId: "tea_...", userId: "usr_...", }); // Remove a member from a team await auth.organization.removeTeamMember({ teamId: "tea_...", userId: "usr_...", }); ``` The active team ID is available in the session as `session.activeTeamId`, alongside the existing `session.activeOrganizationId`. ## Reference * [Better Auth organization plugin – Teams](https://www.better-auth.com/docs/plugins/organization#teams) * [Organizations & Roles](/auth/organizations) – base organization setup --- --- url: /adr/001-auth-hint-cookie.md --- # ADR-001 Auth Hint Cookie For Edge Routing **Status:** Accepted **Date:** 2025-12-28 **Tags:** auth, routing, edge ## Problem The web edge needs a fast signal to route `/` without owning auth logic. ## Decision Use a dedicated auth-hint cookie set on login and cleared on logout or invalid session. The web worker checks only cookie presence to route, while the app remains the authority. No API calls or session validation in `web`. This cookie is NOT a security boundary. It is a routing hint only. 
False positives are acceptable and result in one extra redirect to `/login`. ## Implementation Notes * Cookie name: `__Host-auth` in HTTPS; `auth` in HTTP dev (browsers reject `__Host-` without Secure). * Cookie lifecycle: set on new session; clear on sign-out; clear on session-check failure. * Web routing: check for either cookie name; never read session cookies. ## Alternatives Considered 1. **Validate session in web via API** – Couples edge to auth, adds latency/failure modes. 2. **Read Better Auth session cookie directly** – Brittle to auth library changes and cookie formats. ## Consequences * **Positive:** Faster edge routing, clear separation of concerns, auth-lib agnostic. * **Negative:** False positives cause one extra redirect; requires maintaining set/clear hooks. ## Links * https://github.com/kriasoft/react-starter-kit/issues/2101 --- --- url: /adr/000-template.md --- # ADR-NNN Title **Status:** Proposed | Accepted | Deprecated | Superseded\ **Date:** YYYY-MM-DD\ **Tags:** tag1, tag2 ## Problem * One or two sentences on the decision trigger or constraint. ## Decision * The chosen approach in a short paragraph. ## Alternatives (brief) * Option A – why not. * Option B – why not. ## Impact * Positive: * Negative/Risks: ## Links * Code/Docs: * Related ADRs: --- --- url: /api.md --- # API Overview The API server (`apps/api/`) runs as a Cloudflare Worker and handles all backend logic: authentication, data access, and billing webhooks. It combines two frameworks: * **[Hono](https://hono.dev/)** – lightweight HTTP router for auth endpoints, webhooks, and health checks * **[tRPC](https://trpc.io/)** – type-safe RPC layer for all client-facing queries and mutations Hono handles the HTTP surface. tRPC handles the typed contract between frontend and backend. They share the same Worker and middleware stack. 
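The division of labor described above can be sketched as a single fetch handler that dispatches by path prefix. This is a dependency-free toy stand-in, not the project's actual `app.ts` — the handlers here are illustrative placeholders for where Better Auth and the tRPC adapter would run:

```typescript
// Toy path-prefix dispatcher mirroring how one Worker fetch handler
// fronts both the auth routes and the tRPC adapter.
type Handler = (req: Request) => Response | Promise<Response>;

function dispatch(
  routes: [prefix: string, handler: Handler][],
  fallback: Handler,
): Handler {
  return (req) => {
    const { pathname } = new URL(req.url);
    // First matching prefix wins, like middleware-ordered routing.
    for (const [prefix, handler] of routes) {
      if (pathname.startsWith(prefix)) return handler(req);
    }
    return fallback(req);
  };
}

const handler = dispatch(
  [
    ["/api/auth", () => new Response("Better Auth would handle this")],
    ["/api/trpc", () => new Response("tRPC fetch adapter would handle this")],
    ["/health", () => Response.json({ status: "ok" })],
  ],
  () => new Response("Not found", { status: 404 }),
);
```

In the real app this dispatch is what Hono's router provides; the point is that both frameworks sit behind one entrypoint and share the same request context.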
## How the Worker is Wired The API has two entrypoints – one for production (Cloudflare Workers) and one for local development (Bun): | File | Runtime | Description | | ----------- | ------------------ | ---------------------------------------------- | | `worker.ts` | Cloudflare Workers | Production entrypoint | | `dev.ts` | Bun | Local dev server via `wrangler` platform proxy | Both follow the same structure: ``` worker.ts / dev.ts ├── errorHandler, notFoundHandler ├── secureHeaders() ├── requestId() ├── logger() ├── context init (db, dbDirect, auth) └── mount app.ts ├── GET /api → API info (JSON) ├── GET /health → health check ├── * /api/auth/* → Better Auth handler └── * /api/trpc/* → tRPC fetch adapter ``` The top-level worker (`worker.ts`) sets up global middleware and initializes shared resources, then mounts the core Hono app (`lib/app.ts`) which defines the actual routes. ```ts // apps/api/worker.ts (simplified) const worker = new Hono(); worker.onError(errorHandler); worker.notFound(notFoundHandler); worker.use(secureHeaders()); worker.use(requestId({ generator: requestIdGenerator })); worker.use(logger()); // Initialize shared context worker.use(async (c, next) => { const db = createDb(c.env.HYPERDRIVE_CACHED); const dbDirect = createDb(c.env.HYPERDRIVE_DIRECT); c.set("db", db); c.set("dbDirect", dbDirect); c.set("auth", createAuth(db, c.env)); await next(); }); // Mount the core app worker.route("/", app); ``` ## Endpoints | Path | Method | Handler | Description | | ------------- | --------- | ----------- | ------------------------------------------------------------------------------ | | `/` | GET | Hono | Redirects to `/api` | | `/api` | GET | Hono | API metadata (name, version, endpoints) | | `/health` | GET | Hono | Health check – returns `{ status, timestamp }` | | `/api/auth/*` | GET, POST | Better Auth | Authentication routes ([docs](https://www.better-auth.com/docs/api-reference)) | | `/api/trpc/*` | \* | tRPC | Type-safe RPC – all queries and 
mutations | ## tRPC Router The root router merges domain-specific sub-routers: ```ts // apps/api/lib/app.ts const appRouter = router({ billing: billingRouter, user: userRouter, organization: organizationRouter, }); ``` Each sub-router lives in `routers/` and exports a single router instance. See [Procedures](./procedures) for details on adding your own. ## Project Structure ```bash apps/api/ ├── worker.ts # Cloudflare Workers entrypoint ├── dev.ts # Local dev server (Bun) ├── index.ts # Public package exports ├── lib/ │ ├── ai.ts # OpenAI provider factory │ ├── app.ts # Hono app + tRPC router composition │ ├── auth.ts # Better Auth configuration │ ├── context.ts # TRPCContext and AppContext types │ ├── db.ts # Drizzle ORM database factory │ ├── email.ts # Resend email utilities │ ├── env.ts # Environment variable schema (Zod) │ ├── loaders.ts # DataLoader instances for N+1 prevention │ ├── middleware.ts # Error handler, 404 handler, request ID │ ├── plans.ts # Subscription plan limits │ ├── stripe.ts # Stripe client factory │ └── trpc.ts # tRPC init, procedures, error formatter ├── routers/ │ ├── billing.ts # Subscription queries │ ├── billing.test.ts # Billing router tests │ ├── organization.ts # Organization CRUD │ └── user.ts # User profile queries └── wrangler.jsonc # Cloudflare Workers config ``` ## Calling the API from the Frontend The frontend app (`apps/app/`) uses `@trpc/client` with TanStack Query integration. The tRPC client is configured in `apps/app/lib/trpc.ts`: ```ts import { createTRPCOptionsProxy } from "@trpc/tanstack-react-query"; export const api = createTRPCOptionsProxy({ client: trpcClient, queryClient, }); ``` Use `api` in components to call procedures with full type safety: ```ts import { useSuspenseQuery } from "@tanstack/react-query"; import { api } from "~/lib/trpc"; function Profile() { const { data } = useSuspenseQuery(api.user.me.queryOptions()); return

( <div>{data.name}</div> )
; } ``` See the [tRPC + TanStack Query docs](https://trpc.io/docs/client/react/tanstack-react-query) for the full client API. --- --- url: /architecture.md --- # Architecture Overview React Starter Kit runs on three Cloudflare Workers connected by [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/). A single domain receives all traffic – the **web** worker routes each request to the right destination without any cross-worker public URLs. ## Request Flow ```mermaid sequenceDiagram participant Browser participant Web as Web Worker participant App as App Worker participant API as API Worker participant DB as Neon PostgreSQL Browser->>Web: GET / alt auth-hint cookie present Web->>App: service binding App-->>Web: SPA (dashboard) else no cookie Web-->>Browser: marketing page end Browser->>Web: GET /settings Web->>App: service binding App-->>Web: SPA assets Browser->>Web: POST /api/trpc/user.me Web->>API: service binding API->>DB: Hyperdrive DB-->>API: query result API-->>Web: JSON response Web-->>Browser: JSON response ``` ## Workers | Worker | Workspace | Purpose | Has `nodejs_compat` | | ------- | ---------- | ----------------------------------------------------- | :-----------------: | | **web** | `apps/web` | Edge router – receives all traffic, routes to app/api | No | | **app** | `apps/app` | SPA static assets (React, TanStack Router) | No | | **api** | `apps/api` | Hono server – tRPC, Better Auth, webhooks | Yes | ### Web Worker The web worker is the only worker with a public route (`example.com/*`). 
It decides where each request goes: * `/api/*` – forwarded to the API worker * `/login`, `/signup`, `/settings`, `/analytics`, `/reports`, `/_app/*` – forwarded to the app worker * `/` – routed by [auth hint cookie](#auth-hint-cookie) (app if signed in, marketing site if not) * Everything else – served from the web worker's own static assets (marketing pages) ```ts // apps/web/worker.ts (simplified) app.all("/api/*", (c) => c.env.API_SERVICE.fetch(c.req.raw)); app.all("/login*", (c) => c.env.APP_SERVICE.fetch(c.req.raw)); app.on(["GET", "HEAD"], "/", async (c) => { const hasAuthHint = getCookie(c, "__Host-auth") === "1" || getCookie(c, "auth") === "1"; const upstream = await (hasAuthHint ? c.env.APP_SERVICE : c.env.ASSETS).fetch( c.req.raw, ); // ... }); ``` ### App Worker A static asset worker with `not_found_handling: "single-page-application"` – any path that doesn't match a file returns `index.html`, enabling client-side routing via TanStack Router. The app worker has no custom worker script. It is accessed only through service bindings from the web worker. 
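The SPA fallback comes from the worker's static assets configuration. A minimal sketch of what that looks like in Wrangler — field names follow Wrangler's documented assets schema, but the actual `apps/app/wrangler.jsonc` may contain more settings:

```jsonc
// apps/app/wrangler.jsonc (illustrative sketch)
{
  "name": "example-app",
  "assets": {
    "directory": "./dist",
    // Any path that doesn't match a built file serves index.html,
    // so TanStack Router can resolve the route client-side.
    "not_found_handling": "single-page-application"
  }
}
```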
### API Worker Runs the Hono HTTP server with the following middleware chain: ```ts // apps/api/worker.ts (simplified) worker.onError(errorHandler); worker.notFound(notFoundHandler); worker.use(secureHeaders()); worker.use(requestId({ generator: requestIdGenerator })); worker.use(logger()); // Initialize shared context worker.use(async (c, next) => { const db = createDb(c.env.HYPERDRIVE_CACHED); c.set("db", db); c.set("dbDirect", createDb(c.env.HYPERDRIVE_DIRECT)); c.set("auth", createAuth(db, c.env)); await next(); }); worker.route("/", app); // Mounts tRPC + auth + health routes ``` Primary endpoints: | Path | Handler | | ------------- | ------------------------------------------------------ | | `/api/auth/*` | Better Auth (login, signup, sessions, OAuth callbacks) | | `/api/trpc/*` | tRPC procedures (batching enabled) | | `/api` | API info (name, version, endpoint list) | | `/health` | Health check | ## Service Bindings Service bindings let workers call each other directly over Cloudflare's internal network – no HTTP round-trip through the public internet. ```jsonc // apps/web/wrangler.jsonc "services": [ { "binding": "APP_SERVICE", "service": "example-app" }, { "binding": "API_SERVICE", "service": "example-api" } ] ``` ::: warning Service bindings are **non-inheritable** in Wrangler – they must be declared in every environment block. Forgetting this causes staging/preview workers to bind to production services. ::: Naming convention: `--` (e.g. `example-api-staging`). See [Edge > Service Bindings](./edge#service-bindings) for the full per-environment config. ## Database Connection The API worker connects to [Neon PostgreSQL](https://neon.tech) via [Cloudflare Hyperdrive](https://developers.cloudflare.com/hyperdrive/) – a connection pool that sits between Workers and your database. 
Two bindings are available: | Binding | Caching | Use case | | ------------------- | -------- | ------------------------------------- | | `HYPERDRIVE_CACHED` | Enabled | Default reads – most queries go here | | `HYPERDRIVE_DIRECT` | Disabled | Writes and reads that need fresh data | Both bindings are initialized in the API worker middleware and available on every request context as `db` and `dbDirect`. See [Database](/database/) for schema and query patterns. ## Auth Hint Cookie The `/` route serves two different experiences – a marketing page for visitors and the app dashboard for signed-in users. The web worker needs a fast signal to choose without owning auth logic. **How it works:** Better Auth sets a lightweight `__Host-auth=1` cookie on sign-in and clears it on sign-out. The web worker checks only for cookie *presence* – it never validates sessions. If the cookie exists, the request goes to the app worker; otherwise it serves the marketing page. This cookie is a **routing hint only**, not a security boundary. A false positive (stale cookie) results in one extra redirect to `/login` – the app worker validates the real session. ::: info In local development the cookie is named `auth` (HTTP), since browsers reject the `__Host-` prefix without HTTPS. ::: See [ADR-001](/adr/001-auth-hint-cookie) for the full decision record and [Sessions & Protected Routes](/auth/sessions) for the auth flow. 
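The `__Host-` prefix carries requirements from the cookie-prefixes specification: the cookie must be set with `Secure`, with `Path=/`, and without a `Domain` attribute, which is why local HTTP development falls back to the plain `auth` name. As a sketch of how such a `Set-Cookie` value can be assembled (the helper name and attribute choices are illustrative — the project sets this cookie through its auth hooks, not this function):

```typescript
// Build the Set-Cookie value for the auth hint cookie.
// secure=true → "__Host-auth" (HTTPS); secure=false → "auth" (local dev).
function authHintCookie(opts: { secure: boolean; clear?: boolean }): string {
  const name = opts.secure ? "__Host-auth" : "auth";
  const parts = [`${name}=${opts.clear ? "" : "1"}`, "Path=/", "SameSite=Lax"];
  if (opts.secure) parts.push("Secure"); // mandatory for the __Host- prefix
  if (opts.clear) parts.push("Max-Age=0"); // expire immediately on sign-out
  return parts.join("; ");
}
```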
## Environments | Environment | Workers | Domain | Database | Deploy command | | ----------- | --------------- | --------------------- | -------------- | ------------------------------- | | Development | `wrangler dev` | `localhost:5173` | Dev branch | `bun dev` | | Preview | `*-preview` | `preview.example.com` | Preview branch | `wrangler deploy --env preview` | | Staging | `*-staging` | `staging.example.com` | Staging branch | `wrangler deploy --env staging` | | Production | `*` (no suffix) | `example.com` | Main branch | `wrangler deploy` | Each environment has its own Hyperdrive bindings, service binding targets, and `APP_ORIGIN` / `ALLOWED_ORIGINS` variables. See [Edge > Service Bindings](./edge#service-bindings) for the full wrangler config. ## Build Order The workspaces must build in dependency order: ``` email → web → api → app ``` Email templates are compiled first because the API server imports them. The `bun build` command handles this automatically. ## Key Invariants * The **API worker is the sole authority** for authentication and data access – the web worker never validates sessions or queries the database. * Only the **web worker** has public routes. App and API workers are accessed exclusively through service bindings. * **Service bindings are non-inheritable** – every Wrangler environment must declare its own bindings. * The auth hint cookie is a **routing optimization**, not a security mechanism. * The API worker is the only worker with `nodejs_compat` enabled. --- --- url: /specs/auth-form.md --- # Auth Flow UX Specification Target UX inspired by Linear's authentication flow. ## Design Principles 1. **Progressive disclosure** – Show only what's needed at each step 2. **Method selection first** – Let users choose their auth method before showing inputs 3. **Minimal friction** – Reduce cognitive load with focused, single-purpose views 4. 
**Clear navigation** – Easy to go back and switch methods ## Flow Structure ### Login (`/login`) ```text Step 1: Method Selection ┌─────────────────────────────┐ │ [Logo] │ │ │ │ Log in to [App Name] │ │ │ │ ┌───────────────────────┐ │ │ │ Continue with Google │ │ │ └───────────────────────┘ │ │ ┌───────────────────────┐ │ │ │ Continue with email │ │ │ └───────────────────────┘ │ │ ┌───────────────────────┐ │ │ │ Log in with passkey │ │ │ └───────────────────────┘ │ │ │ │ Don't have an account? │ │ Sign up │ └─────────────────────────────┘ Step 2: Email Input (after clicking "Continue with email") ┌─────────────────────────────┐ │ [Logo] │ │ │ │ What's your email address? │ │ │ │ ┌───────────────────────┐ │ │ │ Enter your email... │ │ │ └───────────────────────┘ │ │ ┌───────────────────────┐ │ │ │ Continue with email │ │ │ └───────────────────────┘ │ │ │ │ ← Back to login │ └─────────────────────────────┘ Step 3: OTP Verification ┌─────────────────────────────┐ │ [Logo] │ │ │ │ Check your email │ │ We sent a code to │ │ user@example.com │ │ │ │ ┌─┬─┬─┬─┬─┬─┐ │ │ │ │ │ │ │ │ │ (6 digits) │ │ └─┴─┴─┴─┴─┴─┘ │ │ │ │ Resend code │ │ ← Back │ └─────────────────────────────┘ ``` ### Signup (`/signup`) ```text Step 1: Method Selection ┌─────────────────────────────┐ │ [Logo] │ │ │ │ Create your account │ │ │ │ ┌───────────────────────┐ │ │ │ Continue with Google │ │ │ └───────────────────────┘ │ │ ┌───────────────────────┐ │ │ │ Continue with email │ │ │ └───────────────────────┘ │ │ │ │ By signing up, you agree │ │ to our Terms and Privacy │ │ Policy. │ │ │ │ Already have an account? │ │ Log in │ └─────────────────────────────┘ Step 2: Email Input (after clicking "Continue with email") ┌─────────────────────────────┐ │ [Logo] │ │ │ │ What's your email address? │ │ │ │ ┌───────────────────────┐ │ │ │ Enter your email... 
│ │ │ └───────────────────────┘ │ │ ┌───────────────────────┐ │ │ │ Continue with email │ │ │ └───────────────────────┘ │ │ │ │ By signing up, you agree │ │ to our Terms and Privacy │ │ Policy. │ │ │ │ ← Back to sign up │ └─────────────────────────────┘ Step 3: OTP Verification ┌─────────────────────────────┐ │ [Logo] │ │ │ │ Check your email │ │ We sent a code to │ │ user@example.com │ │ │ │ ┌─┬─┬─┬─┬─┬─┐ │ │ │ │ │ │ │ │ │ (6 digits) │ │ └─┴─┴─┴─┴─┴─┘ │ │ │ │ Resend code │ │ ← Back to email │ └─────────────────────────────┘ ``` Note: No passkey option on signup (passkeys require existing account). ## Third-Party Auth Behavior * **Google**: On failure or user cancel, return to method selection with inline error. * **Passkey**: On failure (not supported, no credential, user cancel), return to method selection with inline error and a short hint to use email instead. * **Network/system errors**: Show a non-blocking toast and keep the user on the current step. ## Key Differences from Current Implementation | Aspect | Current | Target | | ------------ | --------------------------------- | ----------------------------------------- | | Initial view | All methods + email input visible | Method selection buttons only | | Email input | Always visible with divider | Separate step after clicking email button | | Layout | Card with optional right panel | Centered content, no card | | Headings | "Welcome" / "Welcome back" | "Create your account" / "Log in to \[App]" | | Navigation | None | "Back to login" link between steps | | Terms | Footer on both pages | Inline on signup only | ## Copy & Labels | Screen | Heading | CTA | Helper | | ------------- | -------------------------- | ---------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | | Login method | Log in to \[App Name] | Continue with Google / Continue with email / Log in with passkey | Don't have an account? 
Sign up | | Login email | What's your email address? | Continue with email | ← Back to login | | Login OTP | Check your email | Verify code | Resend code / ← Back to email | | Signup method | Create your account | Continue with Google / Continue with email | By signing up, you agree to our Terms and Privacy Policy. Already have an account? Log in | | Signup email | What's your email address? | Continue with email | By signing up, you agree to our Terms and Privacy Policy. ← Back to sign up | | Signup OTP | Check your email | Verify code | Resend code / ← Back to email | ## Component Architecture ### State Machine ```text ┌─────────────┐ click email ┌───────────┐ submit email ┌──────────────┐ │ METHOD │ ──────────────────→ │ EMAIL │ ────────────────→ │ OTP │ │ SELECTION │ │ INPUT │ │ VERIFICATION │ └─────────────┘ ←─────────────── └───────────┘ ←─────────────── └──────────────┘ back back/cancel ``` ### Suggested Step Type ```ts type AuthStep = "method" | "email" | "otp"; ``` ### Props ```ts interface AuthFormProps { mode: "login" | "signup"; onSuccess?: () => void; } ``` ## Visual Design * **Layout**: Centered, max-width ~400px, no card wrapper * **Logo**: Centered above heading * **Buttons**: Full-width, stacked vertically with consistent spacing * **Typography**: Clear hierarchy – heading (h1), body text, links * **Back link**: Left-aligned, subtle styling, positioned below form ## Transitions * Smooth fade/slide between steps (optional enhancement) * Maintain scroll position when navigating back ## Error Handling * Inline error messages below relevant input * Clear error state when user modifies input * Specific messages for common errors (invalid email, expired OTP, rate limit) * Third-party auth error surfaced on method selection with a one-line explanation ## Loading & Empty States * Method selection: disable buttons and show spinner during third-party auth initiation * Email input: disable CTA while sending code; show spinner inside button * OTP: disable 
inputs while verifying; show progress indicator * Resend: disabled until cooldown expires; show countdown ## OTP Constraints * 6 digits, numeric only * Expires after 10 minutes * Resend cooldown: 30 seconds * Rate limit: 5 attempts per hour per email ## Accessibility * Focus management: auto-focus first input when entering email/OTP steps * Keyboard navigation: Enter to submit, Escape to go back (optional) * Screen reader announcements for step changes ## Open Questions * \[ ] Should the logo link to home or be static? * \[ ] Add "Remember me" checkbox? * \[ ] Show password option as alternative to OTP? * \[ ] Magic link option in addition to OTP? * \[ ] Should login email step include a short notice about email delivery/usage? --- --- url: /auth.md --- # Authentication Overview Authentication is handled by [Better Auth](https://www.better-auth.com/) – a TypeScript-native auth framework that runs entirely in the API worker. The project ships with multiple sign-in methods, organization-based multi-tenancy, and Stripe billing integration out of the box. ## What's Included | Method | Description | | ---------------------------------- | ----------------------------------------- | | [Email & OTP](./email-otp) | Passwordless 6-digit code via email | | Email & Password | Traditional email/password with reset | | [Google OAuth](./social-providers) | Social login with redirect flow | | [Passkeys](./passkeys) | WebAuthn biometric / security key | | Anonymous | Guest sessions that can be upgraded later | All methods produce the same session format. Users can link multiple methods to one account. ## Plugins Better Auth's functionality is extended through plugins. 
The server and client must enable matching plugins: | Plugin | Server | Client | Purpose | | -------------- | ---------------- | ---------------------- | --------------------------- | | `emailOTP` | `emailOTP()` | `emailOTPClient()` | Passwordless OTP sign-in | | `organization` | `organization()` | `organizationClient()` | Multi-tenant orgs and roles | | `passkey` | `passkey()` | `passkeyClient()` | WebAuthn authentication | | `anonymous` | `anonymous()` | `anonymousClient()` | Guest sessions | | `stripe` | `stripe()` | `stripeClient()` | Subscription billing | The Stripe plugin is conditionally loaded – it only activates when `STRIPE_SECRET_KEY` and related env vars are set. Without them, the app works normally but billing endpoints return 404. ## Server Configuration The auth instance is created per-request in `apps/api/lib/auth.ts`: ```ts // apps/api/lib/auth.ts export function createAuth(db: DB, env: AuthEnv) { return betterAuth({ baseURL: `${env.APP_ORIGIN}/api/auth`, trustedOrigins: [env.APP_ORIGIN], secret: env.BETTER_AUTH_SECRET, database: drizzleAdapter(db, { provider: "pg", schema: { ... 
} }), emailAndPassword: { enabled: true, sendResetPassword: async ({ user, url }) => { await sendPasswordReset(env, { user, url }); }, }, emailVerification: { sendVerificationEmail: async ({ user, url }) => { await sendVerificationEmail(env, { user, url }); }, }, socialProviders: { google: { clientId: env.GOOGLE_CLIENT_ID, clientSecret: env.GOOGLE_CLIENT_SECRET, }, }, plugins: [ anonymous(), organization({ allowUserToCreateOrganization: true, organizationLimit: 5, creatorRole: "owner", }), passkey({ rpID, rpName: env.APP_NAME, origin: env.APP_ORIGIN }), emailOTP({ otpLength: 6, expiresIn: 300, allowedAttempts: 3 }), ...stripePlugin(db, env), ], }); } ``` The `account` model is renamed to `identity` to better describe its purpose (OAuth provider credentials): ```ts account: { modelName: "identity" }, ``` ### ID Generation All auth tables use prefixed CUID2 IDs generated at the application level: ```ts advanced: { database: { generateId: ({ model }) => generateAuthId(model), }, }, ``` This produces IDs like `usr_cm...`, `ses_cm...`, `org_cm...` – making it easy to identify what kind of record an ID refers to. ## Client Configuration The auth client lives in `apps/app/lib/auth.ts`: ```ts // apps/app/lib/auth.ts import { createAuthClient } from "better-auth/react"; export const auth = createAuthClient({ baseURL: baseURL + "/api/auth", plugins: [ anonymousClient(), emailOTPClient(), organizationClient(), passkeyClient(), stripeClient({ subscription: true }), ], }); ``` ::: warning Do not use `auth.useSession()` directly. Session state is managed exclusively through TanStack Query – see [Sessions & Protected Routes](./sessions). ::: ## Auth Routes Better Auth exposes HTTP endpoints at `/api/auth/*`. 
These are mounted in the Hono app alongside tRPC: ``` /api/auth/sign-in/* Sign-in endpoints (email, social, passkey) /api/auth/sign-up/* Sign-up endpoints /api/auth/sign-out Session termination /api/auth/get-session Current session data /api/auth/callback/* OAuth callbacks /api/auth/email-otp/* OTP send and verify /api/auth/passkey/* WebAuthn registration and authentication /api/auth/organization/* Organization CRUD and membership ``` See the [Better Auth API reference](https://www.better-auth.com/docs/api-reference) for the full endpoint list. ## Database Tables Authentication uses 9 database tables defined in `db/schema/`: | Table | File | Description | | -------------- | ----------------- | ---------------------------------------------------------- | | `user` | `user.ts` | User accounts with profile info | | `session` | `user.ts` | Active sessions with `activeOrganizationId` | | `identity` | `user.ts` | OAuth provider credentials (Better Auth's `account` model) | | `verification` | `user.ts` | Email verification and OTP tokens | | `organization` | `organization.ts` | Tenant organizations | | `member` | `organization.ts` | Organization memberships with roles | | `invitation` | `invitation.ts` | Pending org invitations | | `passkey` | `passkey.ts` | WebAuthn credential store | | `subscription` | `subscription.ts` | Stripe subscription state | ## Auth Hint Cookie The API worker sets a lightweight cookie (`__Host-auth` in HTTPS, `auth` in HTTP dev) on sign-in and clears it on sign-out. The web edge worker reads this cookie to route `/` – authenticated users get the app, anonymous users get the marketing page. This cookie is a routing hint only, not a security boundary. See [ADR-001](/adr/001-auth-hint-cookie) for the full rationale. 
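The edge-side check can be sketched as a few lines of cookie parsing (illustrative code, not the actual web worker source):

```typescript
// Sketch: how the web edge worker might read the auth hint cookie to route "/".
// `__Host-auth` is the HTTPS cookie name, plain `auth` is used in HTTP dev.
// This is a routing hint only - never treat it as a security boundary.
export function hasAuthHint(cookieHeader: string | null): boolean {
  if (!cookieHeader) return false;
  return cookieHeader.split(";").some((pair) => {
    const name = pair.trim().split("=")[0];
    return name === "__Host-auth" || name === "auth";
  });
}
```

When the hint is present, `/` would be forwarded to the app worker; otherwise the marketing page is served.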
## Environment Variables | Variable | Required | Description | | ---------------------- | -------- | ------------------------------------------------- | | `BETTER_AUTH_SECRET` | Yes | Secret for signing sessions and tokens | | `GOOGLE_CLIENT_ID` | Yes | Google OAuth client ID | | `GOOGLE_CLIENT_SECRET` | Yes | Google OAuth client secret | | `RESEND_API_KEY` | Yes | API key for sending OTP emails | | `RESEND_EMAIL_FROM` | Yes | Sender address for auth emails | | `APP_NAME` | Yes | Display name (used in emails and passkey prompts) | | `APP_ORIGIN` | Yes | Full origin URL (e.g., `https://example.com`) | --- --- url: /billing.md --- # Billing Stripe subscriptions are integrated via the [`@better-auth/stripe`](https://www.better-auth.com/docs/plugins/stripe) plugin. The auth system manages the full subscription lifecycle – customer creation, checkout, webhooks, and status tracking – so billing state lives alongside sessions and organizations in the same database. Billing is **optional** – without the `STRIPE_*` environment variables the app works normally; billing endpoints return 404 and the UI falls back to the free plan. 
## What's Included | Feature | Implementation | | --------------------------------------- | ------------------------------------------- | | Three-tier plans (Free / Starter / Pro) | Config in `apps/api/lib/plans.ts` | | Stripe hosted checkout | `auth.subscription.upgrade()` client method | | Customer portal (cancel, change card) | `auth.subscription.billingPortal()` | | Org-level and personal billing | `referenceId` derived from session | | Webhook-driven status sync | Plugin-managed endpoint | | 14-day free trial on Pro | `freeTrial: { days: 14 }` in plan config | | Annual discount pricing | `annualDiscountPriceId` on Pro plan | ## Architecture ```text ┌─────────────┐ POST /api/auth/subscription/upgrade ┌───────────────┐ │ Browser │ ──────────────────────────────────────────→ │ API Worker │ │ (app) │ │ (Hono) │ │ │ ←── 302 redirect │ │ │ │──→ Stripe Checkout (hosted) │ Better Auth │ │ │ │ + stripe() │ │ │ POST /api/auth/stripe/webhook │ plugin │ │ │ Stripe ────────→│ webhook ──→ │ │ │ │ update DB │ │ │ GET /api/trpc/billing.subscription │ │ │ │ ──────────────────────────────────────────→ │ tRPC router │ └─────────────┘ ←── subscription data (TanStack Query) └───────────────┘ ``` 1. User clicks **Upgrade** – auth client calls `auth.subscription.upgrade()` 2. Plugin creates a Stripe Checkout session – redirects browser to Stripe 3. User completes payment – Stripe sends webhook to `/api/auth/stripe/webhook` 4. Plugin verifies signature, updates `subscription` table 5. Client refetches billing state via tRPC + TanStack Query Mutations (upgrade, portal) go through the auth client because the plugin handles Stripe API calls, session validation, and org authorization internally. Reads go through tRPC to benefit from TanStack Query caching and org-aware cache keys. ## Billing Reference Billing is tied to `session.activeOrganizationId` when present; otherwise falls back to `user.id` for personal use. One active subscription per reference ID. 
| Context | `referenceId` | Who can manage | | ------------------- | ---------------------- | -------------- | | Organization active | `activeOrganizationId` | Owner or admin | | No organization | `user.id` | The user | The server derives `referenceId` from the session – no client-side parameter needed. The billing query key includes `activeOrgId`, so switching organizations refetches automatically. ## Plans Three tiers with enforced member limits: | Plan | Members | Trial | Price ID env var | | ------- | ------- | ------- | ------------------------- | | Free | 1 | – | – | | Starter | 5 | – | `STRIPE_STARTER_PRICE_ID` | | Pro | 50 | 14 days | `STRIPE_PRO_PRICE_ID` | See [Plans & Pricing](./plans) for configuration details. ## Environment Variables | Variable | Required | Description | | ---------------------------- | ----------- | ------------------------------------------------- | | `STRIPE_SECRET_KEY` | For billing | Stripe secret key (`sk_test_...` / `sk_live_...`) | | `STRIPE_WEBHOOK_SECRET` | For billing | Webhook signing secret (`whsec_...`) | | `STRIPE_STARTER_PRICE_ID` | For billing | Stripe price ID for Starter plan (`price_...`) | | `STRIPE_PRO_PRICE_ID` | For billing | Stripe price ID for Pro plan (`price_...`) | | `STRIPE_PRO_ANNUAL_PRICE_ID` | Optional | Annual discount price for Pro plan (`price_...`) | Set in `.env.local` for development, Cloudflare secrets for staging/production. See [Environment Variables](/getting-started/environment-variables). 
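For local development the billing variables go in `.env.local`; a sketch using the placeholder formats from the table above (substitute your own test-mode values from the Stripe dashboard):

```bash
# .env.local – Stripe test-mode credentials (placeholders)
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_STARTER_PRICE_ID=price_...
STRIPE_PRO_PRICE_ID=price_...
# Optional: annual discount price for Pro
STRIPE_PRO_ANNUAL_PRICE_ID=price_...
```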
## File Map | Layer | Files | | ------ | ------------------------------------------------------------------------------------------ | | Schema | `db/schema/subscription.ts`, `stripeCustomerId` on user + organization tables | | Server | `apps/api/lib/plans.ts`, `apps/api/lib/stripe.ts`, stripe plugin in `apps/api/lib/auth.ts` | | Router | `apps/api/routers/billing.ts` | | Client | `stripeClient` in `apps/app/lib/auth.ts`, `apps/app/lib/queries/billing.ts` | | UI | Billing card in `apps/app/routes/(app)/settings.tsx` | --- --- url: /billing/checkout.md --- # Checkout Flow Upgrades and subscription management use Stripe's hosted pages – Stripe Checkout for new subscriptions and the Customer Portal for changes. No Stripe.js client dependency is needed. ## Upgrade Flow The auth client handles the redirect to Stripe Checkout: ```ts // apps/app/routes/(app)/settings.tsx async function handleUpgrade(plan: "starter" | "pro") { await auth.subscription.upgrade({ plan, successUrl: returnUrl, cancelUrl: returnUrl, }); } ``` `auth.subscription.upgrade()` calls the Better Auth endpoint, which creates a Stripe Checkout session and redirects the browser. After payment, Stripe redirects back to `successUrl`. The subscription is activated asynchronously via [webhook](./webhooks). For the Pro plan, if `STRIPE_PRO_ANNUAL_PRICE_ID` is configured, Stripe Checkout shows both monthly and annual options automatically. ## Customer Portal Existing subscribers manage their subscription (cancel, change payment method, switch plans) through Stripe's hosted portal: ```ts // apps/app/routes/(app)/settings.tsx async function handleManageBilling() { await auth.subscription.billingPortal({ returnUrl }); } ``` Configure the portal appearance and allowed actions in the [Stripe Dashboard → Customer Portal settings](https://dashboard.stripe.com/settings/billing/portal). 
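One practical consequence of webhook-driven activation: after Stripe redirects back to `successUrl`, the locally cached billing state may still be stale. A hypothetical helper (not part of the codebase) that invalidates the billing query prefix so TanStack Query refetches:

```typescript
// Hypothetical glue code: invalidate every billing query after returning from
// Stripe Checkout, since the subscription activates asynchronously via webhook.
// The minimal interface below stands in for a TanStack Query QueryClient.
interface QueryInvalidator {
  invalidateQueries(filter: { queryKey: readonly unknown[] }): Promise<void>;
}

export async function refetchBillingAfterCheckout(
  client: QueryInvalidator,
  billingQueryKey: readonly unknown[],
): Promise<void> {
  // Prefix match: clears ["billing", <activeOrgId>] entries for every org.
  await client.invalidateQueries({ queryKey: billingQueryKey });
}
```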
## Authorization The plugin's `authorizeReference` callback controls who can manage billing: | Context | Who can upgrade/manage | | ------------------------ | ---------------------- | | Personal (no active org) | The user themselves | | Organization | Org owner or admin | ```ts // apps/api/lib/auth.ts authorizeReference: async ({ user, referenceId }) => { // Personal billing if (referenceId === user.id) return true; // Org billing: check membership role const [row] = await db .select({ role: Db.member.role }) .from(Db.member) .where( and( eq(Db.member.organizationId, referenceId), eq(Db.member.userId, user.id), ), ); return row?.role === "owner" || row?.role === "admin"; }, ``` Regular org members see the billing status but cannot modify the subscription. ## Billing UI The `BillingCard` component in `apps/app/routes/(app)/settings.tsx` handles all billing states: | State | UI | | --------------- | -------------------------------------------------------------- | | Loading | Muted loading text | | Free plan | "You are on the Free plan" + upgrade buttons | | Active/trialing | Plan name, status badge, renewal date, "Manage Billing" button | | Canceling | Amber warning with access end date, portal link to restore | ## Data Fetching Billing state is fetched via a tRPC query wrapped in TanStack Query: ```ts // apps/app/lib/queries/billing.ts export function billingQueryOptions(activeOrgId?: string | null) { return queryOptions({ queryKey: [...billingQueryKey, activeOrgId ?? null], queryFn: () => trpcClient.billing.subscription.query(), }); } ``` The query key includes `activeOrgId` so switching organizations automatically triggers a refetch. Use the `billingQueryKey` prefix for bulk invalidation after subscription changes. --- --- url: /deployment/ci-cd.md --- # CI/CD GitHub Actions automates building, testing, and deploying. The pipeline uses two workflows: `ci.yml` for the build and conditional deploys, and `deploy.yml` as a reusable deployment workflow. 
## Pipeline Overview ``` Pull request → build + lint + test → deploy to preview Push to main → build + test → deploy to staging Manual dispatch (production) → deploy to production ``` The `ci.yml` workflow runs a single **build** job, then conditionally triggers one of three **deploy** jobs depending on the event: | Trigger | Condition | Environment | | ------------------- | --------------------------------- | ----------- | | `pull_request` | Any PR to `main` | Preview | | `push` | Merge to `main` | Staging | | `workflow_dispatch` | Manual, `environment: production` | Production | ## Build Job The build job runs in every trigger scenario: ```yaml # .github/workflows/ci.yml – build job (simplified) steps: - uses: actions/checkout@v6 - uses: oven-sh/setup-bun@v2 - run: bun install --frozen-lockfile # Lint (PRs only – merged code was already checked) - run: bun prettier --check . - run: bun lint # Validate Terraform formatting - run: terraform fmt -check -recursive infra/ # Build and test - run: bun email:build # Email templates (needed for types) - run: bun tsc --build # Type checking - run: bun run test -- --run # Vitest - run: bun --filter @repo/web build - run: bun --filter @repo/api build - run: bun --filter @repo/app build # Upload artifacts for deploy jobs - uses: actions/upload-artifact@v6 ``` Concurrency is configured so only one run per PR or branch executes at a time, cancelling in-progress runs. 
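A typical concurrency block for this behavior (a sketch – the actual group key in `ci.yml` may differ):

```yaml
# .github/workflows/ci.yml – one run per PR/branch; newer pushes cancel older runs
concurrency:
  group: ci-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
```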
## Deploy Workflow The reusable `deploy.yml` workflow is called by each deploy job with environment-specific inputs: ```yaml # .github/workflows/ci.yml – deploy job example deploy-staging: needs: [build] if: github.event_name == 'push' && github.ref == 'refs/heads/main' uses: ./.github/workflows/deploy.yml with: name: Staging environment: staging url: https://staging.example.com secrets: inherit ``` The deploy workflow downloads build artifacts and deploys each worker via Wrangler: ```yaml # .github/workflows/deploy.yml (simplified) steps: - uses: actions/checkout@v6 - uses: actions/download-artifact@v6 - uses: oven-sh/setup-bun@v2 - run: bun install --frozen-lockfile # Deploy each worker - run: bun wrangler deploy --config apps/api/wrangler.jsonc --env ${{ inputs.environment }} - run: bun wrangler deploy --config apps/app/wrangler.jsonc --env ${{ inputs.environment }} - run: bun wrangler deploy --config apps/web/wrangler.jsonc --env ${{ inputs.environment }} ``` ::: warning The `wrangler deploy` steps in `deploy.yml` are currently commented out as TODOs. Uncomment them once your Cloudflare infrastructure is provisioned and `CLOUDFLARE_API_TOKEN` is set in GitHub secrets. ::: ## Preview Deployments Preview deploys use [pr-codename](https://github.com/kriasoft/pr-codename) to generate unique subdomains for each PR (e.g., `brave-fox.example.com`). The codename is stable across pushes to the same PR. ## Required Secrets Configure these in your GitHub repository settings under **Settings → Secrets and variables → Actions**: | Secret | Required | Description | | ---------------------- | -------- | ----------------------------------------- | | `CLOUDFLARE_API_TOKEN` | Yes | API token with Workers deploy permissions | Worker-level secrets (`BETTER_AUTH_SECRET`, Stripe keys, etc.) are set via `wrangler secret put` – not GitHub secrets. See [Cloudflare Workers: Secrets](/deployment/cloudflare#secrets). 
## Additional Workflow A separate `conventional-commits.yml` workflow validates PR titles against the [Conventional Commits](https://www.conventionalcommits.org/) spec using `amannn/action-semantic-pull-request`. --- --- url: /deployment/cloudflare.md --- # Cloudflare Workers Each app has its own `wrangler.jsonc` with per-environment configuration for variables, service bindings, and Hyperdrive. ## Wrangler Configuration The **web** worker is the edge router. It receives all traffic via route patterns and forwards requests to **app** and **api** workers through service bindings: ```jsonc // apps/web/wrangler.jsonc (simplified) { "name": "example-web", "routes": [{ "pattern": "example.com/*", "zone_name": "example.com" }], "services": [ { "binding": "APP_SERVICE", "service": "example-app" }, { "binding": "API_SERVICE", "service": "example-api" }, ], "assets": { "directory": "./dist", "run_worker_first": ["/"], }, } ``` The **api** worker has `nodejs_compat` enabled and connects to Neon through two Hyperdrive bindings (cached and direct): ```jsonc // apps/api/wrangler.jsonc (simplified) { "name": "example-api", "compatibility_flags": ["nodejs_compat"], "hyperdrive": [ { "binding": "HYPERDRIVE_CACHED", "id": "your-hyperdrive-cached-id" }, { "binding": "HYPERDRIVE_DIRECT", "id": "your-hyperdrive-direct-id" }, ], } ``` The **app** worker serves the SPA with `not_found_handling: "single-page-application"` so all routes resolve to `index.html`. ::: info Service bindings are non-inheritable in Wrangler – each environment (`staging`, `preview`) must declare its own `services` array with the correct worker names (e.g., `example-app-staging`). ::: See [Architecture: Edge](/architecture/edge) for details on the service binding model. ## Environment Variables Each worker declares `vars` per environment in `wrangler.jsonc`. 
The API worker has the most: | Variable | Worker | Description | | ------------------- | -------- | ------------------------------------------------- | | `ENVIRONMENT` | all | `development`, `preview`, `staging`, `production` | | `APP_NAME` | api | Display name used in emails | | `APP_ORIGIN` | api | Full origin URL (e.g., `https://example.com`) | | `ALLOWED_ORIGINS` | api, app | Comma-separated list for CORS | | `RESEND_EMAIL_FROM` | api | Sender address for transactional emails | See [Environment Variables](/getting-started/environment-variables) for the complete reference. ## Secrets Secrets are set per worker via the Wrangler CLI. For the API worker: ```bash # Generate a secret for Better Auth openssl rand -hex 32 # Set secrets (repeat for each environment: --env staging, --env preview) wrangler secret put BETTER_AUTH_SECRET wrangler secret put GOOGLE_CLIENT_ID wrangler secret put GOOGLE_CLIENT_SECRET wrangler secret put RESEND_API_KEY wrangler secret put STRIPE_SECRET_KEY wrangler secret put STRIPE_WEBHOOK_SECRET ``` ::: warning Run `wrangler secret put` from the workspace directory (e.g., `apps/api/`) or pass `--config apps/api/wrangler.jsonc` so secrets bind to the correct worker. ::: ## Build and Deploy Build order matters – email templates must compile before the API worker bundles them: ```bash # Build all workspaces in dependency order bun build # email → web → api → app # Deploy each worker bun api:deploy bun app:deploy bun web:deploy # Or deploy to a specific environment bun wrangler deploy --config apps/api/wrangler.jsonc --env staging bun wrangler deploy --config apps/app/wrangler.jsonc --env staging bun wrangler deploy --config apps/web/wrangler.jsonc --env staging ``` ## Custom Domain 1. Add your domain to Cloudflare and update nameservers at your registrar 2. Update `routes` in `apps/web/wrangler.jsonc` with your domain 3. Set SSL/TLS encryption mode to **Full (strict)** in the Cloudflare dashboard 4. 
Enable **Always Use HTTPS** Routes are declared in `wrangler.jsonc` and applied automatically on deploy. Terraform manages DNS records if `cloudflare_zone_id` and `hostname` are set in your environment variables. ## Infrastructure with Terraform Terraform creates worker metadata, Hyperdrive configs, and DNS records. Worker code is deployed separately via Wrangler. ```bash # Plan changes for staging bun infra:staging:edge:plan # Apply changes bun infra:staging:edge:apply ``` Each environment has its own Terraform state in `infra/envs/{dev,preview,staging,prod}/edge/`. --- --- url: /api/context.md --- # Context & Middleware Every tRPC procedure receives a context object (`ctx`) with request-scoped resources. The middleware chain builds this context before any procedure runs. ## TRPCContext Defined in `apps/api/lib/context.ts`, the context provides: | Field | Type | Description | | ------------- | ---------------------------------- | ------------------------------------------------------- | | `req` | `Request` | The incoming HTTP request | | `info` | `CreateHTTPContextOptions["info"]` | tRPC request metadata (headers, connection info) | | `db` | `PostgresJsDatabase` | Drizzle ORM instance via Hyperdrive (cached connection) | | `dbDirect` | `PostgresJsDatabase` | Drizzle ORM instance via Hyperdrive (direct, no cache) | | `session` | `AuthSession \| null` | Authenticated session from Better Auth | | `user` | `AuthUser \| null` | Authenticated user data | | `cache` | `Map` | Request-scoped cache (for DataLoaders, computed values) | | `res?` | `Response` | Optional HTTP response from Hono context | | `resHeaders?` | `Headers` | Response headers (for setting cookies, etc.) | | `env` | `Env` | Environment variables and secrets | ### Two Database Connections The context provides two database connections with different caching behaviors: * **`ctx.db`** – routed through Cloudflare Hyperdrive's connection pool with query caching. Use for read-heavy queries. 
* **`ctx.dbDirect`** – bypasses the cache. Use for writes, transactions, and reads that must see the latest data. ```ts // Read with caching const users = await ctx.db.select().from(user); // Write via direct connection await ctx.dbDirect.insert(post).values({ title: "Hello" }); ``` ## How Context is Constructed Context is created per-request in the tRPC fetch adapter (`apps/api/lib/app.ts`): ```ts app.use("/api/trpc/*", (c) => { return fetchRequestHandler({ req: c.req.raw, router: appRouter, endpoint: "/api/trpc", async createContext({ req, resHeaders, info }) { const db = c.get("db"); const dbDirect = c.get("dbDirect"); const auth = c.get("auth"); if (!db) throw new Error("Database not available in context"); if (!dbDirect) throw new Error("Direct database not available in context"); if (!auth) throw new Error("Authentication service not available in context"); const sessionData = await auth.api.getSession({ headers: req.headers, }); return { req, res: c.res, resHeaders, info, env: c.env, db, dbDirect, session: sessionData?.session ?? null, user: sessionData?.user ?? null, cache: new Map(), }; }, batching: { enabled: true }, }); }); ``` The `db`, `dbDirect`, and `auth` values come from the Hono middleware layer (set in `worker.ts`). The tRPC context adds session resolution and a fresh `cache` Map. ## Middleware Chain The Worker entrypoint (`worker.ts`) applies middleware in order: ```txt Request │ ├── errorHandler ← catches all unhandled errors ├── notFoundHandler ← returns 404 JSON for unmatched routes │ ├── secureHeaders() ← security headers (CSP, X-Frame-Options, etc.) 
├── requestId() ← generates X-Request-Id (uses CF-Ray if available) ├── logger() ← logs request method, path, status, duration │ ├── context init ← creates db, dbDirect, auth; sets on Hono context │ └── app.ts routes ├── /api/auth/* ← Better Auth (session resolved internally) └── /api/trpc/* ← tRPC (session resolved in createContext) ``` ::: info The `protectedProcedure` middleware (defined in `lib/trpc.ts`) adds another layer within tRPC. It checks that `session` and `user` are non-null and narrows their types – procedures using `protectedProcedure` never need null checks. See [Procedures](./procedures#protectedprocedure). ::: ::: tip In production (`worker.ts`), the request ID generator uses the Cloudflare Ray ID when available. In local development (`dev.ts`), it falls back to the default UUID generator since `cf-ray` headers aren't present. ::: ## Request ID The request ID middleware uses the Cloudflare Ray ID when available, falling back to `crypto.randomUUID()` in local development: ```ts export function requestIdGenerator(c: Context): string { return c.req.header("cf-ray") ?? crypto.randomUUID(); } ``` The ID is available via the `X-Request-Id` response header for tracing requests across logs. ## DataLoaders DataLoaders prevent N+1 queries by batching multiple `.load(id)` calls into a single SQL `WHERE IN (...)` query. They're defined in `apps/api/lib/loaders.ts` and cached per-request via `ctx.cache`. 
```ts import { userById } from "../lib/loaders.js"; members: protectedProcedure .input(z.object({ organizationId: z.string() })) .query(async ({ ctx, input }) => { const members = await ctx.db.query.member.findMany({ where: (m, { eq }) => eq(m.organizationId, input.organizationId), }); // Batches all user lookups into one query const users = await Promise.all( members.map((m) => userById(ctx).load(m.userId)), ); return members.map((m, i) => ({ ...m, user: users[i] })); }), ``` Loaders are created with a `defineLoader` helper that handles per-request caching via `ctx.cache`: ```ts function defineLoader( key: symbol, batchFn: (ctx: TRPCContext, keys: readonly K[]) => Promise<(V | null)[]>, ): (ctx: TRPCContext) => DataLoader; ``` Each call returns a factory `(ctx) => DataLoader`. The first invocation per request creates the instance; subsequent calls return the cached one. Because `ctx.cache` is a `Map` created per-request, loaders are automatically scoped to the request lifecycle – no stale data across requests. ### Adding a DataLoader Add a `defineLoader` call in `apps/api/lib/loaders.ts`: ```ts export const postById = defineLoader( Symbol("postById"), async (ctx, ids: readonly string[]) => { const posts = await ctx.db .select() .from(post) .where(inArray(post.id, [...ids])); return mapByKey(posts, "id", ids); }, ); ``` Then call `.load(key)` or `.loadMany(keys)` in your procedures. --- --- url: /database.md --- # Database The `db/` workspace manages the data layer with [Drizzle ORM](https://orm.drizzle.team/) and [Neon PostgreSQL](https://neon.tech/). In production, [Cloudflare Hyperdrive](https://developers.cloudflare.com/hyperdrive/) pools and caches connections at the edge. 
## Workspace Structure ```bash db/ ├── schema/ # Table definitions and relations ├── migrations/ # Auto-generated SQL migrations ├── seeds/ # Seed data scripts ├── scripts/ # Utilities (seed runner, export) ├── drizzle.config.ts # Drizzle Kit configuration └── index.ts # Re-exports schema + DatabaseSchema type ``` Schema files are organized by domain – one file per entity group (e.g., `user.ts` contains the user, session, identity, and verification tables). All tables are re-exported from `schema/index.ts`. ## Connection Architecture The API worker connects to Neon through Cloudflare Hyperdrive, which provides connection pooling and optional query caching at the edge. Two Hyperdrive bindings are available: | Binding | Cache | Use for | | ------------------- | ---------------- | ------------------------------------------------------- | | `HYPERDRIVE_CACHED` | 60 s query cache | Read-heavy queries where slight staleness is acceptable | | `HYPERDRIVE_DIRECT` | None | Writes, real-time reads, anything requiring fresh data | Both are exposed in [tRPC context](/api/context) as `ctx.db` (cached) and `ctx.dbDirect` (direct): ```ts // apps/api/lib/db.ts (simplified) export function createDb(hyperdrive: Hyperdrive) { const client = postgres(hyperdrive.connectionString, { max: 1, // single connection per Worker isolate prepare: false, // required for Hyperdrive compatibility }); return drizzle(client, { schema, casing: "snake_case" }); } ``` ::: info In development, Wrangler's `getPlatformProxy()` emulates the Hyperdrive bindings locally, resolving them to your `DATABASE_URL`. Your code uses the same `HYPERDRIVE_CACHED` / `HYPERDRIVE_DIRECT` bindings in both environments – no conditional connection logic needed. ::: ## Commands Run from the repo root. Append `:staging` or `:prod` to target other environments. 
| Command | Description | | ------------------ | --------------------------------------------------- | | `bun db:generate` | Generate migration SQL from schema changes | | `bun db:migrate` | Apply pending migrations | | `bun db:push` | Push schema directly (skips migration files) | | `bun db:studio` | Open Drizzle Studio browser UI | | `bun db:seed` | Run seed scripts | | `bun db:check` | Check for drift between schema and migrations | | `bun db:export` | Export database via pg\_dump to `db/backups/` | | `bun db:typecheck` | Run TypeScript type-checking on the `db/` workspace | ## Environment Targeting Database scripts select the environment through the `ENVIRONMENT` variable (falls back to `NODE_ENV`). Each environment loads env files in priority order: ``` .env.{env}.local → .env.local → .env ``` For example, `bun db:push:staging` loads `.env.staging.local` first. The `DATABASE_URL` variable must be a valid `postgres://` or `postgresql://` connection string. See [Environment Variables](/getting-started/environment-variables) for full details. ## Importing Schemas The `@repo/db` package exports two entry points: ```ts import * as schema from "@repo/db"; // full schema + DatabaseSchema type import { user, session } from "@repo/db/schema"; // individual tables ``` --- --- url: /deployment.md --- # Deployment React Starter Kit deploys as three Cloudflare Workers backed by a Neon PostgreSQL database. Infrastructure is managed with Terraform. 
## What Gets Deployed | Component | Target | Description | | ------------------ | ------------------ | -------------------------------------------------------------------------- | | **Web Worker** | Cloudflare Workers | Edge router – receives all traffic, routes to app/api via service bindings | | **App Worker** | Cloudflare Workers | Serves the React SPA and static assets | | **API Worker** | Cloudflare Workers | Hono + tRPC server, authentication, database access | | **Database** | Neon PostgreSQL | Managed Postgres with Hyperdrive connection pooling | | **Infrastructure** | Terraform | Worker metadata, Hyperdrive configs, DNS records | See [Architecture Overview](/architecture/) for how these components connect. ## Prerequisites * **Cloudflare account** with Workers enabled * **Neon account** for PostgreSQL hosting ([sign up](https://get.neon.com/HD157BR)) * **Terraform** installed (`brew install terraform` or [download](https://developer.hashicorp.com/terraform/install)) * **Domain** added to Cloudflare DNS (optional for initial setup) ## Environments | Environment | Trigger | URL pattern | Purpose | | ----------- | --------------- | ------------------------ | ---------------------------------------------------------------------------- | | Development | `bun dev` | `localhost:5173` | Local development | | Preview | Pull request | `{codename}.example.com` | Isolated PR testing ([pr-codename](https://github.com/kriasoft/pr-codename)) | | Staging | Push to `main` | `staging.example.com` | Pre-production validation | | Production | Manual dispatch | `example.com` | Live environment | Each environment has its own Wrangler config, Hyperdrive bindings, and Terraform state. See [CI/CD](/deployment/ci-cd) for how deployments are triggered. ## Deployment Checklist 1. **Provision infrastructure** – run Terraform to create workers, Hyperdrive, and DNS records 2. **Set secrets** – configure `BETTER_AUTH_SECRET`, Stripe keys, and other secrets via Wrangler. 
   See [Cloudflare Workers](/deployment/cloudflare) for the full list
3. **Run migrations** – apply schema to your production database. See [Production Database](/deployment/production-database)
4. **Build and deploy** – push code to workers. See [CI/CD](/deployment/ci-cd) or deploy manually:

```bash
bun build       # email → web → api → app
bun api:deploy  # Deploy API worker
bun app:deploy  # Deploy App worker
bun web:deploy  # Deploy Web worker
```

## Section Pages

* [Cloudflare Workers](/deployment/cloudflare) – Wrangler config, secrets, build and deploy
* [Production Database](/deployment/production-database) – Neon setup, Hyperdrive, running migrations
* [CI/CD](/deployment/ci-cd) – GitHub Actions pipelines, preview deployments
* [Monitoring](/deployment/monitoring) – Logs, analytics, rollbacks, troubleshooting

---

---
url: /architecture/edge.md
---

# Edge

Implementation details for the Cloudflare Workers deployment. Read the [Architecture Overview](./) first for the mental model.

## Workers Configuration

Each worker has its own `wrangler.jsonc` in its workspace directory:

| Worker | Config                    | `nodejs_compat` | Static assets   | Service bindings         |
| ------ | ------------------------- | :-------------: | :-------------: | :----------------------: |
| web    | `apps/web/wrangler.jsonc` | No              | Marketing pages | APP_SERVICE, API_SERVICE |
| app    | `apps/app/wrangler.jsonc` | No              | SPA bundle      | –                        |
| api    | `apps/api/wrangler.jsonc` | Yes             | –               | –                        |

The API worker enables `nodejs_compat` for packages that depend on Node.js built-ins (e.g. `postgres`, `crypto`). The web and app workers don't need it – they only serve static assets and proxy requests.

## Service Bindings

Service bindings are **non-inheritable** in Wrangler – the top-level declaration only applies to production. Each environment must redeclare its bindings with the correct worker names.
```jsonc
// apps/web/wrangler.jsonc
{
  // Production (top-level)
  "services": [
    { "binding": "APP_SERVICE", "service": "example-app" },
    { "binding": "API_SERVICE", "service": "example-api" },
  ],
  "env": {
    "staging": {
      "services": [
        { "binding": "APP_SERVICE", "service": "example-app-staging" },
        { "binding": "API_SERVICE", "service": "example-api-staging" },
      ],
    },
    "preview": {
      "services": [
        { "binding": "APP_SERVICE", "service": "example-app-preview" },
        { "binding": "API_SERVICE", "service": "example-api-preview" },
      ],
    },
  },
}
```

Worker naming convention: `{project}-{worker}-{environment}`. Production omits the environment suffix.

| Environment | Web                   | App                   | API                   |
| ----------- | --------------------- | --------------------- | --------------------- |
| Production  | `example-web`         | `example-app`         | `example-api`         |
| Staging     | `example-web-staging` | `example-app-staging` | `example-api-staging` |
| Preview     | `example-web-preview` | `example-app-preview` | `example-api-preview` |

## Hyperdrive

[Cloudflare Hyperdrive](https://developers.cloudflare.com/hyperdrive/) provides connection pooling between Workers and Neon PostgreSQL. The API worker declares two bindings per environment:

| Binding             | Caching  | Purpose                                |
| ------------------- | -------- | -------------------------------------- |
| `HYPERDRIVE_CACHED` | Enabled  | Read-heavy queries                     |
| `HYPERDRIVE_DIRECT` | Disabled | Writes and consistency-sensitive reads |

```jsonc
// apps/api/wrangler.jsonc
"hyperdrive": [
  { "binding": "HYPERDRIVE_CACHED", "id": "your-hyperdrive-cached-id-here" },
  { "binding": "HYPERDRIVE_DIRECT", "id": "your-hyperdrive-direct-id-here" }
]
```

Each environment has its own Hyperdrive IDs pointing to the corresponding Neon database branch.
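The naming convention above can be captured in a small helper. This is a sketch for illustration – `workerName` and its types are not part of the repo:

```typescript
// Sketch: derive a worker name from project, worker, and environment.
// Production omits the environment suffix; other environments append it.
type DeployEnv = "production" | "staging" | "preview";

function workerName(project: string, worker: string, env: DeployEnv): string {
  const suffix = env === "production" ? "" : `-${env}`;
  return `${project}-${worker}${suffix}`;
}

// workerName("example", "api", "staging") → "example-api-staging"
// workerName("example", "web", "production") → "example-web"
```

The same suffix rule reappears in the Terraform `worker_suffix` local further down, so the table, the service bindings, and the provisioning code all stay consistent.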
The connection code in `apps/api/lib/db.ts`:

```ts
import { schema } from "@repo/db";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

export function createDb(db: Hyperdrive) {
  const client = postgres(db.connectionString, {
    max: 1, // Workers are single-request; one connection is enough
    prepare: false, // Avoids prepared statement caching issues in Workers
    connect_timeout: 10,
    idle_timeout: 20,
    max_lifetime: 60 * 30,
    transform: { undefined: null },
    onnotice: () => {}, // Suppress PostgreSQL NOTICE messages
  });
  return drizzle(client, { schema, casing: "snake_case" });
}
```

Key settings: `max: 1` because each Worker invocation handles a single request. `prepare: false` prevents issues with Hyperdrive's connection reuse, where prepared statements from a previous request may not exist on the pooled connection.

## Static Assets

### Web Worker

The web worker serves marketing pages from `apps/web/dist/`. The `run_worker_first` setting forces specific paths through the worker script before falling back to static assets:

```jsonc
// apps/web/wrangler.jsonc
"assets": {
  "directory": "./dist",
  "binding": "ASSETS",
  "run_worker_first": ["/"]
}
```

This is required for the `/` route, where the worker checks the auth hint cookie to decide between the marketing page and the app dashboard. All other paths either match explicit worker routes (`/api/*`, `/login*`) or fall through to static assets.

### App Worker

The app worker is a pure static asset worker with SPA fallback – no custom worker script:

```jsonc
// apps/app/wrangler.jsonc
"assets": {
  "directory": "./dist",
  "not_found_handling": "single-page-application"
}
```

`not_found_handling: "single-page-application"` returns `index.html` for any path that doesn't match a static file, enabling TanStack Router's client-side routing.
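The cached/direct split described under Hyperdrive maps to a simple per-query selection rule. A hedged sketch – `pickHyperdrive`, `QueryIntent`, and the minimal binding type are illustrative, not the repo's actual API; only the binding names come from the wrangler config above:

```typescript
// Sketch: choose a Hyperdrive binding per query intent.
type HyperdriveLike = { connectionString: string };

interface ApiBindings {
  HYPERDRIVE_CACHED: HyperdriveLike;
  HYPERDRIVE_DIRECT: HyperdriveLike;
}

type QueryIntent = "read" | "write";

function pickHyperdrive(env: ApiBindings, intent: QueryIntent): HyperdriveLike {
  // Reads tolerate Hyperdrive's query caching; writes and
  // read-after-write paths go through the direct binding.
  return intent === "read" ? env.HYPERDRIVE_CACHED : env.HYPERDRIVE_DIRECT;
}
```

A handler would then pass the selected binding to `createDb` from the snippet above.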
## Auth Hint Cookie Routing

The web worker's `/` route uses the auth hint cookie to choose between two upstream workers:

```ts
// apps/web/worker.ts
app.on(["GET", "HEAD"], "/", async (c) => {
  const hasAuthHint =
    getCookie(c, "__Host-auth") === "1" || getCookie(c, "auth") === "1";
  const upstream = await (hasAuthHint ? c.env.APP_SERVICE : c.env.ASSETS).fetch(
    c.req.raw,
  );
  // Prevent caching – response varies by auth state
  const headers = new Headers(upstream.headers);
  headers.set("Cache-Control", "private, no-store");
  headers.set("Vary", "Cookie");
  return new Response(upstream.body, {
    status: upstream.status,
    statusText: upstream.statusText,
    headers,
  });
});
```

The `Cache-Control: private, no-store` and `Vary: Cookie` headers prevent CDN and browser caches from serving the wrong version (marketing page to a logged-in user, or vice versa). See [ADR-001](/adr/001-auth-hint-cookie) for the full decision record.

## Infrastructure

Worker metadata and Hyperdrive bindings are provisioned with Terraform. Wrangler handles code deployment and route configuration.

```
infra/
├── stacks/
│   ├── edge/            # Workers, Hyperdrive, DNS
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── hybrid/          # Database and other resources
├── modules/
│   ├── cloudflare/      # Worker, Hyperdrive, DNS modules
│   └── gcp/
├── envs/                # Per-environment Terraform root modules
└── templates/
```

The edge stack (`infra/stacks/edge/main.tf`) creates all three workers, a Hyperdrive binding pair, and DNS records:

```hcl
module "worker_api" {
  source = "../../modules/cloudflare/worker"
  name   = "${var.project_slug}-api${local.worker_suffix}"
  # ...
}

module "hyperdrive" {
  source       = "../../modules/cloudflare/hyperdrive"
  name         = "${var.project_slug}-${var.environment}"
  database_url = var.neon_database_url
}
```

The `worker_suffix` local resolves to `""` for production and `"-${var.environment}"` for other environments, matching the naming convention used in service bindings.
## Local Development

`bun dev` starts all three workers concurrently with Wrangler's dev mode:

| Worker | Port   | Notes                                   |
| ------ | ------ | --------------------------------------- |
| web    | `5173` | Entry point – open this in your browser |
| app    | `5174` | Accessed via service binding from web   |
| api    | `5175` | Accessed via service binding from web   |

In development, Wrangler simulates service bindings locally – requests between workers happen in-process rather than over the network. The `dev` environment in each `wrangler.jsonc` provides development-specific variables (`APP_ORIGIN: http://localhost:5173`, etc.).

::: tip
Email templates must be built before starting the API dev server. The `bun dev` script handles this automatically by running `bun email:build` first.
:::

---

---
url: /email.md
---

# Email

Transactional emails are built with [React Email](https://react.email/) and delivered through [Resend](https://resend.com/). The `apps/email/` workspace owns all templates and rendering – the API imports compiled templates and sends them via the Resend SDK.
## Workspace Structure

```bash
apps/email/
├── components/
│   └── BaseTemplate.tsx        # Shared header, footer, and styling
├── templates/
│   ├── otp-email.tsx           # OTP codes (sign-in, verification, password reset)
│   ├── email-verification.tsx  # Link-based email verification
│   └── password-reset.tsx      # Link-based password reset
├── emails/                     # Preview files for dev server (sample data)
├── utils/
│   └── render.ts               # renderEmailToHtml() / renderEmailToText()
├── index.ts                    # Public exports
└── package.json
```

## Templates

Three templates ship out of the box, all wrapped in `BaseTemplate` for consistent branding:

| Template            | Used By                                  | Trigger                    |
| ------------------- | ---------------------------------------- | -------------------------- |
| `OTPEmail`          | [Email & OTP](/auth/email-otp) auth flow | `emailOTP` plugin callback |
| `EmailVerification` | Link-based email verification            | `sendVerificationEmail()`  |
| `PasswordReset`     | Password reset flow                      | `sendPasswordReset()`      |

`OTPEmail` handles three types via a single `type` prop – `"sign-in"`, `"email-verification"`, and `"forget-password"` – each with different copy. Password resets include an additional security warning. The separate `PasswordReset` template uses a red button to emphasize the security-sensitive action.

## Development

Preview templates locally with hot reload:

```bash
bun email:dev
```

This starts the React Email preview server at `http://localhost:3001`. Files in `emails/` provide sample data for each template – edit them to test different states.

::: tip
The email workspace must be built before the API can import templates. The root `bun dev` handles this automatically, but if you run the API standalone, run `bun email:build` first.
:::

## Sending Emails

The API sends emails through helper functions in `apps/api/lib/email.ts`.
Each helper renders a template to both HTML and plain text, then sends via Resend:

```ts
// apps/api/lib/email.ts
import { OTPEmail, renderEmailToHtml, renderEmailToText } from "@repo/email";

const component = OTPEmail({
  otp,
  type,
  appName: env.APP_NAME,
  appUrl: env.APP_ORIGIN,
});
const html = await renderEmailToHtml(component);
const text = await renderEmailToText(component);

await sendEmail(env, {
  to: email,
  subject: `Your ${typeLabel} code`,
  html,
  text,
});
```

Available sender functions:

| Function                  | Purpose                                                                        |
| ------------------------- | ------------------------------------------------------------------------------ |
| `sendOTP()`               | OTP codes for all auth flows                                                   |
| `sendVerificationEmail()` | Link-based email verification                                                  |
| `sendPasswordReset()`     | Password reset links                                                           |
| `sendEmail()`             | Low-level sender (validates recipients, requires plain text fallback for HTML) |

::: warning
`sendEmail()` throws if you provide HTML without a plain text fallback. Always render both versions using `renderEmailToHtml()` and `renderEmailToText()`.
:::

### Development Shortcut

In development, `sendOTP()` also prints the code to the terminal for convenience:

```txt
OTP code for user@example.com: 482901
```

A valid `RESEND_API_KEY` is still required – the console output supplements the email, it doesn't replace it.

## Adding a Template

1. Create the template in `apps/email/templates/`:

   ```tsx
   // apps/email/templates/invitation.tsx
   import { Button, Heading, Text } from "@react-email/components";
   import { BaseTemplate } from "../components/BaseTemplate";

   interface InvitationProps {
     inviterName: string;
     orgName: string;
     acceptUrl: string;
     appName?: string;
     appUrl?: string;
   }

   export function Invitation({
     inviterName,
     orgName,
     acceptUrl,
     appName,
     appUrl,
   }: InvitationProps) {
     return (
       <BaseTemplate appName={appName} appUrl={appUrl}>
         <Heading>You're invited</Heading>
         <Text>
           {inviterName} invited you to join {orgName}.
         </Text>
         <Button href={acceptUrl}>Accept invitation</Button>
       </BaseTemplate>
     );
   }
   ```

2. Export it from `apps/email/index.ts`:

   ```ts
   export { Invitation } from "./templates/invitation.js";
   ```

3.
Add a preview file in `apps/email/emails/` with sample props for the dev server.

4. Create a sender function in `apps/api/lib/email.ts` following the same render-then-send pattern.

## Environment Variables

| Variable            | Required  | Description                                                        |
| ------------------- | --------- | ------------------------------------------------------------------ |
| `RESEND_API_KEY`    | For email | Resend API key (`re_...`)                                          |
| `RESEND_EMAIL_FROM` | For email | Sender address (e.g., `noreply@example.com`)                       |
| `APP_NAME`          | No        | Used in email subject lines and branding (defaults to `"Example"`) |
| `APP_ORIGIN`        | Yes       | Used for links in email footer                                     |

Set in `.env.local` for development, Cloudflare secrets for staging/production. See [Environment Variables](/getting-started/environment-variables).

## File Map

| Layer            | Files                                         |
| ---------------- | --------------------------------------------- |
| Templates        | `apps/email/templates/*.tsx`                  |
| Shared layout    | `apps/email/components/BaseTemplate.tsx`      |
| Rendering        | `apps/email/utils/render.ts`                  |
| Sending          | `apps/api/lib/email.ts`                       |
| Auth integration | `emailOTP` callback in `apps/api/lib/auth.ts` |

---

---
url: /auth/email-otp.md
---

# Email & OTP

The primary sign-in method is passwordless email OTP. Users enter their email, receive a 6-digit code, and enter it to authenticate. The same flow handles both login and signup – if the email doesn't exist, Better Auth creates the account automatically.

## Server Configuration

The `emailOTP` plugin is configured in `apps/api/lib/auth.ts`:

```ts
emailOTP({
  async sendVerificationOTP({ email, otp, type }) {
    await sendOTP(env, { email, otp, type });
  },
  otpLength: 6,
  expiresIn: 300, // 5 minutes
  allowedAttempts: 3, // max wrong guesses before code is invalidated
}),
```

OTP codes are stored in the `verification` table and automatically expire. After 3 failed attempts, the code is invalidated and the user must request a new one.
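The expiry and attempt rules above reduce to a pure check. This is a sketch of the policy only – not Better Auth's internal implementation; all names here (`OtpRecord`, `checkOtp`) are illustrative:

```typescript
// Sketch of the OTP policy: 5-minute expiry, 3 allowed attempts.
interface OtpRecord {
  code: string;
  createdAt: number; // epoch ms
  failedAttempts: number;
}

const EXPIRES_IN_MS = 300 * 1000; // mirrors `expiresIn: 300`
const ALLOWED_ATTEMPTS = 3; // mirrors `allowedAttempts: 3`

type OtpResult = "ok" | "expired" | "too-many-attempts" | "invalid";

function checkOtp(record: OtpRecord, guess: string, now: number): OtpResult {
  if (now - record.createdAt > EXPIRES_IN_MS) return "expired";
  if (record.failedAttempts >= ALLOWED_ATTEMPTS) return "too-many-attempts";
  return record.code === guess ? "ok" : "invalid";
}
```

The `expired` and `too-many-attempts` outcomes correspond to the `OTP_EXPIRED` and `TOO_MANY_ATTEMPTS` error codes handled on the client below.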
### Email Delivery

OTP emails are sent via [React Email](https://react.email/) templates rendered to HTML + plain text, delivered through [Resend](https://resend.com/):

```ts
// apps/api/lib/email.ts
export async function sendOTP(env, { email, otp, type }) {
  // In development, OTP is also printed to the console
  if (env.ENVIRONMENT === "development") {
    console.log(`OTP code for ${email}: ${otp}`);
  }
  const component = OTPEmail({ otp, type, appName: env.APP_NAME });
  const html = await renderEmailToHtml(component);
  const text = await renderEmailToText(component);
  return sendEmail(env, {
    to: email,
    subject: `Your Sign In code`,
    html,
    text,
  });
}
```

::: tip
During local development, OTP codes are logged to the terminal – you don't need a real Resend API key to test the flow.
:::

## Client Flow

The auth form implements a 3-step state machine:

```
method → email → otp
```

Each step is a separate UI component orchestrated by `AuthForm`:

| Step     | Component         | What Happens                                       |
| -------- | ----------------- | -------------------------------------------------- |
| `method` | `MethodSelection` | User picks sign-in method (Google, email, passkey) |
| `email`  | `EmailInput`      | User enters email, OTP is sent                     |
| `otp`    | `OtpVerification` | User enters 6-digit code to complete sign-in       |

### State Machine

The state transitions are defined in `apps/app/components/auth/use-auth-form.ts`:

```ts
const VALID_TRANSITIONS: Record<Step, Step[]> = {
  method: ["email"],
  email: ["method", "otp"],
  otp: ["email"],
};
```

Transitions are validated – invalid step jumps are silently ignored. This prevents race conditions from concurrent auth operations (e.g., passkey conditional UI completing while the user clicks a button).
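The "silently ignored" rule can be expressed as a guard over the transition table. A self-contained sketch – the hook's actual implementation may differ:

```typescript
// Sketch: validate step transitions against the table above.
type Step = "method" | "email" | "otp";

const VALID_TRANSITIONS: Record<Step, Step[]> = {
  method: ["email"],
  email: ["method", "otp"],
  otp: ["email"],
};

// Invalid jumps return the current step unchanged instead of throwing,
// so a late-completing auth operation can't force an inconsistent state.
function nextStep(current: Step, requested: Step): Step {
  return VALID_TRANSITIONS[current].includes(requested) ? requested : current;
}
```

For example, a request to jump straight from `method` to `otp` is a no-op, while `email → otp` proceeds normally.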
### Sending the OTP

When the user submits their email, the `sendOtp` function normalizes the input and calls the Better Auth client:

```ts
// "sign-in" type handles both login and signup
const result = await auth.emailOtp.sendVerificationOtp({
  email: normalizedEmail,
  type: "sign-in",
});
```

The `sign-in` type is used for both login and signup flows. Better Auth creates the user account if the email is new.

### Verifying the Code

The `OtpVerification` component handles code entry and verification:

```ts
const result = await auth.signIn.emailOtp({ email, otp });
```

The input field restricts entry to 6 numeric digits, with `inputMode="numeric"` and `autoComplete="one-time-code"` for mobile OTP autofill.

## Error Handling

The OTP plugin returns specific error codes that map to user-friendly messages:

| Error Code          | User Message                                           | Behavior                      |
| ------------------- | ------------------------------------------------------ | ----------------------------- |
| `TOO_MANY_ATTEMPTS` | "Too many failed attempts. Please request a new code." | Returns to email step         |
| `OTP_EXPIRED`       | "Code has expired. Please request a new one."          | Returns to email step         |
| `INVALID_OTP`       | Server message or "Invalid verification code" fallback | Stays on OTP step (can retry) |

When `TOO_MANY_ATTEMPTS` or `OTP_EXPIRED` occurs, the form automatically returns to the email step so the user can request a fresh code.

### Resend Cooldown

After the initial OTP is sent, users can request a new code after a 30-second cooldown:

```ts
const RESEND_COOLDOWN_SECONDS = 30;
```

The resend button shows a countdown timer and is disabled during the cooldown period.
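The countdown itself is a pure computation over timestamps. A sketch – `resendCooldownRemaining` and its parameters are illustrative, not the component's actual API:

```typescript
const RESEND_COOLDOWN_SECONDS = 30;

// Seconds remaining before the resend button re-enables; 0 means enabled.
function resendCooldownRemaining(lastSentAt: number, now: number): number {
  const elapsedSeconds = Math.floor((now - lastSentAt) / 1000);
  return Math.max(0, RESEND_COOLDOWN_SECONDS - elapsedSeconds);
}
```

A component can re-run this once per second to drive both the countdown label and the button's `disabled` state from a single value.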
## Component Architecture

```
AuthForm
├── MethodSelection        Step 1: choose sign-in method
│   ├── GoogleLogin        OAuth redirect
│   ├── "Continue with email" button
│   └── PasskeyLogin       WebAuthn (login only)
├── EmailInput             Step 2: enter email, send OTP
└── OtpStep                Step 3: wraps OTP UI with back link (internal to AuthForm)
    └── OtpVerification    Code entry and verification
```

The `AuthForm` accepts a `mode` prop (`"login"` or `"signup"`) that controls copy and available methods. Both modes use the same OTP flow – the difference is cosmetic (headings, ToS display, passkey availability).

::: info
Passkeys are only shown during login. They require an existing account with a registered passkey – see [Passkeys](./passkeys).
:::

---

---
url: /getting-started/environment-variables.md
---

# Environment Variables

## File Conventions

The project uses [Vite's env file](https://vite.dev/guide/env-and-mode#env-files) convention:

| File                    | Committed | Purpose                                               |
| ----------------------- | --------- | ----------------------------------------------------- |
| `.env`                  | Yes       | Shared defaults (placeholder values, no real secrets) |
| `.env.local`            | No        | Local overrides with real credentials                 |
| `.env.staging.local`    | No        | Staging-specific overrides                            |
| `.env.production.local` | No        | Production-specific overrides                         |

`.env.local` takes precedence over `.env`. Create it by copying `.env` and filling in real values:

```bash
cp .env .env.local
```

::: warning
Never put real secrets in `.env` – it's committed to git. Use `.env.local` for anything sensitive.
:::

## Cloudflare Worker Bindings

In production, environment variables are set as Worker secrets or bindings – not read from `.env` files. Configure them in the Cloudflare dashboard or via Wrangler:

```bash
wrangler secret put BETTER_AUTH_SECRET
```

Database connections use [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) bindings (`HYPERDRIVE_CACHED`, `HYPERDRIVE_DIRECT`) instead of raw connection strings.
See [Deployment](/deployment/) for production setup. For local development, Wrangler reads Hyperdrive connection strings from the `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_*` variables in `.env` / `.env.local`.

## Variable Reference

### Application

| Variable               | Required | Description                                     |
| ---------------------- | -------- | ----------------------------------------------- |
| `APP_NAME`             | Yes      | Display name used in emails and passkey prompts |
| `APP_ORIGIN`           | Yes      | Full origin URL (e.g., `http://localhost:5173`) |
| `API_ORIGIN`           | Yes      | API server URL (e.g., `http://localhost:8787`)  |
| `ENVIRONMENT`          | Yes      | `development`, `staging`, or `production`       |
| `GOOGLE_CLOUD_PROJECT` | Yes      | Google Cloud project ID (exposed to frontend)   |

### Database

| Variable                                                          | Required | Description                                |
| ----------------------------------------------------------------- | -------- | ------------------------------------------ |
| `DATABASE_URL`                                                    | Yes      | PostgreSQL connection string               |
| `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE_CACHED` | Dev only | Hyperdrive cached connection for local dev |
| `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE_DIRECT` | Dev only | Hyperdrive direct connection for local dev |

### Authentication

| Variable               | Required | Description                                                                           |
| ---------------------- | -------- | ------------------------------------------------------------------------------------- |
| `BETTER_AUTH_SECRET`   | Yes      | Secret for signing sessions and tokens. Generate with `bunx @better-auth/cli secret`  |
| `GOOGLE_CLIENT_ID`     | Yes      | Google OAuth client ID ([console](https://console.cloud.google.com/apis/credentials)) |
| `GOOGLE_CLIENT_SECRET` | Yes      | Google OAuth client secret                                                            |

See [Authentication](/auth/) for provider setup details.
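A startup check for the required variables can be sketched as below. This is illustrative only – the repo's real validation is a Zod schema in `apps/api/lib/env.ts`, and the list here is a subset of the tables above:

```typescript
// Sketch: fail fast when required environment variables are missing.
const REQUIRED_VARS = [
  "APP_NAME",
  "APP_ORIGIN",
  "API_ORIGIN",
  "ENVIRONMENT",
  "DATABASE_URL",
  "BETTER_AUTH_SECRET",
  "GOOGLE_CLIENT_ID",
  "GOOGLE_CLIENT_SECRET",
] as const;

function missingVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]);
}
```

Calling `missingVars(process.env)` at boot and throwing when the result is non-empty surfaces misconfiguration immediately, instead of as a runtime failure deep in a request handler.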
### AI

| Variable         | Required | Description                                             |
| ---------------- | -------- | ------------------------------------------------------- |
| `OPENAI_API_KEY` | Yes      | [OpenAI](https://platform.openai.com/) API key (AI SDK) |

### Email

| Variable            | Required | Description                                             |
| ------------------- | -------- | ------------------------------------------------------- |
| `RESEND_API_KEY`    | Yes      | [Resend](https://resend.com) API key for sending emails |
| `RESEND_EMAIL_FROM` | Yes      | Sender address (e.g., `Your App <noreply@example.com>`) |

### Billing (Optional)

Stripe billing is optional – the app works without these variables, but billing endpoints return 404.

| Variable                     | Required | Description                                |
| ---------------------------- | -------- | ------------------------------------------ |
| `STRIPE_SECRET_KEY`          | No       | Stripe API secret key                      |
| `STRIPE_WEBHOOK_SECRET`      | No       | Stripe webhook signing secret              |
| `STRIPE_STARTER_PRICE_ID`    | No       | Stripe Price ID for the Starter plan       |
| `STRIPE_PRO_PRICE_ID`        | No       | Stripe Price ID for the Pro plan (monthly) |
| `STRIPE_PRO_ANNUAL_PRICE_ID` | No       | Stripe Price ID for the Pro plan (annual)  |

See [Billing](/billing/) for Stripe configuration.

### Cloudflare

| Variable                | Required    | Description                        |
| ----------------------- | ----------- | ---------------------------------- |
| `CLOUDFLARE_ACCOUNT_ID` | Deploy only | Cloudflare account ID              |
| `CLOUDFLARE_ZONE_ID`    | Deploy only | DNS zone ID for custom domains     |
| `CLOUDFLARE_API_TOKEN`  | Deploy only | API token for Wrangler deployments |

### Analytics and Search

| Variable                | Required | Description                       |
| ----------------------- | -------- | --------------------------------- |
| `GA_MEASUREMENT_ID`     | No       | Google Analytics 4 measurement ID |
| `ALGOLIA_APP_ID`        | No       | Algolia application ID            |
| `ALGOLIA_ADMIN_API_KEY` | No       | Algolia admin API key             |

---

---
url: /recipes/file-uploads.md
---

# File Uploads

This recipe adds file uploads using [Cloudflare R2](https://developers.cloudflare.com/r2/) with presigned URLs.
A tRPC procedure validates the request and generates a signed PUT URL, then the client uploads directly to R2 – keeping the API worker lightweight.

## 1. Create the R2 bucket

Provision a bucket using the existing Terraform module in `infra/modules/cloudflare/r2-bucket/`:

```hcl
# infra/stacks/<stack>/main.tf
module "uploads" {
  source     = "../../modules/cloudflare/r2-bucket"
  account_id = var.cloudflare_account_id
  name       = "${var.project}-uploads-${var.environment}"
}
```

Apply the change:

```bash
cd infra/stacks/<stack>
terraform apply
```

## 2. Configure bindings and secrets

Bind the bucket to the API worker for serving files:

```jsonc
// apps/api/wrangler.jsonc
{
  "r2_buckets": [
    {
      "binding": "UPLOADS",
      "bucket_name": "rsk-uploads-production",
    },
  ],
}
```

Create an [R2 API token](https://developers.cloudflare.com/r2/api/s3/tokens/) with **Object Read & Write** permission, then add the credentials as Worker secrets:

```bash
npx wrangler secret put R2_ACCESS_KEY_ID
npx wrangler secret put R2_SECRET_ACCESS_KEY
```

Add the binding type in `apps/api/worker.ts`:

```ts
type CloudflareEnv = {
  HYPERDRIVE_CACHED: Hyperdrive;
  HYPERDRIVE_DIRECT: Hyperdrive;
  UPLOADS: R2Bucket; // [!code ++]
} & Env;
```

Add the S3 API credentials to the env schema in `apps/api/lib/env.ts`:

```ts
export const envSchema = z.object({
  // ...existing vars
  R2_ACCESS_KEY_ID: z.string().optional(), // [!code ++]
  R2_SECRET_ACCESS_KEY: z.string().optional(), // [!code ++]
  R2_ENDPOINT: z.url().optional(), // [!code ++]
  R2_BUCKET_NAME: z.string().optional(), // [!code ++]
});
```

::: tip
`R2_ENDPOINT` is the S3-compatible endpoint: `https://<account-id>.r2.cloudflarestorage.com`. Find it in the R2 dashboard under **Settings > S3 API**.
:::

Install [`aws4fetch`](https://github.com/mhart/aws4fetch) for signing presigned URLs in Workers:

```bash
bun add --filter @repo/api aws4fetch
```

## 3. Create the upload procedure

Add a router that generates presigned PUT URLs and confirms uploads:

```ts
// apps/api/routers/upload.ts
import { AwsClient } from "aws4fetch";
import { TRPCError } from "@trpc/server";
import { z } from "zod";
import { protectedProcedure, router } from "../lib/trpc.js";

const ALLOWED_TYPES = [
  "image/jpeg",
  "image/png",
  "image/webp",
  "application/pdf",
];
const MAX_SIZE = 10 * 1024 * 1024; // 10 MB
const URL_EXPIRY = 600; // 10 minutes

export const uploadRouter = router({
  /** Generate a presigned PUT URL for direct client-to-R2 upload. */
  requestUpload: protectedProcedure
    .input(
      z.object({
        filename: z.string().min(1),
        contentType: z.string().refine((t) => ALLOWED_TYPES.includes(t), {
          message: "Unsupported file type",
        }),
        size: z.number().max(MAX_SIZE),
      }),
    )
    .mutation(async ({ ctx, input }) => {
      const {
        R2_ACCESS_KEY_ID,
        R2_SECRET_ACCESS_KEY,
        R2_ENDPOINT,
        R2_BUCKET_NAME,
      } = ctx.env;
      if (
        !R2_ACCESS_KEY_ID ||
        !R2_SECRET_ACCESS_KEY ||
        !R2_ENDPOINT ||
        !R2_BUCKET_NAME
      ) {
        throw new TRPCError({
          code: "PRECONDITION_FAILED",
          message: "File uploads are not configured",
        });
      }

      const key = `${ctx.session.activeOrganizationId}/${crypto.randomUUID()}/${input.filename}`;
      const r2 = new AwsClient({
        accessKeyId: R2_ACCESS_KEY_ID,
        secretAccessKey: R2_SECRET_ACCESS_KEY,
      });

      const url = new URL(`${R2_ENDPOINT}/${R2_BUCKET_NAME}/${key}`);
      url.searchParams.set("X-Amz-Expires", String(URL_EXPIRY));

      const signed = await r2.sign(
        new Request(url, {
          method: "PUT",
          headers: { "Content-Type": input.contentType },
        }),
        { aws: { signQuery: true } },
      );

      return { key, uploadUrl: signed.url };
    }),

  /** Confirm upload and return metadata. */
  complete: protectedProcedure
    .input(z.object({ key: z.string() }))
    .mutation(async ({ ctx, input }) => {
      const uploads = (ctx.env as { UPLOADS?: R2Bucket }).UPLOADS;
      if (!uploads) {
        throw new TRPCError({
          code: "PRECONDITION_FAILED",
          message: "R2 binding not configured",
        });
      }
      const object = await uploads.head(input.key);
      if (!object) {
        throw new TRPCError({ code: "NOT_FOUND", message: "Object not found" });
      }
      return { key: input.key, size: object.size };
    }),
});
```

Register it in `apps/api/lib/app.ts`:

```ts
import { uploadRouter } from "../routers/upload.js";

const appRouter = router({
  // ...existing routers
  upload: uploadRouter, // [!code ++]
});
```

## 4. Upload from the frontend

```tsx
import { trpcClient } from "@/lib/trpc";

async function uploadFile(file: File) {
  // 1. Get a presigned URL from the API
  const { key, uploadUrl } = await trpcClient.upload.requestUpload.mutate({
    filename: file.name,
    contentType: file.type,
    size: file.size,
  });

  // 2. Upload directly to R2
  const res = await fetch(uploadUrl, {
    method: "PUT",
    body: file,
    headers: { "Content-Type": file.type },
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

  // 3. Confirm and store metadata
  return trpcClient.upload.complete.mutate({ key });
}
```

Wire it to a file input:

```tsx
function FileUpload() {
  async function handleChange(e: React.ChangeEvent<HTMLInputElement>) {
    const file = e.target.files?.[0];
    if (!file) return;
    const result = await uploadFile(file);
    console.log("Uploaded:", result.key);
  }
  return <input type="file" onChange={handleChange} />;
}
```

## 5. Serve files

Add a Hono route that reads from R2 via the binding:

```ts
// apps/api/routes/uploads.ts
import { Hono } from "hono";
import type { AppContext } from "../lib/context.js";

const uploads = new Hono<AppContext>();

uploads.get("/api/uploads/:key{.+}", async (c) => {
  const bucket = (c.env as { UPLOADS?: R2Bucket }).UPLOADS;
  if (!bucket) return c.json({ error: "R2 not configured" }, 503);

  const object = await bucket.get(c.req.param("key"));
  if (!object) return c.notFound();

  return new Response(object.body, {
    headers: {
      "Content-Type":
        object.httpMetadata?.contentType ?? "application/octet-stream",
      "Cache-Control": "public, max-age=31536000, immutable",
    },
  });
});

export { uploads };
```

Mount it in `apps/api/lib/app.ts`:

```ts
import { uploads } from "../routes/uploads.js";

app.route("/", uploads); // [!code ++]
```

Files are served at `/api/uploads/<key>`.

## Reference

* [Cloudflare R2 docs](https://developers.cloudflare.com/r2/) – bucket API, S3 compatibility, pricing
* [R2 S3 API tokens](https://developers.cloudflare.com/r2/api/s3/tokens/) – creating API credentials
* [aws4fetch](https://github.com/mhart/aws4fetch) – lightweight AWS Signature V4 for Workers
* [Security Checklist](/security/checklist) – file upload validation (type, size, content)
* [Add a tRPC Procedure](/recipes/new-procedure) – procedure patterns

---

---
url: /frontend/forms.md
---

# Forms & Validation

Forms use controlled React inputs with Zod for validation. There's no form library – the patterns are simple enough that a direct approach keeps things explicit.
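The controlled-inputs-plus-Zod approach reduces to "parse before submit". A dependency-free sketch of that shape – the real forms share actual Zod schemas with tRPC procedures, and `parseCreateProject` here is purely illustrative:

```typescript
// Sketch: validate form state before calling the mutation. Mirrors the
// shape of a Zod safeParse result without the dependency.
interface CreateProjectInput {
  name: string;
}

type ParseResult =
  | { ok: true; data: CreateProjectInput }
  | { ok: false; error: string };

function parseCreateProject(raw: { name: string }): ParseResult {
  const name = raw.name.trim();
  if (name.length === 0) return { ok: false, error: "Name is required" };
  if (name.length > 100) return { ok: false, error: "Name is too long" };
  return { ok: true, data: { name } };
}
```

A submit handler would call this first, show `error` inline on failure, and pass `data` to the mutation on success – the same discriminated-union flow Zod's `safeParse` gives you for free.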
## Basic Pattern

A typical form uses `useState` for input values and a tRPC mutation for submission:

```tsx
import { Button, Input, Label } from "@repo/ui";
import { useMutation, useQueryClient } from "@tanstack/react-query";
import { useState } from "react";
import { trpcClient } from "@/lib/trpc";

function CreateProjectForm() {
  const [name, setName] = useState("");
  const queryClient = useQueryClient();

  const mutation = useMutation({
    mutationFn: (input: { name: string }) =>
      trpcClient.project.create.mutate(input),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["project"] });
      setName("");
    },
  });

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        mutation.mutate({ name });
      }}
    >
      <Label htmlFor="name">Name</Label>
      <Input
        id="name"
        value={name}
        onChange={(e) => setName(e.target.value)}
        required
      />
      <Button type="submit" disabled={mutation.isPending}>
        Create
      </Button>
    </form>
  );
}
```

## Zod Schema Sharing

Zod schemas are defined on tRPC procedures and can be shared with the frontend for search param validation or client-side checks. The login route uses a Zod schema with `validateSearch` to sanitize the `returnTo` param at parse time – see [Routing > Search Params](./routing.md#search-params) for the full example.

## Auth Form

The auth form (`apps/app/components/auth/auth-form.tsx`) demonstrates a multi-step form pattern. It uses a state machine with three steps:

```
method → email → otp
   ↑  ↑            │
   │  └────────────┘
   └── (email can return to method)
```

The `useAuthForm` hook manages transitions between steps:

```tsx
const VALID_TRANSITIONS: Record<Step, Step[]> = {
  method: ["email"],
  email: ["method", "otp"],
  otp: ["email"],
};
```

Each step renders conditionally based on the current state:

```tsx
export function AuthForm({ mode = "login", onSuccess, returnTo }) {
  const { step, email, isDisabled, error /* actions */ } = useAuthForm({
    onSuccess,
    mode,
  });

  return (
    <div>
      {error && <div role="alert">{error}</div>}
      {step === "method" && <MethodSelection /* … */ />}
      {step === "email" && <EmailInput /* … */ />}
      {step === "otp" && <OtpVerification /* … */ />}
    </div>
  );
}
```

Key design decisions in `useAuthForm`:

* **Counter-based pending ops** – handles overlapping child operations (e.g., passkey conditional UI running alongside a manual click)
* **Success guard** (`hasSucceededRef`) – prevents concurrent auth completion from multiple methods
* **Email normalization** – trims whitespace and lowercases before API calls
* **Errors orthogonal to steps** – errors can occur at any step and are displayed at the form level

## Error Display

Errors are shown as alert boxes with `role="alert"` for screen reader announcements:

```tsx
{error && <div role="alert">{error}</div>}
```

For mutation errors, check `mutation.error`:

```tsx
{mutation.error && <div role="alert">{mutation.error.message}</div>}
```

## Loading States

Coordinate disabled state across form elements to prevent double-submission:

```tsx
// useAuthForm combines multiple sources into one flag
const isDisabled = isLoading || pendingOps > 0 || !!isExternallyLoading;
```

Apply the flag to all interactive elements:

```tsx
<Input disabled={isDisabled} /* … */ />
<Button disabled={isDisabled}>Continue</Button>
```

For mutations, use `isPending` from the mutation object:

```tsx
<Button type="submit" disabled={mutation.isPending}>
  Save
</Button>
```

## Post-Submission

After successful form submission, the caller handles cache invalidation and navigation – not the form itself:

```tsx
// apps/app/routes/(auth)/login.tsx
async function handleSuccess() {
  await revalidateSession(queryClient, router);
  await router.navigate({ to: search.returnTo ?? "/" });
}

<AuthForm onSuccess={handleSuccess} /* … */ />;
```

This keeps the form reusable – `AuthForm` works in both the login page and a login dialog, because the caller controls what happens after success.

---

---
url: /specs/infra-terraform.md
---

# Infrastructure Terraform Specification

## Overview

Two deployment stacks with clear separation of concerns.

**Non-goals:** Multi-region orchestration, blue-green deployments, auto-scaling policies. These belong in CI/CD or dedicated tooling.
| Stack | Components | Use Case | | ------------------- | ------------------------------------------- | ----------------------- | | **edge** (default) | Hyperdrive, DNS (Workers via Wrangler) | Most SaaS apps | | **hybrid** (opt-in) | Cloud Run, Cloud SQL, GCS + optional CF DNS | GCP services, Vertex AI | ## Directory Structure ```bash infra/ modules/ # Atomic resources (no credentials) cloudflare/ hyperdrive/ # Database connection pooling r2-bucket/ # Object storage dns/ # Proxied DNS records gcp/ cloud-run/ # Container deployment cloud-sql/ # Managed PostgreSQL gcs/ # Object storage stacks/ # Architectural compositions edge/ # Hyperdrive + DNS (Workers via Wrangler) hybrid/ # GCP + optional CF DNS envs/ # Terraform roots (providers + backend + state) dev/edge/ preview/edge/ staging/edge/ prod/edge/ templates/ env-roots/hybrid/ # Copy to enable hybrid backend-r2.example.hcl # Remote state for edge backend-gcs.example.hcl # Remote state for hybrid ``` ## Module Contract Modules must NOT define `provider` blocks. Non-HashiCorp providers require `required_providers` to specify the source: ```hcl # Cloudflare modules declare source only (no version): terraform { required_providers { cloudflare = { source = "cloudflare/cloudflare" } } } ``` Version constraints live exclusively in env roots. This keeps modules reusable while centralizing version management. ## Provider Versions Canonical versions (single source of truth): | Provider | Version | | ---------- | -------------- | | terraform | `>= 1.12, < 2` | | cloudflare | `~> 5.0` | | google | `~> 7.0` | ## Design Decisions ### Explicit Roots Over Dispatcher Each `(environment, stack)` pair gets its own Terraform root with isolated state. 
```bash terraform -chdir=infra/envs/prod/edge apply ``` **Why not a dispatcher?** A `variable "stack"` that switches configs: * Destroys one stack when switching to another * Requires separate backends anyway * Creates awkward `module.edge[0].x` references ### No Backend by Default Terraform uses local state when no backend is configured. Remote backends require pre-existing buckets and credentials. **Rationale:** Zero-friction onboarding. Add remote backend when ready for team collaboration. ### Providers in Env Roots Only Only env roots define `provider` blocks with credentials. Modules declare `required_providers` for source resolution only (no versions, no credentials). **Rationale:** Keeps modules reusable. Version constraints and credentials stay in one place per environment. ### Preview Uses Edge Only PR previews need fast spin-up and low cost. Cloudflare Workers: no cold starts, instant deploys, minimal cost. ## Secrets ```bash # Via environment variables (CI/CD) export TF_VAR_cloudflare_api_token="..." terraform -chdir=infra/envs/prod/edge apply # Or local terraform.tfvars (gitignored) ``` Mark sensitive variables: ```hcl variable "cloudflare_api_token" { type = string sensitive = true } ``` ## Switching to Remote Backend ### Edge Stack (R2) ```bash cp infra/templates/backend-r2.example.hcl infra/envs/prod/edge/backend.hcl terraform -chdir=infra/envs/prod/edge init -backend-config=backend.hcl -migrate-state ``` ### Hybrid Stack (GCS) ```bash cp infra/templates/backend-gcs.example.hcl infra/envs/prod/hybrid/backend.hcl terraform -chdir=infra/envs/prod/hybrid init -backend-config=backend.hcl -migrate-state ``` ## Multi-Region Use separate roots: `envs/prod-eu/edge`, `envs/prod-us/edge`. Each manages its own state. ## Naming Conventions ### Resource values Cloud resources use `{project_slug}-{environment}`; lowercase alphanumeric and hyphens only: `^[a-z0-9-]+$`. ### Resource identifiers One simple set of rules: 1. 
Name the thing being created (provider-native noun, singular). ```hcl resource "cloudflare_hyperdrive_config" "hyperdrive" {} resource "cloudflare_r2_bucket" "bucket" {} resource "cloudflare_dns_record" "record" {} resource "google_cloud_run_v2_service" "service" {} resource "google_sql_database_instance" "instance" {} ``` 2. If you have multiples, suffix with the role. ```hcl resource "cloudflare_r2_bucket" "uploads" {} resource "cloudflare_r2_bucket" "backups" {} ``` 3. Module names describe architectural role; resource names describe the concrete thing. ```hcl module "hyperdrive" { # contains: cloudflare_hyperdrive_config.hyperdrive } # → module.hyperdrive.id ``` ## Known Limitations ### Hyperdrive Database URL Parsing The hyperdrive module parses `database_url` via regex to extract individual connection parameters. This works reliably with Neon URLs (which use URL-safe generated credentials) but has limitations: * Port must be explicitly specified (e.g., `:5432`) * Credentials must not contain unencoded `@` or `:` characters * Validation fails fast with a descriptive error message For non-Neon databases with special characters in credentials, consider modifying the module to accept individual connection parameters instead. --- --- url: /getting-started.md --- # Introduction React Starter Kit is a production-ready monorepo for building SaaS web applications. It wires together authentication, database migrations, billing, email, and edge deployment so you can skip months of boilerplate and focus on your product. 
## Who It's For * **Indie hackers** shipping an MVP fast * **Startups** that need a solid foundation without vendor lock-in * **Teams** building multi-tenant SaaS products ## Tech Stack | Layer | Technology | | ---------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Runtime | [Bun](https://bun.sh) 1.3+, TypeScript 5.9, ESM | | Frontend | [React](https://react.dev) 19, [TanStack Router](https://tanstack.com/router), [TanStack Query](https://tanstack.com/query), [Jotai](https://jotai.org), [Tailwind CSS](https://tailwindcss.com) v4 | | UI | [shadcn/ui](https://ui.shadcn.com) (new-york style) | | Backend | [Hono](https://hono.dev), [tRPC](https://trpc.io) 11 | | Auth | [Better Auth](https://www.better-auth.com/) – email OTP, passkeys, Google OAuth, organizations | | Billing | [Stripe](https://stripe.com) subscriptions via Better Auth plugin | | Database | [Neon](https://neon.tech) PostgreSQL, [Drizzle ORM](https://orm.drizzle.team) | | Email | [React Email](https://react.email), [Resend](https://resend.com) | | Deployment | [Cloudflare Workers](https://developers.cloudflare.com/workers/), Terraform | | Testing | [Vitest](https://vitest.dev) 4, Happy DOM | ## What's Included * **Three Cloudflare Workers** – edge router, SPA, and API server connected via service bindings * **Type-safe API** – tRPC procedures with Zod validation, shared types between frontend and backend * **Multi-tenant auth** – email OTP, social login, passkeys, organizations with roles * **Subscription billing** – Stripe checkout, webhooks, and plan management * **Database toolkit** – Drizzle ORM schemas, migrations, seeding, and Hyperdrive connection pooling * **Email system** – React Email templates with Resend delivery * **AI-ready** – pre-configured instructions for Claude Code, Cursor, and Gemini CLI ## How the Docs Are Organized **[Getting 
Started](/getting-started/quick-start)** covers setup and orientation. **[Architecture](/architecture/)** explains the worker model and request flow. Feature sections – [Frontend](/frontend/routing), [API](/api/), [Auth](/auth/), [Database](/database/), [Billing](/billing/) – document each subsystem. **[Recipes](/recipes/new-page)** provide step-by-step guides for common tasks. **[Deployment](/deployment/)** covers shipping to production. Ready to start? Head to [Quick Start](./quick-start). --- --- url: /database/migrations.md --- # Migrations Drizzle Kit generates SQL migrations by diffing your TypeScript schema against the latest snapshot. Migration files live in `db/migrations/` alongside a journal that tracks applied versions. ## Workflow **1. Edit the schema** in `db/schema/`. **2. Generate a migration:** ```bash bun db:generate ``` This produces a numbered SQL file (e.g., `0001_add_product_table.sql`) in `db/migrations/`. **3. Review the generated SQL.** Drizzle Kit's output is generally correct, but always check for destructive operations – column drops, type changes, or data loss. **4. Apply the migration:** ```bash bun db:migrate ``` **5. Verify in Drizzle Studio:** ```bash bun db:studio ``` ## Push vs Migrate | Command | What it does | Use when | | ---------------- | ----------------------------------------- | ------------------------------------- | | `bun db:migrate` | Applies pending migration files in order | Production, staging, shared databases | | `bun db:push` | Syncs schema directly, no migration files | Local development, rapid prototyping | `push` is faster during development since it skips migration file generation. Switch to `migrate` when you need reproducible, reviewable changes. ## Targeting Environments Append `:staging` or `:prod` to run against other databases: ```bash bun db:migrate:staging bun db:migrate:prod ``` These set `ENVIRONMENT` internally, which controls which `.env.{env}.local` file is loaded. 
Double-check the target before running migrations against production.

## Drift Detection

If schema files and migration snapshots diverge (e.g., after a manual DB change or a merge conflict), run:

```bash
bun db:check
```

This reports discrepancies between your TypeScript schema and the migration history. Resolve by either updating the schema to match or generating a new migration to cover the gap.

## Tips

* **Name your migrations** – `bun db:generate --name add-product-table` produces clearer filenames than auto-numbered defaults.
* **One concern per migration** – avoid bundling unrelated schema changes. Smaller migrations are easier to review and roll back.
* **Never edit applied migrations** – if a migration has already run in staging or production, create a new migration to correct issues.
* **Review before applying** – `db:generate` writes SQL to disk. Read the file before running `db:migrate`.

---

---
url: /deployment/monitoring.md
---

# Monitoring

Monitor your Workers in production using Cloudflare's built-in tools and roll back quickly when issues arise.

## Wrangler Tail

Stream live logs from any worker:

```bash
# Tail production API logs
wrangler tail --config apps/api/wrangler.jsonc

# Filter to specific paths
wrangler tail --config apps/api/wrangler.jsonc --search "/api/trpc"

# Tail staging
wrangler tail --config apps/api/wrangler.jsonc --env=staging
```

Logs include request metadata, `console.log` output, and uncaught exceptions.
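Structured log lines make tail filtering more effective. A minimal sketch – the handler and log shape below are illustrative, not taken from the repo:

```typescript
// Hypothetical Worker handler: emitting one JSON object per request makes
// `wrangler tail` output easy to search and to parse downstream.
const handler = {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    // Consistent keys ("event", "path") let you grep tail output reliably.
    console.log(JSON.stringify({ event: "request", path: pathname }));
    return new Response("ok");
  },
};

export default handler;
```

Free-form `console.log` strings work too; the JSON shape just makes the search filter match predictably.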
## Cloudflare Analytics The Cloudflare dashboard provides per-worker metrics: * **Workers → Analytics** – request count, error rate, CPU time, duration percentiles * **Workers → Logs** – real-time and historical log streams * Set up **notification policies** for error rate spikes or latency increases ## Rollback If a deploy introduces issues, roll back to the previous version: ```bash # List recent deployments wrangler deployments list --config apps/api/wrangler.jsonc # Roll back to the previous stable version wrangler rollback --config apps/api/wrangler.jsonc \ --message="Reverting due to auth regression" ``` Repeat for each affected worker (`apps/app/`, `apps/web/`). ::: warning Wrangler rollback reverts worker code but not database migrations. If a deploy included schema changes that the previous code depends on differently, you may need to deploy a fix-forward migration instead. See [Database: Migrations](/database/migrations). ::: ## Troubleshooting **Worker size limit** – Cloudflare Workers have a 10 MB compressed size limit (3 MB on the free plan). If you hit it: * Check for accidentally bundled dependencies * Move large assets to R2 storage * Ensure tree shaking is working (check for side-effect imports) **Database connection issues** – If queries fail or time out: * Verify Hyperdrive IDs in `wrangler.jsonc` match Terraform output * Check Neon dashboard for connection limit exhaustion * Confirm the database isn't in auto-suspended state (first request after suspend is slower) **Authentication problems** – If sign-in fails in production: * Verify `BETTER_AUTH_SECRET` is set (`wrangler secret list --config apps/api/wrangler.jsonc`) * Check `APP_ORIGIN` matches your actual domain (affects cookie domain) * Confirm OAuth redirect URIs include your production URL. 
See [Social Providers](/auth/social-providers)

## Cost Overview

| Service | Free tier | Paid |
| --- | --- | --- |
| Cloudflare Workers | 100,000 requests/day | [$5/month for 10M requests](https://developers.cloudflare.com/workers/platform/pricing/) |
| Neon PostgreSQL | 0.5 GB storage, auto-suspend compute | [Scale-to-zero billing](https://neon.tech/pricing) |
| Hyperdrive | Included with Workers paid plan | – |
| Resend | 100 emails/day | [$20/month for 50K emails](https://resend.com/pricing) |

A typical growth-stage project runs around **$44/month** (Workers $5 + Neon $19 + Resend $20). Free tiers are sufficient through early production – monitor usage in the Cloudflare and Neon dashboards as traffic grows.

---

---
url: /auth/organizations.md
---

# Organizations & Roles

Organizations provide multi-tenant isolation. Each organization is a separate tenant with its own members, roles, and billing. Users can belong to multiple organizations and switch between them.
## Server Configuration The organization plugin is configured in `apps/api/lib/auth.ts`: ```ts organization({ allowUserToCreateOrganization: true, organizationLimit: 5, creatorRole: "owner", }), ``` | Setting | Value | Description | | ------------------------------- | --------- | ----------------------------------------- | | `allowUserToCreateOrganization` | `true` | Any user can create organizations | | `organizationLimit` | `5` | Max organizations per user | | `creatorRole` | `"owner"` | Creator automatically gets the owner role | ## Database Tables ### `organization` Defined in `db/schema/organization.ts`: | Column | Type | Description | | ------------------ | ------ | ------------------------------------- | | `id` | `text` | Prefixed CUID2 (`org_cm...`) | | `name` | `text` | Display name | | `slug` | `text` | URL-safe unique identifier | | `logo` | `text` | Logo URL (optional) | | `metadata` | `text` | JSON string for custom data | | `stripeCustomerId` | `text` | Stripe customer for org-level billing | ### `member` Links users to organizations with a role: | Column | Type | Description | | ---------------- | ------ | ----------------------------------- | | `id` | `text` | Prefixed CUID2 (`mem_cm...`) | | `userId` | `text` | References `user.id` | | `organizationId` | `text` | References `organization.id` | | `role` | `text` | `"owner"`, `"admin"`, or `"member"` | A unique constraint on `(userId, organizationId)` prevents duplicate memberships. 
### `invitation` Manages pending invitations, defined in `db/schema/invitation.ts`: | Column | Type | Description | | ---------------- | ----------- | -------------------------------------------------------- | | `id` | `text` | Prefixed CUID2 (`inv_cm...`) | | `email` | `text` | Invitee's email address | | `inviterId` | `text` | References `user.id` | | `organizationId` | `text` | References `organization.id` | | `role` | `text` | Role assigned upon acceptance | | `status` | `text` | `"pending"`, `"accepted"`, `"rejected"`, or `"canceled"` | | `expiresAt` | `timestamp` | Invitation expiration | | `acceptedAt` | `timestamp` | When the invite was accepted | | `rejectedAt` | `timestamp` | When the invite was rejected or canceled | A unique constraint on `(organizationId, email)` prevents duplicate invitations to the same person. ## Roles Three built-in roles with hierarchical permissions: | Role | Can manage members | Can manage settings | Can delete org | | ---------- | ------------------ | ------------------- | -------------- | | **owner** | Yes | Yes | Yes | | **admin** | Yes | Yes | No | | **member** | No | No | No | ### Role Checks in API Procedures Use the session's `activeOrganizationId` with a membership query to check roles: ```ts // apps/api/routers/organization.ts const [row] = await ctx.db .select({ role: Db.member.role }) .from(Db.member) .where( and( eq(Db.member.organizationId, referenceId), eq(Db.member.userId, user.id), ), ); const isAdmin = row?.role === "owner" || row?.role === "admin"; ``` ## Active Organization The session tracks which organization is currently active via `activeOrganizationId`: ```ts export type AuthSession = SessionResponse["session"] & { activeOrganizationId?: string; }; ``` This field is stored in the `session` table and persists across requests. When the user switches organizations, Better Auth updates this field. ## Billing Integration Subscriptions scope to the active organization. 
The billing router uses `activeOrganizationId` as the billing reference, falling back to the user's own ID for personal billing: ```ts // apps/api/routers/billing.ts const referenceId = ctx.session.activeOrganizationId ?? ctx.user.id; ``` The Stripe plugin's `authorizeReference` hook enforces that only owners and admins can manage an organization's subscription: ```ts authorizeReference: async ({ user, referenceId }) => { if (referenceId === user.id) return true; // Personal billing const [row] = await db .select({ role: Db.member.role }) .from(Db.member) .where( and( eq(Db.member.organizationId, referenceId), eq(Db.member.userId, user.id), ), ); return row?.role === "owner" || row?.role === "admin"; }, ``` ## Invitation Lifecycle 1. **Owner/admin invites** – sends invitation to email with assigned role 2. **Invitation pending** – stored in `invitation` table with `status: "pending"` and an expiration 3. **Invitee accepts** – Better Auth creates a `member` record and updates invitation status 4. **Or invitee rejects / invitation expires** – invitation status is updated, no member created Each organization can only have one pending invitation per email address. ## Client API The `organizationClient()` plugin adds organization methods to the auth client: ```ts // Create an organization await auth.organization.create({ name: "Acme Inc", slug: "acme" }); // List user's organizations const { data } = await auth.organization.list(); // Set active organization await auth.organization.setActive({ organizationId: "org_cm..." }); // Invite a member await auth.organization.inviteMember({ email: "jane@example.com", role: "member", organizationId: "org_cm...", }); ``` See the [Better Auth organization plugin docs](https://www.better-auth.com/docs/plugins/organization) for the complete client API. 
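The owner/admin checks above compare roles inline. If the same check appears in several procedures, a small rank helper keeps them consistent – a sketch, with the helper name and ranking hypothetical (not part of the repo):

```typescript
type Role = "owner" | "admin" | "member";

// Higher rank ⇒ more privileges, matching the role table above.
const ROLE_RANK: Record<Role, number> = { owner: 2, admin: 1, member: 0 };

// True when the member's role meets or exceeds the required role.
// Pass `undefined` when no membership row was found.
export function hasAtLeast(role: Role | undefined, required: Role): boolean {
  return role !== undefined && ROLE_RANK[role] >= ROLE_RANK[required];
}

hasAtLeast("owner", "admin");  // true – owners can do everything admins can
hasAtLeast("member", "admin"); // false – members cannot manage the org
```

With this, the inline `row?.role === "owner" || row?.role === "admin"` check becomes `hasAtLeast(row?.role, "admin")`.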
--- --- url: /auth/passkeys.md --- # Passkeys Passkey authentication uses the [WebAuthn](https://webauthn.io/) standard to let users sign in with biometrics (Touch ID, Face ID) or hardware security keys. It's the most secure sign-in method – no shared secrets leave the device. ::: info Passkeys are available for **login only**. Users must first create an account via email OTP or Google OAuth, then register a passkey from their account settings. The sign-up form does not show the passkey option. ::: ## Server Configuration The passkey plugin is configured in `apps/api/lib/auth.ts`: ```ts passkey({ rpID, // Domain name (e.g., "example.com" or "localhost") rpName: env.APP_NAME, // Human-readable name shown in browser prompts origin: env.APP_ORIGIN, }), ``` The `rpID` (Relying Party ID) is extracted from `APP_ORIGIN`: ```ts const appUrl = new URL(env.APP_ORIGIN); const rpID = appUrl.hostname; ``` This means passkeys are bound to the domain – a passkey registered on `example.com` won't work on `staging.example.com`. The `rpName` appears in the browser's passkey dialog (e.g., "Sign in to My App"). 
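Because `rpID` is derived with `new URL(...).hostname`, the domain binding is easy to reason about – the origins below are illustrative:

```typescript
// rpID is the hostname only: scheme and port are dropped, and a subdomain
// yields a different rpID than the apex domain.
const rpIDFor = (origin: string): string => new URL(origin).hostname;

rpIDFor("https://app.example.com"); // "app.example.com"
rpIDFor("http://localhost:5173");   // "localhost"
```

This is why a passkey registered on `example.com` is not offered on `staging.example.com` – the two origins produce different rpIDs.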
### Database Table Passkey credentials are stored in `db/schema/passkey.ts`: | Column | Description | | -------------- | ------------------------------------------------------- | | `publicKey` | WebAuthn public key | | `credentialID` | Unique credential identifier | | `counter` | Signature counter (replay protection) | | `deviceType` | `"singleDevice"` or `"multiDevice"` | | `backedUp` | Whether the credential is synced across devices | | `transports` | Communication methods (USB, BLE, NFC, internal) | | `deviceName` | User-friendly label (e.g., "MacBook Pro") | | `platform` | `"platform"` (built-in) or `"cross-platform"` (USB key) | ## Client Component The `PasskeyLogin` component in `apps/app/components/auth/passkey-login.tsx` handles two modes: ### Explicit Login When the user clicks "Log in with passkey", the component checks for WebAuthn support and triggers the browser's credential picker: ```ts const handlePasskeyLogin = async () => { if (!window.PublicKeyCredential) { onError(authConfig.errors.passkeyNotSupported); return; } const result = await auth.signIn.passkey(); if (result.data) { onSuccess(); } else if (result.error) { const errorCode = "code" in result.error ? result.error.code : undefined; if (errorCode === "AUTH_CANCELLED") { onError("Passkey authentication was cancelled."); } else { onError(result.error.message || authConfig.errors.genericError); } } }; ``` ### Conditional UI (Autofill) When enabled, passkey autofill shows saved credentials in the browser's autocomplete dropdown – similar to how password managers work. 
This runs passively on mount:

```ts
useEffect(() => {
  if (!authConfig.passkey.enableConditionalUI) return;

  let aborted = false;

  const setupConditionalUI = async () => {
    if (!window.PublicKeyCredential?.isConditionalMediationAvailable) return;
    const isAvailable =
      await window.PublicKeyCredential.isConditionalMediationAvailable();
    if (!isAvailable) return;

    const result = await auth.signIn.passkey({ autoFill: true });
    if (result.data && !aborted) {
      onSuccessRef.current(); // ref holding the latest onSuccess callback
    }
  };

  setupConditionalUI();

  return () => {
    aborted = true; // ignore a late result after unmount
  };
}, []);
```

Conditional UI is controlled by the `authConfig.passkey.enableConditionalUI` flag (default: `true`). Errors from conditional UI are silently ignored since the user hasn't explicitly requested authentication.

## Client Configuration

Passkey behavior is configured in `apps/app/lib/auth-config.ts`:

```ts
passkey: {
  enableConditionalUI: true,
  timeout: 60_000, // 60 seconds for user interaction
  userVerification: "preferred",
},
```

| Setting | Default | Description |
| --- | --- | --- |
| `enableConditionalUI` | `true` | Show passkeys in browser autocomplete |
| `timeout` | `60000` | Max time (ms) for user to interact with the WebAuthn dialog |
| `userVerification` | `"preferred"` | Request biometric/PIN when available, but don't require it |

## Error Handling

| Error | Cause | Behavior |
| --- | --- | --- |
| `AUTH_CANCELLED` | User dismissed the WebAuthn prompt or it timed out | Shows cancellation message |
| `passkeyNotSupported` | `window.PublicKeyCredential` is undefined | Shows browser support message |
| Network error | Offline or DNS failure | Shows network error message |
| Server error | No passkey found, invalid credential | Shows server error message |

## Browser Support

Passkeys require WebAuthn support.
All modern browsers support it: * Chrome 67+, Edge 18+, Firefox 60+, Safari 13+ * iOS 16+ (synced via iCloud Keychain) * Android 9+ (synced via Google Password Manager) The component checks `window.PublicKeyCredential` before attempting authentication and shows a clear message on unsupported browsers. --- --- url: /billing/plans.md --- # Plans & Pricing Plan limits are defined once in `apps/api/lib/plans.ts` and referenced by the auth plugin config (plan definitions) and the tRPC billing router (query responses). ## Plan Limits ```ts // apps/api/lib/plans.ts export const planLimits = { free: { members: 1 }, starter: { members: 5 }, pro: { members: 50 }, } as const; ``` This is the single source of truth for what each plan includes. Add new limit fields here – they'll automatically flow to both the auth plugin and tRPC responses. ## Auth Plugin Configuration Plans are registered with the `@better-auth/stripe` plugin in `apps/api/lib/auth.ts`: ```ts // apps/api/lib/auth.ts (stripe plugin config) stripe({ stripeClient: createStripeClient(env), stripeWebhookSecret: env.STRIPE_WEBHOOK_SECRET, createCustomerOnSignUp: true, subscription: { enabled: true, plans: [ { name: "starter", priceId: env.STRIPE_STARTER_PRICE_ID, limits: planLimits.starter, }, { name: "pro", priceId: env.STRIPE_PRO_PRICE_ID, annualDiscountPriceId: env.STRIPE_PRO_ANNUAL_PRICE_ID, limits: planLimits.pro, freeTrial: { days: 14 }, }, ], }, }); ``` The free tier has no Stripe plan – users without an active subscription are treated as free. The `limits` objects are stored on the Stripe subscription metadata and returned by the plugin. ## Stripe Dashboard Setup For each paid plan, create a **Product** and **Price** in the [Stripe Dashboard](https://dashboard.stripe.com/products): 1. Create a product (e.g., "Starter Plan") 2. Add a recurring price (e.g., $9/month) 3. 
Copy the price ID (`price_...`) to the corresponding environment variable | Plan | Environment variable | Product example | | ------------- | ---------------------------- | ------------------------- | | Starter | `STRIPE_STARTER_PRICE_ID` | "Starter Plan" – $9/month | | Pro (monthly) | `STRIPE_PRO_PRICE_ID` | "Pro Plan" – $29/month | | Pro (annual) | `STRIPE_PRO_ANNUAL_PRICE_ID` | "Pro Plan" – $290/year | ::: info Use Stripe **test mode** during development. The price IDs are different between test and live modes. ::: ## How Limits Are Exposed The `billing.subscription` tRPC procedure returns the current plan and its limits: ```ts // apps/api/routers/billing.ts const sub = await ctx.db.query.subscription.findFirst({ where: (s, { eq, and, inArray }) => and( eq(s.referenceId, referenceId), inArray(s.status, ["active", "trialing"]), ), }); return { plan, status: sub?.status ?? null, limits: planLimits[plan as PlanName], // ... }; ``` When no active subscription exists, it defaults to the `free` plan limits. Enforce limits in your application logic – tRPC middleware for server-side checks, UI guards for client-side gating. ## Adding or Modifying Plans 1. **Update limits** – edit `planLimits` in `apps/api/lib/plans.ts` 2. **Update auth config** – add/edit the plan entry in `apps/api/lib/auth.ts` 3. **Create Stripe product** – add the product and price in the Stripe Dashboard 4. **Set env var** – add the new `STRIPE_*_PRICE_ID` to `.env.local` and Cloudflare secrets 5. **Update UI** – add the plan option to the billing card in `apps/app/routes/(app)/settings.tsx` --- --- url: /specs/prefixed-ids.md --- # Prefixed CUID2 Database IDs All database primary keys use application-generated, prefixed [CUID2](https://github.com/paralleldrive/cuid2) identifiers: `usr_ght4k2jxm7pqbv01`. The prefix encodes entity type, improving debuggability across logs, URLs, and support conversations. Same pattern as Stripe (`cus_`, `sub_`), Clerk (`user_`, `org_`). 
IDs are opaque strings – clients must not parse or decode them. ## Format ```text {prefix}_{body} Example: usr_ght4k2jxm7pqbv01 └──3──┘ └─16─┘ 20 chars total ``` * **Prefix:** 3-char lowercase entity type * **Body:** 16-char CUID2 (alphanumeric, starts with letter) ## Prefix Map Defined in `db/schema/id.ts`. Keys are Better Auth model names (not table names). | Model | Prefix | Notes | | -------------- | ------ | ------------------------------------------------------- | | `user` | `usr` | | | `session` | `ses` | | | `account` | `idn` | Maps to `identity` table via `account.modelName` config | | `verification` | `vfy` | | | `organization` | `org` | | | `member` | `mem` | | | `invitation` | `inv` | | | `passkey` | `pky` | | | `subscription` | `sub` | | ## API ```ts import { generateAuthId, generateId } from "@repo/db"; // Auth tables – type-checked against the prefix map generateAuthId("user"); // "usr_ght4k2jxm7pqbv01" // Non-auth tables – any 3-letter prefix generateId("upl"); // "upl_m8xk3jvqp2wnba09" ``` Throws on unknown auth models or invalid prefixes. The CUID2 generator is lazy-initialized (no module-level side effects – safe for Workers isolates). ## Integration Points **Better Auth** – `apps/api/lib/auth.ts`: ```ts advanced: { database: { generateId: ({ model }) => generateAuthId(model as AuthModel), }, }, ``` **Drizzle schema** – `db/schema/*.ts` use `.$defaultFn()` instead of `gen_random_uuid()`: ```ts id: text().primaryKey().$defaultFn(() => generateAuthId("user")), ``` ## Adding a New Model 1. Add the prefix to `AUTH_PREFIX` in `db/schema/id.ts` 2. Use `.$defaultFn(() => generateAuthId("modelName"))` in the schema 3. Re-generate migrations: `bun db:generate` --- --- url: /api/procedures.md --- # Procedures tRPC procedures are the primary way the frontend communicates with the API. Each procedure is either a **query** (read data) or a **mutation** (write data), with optional input validation via Zod. 
## Procedure Types The project defines two base procedures in `apps/api/lib/trpc.ts`: ### `publicProcedure` Accessible to all callers, including unauthenticated users. Context includes `db`, `env`, and `cache` but `session` and `user` may be `null`. ```ts import { publicProcedure } from "../lib/trpc.js"; export const healthRouter = router({ ping: publicProcedure.query(() => { return { status: "ok" }; }), }); ``` ### `protectedProcedure` Requires an authenticated session. Throws `UNAUTHORIZED` if the user is not logged in. Context narrows `session` and `user` to non-null types – no runtime null checks needed. ```ts import { protectedProcedure } from "../lib/trpc.js"; export const userRouter = router({ me: protectedProcedure.query(async ({ ctx }) => { return { id: ctx.user.id, // ✓ guaranteed non-null email: ctx.user.email, name: ctx.user.name, }; }), }); ``` ## Router Files Each domain gets its own router file in `apps/api/routers/`: ``` routers/ ├── billing.ts # billing.subscription ├── organization.ts # organization.list, .create, .update, .delete, ... └── user.ts # user.me, .updateProfile, .list ``` Routers are merged into the root `appRouter` in `apps/api/lib/app.ts`: ```ts const appRouter = router({ billing: billingRouter, user: userRouter, organization: organizationRouter, }); ``` The client calls procedures using the namespace: `api.user.me`, `api.billing.subscription`, etc. ## Input Validation Define inputs with Zod schemas. tRPC validates them automatically and returns structured errors on failure (see [Validation & Errors](./validation-errors)). 
```ts import { z } from "zod"; export const userRouter = router({ updateProfile: protectedProcedure .input( z.object({ name: z.string().min(1).optional(), email: z.email({ error: "Invalid email address" }).optional(), }), ) .mutation(({ input, ctx }) => { // `input` is fully typed: { name?: string; email?: string } return { id: ctx.user.id, ...input }; }), }); ``` For queries with pagination: ```ts list: protectedProcedure .input( z.object({ limit: z.number().min(1).max(100).default(10), cursor: z.string().optional(), }), ) .query(({ input }) => { // input.limit defaults to 10 if not provided return { users: [], nextCursor: null }; }), ``` ## Adding a New Procedure **1. Create the router file** (or add to an existing one): ```ts // apps/api/routers/post.ts import { z } from "zod"; import { protectedProcedure, router } from "../lib/trpc.js"; export const postRouter = router({ list: protectedProcedure .input(z.object({ limit: z.number().max(50).default(20) })) .query(async ({ ctx, input }) => { return ctx.db.query.post.findMany({ limit: input.limit }); }), create: protectedProcedure .input(z.object({ title: z.string().min(1), body: z.string() })) .mutation(async ({ ctx, input }) => { // Insert into database }), }); ``` **2. Register the router** in `apps/api/lib/app.ts`: ```ts import { postRouter } from "../routers/post.js"; const appRouter = router({ billing: billingRouter, user: userRouter, organization: organizationRouter, post: postRouter, // [!code ++] }); ``` **3. 
Call from the frontend** – the types propagate automatically: ```ts const { data } = useSuspenseQuery(api.post.list.queryOptions({ limit: 10 })); ``` ## Naming Conventions * **Router files**: singular noun matching the domain (`user.ts`, `billing.ts`, `organization.ts`) * **Router variables**: `{domain}Router` – `userRouter`, `billingRouter` * **Procedure names**: verb or short phrase – `me`, `list`, `create`, `updateProfile` * **Namespace key**: matches the domain – `user:`, `billing:`, `organization:` ## Testing Procedures Use `createCallerFactory` to test procedures without HTTP: ```ts import { createCallerFactory } from "../lib/trpc"; import { billingRouter } from "./billing"; const createCaller = createCallerFactory(billingRouter); it("returns free plan defaults", async () => { const caller = createCaller(mockContext()); const result = await caller.subscription(); expect(result.plan).toBe("free"); }); ``` --- --- url: /deployment/production-database.md --- # Production Database The production database runs on [Neon PostgreSQL](https://neon.tech/) with [Cloudflare Hyperdrive](https://developers.cloudflare.com/hyperdrive/) providing connection pooling at the edge. ## Neon Setup 1. Create a Neon project at [console.neon.tech](https://console.neon.tech/) (or via [referral](https://get.neon.com/HD157BR)) 2. Create separate databases for staging and production (or use Neon branching) 3. Copy the connection strings – you'll need them for Hyperdrive and migrations The connection string format: `postgresql://user:pass@host/dbname?sslmode=require` ## Hyperdrive Configuration Hyperdrive is provisioned via Terraform. 
The module in `infra/modules/cloudflare/hyperdrive/` parses the Neon connection string and creates a Hyperdrive config with connection pooling: ```bash # Provision Hyperdrive for staging bun infra:staging:edge:apply ``` This creates two Hyperdrive bindings per environment: | Binding | Caching | Use for | | ------------------- | ------------------- | -------------------------------------------------- | | `HYPERDRIVE_CACHED` | Disabled by default | Read-heavy queries (enable in Terraform if needed) | | `HYPERDRIVE_DIRECT` | None | Writes, real-time reads | After Terraform applies, copy the Hyperdrive IDs from the output into `apps/api/wrangler.jsonc` for each environment. See [Database: Connection Architecture](/database/#connection-architecture) for how these bindings are used in application code. ## Running Migrations Migrations run directly against Neon (not through Hyperdrive). The `db/` workspace provides environment-specific commands: ```bash # Staging bun db:migrate:staging # Production bun db:migrate:prod ``` These commands read connection strings from `.env.staging.local` and `.env.prod.local` respectively. See [Database: Migrations](/database/migrations) for the full workflow. ::: warning Always review generated migration SQL before running against production. Use `bun db:generate` to preview changes, then inspect the files in `db/migrations/` before applying. 
::: ## Database Performance * **Connection pooling** – Hyperdrive maintains a pool at the edge, reducing cold-start latency * **Indexes** – add indexes for frequently queried columns, especially foreign keys used in multi-tenant filters * **Monitor slow queries** – use the Neon dashboard to identify and optimize slow queries * **Compute auto-suspend** – Neon suspends idle compute after inactivity; first request after suspend has higher latency --- --- url: /getting-started/project-structure.md --- # Project Structure The project is a Bun monorepo with four applications, shared packages, a database workspace, and infrastructure configuration. ```bash my-app/ ├── apps/ │ ├── web/ # Edge router + Astro marketing site │ ├── app/ # React 19 SPA (TanStack Router) │ ├── api/ # Hono + tRPC API server │ └── email/ # React Email templates ├── packages/ │ ├── ui/ # shadcn/ui component library │ ├── core/ # Shared utilities │ ├── ws-protocol/ # WebSocket protocol template │ └── typescript-config/ # Shared tsconfig presets ├── db/ # Drizzle ORM schemas and migrations ├── infra/ # Terraform (Cloudflare Workers, DNS) ├── docs/ # Documentation (VitePress) ├── scripts/ # Build and utility scripts └── package.json # Monorepo root ``` ## Applications ### `apps/web` – Edge Router Cloudflare Worker that serves as the public-facing entry point. Routes `/api/*` to the API worker and app routes (`/login`, `/settings`, etc.) to the app worker via [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/). Also serves the Astro-built marketing site for unauthenticated visitors. Uses an [auth hint cookie](/adr/001-auth-hint-cookie) to decide whether `/` shows the app or the landing page. ### `apps/app` – Frontend SPA React 19 single-page application built with Vite. Uses TanStack Router for file-based routing (`apps/app/routes/`), TanStack Query for server state, Jotai for client state, and shadcn/ui components. 
Deployed as a Cloudflare Worker with static assets. ### `apps/api` – API Server Cloudflare Worker running [Hono](https://hono.dev) for HTTP routing and [tRPC](https://trpc.io) for type-safe RPC. Handles authentication (Better Auth), database queries (Drizzle ORM via Hyperdrive), and Stripe webhooks. Has `nodejs_compat` enabled – the other workers do not. ### `apps/email` – Email Templates [React Email](https://react.email) templates used for OTP codes, invitations, and transactional emails. Built before the API dev server starts so templates are available at runtime. Preview with `bun email:dev`. ## Packages | Package | Description | | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | | `packages/ui` | [shadcn/ui](https://ui.shadcn.com) components (new-york style) with Tailwind CSS v4. Add components with `bun ui:add `. | | `packages/core` | Shared utilities and constants used across apps. | | `packages/ws-protocol` | WebSocket message protocol template for real-time features. | | `packages/typescript-config` | Shared `tsconfig.json` presets for consistent compiler settings. | ## Database Workspace The `db/` workspace contains Drizzle ORM table definitions (`db/schema/`), migration files (`db/migrations/`), and seed scripts (`db/seeds/`). It targets Neon PostgreSQL with Cloudflare Hyperdrive for connection pooling. See [Database Overview](/database/) for details. 
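The schema directory, migrations folder, and casing convention come together in `db/drizzle.config.ts`. A rough sketch of that file, assuming Drizzle Kit's standard `defineConfig` options rather than the repo's actual contents:

```typescript
// db/drizzle.config.ts – illustrative sketch; the real file may differ
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./schema/index.ts", // barrel file exporting every table
  out: "./migrations", // where `bun db:generate` writes SQL files
  casing: "snake_case", // camelCase TS properties → snake_case columns
  dbCredentials: { url: process.env.DATABASE_URL! },
});
```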
## Key Configuration Files | File | Purpose | | ----------------------- | ----------------------------------------------------- | | `infra/` | Terraform modules for Cloudflare resources | | `apps/*/wrangler.jsonc` | Cloudflare Worker configuration per app | | `db/drizzle.config.ts` | Drizzle ORM migration configuration | | `.env` | Shared environment defaults (committed to git) | | `.env.local` | Local secrets and overrides (git-ignored) | | `tsconfig.json` | Root TypeScript project references | | `package.json` | Monorepo root – workspaces, scripts, dev dependencies | --- --- url: /database/queries.md --- # Query Patterns Common patterns for querying the database in tRPC procedures. All examples use Drizzle ORM's relational query API and assume access to `ctx.db` from [tRPC context](/api/context). ## Multi-tenant Queries Every query that returns user data must be scoped to the current organization. The active organization ID is available on the session: ```ts const products = await ctx.db.query.product.findMany({ where: eq(product.organizationId, ctx.session.activeOrganizationId), }); ``` ::: warning Forgetting the organization filter leaks data across tenants. Treat this as a security invariant – every table with an `organizationId` column must filter by it. ::: ## Relations Drizzle's `with` clause loads related records in a single query: ```ts const org = await ctx.db.query.organization.findFirst({ where: eq(organization.id, orgId), with: { members: { with: { user: true }, }, }, }); ``` Select only the columns you need to reduce payload size: ```ts const products = await ctx.db.query.product.findMany({ where: eq(product.organizationId, orgId), columns: { id: true, name: true, price: true }, with: { creator: { columns: { id: true, name: true }, }, }, }); ``` ## DataLoader Pattern The API uses a [DataLoader](https://github.com/graphql/dataloader) pattern to batch lookups and prevent N+1 queries. 
Loaders are defined with `defineLoader` and cached per-request in `ctx.cache`: ```ts // apps/api/lib/loaders.ts (simplified) export const userById = defineLoader( Symbol("userById"), async (ctx, ids: readonly string[]) => { const users = await ctx.db .select() .from(user) .where(inArray(user.id, [...ids])); return mapByKey(users, "id", ids); }, ); ``` Use loaders when a procedure needs to fetch the same entity type for multiple IDs: ```ts const creator = await userById(ctx).load(product.createdBy); ``` See [Context & Middleware – DataLoaders](/api/context#dataloaders) for the full pattern and how to add new loaders. ## Access Control Verify organization membership before returning data: ```ts const membership = await ctx.db.query.member.findFirst({ where: and(eq(member.userId, ctx.user.id), eq(member.organizationId, orgId)), }); if (!membership) { throw new TRPCError({ code: "FORBIDDEN" }); } ``` Check roles for privileged operations: ```ts if (membership.role !== "owner" && membership.role !== "admin") { throw new TRPCError({ code: "FORBIDDEN" }); } ``` ## Design Patterns ### Multi-tenant Data Isolation Every domain table should reference an organization with cascade delete: ```ts export const yourTable = pgTable("your_table", { id: text() .primaryKey() .$defaultFn(() => generateId("xxx")), organizationId: text() .notNull() .references(() => organization.id, { onDelete: "cascade" }), // ... 
}); ``` ### Soft Deletes When you need to preserve records for auditing: ```ts // Schema deletedAt: timestamp({ withTimezone: true, mode: "date" }), // Query – exclude soft-deleted records const active = await ctx.db.query.product.findMany({ where: and( eq(product.organizationId, orgId), isNull(product.deletedAt), ), }); // Soft delete await ctx.db .update(product) .set({ deletedAt: new Date() }) .where(eq(product.id, productId)); ``` ### Audit Fields Track who created and modified records: ```ts createdBy: text().references(() => user.id), updatedBy: text().references(() => user.id), ``` ### Batch Inserts Use array values for bulk operations: ```ts await ctx.db.insert(product).values([ { name: "Product A", price: 1000, organizationId: orgId }, { name: "Product B", price: 2000, organizationId: orgId }, ]); ``` --- --- url: /getting-started/quick-start.md --- # Quick Start ::: tip TL;DR ```bash git clone -o seed -b main --single-branch \ https://github.com/kriasoft/react-starter-kit.git my-app cd my-app && bun install && bun dev ``` ::: ## Prerequisites * **[Bun](https://bun.sh)** 1.3.0 or later * A **[Cloudflare](https://dash.cloudflare.com/sign-up)** account (free tier works) ::: info Node.js Optional This project runs entirely on Bun. You don't need Node.js unless you're integrating with Node-specific tools. ::: ## Create Your Project ### Option A: GitHub Template 1. Go to [github.com/kriasoft/react-starter-kit](https://github.com/kriasoft/react-starter-kit) 2. Click **"Use this template"** → **"Create a new repository"** 3. Clone your new repository: ```bash git clone https://github.com/YOUR_USERNAME/YOUR_PROJECT.git cd YOUR_PROJECT bun install ``` ::: tip This creates a clean repository without the template's commit history. 
::: ### Option B: Git Clone Clone with a custom remote name so you can pull template updates later: ```bash git clone -o seed -b main --single-branch \ https://github.com/kriasoft/react-starter-kit.git my-app cd my-app bun install ``` Add your own repository as `origin`: ```bash git remote add origin https://github.com/YOUR_USERNAME/YOUR_PROJECT.git git push -u origin main ``` To pull template updates later: ```bash git fetch seed git merge seed/main ``` ::: warning Review template updates carefully before merging – schema or config changes may need manual resolution. ::: ## Start the Dev Server ```bash bun dev ``` This starts three services concurrently: | Service | URL | Description | | ------- | ----------------------- | ------------------------- | | App | `http://localhost:5173` | React SPA with hot reload | | API | `http://localhost:8787` | Hono + tRPC server | | Web | `http://localhost:4321` | Astro marketing site | You can also start services individually: ```bash bun app:dev # React app only bun api:dev # API server only bun web:dev # Marketing site only bun email:dev # Email template preview at http://localhost:3001 ``` ## Explore the Stack * **App** at `http://localhost:5173` – your React app with TanStack Router * **API** at `http://localhost:8787` – tRPC endpoints * **Database GUI** – run `bun db:studio` to open Drizzle Studio * **Email preview** – run `bun email:dev` for template preview at `http://localhost:3001` ## Make It Yours 1. Update branding in `apps/app/index.html` 2. Edit the homepage at `apps/app/routes/(app)/index.tsx` 3. Add API procedures in `apps/api/routers/` 4. 
Define data models in `db/schema/` ## Development Commands ```bash bun dev # Start all services concurrently bun test # Run tests (Vitest, single run) bun lint # ESLint with cache bun typecheck # TypeScript type checking (tsc --build) bun build # Production build: email → web → api → app ``` ::: info After modifying tRPC routes, types update automatically – no manual sync needed. After editing `db/schema/`, run `bun db:generate` then `bun db:push` to apply changes. ::: --- --- url: /frontend/routing.md --- # Routing The app uses [TanStack Router](https://tanstack.com/router/latest) with file-based routing. Routes are defined as files in `apps/app/routes/` and TanStack Router generates a typed route tree automatically. ## Route File Convention Each file in `routes/` becomes a route. The file path determines the URL: ```bash apps/app/routes/ ├── __root.tsx → Root layout (wraps everything) ├── (auth)/ │ ├── login.tsx → /login │ └── signup.tsx → /signup └── (app)/ ├── route.tsx → Layout for all (app) routes ├── index.tsx → / (dashboard) ├── settings.tsx → /settings ├── users.tsx → /users ├── analytics.tsx → /analytics ├── reports.tsx → /reports ├── dashboard.tsx → /dashboard (redirects to /) └── about.tsx → /about ``` Parenthesized directories like `(app)` and `(auth)` are **route groups** – they create layout boundaries without affecting the URL. `/settings` is the URL, not `/(app)/settings`. The generated route tree lives at `apps/app/lib/routeTree.gen.ts`. Don't edit it – run `bun app:dev` and TanStack Router regenerates it on file changes. 
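The path-to-URL mapping above can be approximated with a few string operations — purely as a mental model, not TanStack Router's actual algorithm (which also handles special files like `route.tsx` and `__root.tsx`):

```typescript
// Illustrative only: route groups in parentheses vanish from the URL,
// and index.tsx maps to the parent path.
function routePathToUrl(filePath: string): string {
  const url = filePath
    .replace(/\.tsx$/, "")
    .split("/")
    .filter((segment) => !/^\(.*\)$/.test(segment)) // drop (app), (auth), ...
    .join("/");
  if (url === "" || url === "index") return "/";
  return "/" + url.replace(/\/index$/, "");
}

console.log(routePathToUrl("(auth)/login.tsx")); // → "/login"
console.log(routePathToUrl("(app)/index.tsx")); // → "/"
console.log(routePathToUrl("(app)/settings.tsx")); // → "/settings"
```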
## Route Groups

The two route groups serve different auth requirements:

| Group    | Purpose             | Auth behavior                             |
| -------- | ------------------- | ----------------------------------------- |
| `(app)`  | Protected app pages | Redirects to `/login` if unauthenticated  |
| `(auth)` | Login/signup pages  | Redirects to `/` if already authenticated |

## Root Route

The root route (`__root.tsx`) creates the router context and wraps everything in an error boundary:

```tsx
// apps/app/routes/__root.tsx
export const Route = createRootRouteWithContext<{
  queryClient: QueryClient;
}>()({
  component: Root,
});

function Root() {
  return (
    <ErrorBoundary>
      <Outlet />
      {import.meta.env.DEV && <TanStackRouterDevtools />}
    </ErrorBoundary>
  );
}
```

The `queryClient` in context is what makes `beforeLoad` guards possible – route guards can prefetch or read cached data before rendering.

## Auth Guards

### Protecting app routes

The `(app)/route.tsx` layout guard uses a cache-first strategy for instant navigation:

```tsx
// apps/app/routes/(app)/route.tsx
export const Route = createFileRoute("/(app)")({
  beforeLoad: async ({ context, location }) => {
    // Check cache first – instant when data exists
    let session = getCachedSession(context.queryClient);

    // Fetch only if cache is empty (first visit or after sign-out)
    if (session === undefined) {
      session = await context.queryClient.fetchQuery(sessionQueryOptions());
    }

    // Both user and session must exist for valid auth state
    if (!session?.user || !session?.session) {
      throw redirect({
        to: "/login",
        search: { returnTo: location.href },
      });
    }

    return { user: session.user, session };
  },
  component: AppLayout,
});
```

This pattern makes subsequent navigations between protected routes instant – the session is already cached from the first load.
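Stripped of router specifics, the cache-first strategy is just "read synchronously, fetch only on a miss." A dependency-free sketch (the real guard uses TanStack Query's cache via `getCachedSession`, so every name below is an illustrative stand-in, not the app's actual code):

```typescript
// Illustrative stand-in for the guard's cache-first session lookup.
type Session = { user: { id: string }; session: { token: string } } | null;

const sessionCache = new Map<string, Session>();
let fetchCount = 0;

// Stand-in for queryClient.fetchQuery(sessionQueryOptions())
async function fetchSession(): Promise<Session> {
  fetchCount++;
  return { user: { id: "usr_123" }, session: { token: "tok_abc" } };
}

// Mirrors the beforeLoad logic: cached value wins, network only on a miss.
async function getSession(): Promise<Session> {
  if (sessionCache.has("session")) return sessionCache.get("session")!;
  const session = await fetchSession();
  sessionCache.set("session", session);
  return session;
}
```

Calling `getSession()` twice triggers only one fetch; the second call resolves from the cache, which is why navigating between protected routes feels instant.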
### Redirecting authenticated users

Login and signup routes redirect authenticated users away:

```tsx
// apps/app/routes/(auth)/login.tsx
export const Route = createFileRoute("/(auth)/login")({
  validateSearch: searchSchema,
  beforeLoad: async ({ context, search }) => {
    try {
      const session = await context.queryClient.fetchQuery(
        sessionQueryOptions(),
      );
      if (session?.user && session?.session) {
        throw redirect({ to: search.returnTo ?? "/" });
      }
    } catch (error) {
      if (isRedirect(error)) throw error;
      // Show login form for fetch errors
    }
  },
  component: LoginPage,
});
```

## Search Params

Validate and transform search params with Zod. The login route sanitizes `returnTo` to prevent open redirects:

```tsx
const searchSchema = z.object({
  returnTo: z
    .string()
    .optional()
    .transform((val) => {
      const safe = getSafeRedirectUrl(val);
      return safe === "/" ? undefined : safe;
    })
    .catch(undefined),
});
```

Access validated search params in the component:

```tsx
function LoginPage() {
  const search = Route.useSearch();
  // search.returnTo is guaranteed safe – validated at parse time
}
```

## Navigation

Use the `<Link>` component for type-safe navigation:

```tsx
import { Link } from "@tanstack/react-router";

<Link to="/settings">Settings</Link>

// Active styling
<Link to="/settings" activeProps={{ className: "font-bold" }}>
  Settings
</Link>

// With search params
<Link to="/login" search={{ returnTo: "/settings" }}>
  Log in
</Link>
```

For programmatic navigation:

```tsx
const router = useRouter();
await router.navigate({ to: "/settings" });
```

## Adding a New Route

1. Create a route file:

   ```tsx
   // apps/app/routes/(app)/projects.tsx
   import { createFileRoute } from "@tanstack/react-router";

   export const Route = createFileRoute("/(app)/projects")({
     component: Projects,
   });

   function Projects() {
     return (
       <div>
         <h1>Projects</h1>
       </div>
     );
   }
   ```

2. The route tree regenerates automatically during `bun app:dev`. The new page is available at `/projects` and protected by the `(app)` layout guard.

3. Add navigation in the sidebar or header as needed.

See [State & Data Fetching](./state.md) for loading data in your new route.

For more on TanStack Router, see the [official docs](https://tanstack.com/router/latest/docs/framework/react/overview).

---

---
url: /database/schema.md
---

# Schema

The database schema lives in `db/schema/`, with one file per entity group. Drizzle ORM's `casing: "snake_case"` option maps camelCase TypeScript properties to snake\_case database columns automatically.

## Conventions

**Primary keys** – All tables use application-generated prefixed CUID2 IDs (e.g., `usr_ght4k2jxm7pqbv01`). The 3-character prefix encodes the entity type for recognition in logs, URLs, and support tickets.

| Model        | Prefix | Table          |
| ------------ | ------ | -------------- |
| user         | `usr`  | `user`         |
| session      | `ses`  | `session`      |
| account      | `idn`  | `identity`     |
| verification | `vfy`  | `verification` |
| organization | `org`  | `organization` |
| member       | `mem`  | `member`       |
| invitation   | `inv`  | `invitation`   |
| passkey      | `pky`  | `passkey`      |
| subscription | `sub`  | `subscription` |

IDs are generated at the application level via `$defaultFn()` – no database sequences or UUID functions. See `db/schema/id.ts` for the implementation and [Prefixed CUID2 IDs](/specs/prefixed-ids) for design rationale.

**Timestamps** – Every table has `createdAt` and `updatedAt` columns using `timestamp({ withTimezone: true, mode: "date" })`. `createdAt` defaults to `now()`; `updatedAt` auto-updates via `$onUpdate(() => new Date())`.

**Foreign keys** – All FKs use `onDelete: "cascade"`. Every FK column gets a btree index named `{table}_{column}_idx`.

**No enums** – `member.role` and `invitation.status` are plain `text` columns, not `pgEnum`. This avoids fragile coupling with Better Auth's role values.
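Because roles live as plain `text` in the database, any constraint belongs in application code. One dependency-free way to narrow untrusted role strings — illustrative only, not a helper that exists in the repo; the role names are the member roles (owner, admin, member) used elsewhere in these docs:

```typescript
// Roles are stored as plain text; the type system enforces them in app code.
const MEMBER_ROLES = ["owner", "admin", "member"] as const;
type MemberRole = (typeof MEMBER_ROLES)[number];

function isMemberRole(value: string): value is MemberRole {
  return (MEMBER_ROLES as readonly string[]).includes(value);
}

// Narrow a row's role column (or any untrusted string) before using it.
function parseRole(value: string): MemberRole {
  if (!isMemberRole(value)) throw new Error(`Unknown role: ${value}`);
  return value;
}

console.log(parseRole("admin")); // → "admin"
```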
## Entity Relationship Diagram ```mermaid erDiagram user ||--o{ session : "has" user ||--o{ identity : "authenticates with" user ||--o{ passkey : "registers" user ||--o{ member : "belongs to" user ||--o{ invitation : "invited by" user ||--o{ subscription : "subscribes" organization ||--o{ member : "has members" organization ||--o{ invitation : "receives" organization ||--o{ subscription : "subscribes" user { text id PK "usr_..." text name text email UK boolean email_verified text image boolean is_anonymous text stripe_customer_id } session { text id PK "ses_..." timestamp expires_at text token UK text ip_address text user_agent text user_id FK text active_organization_id } identity { text id PK "idn_..." text account_id text provider_id text user_id FK text access_token text refresh_token text id_token timestamp access_token_expires_at timestamp refresh_token_expires_at text scope text password } verification { text id PK "vfy_..." text identifier text value timestamp expires_at } passkey { text id PK "pky_..." text name text public_key text credential_id UK text user_id FK integer counter text device_type boolean backed_up text transports text aaguid timestamp last_used_at text device_name text platform } organization { text id PK "org_..." text name text slug UK text logo text metadata text stripe_customer_id } member { text id PK "mem_..." text user_id FK text organization_id FK text role } invitation { text id PK "inv_..." text email text inviter_id FK text organization_id FK text role text status timestamp expires_at timestamp accepted_at timestamp rejected_at } subscription { text id PK "sub_..." 
text plan text reference_id text stripe_customer_id text stripe_subscription_id UK text status timestamp period_start timestamp period_end timestamp trial_start timestamp trial_end boolean cancel_at_period_end integer seats text billing_interval } ``` ## Table Groups ### Authentication Tables Managed by [Better Auth](https://www.better-auth.com/docs/concepts/database). Extend with care – changes must stay compatible with the auth framework. | Table | File | Purpose | | -------------- | ------------------- | ------------------------------------------------------------------------------------------------------------- | | `user` | `schema/user.ts` | User accounts – name, email, verification status, Stripe customer ID | | `session` | `schema/user.ts` | Active sessions with device tracking and [active organization context](/auth/sessions) | | `identity` | `schema/user.ts` | OAuth credentials and email/password (Better Auth's `account` table, [renamed](/auth/#identity-table-rename)) | | `verification` | `schema/user.ts` | OTP codes, email verification tokens | | `passkey` | `schema/passkey.ts` | WebAuthn credentials for [passwordless auth](/auth/passkeys) | ::: warning Authentication tables follow [Better Auth's schema requirements](https://www.better-auth.com/docs/concepts/database). When adding columns, register them in the auth config's `additionalFields` to ensure proper data handling. 
::: ::: details user table – TypeScript definition ```ts // db/schema/user.ts export const user = pgTable("user", { id: text() .primaryKey() .$defaultFn(() => generateAuthId("user")), name: text().notNull(), email: text().notNull().unique(), emailVerified: boolean().default(false).notNull(), image: text(), isAnonymous: boolean().default(false).notNull(), stripeCustomerId: text(), createdAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .notNull(), updatedAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .$onUpdate(() => new Date()) .notNull(), }); ``` ::: ### Organization Tables Multi-tenancy via Better Auth's [organization plugin](https://www.better-auth.com/docs/plugins/organization). | Table | File | Purpose | | -------------- | ------------------------ | ---------------------------------------------------------------- | | `organization` | `schema/organization.ts` | Tenants / workspaces – name, slug, logo, metadata | | `member` | `schema/organization.ts` | User ↔ organization membership with roles (owner, admin, member) | | `invitation` | `schema/invitation.ts` | Pending org invitations with status lifecycle | Key constraints: * `member(userId, organizationId)` is unique – one membership per user per org * `invitation(organizationId, email)` is unique – one pending invite per email per org * `session.activeOrganizationId` has an index but no FK constraint (Better Auth design) * `organization.metadata` is `text`, not JSONB – Better Auth serializes it as a string ### Billing Tables Managed by the [`@better-auth/stripe`](https://www.better-auth.com/docs/plugins/stripe) plugin. Do not insert or update records manually – the plugin handles the subscription lifecycle via Stripe webhooks. 
| Table | File | Purpose | | -------------- | ------------------------ | ----------------------------------------------- | | `subscription` | `schema/subscription.ts` | Stripe subscription state, plan, billing period | The `referenceId` column is polymorphic: it points to `user.id` for personal billing or `organization.id` for org-level billing. ## Extended Fields Several tables include columns beyond Better Auth's defaults: * **passkey:** `lastUsedAt` (security audits), `deviceName` (user-friendly label like "MacBook Pro"), `platform` ("platform" or "cross-platform") * **invitation:** `acceptedAt` / `rejectedAt` lifecycle timestamps ## Adding a New Table **1. Create a schema file** in `db/schema/`: ```ts // db/schema/product.ts import { pgTable, text, integer, timestamp } from "drizzle-orm/pg-core"; import { relations } from "drizzle-orm"; import { generateId } from "./id"; import { organization } from "./organization"; import { user } from "./user"; export const product = pgTable("product", { id: text() .primaryKey() .$defaultFn(() => generateId("prd")), name: text().notNull(), description: text(), price: integer().notNull(), organizationId: text() .notNull() .references(() => organization.id, { onDelete: "cascade" }), createdBy: text() .notNull() .references(() => user.id), createdAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .notNull(), updatedAt: timestamp({ withTimezone: true, mode: "date" }) .defaultNow() .$onUpdate(() => new Date()) .notNull(), }); export const productRelations = relations(product, ({ one }) => ({ organization: one(organization, { fields: [product.organizationId], references: [organization.id], }), creator: one(user, { fields: [product.createdBy], references: [user.id], }), })); ``` **2. Export from the barrel file:** ```ts // db/schema/index.ts export * from "./product"; // [!code ++] ``` **3. 
Generate and apply the migration:** ```bash bun db:generate bun db:migrate ``` See [Migrations](./migrations) for the full workflow. ## Extending Auth Tables To add custom columns to authentication tables, update both the Drizzle schema and the Better Auth config: ```ts // db/schema/user.ts – add the column export const user = pgTable("user", { // ... existing fields ... phoneNumber: text(), // [!code ++] }); ``` ```ts // apps/api/lib/auth.ts – register with Better Auth betterAuth({ user: { additionalFields: { phoneNumber: { type: "string", required: false }, // [!code ++] }, }, }); ``` Then [generate and apply migrations](./migrations) as usual. --- --- url: /security/checklist.md --- # Security Best Practices Checklist A comprehensive security checklist for React Starter Kit applications. Review this checklist during development, before deployment, and regularly in production. ## Development Phase ### Code Security #### Input Validation * \[ ] Validate all user inputs on both client and server * \[ ] Use Zod schemas for type-safe validation * \[ ] Sanitize HTML content to prevent XSS * \[ ] Validate file uploads (type, size, content) * \[ ] Implement rate limiting on forms and APIs ```typescript // Example: tRPC input validation with Zod export const userRouter = router({ create: publicProcedure .input( z.object({ email: z.string().email(), name: z.string().min(1).max(100), age: z.number().int().positive().max(120), }), ) .mutation(async ({ input }) => { // Input is already validated }), }); ``` #### Authentication & Authorization * \[ ] Use Better Auth for authentication * \[ ] Implement proper session management * \[ ] Use secure session storage (httpOnly cookies) * \[ ] Implement CSRF protection * \[ ] Check permissions on every protected route * \[ ] Log authentication events ```typescript // Example: Protected tRPC procedure export const protectedProcedure = t.procedure.use(async ({ ctx, next }) => { if (!ctx.session?.user) { throw new TRPCError({ code: 
"UNAUTHORIZED" }); } return next({ ctx: { ...ctx, user: ctx.session.user } }); }); ``` #### Data Protection * \[ ] Never log sensitive data (passwords, tokens, PII) * \[ ] Use parameterized queries (Drizzle ORM) * \[ ] Encrypt sensitive data at rest * \[ ] Implement proper error handling without data leaks * \[ ] Use HTTPS for all communications * \[ ] Validate and sanitize database queries ```typescript // Example: Safe database query with Drizzle const users = await db .select() .from(usersTable) .where(eq(usersTable.email, email)); // Parameterized, prevents SQL injection ``` ### Secret Management #### Environment Variables * \[ ] Store secrets in `.env.local` (never commit) * \[ ] Use `.env` only for non-sensitive defaults * \[ ] Document required variables in `.env` * \[ ] Validate environment variables at startup * \[ ] Use different secrets for each environment ```typescript // Example: Environment validation const env = z .object({ DATABASE_URL: z.string().url(), JWT_SECRET: z.string().min(32), SMTP_PASSWORD: z.string(), PUBLIC_API_URL: z.string().url(), // Safe for client }) .parse(process.env); ``` #### Production Secrets * \[ ] Use Cloudflare Workers secrets for production * \[ ] Rotate secrets regularly * \[ ] Never hardcode secrets in code * \[ ] Audit secret access logs * \[ ] Use secret scanning in CI/CD ### Dependencies #### Package Management * \[ ] Run `bun audit` regularly * \[ ] Review new dependencies before adding * \[ ] Check dependency licenses * \[ ] Enable Dependabot alerts * \[ ] Keep dependencies up to date * \[ ] Use lock files (`bun.lockb`) ```bash # Security audit commands bun audit # Check for vulnerabilities bun update --latest # Update dependencies bun pm ls # List all dependencies ``` #### Supply Chain Security * \[ ] Verify package authenticity * \[ ] Use specific versions (not wildcards) * \[ ] Review dependency source code for critical packages * \[ ] Monitor for dependency hijacking * \[ ] Use SubResource Integrity (SRI) for 
CDN resources

## Pre-Deployment Phase

### Security Headers

#### Configure Headers

* \[ ] Content Security Policy (CSP)
* \[ ] X-Frame-Options
* \[ ] X-Content-Type-Options
* \[ ] Strict-Transport-Security (HSTS)
* \[ ] Referrer-Policy
* \[ ] Permissions-Policy

```typescript
// Example: Security headers in Hono
app.use("*", async (c, next) => {
  await next();
  c.header("X-Frame-Options", "DENY");
  c.header("X-Content-Type-Options", "nosniff");
  c.header("Strict-Transport-Security", "max-age=31536000");
  c.header("Content-Security-Policy", "default-src 'self'");
});
```

### API Security

#### tRPC Security

* \[ ] Validate all inputs with Zod
* \[ ] Implement rate limiting
* \[ ] Use proper error codes
* \[ ] Don't expose internal errors
* \[ ] Log suspicious activities
* \[ ] Implement request timeouts

```typescript
// Example: Rate limiting middleware
const rateLimiter = new Map();

export const rateLimit = middleware(async ({ ctx, next }) => {
  const key = ctx.ip;
  const limit = rateLimiter.get(key) || 0;

  if (limit > 10) {
    throw new TRPCError({ code: "TOO_MANY_REQUESTS" });
  }

  rateLimiter.set(key, limit + 1);
  setTimeout(() => rateLimiter.delete(key), 60000);

  return next();
});
```

#### CORS Configuration

* \[ ] Configure allowed origins explicitly
* \[ ] Don't use wildcard (\*) in production
* \[ ] Validate Origin header
* \[ ] Configure allowed methods and headers
* \[ ] Use credentials carefully

### Client Security

#### React Security

* \[ ] Avoid dangerouslySetInnerHTML
* \[ ] Sanitize user-generated content
* \[ ] Use Content Security Policy
* \[ ] Validate URLs before navigation
* \[ ] Implement proper error boundaries
* \[ ] Don't expose sensitive data in state

```typescript
// Example: Safe HTML rendering
import DOMPurify from 'isomorphic-dompurify'

function SafeHTML({ content }: { content: string }) {
  const sanitized = DOMPurify.sanitize(content)
  return <div dangerouslySetInnerHTML={{ __html: sanitized }} />
}
```

#### Browser Storage

* \[ ] Don't store sensitive data in localStorage
* \[ ] Use httpOnly cookies for sessions
* \[ ] Encrypt sensitive client-side data
* \[ ] Clear storage on logout
* \[ ] Implement storage quotas

## Deployment Phase

### Infrastructure Security

#### Cloudflare Workers

* \[ ] Configure WAF rules
* \[ ] Enable DDoS protection
* \[ ] Set up rate limiting
* \[ ] Configure security headers
* \[ ] Enable bot protection
* \[ ] Monitor security events

```toml
# Example: wrangler.toml security config
[env.production]
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_compat"]

[env.production.rate_limiting]
enabled = true
requests_per_minute = 60
```

#### CI/CD Security

* \[ ] Use least privilege for CI/CD tokens
* \[ ] Store secrets securely (GitHub Secrets)
* \[ ] Enable branch protection
* \[ ] Require code reviews
* \[ ] Run security checks in pipeline
* \[ ] Sign commits and releases

```yaml
# Example: GitHub Actions security
- name: Run security audit
  run: |
    bun audit
    bun test:security

- name: SAST Scan
  uses: github/super-linter@v5
  env:
    VALIDATE_JAVASCRIPT_ES: true
    VALIDATE_TYPESCRIPT_ES: true
```

### Monitoring & Logging

#### Security Monitoring

* \[ ] Log authentication attempts
* \[ ] Monitor for suspicious patterns
* \[ ] Set up security alerts
* \[ ] Track rate limit violations
* \[ ] Monitor dependency vulnerabilities
* \[ ] Review logs regularly

```typescript
// Example: Security event logging
function logSecurityEvent(event: {
  type: string;
  user?: string;
  ip: string;
  details: Record<string, unknown>;
}) {
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      severity: "SECURITY",
      ...event,
    }),
  );
}
```

#### Incident Response

* \[ ] Have incident response plan ready
* \[ ] Configure security notifications
* \[ ] Set up backup and recovery
* \[ ] Document security contacts
* \[ ] Test incident procedures
* \[ ] Keep security playbook updated

## Production Phase

### Ongoing Security

#### Regular Tasks

* \[ ] Weekly: Review
security alerts * \[ ] Monthly: Run dependency audits * \[ ] Quarterly: Security assessment * \[ ] Annually: Penetration testing * \[ ] Ongoing: Security training #### Security Updates * \[ ] Monitor security advisories * \[ ] Apply patches promptly * \[ ] Test updates in staging * \[ ] Document security changes * \[ ] Communicate with users about security ### Compliance #### Data Protection * \[ ] Implement GDPR compliance (if applicable) * \[ ] Add privacy policy * \[ ] Implement data deletion * \[ ] Log data access * \[ ] Encrypt personal data #### Security Documentation * \[ ] Maintain SECURITY.md * \[ ] Document security procedures * \[ ] Keep incident log * \[ ] Update security checklist * \[ ] Train team on security ## Quick Security Wins For immediate security improvements: 1. **Run Security Audit** ```bash bun audit ``` 2. **Add Security Headers** ```typescript // apps/api/src/index.ts app.use(securityHeaders()); ``` 3. **Implement Rate Limiting** ```typescript // apps/api/src/middleware.ts app.use(rateLimit({ limit: 100, window: "1m" })); ``` 4. **Enable HTTPS Redirect** ```typescript // apps/web/src/index.ts if (location.protocol === "http:") { location.replace("https:" + window.location.href.substring(5)); } ``` 5. **Add Input Validation** ```typescript // Use Zod everywhere const schema = z.object({ /* ... 
*/ }); const validated = schema.parse(input); ``` ## Security Resources ### Tools * [OWASP ZAP](https://www.zaproxy.org/) – Security scanning * [Snyk](https://snyk.io/) – Dependency scanning * [GitHub Security](https://github.com/security) – Security features * [Mozilla Observatory](https://observatory.mozilla.org/) – Security assessment ### Documentation * [OWASP Top 10](https://owasp.org/www-project-top-ten/) * [React Security](https://react.dev/learn/security) * [Cloudflare Security](https://developers.cloudflare.com/workers/platform/security/) * [Better Auth Docs](https://better-auth.com/docs/security) ### Emergency Contacts * Security Issues: `security@kriasoft.com` * GitHub Security: [Security Advisories](https://github.com/kriasoft/react-starter-kit/security) * CVE Database: [MITRE CVE](https://cve.mitre.org/) *** *Review this checklist regularly and update based on new threats and best practices.* --- --- url: /security/incident-playbook.md --- # Security Incident Response Playbook This playbook provides step-by-step procedures for handling security incidents in React Starter Kit projects. Each procedure includes specific actions, tools, and decision criteria. ## Quick Reference * **Security Email**: `security@kriasoft.com` * **Incident Tracking**: GitHub Security Advisories * **Communication Channel**: Email (encrypted when possible) * **Escalation**: Project maintainers via GitHub ## Incident Classification ### Determining Severity Use this decision tree to classify incidents: ``` Is remote code execution possible? ├─ Yes → CRITICAL (P0) └─ No → Can authentication be bypassed? ├─ Yes → CRITICAL (P0) └─ No → Is sensitive data exposed? ├─ Yes (all users) → CRITICAL (P0) ├─ Yes (subset) → HIGH (P1) └─ No → Is privilege escalation possible? ├─ Yes → HIGH (P1) └─ No → Is XSS present? ├─ Yes (auth flow) → HIGH (P1) ├─ Yes (other) → MEDIUM (P2) └─ No → Is CSRF possible? 
├─ Yes → MEDIUM (P2) └─ No → LOW (P3) ``` ## Phase 1: Initial Response ### Step 1.1: Acknowledge Report (0-2 hours) **Actions:** 1. Send acknowledgment email with template: ``` Subject: [RSK-SEC-YYYY-NNN] Security Report Received Thank you for your security report. We have received your submission and assigned tracking ID: RSK-SEC-YYYY-NNN We will begin our initial assessment and respond within [TIMEFRAME]. Please keep this vulnerability confidential while we investigate. ``` 2. Create private GitHub issue for tracking 3. Assign initial responder **Tools:** Email client, GitHub Issues (private) ### Step 1.2: Initial Assessment (2-24 hours) **Actions:** 1. Review report for completeness 2. Attempt to reproduce vulnerability 3. Determine affected components 4. Classify severity using decision tree **Decision Points:** * If cannot reproduce – Request clarification * If critical – Immediately notify maintainers * If valid – Proceed to Phase 2 ### Step 1.3: Form Response Team **For Critical/High severity:** * Lead: Project maintainer * Developer: Fix implementation * Reviewer: Code review and testing * Communicator: External updates **For Medium/Low severity:** * Lead: Available maintainer * Developer: Fix implementation ## Phase 2: Investigation & Containment ### Step 2.1: Deep Dive Analysis (Day 1-2) **Actions:** 1. Set up isolated test environment 2. Reproduce vulnerability with minimal test case 3. Identify root cause 4. Document attack vectors 5. Check for similar vulnerabilities **Checklist:** * \[ ] Vulnerability reproduced * \[ ] Root cause identified * \[ ] Attack surface mapped * \[ ] Similar code patterns checked * \[ ] Impact assessment complete ### Step 2.2: Temporary Mitigation (If Critical) **Actions:** 1. Develop temporary workaround 2. Test workaround doesn't break functionality 3. Document workaround for users 4. 
Publish security bulletin with mitigation **Template for Security Bulletin:** ```markdown ## Security Bulletin: [TITLE] **Date**: [DATE] **Severity**: [CRITICAL/HIGH] **Status**: Under Investigation ### Summary We are investigating a security vulnerability in React Starter Kit. ### Temporary Mitigation Until a patch is available, users should: 1. [Specific mitigation steps] 2. [Additional steps] ### Timeline - Patch expected: [DATE] - Full disclosure: After patch ### Contact Report issues to: `security@kriasoft.com` ``` ## Phase 3: Development & Testing ### Step 3.1: Develop Fix (Varies by severity) **Actions:** 1. Create private branch for fix 2. Implement minimal fix (no refactoring) 3. Add regression tests 4. Document code changes **Code Review Checklist:** * \[ ] Fix addresses root cause * \[ ] No new vulnerabilities introduced * \[ ] Tests cover vulnerability scenario * \[ ] Changes are minimal and focused * \[ ] No sensitive info in comments/commits ### Step 3.2: Testing Protocol **Test Environments:** 1. Local development 2. Isolated staging 3. Integration testing 4. Performance impact **Test Cases:** * \[ ] Original PoC no longer works * \[ ] Legitimate functionality preserved * \[ ] No performance regression * \[ ] No new error conditions * \[ ] Edge cases handled ### Step 3.3: Prepare Release **Actions:** 1. Update version numbers 2. Write release notes 3. Prepare security advisory 4. Request CVE (if applicable) **CVE Request Template:** ``` [Contact GitHub Security for CVE] Repository: react-starter-kit Vulnerability Type: [TYPE] Affected Versions: < X.Y.Z Fixed Version: X.Y.Z Description: [DESCRIPTION] ``` ## Phase 4: Release & Disclosure ### Step 4.1: Coordinated Release **Release Checklist:** * \[ ] Code merged to main branch * \[ ] Version tagged and released * \[ ] Security advisory drafted * \[ ] Reporter notified of release date * \[ ] Release notes prepared ### Step 4.2: Public Disclosure **Actions:** 1. Publish GitHub Security Advisory 2. 
Update SECURITY.md if needed 3. Send notification to users (if critical) 4. Credit reporter **Security Advisory Template:** ```markdown ## [CVE-YYYY-NNNNN] [Vulnerability Title] **Severity**: [Critical/High/Medium/Low] **Affected Versions**: < X.Y.Z **Patched Version**: X.Y.Z ### Description [Clear description of vulnerability] ### Impact [Potential impact on users] ### Patches Update to version X.Y.Z or later. ### Workarounds [If any temporary workarounds exist] ### References - [Links to fixes] - [Links to documentation] ### Credit Reported by [Name] ([Organization]) ``` ### Step 4.3: User Communication **For Critical vulnerabilities:** 1. Email registered users (if applicable) 2. Post on project blog/website 3. Social media announcement 4. Update documentation **Communication Template:** ``` Subject: [ACTION REQUIRED] Security Update for React Starter Kit A critical security vulnerability has been discovered and patched. Action Required: 1. Update to version X.Y.Z immediately 2. Review security advisory: [LINK] 3. Apply any additional mitigations Details: [BRIEF DESCRIPTION] Questions: `security@kriasoft.com` ``` ## Phase 5: Post-Incident Review ### Step 5.1: Incident Retrospective (Within 1 week) **Meeting Agenda:** 1. Timeline review 2. What went well 3. What could improve 4. Action items 5. Policy updates needed **Questions to Answer:** * How was the vulnerability introduced? * Why wasn't it caught earlier? * How can we prevent similar issues? * Was our response adequate? * What tools/processes need improvement? 
### Step 5.2: Implement Improvements **Common Improvements:** * Add security linting rules * Enhance test coverage * Update coding guidelines * Improve dependency management * Add security checkpoints to CI/CD ### Step 5.3: Documentation Updates **Update as needed:** * This playbook * SECURITY.md * Development guidelines * CI/CD configurations * Security checklist ## Appendix A: Tools & Resources ### Security Tools * **Dependency Scanning**: `bun audit`, Dependabot * **Static Analysis**: ESLint security plugins * **Secret Scanning**: GitHub secret scanning, truffleHog * **SAST**: Semgrep, CodeQL * **Testing**: Vitest for security tests ### Communication Tools * **Encrypted Email**: PGP/GPG * **Secure File Transfer**: Age encryption * **Private Issues**: GitHub Security Advisories ### External Resources * [GitHub Security Advisories](https://docs.github.com/en/code-security/security-advisories) * [CVE Request Process](https://cve.mitre.org/cve/request_id.html) * [OWASP Incident Response](https://owasp.org/www-project-incident-response) * [NIST Incident Handling Guide](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf) ## Appendix B: Contact Templates ### Reporter Follow-up ``` Subject: Re: [RSK-SEC-YYYY-NNN] Status Update Thank you for your patience. Here's an update on your report: Status: [In Progress/Testing Fix/Ready for Release] Severity: [Confirmed as X] Timeline: [Expected resolution date] [Any questions for reporter] We'll notify you before public disclosure. ``` ### Maintainer Escalation ``` Subject: [URGENT] Critical Security Issue - Immediate Action Required A critical vulnerability has been reported: Tracking: RSK-SEC-YYYY-NNN Type: [Vulnerability type] Impact: [Brief impact description] Status: [Confirmed/Under Investigation] Required Actions: 1. [Immediate actions needed] 2. 
[Review assignments] Details in private issue: [Link] ``` ### Release Notification ``` Subject: Security Release Scheduled - [DATE] Security release details: Version: X.Y.Z Release Date: [DATE TIME UTC] Severity: [Level] CVE: [If assigned] Pre-release checklist: - [ ] Code reviewed and tested - [ ] Advisory prepared - [ ] Reporter notified - [ ] Release notes ready Please confirm readiness by [DATE]. ``` ## Revision History * v1.0.0 - Initial playbook creation * Updates logged in commit history *** *This playbook is a living document. Update it based on lessons learned from each incident.* --- --- url: /security/policy-template.md --- # Security Policy & Incident Response Plan ## Our Security Commitment The \[PROJECT\_NAME] team takes security seriously. We appreciate responsible disclosure of vulnerabilities and are committed to working with security researchers to keep our project secure. This document outlines our security policy, incident response procedures, and how to report vulnerabilities. ## Scope This security policy applies to vulnerabilities discovered within the `[REPOSITORY_NAME]` repository. 
The scope includes: * \[List specific components, modules, or features] * \[Example: Core application code and configurations] * \[Example: API endpoints and authentication systems] * \[Example: Database schemas and data handling] * \[Example: Build and deployment processes] ### Out of Scope The following are considered **out of scope** for this policy: * Vulnerabilities in third-party dependencies already publicly disclosed * Issues requiring physical access or compromised credentials * Social engineering attacks * \[Add project-specific exclusions] ## Supported Versions We provide security updates for the following versions: | Version | Supported | | ------- | ------------------ | | \[X.Y.Z] | :white\_check\_mark: | | \[X.Y-1] | :x: | ## Incident Response * **Report Security Issues**: `[SECURITY_EMAIL]` * **Initial Response**: Within \[RESPONSE\_TIME] * **Critical Issues**: Escalated immediately to maintainers ## Reporting a Vulnerability **⚠️ DO NOT report security vulnerabilities through public GitHub issues.** Report to: **`[SECURITY_EMAIL]`** ### Include in Your Report 1. **Description**: Clear explanation of the vulnerability and impact 2. **Steps to Reproduce**: Minimal steps to demonstrate the issue 3. **Proof of Concept**: Code or screenshots if applicable 4. **Affected Version**: Branch or commit hash 5. 
**Suggested Fix**: Optional recommendations ## Incident Response Process ### Severity Classification | Level | Description | Examples | | ----------------- | ----------------------------- | --------------------------------------------------------- | | **Critical (P0)** | Immediate threat to all users | Remote code execution, authentication bypass, data breach | | **High (P1)** | Significant security impact | Privilege escalation, data exposure, XSS in auth flows | | **Medium (P2)** | Limited security impact | XSS in non-critical areas, CSRF vulnerabilities | | **Low (P3)** | Minor security issues | Information disclosure, security misconfigurations | ### Response Timeline | Severity | Initial Response | Fix Target | Disclosure | | -------- | ---------------- | ----------- | ------------ | | Critical | 2 days | 14 days | Upon patch | | High | 3 days | 30 days | Upon patch | | Medium | 5 days | 60 days | Upon patch | | Low | 7 days | Best effort | With release | ### Incident Response Phases #### Phase 1: Detection & Analysis * Acknowledge receipt and assign tracking ID * Validate and reproduce vulnerability * Assess severity and impact * Notify team if critical #### Phase 2: Containment * Implement temporary mitigations * Document affected components * Begin fix development * Prepare communication plan #### Phase 3: Remediation * Develop and test permanent fix * Prepare security patch * Request CVE if appropriate * Coordinate disclosure timeline #### Phase 4: Recovery & Disclosure * Release patched version * Publish security advisory * Update documentation * Credit reporter #### Phase 5: Post-Incident Review * Document lessons learned * Update security practices * Improve detection * Update policies ## Communication Expectations * All communications via email * Regular updates throughout process * Clear explanation if not a vulnerability * Confidentiality until patched ## Safe Harbor We consider security research conducted in good faith to be: * Authorized under 
applicable laws * Exempt from Terms of Service restrictions * Protected from legal action Requirements for safe harbor: * Follow this policy * Report responsibly * Avoid privacy violations * No exploitation beyond demonstration ## Recognition We value security researchers' contributions: * Public credit in advisories (unless anonymous) * Security acknowledgments * Letter of appreciation upon request ## Security Best Practices for Users ### Essential Security Measures 1. **Secret Management** * Never commit secrets to version control * Use environment variables for sensitive data * Implement secret rotation * Enable secret scanning 2. **Authentication & Authorization** * Implement proper session management * Use strong password policies * Enable multi-factor authentication * Regular auth system updates 3. **Dependencies** * Regular security audits * Keep dependencies updated * Review licenses and advisories * Use dependency scanning tools 4. **Deployment** * HTTPS everywhere * Proper CORS policies * Security headers (CSP, HSTS, etc.) * Regular security assessments 5. **Code Security** * Input validation * Output encoding * Parameterized queries * Principle of least privilege ## Additional Resources * [Security Advisories]([GITHUB_SECURITY_URL]) * [OWASP Top 10](https://owasp.org/www-project-top-ten/) * [Project Documentation]([DOCS_URL]) * [Security Checklist]([CHECKLIST_URL]) *** *** *Template Instructions: Replace all \[BRACKETS] with project-specific information and adjust timelines to match your team's capacity.* Thank you for helping us keep \[PROJECT\_NAME] secure! --- --- url: /database/seeding.md --- # Seeding Seed scripts populate your database with test data for development. They live in `db/seeds/` and are orchestrated by `db/scripts/seed.ts`. 
## Running Seeds ```bash bun db:seed # seed development database bun db:seed:staging # seed staging bun db:seed:prod # seed production ``` Seeds use `onConflictDoNothing()`, so they're safe to rerun without duplicating data. ## Project Structure ``` db/ ├── seeds/ │ └── users.ts # Creates 10 test user accounts └── scripts/ └── seed.ts # Entry point – connects to DB, calls seed functions ``` The seed runner imports your Drizzle config for environment resolution, creates a single-connection client, and calls each seed function in sequence. ## Writing a Custom Seed **1. Create a seed file** in `db/seeds/`: ```ts // db/seeds/products.ts import type { PostgresJsDatabase } from "drizzle-orm/postgres-js"; import type * as schema from "../schema"; import { product } from "../schema"; export async function seedProducts(db: PostgresJsDatabase) { const data = [ { name: "Starter Plan Guide", price: 0, organizationId: "org_..." }, { name: "Pro Onboarding Kit", price: 4900, organizationId: "org_..." }, ]; await db.insert(product).values(data).onConflictDoNothing(); console.log(`Seeded ${data.length} products`); } ``` **2. Register in the seed runner:** ```ts // db/scripts/seed.ts import { seedUsers } from "../seeds/users"; import { seedProducts } from "../seeds/products"; // [!code ++] // In the main function: await seedUsers(db); await seedProducts(db); // [!code ++] ``` ## Guidelines * Use realistic but obviously fake data (`alice@example.com`, not real addresses) * Always include `onConflictDoNothing()` so seeds are idempotent * Provide variety – mix of verified/unverified users, different roles, multiple orgs * Keep seed datasets small but representative of real usage patterns * Order seed calls by dependency – users before organizations before memberships --- --- url: /auth/sessions.md --- # Sessions & Protected Routes Session state is managed exclusively through TanStack Query. 
The auth client fetches session data, TanStack Query caches it, and route guards use the cache to protect pages – no direct `auth.useSession()` calls or local storage. ## Session Query The session query is defined in `apps/app/lib/queries/session.ts`: ```ts export function sessionQueryOptions() { return queryOptions({ queryKey: ["auth", "session"], queryFn: async () => { const response = await auth.getSession(); if (response.error) throw response.error; return response.data; }, staleTime: 30_000, // 30 seconds retry(failureCount, error) { const status = getErrorStatus(error); if (status === 401 || status === 403) return false; return failureCount < 3; }, }); } ``` Key behaviors: * Returns `null` when unauthenticated (not an error) * 30-second stale time keeps auth state current without excessive requests * 401/403 errors are not retried – retrying won't help for auth failures * Inherits global `gcTime`, `refetchOnWindowFocus`, and `refetchOnReconnect` from QueryClient defaults ### Session Data Shape ```ts interface SessionData { user: User; // id, name, email, emailVerified, image, ... session: Session; // id, token, expiresAt, activeOrganizationId, ... } ``` Both `user` and `session` must be present for valid auth state. Partial data (only user, only session) is treated as unauthenticated. 
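The both-fields rule can be expressed as a small type guard. This is an illustrative sketch — `isValidSession` and the inline types are hypothetical names, not exports from the codebase:

```typescript
// Illustrative shapes mirroring SessionData above (not the real definitions).
interface SessionDataShape {
  user?: { id: string; email: string } | null;
  session?: { id: string; expiresAt: string } | null;
}

// Hypothetical guard: partial data (only user, or only session) is unauthenticated.
function isValidSession(data: SessionDataShape | null | undefined): boolean {
  return Boolean(data?.user && data?.session);
}
```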
### Reading Session Data ```ts // In components – triggers fetch if stale const { data } = useSessionQuery(); // With Suspense const { data } = useSuspenseSessionQuery(); // Sync check of cache only – no network request const session = getCachedSession(queryClient); const loggedIn = isAuthenticated(queryClient); ``` ## Protected Route Guard The `(app)/route.tsx` layout route protects all app pages with a cache-first auth check: ```ts // apps/app/routes/(app)/route.tsx export const Route = createFileRoute("/(app)")({ beforeLoad: async ({ context, location }) => { // Check cache first – instant navigation if session is cached let session = getCachedSession(context.queryClient); // Fetch only if cache is empty (first load or after cache clear) if (session === undefined) { session = await context.queryClient.fetchQuery(sessionQueryOptions()); } if (!session?.user || !session?.session) { throw redirect({ to: "/login", search: { returnTo: location.href }, }); } return { user: session.user, session }; }, component: AppLayout, }); ``` This pattern means: * **Cached session** → navigation is instant (no network request) * **No cache** → fetches session, redirects to `/login` if unauthenticated * **`returnTo`** → preserves the original URL so users land back after login The session data is returned from `beforeLoad` and available to all child routes via `Route.useRouteContext()`. ## Login Page The login route (`(auth)/login.tsx`) handles the inverse – redirecting authenticated users away: ```ts // apps/app/routes/(auth)/login.tsx export const Route = createFileRoute("/(auth)/login")({ validateSearch: searchSchema, beforeLoad: async ({ context, search }) => { try { const session = await context.queryClient.fetchQuery( sessionQueryOptions(), ); if (session?.user && session?.session) { throw redirect({ to: search.returnTo ?? 
"/" }); } } catch (error) { if (isRedirect(error)) throw error; // Fetch errors → show login form } }, }); ``` After successful authentication, the login page revalidates the session and navigates: ```ts async function handleSuccess() { await revalidateSession(queryClient, router); await router.navigate({ to: search.returnTo ?? "/" }); } ``` `revalidateSession` removes the cached session (forcing a fresh fetch) and invalidates the router so `beforeLoad` re-runs with new data. ## Sign Out The `signOut` function clears the server session, updates the cache, and performs a hard redirect: ```ts // apps/app/lib/queries/session.ts export async function signOut( queryClient: QueryClient, options?: { redirect?: boolean }, ) { try { await auth.signOut(); } finally { queryClient.setQueryData(sessionQueryKey, null); if (options?.redirect !== false) { window.location.href = "/login"; } } } ``` The hard redirect (`window.location.href`) resets all in-memory state – Jotai atoms, component state, TanStack Query cache – ensuring a clean slate between user sessions. Pass `{ redirect: false }` for programmatic sign-out without navigation. `setQueryData(null)` is used instead of `invalidateQueries` to avoid a wasted refetch of a session that no longer exists. ## Auth Error Boundary The `AuthErrorBoundary` wraps protected route layouts and catches authentication errors that occur during rendering (e.g., a tRPC call returns 401): ```ts // apps/app/components/auth/auth-error-boundary.tsx export function AuthErrorBoundary({ children }) { return ( { if (isUnauthenticatedError(error)) { queryClient.removeQueries({ queryKey: sessionQueryKey }); } }} > {children} ); } ``` The fallback UI shows two options: * **Try Again** – resets the error boundary and refetches the session * **Sign In** – clears session cache and redirects to `/login` with `returnTo` Auth errors (401) get the auth-specific fallback. Other errors (500, network) get a generic error fallback with a retry button. 
## Auth Hint Cookie The API worker manages a lightweight routing cookie alongside the session. On sign-in, it sets `__Host-auth=1` (HTTPS) or `auth=1` (HTTP dev). On sign-out or invalid session, it clears it. The web edge worker reads this cookie to decide how to route `/`: ```ts // apps/web/worker.ts const hasAuthHint = getCookie(c, "__Host-auth") === "1" || getCookie(c, "auth") === "1"; const upstream = hasAuthHint ? c.env.APP_SERVICE : c.env.ASSETS; ``` This cookie is **not a security boundary** – it's a performance optimization. False positives (stale cookie after session expiry) cause one extra redirect to `/login`. The app worker is always the authority for session validation. The cookie lifecycle is managed by Better Auth hooks: | Event | Action | | ------------------------------------- | ------------------ | | New session (sign-in, sign-up, OAuth) | Set cookie | | Sign-out | Clear cookie | | Session check with no valid session | Clear stale cookie | See [ADR-001](/adr/001-auth-hint-cookie) for the design rationale. --- --- url: /auth/social-providers.md --- # Social Providers Google OAuth is configured out of the box. The flow redirects users to Google's consent screen, then back to your app where Better Auth creates or links the account. ## Server Configuration Google OAuth credentials are set in `apps/api/lib/auth.ts`: ```ts socialProviders: { google: { clientId: env.GOOGLE_CLIENT_ID, clientSecret: env.GOOGLE_CLIENT_SECRET, }, }, ``` ### Setting Up Google OAuth 1. Go to the [Google Cloud Console](https://console.cloud.google.com/apis/credentials) 2. Create an OAuth 2.0 Client ID (Web application type) 3. Add authorized redirect URI: `https://your-domain.com/api/auth/callback/google` * For local development: `http://localhost:5173/api/auth/callback/google` 4. 
Copy the client ID and secret to your `.env.local`: ```sh GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com GOOGLE_CLIENT_SECRET=GOCSPX-your-client-secret ``` ## Client Component The `GoogleLogin` component in `apps/app/components/auth/google-login.tsx` handles the OAuth redirect: ```ts const handleGoogleLogin = async () => { // Clear stale session before OAuth redirect queryClient.removeQueries({ queryKey: sessionQueryKey }); // OAuth redirects to /login which validates session and redirects to returnTo const callbackURL = returnTo ? `/login?returnTo=${encodeURIComponent(returnTo)}` : "/login"; const result = await auth.signIn.social({ provider: "google", callbackURL, }); }; ``` The flow works as follows: 1. User clicks "Continue with Google" 2. Stale session cache is cleared (prevents showing old data after redirect) 3. `auth.signIn.social()` redirects to Google's consent screen 4. After consent, Google redirects back to `/api/auth/callback/google` 5. Better Auth creates/links the account and sets the session cookie 6. The callback redirects to `callbackURL` (`/login?returnTo=...`) 7. The login page detects the active session and redirects to `returnTo` ### Preserving Return URL The `returnTo` parameter survives the OAuth round-trip by being encoded into the `callbackURL`. When the user lands back on `/login`, the search params schema validates and sanitizes the URL: ```ts const searchSchema = z.object({ returnTo: z .string() .optional() .transform((val) => { const safe = getSafeRedirectUrl(val); return safe === "/" ? undefined : safe; }) .catch(undefined), }); ``` Only same-origin relative paths are accepted – absolute URLs and protocol-relative URLs (`//evil.com`) are rejected. ## Adding Another Provider Better Auth supports [30+ OAuth providers](https://www.better-auth.com/docs/concepts/oauth). To add one: **1. Add server config** in `apps/api/lib/auth.ts`: ```ts socialProviders: { google: { ... 
}, github: { // [!code ++] clientId: env.GITHUB_CLIENT_ID, // [!code ++] clientSecret: env.GITHUB_CLIENT_SECRET, // [!code ++] }, // [!code ++] }, ``` **2. Add env vars** to `apps/api/lib/env.ts` and your `.env.local`. **3. Update the providers list** in `apps/app/lib/auth-config.ts`: ```ts oauth: { providers: ["google", "github"] as const, // [!code ++] }, ``` **4. Create a login button component** following the `GoogleLogin` pattern – clear session cache, call `auth.signIn.social({ provider: "github" })`, handle errors. **5. Add the button** to the `MethodSelection` component in `auth-form.tsx`. --- --- url: /frontend/state.md --- # State & Data Fetching Server state is managed with [TanStack Query](https://tanstack.com/query/latest) through a tRPC integration. Client state uses [Jotai](https://jotai.org/) atoms when needed. ## tRPC Client The tRPC client in `apps/app/lib/trpc.ts` provides two exports: ```tsx import { trpcClient } from "@/lib/trpc"; // Raw tRPC client import { api } from "@/lib/trpc"; // TanStack Query integration ``` * **`trpcClient`** – call procedures directly (useful in query functions, `beforeLoad`, and non-React code) * **`api`** – creates `queryOptions` objects for use with TanStack Query hooks The client sends requests to `/api/trpc` with batched HTTP transport and includes credentials for cookie-based auth. A logger link is added in development. 
## TanStack Query Defaults The `QueryClient` in `apps/app/lib/query.ts` is configured with sensible defaults: | Option | Value | Rationale | | ---------------------- | ---------- | ---------------------------------------------------- | | `staleTime` | 2 min | Prevents redundant API calls during typical sessions | | `gcTime` | 5 min | Balances memory with instant data on back-navigation | | `retry` | 3 | Exponential backoff: 1s, 2s, 4s (capped at 30s) | | `refetchOnWindowFocus` | `true` | Keeps data current after tab switches | | `refetchOnReconnect` | `"always"` | Overrides `staleTime` after connectivity loss | Mutations retry once with a 1s delay. ## Session Query The session query (`apps/app/lib/queries/session.ts`) is the canonical example of a query module. It overrides global defaults where auth requires different behavior: ```tsx export function sessionQueryOptions() { return queryOptions({ queryKey: ["auth", "session"], queryFn: async () => { const response = await auth.getSession(); if (response.error) throw response.error; return response.data; }, // Auth state should stay fresher than general data staleTime: 30_000, // Don't retry 401/403 – retrying won't help retry(failureCount, error) { const status = getErrorStatus(error); if (status === 401 || status === 403) return false; return failureCount < 3; }, }); } ``` Returns `null` when unauthenticated – not an error. 
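For reference, the exponential backoff listed in the QueryClient defaults table above (1s, 2s, 4s, capped at 30s) corresponds to TanStack Query's standard `retryDelay` formula, expressed here as a plain function:

```typescript
// Delay before retry attempt N (0-indexed): 1s, 2s, 4s, ... capped at 30s.
function retryDelay(attemptIndex: number): number {
  return Math.min(1000 * 2 ** attemptIndex, 30_000);
}
```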
The module also exports helpers for cache access: | Export | Purpose | | ---------------------------------------- | ----------------------------------------------------------- | | `useSessionQuery()` | Basic hook | | `useSuspenseSessionQuery()` | Suspense-enabled version | | `getCachedSession(queryClient)` | Sync cache read (no network) | | `isAuthenticated(queryClient)` | Binary check – requires both `user` and `session` | | `signOut(queryClient)` | Clears server session, sets cache to `null`, hard redirects | | `revalidateSession(queryClient, router)` | Removes cached query so `beforeLoad` fetches fresh | ## Billing Query The billing query demonstrates multi-tenant key design – including `activeOrgId` in the key causes automatic refetch when the user switches organizations: ```tsx // apps/app/lib/queries/billing.ts export function billingQueryOptions(activeOrgId?: string | null) { return queryOptions({ queryKey: ["billing", "subscription", activeOrgId ?? null], queryFn: () => trpcClient.billing.subscription.query(), }); } ``` Usage in a component: ```tsx function BillingCard() { const { data: session } = useSessionQuery(); const activeOrgId = session?.session?.activeOrganizationId; const { data: billing, isLoading } = useBillingQuery(activeOrgId); // ... } ``` ## Calling Procedures from Components Use the `api` proxy to create query options, then pass them to TanStack Query hooks: ```tsx import { useSuspenseQuery } from "@tanstack/react-query"; import { api } from "@/lib/trpc"; function UserList() { const { data: users } = useSuspenseQuery(api.user.list.queryOptions()); return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
); } ``` For mutations: ```tsx import { useMutation, useQueryClient } from "@tanstack/react-query"; import { trpcClient } from "@/lib/trpc"; function CreateUserButton() { const queryClient = useQueryClient(); const mutation = useMutation({ mutationFn: (input: { name: string; email: string }) => trpcClient.user.create.mutate(input), onSuccess: () => { queryClient.invalidateQueries({ queryKey: ["user"] }); }, }); return ( <button type="button" disabled={mutation.isPending} onClick={() => mutation.mutate({ name: "New User", email: "new@example.com" })} > Create user </button> ); } ``` ## Cache Invalidation Invalidate by query key prefix to refresh related data after mutations: ```tsx // Invalidate all user queries queryClient.invalidateQueries({ queryKey: ["user"] }); // Invalidate all billing queries (any org) queryClient.invalidateQueries({ queryKey: ["billing", "subscription"] }); ``` For session changes, use `removeQueries` instead of `invalidateQueries` – this forces `beforeLoad` guards to fetch fresh data rather than serving stale cache: ```tsx queryClient.removeQueries({ queryKey: ["auth", "session"] }); await router.invalidate(); ``` ## Jotai Store A global Jotai store is set up in `apps/app/lib/store.ts` for cross-component client state. It's wired into the app via `StoreProvider` but not heavily used – TanStack Query handles most state needs. Use Jotai for UI state that doesn't belong in server cache (theme preference, sidebar open/closed, local filters). ```tsx import { atom, useAtom } from "jotai"; const sidebarOpenAtom = atom(true); function Sidebar() { const [open, setOpen] = useAtom(sidebarOpenAtom); // ... } ``` See [Forms & Validation](./forms.md) for mutation patterns in form submissions. For library reference, see the [TanStack Query docs](https://tanstack.com/query/latest/docs/framework/react/overview), [tRPC docs](https://trpc.io/docs/client/react), and [Jotai docs](https://jotai.org/docs/introduction). --- --- url: /specs/billing.md --- # Stripe Billing Integration ## Overview Stripe billing via `@better-auth/stripe` plugin.
Billing is tightly coupled with auth – customer lifecycle, subscription state, and webhook handling are managed by the same system that manages sessions and organizations. **Non-goals:** Usage-based billing, metered pricing, one-time payments, invoicing, Stripe Elements/embedded checkout, tax calculation, multi-currency. These can be added incrementally. ## Decision Rationale **Better Auth plugin over raw Stripe SDK:** RSK already uses Better Auth for auth, organizations, and sessions. The plugin handles customer sync, subscription lifecycle, webhook verification, and org-level billing – eliminating significant glue code. **Hosted Checkout over embedded Elements:** Stripe Checkout is PCI-compliant out of the box, requires no `@stripe/stripe-js` client dependency, and handles payment method selection, 3D Secure, and receipts. The upgrade path to embedded Elements exists but isn't needed initially. **`createCustomerOnSignUp: true`:** Creates Stripe customer records eagerly on signup. Simplifies the upgrade flow and enables Stripe-side analytics. Trade-off: creates unused records for users who never upgrade. ## Architecture ```text ┌─────────────┐ POST /api/auth/subscription/upgrade ┌───────────────┐ │ Browser │ ──────────────────────────────────────────→ │ API Worker │ │ (app) │ │ (Hono) │ │ │ ←── 302 redirect │ │ │ │──→ Stripe Checkout (hosted) │ Better Auth │ │ │ │ + stripe() │ │ │ POST /api/auth/stripe/webhook │ plugin │ │ │ │ │ │ │ Stripe ────────→│ webhook ──→ │ │ │ │ update DB │ │ │ GET /api/trpc/billing.subscription │ │ │ │ ──────────────────────────────────────────→ │ tRPC router │ └─────────────┘ ←── subscription data (TanStack Query) └───────────────┘ ``` **Data flow:** 1. User clicks "Upgrade" – Better Auth client calls `auth.subscription.upgrade()` 2. Plugin creates Stripe Checkout session – redirects browser to Stripe 3. User completes payment – Stripe sends webhook to `/api/auth/stripe/webhook` 4. 
Plugin verifies signature, updates `subscription` table – client refetches via tRPC **Why tRPC for reads, Better Auth client for mutations:** Subscription queries benefit from TanStack Query caching, batching, and stale-while-revalidate. Mutations (upgrade, cancel, portal) go through the auth client because the plugin handles Stripe API calls, session validation, and org authorization internally. ## Billing Reference Billing is tied to `session.activeOrganizationId` when present; otherwise falls back to `user.id` for personal use. The plugin enforces one active subscription per reference ID. * **Organization context:** `referenceId = activeOrganizationId` – only org owner/admin can manage billing * **No organization:** `referenceId = user.id` – user manages their own subscription * The server derives `referenceId` from the session – no client-side param needed * The billing query key includes `activeOrgId`, so switching organizations automatically fetches fresh billing data via TanStack Query ## Database Schema The plugin uses `stripeCustomerId` on the `user` and `organization` tables, and a `subscription` table. The plugin manages the subscription table – no manual inserts/updates needed. Schema must match plugin expectations. After auth config changes, update the schema in `db/schema/` and run `bun db:generate` to create migrations. ## Plan Configuration Plan limits defined in `apps/api/lib/plans.ts` (single source of truth), referenced by both auth plugin config and tRPC router. Price IDs come from environment variables (`STRIPE_*_PRICE_ID`). Config-as-code is the simplest correct approach – plans rarely change and this makes them testable and version-controlled. **Escape hatch:** The plugin accepts `plans: () => StripePlan[]` for dynamic plans. Switch to this only when a real use case requires runtime plan management (e.g., admin dashboard for plan CRUD). **Limits enforcement:** The `limits` object is returned by the `billing.subscription` tRPC query. 
Enforce limits in application logic (tRPC middleware, UI guards), not in the plugin itself. ## Environment Variables | Variable | Prefix | | ---------------------------- | -------- | | `STRIPE_SECRET_KEY` | `sk_` | | `STRIPE_WEBHOOK_SECRET` | `whsec_` | | `STRIPE_STARTER_PRICE_ID` | `price_` | | `STRIPE_PRO_PRICE_ID` | `price_` | | `STRIPE_PRO_ANNUAL_PRICE_ID` | `price_` | Set in `.env.local` (local dev), Cloudflare secrets (staging/prod). ## Webhook Setup The plugin registers `POST /api/auth/stripe/webhook` automatically. It handles: * `checkout.session.completed` – activates subscription * `customer.subscription.created` – records new subscription * `customer.subscription.updated` – syncs status, cancellation scheduling * `customer.subscription.deleted` – marks subscription canceled ### Stripe Dashboard Configuration ``` Endpoint URL: https://<your-domain>/api/auth/stripe/webhook Events: - checkout.session.completed - customer.subscription.created - customer.subscription.updated - customer.subscription.deleted ``` ### Local Development ```bash stripe listen --forward-to localhost:5173/api/auth/stripe/webhook # Copy the whsec_... signing secret to .env.local ``` ### Raw Body Requirement Stripe webhook verification requires the raw request body. The plugin handles this via `request.text()` – no special Hono middleware needed. ## Testing The plugin tests its own internals (webhooks, checkout, subscription lifecycle, authorization). App tests cover the seams we own: * **Router** (`apps/api/routers/billing.test.ts`) – free plan fallback, plan limits mapping, unknown plan rejection, response shape * **Query** (`apps/app/lib/queries/billing.test.ts`) – cache key includes org ID, null normalization, distinct keys per org, prefix for bulk invalidation Checkout and webhook flows are not retested at app level – verified via `stripe listen` during development.
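The Plan Configuration section above describes plans-as-code in `apps/api/lib/plans.ts`. A rough sketch of that pattern – plan names beyond `free`, the non-free limit values, and the helper name are assumptions; only the free tier's `{ members: 1 }` and the unknown-plan error message are taken from the billing router tests:

```typescript
// Hypothetical sketch of a plans module. Everything except the free-tier
// limit and the unknown-plan error message is illustrative.
type PlanLimits = { members: number };

const PLANS: Record<string, { priceIdEnv: string | null; limits: PlanLimits }> = {
  free: { priceIdEnv: null, limits: { members: 1 } },
  starter: { priceIdEnv: "STRIPE_STARTER_PRICE_ID", limits: { members: 5 } }, // assumed limit
  pro: { priceIdEnv: "STRIPE_PRO_PRICE_ID", limits: { members: 25 } }, // assumed limit
};

// Reject unknown plan names loudly instead of silently falling back.
function getPlan(name: string) {
  const plan = PLANS[name];
  if (!plan) throw new Error(`Unknown plan "${name}"`);
  return plan;
}

console.log(getPlan("free").limits.members); // 1
```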
## File Map | Layer | Files | | ------ | ------------------------------------------------------------------------------------------------------ | | Schema | `db/schema/subscription.ts`, `stripeCustomerId` in `db/schema/user.ts` and `db/schema/organization.ts` | | Server | `apps/api/lib/plans.ts`, `apps/api/lib/stripe.ts`, stripe plugin in `apps/api/lib/auth.ts` | | Router | `apps/api/routers/billing.ts`, registered in `apps/api/lib/app.ts` | | Client | `stripeClient` in `apps/app/lib/auth.ts`, `apps/app/lib/queries/billing.ts` | | UI | Billing card in `apps/app/routes/(app)/settings.tsx` | | Tests | `apps/api/routers/billing.test.ts`, `apps/app/lib/queries/billing.test.ts` | --- --- url: /testing.md --- # Testing The project uses [Vitest](https://vitest.dev/) for both API and frontend tests. Two test projects run from a single root config – API tests in Node, frontend tests in [Happy DOM](https://github.com/capricorn86/happy-dom). ## Configuration The root config defines both projects: ```ts // vitest.config.ts export default defineConfig({ test: { projects: ["apps/api", "apps/app"], }, }); ``` `apps/api` has its own `vitest.config.ts`; `apps/app` uses an inline `test` block in `vite.config.ts`: | Project | Environment | Setup file | | ---------- | -------------- | ----------------- | | `apps/api` | Node (default) | – | | `apps/app` | `happy-dom` | `vitest.setup.ts` | The app setup file registers [jest-dom](https://github.com/testing-library/jest-dom) matchers like `toBeInTheDocument()`: ```ts // apps/app/vitest.setup.ts import "@testing-library/jest-dom/vitest"; ``` ## Running Tests ```bash bun test # All projects, watch mode bun test --run # Single run (no watch) bun test --project @repo/api # API tests only bun test --project @repo/app # Frontend tests only bun test billing # Filter by filename ``` ## File Conventions * Test files live next to the code they test – `billing.ts` → `billing.test.ts` * Import everything from `vitest`, not globals: ```ts import { 
describe, expect, it, vi } from "vitest"; ``` ## Testing tRPC Procedures Use `createCallerFactory` to invoke procedures directly without HTTP. Build a minimal context mock with only the fields the procedure accesses: ```ts // apps/api/routers/billing.test.ts import { describe, expect, it, vi } from "vitest"; import type { TRPCContext } from "../lib/context"; import { createCallerFactory } from "../lib/trpc"; import { billingRouter } from "./billing"; const createCaller = createCallerFactory(billingRouter); function testCtx({ userId = "user-1", activeOrgId = undefined as string | undefined, subscription = undefined as Record<string, unknown> | undefined, } = {}) { const ctx: TRPCContext = { req: new Request("http://localhost"), info: {} as TRPCContext["info"], session: { id: "s-1", createdAt: new Date(), updatedAt: new Date(), userId, expiresAt: new Date(Date.now() + 60_000), token: "token", activeOrganizationId: activeOrgId, }, user: { id: userId, createdAt: new Date(), updatedAt: new Date(), email: "test@example.com", emailVerified: true, name: "Test User", }, db: { query: { subscription: { findFirst: vi.fn().mockResolvedValue(subscription), }, }, } as unknown as TRPCContext["db"], dbDirect: {} as TRPCContext["dbDirect"], cache: new Map(), env: {} as TRPCContext["env"], }; return ctx; } describe("billing.subscription", () => { it("returns free plan defaults when no subscription exists", async () => { const result = await createCaller(testCtx()).subscription(); expect(result).toEqual({ plan: "free", status: null, periodEnd: null, cancelAtPeriodEnd: false, limits: { members: 1 }, }); }); it("throws on unknown plan name", async () => { await expect( createCaller( testCtx({ subscription: { plan: "enterprise", status: "active" } }), ).subscription(), ).rejects.toThrow('Unknown plan "enterprise"'); }); }); ``` Key points: * `createCallerFactory(router)` from `@trpc/server` – calls procedures in-process, no network layer * Cast partial DB mocks with `as unknown as TRPCContext["db"]` –
only stub the methods your procedure actually calls * Use `vi.fn().mockResolvedValue()` for async Drizzle query methods ## Testing Utility Functions Pure functions need no mocking – just import and assert: ```ts // apps/app/lib/errors.test.ts import { describe, expect, it } from "vitest"; import { getErrorMessage, isUnauthenticatedError } from "./errors"; describe("getErrorMessage", () => { it("extracts message from Error instances", () => { expect(getErrorMessage(new Error("Something broke"))).toBe( "Something broke", ); }); it("returns fallback for unknown shapes", () => { expect(getErrorMessage(null)).toBe("An unexpected error occurred"); }); }); ``` ## Testing Query Options Test TanStack Query option factories by inspecting query keys. Use a real `QueryClient` with retries disabled to test cache helpers: ```ts // apps/app/lib/queries/session.test.ts import { QueryClient } from "@tanstack/react-query"; import { describe, expect, it } from "vitest"; import { getCachedSession, isAuthenticated, sessionQueryKey } from "./session"; function createQueryClient() { return new QueryClient({ defaultOptions: { queries: { retry: false } }, }); } describe("isAuthenticated", () => { it("returns true when both user and session exist", () => { const queryClient = createQueryClient(); queryClient.setQueryData(sessionQueryKey, { user: { id: "user-1", email: "test@example.com" }, session: { id: "session-1", expiresAt: new Date() }, }); expect(isAuthenticated(queryClient)).toBe(true); }); it("returns false when no session data cached", () => { expect(isAuthenticated(createQueryClient())).toBe(false); }); }); ``` ## Testing React Components The app project includes [React Testing Library](https://testing-library.com/docs/react-testing-library/intro/) with Happy DOM. 
Components render in a simulated DOM: ```tsx // apps/app/components/example.test.tsx import { render, screen } from "@testing-library/react"; import userEvent from "@testing-library/user-event"; import { describe, expect, it, vi } from "vitest"; import { MyComponent } from "./my-component"; describe("MyComponent", () => { it("renders the label", () => { render(<MyComponent label="Hello" />); expect(screen.getByText("Hello")).toBeInTheDocument(); }); it("calls onClick when button is pressed", async () => { const user = userEvent.setup(); const onClick = vi.fn(); render(<MyComponent label="Hello" onClick={onClick} />); await user.click(screen.getByRole("button")); expect(onClick).toHaveBeenCalledOnce(); }); }); ``` ::: tip Use `userEvent` over `fireEvent` for user interactions – it simulates real browser behavior (focus, keyboard events, pointer events) rather than dispatching synthetic events. ::: ## Mocking ### Function mocks ```ts const fn = vi.fn(); fn.mockReturnValue(42); fn.mockResolvedValue({ data: "ok" }); // async fn.mockImplementation((x) => x + 1); ``` ### Partial object mocks Cast partial mocks when you only need a subset of a typed interface: ```ts const db = { query: { user: { findFirst: vi.fn().mockResolvedValue({ id: "user-1" }) }, }, } as unknown as TRPCContext["db"]; ``` ### Module mocks ```ts vi.mock(import("./some-module.js"), () => ({ myFunction: vi.fn().mockReturnValue("mocked"), })); ``` For partial module mocks that keep the original implementation: ```ts vi.mock(import("./some-module.js"), async (importOriginal) => { const mod = await importOriginal(); return { ...mod, myFunction: vi.fn() }; }); ``` ::: warning Module mocks are hoisted – they run before imports regardless of where you write them. See [Vitest mocking docs](https://vitest.dev/guide/mocking) for details.
::: ## Where Tests Live ``` apps/ ├── api/ │ └── routers/ │ └── billing.test.ts # tRPC procedure tests └── app/ └── lib/ ├── errors.test.ts # utility function tests └── queries/ ├── billing.test.ts # query option tests └── session.test.ts # cache helper tests ``` Place test files next to the source they test. No separate `__tests__` directories. --- --- url: /frontend/ui.md --- # UI The project uses [shadcn/ui](https://ui.shadcn.com/) (new-york style) with [Tailwind CSS v4](https://tailwindcss.com/) for styling. Components live in `packages/ui/` and are shared across all apps in the monorepo. ## Component Management Add components from the shadcn/ui registry: ```bash # Add a single component bun ui:add button # Add multiple components bun ui:add dialog card select # Interactive mode – browse and select bun ui:add # List installed components bun ui:list # Update all installed components bun ui:update ``` Run `bun ui:list` to see which components are currently installed. ## Component Structure Components are stored directly in `packages/ui/components/` – one file per component: ```bash packages/ui/ ├── components/ │ ├── avatar.tsx │ ├── button.tsx │ ├── card.tsx │ ├── ... │ └── textarea.tsx ├── lib/ │ └── utils.ts # cn() utility ├── scripts/ # CLI tooling (add, list, update) ├── index.ts # Barrel export └── package.json ``` All components and utilities are re-exported from the package root (`index.ts`), so importing is straightforward: ```tsx import { Button, Card, CardHeader, CardTitle, Input, cn } from "@repo/ui"; ``` ## Using Components ```tsx import { Card, CardContent, CardDescription, CardHeader, CardTitle, } from "@repo/ui"; function FeatureCard({ title, description, children }) { return ( <Card> <CardHeader> <CardTitle>{title}</CardTitle> <CardDescription>{description}</CardDescription> </CardHeader> <CardContent>{children}</CardContent> </Card> ); } ``` ## The `cn()` Utility Use `cn()` (from `clsx` + `tailwind-merge`) for conditional and merged class names: ```tsx import { Button, cn } from "@repo/ui";