For Builders
How Whitmore is actually built.
This page is for the hiring manager who wants depth, the technical investor who asks good questions, and the founding engineer who's evaluating whether this is real. It is.
01 — Architecture
Multi-tenant with row-level security
Whitmore is a multi-tenant Next.js application running on Vercel, backed by Supabase (PostgreSQL). Every piece of data is scoped to an organization via row-level security policies — no cross-tenant data leakage by construction, not by convention.
The schema is organized around organizations, users, connectors, campaigns, posts, artifacts, and activity. RLS policies on all tables enforce that every query is automatically filtered to the authenticated org — even if the application layer forgets to add a WHERE clause.
Auth is Supabase Auth with JWTs. The org ID is embedded in the JWT and read server-side for all data operations. Client-side components never receive raw org IDs from URL params — they're derived from the authenticated session.
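The org-scoping step can be sketched as a small server-side helper. This is an illustration, not Whitmore's code: the `org_id` claim name is an assumption, and in practice the JWT's signature must already have been verified by Supabase Auth before the payload is trusted.

```typescript
// Sketch: derive the org ID from an already-verified Supabase JWT server-side,
// rather than trusting URL params. The `org_id` claim name is an assumption.
function orgIdFromJwt(jwt: string): string {
  const [, payload] = jwt.split(".");
  if (!payload) throw new Error("malformed JWT");
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
  if (typeof claims.org_id !== "string") throw new Error("missing org_id claim");
  return claims.org_id;
}

// Example: a token whose payload carries { sub: "user-1", org_id: "org-42" }
const payload = Buffer.from(
  JSON.stringify({ sub: "user-1", org_id: "org-42" }),
).toString("base64url");
console.log(orgIdFromJwt(`header.${payload}.sig`)); // prints "org-42"
```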
02 — Agent Orchestration
Claude Sonnet via Anthropic SDK with tool calling
The agent layer is built on the Anthropic SDK with Claude Sonnet as the default model. The agent runs as a streaming tool-use loop: it receives a system prompt containing org context (brand voice, active connectors, recent history), calls tools as needed, and streams the response back to the client via Server-Sent Events.
Prompt caching is used on the system prompt to reduce latency and cost on repeated calls within a session. The system prompt is structured to cache the brand brain (stable, long-lived) while leaving the conversation history uncached (changes every message).
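The cache split can be sketched with the Messages API's `cache_control` blocks. Only the shape follows the public Anthropic API; the block contents here are invented:

```typescript
// Sketch of the system-prompt cache split described above. The brand-brain
// text is invented; the field names follow the public Anthropic Messages API.
const systemBlocks = [
  {
    type: "text" as const,
    text: "Brand brain: voice, tone, active connectors, org facts...", // stable
    cache_control: { type: "ephemeral" as const }, // cached across calls
  },
  {
    type: "text" as const,
    text: "Recent conversation summary...", // changes every message, uncached
  },
];
// Passed as `system: systemBlocks` to messages.create; the prefix up to the
// last cache_control marker is what gets written to / read from the cache.
console.log(systemBlocks.filter((b) => "cache_control" in b).length); // → 1
```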
Tool calling follows the Anthropic tool_use / tool_result pattern. Each connector exposes a set of tools that are conditionally included in the agent's tool list based on which connectors are active for the org. This keeps the context window lean — an org without Google Ads doesn't pay for Google Ads tool tokens.
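A minimal sketch of the conditional tool list, with a hypothetical registry (the tool names are invented, not Whitmore's):

```typescript
interface AgentTool {
  name: string;
  description: string;
}

// Hypothetical registry: which agent tools each connector contributes.
const toolsByConnector: Record<string, AgentTool[]> = {
  meta: [{ name: "meta_publish_post", description: "Publish a post to Meta" }],
  google_ads: [{ name: "google_ads_report", description: "Pull ad metrics" }],
};

// Only include tools for connectors the org has activated, so an org without
// Google Ads never pays for Google Ads tool tokens in the context window.
function toolsForOrg(activeConnectors: string[]): AgentTool[] {
  return activeConnectors.flatMap((c) => toolsByConnector[c] ?? []);
}

console.log(toolsForOrg(["meta"]).map((t) => t.name)); // one tool: meta_publish_post
```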
03 — Connector Framework
Composio + native integrations
Third-party integrations are handled through two layers: Composio for high-level tool wrappers (Meta, Google Calendar, Canva Content Planner) and native API clients for integrations requiring deeper control (Google Ads, Google Analytics).
Each connector in the registry defines: a type identifier, an array of agent tools it provides, an array of skills (scheduled tasks) it enables, and an array of sidebar views it unlocks. When an org activates a connector, all three dimensions become available automatically — no per-connector UI wiring required.
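A registry entry along these lines might look like the following sketch; all identifiers are assumptions, not Whitmore's actual names:

```typescript
// Sketch of a connector registry entry per the description above.
interface ConnectorDefinition {
  type: string;           // type identifier, e.g. "google_calendar"
  tools: string[];        // agent tools it provides
  skills: string[];       // scheduled tasks it enables
  sidebarViews: string[]; // sidebar views it unlocks
}

const googleCalendar: ConnectorDefinition = {
  type: "google_calendar",
  tools: ["calendar_list_events", "calendar_create_event"],
  skills: ["weekly_content_calendar_sync"],
  sidebarViews: ["calendar"],
};

// Activating a connector surfaces all three dimensions at once: no
// per-connector UI wiring required.
function unlocked(defs: ConnectorDefinition[], active: Set<string>) {
  const on = defs.filter((d) => active.has(d.type));
  return {
    tools: on.flatMap((d) => d.tools),
    skills: on.flatMap((d) => d.skills),
    sidebarViews: on.flatMap((d) => d.sidebarViews),
  };
}
```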
OAuth tokens are stored encrypted (see token encryption section) and refreshed automatically. The connector framework handles the OAuth HMAC state parameter for CSRF protection during the authorization flow.
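The HMAC state parameter can be sketched with Node's crypto module. The payload shape and secret handling here are assumptions; the point is that the callback can verify the state was minted by this server for this org:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

// Illustration only: in production the secret comes from the environment.
const STATE_SECRET = "demo-secret";

// Mint a state param: org + nonce, signed with HMAC-SHA256.
function signState(orgId: string): string {
  const nonce = randomBytes(16).toString("hex");
  const body = `${orgId}.${nonce}`;
  const mac = createHmac("sha256", STATE_SECRET).update(body).digest("hex");
  return `${body}.${mac}`;
}

// Verify on the OAuth callback; a forged or tampered state fails the MAC check.
function verifyState(state: string): boolean {
  const i = state.lastIndexOf(".");
  if (i < 0) return false;
  const body = state.slice(0, i);
  const mac = state.slice(i + 1);
  const expected = createHmac("sha256", STATE_SECRET).update(body).digest("hex");
  return (
    mac.length === expected.length &&
    timingSafeEqual(Buffer.from(mac), Buffer.from(expected))
  );
}
```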
04 — Token Encryption
AES-256-GCM with envelope design
OAuth tokens (access tokens, refresh tokens) are encrypted at rest using AES-256-GCM. The encryption key is stored as a Vercel environment variable (ENCRYPTION_KEY). Each token is encrypted with a random 12-byte IV, stored alongside the ciphertext and auth tag as a colon-delimited string.
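The scheme described above can be sketched as follows; the exact field order in the colon-delimited string is an assumption:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustration only: in production the 32-byte key comes from ENCRYPTION_KEY.
const key = randomBytes(32);

// Encrypt a token with a fresh random 96-bit IV; store iv:ciphertext:authTag.
function encryptToken(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, ct, cipher.getAuthTag()].map((b) => b.toString("hex")).join(":");
}

function decryptToken(stored: string): string {
  const [iv, ct, tag] = stored.split(":").map((h) => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM is authenticated; tampering throws on final()
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```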
The current architecture uses a single symmetric key. The roadmap includes migrating to AWS KMS for key rotation support: with an envelope design, rotating the master key means re-wrapping the per-token data keys rather than re-encrypting every stored token.
Token encryption was retrofitted onto an existing system: a migration script ran against all existing rows, encrypted the plaintext tokens in-place, and verified the decrypt round-trip before committing. The migration ran in production without downtime.
05 — Campaign System
Structured assets with multi-channel delivery
Campaigns are containers for marketing assets. Each campaign has structured copy fields (headline, body, CTA, hashtags) plus an array of artifact roles (hero image, carousel slides, story, PDF brief, PPTX deck). Assets can be sourced from chat-generated content, direct uploads, or Canva via the compound Canva tool (create + export + store in one call).
The carousel approval workflow implements a draft → review → approve → publish flow. The agent can draft carousel copy and suggest image descriptions; the human approves and publishes. Published posts are delivered to Meta via the Graph API with the correct asset ordering.
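The approval flow reads as a small linear state machine; the state names here are assumptions based on the flow described above:

```typescript
// Sketch of the draft → review → approve → publish flow as a state machine.
type PostState = "draft" | "review" | "approved" | "published";

const next: Record<PostState, PostState | null> = {
  draft: "review",
  review: "approved",
  approved: "published",
  published: null, // terminal
};

// Advance one step; invalid transitions (past "published") throw.
function advance(state: PostState): PostState {
  const n = next[state];
  if (!n) throw new Error(`cannot advance from ${state}`);
  return n;
}
```

In practice the agent drives the transition into "review" (drafting) while the human drives "approved" and "published"; modeling that split keeps publish authority out of the agent's hands.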
Campaigns are multi-channel by design: a single campaign brief can generate social posts, ad copy variants, a PDF leave-behind, and a slide deck — each tracked as an artifact with a source field indicating its origin.
06 — Eval Architecture
Three-layer testing: unit, agent capability, browser
The eval architecture has three layers. Layer 1: unit tests for deterministic functions (tool implementations, schema validators, encryption round-trips). Layer 2: agent capability evals — scripted conversations that exercise the agent's tool-calling behavior end-to-end against the real API (not mocked). Layer 3: browser evals using Playwright against a local dev server and local Supabase instance — never production.
Agent evals are the most valuable for catching regressions. A capability eval for 'create a campaign' sends a natural-language prompt, waits for the tool-use loop to complete, and asserts that the expected database records exist. These run in CI but are gated — they require real Anthropic API keys and are not parallelized.
The eval runner is invoked via `npm run eval`. Results are logged with pass/fail per capability, token usage, and latency. The goal is to catch model behavior regressions (especially around tool selection and argument formatting) before they reach production.
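The per-capability result logging might aggregate like this sketch (field names assumed):

```typescript
// Sketch of the pass/fail, token usage, and latency logging described above.
interface EvalResult {
  capability: string;
  passed: boolean;
  tokensUsed: number;
  latencyMs: number;
}

function summarize(results: EvalResult[]): string {
  const passed = results.filter((r) => r.passed).length;
  const tokens = results.reduce((n, r) => n + r.tokensUsed, 0);
  return `${passed}/${results.length} passed, ${tokens} tokens`;
}
```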
07 — Migration Strategy
Sequential SQL migrations with CI enforcement
Database migrations are plain SQL files in a numbered sequence (001_, 002_, ...) applied via the Supabase CLI. CI runs `supabase db diff` on every PR to detect unapplied migrations. Migrations are applied to production after merge, not before — so the code and schema change together.
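A CI-style check of the numbered sequence could look like this sketch; the filename convention comes from the text, the check itself is illustrative:

```typescript
// Verify migration filenames (NNN_name.sql) form a gap-free sequence from 001.
function checkSequence(filenames: string[]): string[] {
  const errors: string[] = [];
  const nums = filenames
    .map((f) => /^(\d{3})_.+\.sql$/.exec(f))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => parseInt(m[1], 10))
    .sort((a, b) => a - b);
  nums.forEach((n, i) => {
    if (n !== i + 1) {
      errors.push(`expected ${String(i + 1).padStart(3, "0")}, found ${String(n).padStart(3, "0")}`);
    }
  });
  return errors;
}
```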
RLS policies are defined in the migrations themselves — not in application code or via the Supabase dashboard. This makes the security posture reviewable in code review and reproducible in any environment.
The development workflow uses a local Supabase instance with Docker. Agents doing implementation work are required to test against local Supabase — production is never used for development or QA.
Full Stack
The complete picture
Frontend — Next.js 15 App Router, TypeScript, Tailwind CSS, Framer Motion
API — Next.js API routes, Supabase Edge Functions
Database — PostgreSQL via Supabase, RLS on all tables
Auth — Supabase Auth, JWT-based org scoping
AI — Anthropic SDK, Claude Sonnet, prompt caching, tool calling
Integrations — Composio, Google Ads API, Meta Graph API, Canva MCP
Infrastructure — Vercel (hosting, CI/CD), Supabase (db, storage, auth)
Encryption — AES-256-GCM, Node.js crypto module
Testing — Unit tests, agent capability evals, Playwright browser tests
Observability — Vercel Analytics, Vercel logs
Want to talk architecture, stack decisions, or tradeoffs?
Eugene is happy to go deep. Book a call — bring your hardest questions.