Three weeks. 310 commits. One fully working event platform — from empty repository to live on staging, including multi-tenancy, payments, e-tickets, seating charts, an AI help function and a production-ready CI/CD pipeline.
This is the story of how we built Evender. And what AI meant in that process — honestly, without hyperbole.
What is Evender?
Evender is a multi-tenant event platform for the Dutch market. Organisations can set up their own ticket shop — including branding, custom domain, seating charts, products, payment via Adyen, and barcode scanning at the entrance. On top of the platform, separate tenants each run their own storefront, admin and configuration.
Technically: a NestJS API, two Next.js 14 frontends (storefront and admin), a shared Prisma schema on PostgreSQL, all in one Turborepo monorepo. Deployed via Docker containers to a dedicated server, orchestrated by GitHub Actions.
That's a substantial application. Normally such a foundation takes months. We did it in three weeks.
Day 1: from empty to foundation
It started on March 1st with the first commit: feat: initial monorepo setup with Turborepo.
That same day:
- A complete Prisma schema with seed data
- The NestJS API scaffold with all core modules
- Two Next.js frontends
- A design system with 16 components and Storybook
That sounds unbelievable. But it's what the git history shows. The approach: no blank page, but direct collaboration with Claude as co-developer. Not as a code generator spitting out loose snippets — but as a pair programmer who thinks along about architecture, consistency and the "why" behind decisions.
On day 2 we built the authentication system (JWT, roles, frontend), multi-tenancy, a product catalogue with admin, a complete shopping cart, a checkout flow with a dummy payment provider, and order management with a barcode system and e-ticket generation.
Eighteen commits in one day. Everything logically grouped, everything descriptive. Not because we typed fast — but because we had thought through the structure beforehand, and AI could execute the implementation while we set the direction.
The architecture choices
Building a platform is about making choices. We walk through the most important ones.
Monorepo with Turborepo — We wanted API, two frontends and shared packages in one repository. Turborepo provides smart build caching and parallelisation. The @evender/* package scope keeps everything organised. Downside: setup time. Upside: changes in packages/shared are automatically picked up by all consumers.
NestJS for the API — NestJS's dependency injection, module architecture and decorator-based guards are ideal for adding security layers without spaghetti. The JwtAuthGuard is globally active; new endpoints are secured by default. You use @Public() deliberately, and document why.
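The "secure by default" idea can be reduced to a small sketch. This is a hypothetical stand-in for the real JwtAuthGuard and @Public() decorator, not Evender's actual code, which uses NestJS guards and metadata:

```typescript
// Simplified sketch: every route requires a valid JWT unless it has been
// explicitly marked public. The real implementation is a global NestJS
// guard plus a @Public() metadata decorator; names here are illustrative.
const publicRoutes = new Set<string>();

// Stand-in for the @Public() decorator: opting out is an explicit act.
function markPublic(route: string): void {
  publicRoutes.add(route);
}

// Stand-in for JwtAuthGuard.canActivate().
function canActivate(route: string, hasValidJwt: boolean): boolean {
  if (publicRoutes.has(route)) return true; // documented exception
  return hasValidJwt;                       // default: protected
}

markPublic("GET /health");
```

The important property is the direction of the default: forgetting to configure a new endpoint leaves it locked, not open.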
Prisma as ORM — Type-safe queries, automatic migrations, and a schema that serves as single source of truth for both database and TypeScript types. Migrations run via CI with prisma migrate deploy — never manually in production.
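To illustrate the schema-as-single-source-of-truth idea, here is a fragment in Prisma's schema language. The model and field names are invented for illustration, not taken from Evender's actual schema:

```prisma
// Hypothetical fragment: one model definition drives both the SQL
// migration and the generated TypeScript types used by API and frontends.
model Ticket {
  id        String   @id @default(cuid())
  barcode   String   @unique
  orderId   String
  order     Order    @relation(fields: [orderId], references: [id])
  createdAt DateTime @default(now())
}
```

prisma migrate deploy only applies already-generated migrations, which is what makes it safe to run from CI rather than by hand.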
Multi-tenancy via subdomains — Each tenant gets its own subdomain (or custom domain). The API resolves the tenant via the Host header; the frontend via NEXT_PUBLIC_PLATFORM_DOMAIN. Tenant isolation runs via a TenantGuard — no ad-hoc checks in services.
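The Host-header resolution can be sketched as a pure function. This is a simplification; the real resolver presumably also handles custom domains via a database lookup, which is omitted here:

```typescript
// Simplified sketch of subdomain-based tenant resolution (illustrative,
// not Evender's actual code). Custom domains would need a database
// lookup instead of string matching.
function resolveTenantSlug(hostHeader: string, platformDomain: string): string | null {
  const host = hostHeader.split(":")[0].toLowerCase(); // drop any port
  const suffix = "." + platformDomain;
  if (!host.endsWith(suffix)) return null;             // not a platform subdomain
  const slug = host.slice(0, -suffix.length);
  // Reject empty or nested labels like "www.acme".
  return slug.length === 0 || slug.includes(".") ? null : slug;
}
```

So a request with Host acme.evender.nl resolves to the tenant slug "acme", while the bare platform domain resolves to no tenant.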
Security: baked in from day one
Security is not a feature you bolt on afterwards. We chose to embed it in the architecture from day one.
A selection from what's in the commits:
- No hardcoded secrets: config.getOrThrow() everywhere — no fallback values, no defaults. If a secret is missing, the application deliberately crashes on startup.
- CORS via allowlist: origin: true is a security risk. We explicitly disabled it and set CORS_ORIGINS as an environment variable.
- Helmet middleware: HTTP security headers (X-Frame-Options, Content-Security-Policy, etc.) via NestJS Helmet — enabled by default, not optional.
- EXIF stripping: users upload photos, and EXIF data can contain GPS coordinates. We strip it via sharp.rotate() before storage.
- SVG sanitisation: SVGs can contain executable JavaScript. We removed SVG from the upload whitelist and sanitise before storage.
- Encrypted webhook secrets: Adyen webhook secrets are not stored in plaintext in the database. A migration script encrypted existing secrets; new ones are never stored in plaintext.
- SSRF guard: external HTTP fetches (e.g. for the AI theme wizard) run via assertPublicUrl() — a guard that blocks requests to internal IP ranges.
- CSP headers: both storefront and admin received explicit Content-Security-Policy headers in the Next.js configuration.
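To make the SSRF guard concrete, here is a simplified sketch of what an assertPublicUrl()-style check looks like. It is illustrative only; a production guard should also resolve DNS and validate the resulting IPs, since a public hostname can point at an internal address:

```typescript
// Simplified SSRF guard sketch (illustrative, not Evender's actual code).
const PRIVATE_HOST_PATTERNS = [
  /^127\./,                      // loopback
  /^10\./,                       // RFC 1918
  /^192\.168\./,                 // RFC 1918
  /^172\.(1[6-9]|2\d|3[01])\./,  // RFC 1918 (172.16.0.0/12)
  /^169\.254\./,                 // link-local
  /^0\./,
];

function assertPublicUrl(raw: string): URL {
  const url = new URL(raw);
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    throw new Error("blocked protocol: " + url.protocol);
  }
  const host = url.hostname;
  if (
    host === "localhost" ||
    host.startsWith("[") || // simplified: reject all IPv6 literals
    PRIVATE_HOST_PATTERNS.some((p) => p.test(host))
  ) {
    throw new Error("blocked internal host: " + host);
  }
  return url;
}
```

The guard throws before any network call is made, so a fetch to 127.0.0.1 or 192.168.x.x never leaves the process.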
The beauty of AI as co-developer in this process: every time we built a new endpoint or module, Claude proactively checked whether the right guards were in place. Not as a replacement for a security review — but as a first filter that closes obvious gaps before they even make it into the codebase.
Testing: self-contained from day one
Testing with shared test data is a time bomb. Your seed script runs today, your test expects data tomorrow that no longer exists, and your CI is broken for reasons that have nothing to do with your code.
We chose a principle we consistently applied: e2e tests are self-contained. They create their own test data in beforeAll, and clean up in afterAll. No dependency on seed data. No shared state between tests.
Seed data exists only for development and CI test databases. Never in production. The CI test job runs the seed with NODE_ENV=test — explicitly, not implicitly.
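The self-contained pattern looks roughly like this, sketched against an in-memory stand-in for the database. The real tests run against a NestJS app with Prisma in beforeAll/afterAll hooks; the names below are illustrative:

```typescript
// Sketch of a self-contained test suite: it creates exactly the rows it
// needs and removes them again, so nothing depends on seed data and the
// database is left as it was found.
type EventRow = { id: string; name: string };
const db = new Map<string, EventRow>(); // stand-in for the real database

function createTestEvent(id: string, name: string): EventRow {
  const event = { id, name };
  db.set(id, event); // beforeAll: create own fixtures
  return event;
}

function deleteTestEvent(id: string): void {
  db.delete(id);     // afterAll: clean up own fixtures
}

// A "test" that touches only its own data:
function runSuite(): boolean {
  const event = createTestEvent("evt-1", "Demo Night");
  try {
    return db.get("evt-1")?.name === event.name;
  } finally {
    deleteTestEvent("evt-1");
  }
}
```

Because setup and teardown are symmetric, suites can run in any order, and a failing suite cannot poison the next one.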
AI helped in two ways. First, writing the tests themselves — boilerplate for setup and teardown is tedious work where you easily make mistakes. Second, detecting race conditions and timing issues in asynchronous flows (like holding seat reservations during checkout).
Deployment: Docker, GitHub Actions, staging-first
The deployment setup is simple in principle, but the devil is in the details.
Each app has its own Dockerfile. The GitHub Actions pipeline builds the images, pushes to ghcr.io/nielsdigitalspirit/evender, and deploys to the server. There are two triggers: the staging branch deploys to staging.evender.nl, a git tag (v{x.y.z}) triggers the production deploy.
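In GitHub Actions terms, the two triggers boil down to something like this. This is a sketch, not the actual workflow file:

```yaml
# Hypothetical sketch of the two deploy triggers, not Evender's workflow.
on:
  push:
    branches: [staging]   # deploys to staging.evender.nl
    tags: ['v*.*.*']      # a version tag triggers the production deploy
```

With both filters present, a push to the staging branch or a push of a matching tag starts the workflow; the jobs then branch on which ref triggered the run.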
The git workflow is deliberately structured: changes ship to staging first, and production only ships from a version tag. Hotfixes may go directly onto main; afterwards, bump a patch tag.
What we learned: build arguments for Next.js (NEXT_PUBLIC_* variables) must already be available during the Docker build — not just at runtime. That cost a handful of CI fixes to get right.
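In Dockerfile terms the fix looks roughly like this (the variable name comes from earlier in the article; the rest is a sketch):

```dockerfile
# NEXT_PUBLIC_* values are inlined into the client bundle at build time,
# so they must arrive as build args; a runtime env var is too late.
ARG NEXT_PUBLIC_PLATFORM_DOMAIN
ENV NEXT_PUBLIC_PLATFORM_DOMAIN=$NEXT_PUBLIC_PLATFORM_DOMAIN
RUN npm run build   # Next.js bakes the value in during this step
```

The CI pipeline then passes the value via --build-arg (or the build-args input of the build action), so the same image recipe can produce staging and production bundles.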
Multi-stage builds keep images small. The uploads directory gets a persistent Docker volume so uploaded files don't disappear on deploy.
AI in the product itself: the help function
So far we've talked about AI as a tool for building. But Evender also uses AI internally — in the help function for admins.
The setup: a knowledge base of help articles is processed via an n8n workflow (triggered on every GitHub push to the docs/help folder). The texts are embedded as vectors via the Anthropic API and stored in a PostgreSQL table with the pgvector extension.
When an admin clicks the help button, a ChatWidget opens with streaming SSE. The question is converted to a vector, the nearest knowledge fragments are retrieved (RAG), and Claude generates an answer based on that context — streamed, so the user sees immediate feedback.
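The retrieval step, reduced to a toy in-memory sketch. Evender does this with a pgvector similarity query in PostgreSQL; the types and function names here are illustrative:

```typescript
// Toy in-memory sketch of RAG retrieval: rank knowledge fragments by
// cosine similarity to the question's embedding and keep the top k.
type Fragment = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function nearestFragments(query: number[], kb: Fragment[], k: number): Fragment[] {
  return [...kb]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

The selected fragments are concatenated into the prompt context, and the model's answer is streamed back over SSE so the admin sees tokens as they arrive.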
A second n8n workflow (triggered on a new git tag) automatically checks whether the release content is consistent with the documentation — a kind of automated release QA.
Both workflows run on staging via n8n, an open-source workflow automation tool.
The honest balance: what AI does and doesn't do well
AI makes you faster. But it doesn't replace judgement.
What works well:
- Generating boilerplate: DTOs, modules, guards, test setups — consistent and fast
- Maintaining consistency: if you have a pattern (e.g. TenantGuard on all tenant endpoints), AI ensures new endpoints follow that pattern
- Refactoring: large, mechanical changes (e.g. adding tooltips to 72 form fields) without mental overhead
- Finding bugs in asynchronous code
What needs human direction:
- Architecture choices: which modules? which abstractions? — those are conversations, not questions
- Security: AI helps follow patterns, but doesn't understand business context
- Product decisions: what are we actually building, and why?
- Integrations: Adyen, barcode scanners, custom domains — real systems behave differently than the documentation
The most valuable sessions were not "generate this" — but "what are the trade-offs if we approach X this way?"
Results
- 310 commits in 21 working days
- Day 1: empty repository → complete monorepo scaffold
- Day 2: auth, multi-tenancy, catalogue, cart, checkout, orders — everything
- Week 2: security hardening, webhooks, CI/CD, help function with RAG
- Week 3: seated events, seating charts, AI-generated SVGs, design system, storefront variants
A platform of this scope normally takes 4–6 months for a small team. That we could deliver it in 3 weeks is entirely thanks to the combination of clear architecture choices, disciplined git workflow, and AI as a full pair programmer.
What other teams can learn from this
- Start with structure, not features. The first day we spent on monorepo setup, schema, and design system. That investment paid off for the rest of the project.
- AI works best as a sparring partner, not a code generator. Think out loud, explain context, ask about trade-offs. You get better code and better understanding.
- Security is an architecture decision. Global guards, required env vars, explicit @Public() decorators — not something you add afterwards.
- Self-contained tests from the start. It costs slightly more time per test, but your CI is stable and your tests are reliable.
- AI in the product itself is possible — but do it deliberately. The help function works because we had a clear use case (quick answers to admin questions) and the knowledge base is well-structured.