danistor/apero-tracker
🍷 Apéro Time: The AI Agent Experiment

Because Friday apéros deserve proper democracy

A fun, interactive voting app for weekly team apéro suggestions.

This repository is a Proof of Concept (PoC) built entirely using the Claude Code CLI agent.

The goal was not just to build an app, but to stress-test the capabilities of a modern coding agent: could an AI agent handle the entire lifecycle of a complex, full-stack application, from scaffolding a monorepo and configuring Docker to handling deployments on cloud platforms like Render and Vercel, with minimal human intervention? It was also an opportunity to learn how Statamic CMS is built and the differences between React and Vue.

🧪 The Experiment

This project was created as a fun challenge to answer the question: Can an AI agent build and deploy a production-ready, multi-service architecture from scratch?

I pushed Claude Code CLI to handle:

  • Monorepo Structure: Managing distinct backend (Laravel) and frontend (Nuxt) codebases in a single Git repository.
  • Infrastructure as Code: Writing Dockerfiles, docker-compose.yml, and render.yaml blueprints.
  • Full Stack Logic: Implementing a Laravel 12 + Statamic CMS backend serving a GraphQL API to a Nuxt 4 Frontend.
  • Production Deployment: Solving real-world DevOps issues like CORS, SSL, Docker Layer Caching, and Region mismatches.
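For illustration, a Render blueprint in this spirit might look like the sketch below. The service names, region, and env vars here are assumptions for this example, not the repository's actual render.yaml; note how keeping the app and database in the same region avoids the kind of connection failures described in the retrospective.

```yaml
# Illustrative sketch only -- names and region are assumptions.
services:
  - type: web
    name: apero-backend
    runtime: docker
    region: frankfurt
    plan: free
    envVars:
      - key: DB_HOST
        fromDatabase:
          name: apero-db
          property: host

databases:
  - name: apero-db
    region: frankfurt   # keep app and DB in the same region
    plan: free
```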

🛠 Tech Stack

  • Agent Tool: Claude Code CLI
  • Backend: Laravel 12 + Statamic 5 Pro (Headless CMS)
  • API: GraphQL
  • Frontend: Nuxt 4 (Vue.js) + Tailwind CSS
  • Database: PostgreSQL (Production) / MySQL (Local)
  • Infrastructure: Docker & Docker Compose
  • Hosting ($0 Cost Architecture):
    • Backend: Render (Docker Runtime - Free Tier)
    • Frontend: Vercel (Free Tier)
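Locally, a stack like this can be wired together with a Docker Compose file along these lines. The service names, directory layout, and credentials below are illustrative assumptions, not the repository's actual docker-compose.yml:

```yaml
# Illustrative sketch only -- paths and credentials are assumptions.
services:
  backend:
    build: ./backend          # Laravel 12 + Statamic
    ports:
      - "80:80"
    depends_on:
      - db
  frontend:
    build: ./frontend         # Nuxt 4
    ports:
      - "3000:3000"
  db:
    image: mysql:8
    environment:
      MYSQL_DATABASE: apero
      MYSQL_ROOT_PASSWORD: secret
```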

✨ Features

  • 🗳️ Vote for your favorite apéro ideas
  • 🎨 Beautiful dark mode
  • ♿ Accessibility-first (WCAG AA)
  • 📱 Fully responsive
  • 🎊 Fun animations & confetti
  • 🔥 Real-time vote counts
  • 😄 Playful copy with personality

🎨 Design Decisions

  • Flat-file CMS: Statamic for easy content management
  • GraphQL: Clean API interface
  • Dark mode: Because developers
  • Accessibility: Keyboard navigation, screen reader support, reduced motion
  • Fun copy: Matches a playful culture
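As a sketch of what the "clean API interface" looks like in practice: Statamic's GraphQL API exposes entries per collection, so a query for the voting options might resemble the following (the aperos collection handle is a hypothetical name for this example, not necessarily the one used in the repo):

```graphql
# Hypothetical query -- "aperos" is an assumed collection handle.
{
  entries(collection: "aperos") {
    data {
      id
      title
    }
  }
}
```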

📝 What I Learned

  • Statamic's elegant content modeling (free vs Pro features)
  • Working with Statamic's GraphQL layer and custom endpoints
  • Nuxt 4 composables architecture
  • Laravel + Statamic integration patterns
  • Animation performance optimization

📊 Retrospective: Agent Performance

✅ What Went Well

  • Rapid Scaffolding: The agent quickly generated the initial boilerplate and correctly structured the monorepo.
  • Complex Configuration: It successfully wrote valid render.yaml blueprints and Docker configurations that linked Redis, Database, and Web services.
  • Advanced Debugging: The agent was surprisingly effective at diagnosing obscure errors, such as:
    • Vite Manifest Errors: Identifying missing build steps in the Dockerfile.
    • Docker Optimization: Refactoring the Dockerfile to use Layer Caching, reducing build times from 5m to 2m.
    • Region Mismatches: Diagnosing that the App and DB were in different Render regions (Oregon vs. Frankfurt), causing connection failures.
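The layer-caching refactor mentioned above typically follows a standard pattern: copy only the dependency manifests and install dependencies before copying the application source, so that source edits no longer invalidate the (slow) dependency layer. A minimal sketch, not the repository's actual Dockerfile:

```dockerfile
# Sketch of the layer-caching pattern -- stage names are illustrative.
FROM composer:2 AS vendor
WORKDIR /app

# Copy only the dependency manifests first: this layer is rebuilt
# only when composer.json/composer.lock change...
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# ...while application source changes invalidate only the layers below.
COPY . .
RUN composer dump-autoload --optimize
```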

⚠️ The Challenges (Human-in-the-Loop Required)

  • Context Blindness: The agent occasionally assumed files (like .env or package-lock.json) existed or were committed when they were actually .gitignored, causing build failures.
  • Nuance in Deployment: It initially missed subtle cloud-specific settings, such as Statamic requiring vendor:publish for Control Panel assets or the 4KB session cookie limit on local Docker setups.
  • Security vs. Convenience: The agent leaned towards "getting it to work" (e.g., HTTP links), which triggered "Mixed Content" errors on HTTPS production environments. We had to iteratively enforce SSL and Proxy Trust.
  • Statamic Licensing: confusion around Statamic's licensing model (free vs. Pro).
  • Super User Creation: creating the Statamic super user needed manual intervention.

⏱ Timeframe & Methodology

The project was built over several sessions in spare time. This highlighted the agent's ability to maintain context over disconnected coding sessions, though it occasionally needed reminders about previous architectural decisions (like the fact that I was using Docker).

🚀 Future Optimizations

To improve the workflow with AI Agents in the future:

  1. Seed the Context: Explicitly provide the agent with .gitignore and directory structure constraints at the start to prevent "File Not Found" loops.
  2. Request LTS: Explicitly ask for Long Term Support versions (e.g., Node v24) to avoid the agent defaulting to older, "safer" versions (Node v20).
  3. Production-First Mindset: Ask the agent to generate a "Production Checklist" script early on to verify keys, database connections, and asset generation before the first deploy attempt.
  4. MCP Servers: Add them at the beginning of the process, once the technologies, frameworks, and cloud provider are decided. This makes development much faster and lets the agent check logs automatically, removing that human-in-the-loop step.
  5. Skills: Explore how Claude Code's new Skills feature works; it reportedly improves the agent's capabilities. Also define up front which commands the agent may or may not execute, so it needs less extra human input.
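The "Production Checklist" idea from point 3 can be as simple as a script that fails fast when required settings are absent. A minimal sketch, with illustrative variable names (APP_KEY, DB_HOST) rather than the project's actual configuration:

```shell
# Hypothetical pre-deploy checklist: verify required settings exist
# before the first deploy attempt. Variable names are illustrative.
check_required() {
  missing=0
  for name in "$@"; do
    # Read the variable whose name is in $name (works for shell and env vars)
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      echo "MISSING: $name"
      missing=1
    fi
  done
  return "$missing"
}

# Example: APP_KEY is set, DB_HOST is not, so the checklist fails.
APP_KEY="base64:dummy-key"
check_required APP_KEY DB_HOST || echo "checklist failed"
```

Running the agent against such a script before the first deploy surfaces missing keys and database settings locally instead of as opaque cloud build failures.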

💻 How to Run (Local)

  1. Clone the repo:

    git clone https://github.com/danistor/apero-tracker.git
    cd apero-tracker
  2. Start Docker:

    docker-compose up -d
  3. Access:

    • Frontend: http://localhost:3000
    • Backend: http://localhost:80
    • Statamic CP: http://localhost:80/cp

Built with Claude Code 🤖 and ☕.
