A room booking platform built for university campuses. Students, faculty, and staff can find and book rooms across campus buildings. The system is designed to handle large numbers of concurrent users without degrading response times.
Demo Link: https://youtu.be/UFw6WoNP3_o
Users register with their role (student, faculty, staff) and wait for admin approval before they can book rooms. Once approved, they can browse an interactive floor plan, pick a room, and book it instantly. Admins manage everything from a dashboard, including approvals, room configuration, and system settings.
Each role has different privileges. Faculty can book any room without restrictions. Students have time-slot limits and cannot access faculty-only rooms. Staff fall somewhere in between. All of this is enforced at the API level, not just in the frontend.
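As a sketch of how these per-role rules might look on the server side, something like the following could sit behind the API. The specific hour limits and the role set here are illustrative assumptions, not the platform's actual configuration.

```java
// Hypothetical sketch of the per-role booking rules described above.
// The hour limits below are assumptions for illustration only.
class BookingPolicy {

    enum Role { STUDENT, STAFF, FACULTY }

    /** Maximum contiguous booking length in hours for a role (assumed values). */
    static int maxSlotHours(Role role) {
        switch (role) {
            case FACULTY: return 24; // effectively unrestricted
            case STAFF:   return 8;  // assumption: between student and faculty
            default:      return 2;  // assumption: student limit
        }
    }

    /** Whether a role may book a faculty-only room. */
    static boolean canBookFacultyRoom(Role role) {
        return role == Role.FACULTY;
    }

    /** Server-side check mirroring what the API layer enforces. */
    static boolean isBookingAllowed(Role role, int hours, boolean facultyOnlyRoom) {
        if (facultyOnlyRoom && !canBookFacultyRoom(role)) return false;
        return hours <= maxSlotHours(role);
    }
}
```

Because the check lives behind the API rather than in the frontend, a client cannot bypass it by crafting requests directly.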
The system is organized into three layers.
API Gateway Layer
The edge service is the single entry point for all client requests. Every request from the frontend goes through it first. It handles routing to the right service, checks authentication, and takes care of anything that would otherwise need to be duplicated across every service. Redis sits alongside it as a session store so session lookups are fast without hitting the database on every request.
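A rough sketch of what the edge-service configuration might look like with Spring Cloud Gateway routes and Redis-backed sessions. The service names, ports, and paths here are assumptions for illustration, not the actual deployment values.

```yaml
# Illustrative edge-service configuration (names, ports, and paths assumed).
spring:
  session:
    store-type: redis              # sessions live in Redis, not in gateway memory
  data:
    redis:
      host: redis
      port: 6379
  cloud:
    gateway:
      routes:
        - id: user-service
          uri: http://user-service:8081
          predicates:
            - Path=/api/users/**
        - id: reservation-service
          uri: http://reservation-service:8082
          predicates:
            - Path=/api/reservations/**
```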
Microservices Layer
Five backend services, each with its own database so there is no shared state between them.
- User Service manages user accounts, registration, login, and OTP verification.
- Room Service handles buildings, rooms, and room configuration.
- Reservation Service orchestrates the booking workflow and enforces time-slot rules per role.
- Edge Service is the API gateway.
- Config Service holds centralized configuration, so all services pull their settings from one place instead of having them scattered across individual deployments.
Infrastructure Layer
RabbitMQ handles asynchronous communication between services. Each service has its own PostgreSQL database. Redis handles distributed session caching at the gateway level.
We initially considered a monolith because it is simpler to build and debug. We switched to microservices because of one specific requirement: independent scaling.
Room availability queries are read-heavy and spike during class registration periods. User authentication traffic is much more stable. If everything were one service, we would have to scale the entire application just to handle the room query spike. That wastes resources and money. With separate services, we scale only what needs to scale.
The downside is real: more services means more complexity, more pipelines, and harder debugging across service boundaries. That was a cost we accepted because Kubernetes autoscaling was a hard project requirement and microservices make scaling boundaries explicit.
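As an illustration of what an explicit scaling boundary looks like in practice, a HorizontalPodAutoscaler for the room service might be configured roughly like this. The replica counts and CPU target are assumptions, not the project's actual settings.

```yaml
# Sketch of a HorizontalPodAutoscaler for the room service (limits assumed).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: room-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: room-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The user service would get its own autoscaler with a much lower ceiling, since its traffic is stable; that per-service tuning is exactly what a monolith cannot offer.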
This was one of the more deliberate design decisions we made. The room service and reservation service do not call each other directly over REST. They communicate through RabbitMQ using a publish-subscribe pattern.
When a user books a room, several things need to happen at once. The reservation record needs to be created, room availability needs to update, and any conflicts need to be resolved. With direct REST calls, a slow or temporarily unavailable room service would cause the entire booking request to fail from the user's perspective.
With async messaging the reservation service publishes a Reservation Created event and returns a response to the user. The room service picks up that event, updates availability, and publishes a Room Status Updated event back. The reservation service receives that and confirms the booking.
The full event flow looks like this:
- User requests a room booking through the reservation service
- Reservation service publishes a Reservation Created event to RabbitMQ
- Room service picks up the event and updates room availability
- Room service publishes a Room Status Updated event
- Reservation service receives the event and confirms the booking
The benefits are that the services stay independent, a failure in one does not cascade into the other, and both can scale separately based on their own load. The trade-off is eventual consistency: there is a small window where a reservation exists but room availability has not yet updated. We decided that was acceptable for a booking system, where real-time consistency down to the millisecond is not a hard requirement.
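The event choreography above can be simulated with a tiny in-memory publish-subscribe bus. This is only an illustration of the message flow; RabbitMQ, queues, acknowledgements, and real payloads are all stubbed out, and the event and service names follow the prose rather than the actual code.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal in-memory stand-in for the RabbitMQ flow described above.
class BookingFlow {

    /** Tiny topic-based publish/subscribe bus. */
    static class Bus {
        private final Map<String, List<Consumer<String>>> subs = new HashMap<>();
        void subscribe(String topic, Consumer<String> handler) {
            subs.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
        }
        void publish(String topic, String payload) {
            subs.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
        }
    }

    static List<String> bookRoom(String roomId) {
        List<String> log = new ArrayList<>();
        Bus bus = new Bus();

        // Room service: reacts to new reservations, then reports back.
        bus.subscribe("ReservationCreated", room -> {
            log.add("room service: availability updated for " + room);
            bus.publish("RoomStatusUpdated", room);
        });

        // Reservation service: confirms once the room service has caught up.
        bus.subscribe("RoomStatusUpdated",
                room -> log.add("reservation service: booking confirmed for " + room));

        // The booking request publishes and returns immediately; the rest is async.
        bus.publish("ReservationCreated", roomId);
        return log;
    }
}
```

With a real broker the two handlers run in separate processes and the window between the two log entries is the eventual-consistency gap described above.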
We looked at lighter frameworks like Quarkus and Micronaut. We went with Spring Boot for three reasons.
Spring Data JPA fits our data model well. Users, buildings, rooms, and reservations all have relationships between them. JPA handles that cleanly and the repository pattern keeps queries readable without writing raw SQL.
Spring Security handles role-based access control at the framework level. We have four roles with different permissions. Building that ourselves would have been error-prone and time consuming.
Spring Actuator gave us health check endpoints out of the box. Those were immediately useful for Kubernetes liveness and readiness probes and for validating the system during load testing.
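Wired into a pod spec, the Actuator health groups might back the probes roughly like this. The port and delay values are assumptions for illustration.

```yaml
# Illustrative probe configuration against the Actuator endpoints (values assumed).
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30   # allow for Spring Boot's slower startup
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 20
```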
The trade-off is that Spring Boot is heavier than the alternatives. It uses more memory and takes longer to start up. When Kubernetes scales up new pods during a traffic spike, those pods take a bit longer to become ready. We accepted that cost given the team familiarity and the security requirements we had to meet.
We chose Angular over React mainly because of Angular Material. Our UI has complex components including an interactive floor plan for room selection, admin data tables, and date time pickers for bookings. Angular Material gave us accessible, production ready versions of all of these without building them from scratch.
The trade-off is that Angular has a larger bundle size and is slower to prototype with than React. If we were building something simple we would have picked React. We prioritized consistency and accessibility over development speed.
SQLite was not allowed by the project requirements. Between PostgreSQL and MongoDB we picked PostgreSQL because our data is relational.
A reservation belongs to a user, a room belongs to a building, a user has a role. These relationships are fixed and well defined. Enforcing them at the database level with foreign keys and constraints prevents bad data without relying on the application to catch everything.
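A minimal sketch of those database-level constraints, with table and column names assumed for illustration. Note that with one database per service, cross-service references such as a reservation's user cannot be foreign keys and are validated at the application level instead.

```sql
-- Illustrative DDL only; real table and column names are assumptions.
CREATE TABLE buildings (
  id   SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE rooms (
  id          SERIAL PRIMARY KEY,
  building_id INTEGER NOT NULL REFERENCES buildings (id),  -- a room must belong to a real building
  name        TEXT NOT NULL
);
```

Deleting a building with rooms still attached, or inserting a room with a bogus building id, fails at the database rather than producing orphaned data.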
MongoDB would have made sense if the schema was flexible or unpredictable, for example if different buildings had wildly different room configurations with custom attributes. In our case the schema is stable so the extra flexibility of MongoDB was not worth giving up referential integrity.
The trade-off is that PostgreSQL requires migration management. Every schema change needs a migration script. MongoDB would have let us iterate on the data model faster in the early stages of the project.
We used k6 to run an extreme stress test. The test ramped from 100 to 1000 concurrent virtual users over 4.5 minutes and hit the buildings, rooms, and health check endpoints through the API gateway.
| Metric | Result |
|---|---|
| Total requests | 1,637,275 |
| Test duration | 270 seconds |
| Throughput | 6,063 requests per second |
| Average response time | 1.06ms |
| Median response time | 0.49ms |
| p90 response time | 2.21ms |
| p95 response time | 3.67ms |
| Max response time | 43.45ms |
| p95 under 5000ms threshold | Passed |
The p95 of 3.67ms is well within the 5 second threshold. However, under the extreme 1000-user load, the overall success rate dropped to around 33%. That needs context.
This test was run against a local development environment, not the full Kubernetes cluster with auto scaling active. At 1000 concurrent users hitting a single local instance, failures are expected. The same test against a properly scaled cluster would look very different. We treated this test as a way to validate that the system fails fast and recovers cleanly, which is what you want to see before setting up proper horizontal scaling.
At 100 to 300 concurrent users the system held above a 95% success rate with consistently sub-5ms p95 latency. That is the realistic baseline for a single service instance.
What we would do differently: run the stress test against a staging Kubernetes deployment with horizontal pod autoscaling active so the results reflect the actual production setup.
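A k6 script matching the ramp described above might look roughly like this. The stage breakdown and endpoint paths are assumptions beyond what the text states (a 100 to 1000 VU ramp over 4.5 minutes, hitting buildings, rooms, and health check through the gateway, with a 5 second p95 threshold).

```javascript
// Sketch of the stress test profile; stage splits and paths are assumptions.
import http from 'k6/http';

const BASE = 'http://localhost:8080'; // gateway address is an assumption

export const options = {
  stages: [
    { duration: '90s', target: 100 },   // warm up
    { duration: '90s', target: 500 },   // ramp
    { duration: '90s', target: 1000 },  // extreme load
  ],
  thresholds: {
    http_req_duration: ['p(95)<5000'],  // the 5 second p95 threshold from the results
  },
};

export default function () {
  http.get(`${BASE}/api/buildings`);
  http.get(`${BASE}/api/rooms`);
  http.get(`${BASE}/actuator/health`);
}
```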
| Layer | Technology |
|---|---|
| Backend | Spring Boot (Java) |
| API Gateway | Spring Cloud Gateway |
| Frontend | Angular |
| Database | PostgreSQL (one per service) |
| Cache | Redis |
| Message Broker | RabbitMQ |
| Configuration | Spring Cloud Config |
| Container Orchestration | Kubernetes |
| Load Testing | k6 |
| Repo | What it does |
|---|---|
| user service | Registration, login, OTP verification, role management |
| room service | Buildings, rooms and room configuration |
| reservation service | Booking management, availability checks, time slot enforcement |
| edge service | API gateway and routing |
| config service | Centralized configuration for all services |
| ui | Angular frontend |
| LoadTest | k6 load test scripts and results |
Every service has its own Dockerfile. For local development, start the infrastructure first, then run each service.

docker compose up -d
./gradlew bootRun

Check each service repo for the environment variables and the port it runs on.
| Decision | What we picked | What we considered | Why |
|---|---|---|---|
| Architecture | Microservices | Monolith | Scale services independently under load |
| Service communication | Async via RabbitMQ | Direct REST calls | Resilience, loose coupling, independent scaling |
| Backend | Spring Boot | Quarkus, Micronaut | Better fit for our auth and data model |
| Frontend | Angular | React | Angular Material for complex accessible UI |
| Database | PostgreSQL | MongoDB | Relational data, referential integrity |
| Cache | Redis | In memory | Distributed session storage across pods |
| Load testing | k6 | JMeter | Scriptable, lightweight, CI friendly |
| Orchestration | Kubernetes | Docker Swarm | Auto scaling requirement |
