diff --git a/README.md b/README.md index 8d6b87e..3c90f83 100644 --- a/README.md +++ b/README.md @@ -9,6 +9,7 @@ BrainBytes is an AI-powered tutoring platform designed to provide accessible aca Explore the documentation to understand how **BrainBytes** is architected, deployed, and developed. ### Technical Guides + - **[System Architecture and Design Documentation](docs/system-design-documentation.md)** Learn how the BrainBytes platform is structured—covering backend APIs, data flow, authentication mechanisms, and overall infrastructure. This document also includes two key items users often look for: **System Architecture Documentation** and **Security Implementation Documentation**. @@ -18,6 +19,12 @@ Explore the documentation to understand how **BrainBytes** is architected, deplo - **[Deployment Process Documentation](docs/deployment-plan-documentation.md)** Step-by-step deployment process, from VPS setup to CI/CD integration, using Docker Compose and GitHub Actions. +- **[Monitoring System Documentation](docs/monitoring-documentation.md)** + Understand how BrainBytes ensures reliability and visibility through robust monitoring and alerting. + +- **[Simulation Documentation](docs/simulation-documentation.md)** + A detailed guide for running the API simulation script used for testing and traffic generation. + - **[Docker Development Setup](docs/docker-dev-setup.md)** Get started with local development using Docker and Traefik. Includes service structure, workflow tips, and environment management. @@ -25,6 +32,7 @@ Explore the documentation to understand how **BrainBytes** is architected, deplo Documentation of the GitHub Actions pipeline for automated builds and deployments using GHCR. ### Visual References + - **[Architecture Diagram](docs/images/architecture.png)** High-level visual of the platform’s microservices, ingress, databases, and CI/CD flow. 
@@ -53,10 +61,9 @@ These files define and support the core automation and infrastructure setup of t - **[Ansible Playbook File(`playbook.yml`)](ansible/playbooks/playbook.yml)** Ansible playbook used to provision the production VPS with required packages, security hardening, and initial app setup. -- **[Screenshot of Cloud Dashboard and Testing Results (Validation Report)](https://docs.google.com/document/d/1gfU2dtmo8PnKXEZZlr5iMl9UzHSvCOctWRax_l4ybCU/edit?usp=sharing)** +- **[Screenshot of Cloud Dashboard and Testing Results (Validation Report)](https://docs.google.com/document/d/1gfU2dtmo8PnKXEZZlr5iMl9UzHSvCOctWRax_l4ybCU/edit?usp=sharing)** Contains visual evidence of successful deployment and testing. - ## Team Members - Kristopher Santos - Team Lead - lr.ksantos@mmdc.mcl.edu.ph @@ -79,7 +86,7 @@ These files define and support the core automation and infrastructure setup of t - Containerization: Docker - CI/CD: GitHub Actions - VPS Provider: OVHCloud -- Monitoring: TBD +- Monitoring: Prometheus and Grafana ## Project Setup @@ -89,14 +96,14 @@ Get BrainBytes up and running on your local machine with these simple steps. Before you start, make sure you have the following installed: -* **Git**: For cloning the repository. -* **Node.js** (LTS version recommended): Includes npm, which is needed to install pnpm. -* **pnpm**: Our preferred package manager for faster and more efficient dependency management. - * If you don't have pnpm, you can install it globally via npm: - ```bash - npm install -g pnpm - ``` -* **Docker Desktop**: Required to run the application using Docker Compose. +- **Git**: For cloning the repository. +- **Node.js** (LTS version recommended): Includes npm, which is needed to install pnpm. +- **pnpm**: Our preferred package manager for faster and more efficient dependency management. 
+  - If you don't have pnpm, you can install it globally via npm:
+    ```bash
+    npm install -g pnpm
+    ```
+- **Docker Desktop**: Required to run the application using Docker Compose.

### Installation

1. **Clone the Repository**

   You can clone the BrainBytes repository using either **GitHub Desktop** or **Git via your terminal**.

-    * **Using GitHub Desktop**:
-        1. Open GitHub Desktop.
-        2. Go to **File > Clone Repository**.
-        3. Paste the repository URL:
-            ```
-            [https://github.com/Morfusee/MO-IT122-DevOps](https://github.com/Morfusee/MO-IT122-DevOps)
-            ```
-        4. Choose your desired local path and click **Clone**.
+   - **Using GitHub Desktop**:
-    * **Using Git via terminal**:
-        ```bash
-        git clone [https://github.com/Morfusee/MO-IT122-DevOps.git](https://github.com/Morfusee/MO-IT122-DevOps.git)
-        ```
+     1. Open GitHub Desktop.
+     2. Go to **File > Clone Repository**.
+     3. Paste the repository URL:
+        ```
+        https://github.com/Morfusee/MO-IT122-DevOps
+        ```
+     4. Choose your desired local path and click **Clone**.
+
+   - **Using Git via terminal**:
+     ```bash
+     git clone https://github.com/Morfusee/MO-IT122-DevOps.git
+     ```

2. **Configure Environment Variables**

   Navigate to the root directory of the cloned repository and set up your environment variables.

-    * Duplicate the `.env.example` file and rename the copy to `.env`.
-    * Open the newly created `.env` file and fill in all the required values, such as database connection strings or API keys.
+   - Duplicate the `.env.example` file and rename the copy to `.env`.
+   - Open the newly created `.env` file and fill in all the required values, such as database connection strings or API keys.

3.
**Install Dependencies** @@ -132,6 +140,7 @@ Before you start, make sure you have the following installed: ```bash pnpm install:all ``` + This command leverages pnpm workspaces to efficiently install dependencies across your monorepo. ### Running the Application @@ -147,8 +156,8 @@ This method is ideal for active development, as it allows for hot-reloading and pnpm dev ``` 2. Once the services are running, access the application at the following URLs: - * **Frontend**: `http://localhost:3000` - * **Backend API Docs**: `http://localhost:3333/docs` + - **Frontend**: `http://localhost:3000` + - **Backend API Docs**: `http://localhost:3333/docs` #### Run with Docker (Containerized) @@ -161,7 +170,7 @@ For a more production-like environment or to ensure consistency across different ``` This will build the Docker images (if not already built) and start the containers for both the frontend and backend. 3. Once the containers are up and running, access the application at: - * **Frontend**: `http://localhost:3002` - * **Backend API Docs**: `http://localhost:3001/docs` + - **Frontend**: `http://localhost:3002` + - **Backend API Docs**: `http://localhost:3001/docs` --- diff --git a/docs/images/alert-rules-page.png b/docs/images/alert-rules-page.png new file mode 100644 index 0000000..cfe4b39 Binary files /dev/null and b/docs/images/alert-rules-page.png differ diff --git a/docs/images/cloud-platform-w-monitoring-architecture.png b/docs/images/cloud-platform-w-monitoring-architecture.png new file mode 100644 index 0000000..b82b5ce Binary files /dev/null and b/docs/images/cloud-platform-w-monitoring-architecture.png differ diff --git a/docs/images/graphs-from-metrics.png b/docs/images/graphs-from-metrics.png new file mode 100644 index 0000000..7da9a2d Binary files /dev/null and b/docs/images/graphs-from-metrics.png differ diff --git a/docs/images/monitoring-architecture.png b/docs/images/monitoring-architecture.png new file mode 100644 index 0000000..6d616b4 Binary files 
/dev/null and b/docs/images/monitoring-architecture.png differ diff --git a/docs/images/prometheus-connection-grafana.png b/docs/images/prometheus-connection-grafana.png new file mode 100644 index 0000000..abf62a3 Binary files /dev/null and b/docs/images/prometheus-connection-grafana.png differ diff --git a/docs/monitoring-documentation.md b/docs/monitoring-documentation.md new file mode 100644 index 0000000..d8eef6b --- /dev/null +++ b/docs/monitoring-documentation.md @@ -0,0 +1,207 @@ +# Monitoring System Documentation + +This documentation outlines the setup, configuration, and usage of the Prometheus-Grafana monitoring stack. It includes service metrics collected from custom applications, system exporters, and Traefik, along with alerting rules and visual dashboards. The goal is to ensure observability, performance tracking, and proactive issue detection across the deployed infrastructure. + +--- + +## 1. Monitoring System Architecture Documentation + +![Monitoring System Architecture](./images/cloud-platform-w-monitoring-architecture.png) + +### Components Overview + +| Component | Description | +| ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | +| **Prometheus** | Time-series database and metrics scraper. Configured with rules and alerting logic. | +| **Grafana** | Visualization layer that reads from Prometheus to display metrics dashboards. | +| **Node Exporter** | Gathers system-level metrics from host OS (CPU, memory, disk, etc.). | +| **AdonisJS App (Custom Exporter)** | Exposes application metrics via `/metrics` endpoint, including request counters, durations, and internal Node.js stats. | +| **Traefik (Reverse Proxy)** | Exposes its own HTTP metrics at `/metrics` endpoint for monitoring HTTP traffic. | +| **Alertmanager** | Manages alert routing and dispatching (e.g., to Discord via webhook relay). 
|
+| **alertmanager-discord-relay**     | Bridges Alertmanager webhook alerts to Discord.                                                                          |
+
+### Data Flow
+
+![Monitoring Architecture Data Flow](./images/monitoring-architecture.png)
+
+1. **Exporters (Node Exporter, AdonisJS, Traefik)** expose metrics via HTTP endpoints.
+2. **Prometheus** scrapes each target at defined intervals (e.g., every 15s).
+3. Prometheus **stores** raw time-series data in its local TSDB.
+4. **Recording rules** generate pre-aggregated metrics for fast querying.
+5. **Alerting rules** check metrics and trigger alerts based on thresholds.
+6. **Alertmanager** groups and routes alerts to the configured Discord webhook.
+7. **Grafana** pulls metrics from Prometheus and renders them into dashboards for visualization.
+
+---
+
+## 2. Metrics Catalog
+
+### Custom Application Metrics (AdonisJS)
+
+| Query | Description | Notes |
+| ----- | ----------- | ----- |
+| `sum(increase(_http_requests_total[$__range]))` | Total number of HTTP requests over the time window. | Total requests since `$__range`. |
+| `sum(rate(_http_requests_total{ok="true"}[$__range])) / sum(rate(_http_requests_total[$__range])) * 100` | Percentage of successful HTTP requests. | Success rate %. |
+| `sum(rate(_http_requests_total{ok="false"}[$__range])) / sum(rate(_http_requests_total[$__range])) * 100` | Percentage of failed HTTP requests. | Error rate %. |
+| `topk(15, sum by (route, method, status) (rate(_http_requests_total[$__rate_interval])) > 0)` | Top 15 routes by traffic. | Route activity insight. |
+| `nodejs_eventloop_lag_seconds` | Current Node.js event loop lag. | Measures responsiveness under load.
|
+
+---
+
+### System Metrics (Node Exporter)
+
+| Query | Description | Notes |
+| ----- | ----------- | ----- |
+| `irate(node_pressure_cpu_waiting_seconds_total{...})` | Instant rate of CPU pressure. | CPU wait pressure trend. |
+| `irate(node_pressure_memory_waiting_seconds_total{...})` | Instant rate of memory pressure. | Memory wait conditions. |
+| `irate(node_pressure_io_waiting_seconds_total{...})` | Instant rate of I/O pressure. | Disk I/O bottleneck signal. |
+| `irate(node_pressure_irq_stalled_seconds_total{...})` | Instant rate of IRQ stalls. | Interrupt handling delay. |
+| `100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle"}[$__rate_interval])))` | System-wide CPU usage. | Overall CPU load %. |
+| `scalar(node_load1) * 100 / count(count(node_cpu_seconds_total) by (cpu))` | Load average per core. | CPU saturation % per core. |
+| `(1 - (MemAvailable / MemTotal)) * 100` | Memory usage %. | Tracks available memory. |
+| `(size - avail) / size * 100` | Disk usage % (root FS). | Filesystem saturation. |
+| `count(count(node_cpu_seconds_total) by (cpu))` | CPU core count. | Total logical CPUs. |
+| `node_memory_MemTotal_bytes` | Total memory in bytes. | Base capacity reference. |
+| `node_filesystem_size_bytes{...}` | Root filesystem total size. | Base FS storage value. |
+| `node_time_seconds - node_boot_time_seconds` | System uptime in seconds. | Duration since last boot. |
+| `node_reboot_required` | 1 if reboot required. | Security/patch awareness.
|
+
+---
+
+### Reverse Proxy Metrics (Traefik)
+
+| Query | Description | Notes |
+| ----- | ----------- | ----- |
+| `topk(15, label_replace(sum by (service, code)(rate(traefik_service_requests_total{...})) > 0, ...))` | Top HTTP services by status code. | See request volume per service. |
+| `sum(traefik_open_connections{entrypoint=~"$entrypoint"}) by (entrypoint)` | Active open connections per entrypoint. | Tracks concurrent load. |
+| `topk(15, label_replace(sum by (service, method, code)(rate(traefik_service_requests_total{code=~"2.."})) > 0, ...))` | Top successful (2xx) requests. | Most actively responding services. |
+| `topk(15, label_replace(sum by (service, method, code)(rate(traefik_service_requests_total{code!~"2..\|5.."})) > 0, ...))` | Top 3xx/4xx requests. | Indicates possible routing/client issues. |
+
+---
+
+### Prometheus Self-Metrics
+
+| Query | Description | Notes |
+| ----- | ----------- | ----- |
+| `prometheus_ready` | Prometheus self-readiness. | Health check metric. |
+| `sum(prometheus_target_scrape_pool_targets)` | Total number of scrape targets. | Target discovery validation. |
+| `prometheus_notifications_alertmanagers_discovered` | Number of discovered Alertmanager instances. | Alert routing topology check. |
+| `sum(increase(prometheus_notifications_sent_total[10m]))` | Number of alerts sent in the last 10 minutes. | Active alert volume. |
+| `rate(process_cpu_seconds_total[$__rate_interval])` | Prometheus process CPU usage. | Resource usage profiling. |
+
+---
+
+## 3.
PromQL Query Reference Guide + +| Query | Purpose | +| --------------------------------------------------------------------------------------------------------- | ---------------------------------------- | +| `sum(increase(_http_requests_total[$__range]))` | Total requests over time. | +| `sum(rate(_http_requests_total{ok="true"}[$__range])) / sum(rate(_http_requests_total[$__range])) * 100` | % of successful HTTP requests. | +| `sum(rate(_http_requests_total{ok="false"}[$__range])) / sum(rate(_http_requests_total[$__range])) * 100` | % of failed HTTP requests. | +| `topk(15, sum by (route, method, status)(rate(_http_requests_total[$__rate_interval]))) > 0` | Top 15 routes by traffic. | +| `nodejs_eventloop_lag_seconds` | Event loop lag (Node.js responsiveness). | +| `irate(node_pressure_cpu_waiting_seconds_total{...})` | CPU pressure. | +| `irate(node_pressure_memory_waiting_seconds_total{...})` | Memory pressure. | +| `irate(node_pressure_io_waiting_seconds_total{...})` | I/O pressure. | +| `irate(node_pressure_irq_stalled_seconds_total{...})` | IRQ pressure. | +| `100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle"}[$__rate_interval])))` | CPU usage %. | +| `scalar(node_load1) * 100 / count(count(node_cpu_seconds_total) by (cpu))` | Load per CPU. | +| `(1 - (MemAvailable / MemTotal)) * 100` | Memory usage %. | +| `(node_filesystem_size_bytes - node_filesystem_avail_bytes) / node_filesystem_size_bytes * 100` | Root filesystem usage %. | +| `count(count(node_cpu_seconds_total) by (cpu))` | Total CPU cores. | +| `node_time_seconds - node_boot_time_seconds` | Uptime. | +| `node_reboot_required` | Check for pending reboots. | +| `sum(traefik_open_connections{entrypoint=~"$entrypoint"}) by (entrypoint)` | Active Traefik connections. | +| `topk(15, label_replace(sum by (service, code)(rate(traefik_service_requests_total{...})), ...))` | Top Traefik services by requests. | +| `prometheus_ready` | Prometheus health. 
| +| `sum(prometheus_target_scrape_pool_targets)` | Target count. | +| `sum(increase(prometheus_notifications_sent_total[10m]))` | Alerts sent. | +| `rate(process_cpu_seconds_total[$__rate_interval])` | Prometheus CPU usage. | + +--- + +## 4. Alert Rules Documentation + +### Defined Alerts + +| Alert Name | Expression | Threshold | Duration | Severity | Description | +| --------------------------- | -------------------------------------------- | --------- | -------- | -------- | -------------------------------------- | +| NodeDown | `up{job="node_exporter"} == 0` | No data | 1m | critical | Node Exporter is down. | +| HighMemoryUsage | `(1 - (MemAvailable / MemTotal)) * 100 > 80` | >80% | 10s | warning | Memory usage exceeds threshold. | +| HighCPUUsage | `100 - avg(rate(idle_cpu[2m])) * 100 > 85` | >85% | 2m | warning | CPU usage is high. | +| LowDiskSpace | `UsedDisk > 90%` | >90% | 3m | warning | Disk space is running low. | +| NodeExporterMissing | `absent(up{job="node_exporter"})` | none | 1m | critical | Node Exporter is not reporting at all. | +| HighTraefikErrorRate | `5xx errors > 5%` | >5% | 2m | warning | High HTTP error rate in Traefik. | +| AppEndpointDown | `up{job="adonisjs-app"} == 0` | No data | 1m | critical | AdonisJS app is unreachable. | +| PrometheusSelfScrapeFailing | `up{job="prometheus"} == 0` | No data | 1m | warning | Prometheus cannot scrape itself. | + +### Response Procedures + +| Alert | Standard Procedure | +| --------------------------- | --------------------------------------------------------- | +| NodeDown | Verify server uptime and Prometheus target config. | +| HighMemoryUsage | Check app/container memory usage, restart if needed. | +| HighCPUUsage | Check for app overload or infinite loops. | +| LowDiskSpace | Clear unused files or extend volume. | +| NodeExporterMissing | Ensure service is running; check Docker logs. | +| HighTraefikErrorRate | Inspect service logs for 5xx error causes. 
| +| AppEndpointDown | Confirm AdonisJS is running and reachable. | +| PrometheusSelfScrapeFailing | Ensure Prometheus instance is healthy and not overloaded. | + +### Alert Grouping and Routing + +- Alerts are grouped by `alertname` for clarity. +- Discord integration via: + + - `alertmanager-discord-relay` + - Receives all alert payloads + - Uses Discord webhook from environment variable + +--- + +## How to Run the Monitoring Stack + +1. Create external network `main-network` if not already present: + +```bash +docker network create main-network +``` + +2. Navigate to your project root and launch the monitoring stack: + +```bash +docker compose -f compose.monitor.yml up -d +``` + +3. Ensure you have a valid `.env` file containing: + + - `DISCORD_WEBHOOK_URL` + - `GRAFANA_URL_FQDN` + - `DOCKER_USER_GROUP` + +4. Access Grafana: + +``` +https:// +``` + +--- + +## Screenshot Evidence of Working Prometheus & Grafana Installation + +### Prometheus Connection on Grafana + +![Prometheus Connection on Grafana](./images/prometheus-connection-grafana.png) + +--- + +### Graphs from Metrics Queries + +![Graphs from Metrics Queries](./images/graphs-from-metrics.png) + +--- + +### Alert Rules Page + +![Alert Rules Page](./images/alert-rules-page.png) \ No newline at end of file diff --git a/docs/simulation-documentation.md b/docs/simulation-documentation.md new file mode 100644 index 0000000..b4fdec1 --- /dev/null +++ b/docs/simulation-documentation.md @@ -0,0 +1,107 @@ +# Simulation Script Documentation + +## Overview + +This script simulates realistic user interactions with an API-based chat application for the purpose of testing and generating traffic. It performs the following: + +- User registration and login +- Chat creation +- Retrieval and update of chat data +- Sending and retrieving messages +- Cleanup by deleting the chat + +This script is ideal for load testing, development environment traffic simulation, and basic integration checks. 
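For heavier load tests, the single simulated user can be fanned out into several. One illustrative approach (an assumption, not part of the current script) derives unique plus-addressed emails from the one `TEST_EMAIL`:

```javascript
// Derive n unique credential sets from one base email using plus-addressing,
// e.g. "test@email.com" -> "test+sim0@email.com", "test+sim1@email.com", ...
// Illustrative helper; not part of simulation/index.js.
function fanOutCreds(baseEmail, password, n) {
  const [local, domain] = baseEmail.split("@");
  return Array.from({ length: n }, (_, i) => ({
    email: `${local}+sim${i}@${domain}`,
    password,
  }));
}
```

Each credential set could then drive its own register/login/chat cycle, provided the API treats plus-addressed emails as distinct accounts.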
+
+---
+
+## File Location
+
+```
+simulation/index.js
+```
+
+---
+
+## Requirements
+
+- [Node.js](https://nodejs.org/) installed
+- [pnpm](https://pnpm.io/) as the package manager
+- `.env` file with the following environment variables:
+
+```
+API_BASE_URL=https://api.brainbytes.mcube.uk
+TEST_EMAIL=test@email.com
+TEST_PASSWORD=yourpassword
+```
+
+---
+
+## How to Run
+
+1. **Navigate to the `simulation` folder**:
+
+   ```bash
+   cd simulation
+   ```
+
+2. **Install dependencies (if you haven’t already):**
+
+   ```bash
+   pnpm install
+   ```
+
+3. **Run the simulation script:**
+
+   ```bash
+   pnpm run sim
+   ```
+
+---
+
+## Script Breakdown
+
+### 1. `.env` Usage
+
+The script reads these values from your `.env` file:
+
+- `API_BASE_URL`: Base URL of the backend API
+- `TEST_EMAIL` & `TEST_PASSWORD`: Credentials for the simulated user
+
+### 2. User Flow
+
+The simulation follows this sequence:
+
+1. **Register** the user (failures are ignored if the user already exists)
+2. **Login** and retrieve an `accessToken`
+3. **Create a chat**
+4. Perform the following actions using the `accessToken`:
+   - Fetch `/me`
+   - Get all chats
+   - Get a specific chat by ID
+   - Update the chat name
+   - Send a message
+   - Retrieve messages
+   - Delete the chat
+
+---
+
+## Output
+
+You will see console logs like:
+
+```
+Create chat response: { chat: { id: "abc123", ... } }
+Simulated run finished.
+```
+
+---
+
+## Troubleshooting
+
+- Ensure `.env` is correctly configured in the `simulation` directory.
+- Make sure the API server is running and accessible at `API_BASE_URL`.
+- If you see `Login failed: 401`, check the credentials in `.env`.
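The first troubleshooting step can be partly automated. A small preflight check (an illustrative sketch, not part of the script) reports which required variables are missing before any request is made:

```javascript
// Return the names of required simulation variables missing from an env
// object (pass process.env in the real script). Illustrative helper only.
function missingEnvVars(env) {
  const required = ["API_BASE_URL", "TEST_EMAIL", "TEST_PASSWORD"];
  return required.filter((name) => !env[name]);
}
```

Calling `missingEnvVars(process.env)` at the top of the script and aborting when the result is non-empty would turn a vague `Login failed` into an immediate, specific configuration error.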
+
+---
diff --git a/docs/system-design-documentation.md b/docs/system-design-documentation.md
index 38f796e..d267dbb 100644
--- a/docs/system-design-documentation.md
+++ b/docs/system-design-documentation.md
@@ -202,8 +202,9 @@ In the unlikely event that a deployed version introduces critical issues, a robu
 
 ### Monitoring and Observability
 
-The team is currently planning to implement **Grafana and Prometheus** for comprehensive monitoring.
-For immediate network monitoring, the **Traefik Dashboard** is used to observe networking activity and service status.
+The team currently uses **Grafana and Prometheus** for comprehensive monitoring.
+
+For more information, see the [monitoring documentation](./monitoring-documentation.md).
 
 ---
 
diff --git a/simulation/.env.template b/simulation/.env.template
new file mode 100644
index 0000000..32cdfad
--- /dev/null
+++ b/simulation/.env.template
@@ -0,0 +1,3 @@
+API_BASE_URL=
+TEST_EMAIL=
+TEST_PASSWORD=
\ No newline at end of file
diff --git a/simulation/README.md b/simulation/README.md
new file mode 100644
index 0000000..14bce1c
--- /dev/null
+++ b/simulation/README.md
@@ -0,0 +1,6 @@
+# Simulation Script Documentation
+
+This script simulates realistic user interactions with an API-based chat application for the purpose of testing and generating traffic.
+
+> ⚠️ **Looking for the full documentation?**
+> 👉 [View the complete guide here »](../docs/simulation-documentation.md)
\ No newline at end of file
diff --git a/simulation/index.js b/simulation/index.js
new file mode 100644
index 0000000..61758fe
--- /dev/null
+++ b/simulation/index.js
@@ -0,0 +1,114 @@
+import "dotenv/config";
+
+const BASE_URL = process.env.API_BASE_URL;
+
+const userCreds = {
+  email: process.env.TEST_EMAIL,
+  password: process.env.TEST_PASSWORD,
+  firstName: "Mark",
+  lastName: "Ngo",
+};
+
+const dummyChat1 = { prompt: "Can you teach me about biology?" };
+const dummyChat2 = { prompt: "How do cells work?"
}; + +async function registerUser() { + try { + await fetch(`${BASE_URL}/register`, { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify(userCreds), + }); + } catch (_) { + // Ignore if already registered + } +} + +async function login() { + const res = await fetch(`${BASE_URL}/login`, { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify(userCreds), + }); + + if (!res.ok) { + const text = await res.text(); + throw new Error(`Login failed: ${res.status} - ${text}`); + } + + const data = await res.json(); + if (!data.accessToken) { + throw new Error("Login response did not contain accessToken"); + } + + return data.accessToken; +} + +async function simulate() { + await registerUser(); + const token = await login(); + + const headers = { + "Content-Type": "application/json", + cookie: `accessToken=${token}`, + }; + + // Create chat first + const createRes = await fetch(`${BASE_URL}/chats`, { + method: "POST", + headers, + body: JSON.stringify(dummyChat1), + }); + + if (!createRes.ok) { + const errorText = await createRes.text(); + throw new Error( + `Failed to create chat: ${createRes.status} - ${errorText}` + ); + } + + const data = await createRes.json(); + console.log("Create chat response:", data); + + const { chat } = data; + if (!chat || !chat.id) { + throw new Error("Chat creation failed. No chat object returned."); + } + + // Proceed only after chat is confirmed + // 1. Get /me + await fetch(`${BASE_URL}/me`, { headers }); + + // 2. Get all chats + await fetch(`${BASE_URL}/chats`, { headers }); + + // 3. Get chat by ID + await fetch(`${BASE_URL}/chats/${chat.id}`, { headers }); + + // 4. Update chat name + await fetch(`${BASE_URL}/chats/${chat.id}`, { + method: "PATCH", + headers, + body: JSON.stringify({ name: "Updated from traffic sim" }), + }); + + // 5. 
Send message + await fetch(`${BASE_URL}/chats/${chat.id}/messages`, { + method: "POST", + headers, + body: JSON.stringify(dummyChat2), + }); + + // 6. Get messages + await fetch(`${BASE_URL}/chats/${chat.id}/messages`, { headers }); + + // 7. Delete chat + await fetch(`${BASE_URL}/chats/${chat.id}`, { + method: "DELETE", + headers, + }); + + console.log(`Simulated run finished.`); +} + +simulate(); \ No newline at end of file diff --git a/simulation/package.json b/simulation/package.json new file mode 100644 index 0000000..de698fc --- /dev/null +++ b/simulation/package.json @@ -0,0 +1,18 @@ +{ + "name": "simulation", + "version": "1.0.0", + "description": "", + "main": "index.js", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1", + "sim": "node index.js" + }, + "type": "module", + "keywords": [], + "author": "", + "license": "ISC", + "packageManager": "pnpm@10.10.0", + "dependencies": { + "dotenv": "^16.5.0" + } +} diff --git a/simulation/pnpm-lock.yaml b/simulation/pnpm-lock.yaml new file mode 100644 index 0000000..2b00f34 --- /dev/null +++ b/simulation/pnpm-lock.yaml @@ -0,0 +1,23 @@ +lockfileVersion: '9.0' + +settings: + autoInstallPeers: true + excludeLinksFromLockfile: false + +importers: + + .: + dependencies: + dotenv: + specifier: ^16.5.0 + version: 16.5.0 + +packages: + + dotenv@16.5.0: + resolution: {integrity: sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg==} + engines: {node: '>=12'} + +snapshots: + + dotenv@16.5.0: {}