This document describes the various scripts included in the package.json of the project.
- `dev`: Runs the project in development mode using `ts-node`, with environment variables loaded via `dotenv`.
- `clean`: Removes the `node_modules` directory to clean up the project dependencies.
- `build`: Compiles TypeScript files into JavaScript using the TypeScript compiler.
- `watch`: Runs the project in development mode with live reloading enabled, using `ts-node-dev`.
- `lint`: Executes ESLint to identify and report patterns found in TypeScript files within the `src` directory.
- `lint:fix`: Runs the `lint` script with the `--fix` option to automatically correct linting errors.
- `start`: Starts the project using Node.js, targeting the compiled JavaScript files in the `dist` directory.
- `typecheck`: Runs the TypeScript compiler to check types without generating JavaScript files.
- `commit`: Facilitates conventional commits using the interactive `git-cz` CLI tool.
- `prepare`: Sets up Husky for managing Git hooks.
- `test`: Runs tests using Jest in isolated environments.
- `plugin:generate`: Executes a custom JS script (`wezard-scripts/plugin-generator/index.js`) to generate boilerplate code for new plugins, including the necessary files and directories based on a template. It requires the plugin name as the second parameter (e.g., `user`) and generates files and folders in the `src/plugins` directory.
- `schema:generate`: Executes a custom script (`wezard-scripts/schema-generator/index.js`) to generate schema-related artifacts. It generates a TypeScript schema from the `.types.ts` files in the `src` directory using `typescript-json-schema`, outputting to `_schema.ts`. This final file can be used for validation in `.routes.ts` files. For TypeScript annotations, see this doc.
- `prisma:generate`: Generates the Prisma types.
- `terraform:init:staging`: Initializes the Terraform staging configuration.
- `terraform:init:prod`: Initializes the Terraform prod configuration.
- `terraform:apply:staging`: Applies the Terraform staging configuration.
- `terraform:apply:prod`: Applies the Terraform prod configuration.
This document outlines the structure of a typical plugin, consisting of several key components that work together to provide functionality. Each file in the plugin has a distinct role:
- `*.routes.ts`: Defines the API endpoints, including paths, permissions, authentication requirements, and validation rules. Ensures secure and structured access to the plugin's functionalities.
- `*.controller.ts`: Acts as the intermediary between the services and routes. It invokes the necessary services, handles errors, and formats the response to ensure consistency and error management.
- `*.service.ts`: Responsible for the business logic. It calls the repositories with prepared input data and processes the output, returning "clean" data back to the controller.
- `*.repository.ts`: Directly interacts with the database, performing CRUD operations. It abstracts the database layer from the service layer, ensuring separation of concerns.
- `*.types.ts`: Contains the type declarations for the plugin. It defines the interfaces, types, and enums used across the plugin, ensuring type safety and clarity in data handling.
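The layering above can be sketched in a few lines of TypeScript. All names here are illustrative (the generated boilerplate will differ), and an in-memory `Map` stands in for the Prisma-backed repository:

```typescript
// Hypothetical "user" plugin showing the repository → service → controller layering.

interface User {
  id: string;
  email: string;
}

// Repository: the only layer that talks to the data store
// (an in-memory Map stands in for Prisma here).
const store = new Map<string, User>();
const userRepository = {
  create(user: User): User {
    store.set(user.id, user);
    return user;
  },
  findById(id: string): User | undefined {
    return store.get(id);
  },
};

// Service: business logic; validates input and returns "clean" data.
const userService = {
  register(id: string, email: string): User {
    if (!email.includes("@")) throw new Error("invalid email");
    return userRepository.create({ id, email });
  },
};

// Controller: invokes the service, handles errors, and shapes the
// response that the route will send back.
function registerController(body: { id: string; email: string }) {
  try {
    const user = userService.register(body.id, body.email);
    return { status: "success", data: user };
  } catch (err) {
    return { status: "error", message: (err as Error).message };
  }
}
```

The route file would then wire `registerController` to an endpoint, keeping HTTP concerns out of the service and repository layers.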
The `hooks` folder contains predefined Fastify server hooks to decorate and enrich the Fastify Request and Reply objects:
- `WezardApiResponse.ts` defines two methods (`success` and `error`) which are appended to the FastifyReply object as `wezardSuccess` and `wezardError` to standardize success and error responses. The `success` function is also responsible for outgoing validation, given a schema passed as a third parameter.
- `WezardErrorHandler.ts` defines error management and logging. It handles all responses in case an error occurs.
- `WezardReqInitializer.ts` sets the logger for the request and logs the initial message.
- `WezardResponseLogger.ts` logs the final request message.
- The `authentication` folder contains authentication functions to use in `.routes.ts` plugin files, e.g., `instance.post('/login', { preHandler: [authToken], schema: { body: _schema.LoginBody } }, authController.login)`. Here you can define different kinds of authentication (e.g., `authToken`) and, for each of them, define their implementation (e.g., `firebase-auth.ts`).
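A rough sketch of what the `wezardSuccess`/`wezardError` decorators might look like. The envelope fields and signatures below are assumptions based on the description above, and a minimal stand-in replaces Fastify's Reply type so the sketch is self-contained:

```typescript
// Minimal stand-in for Fastify's Reply.
interface Reply {
  statusCode: number;
  payload?: unknown;
}

// Assumed envelope shape; the real `success` also validates `data`
// against the schema passed as the third parameter.
function wezardSuccess(reply: Reply, data: unknown, _schema?: object): Reply {
  reply.statusCode = 200;
  reply.payload = { status: "success", code: 200, data };
  return reply;
}

function wezardError(reply: Reply, message: string, code = 500): Reply {
  reply.statusCode = code;
  reply.payload = { status: "error", code, message };
  return reply;
}
```

In the real hooks these functions are attached to the reply object (e.g. via Fastify's `decorateReply`), so a controller would call something like `reply.wezardSuccess(data, schema)` rather than the free functions shown here.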
Prisma for PostgreSQL focuses on defining database schemas and generating the Prisma Client through the `npx prisma generate` command. This process bridges the gap between the database schema and application code, enhancing type safety and data access. For in-depth details, refer to the official Prisma documentation.
The `prisma` folder contains:

- `schema.prisma`, which is the DB model definition (you can switch from PostgreSQL to MongoDB by changing the `provider` line).
- The `migrations` folder, where migrations are stored after running `npx prisma migrate dev`.
- The `dbml` folder, which contains a DBML document of the DB model (you can paste it on dbdiagram.io).
The Prisma client is then instantiated in `src/utils/db.ts`.
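The point of `src/utils/db.ts` is that every repository shares a single client (and therefore one connection pool). Below is a self-contained sketch of that pattern, with a stand-in class instead of the real `PrismaClient`:

```typescript
// Sketch of the shared-instance pattern src/utils/db.ts follows.
// The real file exports a PrismaClient; DbClient is a stand-in here
// so the example runs without the generated client.
class DbClient {
  private static shared: DbClient | undefined;

  // Every caller gets the same underlying client/connection pool.
  static instance(): DbClient {
    if (!DbClient.shared) DbClient.shared = new DbClient();
    return DbClient.shared;
  }
}

// In the project this constant would be exported and imported by repositories.
const db = DbClient.instance();
```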
In the `src/utils` folder, you can also find:
- Firebase notification utility (sends notifications given device tokens and a notification title and message).
- Storage utility (the standard implementation uses Google Cloud Storage).
- Mail utility (the standard implementations use SendGrid and MailJet).
- Definition of `WezardError`, used whenever an error needs to be thrown. You can throw it in two different ways:
  - `throw WezardError.fromDef(APIErrors.InvalidToken)`: in this case, you are using a predefined error from the `consts.ts` file.
  - `throw new WezardError("message", 500, "GENERIC_CODE", { "data": "error data" }, previousError, isVisible)`: in this case, you are constructing a new error. Follow the documentation in `WezardError.ts` to understand how to use the input parameters.
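For illustration, here is a minimal sketch of the error class these two examples imply. The field names, the `fromDef` signature, and the `InvalidToken` definition are assumptions — the real definitions live in `WezardError.ts` and `consts.ts`:

```typescript
// Sketch only: parameter order follows the second example above.
class WezardError extends Error {
  statusCode: number;
  code: string;
  data?: Record<string, unknown>;
  previousError?: Error;
  isVisible: boolean;

  constructor(
    message: string,
    statusCode: number,
    code: string,
    data?: Record<string, unknown>,
    previousError?: Error,
    isVisible = true,
  ) {
    super(message);
    this.name = "WezardError";
    this.statusCode = statusCode;
    this.code = code;
    this.data = data;
    this.previousError = previousError;
    this.isVisible = isVisible;
  }

  // Builds an error from a predefined definition, as with APIErrors in consts.ts.
  static fromDef(def: { message: string; statusCode: number; code: string }): WezardError {
    return new WezardError(def.message, def.statusCode, def.code);
  }
}

// Hypothetical predefined entry, standing in for APIErrors.InvalidToken.
const InvalidToken = { message: "Invalid token", statusCode: 401, code: "INVALID_TOKEN" };

const err = WezardError.fromDef(InvalidToken);
// err.statusCode is 401 and err.code is "INVALID_TOKEN"
```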
The logger is built on Winston and is designed to integrate multiple transports, adapting to different environments and requirements: simple console output for development, and transports such as Logtail and Sentry for production, ensuring effective logging practices across the stages of application deployment.

For a detailed walkthrough and code snippets, refer to the documentation for Winston and the specific transports you want to integrate.
This file defines the main configuration for the Google Cloud infrastructure required by the Fastify service. It includes:
- Google Cloud Provider: Specifies the GCP project and region.
- API Enabling: Activates the necessary APIs to manage resources like Cloud Run, Cloud SQL, Artifact Registry, and IAM.
- GitHub Actions Service Account: Creates a service account that is used to deploy from GitHub Actions with the following roles:
  - `roles/artifactregistry.admin`
  - `roles/iam.serviceAccountUser`
  - `roles/run.admin`
- Cloud SQL (PostgreSQL 16): Creates and configures a Cloud SQL instance running PostgreSQL 16.
- Artifact Registry: Creates a Docker repository to host the Docker images used by the service.
- Cloud Run: Defines the Cloud Run service, including environment variables, CPU and memory configurations, and autoscaling.
- IAM Binding: Grants necessary permissions to the service account for deploying on Cloud Run.
- Outputs: Returns key information like the Cloud Run URL and the service account key in JSON format.
This file defines all the variables needed to configure the infrastructure. Each variable can be overridden by a .tfvars file specific to the environment (e.g., staging.tfvars).
Key variables include:
- `project_id`: The GCP project ID.
- `service_name`: The name of the Cloud Run service.
- `db_instance_name`: The name of the Cloud SQL (PostgreSQL) instance.
- `db_name`, `db_user`, `db_pwd`: Database configurations.
- `min_instances`, `max_instances`: Parameters for the minimum and maximum number of Cloud Run instances.
- `cpu_limit`, `memory_limit`: Resource limits for Cloud Run.
- `bucket_name`: The name of the Cloud Storage bucket.
- `github_sa_name`: The name of the service account used by GitHub Actions.
- `tmp_image_enabled`: A boolean that determines whether to use a temporary image for the initial deployment, when the Docker image is not yet available in the Artifact Registry. This is useful during the first deployment to avoid failures if the target image has not been pushed yet. If `true`, it allows the use of a placeholder image until the final image is available.
This file contains variable values specific to the staging environment. It allows you to quickly configure the infrastructure for a specific environment.
Example variables in the file:
```hcl
project_id        = "app-wezard"
service_name      = "wezard-svc-staging"
db_instance_name  = "wezard-staging"
db_name           = "staging"
db_user           = "staging"
db_pwd            = "AAAAAA"
min_instances     = "0"
max_instances     = "1"
cpu_limit         = "1"
memory_limit      = "256Mi"
bucket_name       = "wezard-staging"
github_sa_name    = "github-actions-sa"
tmp_image_enabled = true
```

These variables are loaded when running Terraform with the `-var-file="staging.tfvars"` flag.
This file defines the outputs that Terraform will return after running the deployment:
- Cloud Run URL: The public URL of the Cloud Run service.
- Database Connection String: The connection string to the Cloud SQL PostgreSQL database.
- Artifact Registry URL: The Docker repository URL for deployment.
- Service Account Key: The service account key in BASE64 format, to be used for configuring GitHub Actions.
First, initialize the Terraform project for the staging environment with `yarn terraform:init:staging`. For production, use `yarn terraform:init:prod`.

To apply the configuration and create the resources on GCP for staging, run `yarn terraform:apply:staging`. For production, use `yarn terraform:apply:prod`.

You can define environment variables in the `.tfvars` file or in `variables.tf`. These variables will be automatically passed to the Fastify service when deploying on Cloud Run.
The variable tmp_image_enabled is a boolean flag that controls whether a temporary placeholder image should be used for Cloud Run when the target Docker image is not yet available in the Artifact Registry.
This flag is primarily used during the initial deployment to avoid failures when the Docker image that should be deployed hasn't been pushed to the Artifact Registry yet. If this is set to true, a temporary Docker image (like nginx or hello-world) is used in place of the final image until it is available.
In the Terraform code, you can find this variable being checked and used to determine which Docker image to use for the Cloud Run service.
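As a hedged sketch (the attribute and the `local.service_image` reference are illustrative, not the project's actual code), the check might look like:

```hcl
# Illustrative only: fall back to a public placeholder image
# until the real image has been pushed to the Artifact Registry.
image = var.tmp_image_enabled ? "us-docker.pkg.dev/cloudrun/container/hello" : local.service_image
```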
The Bruno collections are stored under the bruno folder and mirror the example Fastify plugins:
- `bruno/Auth/*.bru` calls the `Auth` routes (for example the `/register` endpoint).
- `bruno/User/*.bru` calls the `User` routes (for example the `GET /users/:userId` endpoint).
- `bruno/Health/*.bru` and `bruno/Docs/*.bru` target the health check and OpenAPI documentation routes.
- `bruno/environments/local.bru` defines shared variables such as `baseUrl`, `apiVersion` and `token`.

The collection is generated from the OpenAPI document using the `yarn bruno:generate` script, keeping the requests in sync with the documented API.
- Install the Bruno CLI and make sure the service is running locally.
- Edit `bruno/environments/local.bru` if needed to point `baseUrl` to your local URL and set `token` when testing authenticated routes.
- Open the `bruno` folder in the Bruno app, or run requests from the CLI, for example:
  - Run all checks against the local environment from inside the `bruno` directory: `bru run --env local`
  - Run or tweak the single `Auth`/`User` requests to manually exercise the example endpoints.
The test suite is built with Jest and split into unit and integration layers:
- Unit tests live under `tests/unit`:
  - `auth.service.test.ts` covers `authService.register`, mocking `users.service`.
  - `users.service.test.ts` covers `usersService` while mocking the underlying repository.
- Integration tests live under `tests/integration` and are meant to exercise Fastify endpoints end‑to‑end (see `example.test.ts` as a template).
- Shared test utilities and seeders (for example `tests/auth.seeders.ts`) are used to prepare and clean the database for auth‑related scenarios.
The following scripts are available in `package.json`:

- `yarn test:unit` runs only the unit tests.
- `yarn test:integration` runs only the integration tests.
- `yarn test:all` runs the full suite with the `.env.test` configuration.
- Ensure the test database is up and migrated (see the Mock DB section below, or run `yarn test:setup`).
- Run unit tests during development: `yarn test:unit`
- Run integration tests when you need to verify the example HTTP endpoints: `yarn test:integration`
- Before pushing, run the full suite locally with: `yarn test:all`
Tests use an isolated PostgreSQL instance defined in `tests/docker-compose.test.yml`:

- Spins up a `postgres:16-alpine` container on port `5433` with an in‑memory data directory (tmpfs), so test data is ephemeral.
- Applies a health check (`pg_isready`) to ensure the database is ready before tests run.
- Enables the `pgcrypto` extension through the container command.
The test database lifecycle is orchestrated by scripts in `package.json`:

- `yarn test:db:up` starts the database container.
- `yarn test:db:down` stops and removes it.
- `yarn test:setup` brings the DB up and applies Prisma migrations against it.
- `yarn test:reset` cleans auth data, recreates the database and reruns migrations.
- `yarn test:teardown` stops the database container.
- Start the test database and run migrations: `yarn test:setup`
- Run whichever test command you need (`yarn test:unit`, `yarn test:integration`, or `yarn test:all`).
- When you are done testing, stop the database: `yarn test:teardown`
The `.env.test` file defines the connection string used by Prisma and the application when running the test scripts.
Request and response validation is implemented with Zod schemas co‑located with each plugin:

- `src/plugins/auth/auth.validation.ts` defines `RegisterBodySchema` and `RegisterResponseSchema`.
- `src/plugins/users/users.validation.ts` defines `GetUserParamsSchema` and `CreateUserBodySchema`.
- `src/plugins/response.validation.ts` exports:
  - `ResponseSchema` and a generic `Response<T>` type.
  - `generateSchema`, to convert a Zod schema to a JSON Schema used by Fastify for validation and OpenAPI generation.
  - `generateResponseSchema`, to wrap a payload schema inside the standard `status`/`code`/`data` shape.
The route handlers plug these schemas into Fastify:
- `auth.routes.ts` uses `generateSchema(RegisterBodySchema)` for the request body and `generateResponseSchema(RegisterResponseSchema)` for the `200` response.
- `users.routes.ts` uses `generateSchema(GetUserParamsSchema)` for URL params.
This ensures strong runtime validation while keeping TypeScript types in sync via `z.infer<...>`.
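To make the envelope concrete, here is a sketch of what `generateResponseSchema` might emit. The real helper in `src/plugins/response.validation.ts` takes a Zod schema; this self-contained version takes a ready-made JSON Schema object instead, and the exact output shape is an assumption based on the `status`/`code`/`data` description above:

```typescript
type JsonSchema = Record<string, unknown>;

// Sketch: wrap a payload JSON Schema in the standard status/code/data envelope.
function generateResponseSchema(payload: JsonSchema): JsonSchema {
  return {
    type: "object",
    properties: {
      status: { type: "string" },
      code: { type: "number" },
      data: payload,
    },
    required: ["status", "code", "data"],
  };
}

// Hypothetical payload schema, standing in for RegisterResponseSchema.
const registerResponse = generateResponseSchema({
  type: "object",
  properties: { id: { type: "string" }, email: { type: "string" } },
});
```

Fastify then uses such a schema both to serialize/validate the `200` response and to document it in the generated OpenAPI spec.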
When adding a new route:
1. Create or extend a `*.validation.ts` file in the relevant plugin folder with your Zod schemas for params, body, query and/or response.
2. Export TypeScript types with `z.infer<typeof YourSchema>` so services and controllers can use strongly‑typed data.
3. In the corresponding `.routes.ts` file:
   - Import the schema(s).
   - Wrap them with `generateSchema`/`generateResponseSchema` and attach them to the Fastify `schema` field.
4. Regenerate or inspect the OpenAPI documentation to verify the new schemas are correctly applied.
The CI pipeline is defined in `.github/workflows/test.yml` and runs on every push, pull request, and manual trigger:

- Provisions a `postgres:16-alpine` service on port `5433`, mirroring the local test DB setup.
- Loads the `.env.test` file into the GitHub Actions environment.
- Installs dependencies with `yarn install --frozen-lockfile`.
- Generates the Prisma client with `yarn prisma:generate` and applies migrations with `yarn prisma migrate deploy`.
- Cleans auth test data with `yarn test:auth:clean`.
- Runs static checks (`yarn typecheck` and `yarn lint`).
- Executes the full test suite with `yarn test:all`.
- On pushes and pull requests: the workflow runs automatically and will fail the check if type checking, linting, or tests fail.
- Manual runs: from the GitHub Actions tab, trigger the “Units and integrations tests” workflow via `workflow_dispatch` to re‑run the pipeline on any branch.
- To reproduce the same steps locally, run the commands in the same order: `yarn test:setup`, `yarn test:auth:clean`, `yarn typecheck`, `yarn lint`, `yarn test:all`.