This project uses Quarkus, the Supersonic Subatomic Java Framework.
The project implements a distributed health check system where nodes can monitor each other's health status. Each node:
- Exposes its own health status via REST endpoints
- Monitors the health of other nodes in the cluster
- Provides aggregated health information about the entire cluster
### Node Status Endpoint (`/node/status`)

- Simple endpoint that returns the node's ID and health status
- Used for direct node-to-node communication
- Returns:

```json
{"nodeId": <id>, "isHealthy": true}
```
### Cluster Health Check (`/q/health`)

- Aggregates health information from all nodes
- Shows the total number of nodes and the number of healthy nodes
- Provides detailed status for each node in the cluster
- Response includes:
  - Node ID
  - Cluster size
  - Number of healthy nodes
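In Quarkus, the `/q/health` endpoint is provided by the SmallRye Health extension, and extra fields can be attached through a custom `HealthCheck`. The sketch below only illustrates that mechanism; the check name, data keys, and hard-coded values stand in for whatever component actually tracks peer status in this project:

```java
package com.dant.health;

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// A minimal sketch, assuming the SmallRye Health extension; values are placeholders.
@Readiness
@ApplicationScoped
public class ClusterHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Placeholder values; a real check would read these from the
        // component that monitors the other nodes.
        String nodeId = "node1";
        int clusterSize = 3;
        int healthyNodes = 3;

        return HealthCheckResponse.named("cluster-health")
                .status(healthyNodes == clusterSize)
                .withData("nodeId", nodeId)
                .withData("clusterSize", clusterSize)
                .withData("healthyNodes", healthyNodes)
                .build();
    }
}
```

SmallRye Health reports custom checks under the `checks` array of the `/q/health` response, with the `withData` entries appearing in each check's `data` object.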
### Docker Configuration

- Each node runs in its own container
- Nodes communicate over a dedicated Docker network
- Health checks run every 5 seconds
- Configured timeout: 2 seconds
- Automatic retries: 3 times
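The interval, timeout, and retry values above are cluster configuration. Purely as an illustration of how a node might apply the same numbers when probing a peer (this is not the project's code; the peer URL is a placeholder and the `quarkus-scheduler` extension is assumed), a scheduled probe could look like this:

```java
package com.dant.health;

import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Illustrative sketch only: shows one way the 5s interval, 2s timeout,
// and 3 retries listed above could be applied on the application side.
@ApplicationScoped
public class PeerProbe {

    private static final String PEER_URL = "http://node2:8080/node/status"; // hypothetical

    private final HttpClient client = HttpClient.newHttpClient();

    // Probe every 5 seconds, matching the interval listed above.
    @Scheduled(every = "5s")
    void probePeer() {
        boolean healthy = false;
        // Up to 3 attempts, each with a 2-second timeout.
        for (int attempt = 1; attempt <= 3 && !healthy; attempt++) {
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(PEER_URL))
                        .timeout(Duration.ofSeconds(2))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                healthy = response.statusCode() == 200;
            } catch (Exception e) {
                // Timeout or connection failure: retry until attempts are exhausted.
            }
        }
        // A real implementation would record `healthy` for the cluster health check.
    }
}
```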
Start the entire cluster using:

```bash
./scripts/start.sh
```

This will:

- Build the application
- Create a Docker network
- Start three nodes on ports 8081, 8082, and 8083
You can configure the log levels when starting the cluster:

```bash
./scripts/start.sh --log-level=INFO --app-log-level=INFO
```

Available log levels:

- `TRACE`: Most detailed logging
- `DEBUG`: Detailed information for debugging
- `INFO`: General information (default for system logs)
- `WARN`: Warning messages
- `ERROR`: Error messages only
- `FATAL`: Critical errors only
The `--log-level` parameter controls the system-wide logging level, while `--app-log-level` specifically controls the logging level for the application code (the `com.dant` package).
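As a small, purely illustrative example (not taken from the project), application code under `com.dant` would typically log through JBoss Logging, which Quarkus uses internally; the `--app-log-level` value then determines which of these statements actually appear:

```java
package com.dant.example;

import org.jboss.logging.Logger;

// Illustrative only: one statement per log level listed above.
public class LoggingExample {

    private static final Logger LOG = Logger.getLogger(LoggingExample.class);

    void demonstrateLevels() {
        LOG.trace("Most detailed logging");          // printed only at TRACE
        LOG.debug("Detailed debugging information"); // printed at DEBUG or TRACE
        LOG.info("General information");             // printed at INFO, DEBUG, or TRACE
        LOG.warn("Warning message");
        LOG.error("Error message");
        LOG.fatal("Critical error");
    }
}
```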
A helper script is provided to easily view logs from the Docker containers:

```bash
./scripts/logs.sh
```

Options:

- `--node=NODE`: Show logs for specific node(s). Can be `node1`, `node2`, `node3`, or a comma-separated list
- `--follow` or `-f`: Follow log output (like `tail -f`)
- `--tail=LINES`: Number of lines to show (default: all)
Examples:

```bash
# View logs from all nodes
./scripts/logs.sh
# View logs from node1 only
./scripts/logs.sh --node=node1
# View logs from node1 and node3
./scripts/logs.sh --node=node1,node3
# Follow logs from all nodes
./scripts/logs.sh --follow
# Show only the last 100 lines from all nodes
./scripts/logs.sh --tail=100
```

Check individual node status:

```bash
curl http://localhost:8081/node/status
curl http://localhost:8082/node/status
curl http://localhost:8083/node/status
```

Check cluster-wide health:

```bash
curl http://localhost:8081/q/health
curl http://localhost:8082/q/health
curl http://localhost:8083/q/health
```

You can run your application in dev mode that enables live coding using:

```bash
./mvnw compile quarkus:dev
```

> **NOTE:** Quarkus now ships with a Dev UI, which is available in dev mode only at http://localhost:8080/q/dev/.
The application can be packaged using:

```bash
./mvnw package
```

It produces the `quarkus-run.jar` file in the `target/quarkus-app/` directory.
Be aware that it's not an über-jar as the dependencies are copied into the `target/quarkus-app/lib/` directory.

The application is now runnable using `java -jar target/quarkus-app/quarkus-run.jar`.

If you want to build an über-jar, execute the following command:

```bash
./mvnw package -Dquarkus.package.type=uber-jar
```

The application, packaged as an über-jar, is now runnable using `java -jar target/*-runner.jar`.
You can create a native executable using:

```bash
./mvnw package -Dnative
```

Or, if you don't have GraalVM installed, you can run the native executable build in a container using:

```bash
./mvnw package -Dnative -Dquarkus.native.container-build=true
```

You can then execute your native executable with: `./target/distributed-database-java-1.0.0-SNAPSHOT-runner`
If you want to learn more about building native executables, please consult https://quarkus.io/guides/maven-tooling.
### REST

Easily start your REST Web Services

### Features

- In-memory storage engine
- Query engine with filtering and projection (see the sketch after this list)
- Parquet file support for data loading
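Purely as a conceptual illustration of filtering and projection over in-memory rows (plain Java streams, not the project's query API):

```java
import java.util.List;
import java.util.Map;

// Conceptual sketch: "filtering" keeps rows matching a predicate,
// "projection" keeps only the requested columns.
public class FilterProjectionSketch {

    public static void main(String[] args) {
        List<Map<String, Object>> table = List.of(
                Map.<String, Object>of("id", 1, "name", "alice", "age", 34),
                Map.<String, Object>of("id", 2, "name", "bob", "age", 19),
                Map.<String, Object>of("id", 3, "name", "carol", "age", 42));

        // Filter: age > 30; Projection: keep only "id" and "name".
        List<Map<String, Object>> result = table.stream()
                .filter(row -> (int) row.get("age") > 30)
                .map(row -> Map.<String, Object>of("id", row.get("id"), "name", row.get("name")))
                .toList();

        result.forEach(System.out::println);
    }
}
```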
### Requirements

- Java 17 or higher
- Maven 3.8.1 or higher
To build and run the application:

```bash
mvn clean package
java -jar target/quarkus-app/quarkus-run.jar
```

Or using the Quarkus dev mode:

```bash
mvn quarkus:dev
```

The application provides two endpoints for loading Parquet files:
### POST /parquet/load

```
POST /parquet/load?filePath=/path/to/file.parquet&tableName=my_table
```

This endpoint loads a Parquet file from a local path into the database.

Query Parameters:

- `filePath`: The path to the Parquet file
- `tableName`: The name of the table to load the data into

Response:

```json
{
  "rowsLoaded": 1000,
  "message": "Successfully loaded 1000 rows into table my_table"
}
```
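As a hedged example of calling this endpoint from Java (not part of the project; the host, port, file path, and table name are placeholders):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ParquetLoadClient {

    public static void main(String[] args) throws Exception {
        String filePath = URLEncoder.encode("/path/to/file.parquet", StandardCharsets.UTF_8);
        String tableName = URLEncoder.encode("my_table", StandardCharsets.UTF_8);

        // POST with the two query parameters documented above; no request body.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/parquet/load?filePath=" + filePath
                        + "&tableName=" + tableName))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Expecting a JSON body like {"rowsLoaded": ..., "message": ...}
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```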
### POST /parquet/upload

```
POST /parquet/upload
```

This endpoint accepts a multipart form with the following fields:

- `file`: The Parquet file to upload
- `tableName`: The name of the table to load the data into

Response:

```json
{
  "rowsLoaded": 1000,
  "message": "Successfully loaded 1000 rows into table my_table"
}
```
This project is licensed under the MIT License - see the LICENSE file for details.