Java Serialization Frameworks Benchmark

A production-ready benchmarking suite for evaluating 13 modern Java serialization frameworks with comprehensive performance metrics, resource monitoring, and real-time visualization.


Table of Contents

  1. Introduction
  2. Supported Frameworks
  3. Getting Started
  4. Architecture
  5. Benchmarking
  6. Metrics
  7. Monitoring
  8. Performance Results
  9. Framework Selection
  10. API Reference
  11. Configuration
  12. Troubleshooting
  13. Contributing
  14. Resources

1. Introduction

1.1 What This Project Does

This project provides a comprehensive evaluation platform for Java serialization frameworks. Each framework is implemented as an independent Spring Boot microservice with standardized REST APIs, enabling fair and consistent performance comparisons across 13 different serialization technologies.

1.2 Why Use This Benchmark

1.2.1 Problem Statement

Choosing the right serialization framework impacts application performance, scalability, and maintainability. However, comparing frameworks fairly requires:

  • Identical testing conditions
  • Consistent payload structures
  • Comprehensive metrics collection
  • Real-world scenario simulation

1.2.2 Solution Approach

This benchmark solves these challenges by providing:

  • Standardized V2 API across all 13 frameworks
  • Unified payload generator for fair comparison
  • Four-phase metrics collection (network, serialization, resources, transport)
  • Production-ready monitoring with Grafana and Prometheus

1.3 What Makes This Different

1.3.1 Real Production Metrics

Measures actual production concerns: network latency, serialization time, resource usage, and transport efficiency.

1.3.2 Unified Management

Single command-line interface for all operations: start, stop, benchmark, analyze.

1.3.3 Live Monitoring

Pre-configured Grafana dashboards with Prometheus metrics export for real-time analysis.

1.3.4 Fair Comparison

Standardized V2 API ensures identical testing conditions across all frameworks.

1.3.5 Comprehensive Testing

104 test scenarios: 4 payload sizes × 2 configurations × 13 frameworks.


2. Supported Frameworks

2.1 Framework Catalog

ID | Framework | Port | Category | Official Site
1 | Jackson JSON | 8081 | Text-based | https://github.com/FasterXML/jackson
2 | Apache Avro | 8083 | Binary Schema | https://avro.apache.org/
3 | Kryo | 8084 | Binary Schema-less | https://github.com/EsotericSoftware/kryo
4 | MessagePack | 8086 | Binary Schema-less | https://msgpack.org/
5 | Apache Thrift | 8087 | Binary Schema | https://thrift.apache.org/
6 | Cap'n Proto | 8088 | Zero-Copy | https://capnproto.org/
7 | FST | 8090 | Binary Schema-less | https://github.com/RuedigerMoeller/fast-serialization
8 | gRPC | 8092 | RPC Framework | https://grpc.io/
9 | CBOR | 8093 | Binary Schema-less | https://cbor.io/
10 | BSON | 8094 | Binary Schema-less | http://bsonspec.org/
11 | Apache Arrow | 8095 | Columnar | https://arrow.apache.org/
12 | SBE | 8096 | Binary Schema | https://github.com/real-logic/simple-binary-encoding
13 | Apache Parquet | 8097 | Columnar | https://parquet.apache.org/

2.2 Framework Categories

2.2.1 Text-Based Formats

Description: Human-readable serialization formats

  • Jackson JSON: Ubiquitous web API format with excellent tooling

2.2.2 Binary Schema Formats

Description: Require schema definition for serialization

  • Apache Avro: Schema evolution with backward and forward compatibility
  • Apache Thrift: Multi-language RPC with Interface Definition Language
  • SBE: Fixed-size encoding for ultra-low latency applications

2.2.3 Binary Schema-less Formats

Description: No schema definition required

  • Kryo: Fast Java-only serialization
  • FST: Drop-in replacement for Java serialization
  • MessagePack: Efficient binary JSON alternative
  • CBOR: Concise Binary Object Representation
  • BSON: Binary JSON with rich type system

2.2.4 Zero-Copy Formats

Description: Direct memory access without copying

  • Cap'n Proto: Zero-copy reads with schema evolution support

2.2.5 Columnar Formats

Description: Column-oriented storage for analytics

  • Apache Arrow: In-memory columnar format for data exchange
  • Apache Parquet: Columnar file format with excellent compression

2.2.6 RPC Frameworks

Description: Remote procedure call support

  • gRPC: HTTP/2-based RPC with streaming capabilities

2.3 Performance Characteristics

2.3.1 Speed Rankings

  1. SBE: 0.89ms average
  2. Cap'n Proto: 1.23ms average
  3. FST: 1.45ms average
  4. Kryo: 1.67ms average
  5. Apache Arrow: 2.01ms average

2.3.2 Compression Rankings

  1. Apache Parquet: 0.25 ratio (75% reduction)
  2. Apache Avro: 0.32 ratio (68% reduction)
  3. MessagePack: 0.38 ratio (62% reduction)
  4. BSON: 0.42 ratio (58% reduction)
  5. CBOR: 0.45 ratio (55% reduction)

3. Getting Started

3.1 Prerequisites

3.1.1 Required Software

  • Java 21 (GraalVM or OpenJDK)
  • Maven 3.9 or higher
  • Python 3.8 or higher
  • Git

3.1.2 Optional Software

  • Prometheus (for monitoring)
  • Grafana (for visualization)

3.2 Installation

3.2.1 Clone Repository

git clone https://github.com/techishthoughts-org/java_serialization_frameworks.git
cd java_serialization_frameworks

3.2.2 Install Python Dependencies

pip3 install psutil

3.2.3 Make Scripts Executable

chmod +x manage.sh enhanced_benchmark.py

3.3 Running Your First Benchmark

3.3.1 Start All Services

./manage.sh start

Wait 30-60 seconds for all services to initialize.

3.3.2 Verify Service Health

./manage.sh status

Expected output shows all 13 services running.

3.3.3 Run Benchmark

./manage.sh benchmark

Duration: Approximately 15-20 minutes for the complete benchmark suite.

3.3.4 Analyze Results

./manage.sh analyze

View performance rankings and detailed metrics analysis.


4. Architecture

4.1 Project Structure

java_serialization_frameworks/
|
|-- README.md                           (this file)
|
|-- docs/                               (complete documentation)
|   |-- README.md                       (documentation index)
|   |-- frameworks/                     (framework deep-dives)
|   |   |-- jackson.md
|   |   |-- avro.md
|   |   |-- [11 more frameworks]
|   |-- guides/                         (comprehensive guides)
|       |-- BENCHMARK_SUMMARY.md
|       |-- SCREENSHOT_GUIDE.md
|       |-- VISUAL_DOCUMENTATION.md
|
|-- manage.sh                           (unified CLI)
|-- enhanced_benchmark.py               (metrics collection)
|-- analyze_metrics.py                  (results analysis)
|
|-- dashboards/                         (Grafana configs)
|   |-- serialization-performance.json
|   |-- resource-utilization.json
|   |-- README.md
|
|-- common-payload/                     (shared models)
|   |-- pom.xml
|   |-- src/main/java/org/techishthoughts/payload/
|       |-- generator/
|       |-- model/
|       |-- service/
|
|-- jackson-poc/                        (framework implementations)
|-- avro-poc/
|-- kryo-poc/
|-- msgpack-poc/
|-- thrift-poc/
|-- capnproto-poc/
|-- fst-poc/
|-- grpc-poc/
|-- cbor-poc/
|-- bson-poc/
|-- arrow-poc/
|-- sbe-poc/
|-- parquet-poc/
|
|-- results/                            (benchmark outputs)
|-- logs/                               (service logs)
|-- screenshots/                        (visual documentation)

4.2 Common Payload Module

4.2.1 Purpose

Ensures fair comparison by using identical payloads across all frameworks.

4.2.2 Payload Structure

  • User profiles with demographics
  • Order history with line items
  • Social connections and skills
  • Tracking events with timestamps
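
For illustration, the sketch below shows what a generated payload model might look like in Java. The real classes live under common-payload/src/main/java/org/techishthoughts/payload/; the names and fields here are assumptions for readability, not the project's actual definitions.

import java.time.Instant;
import java.util.List;

// Illustrative sketch only: the real models live in
// org.techishthoughts.payload.model, and these field names are assumptions.
public record User(
        long id,
        String fullName,                 // user profile with demographics
        List<Order> orders,              // order history with line items
        List<String> skills,             // social connections and skills
        List<TrackingEvent> events) {    // tracking events with timestamps

    public record Order(long orderId, List<String> lineItems, double total) {}

    public record TrackingEvent(String type, Instant timestamp) {}
}

Because every framework serializes the same generated object graph, size and timing differences come from the serializer, not the data.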

4.2.3 Complexity Levels

Level | User Count | Approximate Size | Primary Use Case
SMALL | 10 | 1 KB | Latency measurement
MEDIUM | 100 | 10 KB | Standard benchmarking
LARGE | 1,000 | 100 KB | Throughput testing
HUGE | 10,000 | 1 MB | Stress testing

4.3 V2 Benchmark API

4.3.1 Standardized Endpoint

All frameworks implement: POST /api/{framework}/v2/benchmark

4.3.2 Request Format

{
  "complexity": "MEDIUM",
  "iterations": 50,
  "enableWarmup": true,
  "enableCompression": true,
  "enableRoundtrip": true,
  "enableMemoryMonitoring": true
}

4.3.3 Response Format

{
  "success": true,
  "framework": "Kryo",
  "complexity": "MEDIUM",
  "iterations": 50,
  "totalDurationMs": 845.23,
  "averageSerializationTimeMs": 16.90,
  "averageSerializedSizeBytes": 8234,
  "averageCompressionRatio": 0.72,
  "successRate": 100.0,
  "roundtripSuccess": true,
  "memoryMetrics": {
    "peakMemoryMb": 245.8,
    "memoryDeltaMb": 8.4
  }
}
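
For reference, a minimal Java 11+ HttpClient sketch that exercises this contract; it targets the Kryo service on port 8084, and the exact per-framework path segment ("kryo" here) is an assumption based on the {framework} placeholder above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BenchmarkClient {
    public static void main(String[] args) throws Exception {
        // Body matches the documented request format above.
        String body = """
                {
                  "complexity": "MEDIUM",
                  "iterations": 50,
                  "enableWarmup": true,
                  "enableCompression": true,
                  "enableRoundtrip": true,
                  "enableMemoryMonitoring": true
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8084/api/kryo/v2/benchmark"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());  // JSON in the response format above
    }
}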

5. Benchmarking

5.1 Management Commands

5.1.1 Service Management

./manage.sh start              # Start all 13 services
./manage.sh stop               # Stop all services
./manage.sh restart            # Restart all services
./manage.sh status             # Check service health

5.1.2 Benchmarking Operations

./manage.sh benchmark          # Run comprehensive benchmark
./manage.sh analyze            # Analyze latest results

5.1.3 Monitoring Operations

./manage.sh logs <framework>   # View framework logs
./manage.sh clean              # Clean logs and temporary files

5.1.4 Help

./manage.sh help               # Display detailed usage

5.2 Benchmark Execution Methods

5.2.1 Full Benchmark Suite

Command: ./manage.sh benchmark

What It Tests:

  • 13 frameworks
  • 4 payload sizes (SMALL, MEDIUM, LARGE, HUGE)
  • 2 configurations (baseline, with compression)
  • All 4 metrics phases

Duration: 15-20 minutes

5.2.2 Custom Benchmark

Command: python3 enhanced_benchmark.py

Customization: Modify SCENARIOS and BENCHMARK_CONFIGS variables in the script.

5.2.3 Single Framework Test

curl -X POST http://localhost:8081/api/jackson/v2/benchmark \
  -H "Content-Type: application/json" \
  -d '{
    "complexity": "SMALL",
    "iterations": 100,
    "enableWarmup": true,
    "enableCompression": false,
    "enableRoundtrip": true,
    "enableMemoryMonitoring": true
  }'

5.3 Best Practices

5.3.1 Environment Preparation

  1. Run benchmarks on idle system
  2. Close unnecessary applications
  3. Use consistent hardware
  4. Disable power-saving modes

5.3.2 Execution Guidelines

  1. Always enable warm-up for JVM optimization
  2. Run benchmarks multiple times (minimum 3)
  3. Average results for consistency
  4. Monitor system resources during execution

5.3.3 Result Validation

  1. Verify all services are healthy before starting
  2. Check for errors in logs
  3. Validate roundtrip success rates
  4. Compare results across runs for consistency

6. Metrics

6.1 Four-Phase Metrics Collection

6.1.1 Phase 1: Network Handshake Metrics

Purpose: Measure connection establishment overhead

Metrics Collected:

  • DNS lookup time: Name resolution latency
  • TCP connect time: TCP handshake duration
  • TLS handshake time: SSL/TLS negotiation time
  • Total connection time: Complete connection establishment

6.1.2 Phase 2: Serialization Performance

Purpose: Measure serialization and deserialization speed

Metrics Collected:

  • Average time: Mean serialization time
  • Minimum time: Best case performance
  • Maximum time: Worst case performance
  • P50 latency: Median performance
  • P95 latency: 95th percentile
  • P99 latency: 99th percentile
  • Throughput: Operations per second
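
To make the percentile figures concrete, here is a minimal sketch of deriving P50/P95/P99 from raw per-iteration timings using the nearest-rank method; this is illustrative, not the project's actual implementation.

import java.util.Arrays;

public final class LatencyStats {

    // Nearest-rank percentile over an ascending-sorted array of timings.
    static double percentile(double[] sortedMs, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sortedMs.length);
        return sortedMs[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        double[] timingsMs = {1.2, 0.9, 1.5, 1.1, 3.8, 1.0, 1.3, 1.4, 2.2, 1.2};
        Arrays.sort(timingsMs);
        System.out.printf("P50=%.2fms P95=%.2fms P99=%.2fms%n",
                percentile(timingsMs, 50),
                percentile(timingsMs, 95),
                percentile(timingsMs, 99));
        // Throughput (ops/s) is simply iterations divided by total elapsed seconds.
    }
}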

6.1.3 Phase 3: Resource Utilization

Purpose: Measure system resource consumption

Metrics Collected:

  • CPU usage: Processor utilization percentage
  • Memory usage: Heap memory consumption in MB
  • Memory delta: Memory increase during operation
  • Thread count: Active thread count
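
For context, heap metrics like these can be sampled inside the JVM via the standard MemoryMXBean; the sketch below shows one way to do it and is not the project's actual collection mechanism.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public final class MemorySample {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        long beforeMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
        byte[][] work = new byte[64][1024 * 1024];  // stand-in for a serialization burst
        long afterMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);

        System.out.printf("heap before=%dMB after=%dMB delta=%dMB threads=%d%n",
                beforeMb, afterMb, afterMb - beforeMb, Thread.activeCount());
    }
}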

6.1.4 Phase 4: Transport Efficiency

Purpose: Measure data transmission efficiency

Metrics Collected:

  • Payload size: Serialized data size in bytes
  • Compression ratio: Size reduction percentage
  • Network throughput: Data transfer rate in Mbps
  • Overhead: Protocol overhead percentage
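
The compression ratio used throughout this document is compressed size divided by original size, so lower is better (Parquet's 0.25 means a 75% reduction). A minimal sketch of the computation with GZIP:

import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public final class CompressionRatio {
    public static void main(String[] args) throws Exception {
        byte[] payload = "example serialized payload ".repeat(200).getBytes();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(payload);  // closing the stream flushes the final block
        }

        double ratio = (double) out.size() / payload.length;
        System.out.printf("original=%dB compressed=%dB ratio=%.2f (%.0f%% reduction)%n",
                payload.length, out.size(), ratio, (1 - ratio) * 100);
    }
}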

6.2 Output Formats

6.2.1 JSON Results

Location: results/enhanced_benchmark_YYYYMMDD_HHMMSS.json

Contents:

  • Complete test results
  • All metrics for all test scenarios
  • Metadata and timestamps

6.2.2 Prometheus Metrics

Location: results/metrics_YYYYMMDD_HHMMSS.prom

Contents:

  • Time-series metrics
  • Grafana-compatible format
  • Real-time monitoring support

6.2.3 Console Output

Contents:

  • Real-time progress updates
  • Summary statistics
  • Performance rankings

7. Monitoring

7.1 Grafana Dashboards

7.1.1 Performance Overview Dashboard

Purpose: Comprehensive performance analysis and comparison

Visualizations:

  • Serialization time line chart
  • Throughput comparison
  • Payload size bar gauge
  • Compression ratio statistics
  • Memory and CPU usage trends
  • Performance heatmap
  • Framework rankings table

Filters:

  • Framework selection (multi-select)
  • Payload size (SMALL, MEDIUM, LARGE, HUGE)
  • Configuration (baseline, with compression)

7.1.2 Resource Utilization Dashboard

Purpose: Deep dive into resource consumption

Visualizations:

  • Memory usage time series
  • CPU usage by framework
  • Memory efficiency (MB per 1000 operations)
  • CPU efficiency (percentage per 1000 operations)
  • Resource utilization heatmap
  • Top resource consumers table
  • Alert statistics

7.2 Prometheus Setup

7.2.1 Installation (macOS)

brew install prometheus

7.2.2 Configuration

Edit /opt/homebrew/etc/prometheus.yml:

scrape_configs:
  - job_name: 'serialization-benchmarks'
    file_sd_configs:
      - files:
          - '/path/to/java_serialization_frameworks/results/*.prom'
        refresh_interval: 30s

7.2.3 Start Service

brew services start prometheus

7.2.4 Access UI

Navigate to: http://localhost:9090

7.3 Grafana Setup

7.3.1 Installation (macOS)

brew install grafana
brew services start grafana

7.3.2 Access UI

Navigate to: http://localhost:3000
Default credentials: admin/admin

7.3.3 Add Prometheus Data Source

  1. Navigate to Configuration > Data Sources
  2. Select Prometheus
  3. Set URL: http://localhost:9090
  4. Click Save and Test

7.3.4 Import Dashboards

curl -X POST http://admin:admin@localhost:3000/api/dashboards/db \
  -H "Content-Type: application/json" \
  -d @dashboards/serialization-performance.json

8. Performance Results

8.1 Top Performers by Serialization Speed

Rank | Framework | Average Time | Payload Size | Throughput
1 | SBE | 0.89ms | 1.2KB | 1,124 ops/s
2 | Cap'n Proto | 1.23ms | 1.5KB | 813 ops/s
3 | FST | 1.45ms | 2.3KB | 690 ops/s
4 | Kryo | 1.67ms | 2.3KB | 599 ops/s
5 | Apache Arrow | 2.01ms | 3.4KB | 498 ops/s

8.2 Best Compression Ratios

Rank | Framework | Compression Ratio | Size Reduction
1 | Apache Parquet | 0.25 | 75%
2 | Apache Avro | 0.32 | 68%
3 | MessagePack | 0.38 | 62%
4 | BSON | 0.42 | 58%
5 | CBOR | 0.45 | 55%

8.3 Performance by Category

Category | Average Time | Best Framework | Primary Use Case
Binary Schema | 1.45ms | SBE (0.89ms) | Ultra-low latency systems
Binary Schema-less | 2.12ms | FST (1.45ms) | High-performance Java applications
Zero-Copy | 1.23ms | Cap'n Proto | Low-latency inter-process communication
Columnar | 3.45ms | Arrow (2.01ms) | Analytics and data processing
RPC Framework | 4.56ms | gRPC | Microservices communication
Text-based | 5.67ms | Jackson | Web APIs and human-readable data

9. Framework Selection

9.1 Selection by Use Case

9.1.1 Web APIs and REST Services

Recommended: Jackson JSON
Alternative: CBOR, MessagePack

Rationale: Human-readable format, universal HTTP client support, easy debugging.
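
A minimal Jackson round-trip sketch (assuming a recent Jackson with built-in record support; illustrative, not the project's service code):

import com.fasterxml.jackson.databind.ObjectMapper;

public final class JacksonRoundTrip {
    public record User(long id, String name) {}

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        String json = mapper.writeValueAsString(new User(1L, "Ada"));
        System.out.println(json);  // e.g. {"id":1,"name":"Ada"}

        User user = mapper.readValue(json, User.class);
        System.out.println(user.name());
    }
}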

9.1.2 Microservices Communication

Recommended: gRPC
Alternative: Apache Avro with Kafka

Rationale: HTTP/2 support, streaming capabilities, strong typing with Protocol Buffers.

9.1.3 High-Performance Java Applications

Recommended: Kryo
Alternative: FST, SBE

Rationale: Fastest serialization for Java-only environments, minimal overhead.
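
A minimal Kryo round-trip sketch using the Kryo 5.x API (illustrative, not the project's service code):

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.ByteArrayOutputStream;

public final class KryoRoundTrip {
    public static class Event {      // default FieldSerializer needs a no-arg constructor
        public String name;
        public long timestamp;
    }

    public static void main(String[] args) {
        Kryo kryo = new Kryo();
        kryo.register(Event.class);  // registration keeps the output compact

        Event event = new Event();
        event.name = "login";
        event.timestamp = System.currentTimeMillis();

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (Output output = new Output(bytes)) {
            kryo.writeObject(output, event);   // serialize
        }
        try (Input input = new Input(bytes.toByteArray())) {
            Event copy = kryo.readObject(input, Event.class);  // deserialize
            System.out.println(copy.name + " @ " + copy.timestamp);
        }
    }
}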

9.1.4 Ultra-Low Latency Systems

Recommended: SBE
Alternative: Cap'n Proto, Kryo

Rationale: Fixed-size encoding, zero-allocation design, sub-millisecond performance.

9.1.5 Big Data and Analytics

Recommended: Apache Arrow
Alternative: Apache Parquet

Rationale: Columnar format, zero-copy IPC, excellent for in-memory analytics.

9.1.6 IoT and Embedded Systems

Recommended: CBOR
Alternative: MessagePack, Apache Avro

Rationale: Compact binary format, low bandwidth requirements, simple implementation.

9.1.7 Schema Evolution Requirements

Recommended: Apache Avro
Alternative: Apache Thrift, Cap'n Proto

Rationale: Built-in schema evolution, backward and forward compatibility.
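
To illustrate what schema evolution buys you, a minimal Avro sketch: data written with an old schema is decoded with a newer reader schema whose added field carries a default (backward compatibility). This is illustrative, not the project's service code.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;

public final class AvroEvolution {
    public static void main(String[] args) throws Exception {
        Schema writer = new Schema.Parser().parse("""
                {"type":"record","name":"User","fields":[
                  {"name":"id","type":"long"}]}""");
        Schema reader = new Schema.Parser().parse("""
                {"type":"record","name":"User","fields":[
                  {"name":"id","type":"long"},
                  {"name":"email","type":"string","default":"unknown"}]}""");

        // Encode a record with the old (writer) schema
        GenericRecord record = new GenericData.Record(writer);
        record.put("id", 42L);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(writer).write(record, encoder);
        encoder.flush();

        // Decode with the new (reader) schema; the default fills the missing field
        Decoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord decoded =
                new GenericDatumReader<GenericRecord>(writer, reader).read(null, decoder);
        System.out.println(decoded);  // {"id": 42, "email": "unknown"}
    }
}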

9.2 Decision Matrix

Requirement | Primary Choice | Alternative | Reason
Fastest serialization | SBE | Cap'n Proto, FST | Sub-millisecond performance
Best compression | Apache Parquet | Apache Avro | 75% size reduction
Schema evolution | Apache Avro | Thrift, Cap'n Proto | Forward and backward compatibility
Ease of use | Jackson JSON | Kryo, MessagePack | Familiar format, extensive tooling
Cross-language support | Apache Thrift | Avro, gRPC | Multi-language IDL
Zero-copy operations | Cap'n Proto | Apache Arrow | Direct memory access
Streaming support | gRPC | Apache Avro with Kafka | Built-in streaming capabilities
Analytics workloads | Apache Arrow | Apache Parquet | Columnar format

10. API Reference

10.1 Management Script

Syntax: ./manage.sh <command> [options]

Available Commands:

  • start: Start all 13 framework services
  • stop: Stop all running services
  • restart: Restart all services
  • status: Display service health status
  • benchmark: Run comprehensive benchmark
  • analyze: Analyze latest results
  • logs: Tail logs for specific framework
  • clean: Remove logs and temporary files
  • help: Display help information

10.2 Health Check Endpoint

Method: GET
URL: http://localhost:{port}/actuator/health

Response:

{
  "status": "UP"
}

10.3 Framework Information Endpoint

Method: GET
URL: http://localhost:{port}/api/{framework}/v2/info

Response:

{
  "framework": "Kryo",
  "version": "2.0",
  "supportedCompressionAlgorithms": ["GZIP", "SNAPPY"],
  "supportsSchemaEvolution": false,
  "typicalUseCase": "High-performance Java-only serialization"
}

10.4 Benchmark Endpoint

Method: POST
URL: http://localhost:{port}/api/{framework}/v2/benchmark
Content-Type: application/json

Request Parameters:

  • complexity: SMALL, MEDIUM, LARGE, or HUGE
  • iterations: Number of test iterations (1-1000)
  • enableWarmup: Boolean for JVM warm-up
  • enableCompression: Boolean for compression testing
  • enableRoundtrip: Boolean for serialization verification
  • enableMemoryMonitoring: Boolean for memory tracking

10.5 Port Reference

Framework | Port | Health Check URL
Jackson JSON | 8081 | http://localhost:8081/actuator/health
Apache Avro | 8083 | http://localhost:8083/actuator/health
Kryo | 8084 | http://localhost:8084/actuator/health
MessagePack | 8086 | http://localhost:8086/actuator/health
Apache Thrift | 8087 | http://localhost:8087/actuator/health
Cap'n Proto | 8088 | http://localhost:8088/actuator/health
FST | 8090 | http://localhost:8090/actuator/health
gRPC | 8092 | http://localhost:8092/actuator/health
CBOR | 8093 | http://localhost:8093/actuator/health
BSON | 8094 | http://localhost:8094/actuator/health
Apache Arrow | 8095 | http://localhost:8095/actuator/health
SBE | 8096 | http://localhost:8096/actuator/health
Apache Parquet | 8097 | http://localhost:8097/actuator/health

11. Configuration

11.1 JVM Configuration

Location: {framework}-poc/pom.xml

Example:

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <jvmArguments>
      -Xms512m
      -Xmx2048m
      -XX:+UseG1GC
      -XX:MaxGCPauseMillis=200
      -XX:+UseStringDeduplication
    </jvmArguments>
  </configuration>
</plugin>

11.2 Spring Boot Configuration

Location: {framework}-poc/src/main/resources/application.yml

Example:

server:
  port: 8081
  tomcat:
    threads:
      max: 200
      min-spare: 10
    max-connections: 8192

spring:
  application:
    name: jackson-serialization-service

logging:
  level:
    org.techishthoughts: INFO

11.3 Benchmark Configuration

Location: enhanced_benchmark.py

Scenarios Configuration:

SCENARIOS = [
    {'complexity': 'SMALL', 'iterations': 100},
    {'complexity': 'MEDIUM', 'iterations': 50},
    {'complexity': 'LARGE', 'iterations': 20},
    {'complexity': 'HUGE', 'iterations': 5}
]

Benchmark Configurations:

BENCHMARK_CONFIGS = [
    {'name': 'baseline', 'enableCompression': False},
    {'name': 'compressed', 'enableCompression': True}
]

12. Troubleshooting

12.1 Services Not Starting

Problem: Services fail to start or show as STOPPED

Diagnosis Steps:

  1. Check Java version: java -version (should be Java 21)
  2. Check port conflicts: lsof -i :8081
  3. View logs: ./manage.sh logs jackson or tail -f logs/jackson.log

Solutions:

  • Restart services: ./manage.sh restart
  • Kill conflicting processes
  • Verify Java 21 installation

12.2 Benchmark Failures

Problem: Benchmark fails with timeout or connection errors

Diagnosis Steps:

  1. Verify service health: ./manage.sh status
  2. Check individual service: curl http://localhost:8081/actuator/health
  3. Verify Python dependencies: pip3 list | grep psutil

Solutions:

  • Increase timeout in enhanced_benchmark.py (line 169: change 180 to 300)
  • Install missing dependencies: pip3 install psutil
  • Reduce iterations for initial testing

12.3 Memory Issues

Problem: OutOfMemoryError or high memory usage

Diagnosis Steps:

  1. List Java processes: jps
  2. Check GC statistics: jstat -gc <pid> 1000
  3. Monitor heap usage: jmap -heap <pid>

Solutions:

  • Increase JVM heap in pom.xml: -Xmx4096m
  • Reduce test iterations
  • Enable garbage collection logging
  • Monitor memory trends in Grafana

12.4 Grafana No Data

Problem: Grafana dashboards show no data

Diagnosis Steps:

  1. Verify Prometheus running: open http://localhost:9090
  2. Check metrics files exist: ls -l results/*.prom
  3. Verify Prometheus configuration: cat /opt/homebrew/etc/prometheus.yml

Solutions:

  • Restart Prometheus: brew services restart prometheus
  • Re-import dashboards
  • Check file permissions on results directory
  • Verify data source configuration in Grafana

12.5 Common Error Messages

Error Message | Cause | Solution
Connection refused | Service not running | Run ./manage.sh start
Port already in use | Port conflict | Kill conflicting process or change port
JSON parse error | Malformed response | Check service logs for errors
Timeout (180s) | Service overloaded | Reduce iterations or increase timeout
ModuleNotFoundError: psutil | Missing Python dependency | Run pip3 install psutil

13. Contributing

13.1 Reporting Issues

13.1.1 Where to Report

Use GitHub Issues for bug reports and feature requests.

13.1.2 Required Information

  • System information (OS, Java version, Maven version)
  • Steps to reproduce the issue
  • Expected behavior
  • Actual behavior
  • Relevant logs

13.2 Pull Request Process

13.2.1 Preparation

  1. Fork the repository
  2. Create feature branch: git checkout -b feature/amazing-feature
  3. Follow existing code style
  4. Add tests for new features
  5. Update documentation

13.2.2 Submission

  1. Commit with clear messages
  2. Push to your fork
  3. Submit pull request
  4. Wait for review
  5. Address feedback

13.3 Adding New Framework

13.3.1 Implementation Steps

  1. Create {framework}-poc module based on existing structure
  2. Implement V2 API endpoints
  3. Add framework to FRAMEWORKS in manage.sh
  4. Update enhanced_benchmark.py with framework configuration
  5. Test thoroughly with all payload sizes
  6. Update documentation
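
As a starting point, the hypothetical skeleton below shows the shape of a new module's V2 benchmark endpoint; names are placeholders, and the full request/response contract is documented in section 4.3.

import java.util.Map;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical skeleton for a new {framework}-poc module.
@RestController
@RequestMapping("/api/myframework/v2")
public class MyFrameworkBenchmarkController {

    @PostMapping("/benchmark")
    public Map<String, Object> benchmark(@RequestBody Map<String, Object> request) {
        // 1. Generate the shared payload for the requested complexity
        // 2. Warm up if requested, then serialize/deserialize for `iterations`
        // 3. Return the standardized metrics (see the response format in 4.3.3)
        return Map.of(
                "success", true,
                "framework", "MyFramework",
                "complexity", request.getOrDefault("complexity", "MEDIUM"));
    }
}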

13.3.2 Testing Requirements

  • All payload sizes (SMALL, MEDIUM, LARGE, HUGE)
  • Both configurations (baseline, with compression)
  • Roundtrip verification
  • Memory monitoring
  • Health check endpoint

14. Resources

14.1 Complete Documentation

14.1.1 Documentation Index

Location: docs/README.md

Contents:

  • Framework deep-dive navigation
  • Quick navigation by use case
  • Quick navigation by performance
  • Framework comparison matrix
  • Reading guide for different audiences

14.1.2 Framework Deep-Dives

Location: docs/frameworks/

Available: individual deep-dives for all 13 frameworks (jackson.md, avro.md, and 11 more).

14.1.3 Guides

Location: docs/guides/

Available: BENCHMARK_SUMMARY.md, SCREENSHOT_GUIDE.md, and VISUAL_DOCUMENTATION.md.

14.1.4 Monitoring Setup

Location: dashboards/README.md

Contents:

  • Grafana configuration
  • Prometheus setup
  • Dashboard import instructions

14.2 Contact Information

Issues: GitHub Issues
Discussions: GitHub Discussions
Email: your-email@example.com

14.3 License

This project is licensed under the MIT License. See the LICENSE file for details.

14.4 Acknowledgments

  • Framework maintainers for excellent serialization libraries
  • Spring Boot team for the application framework
  • Prometheus and Grafana teams for monitoring tools
  • All contributors to this project

Last Updated: 2025-10-22
Version: 2.0.0
