ChunkProcessor implements the data transformation pipeline for individual chunks. It handles compression and encryption in the correct order for both writing and reading.
```java
public final class ChunkProcessor
```

Write pipeline:

```
Original Data
      │
      ▼
┌─────────────┐
│  Compress   │  (if enabled and reduces size)
└─────────────┘
      │
      ▼
┌─────────────┐
│   Encrypt   │  (if enabled)
└─────────────┘
      │
      ▼
 Stored Data
```

Read pipeline:

```
 Stored Data
      │
      ▼
┌─────────────┐
│   Decrypt   │  (if chunk is encrypted)
└─────────────┘
      │
      ▼
┌─────────────┐
│ Decompress  │  (if chunk is compressed)
└─────────────┘
      │
      ▼
Original Data
```
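Reading must undo the write stages in reverse order, and compression has to come before encryption, since well-encrypted output is statistically random and no longer compressible. As a rough, self-contained illustration of this ordering (a sketch only, using the JDK's `Deflater` and AES/GCM as stand-ins for the library's pluggable providers, not the `ChunkProcessor` API itself):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class PipelineOrderSketch {

    // Compress with the JDK's zlib deflater (stand-in for a CompressionProvider)
    static byte[] deflate(byte[] in) {
        Deflater d = new Deflater();
        d.setInput(in);
        d.finish();
        byte[] buf = new byte[in.length * 2 + 64];
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n);
    }

    static byte[] inflate(byte[] in, int originalSize) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(in);
        byte[] out = new byte[originalSize];
        int n = inf.inflate(out);
        inf.end();
        return Arrays.copyOf(out, n);
    }

    // AES/GCM (stand-in for an EncryptionProvider)
    static byte[] crypt(int mode, SecretKey key, byte[] iv, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(mode, key, new GCMParameterSpec(128, iv));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "aetherpack chunk data ".repeat(500).getBytes();

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Write path: compress first, then encrypt. Encrypting first would
        // destroy the redundancy that compression depends on.
        byte[] stored = crypt(Cipher.ENCRYPT_MODE, key, iv, deflate(original));

        // Read path: undo the transformations in reverse order.
        byte[] restored = inflate(
                crypt(Cipher.DECRYPT_MODE, key, iv, stored), original.length);

        System.out.println(Arrays.equals(original, restored)); // prints: true
    }
}
```

The round trip only succeeds because each read stage exactly mirrors its write stage in opposite order.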
Creates a new builder for configuration.

```java
public static Builder builder()
```

Example:

```java
ChunkProcessor processor = ChunkProcessor.builder()
    .compression(zstdProvider, 6)
    .encryption(aesProvider, secretKey)
    .build();
```

Creates a processor that applies no transformations.

```java
public static ChunkProcessor passThrough()
```

Example:

```java
// For uncompressed, unencrypted archives
ChunkProcessor processor = ChunkProcessor.passThrough();
```

```java
public boolean isCompressionEnabled()
```

Returns `true` if a compression provider is configured.

```java
public boolean isEncryptionEnabled()
```

Returns `true` if both an encryption provider and a key are configured.

```java
public @Nullable CompressionProvider getCompressionProvider()
```

Returns the compression provider, or `null` if compression is disabled.

```java
public @Nullable EncryptionProvider getEncryptionProvider()
```

Returns the encryption provider, or `null` if encryption is disabled.
Processes data for writing (compress → encrypt).

```java
public ProcessedChunk processForWrite(byte[] data, int originalSize)
    throws IOException
```

Parameters:
- `data` - Original chunk data
- `originalSize` - Number of valid bytes in the `data` array

Returns: a `ProcessedChunk` with the transformed data and metadata.

Behavior:
- Compression is only applied if it reduces the size
- The returned flags indicate which transformations were applied

Example:

```java
byte[] chunkData = new byte[256 * 1024]; // 256 KB chunk
int bytesRead = input.read(chunkData);

ProcessedChunk result = processor.processForWrite(chunkData, bytesRead);

// Write chunk header with flags
ChunkHeader header = ChunkHeader.builder()
    .originalSize(result.originalSize())
    .storedSize(result.storedSize())
    .compressed(result.compressed())
    .encrypted(result.encrypted())
    .build();

// Write processed data
output.write(result.data());
```

Processes data for reading (decrypt → decompress).
```java
public byte[] processForRead(
    byte[] data,
    int originalSize,
    boolean compressed,
    boolean encrypted)
    throws IOException
```

Parameters:
- `data` - Stored chunk data
- `originalSize` - Expected decompressed size
- `compressed` - Whether the chunk is compressed
- `encrypted` - Whether the chunk is encrypted

Returns: the original, unprocessed data.

Throws:
- `IOException` if decryption fails (wrong key, corrupted data)
- `IOException` if decompression fails (corrupted data)
- `IOException` if the processor lacks a required provider

Example:

```java
// Read chunk header
ChunkHeader header = readChunkHeader(input);

// Read stored data
byte[] storedData = new byte[header.storedSize()];
input.readFully(storedData);

// Process (decrypt → decompress)
byte[] originalData = processor.processForRead(
    storedData,
    header.originalSize(),
    header.isCompressed(),
    header.isEncrypted()
);
```

Result of `processForWrite()`.
```java
public record ProcessedChunk(
    byte[] data,        // Transformed data
    int originalSize,   // Original unprocessed size
    int storedSize,     // Size of transformed data
    boolean compressed, // Whether compression was applied
    boolean encrypted   // Whether encryption was applied
)
```

Usage:

```java
ProcessedChunk result = processor.processForWrite(data, data.length);
System.out.println("Original: " + result.originalSize() + " bytes");
System.out.println("Stored: " + result.storedSize() + " bytes");
System.out.println("Compressed: " + result.compressed());
System.out.println("Encrypted: " + result.encrypted());
```

The `ChunkProcessor.Builder` configures compression and encryption.
Enables compression with the provider's default level.

```java
builder.compression(CompressionRegistry.zstd())
```

Enables compression with a specific level.

```java
builder.compression(CompressionRegistry.zstd(), 6)
```

Level guidelines:

| Provider | Level Range | Low | Medium | High |
|---|---|---|---|---|
| ZSTD | -7 to 22 | 1-3 | 4-6 | 7-22 |
| LZ4 | 0-16 | 0-1 | 2-9 | 10-16 |

Enables encryption.

```java
SecretKey key = aesProvider.generateKey();
builder.encryption(EncryptionRegistry.aes256Gcm(), key)
```

Creates the processor.
```java
ChunkProcessor processor = builder.build();
```

Compression only:

```java
ChunkProcessor processor = ChunkProcessor.builder()
    .compression(CompressionRegistry.zstd(), 6)
    .build();

// Processing
ProcessedChunk result = processor.processForWrite(data, data.length);
// result.compressed() may be true or false (depends on compressibility)
// result.encrypted() is always false
```

Encryption only:

```java
SecretKey key = EncryptionRegistry.aes256Gcm().generateKey();

ChunkProcessor processor = ChunkProcessor.builder()
    .encryption(EncryptionRegistry.aes256Gcm(), key)
    .build();

// Processing
ProcessedChunk result = processor.processForWrite(data, data.length);
// result.compressed() is always false
// result.encrypted() is always true
```

Compression and encryption:

```java
ChunkProcessor processor = ChunkProcessor.builder()
    .compression(CompressionRegistry.zstd(), 3)
    .encryption(EncryptionRegistry.aes256Gcm(), secretKey)
    .build();

// Writing: compress → encrypt
ProcessedChunk result = processor.processForWrite(data, data.length);

// Reading: decrypt → decompress
byte[] original = processor.processForRead(
    result.data(),
    result.originalSize(),
    result.compressed(),
    result.encrypted()
);
```

Speed-oriented compression:

```java
ChunkProcessor processor = ChunkProcessor.builder()
    .compression(CompressionRegistry.lz4(), 0) // Fast mode
    .build();
```

Ratio-oriented compression:

```java
ChunkProcessor processor = ChunkProcessor.builder()
    .compression(CompressionRegistry.zstd(), 19) // High compression
    .build();
```

ChunkProcessor is typically created from `ApackConfiguration`:
```java
ApackConfiguration config = ApackConfiguration.builder()
    .compression(CompressionRegistry.zstd(), 6)
    .encryption(EncryptionRegistry.aes256Gcm(), secretKey)
    .build();

// Create processor from configuration
ChunkProcessor processor = config.createChunkProcessor();

// Use with reader/writer
try (AetherPackReader reader = AetherPackReader.open(path, processor)) {
    // ...
}
```

The processor only uses compressed data if it is smaller than the original:
```java
// If compression increases size (e.g., JPEG, encrypted data)
ProcessedChunk result = processor.processForWrite(jpegData, jpegData.length);
System.out.println(result.compressed()); // false - original data stored

// If compression reduces size
ProcessedChunk result2 = processor.processForWrite(textData, textData.length);
System.out.println(result2.compressed()); // true - compressed data stored
```

This prevents "negative compression" for incompressible data.
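The same store-only-if-smaller guard can be demonstrated end to end with the JDK's `Deflater` as a stand-in compressor (a sketch of the idea, not the library's actual implementation): the deflated output is accepted only when it is strictly smaller than the input.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.zip.Deflater;

public class StoreSmallerSketch {

    // Returns the compressed bytes, or null when compression did not help.
    static byte[] compressIfSmaller(byte[] in) {
        Deflater d = new Deflater();
        d.setInput(in);
        d.finish();
        // Output buffer the same size as the input: if deflate cannot finish
        // within it, the result would be >= the original and is rejected.
        byte[] buf = new byte[in.length];
        int n = d.deflate(buf);
        boolean helped = d.finished() && n < in.length;
        d.end();
        return helped ? Arrays.copyOf(buf, n) : null;
    }

    public static void main(String[] args) {
        // Highly redundant text compresses well
        byte[] text = "aetherpack ".repeat(1000).getBytes();
        // Random bytes model already-compressed or encrypted data
        byte[] random = new byte[8192];
        new SecureRandom().nextBytes(random);

        System.out.println(compressIfSmaller(text) != null);   // prints: true
        System.out.println(compressIfSmaller(random) != null); // prints: false
    }
}
```

For the incompressible input the stored size would otherwise grow by the compressor's framing overhead, which is exactly the "negative compression" the check avoids.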
Missing provider:

```java
// Processor without encryption
ChunkProcessor processor = ChunkProcessor.passThrough();

// Attempt to read encrypted chunk
try {
    processor.processForRead(data, originalSize, false, true);
} catch (IOException e) {
    // "Data is encrypted but no encryption key provided"
}
```

Wrong key:

```java
// Processor with wrong key
try {
    processor.processForRead(encryptedData, originalSize, false, true);
} catch (IOException e) {
    // "Decryption failed" - wraps GeneralSecurityException
}
```

Corrupted data:

```java
try {
    processor.processForRead(corruptedData, originalSize, true, false);
} catch (IOException e) {
    // Decompression failure from provider
}
```

- `ChunkProcessor` instances are immutable and thread-safe
- Instances can be shared across multiple threads
- Underlying providers may have their own threading requirements
```java
// Safe to share across threads
ChunkProcessor processor = ChunkProcessor.builder()
    .compression(zstdProvider, 6)
    .build();

// Use from multiple threads
ExecutorService executor = Executors.newFixedThreadPool(4);
for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        ProcessedChunk result = processor.processForWrite(data, data.length);
        // ...
    });
}
```

Previous: Entries | Back to: API Overview