# Release Notes

## 0.1.8-SNAPSHOT

### Transport Layer Improvements

- **New TCP Transport Support**: Added TCP as an alternative to UDP for more reliable delivery over unstable networks
  - Configure with `transport=tcp` in upstream and downstream config files
  - Environment variable overrides: `AIRGAP_UPSTREAM_TRANSPORT`, `AIRGAP_DOWNSTREAM_TRANSPORT`
  - Automatic connection health checking with 1ms read deadline test
  - Automatic reconnection on connection loss
  - Lazy connection initialization - upstream doesn't fail on startup if downstream is unavailable
  - See [Transport Configuration.md](doc/Transport%20Configuration.md) for a detailed TCP vs UDP comparison
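
  As a sketch, enabling TCP looks like the following; the `transport` key and the environment variable names come from these notes, while the exact key=value file layout is an assumption:

  ```
  # upstream or downstream config file (key=value layout assumed)
  transport=tcp
  ```

  The same choice can be forced at runtime with `AIRGAP_UPSTREAM_TRANSPORT=tcp` or `AIRGAP_DOWNSTREAM_TRANSPORT=tcp`.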

- **Enhanced Message Delivery Reliability**:
  - Automatic retry logic: 30 attempts with fixed 100ms intervals (3+ seconds total) for transient failures
  - Messages not marked as consumed in Kafka if send fails - automatic reprocessing when receiver comes back up
  - Prevents message loss due to temporary network issues
  - Only marks Kafka message consumed on successful delivery
  - UDP: Transient failures log at WARN level to indicate delivery uncertainty
  - TCP: Automatic reconnection handling with detailed error reporting
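
  The retry policy above (30 attempts at fixed 100ms intervals) can be sketched in Go as follows; `sendWithRetry` and its callback are illustrative names, not the actual implementation:

  ```go
  package main

  import (
  	"errors"
  	"fmt"
  	"time"
  )

  // sendWithRetry retries a transient-failure-prone send up to maxAttempts
  // times with a fixed interval between attempts, mirroring the documented
  // 30 x 100ms policy. The Kafka message would only be marked consumed
  // after this returns nil.
  func sendWithRetry(send func() error, maxAttempts int, interval time.Duration) error {
  	var err error
  	for attempt := 1; attempt <= maxAttempts; attempt++ {
  		if err = send(); err == nil {
  			return nil
  		}
  		time.Sleep(interval)
  	}
  	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
  }

  func main() {
  	calls := 0
  	err := sendWithRetry(func() error {
  		calls++
  		if calls < 3 {
  			return errors.New("transient network failure")
  		}
  		return nil
  	}, 30, time.Millisecond)
  	fmt.Println(err, calls) // <nil> 3
  }
  ```
  
  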

- **Transport Status Monitoring**:
  - New `transport_status` field in periodic statistics showing transport health ("running" or error message)
  - Status change logging: ERROR level when status changes to error, INFO level when restored
  - Transport errors tracked and reported in structured statistics instead of separate log spam
  - Useful for monitoring and alerting on transport issues
  - UDP and TCP use the same monitoring interface

### Kafka Connection Monitoring

- **Kafka Status Tracking**: New `kafka_status` field in statistics for both upstream and downstream
  - Monitors Kafka cluster availability and broker connectivity
  - Upstream: Custom Sarama logger captures metadata errors (leaderless partitions, connection refused, etc.)
  - Downstream: Kafka producer error monitoring via callback mechanism
  - Status changes logged at appropriate levels (ERROR for failures, INFO for recovery)
  - Both consumer and producer errors reported via unified callback system
  - Helps identify cluster issues early before message loss occurs
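
  An illustrative statistics record with the new fields might look as follows; only the `transport_status` and `kafka_status` field names come from these notes, while the surrounding record shape and the values are assumptions:

  ```json
  {
    "interval_seconds": 60,
    "messages_sent": 3500,
    "transport_status": "running",
    "kafka_status": "running"
  }
  ```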

- **Error Handling Improvements**:
  - Kafka consumer no longer panics on connection errors - retries with 5-second backoff instead
  - Producer errors monitored and reported to status system
  - Error messages include context (attempt number, error details) for debugging
  - Health check goroutines with 10-second periodic validation

### Content-Based Filtering

- **New Input Filtering feature**: Content-based filtering of events at upstream before transmission
  - Filter events using regex patterns with allow/deny rules
  - Configure with `inputFilterRules`, `inputFilterDefaultAction`, `inputFilterTimeout` parameters
  - Environment variable overrides: `AIRGAP_UPSTREAM_INPUT_FILTER_*`
  - Useful for security (high-severity events only), privacy (block PII), compliance, and performance
  - First-match-wins rule evaluation with configurable default action (allow/deny)
  - Protection against ReDoS attacks with configurable regex timeout (default 100ms)
  - Dangerous pattern detection at startup (nested quantifiers)
  - New statistics counters: `filtered`, `unfiltered`, `filter_timeouts` per interval
  - Cumulative counters: `total_filtered`, `total_unfiltered`, `total_filter_timeouts`
  - See [InputFilter.md](doc/InputFilter.md) for detailed documentation and use cases
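
  A minimal Go sketch of the first-match-wins evaluation described above; the rule struct, function names, and sample patterns are hypothetical, and the configurable regex timeout enforced by the real feature is omitted here:

  ```go
  package main

  import (
  	"fmt"
  	"regexp"
  )

  // Rule pairs a compiled pattern with an allow/deny action.
  // Names are illustrative, not the actual configuration schema.
  type Rule struct {
  	Pattern *regexp.Regexp
  	Allow   bool
  }

  // evaluate applies first-match-wins semantics: the first rule whose
  // pattern matches the event decides; otherwise the configurable
  // default action (defaultAllow) applies.
  func evaluate(event string, rules []Rule, defaultAllow bool) bool {
  	for _, r := range rules {
  		if r.Pattern.MatchString(event) {
  			return r.Allow
  		}
  	}
  	return defaultAllow
  }

  func main() {
  	rules := []Rule{
  		{regexp.MustCompile(`"severity":"(critical|high)"`), true}, // allow high-severity
  		{regexp.MustCompile(`"ssn":`), false},                      // deny events carrying PII
  	}
  	fmt.Println(evaluate(`{"severity":"high"}`, rules, false)) // true
  	fmt.Println(evaluate(`{"ssn":"123"}`, rules, false))       // false
  	fmt.Println(evaluate(`{"severity":"low"}`, rules, false))  // false (default action)
  }
  ```
  
  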

### Documentation

- Release Notes have been extracted to a separate document
- A FAQ (Frequently Asked Questions) document has been created
- Transport Configuration documentation added with TCP vs UDP comparison, tuning, and troubleshooting
- Input Filtering documentation with examples for security, privacy, and compliance use cases
- Configuration and Monitoring documentation updated with new transport and Kafka status fields

### Internal Improvements

- Protocol parser: Removed panic statements, now returns errors gracefully
- Code formatting: Fixed spacing in message parsing logic
- Kafka adapter: Updated to pass error callbacks for status monitoring
- Configuration: Support for all environment variable overrides (AIRGAP_UPSTREAM_*, AIRGAP_DOWNSTREAM_*)
- Test configurations: Added test cases 19 and 20 for input filtering and TCP testing

## 0.1.7-SNAPSHOT

- Fixed the following issues:
  - Support encrypted key files for communication with Kafka #5

## 0.1.6-SNAPSHOT

- Removed `topic` configuration from downstream. Downstream uses upstream's topic name, or a translation of that name.
- Fixed the following issues:
  - resend will not accept payloadSize=auto #1
  - Separate internal logging from event stream from Upstream Kafka #2
  - Dedup can't use TLS to connect to Kafka #3

## 0.1.5-SNAPSHOT

- Multiple sockets with SO_REUSEPORT for faster and more reliable UDP receive in Linux and Mac for downstream. Fallback to single thread in Windows.
- `create` application to create resend bundle files downstream
- `resend` application to resend missing events from the resend bundle created by `create`
- `compressWhenLengthExceeds` setting for upstream and resend to compress messages when length exceeds this value. As of now gzip is the only supported algorithm.
- More configuration for upstream and downstream for buffer size optimizations
- Upstream and downstream can translate topic names to other names. Useful in multi source and/or target setups.
- Statistics logging in upstream, downstream and dedup

## 0.1.4-SNAPSHOT

- Changed the logging for the Go applications to include log levels. Monitoring and log updates.
- Documented redundancy and load balancing (see doc folder)
- Documented resend (future updates will implement the new resend algorithm)

## 0.1.3-SNAPSHOT

- Added a Kafka Streams Java application for deduplication and gap detection. Gap detection is not finished.
- Added an upstream filter to filter on the offset number for each partition (used in redundancy and load balancing setups)
- Added a topic name mapping in downstream so a topic with a specified name upstream can be written to another topic downstream (used in redundancy and load balancing setups)
- Added documentation for the new features.
- Added JMX monitoring of the deduplication application. Added system monitoring documentation.

## 0.1.2-SNAPSHOT

- All configuration from files can be overridden by environment variables. See Configuration Upstream.
- UDP sending has been made more robust
- Transfer of binary data from upstream to downstream is now supported
- Sending a SIGHUP to upstream or downstream will now force a re-open of the log file, so you can rotate the log file and then send SIGHUP to the application to make it log to a new file with the name specified in the upstream or downstream configuration.
- air-gap now supports TLS and mTLS to Kafka upstream and downstream.

## 0.1.1-SNAPSHOT

- air-gap now supports several sending threads, each with a specified time offset, so you can start one thread that consumes everything from Kafka as soon as it's available, another that inspects Kafka content added an hour ago, and so on. See Automatic resend above.