A CLI tool for MySQL binlog analysis, designed to help DBAs quickly identify hot tables, large transactions, write spikes, and workload patterns from local ROW binlog files.
BinlogViz answers critical operational questions:
- Which tables have the heaviest writes?
- Are there abnormally large transactions?
- Did write spikes occur at specific minutes?
- What does the workload summary look like for a given time window?
Download the release archive for your platform from GitHub Releases, verify the checksum, and move the binary onto your PATH.
The authoritative release artifacts are produced by the GitHub Actions release workflow on native runners. Local goreleaser is only intended for config checks and optional current-host validation.
Example for darwin/arm64 and the current Phase 2 release v0.2.2:

```bash
curl -fsSLO https://github.com/Fanduzi/BinlogVisualizer/releases/download/v0.2.2/binlogviz_0.2.2_darwin_arm64.tar.gz
curl -fsSLO https://github.com/Fanduzi/BinlogVisualizer/releases/download/v0.2.2/binlogviz_0.2.2_checksums.txt
shasum -a 256 -c binlogviz_0.2.2_checksums.txt 2>/dev/null | grep "binlogviz_0.2.2_darwin_arm64.tar.gz: OK"
tar -xzf binlogviz_0.2.2_darwin_arm64.tar.gz
install ./binlogviz /usr/local/bin/binlogviz
```

Or use the included install helper:

```bash
curl -fsSL https://raw.githubusercontent.com/Fanduzi/BinlogVisualizer/main/install.sh | sh -s -- --version v0.2.2
```

To preview the resolved artifact without downloading:

```bash
./install.sh --version v0.2.2 --dry-run
```

To build from source:

```bash
git clone <repository-url>
cd BinlogVisualizer

# Build locally
go build -o binlogviz .

# Or install into GOPATH/bin
go install .

# Or run directly
go run . analyze <binlog files...>
```

Basic usage:

```bash
# Analyze a single binlog file
binlogviz analyze mysql-bin.000123

# Analyze multiple files
binlogviz analyze mysql-bin.000123 mysql-bin.000124

# Use shell expansion for multiple files
binlogviz analyze mysql-bin.*
```

```bash
# Analyze a specific time range (RFC3339 format)
binlogviz analyze mysql-bin.* \
  --start "2026-03-15T10:00:00Z" \
  --end "2026-03-15T10:30:00Z"

# JSON output for scripting or further processing
binlogviz analyze mysql-bin.* --json

# Adjust number of top items shown
binlogviz analyze mysql-bin.* --top-tables 20 --top-transactions 20

# Enable spike detection
binlogviz analyze mysql-bin.* --detect-spikes

# Customize large transaction thresholds
binlogviz analyze mysql-bin.* \
  --large-trx-rows 5000 \
  --large-trx-duration 60s
```

| Flag | Default | Description |
|---|---|---|
| `--start` | (none) | Start time (inclusive, RFC3339 format) |
| `--end` | (none) | End time (inclusive, RFC3339 format) |
| `--json` | false | Output in JSON format |
| `--sql-context` | summary | SQL context presentation mode: summary, off, or full |
| `--top-tables` | 10 | Number of top tables to show |
| `--top-transactions` | 10 | Number of top transactions to show |
| `--detect-spikes` | false | Enable write spike detection |
| `--large-trx-rows` | 1000 | Rows threshold for large transaction alerts |
| `--large-trx-duration` | 30s | Duration threshold for large transaction alerts |
The output contains five sections:
Overall statistics for the analyzed time window:
- Total transactions count
- Total rows affected
- Total events processed
- Time range and duration
Tables ranked by total rows affected, showing:
- Schema and table name
- Total row count
- Breakdown by operation (INSERT/UPDATE/DELETE)
- Number of distinct transactions touching the table
Largest transactions ranked by total rows, showing:
- Transaction identifier
- Row count and duration
- Event count
Per-minute breakdown of write activity:
- Rows written per minute
- Transaction count per minute
Detected anomalies including:
- Large Transaction: Transactions exceeding row or duration thresholds
- Write Spike: Minutes with abnormally high write activity (when --detect-spikes is enabled)
See example outputs in the repository.

Requirements:
- MySQL ROW-format binlog files
- Go 1.26.1+ (for building)
BinlogViz is designed for MVP efficiency and has the following characteristics:
The current implementation uses a streaming command path with DuckDB-backed finalize-time result assembly:
- Parser: Streams raw binlog events via callbacks
- Command Layer: Immediately normalizes and forwards events to analyzer.Consume
- Analyzer: Keeps bounded live state in memory
- DuckDB Temp Store: Persists completed high-cardinality results for Finalize()
- Renderer: Outputs the final assembled report
From benchmarks on Apple M4 Pro:
| Input Size | Time/op | Memory/op | Allocs/op |
|---|---|---|---|
| 1 event | ~1μs | 2.5 KB | 32 |
| 100 events | ~40μs | 55 KB | 756 |
| 1000 events | ~492μs | 665 KB | 7.1K |
| 100 tables | ~41μs | 55 KB | 756 |
| 10 transactions | ~245ns | 469 B | 12 |
For large binlog files:
- Prefer analyzing ordered binlog ranges directly; the command path is already streaming.
- Ensure sufficient disk space for the temporary DuckDB result store used during analysis.
- ROW binlog only: STATEMENT and MIXED formats are not supported in MVP
- Local files only: Cannot connect to MySQL servers directly
- No real-time streaming: Analysis is performed on static files
- Bounded SQL context only: When binlog input includes Rows_query_log_event, BinlogViz can show bounded SQL context via --sql-context summary|full, but it does not support SQL replay or full statement reconstruction
- No row values: Focuses on operation patterns, not data content
BinlogViz is intentionally not:
- A replication debugger
- A SQL replayer
- A real-time monitoring tool
- A Prometheus exporter
- A web-based dashboard
- An AI-powered anomaly detector
BinlogViz uses a single-pass streaming analysis pipeline:
binlog files → parser → normalizer → analyzer → renderer → output
Components:
- Parser: Wraps go-mysql-org/go-mysql/replication for binlog parsing
- Normalizer: Converts parser events to stable internal format
- Analyzer: Reconstructs transactions, aggregates tables/minutes, detects alerts
- Renderer: Produces text or JSON output
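The single-pass flow above can be sketched in miniature: each parsed event is normalized and consumed immediately, and the renderer only sees the final aggregated state. All type and function names here (`rawEvent`, `normalize`, `analyzer`, `render`) are hypothetical stand-ins, not BinlogViz's real internals:

```go
package main

import "fmt"

// rawEvent and event are simplified stand-ins for parser output and the
// stable internal format produced by the normalizer.
type rawEvent struct{ table string }
type event struct{ table string }

func normalize(r rawEvent) event { return event{table: r.table} }

// analyzer accumulates per-table counts as events stream through,
// keeping only bounded aggregate state in memory.
type analyzer struct{ perTable map[string]int }

func (a *analyzer) consume(e event) { a.perTable[e.table]++ }

// render produces the final output from the aggregated state.
func render(a *analyzer) string { return fmt.Sprint(a.perTable) }

func main() {
	// parser → normalizer → analyzer → renderer, one pass over the events.
	parsed := []rawEvent{{"db.users"}, {"db.orders"}, {"db.users"}}
	a := &analyzer{perTable: make(map[string]int)}
	for _, r := range parsed {
		a.consume(normalize(r)) // normalize and consume each event immediately
	}
	fmt.Println(render(a))
}
```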
MIT