
[skip-ci] docs: Add draft release notes for ProxySQL 3.0.6 / 4.0.6 (#5419)

Open
renecannao wants to merge 95 commits into v3.0 from release-notes-3.0.6-4.0.6-draft

Conversation


@renecannao renecannao commented Feb 23, 2026

Summary

Add comprehensive draft release notes covering all changes since commit 7e9e00997d7d9fa4811c86c3a3bec9c886386e1f (725+ commits).

Version Scheme

ProxySQL 3.0.x and 4.0.x share the same codebase:

  • 3.0.6: Core proxy functionality (MySQL, PostgreSQL, routing, caching, monitoring)
  • 4.0.6: All 3.0.x features + GenAI/MCP/RAG capabilities (build with PROXYSQLGENAI=1)

Files Added

  • doc/release-notes/DRAFT-3.0.6-4.0.6.md: Main release notes with version-specific sections
  • doc/release-notes/CHANGELOG-3.0.6-4.0.6-detailed.md: All 725 commits with details
  • doc/release-notes/CHANGELOG-3.0.6-4.0.6-commits.md: Commits categorized by type/component
  • doc/release-notes/README.md: Workflow documentation

Major Features Documented

Both Versions (3.0.6 & 4.0.6)

  • FFTO (Fast Forward Traffic Observer): Real-time traffic inspection for MySQL and PostgreSQL
  • TSDB (Time Series Database): Historical metrics storage with REST API and UI
  • Noise Testing Framework: Realistic load testing infrastructure
  • Query Processor Enhancements: Optimizer hints handling, comment parsing

4.0.6 Only (GenAI Build)

  • MCP Server: Model Context Protocol for AI agent integration
  • GenAI Integration: LLM-powered query analysis and NL2SQL
  • RAG Capabilities: Vector search, embeddings, document ingestion

Status

This is a draft PR. The release notes will be updated as more commits are merged. Final review will be done before the actual release.

Related

Test plan

  • Review release notes for accuracy
  • Verify all commit references
  • Check version-specific feature categorization
  • Update as needed before release

Summary by CodeRabbit

  • New Features

    • Fast Forward Traffic Observer (FFTO) for MySQL and PostgreSQL with passive traffic monitoring and performance metrics collection
    • Stats MCP endpoint replacing observe endpoint with enhanced query digest, processlist, and system metrics tools
    • In-memory processlist filtering, sorting, and pagination capabilities
    • Top-K query digest retrieval with multi-criteria sorting and filtering
    • Noise injection testing framework for concurrent workload generation
  • Bug Fixes

    • Added null-pointer safety check in connection termination logic
  • Configuration

    • New FFTO configuration variables for enabling and controlling buffer sizes
    • New stats and AI endpoint authentication variables
    • Query processor first-comment parsing configuration option

wazir-ahmed and others added 30 commits February 4, 2026 14:40
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
…entation

Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Signed-off-by: Wazir Ahmed <wazir@proxysql.com>
Signed-off-by: René Cannaò <rene@proxysql.com>
- Added TrafficObserver base interface
- Implemented MySQLFFTO and PgSQLFFTO classes for protocol-aware observation
- Integrated FFTO hooks into MySQL and PostgreSQL session handlers in FAST_FORWARD state
- Added configuration variables: mysql-ffto_enabled, mysql-ffto_max_buffer_size, pgsql-ffto_enabled, pgsql-ffto_max_buffer_size
- Implemented query digest extraction and reporting parity with internal stats
- Added session-level bypass when packet size exceeds buffer threshold
- Included architectural documentation in doc/ffto_design.md
- Implemented statement ID tracking for MySQL COM_STMT_PREPARE/EXECUTE
- Implemented Parse/Bind/Execute portal mapping for PostgreSQL Extended Query
- Corrected documentation regarding binary protocol support
- Ensured metric parity for prepared statements in Fast Forward mode
- Added comprehensive MySQL FFTO test (text and binary protocols)
- Added comprehensive PgSQL FFTO test (Simple and Extended Query protocols)
- Added FFTO memory bypass logic test
- Added bash orchestration scripts for each test group
- Integrated new tests into the TAP Makefile
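The observer hooks listed above can be pictured with a small sketch. The class and callback names (TrafficObserver, on_client_data, on_server_data, on_close) come from this PR's description, but the signatures and the CountingObserver helper are assumptions for illustration only, not ProxySQL's actual declarations:

```cpp
#include <cstddef>

// Hypothetical sketch of the TrafficObserver base interface: protocol-aware
// observers (MySQLFFTO, PgSQLFFTO) receive raw bytes from both directions
// of a fast-forwarded session plus a close notification.
class TrafficObserver {
public:
    virtual ~TrafficObserver() = default;
    // Bytes flowing client -> server.
    virtual void on_client_data(const unsigned char* data, size_t len) = 0;
    // Bytes flowing server -> client.
    virtual void on_server_data(const unsigned char* data, size_t len) = 0;
    // Observed session is closing; flush any pending state.
    virtual void on_close() = 0;
};

// Minimal concrete observer used only to illustrate the callback flow.
class CountingObserver : public TrafficObserver {
public:
    size_t client_bytes = 0;
    size_t server_bytes = 0;
    bool closed = false;
    void on_client_data(const unsigned char*, size_t len) override { client_bytes += len; }
    void on_server_data(const unsigned char*, size_t len) override { server_bytes += len; }
    void on_close() override { closed = true; }
};
```

A session handler in FAST_FORWARD state would invoke these callbacks as packets pass through, without altering the traffic itself.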
… scripts

Following the CI convention where all binaries ending in -t in test/tap/tests/ are automatically executed, the subdirectories and run.sh scripts are no longer needed.
- Implemented MySQL OK_Packet parsing to extract affected_rows
- Implemented PostgreSQL CommandComplete parsing to extract rows affected/sent
- Updated TAP tests to validate sum_rows_affected and sum_rows_sent metrics
- Fixed incorrect assertion in test_ffto_mysql-t.cpp
- Resolved signature mismatch in report_query_stats
- Fixed missing type declarations for Query_Processor in FFTO classes
- Verified Metric Parity for affected_rows and rows_sent
- Confirmed full coverage in TAP tests for text/binary protocols and bypass logic
- Verified clean build with make debug
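To illustrate the CommandComplete parsing mentioned above: PostgreSQL reports row counts as a trailing integer in the command tag ("SELECT 5", "UPDATE 3", "INSERT 0 5"). The helper below is a simplified sketch of that convention; the real parser in this PR also validates message lengths, which is omitted here:

```cpp
#include <string>

// Extract the affected/returned row count from a PostgreSQL CommandComplete
// tag. Returns -1 when the tag carries no count (e.g. "BEGIN", "SET").
// For "INSERT oid rows" the row count is the last integer, as required.
long long rows_from_command_complete(const std::string& tag) {
    size_t end = tag.find_last_not_of(' ');
    if (end == std::string::npos) return -1;
    size_t start = end;
    while (start > 0 && tag[start - 1] >= '0' && tag[start - 1] <= '9') --start;
    if (tag[start] < '0' || tag[start] > '9') return -1;
    // Require a space before the digits so bare keywords are rejected.
    if (start == 0 || tag[start - 1] != ' ') return -1;
    return std::stoll(tag.substr(start, end - start + 1));
}
```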
Problem
MCP stats tools were querying stats.* tables directly through admindb.
Unlike admin-session SQL, that path bypassed GenericRefreshStatistics() and could
return stale or empty data for runtime-populated tables.

What changed
- Updated Stats_Tool_Handler::execute_admin_query() to mirror admin-session
  semantics for stats reads:
  - acquire GloAdmin->sql_query_global_mutex
  - optionally invoke ProxySQL_Admin::GenericRefreshStatistics(sql, ..., false)
  - execute admindb statement
  - release sql_query_global_mutex
- Added strict input validation and explicit lock/unlock error reporting.
- Added refresh_before_query parameter (default true) to avoid duplicate refresh
  passes for secondary count queries.
- Switched secondary COUNT(*) calls in show_processlist/show_queries/
  show_errors/show_query_rules to refresh_before_query=false.

Documentation
- Expanded Doxygen in include/Stats_Tool_Handler.h for execute_admin_query()
  to document locking, refresh behavior, and performance tradeoff.
- Added detailed Doxygen above the execute_admin_query() implementation in
  lib/Stats_Tool_Handler.cpp.

Testing
- Added TAP integration test test/tap/tests/mcp_stats_refresh-t.cpp.
- The test injects a synthetic stale marker row into stats.stats_mysql_global,
  calls /mcp/stats show_status, and verifies the marker disappears after
  refresh-before-read.
- Added helper-level Doxygen in the test for setup/parsing behavior.

Build verification
- Compiled lib object: make -C lib obj/Stats_Tool_Handler.oo
- Compiled TAP target: make -C test/tap/tests mcp_stats_refresh-t
- Direct runtime execution of the TAP test in this workspace could not fully run
  because no local admin listener was available (connection to 127.0.0.1:6032 failed).
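The lock/refresh/query sequence above can be modeled with a small sketch. All names here are stand-ins, not ProxySQL's actual API: `refresh_stats` models GenericRefreshStatistics() and `run_query` models admindb execution, both guarded by a single global mutex as in admin sessions:

```cpp
#include <functional>
#include <mutex>
#include <string>

// Illustrative model of the refresh-before-read pattern: stats reads
// acquire the global query mutex, optionally refresh runtime-populated
// tables, then execute against the (now fresh) snapshot.
struct StatsReader {
    std::mutex sql_query_global_mutex;  // models GloAdmin's global query mutex
    std::function<void(const std::string&)> refresh_stats;
    std::function<int(const std::string&)> run_query;

    // refresh_before_query=false lets a secondary COUNT(*) query reuse the
    // snapshot produced by the primary query's refresh pass.
    int execute_admin_query(const std::string& sql, bool refresh_before_query = true) {
        std::lock_guard<std::mutex> guard(sql_query_global_mutex);
        if (refresh_before_query) refresh_stats(sql);
        return run_query(sql);
    }
};
```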
Replace stats table reads in show_queries with Query Processor in-memory aggregation to avoid stale/empty admin-db snapshots.

Introduce Top-K digest filtering primitives and wire them into Stats_Tool_Handler so filtering/sorting/limit are executed against live digest memory.

Add runtime MCP variable mcp-stats_show_queries_max_rows with validation, defaulting, and configuration plumbing, plus cap metadata in the tool response.

Enforce configurable hard cap semantics while preserving deterministic ordering and pagination behavior for MCP clients.

Add internal DEBUG validator path (PROXYSQLTEST 56) in admin tests to stress generated digest data and validate the Top-K/filter pipeline against in-memory state.

Keep the validator callable from startup/testing workflows without depending on MCP linkage.

Add TAP coverage in test/tap/tests/mcp_show_queries_topk-t.cpp for:
- MCP endpoint setup and reachability
- digest generation + optional DEBUG internal validator
- payload validation and cap metadata assertions
- descending count order validation
- match_digest_text filter verification

Make payload parsing backward-compatible in TAP (direct tool result object and legacy wrapped success/result payloads).
…andling

- Finalized MySQL and PostgreSQL protocol state machines.
- Implemented accurate affected_rows and rows_sent capture.
- Added defensive null checks to prevent early-session crashes.
- Enhanced TAP tests with result-set draining and success verification.
- Verified 100% pass rate for MySQL CRUD, binary protocol, and memory bypass.
…stress TAP

Replace direct stats-schema reads in MCP show_processlist with in-memory processlist access via SQL3_Processlist query options.

Add processlist query option types in proxysql_admin interfaces and implement filtering, sorting, and pagination post-processing in both MySQL and PgSQL thread implementations.

Introduce configurable MCP cap mcp-stats_show_processlist_max_rows (default 200, max 1000), wire it through MCP variable loading, and expose it in default config.

Expand Stats_Tool_Handler processlist handling to parse filter/sort arguments, enforce cap metadata, and return consistent MCP payload fields from in-memory snapshots.

Add extensive doxygen documentation in the touched headers and source blocks to describe API contracts, filtering behavior, and runtime constraints.

Add new TAP test mcp_pgsql_concurrency_stress-t that generates sustained concurrent PgSQL traffic (simple reads, read/write table workload, randomized pg_sleep) while polling MCP show_processlist and show_queries in parallel with filter and ordering assertions.

Validation performed locally: mcp_pgsql_concurrency_stress-t (31 assertions) and mcp_show_queries_topk-t (12 assertions).
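The filter/sort/paginate post-processing described above can be sketched against a plain snapshot. The Entry fields and option names here are illustrative, not ProxySQL's actual SQL3_Processlist types:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Entry {
    std::string user;
    std::string command;
    long time_ms;
};

// Filter by user, sort longest-running first with a deterministic
// tie-break, then apply offset/limit pagination, in that order.
std::vector<Entry> query_processlist(std::vector<Entry> snapshot,
                                     const std::string& user_filter,
                                     size_t limit, size_t offset) {
    if (!user_filter.empty()) {
        snapshot.erase(std::remove_if(snapshot.begin(), snapshot.end(),
            [&](const Entry& e) { return e.user != user_filter; }),
            snapshot.end());
    }
    std::sort(snapshot.begin(), snapshot.end(), [](const Entry& a, const Entry& b) {
        if (a.time_ms != b.time_ms) return a.time_ms > b.time_ms;
        return a.user < b.user;  // deterministic ordering for equal times
    });
    if (offset >= snapshot.size()) return {};
    snapshot.erase(snapshot.begin(), snapshot.begin() + offset);
    if (snapshot.size() > limit) snapshot.resize(limit);
    return snapshot;
}
```

Running filtering and sorting before pagination keeps results deterministic across repeated polls, which is what the cap-metadata assertions in the stress TAPs rely on.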
… stress test

Add a new MySQL-focused TAP workload (mcp_mysql_concurrency_stress-t) that mirrors the PgSQL stress model and continuously generates mixed traffic through ProxySQL while MCP stats is queried in parallel.

The MySQL workload includes simple reads, read/write table traffic, and randomized sleep queries, while concurrent MCP pollers validate show_processlist and show_queries behavior for sorting, filtering, metadata, and final consistency checks.

Add a second TAP workload (mcp_mixed_mysql_pgsql_concurrency_stress-t) that drives MySQL and PgSQL traffic simultaneously, then polls MCP for both protocols in parallel to validate cross-protocol processlist and query-digest stability under mixed load.

Both tests create/drop their own workload tables, configure/restore MCP runtime settings, and expose deterministic TAP assertions on payload shape, cap metadata, filter correctness, ordering guarantees, and endpoint reachability.

To reduce false negatives under high concurrency, workload execution checks use a bounded error budget tied to observed traffic volume rather than requiring absolute zero transient query failures.

These TAPs can also be used as practical stress/load demonstrations because they run sustained concurrent workers and MCP poll loops with protocol-specific filtering.
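The bounded error budget mentioned above can be sketched as a simple helper. The 1% ratio and the floor of 3 are illustrative choices, not values taken from the tests:

```cpp
#include <cstddef>

// Allow a small fraction of the observed traffic volume to fail
// transiently instead of demanding absolute zero failures, so
// high-concurrency runs do not produce false negatives.
bool within_error_budget(size_t errors, size_t total_queries) {
    size_t budget = total_queries / 100;  // 1% of observed volume
    if (budget < 3) budget = 3;           // floor so tiny runs are not brittle
    return errors <= budget;
}
```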
… stress configurability

Enhance mcp_mixed_mysql_pgsql_concurrency_stress-t with environment-driven load parameters so it can be reused as a quick demo, sustained stress run, or heavier load scenario without source edits.

Add optional live MCP cap churn support that updates mcp-stats_show_processlist_max_rows and mcp-stats_show_queries_max_rows during active mixed MySQL+PgSQL traffic and concurrent MCP polling.

Generalize cap-metadata assertions in processlist/show_queries pollers to support both fixed-cap and churned-cap modes through accepted cap profiles.

Add mcp_mixed_stats_profile_matrix-t as an orchestrator TAP that executes multiple mixed-load profiles (quick, churn, heavy) and validates successful completion of each run.

Add mcp_mixed_stats_cap_churn-t as a focused orchestrator TAP for aggressive cap-churn scenarios under mixed protocol traffic.

Both orchestrator TAPs isolate child output to per-run log files, preserve parent TAP stream integrity, and emit diagnostic log tails on failures for easier triage.

Compilation and runtime validation performed locally before commit: enhanced mixed stress TAP plus both new orchestrator TAPs passed.
Refactor MCP stats connection tools so operational pool metrics remain lightweight while debug-level free-connection details are exposed through a dedicated gated tool.

Changes in this commit:

- Updated stats tool catalog and dispatch to add show_free_connections and keep show_connections focused on per-server pool metrics only.

- Removed free-connection payload from show_connections and added explicit compatibility error when callers still pass detail=true, with guidance to use show_free_connections.

- Implemented show_free_connections using in-memory hostgroup manager snapshots (MySQL and PgSQL) with hostgroup/server filtering and summary counters.

- Added MCP runtime variable mcp-stats_enable_debug_tools (default false), including variable registration, getter/setter handling, and config default in proxysql.cfg.

- Added extensive doxygen comments across modified code paths to document behavior, rationale, filters, and output contracts.

- Added TAP coverage in mcp_show_connections_commands_inmemory-t for: show_commands baseline, show_connections aggregate-only contract, debug tool gating behavior, and enabled-path validation for show_free_connections on both MySQL and PgSQL.

- Registered the new TAP test in test groups.
Summary of improvements based on PR reviews:

1. Performance Optimization:
   - Implemented read-offset indices in MySQLFFTO and PgSQLFFTO to replace linear std::vector::erase calls, reducing per-packet processing from O(N) to O(1).

2. Memory & Resource Safety:
   - Added explicit handlers for COM_STMT_CLOSE (MySQL) and Close (PostgreSQL) messages to correctly clear internal statement/portal maps and prevent memory leaks.
   - Enforced *-ffto_max_buffer_size checks on the server-to-client data path, ensuring large responses correctly trigger FFTO bypass.
   - Improved PostgreSQL message parsing with strnlen and msg_len validation to prevent buffer over-reads and underflows.
   - Updated MySQL_Session::reset and PgSQL_Session::reset to clear FFTO state during session re-use.

3. Protocol Correctness & Robustness:
   - Fixed MySQL CLIENT_DEPRECATE_EOF detection to use the correct 0xFE terminator logic.
   - Implemented PostgreSQL metric accumulation across multiple CommandComplete responses within a single query cycle.
   - Optimized PostgreSQL row extraction using a static local regex with an improved non-greedy pattern.
   - Stripped trailing NUL bytes from PostgreSQL Simple Query payloads for accurate digest generation.

4. Code Quality & Test Hardening:
   - Replaced unsafe sprintf calls with snprintf in all new TAP tests.
   - Stabilized TAP plans by ensuring a constant number of ok() assertions across all success and failure paths.
   - Hardened the bypass test to verify total digest counts.
   - Applied modern C++ standards: replaced <stddef.h> with <cstddef>, used = default for virtual destructors, and added override keywords.
   - Fixed indentation and removed temporary debug log artifacts from production code.
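The read-offset optimization in point 1 can be sketched as follows. The class name, the 4096-byte threshold, and the lazy compaction heuristic are illustrative, not the PR's actual implementation:

```cpp
#include <cstddef>
#include <vector>

// Instead of erasing consumed bytes from the front of the buffer
// (std::vector::erase is O(N) per packet), advance a read offset and
// compact only occasionally, making per-packet consumption O(1).
class ObserverBuffer {
    std::vector<unsigned char> buf_;
    size_t read_off_ = 0;  // bytes already consumed
public:
    void append(const unsigned char* data, size_t len) {
        buf_.insert(buf_.end(), data, data + len);
    }
    size_t available() const { return buf_.size() - read_off_; }
    const unsigned char* data() const { return buf_.data() + read_off_; }
    void consume(size_t n) {
        read_off_ += n;  // O(1) instead of vector::erase
        // Compact lazily once the consumed prefix dominates the buffer.
        if (read_off_ > 4096 && read_off_ * 2 > buf_.size()) {
            buf_.erase(buf_.begin(), buf_.begin() + read_off_);
            read_off_ = 0;
        }
    }
};
```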
Rework stats.show_users so it no longer queries runtime-populated stats tables via admindb. The tool now reads user connection counters directly from authentication runtime structures, matching the direction taken for other MCP stats tools that must avoid stale stats schema reads.

Implementation details:

- Replaced show_users SQL path over stats_mysql_users/stats_pgsql_users with in-memory collection from GloMyAuth::dump_all_users(..., false) and GloPgAuth::dump_all_users(..., false).

- Preserved Admin semantics by excluding internal/admin-style accounts (default_hostgroup < 0).

- For MySQL, included LDAP user counters from GloMyLdapAuth->dump_all_users() to keep parity with stats___mysql_users population behavior.

- Added deterministic argument handling for db_type validation and pagination bounds (limit capped to 1000, offset clamped to >= 0).

- Kept output contract unchanged (username, frontend_connections, frontend_max_connections, utilization_pct, status) and ordering by frontend_connections DESC then username ASC.

Documentation and tests:

- Added extensive doxygen comments describing in-memory data sources, filtering semantics, and result construction in show_users implementation.

- Extended TAP test mcp_show_connections_commands_inmemory-t with show_users coverage for MySQL and PgSQL, including payload shape checks and username filter validation.

- Updated TAP plan count accordingly.

- Included current tap groups updates present in working tree (test_ffto_* entries in groups.json), as requested.
MCP server may need a moment to start after LOAD MCP VARIABLES TO RUNTIME.
Added a retry loop that waits up to 3 seconds (30 retries * 100ms) for the
MCP server to become reachable before failing the test.
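The retry loop described above (30 retries * 100ms = 3 seconds) can be sketched generically. `probe` is a stand-in for the test's actual MCP connection attempt:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll a reachability probe up to max_retries times with a fixed delay
// before giving up; returns true as soon as the probe succeeds.
bool wait_for_mcp_server(const std::function<bool()>& probe,
                         int max_retries = 30,
                         std::chrono::milliseconds delay = std::chrono::milliseconds(100)) {
    for (int i = 0; i < max_retries; ++i) {
        if (probe()) return true;           // server reachable
        std::this_thread::sleep_for(delay); // back off before retrying
    }
    return false;                           // gave up after max_retries * delay
}
```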
The MCP variables were not being exposed in runtime_global_variables table
after LOAD MCP VARIABLES TO RUNTIME. This fix:

1. Adds runtime_global_variables handling to flush_mcp_variables___runtime_to_database:
   - Clears existing mcp-* entries from runtime_global_variables when runtime=true
   - Inserts current runtime values into runtime_global_variables

2. Calls flush_mcp_variables___runtime_to_database with runtime=true from
   flush_mcp_variables___database_to_runtime to ensure the runtime table
   is populated after variables are loaded to runtime.

This matches the behavior of other variable modules (admin, ldap, mysql, pgsql).
The runtime_global_variables table may not exist during early
initialization, so we should only prepare the statement for it
when the runtime flag is true. This fixes a crash on startup
when init_mcp_variables() is called before the admin database
schema is fully initialized.
New TAP test that verifies MCP variables are correctly populated into
runtime_global_variables after LOAD MCP VARIABLES TO RUNTIME:

1. Verifies runtime_global_variables contains at least 10 MCP variables
2. Changes multiple variables (timeout_ms, queries_max, processlist_max)
3. Verifies changed values are reflected in runtime_global_variables
4. Verifies runtime values match global_variables
Simplified the code to use db->execute() instead of prepared statements
for runtime_global_variables inserts. This avoids complex unique_ptr
deleter type matching issues and the table not existing during early
initialization.
Use the legacy db->prepare_v2(const char*, sqlite3_stmt**) overload
for the runtime_global_variables statement, which allows conditional
preparation only when the runtime flag is set. Properly finalize
the statement when done.
The test was using get_mcp_variable() but it wasn't defined. Added the
helper function that queries @@mcp-{var_name} to get variable values.
These variables were deprecated in favor of profile-based routing but
were never removed from the code. They existed in the struct definition,
getters, and setters but were not consumed anywhere - Query_Tool_Handler
uses target_auth_map via mcp_auth_profiles and mcp_target_profiles.

Removed:
- mcp-mysql_hosts, mcp-mysql_ports, mcp-mysql_user, mcp-mysql_password,
  mcp-mysql_schema from MCP_Thread.h struct
- Variable names from mcp_thread_variables_names array
- Initialization, cleanup, getters, and setters in MCP_Thread.cpp
- Documentation in doc/MCP/VARIABLES.md
MCPClient changes:
- Added use_ssl_ member and set_use_ssl(bool) method
- When SSL is enabled, uses https:// and disables cert verification
- set_host() and set_port() now respect the use_ssl_ flag

mcp_stats_refresh-t test fixes:
- Completely rewrote test - original tried to INSERT into read-only table
- New test: query Client_Connections_connected, create connections, verify count increases
- Try both HTTP and HTTPS when connecting to MCP server
- Fixed payload parsing to handle actual response format (variables directly in payload)
- Handle both lowercase (variable_name/value) and uppercase field names
- Added verbose diagnostics for debugging
Add 2-10 line descriptive messages at startup for all mcp_*-t.cpp test
files explaining what each test validates. This improves test output
readability and helps developers understand test purpose at a glance.
Three issues fixed:
1. Remove references to non-existent mcp-catalog_path variable
2. Lower expected MCP variable count from 15 to 10 (current count is 14)
3. Add initialization to reset variables to defaults before testing
   to ensure consistent state regardless of previous test runs
Comment out the WITH clause (CTE) test since MySQL 5.7 doesn't support
common table expressions. Reduced plan count from 20 to 19.
…test

When show_free_connections is disabled, the tool returns an error which
causes is_success() to return false. The test should check for transport
success (!is_transport_error()) rather than overall success, since we
expect a tool error response but the transport layer should work.
Print the first 5 queries with count, rows, and digest_text for visual
confirmation that the show_queries MCP tool is returning real data.
…_topk-t

Generate more digest entries for better visual confirmation of the
show_queries Top-K pipeline functionality.
Add multi-line descriptive messages at startup for:
- test_mcp_claude_headless_flow-t.sh
- test_mcp_llm_discovery_phaseb-t.sh
- test_mcp_query_rules-t.sh
- test_mcp_rag_metrics-t.sh
- test_mcp_static_harvest-t.sh

These messages explain what each test validates, improving output
readability and helping developers understand test purpose.
Three issues fixed:
1. Use TRUNCATE TABLE instead of DELETE for stats_pgsql_query_digest
   (DELETE doesn't actually clear the stats table)
2. Remove DROP TABLE digest verification - DDL statements are not
   tracked in stats_pgsql_query_digest
3. Fix SELECT digest pattern - simple query uses ? not $1

Reduced plan count from 22 to 19 to match remaining tests.
Add multi-line descriptive messages at startup for:
- test_ffto_bypass-t.cpp: Tests FFTO bypass for large queries
- test_ffto_mysql-t.cpp: Tests FFTO for MySQL connections
- test_ffto_pgsql-t.cpp: Tests FFTO for PostgreSQL connections

Explains FFTO (Fast Forward Traffic Observer) purpose and what
each test validates regarding query digest tracking.
Two fixes:
1. Use 'hostgroup' instead of 'destination_hostgroup' column
   (stats_mysql_query_digest doesn't have destination_hostgroup)
2. Properly consume query results with mysql_store_result/free_result
   to avoid "Commands out of sync" errors

Tests 2 and 3 still fail due to comment parsing feature behavior,
but these infrastructure fixes allow the test to run correctly.
The mysql-query_processor_first_comment_parsing modes 1 and 3 appear
not to work correctly: comments are not parsed before rules are
applied even when configured to do so.

- Added hostgroup 1000 setup/cleanup for proper test environment
- Skip tests 2 and 3 with TODO comments explaining the issue
- Keep test 1 which validates default behavior (mode 2)

The underlying feature issue should be investigated separately.
- Fixed column name: destination_hostgroup -> hostgroup
- Skip tests 2 and 3 with TODO comments (same feature regression
  as mysql version - pgsql-query_processor_first_comment_parsing
  modes 1 and 3 not working correctly)
The query tokenizers for both MySQL and PostgreSQL did not correctly
handle optimizer hint comments in the format /*+ ... */. When parsing
queries like `/*+ hostgroup=1000 */ SELECT 1`, the '+' character was
incorrectly included in the extracted first comment content, resulting
in the parsed key being '+hostgroup' instead of 'hostgroup'. This caused
the query_processor_first_comment_parsing variable (modes 1 and 3) to
not work correctly when using optimizer hint syntax.

Changes:
- c_tokenizer.cpp: Detect both /*! and /*+ comment formats
- pgsql_tokenizer.cpp: Detect /*+ comment format
- issue5384-t.cpp: Re-enable tests 2 and 3 (previously skipped)
- pgsql-issue5384-t.cpp: Re-enable tests 2 and 3, add hostgroup 1000 setup

Fixes #5413 (MySQL tokenizer)
Fixes #5414 (PostgreSQL tokenizer)
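The tokenizer fix can be modeled with a simplified extractor: when the first comment opens with "/*!" or "/*+", the marker character must be skipped so that `/*+ hostgroup=1000 */` yields `hostgroup=1000` rather than `+hostgroup=1000`. This is a condensed model of the behavior, not the actual c_tokenizer.cpp/pgsql_tokenizer.cpp code:

```cpp
#include <string>

// Extract the content of the first comment in a query, skipping the
// executable-comment ('!') or optimizer-hint ('+') marker so the parsed
// key/value pairs start at the real content.
std::string extract_first_comment(const std::string& query) {
    size_t open = query.find("/*");
    if (open == std::string::npos) return "";
    size_t body = open + 2;
    if (body < query.size() && (query[body] == '!' || query[body] == '+')) ++body;
    size_t close = query.find("*/", body);
    if (close == std::string::npos) return "";
    std::string content = query.substr(body, close - body);
    // Trim surrounding whitespace.
    size_t b = content.find_first_not_of(" \t");
    if (b == std::string::npos) return "";
    size_t e = content.find_last_not_of(" \t");
    return content.substr(b, e - b + 1);
}
```

With the marker skipped, first-comment parsing sees `hostgroup=1000` and the routing key is `hostgroup`, which is what modes 1 and 3 require.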
- Fix Top-K heap comparator in Query_Processor.cpp: use 'worse' comparator
  so heap top is the worst candidate (not best), enabling proper Top-K selection
- Add packet/message size guards in MySQLFFTO and PgSQLFFTO on_server_data()
  to prevent memory exhaustion from large result sets
- Add _exit(1) in utils.cpp when /dev/null open fails to prevent FD pollution
- Add NULL checks and consume query result in test_ffto_bypass-t.cpp
- Fix TAP message formatting in mcp_show_connections_commands_inmemory-t.cpp
- Add run_admin_checked() helper in pgsql-issue5384-t.cpp for proper error handling
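The Top-K comparator fix above is the classic heap inversion: to keep the K largest values, the heap must be ordered by "worse", so the top is the worst kept candidate and can be evicted when a better one arrives. A minimal sketch over plain ints (the real code operates on digest rows with multi-criteria ordering):

```cpp
#include <algorithm>
#include <queue>
#include <vector>

// Keep the k largest counts using a min-heap: top() is the smallest
// (worst) kept value, so any new value greater than top() replaces it.
std::vector<int> top_k(const std::vector<int>& counts, size_t k) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    for (int c : counts) {
        if (heap.size() < k) {
            heap.push(c);
        } else if (c > heap.top()) {  // better than the worst kept -> replace
            heap.pop();
            heap.push(c);
        }
    }
    std::vector<int> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    std::sort(out.rbegin(), out.rend());  // present results descending
    return out;
}
```

Using the "best" comparator instead (a max-heap) would make top() the best candidate, causing the selection to evict the wrong elements, which is the bug this commit fixes.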
Add comprehensive draft release notes covering all changes since commit
7e9e009 (725+ commits).

Key features documented:
- FFTO (Fast Forward Traffic Observer) for both versions
- TSDB (Time Series Database) for both versions
- MCP Server, GenAI, RAG (4.0.6 only, PROXYSQLGENAI=1)
- Noise testing framework
- Query processor enhancements
- Bug fixes and improvements

Files added:
- doc/release-notes/DRAFT-3.0.6-4.0.6.md - Main release notes
- doc/release-notes/CHANGELOG-3.0.6-4.0.6-detailed.md - All commits
- doc/release-notes/CHANGELOG-3.0.6-4.0.6-commits.md - Categorized
- doc/release-notes/README.md - Workflow documentation

Refs: #5418

coderabbitai bot commented Feb 23, 2026

📝 Walkthrough


This pull request introduces a comprehensive overhaul of ProxySQL's observability and query instrumentation infrastructure. It replaces the "observe" endpoint with a new "stats" endpoint featuring specialized MCP tools for live and historical data access. Simultaneously, it adds Fast-Forward Traffic Observers (FFTO) for non-intrusive MySQL and PostgreSQL query metric collection, along with in-memory Top-K query digest filtering/sorting capabilities. A noise testing framework is added to support configurable background load generation during TAP tests. Extensive documentation and release notes accompany these features.

Changes

Cohort / File(s) Summary
MCP Stats Endpoint Architecture
include/Stats_Tool_Handler.h, lib/Stats_Tool_Handler.cpp, include/MCP_Thread.h, lib/MCP_Thread.cpp, include/MCP_Tool_Handler.h, lib/MCP_Tool_Handler.cpp
Introduces Stats_Tool_Handler class replacing Observe_Tool_Handler with 20+ tool implementations (show_status, show_processlist, show_queries, show_commands, show_connections, show_errors, show_cluster, show_connection_history, show_query_history, etc.). Adds helper methods for admin queries, stats database access, interval configuration, and percentile calculation.
FFTO MySQL Implementation
include/MySQLFFTO.hpp, lib/MySQLFFTO.cpp, include/MySQL_Session.h, lib/MySQL_Session.cpp
Introduces MySQLFFTO class with state machine (IDLE, AWAITING_RESPONSE, READING_COLUMNS, READING_ROWS) for MySQL protocol observation. Handles prepared statements, query lifecycle tracking, and digest-based metric reporting. Integration into MySQL_Session with ffto_bypassed state and observe_ffto_client_packet method.
FFTO PostgreSQL Implementation
include/PgSQLFFTO.hpp, lib/PgSQLFFTO.cpp, include/PgSQL_Session.h, lib/PgSQL_Session.cpp
Introduces PgSQLFFTO class supporting both simple and extended query flows with portal/prepared statement mapping. Tracks pipelined queries via deque and reports metrics through GloPgQPro processor. Integration into PgSQL_Session with FFTO lifecycle management.
Traffic Observer Interface & Configuration
include/TrafficObserver.hpp, include/MySQL_Thread.h, lib/MySQL_Thread.cpp, include/PgSQL_Thread.h, lib/PgSQL_Thread.cpp
Abstract TrafficObserver interface defining on_client_data, on_server_data, on_close callbacks. Adds FFTO configuration variables (ffto_enabled, ffto_max_buffer_size) and query_processor_first_comment_parsing to both MySQL and PostgreSQL thread handlers with in-memory processlist filtering/sorting.
Query Digest Top-K Functionality
include/query_digest_topk.h, include/proxysql_admin.h, include/query_processor.h, lib/Query_Processor.cpp, lib/ProxySQL_Admin.cpp
Introduces query_digest_topk_result_t structures and QueryDigestTopK template method with in-memory Top-K selection via min-heap, multi-criteria deterministic sorting, and case-insensitive digest matching. Adds get_query_digests_topk method with filtering, sorting, and pagination.
Utility Functions & Helpers
include/proxysql_utils.h, lib/proxysql_utils.cpp, lib/c_tokenizer.cpp, lib/pgsql_tokenizer.cpp
Adds sql_escape and calculate_percentile_from_histogram utilities. Extends tokenizer to recognize /\*+ optimizer hint markers for both MySQL and PostgreSQL comment parsing.
Noise Testing Framework
test/tap/tap/noise_utils.h, test/tap/tap/noise_utils.cpp, test/tap/tap/utils.h, test/tap/tap/utils.cpp, test/tap/tap/command_line.h, test/tap/tap/command_line.cpp, test/tap/tap/tap.cpp
Comprehensive noise generation subsystem with spawn_internal_noise, stop_internal_noise_threads orchestration. Implements internal noise routines: admin_pinger, stats_poller, prometheus_poller, mysql/pgsql traffic generators, random stats poller. Integration with TAP test lifecycle via exit_status and noise failure tracking.
TAP Test Infrastructure Updates
test/tap/tap/Makefile, test/tap/tap/mcp_client.h, test/tap/tap/mcp_client.cpp
Extends Makefile with noise_utils compilation and PostgreSQL/RE2 linking. Adds set_use_ssl method to MCPClient for HTTPS support. Updates library dependencies to include MCP_Tool_Handler.oo and new FFTO object files.
New Stress & Concurrency Tests
test/tap/tests/mcp_mysql_concurrency_stress-t.cpp, test/tap/tests/mcp_pgsql_concurrency_stress-t.cpp, test/tap/tests/mcp_mixed_mysql_pgsql_concurrency_stress-t.cpp, test/tap/tests/mcp_mixed_stats_cap_churn-t.cpp, test/tap/tests/mcp_mixed_stats_profile_matrix-t.cpp
Multi-threaded workload generators with concurrent MCP stats polling validation. Instruments processlist and query digest retrieval under sustained load across MySQL and PostgreSQL. Includes cap-churn profiles and per-operation metrics collection.
Issue-Specific & Feature Tests
test/tap/tests/issue5384-t.cpp, test/tap/tests/mcp_runtime_variables-t.cpp, test/tap/tests/mcp_module-t.cpp, test/tap/tests/kill_connection-t.cpp, test/tap/tests/admin_show_create_table-t.cpp, test/tap/tests/admin_various_commands-t.cpp
Tests for query processor first-comment parsing modes, MCP runtime variable synchronization, endpoint authentication configuration updates, and noise integration into existing tests. Updates plan counts and test assertions to accommodate noise overhead.
Noise Injection Scripts
test/tap/noise/noise_admin_pinger.sh, test/tap/noise/noise_pgsql_poller.sh, test/tap/noise/noise_stats_poller.py
External noise tools: MySQL admin pinger, PostgreSQL poller, and Python-based stats table poller for background query generation.
Test Infrastructure & Configuration
test/tap/tests/Makefile, test/tap/tests/mcp_rules_testing/configure_mcp.sh, test/tap/groups/groups.json
Adds new FFTO test targets (test_ffto_mysql-t, test_ffto_pgsql-t, test_ffto_bypass-t) and noise_injection test. Expands MCP test groups with stress/concurrency variants. Updates PostgreSQL and RE2 library linkage.
MCP Endpoint & Configuration Updates
lib/MCP_Endpoint.cpp, lib/ProxySQL_MCP_Server.cpp, src/proxysql.cfg
Transitions from mcp-observe_endpoint_auth to mcp-stats_endpoint_auth; adds mcp-ai_endpoint_auth and mcp-rag_endpoint_auth. Updates endpoint registration and tool handler initialization with defensive checks and generic registration lambda.
Global Configuration & Initialization
include/proxysql.h, include/proxysql_glovars.hpp, include/proxysql_structs.h, lib/ProxySQL_GloVars.cpp, src/main.cpp
Conditional AI-related header includes; removes genai_enabled/mcp_enabled global toggles; adds thread-local FFTO and query processor variables; updates initialization phases to remove redundant GenAI/MCP checks.
Test Utilities & Admin Updates
lib/Admin_FlushVariables.cpp, lib/Config_Tool_Handler.cpp, lib/ProxySQL_Admin_Tests2.cpp, lib/QP_query_digest_stats.cpp
Implements synchronization of the MCP variables database with runtime_global_variables. Adds a QueryDigestTopK verification harness. Updates get_digest_text to a const-correct signature. Adjusts include ordering.
.gitignore & Documentation
.gitignore, doc/MCP/Architecture.md, doc/MCP/VARIABLES.md, doc/MCP/Tool_Discovery_Guide.md, doc/Stats_Tables.md, doc/MCP/MCP_Stats_Tools_Spec.md, doc/MCP/MCP_Stats_Implementation_Plan.md, doc/ffto_design.md, doc/release-notes/*, test/tap/NOISE_TESTING.md
Updates ignore patterns for test artifacts and PEM files. Renames observe to stats throughout MCP documentation. Adds comprehensive Stats_Tables documentation (1857 lines), FFTO architecture guide (93 lines), MCP Stats specification (1362 lines) and implementation plan (1216 lines). Includes generated release notes for 3.0.6/4.0.6 versions and noise testing framework documentation.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

This PR introduces substantial new functionality across multiple architectural layers: (1) FFTO implementation requires understanding of MySQL and PostgreSQL protocol state machines and their integration into session lifecycles; (2) MCP Stats tooling involves 20+ tool handlers with complex database query patterns and cross-database compatibility; (3) Query Digest Top-K adds dense algorithmic logic with multi-criteria sorting and heap management; (4) Noise testing framework spans 900+ lines with thread management and process orchestration; (5) Changes are highly heterogeneous across headers, implementations, tests, and documentation requiring separate reasoning for each cohort.
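The Top-K selection mentioned in (3) typically bounds memory with a min-heap: keep at most K rows, evicting the current minimum as new rows arrive. A hedged sketch of that general technique, with illustrative types and a single sort criterion rather than ProxySQL's actual multi-criteria implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <queue>
#include <string>
#include <vector>

// Illustrative digest row; the real stats row carries more fields.
struct digest_row_t {
    std::string digest;
    uint64_t count;
    uint64_t sum_time;
};

// Return the k rows with the largest sum_time, in descending order.
// Memory stays O(k) regardless of how many digests are scanned.
std::vector<digest_row_t> topk_by_sum_time(const std::vector<digest_row_t>& rows, size_t k) {
    auto worse = [](const digest_row_t& a, const digest_row_t& b) {
        return a.sum_time > b.sum_time;  // inverted compare -> min-heap on sum_time
    };
    std::priority_queue<digest_row_t, std::vector<digest_row_t>, decltype(worse)> heap(worse);
    for (const auto& r : rows) {
        heap.push(r);
        if (heap.size() > k) heap.pop();  // evict the smallest of the k+1
    }
    std::vector<digest_row_t> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    std::sort(out.begin(), out.end(),
              [](const digest_row_t& a, const digest_row_t& b) { return a.sum_time > b.sum_time; });
    return out;
}
```

Multi-criteria variants replace the single comparator with a tie-breaking chain (e.g. sum_time, then count, then digest).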

Suggested reviewers

  • rahim-kanji


🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): coverage is 44.51%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): the PR title 'docs: Add draft release notes for ProxySQL 3.0.6 / 4.0.6' clearly and concisely describes the main change: adding release notes documentation for two specific versions.


@gemini-code-assist

Summary of Changes

Hello @renecannao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the draft release notes for ProxySQL 3.0.6 and 4.0.6, outlining a significant expansion of features and stability improvements. Key additions include the Fast Forward Traffic Observer (FFTO) for enhanced monitoring of fast-forwarded traffic, a new Time Series Database (TSDB) for historical metrics, and a robust Noise Testing Framework for more realistic load testing. For the 4.0.6 GenAI build, a Model Context Protocol (MCP) Server is integrated, alongside comprehensive GenAI and Retrieval-Augmented Generation (RAG) capabilities, enabling advanced AI-driven functionalities like NL2SQL and anomaly detection. The changes also encompass numerous bug fixes, performance optimizations, and refactorings across MySQL, PostgreSQL, and core ProxySQL components, ensuring a more stable and feature-rich release.

Highlights

  • Draft Release Notes for 3.0.6 / 4.0.6: Comprehensive draft release notes have been added, detailing new features, bug fixes, and improvements for ProxySQL versions 3.0.6 and 4.0.6. The notes clarify the version scheme, distinguishing between core proxy functionality (3.0.x) and additional GenAI/MCP/RAG capabilities (4.0.x).
  • Fast Forward Traffic Observer (FFTO): A new Fast Forward Traffic Observer (FFTO) has been implemented for both MySQL and PostgreSQL. This feature enables non-intrusive, query-level observability for connections operating in fast-forward mode, capturing metrics like affected rows and rows sent without impacting performance.
  • Time Series Database (TSDB): A built-in Time Series Database (TSDB) subsystem has been introduced for historical metrics storage. It includes a REST API for external access, a Web UI dashboard, and Prometheus-compatible metrics ingestion, facilitating trend analysis and capacity planning.
  • Noise Testing Framework: A robust Noise Testing Framework has been developed to simulate realistic background load during TAP (Test Anything Protocol) tests. This framework helps identify race conditions, deadlocks, and stability issues under concurrent activity.
  • Query Processor Enhancements: The Query Processor has been enhanced to correctly handle optimizer hints (e.g., /*+ */) in query tokenizers. A new configurable variable, {mysql,pgsql}-query_processor_first_comment_parsing, provides flexible control over when comment hints are parsed relative to query rules.
  • Model Context Protocol (MCP) Server (4.0.6 only): For the 4.0.6 GenAI build, a Model Context Protocol (MCP) Server has been integrated. This server provides multiple endpoints (config, stats, query, admin, cache, AI, RAG) with per-endpoint authentication, enabling AI agents and LLMs to interact with ProxySQL's administration interface.
  • GenAI and Retrieval-Augmented Generation (RAG) Capabilities (4.0.6 only): Comprehensive GenAI and RAG capabilities have been added to the 4.0.6 build. This includes vector database integration, embedding generation and storage, anomaly detection for query patterns, Natural Language to SQL (NL2SQL), and document ingestion for RAG.
  • MCP Endpoint Renaming and Refactoring: The /mcp/observe endpoint has been renamed to /mcp/stats, with corresponding updates to related variables, documentation, and tool handlers, improving clarity and consistency in the MCP architecture.
  • Bug Fixes and Stability Improvements: Numerous bug fixes have been implemented across various components, including PostgreSQL (deadlocks, prepared statement purge, memory reads), MySQL (monitor assertions, SQLite3 server support), and core ProxySQL (SIGSEGV, FFTO memory corruption, SQL injection vulnerabilities). Performance optimizations and refactorings, such as migrating prepare_v2 calls to a RAII-based API and improving OpenSSL RSA key generation, contribute to overall stability and safety.
  • Expanded Test Coverage: New TAP tests have been added for FFTO, MCP, GenAI, TSDB, and noise injection scenarios, significantly expanding the test coverage and ensuring the reliability of new and existing features.
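The optimizer-hint handling highlighted above comes down to treating /*+ like the existing /*! executable-comment marker when scanning a comment opener. A minimal illustration of that detection step (not the actual c_tokenizer.cpp logic, which operates inside a streaming state machine):

```cpp
#include <cassert>
#include <cstddef>

// A "/*" comment whose third character is '!' (MySQL executable comment)
// or '+' (optimizer hint) must not be stripped as an ordinary comment.
bool is_hint_or_cmd_comment(const char* q, size_t len) {
    if (len < 3) return false;                 // guard the q[2] access
    if (q[0] != '/' || q[1] != '*') return false;
    return q[2] == '!' || q[2] == '+';
}
```

The length guard mirrors the boundary check the reviewers call out below: the third byte is only inspected once the buffer is known to hold it.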


Changelog
  • .gitignore
    • Updated to ignore generated test pems, test artifacts, and .gemini files.
  • doc/MCP/Architecture.md
    • Renamed observe_tool_handler to stats_tool_handler and /mcp/observe to /mcp/stats in the architecture diagram and text.
  • doc/MCP/MCP_Stats_Implementation_Plan.md
    • Added a new document detailing the implementation guidance for /stats endpoint tools, including database access patterns, table mappings, SQL queries, data flow, and testing strategies.
  • doc/MCP/MCP_Stats_Tools_Spec.md
    • Added a new specification document for the /stats endpoint MCP tools, categorizing them into live data, historical data, and utility tools, and detailing their parameters and response formats.
  • doc/MCP/Tool_Discovery_Guide.md
    • Updated the endpoint name from /mcp/observe to /mcp/stats in the tool discovery guide.
  • doc/MCP/VARIABLES.md
    • Renamed the authentication variable mcp-observe_endpoint_auth to mcp-stats_endpoint_auth and removed several legacy mcp-mysql_* configuration variables.
  • doc/Stats_Tables.md
    • Added a new comprehensive documentation for ProxySQL statistics and metrics tables, covering MySQL, PostgreSQL, cluster, historical, and system tables, along with their architecture, data characteristics, and collection triggers.
  • doc/ffto_design.md
    • Added a new design document for the Fast Forward Traffic Observer (FFTO), outlining its architecture, goals, configuration, and limitations for MySQL and PostgreSQL.
  • doc/release-notes/CHANGELOG-3.0.6-4.0.6-commits.md
    • Added a new categorized changelog file for the 3.0.6 / 4.0.6 release.
  • doc/release-notes/CHANGELOG-3.0.6-4.0.6-detailed.md
    • Added a new detailed changelog file for the 3.0.6 / 4.0.6 release.
  • doc/release-notes/DRAFT-3.0.6-4.0.6.md
    • Added the main draft release notes document for ProxySQL 3.0.6 / 4.0.6, summarizing new features, bug fixes, improvements, and build system changes.
  • doc/release-notes/README.md
    • Added a README file for the release notes directory, explaining the version scheme, workflow, and contributing guidelines.
  • include/MCP_Thread.h
    • Renamed Observe_Tool_Handler to Stats_Tool_Handler and updated the corresponding pointer.
    • Added mcp_ai_endpoint_auth and new mcp_stats_show_queries_max_rows, mcp_stats_show_processlist_max_rows, and mcp_stats_enable_debug_tools variables.
    • Removed legacy mcp_mysql_* variables.
  • include/MCP_Tool_Handler.h
    • Removed include.
    • Added SQLite3_result forward declaration.
    • Updated comments from observe to stats.
    • Added a static resultset_to_json method.
  • include/MySQLFFTO.hpp
    • Added a new header file defining the MySQLFFTO class for MySQL traffic observation in fast-forward mode.
  • include/MySQL_Session.h
    • Added TrafficObserver include.
    • Added m_ffto (unique_ptr for TrafficObserver) and ffto_bypassed members.
    • Added observe_ffto_client_packet method.
  • include/MySQL_Thread.h
    • Added query_processor_first_comment_parsing, ffto_enabled, and ffto_max_buffer_size variables.
    • Updated the SQL3_Processlist function signature to include processlist_config_t.
  • include/Observe_Tool_Handler.h
    • Renamed to Stats_Tool_Handler.h and updated its internal description to reflect the change from "observe" to "stats" endpoint.
  • include/PgSQLFFTO.hpp
    • Added a new header file defining the PgSQLFFTO class for PostgreSQL traffic observation in fast-forward mode.
  • include/PgSQL_Session.h
    • Added TrafficObserver include.
    • Added m_ffto (unique_ptr for TrafficObserver) and ffto_bypassed members.
  • include/PgSQL_Thread.h
    • Added query_processor_first_comment_parsing, ffto_enabled, and ffto_max_buffer_size variables.
    • Updated the SQL3_Processlist function signature to include processlist_config_t.
  • include/Stats_Tool_Handler.h
    • Added a new header file defining the Stats_Tool_Handler class for the /mcp/stats endpoint, including various live data, historical data, and utility tools.
  • include/TrafficObserver.hpp
    • Added a new header file defining the TrafficObserver interface for protocol-aware traffic observation.
  • include/cpp.h
    • Updated includes to reflect the renaming of Observe_Tool_Handler.h to Stats_Tool_Handler.h and ensure MCP_Tool_Handler.h is included when PROXYSQLGENAI is defined.
  • include/proxysql.h
    • Wrapped AI feature includes within #ifdef PROXYSQLGENAI block.
  • include/proxysql_admin.h
    • Included query_digest_topk.h.
    • Added processlist_sort_by_t, processlist_query_options_t, and processlist_config_t structs.
    • Added QueryDigestTopK template function and ProxySQL_Test___Verify_QueryDigestTopK for internal testing.
  • include/proxysql_glovars.hpp
    • Removed genai_enabled and mcp_enabled boolean variables.
  • include/proxysql_structs.h
    • Added thread-local variables for pgsql_thread___ffto_enabled, pgsql_thread___ffto_max_buffer_size, pgsql_thread___query_processor_first_comment_parsing, mysql_thread___ffto_enabled, mysql_thread___ffto_max_buffer_size, and mysql_thread___query_processor_first_comment_parsing.
  • include/proxysql_utils.h
    • Added sql_escape function for escaping SQL literals and calculate_percentile_from_histogram for percentile estimation.
  • include/query_digest_topk.h
    • Added a new header file defining shared API types for in-memory query digest Top-K retrieval, including query_digest_sort_by_t, query_digest_filter_opts_t, query_digest_topk_row_t, and query_digest_topk_result_t.
  • include/query_processor.h
    • Included query_digest_topk.h.
    • Updated get_digest_text to be a const member function.
    • Added get_query_digests_topk method.
  • lib/Admin_FlushVariables.cpp
    • Removed includes for individual MCP tool handlers.
    • Modified flush_mcp_variables___database_to_runtime to update runtime_global_variables.
    • Modified flush_mcp_variables___runtime_to_database to handle runtime_global_variables and clean up prepared statements.
  • lib/Config_Tool_Handler.cpp
    • Updated includes to include MCP_Tool_Handler.h and Config_Tool_Handler.h.
  • lib/MCP_Endpoint.cpp
    • Updated authentication logic to rename observe to stats and added ai endpoint.
    • Changed server name in handle_jsonrpc_request.
  • lib/MCP_Thread.cpp
    • Updated mcp_thread_variables_names to reflect renaming of observe_endpoint_auth to stats_endpoint_auth, added ai_endpoint_auth, and new stats_show_queries_max_rows, stats_show_processlist_max_rows, stats_enable_debug_tools.
    • Removed legacy mysql_* variables.
    • Updated constructor, destructor, get_variable, and set_variable methods accordingly.
  • lib/MCP_Tool_Handler.cpp
    • Added implementation for the static resultset_to_json method.
  • lib/Makefile
    • Added MySQLFFTO.oo, PgSQLFFTO.oo, MCP_Tool_Handler.oo, and Stats_Tool_Handler.oo to the _OBJ_CXX list for conditional compilation with PROXYSQLGENAI.
  • lib/MySQLFFTO.cpp
    • Added implementation for the MySQLFFTO class, including on_client_data, on_server_data, on_close, process_client_packet, process_server_packet, and report_query_stats methods.
  • lib/MySQL_Session.cpp
    • Included MySQLFFTO.hpp.
    • Initialized ffto_bypassed in constructor and reset it in reset().
    • Added logic to create and use MySQLFFTO instance in observe_ffto_client_packet and handler.
    • Added GenAI-related error handling for GENAI: query and LLM: query.
  • lib/MySQL_Thread.cpp
    • Included , , .
    • Added query_processor_first_comment_parsing, ffto_enabled, and ffto_max_buffer_size variables.
    • Implemented apply_mysql_processlist_query_options and updated SQL3_Processlist to use it.
  • lib/PgSQLFFTO.cpp
    • Added implementation for the PgSQLFFTO class, including on_client_data, on_server_data, on_close, track_query, clear_current_query, activate_next_query, finalize_current_query, process_client_message, process_server_message, and report_query_stats methods.
  • lib/PgSQL_Session.cpp
    • Included PgSQLFFTO.hpp.
    • Initialized ffto_bypassed in constructor and reset it in reset().
    • Added logic to create and use PgSQLFFTO instance in get_pkts_from_client and handler.
  • lib/PgSQL_Thread.cpp
    • Included , , .
    • Added query_processor_first_comment_parsing, ffto_enabled, and ffto_max_buffer_size variables.
    • Implemented apply_pgsql_processlist_query_options and updated SQL3_Processlist to use it.
  • lib/ProxySQL_Admin.cpp
    • Implemented the QueryDigestTopK template function for both MySQL and PostgreSQL.
    • Explicitly instantiated QueryDigestTopK for SERVER_TYPE_MYSQL and SERVER_TYPE_PGSQL.
  • lib/ProxySQL_Admin_Tests2.cpp
    • Added query_digest_test_row_t struct and helper functions for internal testing of QueryDigestTopK.
    • Implemented ProxySQL_Test___Verify_QueryDigestTopK and added a new test case (type 56) to ProxySQL_Admin::ProxySQL_Test_Handler.
  • lib/ProxySQL_GloVars.cpp
    • Removed genai_enabled and mcp_enabled from ProxySQL_GlobalVariables::process_opts_pre.
  • lib/ProxySQL_MCP_Server.cpp
    • Updated constructor to initialize Stats_Tool_Handler instead of Observe_Tool_Handler.
    • Added error handling for tool handler initialization.
    • Refactored endpoint registration using a lambda.
    • Removed extern AI_Features_Manager *GloAI;.
    • Updated destructor to clean up Stats_Tool_Handler.
  • lib/QP_query_digest_stats.cpp
    • Made get_digest_text a const member function.
  • lib/Query_Processor.cpp
    • Included , , , .
    • Added helper structs and functions for get_query_digests_topk.
    • Implemented get_query_digests_topk method.
    • Modified process_query to incorporate query_processor_first_comment_parsing logic.
  • lib/c_tokenizer.cpp
    • Modified process_cmnt_type_1 to recognize /*+ as an optimizer hint, in addition to /*!.
  • lib/pgsql_tokenizer.cpp
    • Modified process_cmnt_type_1 to recognize /*+ as an optimizer hint.
  • lib/proxysql_utils.cpp
    • Included .
    • Implemented sql_escape and calculate_percentile_from_histogram functions.
  • src/main.cpp
    • Added proxy_info message for GenAI Threads Handler initialization.
    • Removed conditional initialization of GenAI and MCP modules based on GloVars.global.genai_enabled and GloVars.global.mcp_enabled, making them always initialized if PROXYSQLGENAI is defined.
    • Removed redundant MCP module initialization in ProxySQL_Main_init_phase3___start_all.
  • src/proxysql.cfg
    • Renamed mcp_observe_endpoint_auth to mcp_stats_endpoint_auth.
    • Added mcp_ai_endpoint_auth, mcp_stats_show_queries_max_rows, mcp_stats_show_processlist_max_rows, and mcp_stats_enable_debug_tools variables.
  • test/tap/NOISE_TESTING.md
    • Added a new documentation file detailing the Noise Injection Framework for TAP tests, including its overview, configuration, standard tools, built-in routines, and usage.
  • test/tap/groups/groups.json
    • Added new test entries for test_noise_injection-t, test_ffto_bypass-t, test_ffto_mysql-t, test_ffto_pgsql-t, issue5384-t, pgsql-issue5384-t, mcp_mixed_mysql_pgsql_concurrency_stress-t, mcp_mixed_stats_cap_churn-t, mcp_mixed_stats_profile_matrix-t, mcp_mysql_concurrency_stress-t, mcp_pgsql_concurrency_stress-t, mcp_runtime_variables-t, mcp_show_connections_commands_inmemory-t, mcp_show_queries_topk-t, and mcp_stats_refresh-t.
    • Removed a duplicate vector_features-t entry.
  • test/tap/noise/noise_admin_pinger.sh
    • Added a new shell script for a background admin pinger noise tool.
  • test/tap/noise/noise_pgsql_poller.sh
    • Added a new shell script for a background PostgreSQL poller noise tool.
  • test/tap/noise/noise_stats_poller.py
    • Added a new Python script for a background stats poller noise tool.
  • test/tap/tap/Makefile
    • Updated Makefile to include noise_utils.o in libtap_mariadb.a, libtap_mysql57.a, libtap_mysql8.a.
    • Added libpq and re2 linking to libtap.so.
    • Added AR definition.
  • test/tap/tap/command_line.cpp
    • Added logic to parse TAP_USE_NOISE environment variable to set use_noise flag.
  • test/tap/tap/command_line.h
    • Added use_noise boolean member.
  • test/tap/tap/mcp_client.cpp
    • Added use_ssl_ member initialization and set_use_ssl method implementation, including CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST options.
  • test/tap/tap/mcp_client.h
    • Added set_use_ssl method and use_ssl_ boolean member.
  • test/tap/tap/noise_utils.cpp
    • Added implementations for register_noise_failure, noise_log, spawn_internal_noise, stop_internal_noise_threads, get_internal_noise_threads_count, internal_noise_admin_pinger, internal_noise_stats_poller, internal_noise_prometheus_poller, internal_noise_mysql_traffic_v2, internal_noise_rest_prometheus_poller, internal_noise_random_stats_poller, internal_noise_mysql_traffic, internal_noise_pgsql_traffic, and internal_noise_pgsql_traffic_v2.
  • test/tap/tap/noise_utils.h
    • Added a new header file defining noise utility functions, including NoiseOptions, noise_log, internal_noise_func_t, spawn_internal_noise, register_noise_failure, stop_internal_noise_threads, get_internal_noise_threads_count, and declarations for various internal noise functions.
  • test/tap/tap/tap.cpp
    • Included , , , .
    • Modified exit_status to call stop_noise_tools and check noise_failures before exiting.
  • test/tap/tap/utils.cpp
    • Included , , , , .
    • Removed replace_str function.
    • Modified perform_simple_get to accept an optional userpwd parameter.
    • Implemented spawn_noise, stop_noise_tools, and get_noise_tools_count functions.
  • test/tap/tap/utils.h
    • Updated wait_get_enpoint_ready and perform_simple_get signatures to include userpwd.
    • Added declarations for spawn_noise, stop_noise_tools, and get_noise_tools_count.
  • test/tap/tests/admin_show_create_table-t.cpp
    • Added noise_utils.h include.
    • Spawns internal_noise_admin_pinger and internal_noise_stats_poller.
    • Updated plan count to account for noise tools.
  • test/tap/tests/admin_various_commands-t.cpp
    • Added noise_utils.h include.
    • Spawns internal_noise_admin_pinger and internal_noise_random_stats_poller.
    • Updated plan count to account for noise tools.
  • test/tap/tests/issue5384-t.cpp
    • Added a new test file to verify the mysql-query_processor_first_comment_parsing variable.
  • test/tap/tests/kill_connection-t.cpp
    • Added noise_utils.h include.
    • Spawns internal_noise_mysql_traffic and internal_noise_rest_prometheus_poller.
    • Updated plan count to account for noise tools.
  • test/tap/tests/mcp_mixed_mysql_pgsql_concurrency_stress-t.cpp
    • Added a new test file for mixed MySQL/PgSQL concurrency stress with MCP stats polling.
  • test/tap/tests/mcp_mixed_stats_cap_churn-t.cpp
    • Added a new test file to verify MCP stats stability under dynamic cap changes during mixed load.
  • test/tap/tests/mcp_mixed_stats_profile_matrix-t.cpp
    • Added a new test file to run a matrix of mixed MySQL/PgSQL stress profiles with MCP stats polling.
  • test/tap/tests/mcp_mysql_concurrency_stress-t.cpp
    • Added a new test file for MySQL concurrency stress with MCP stats polling.
  • test/tap/tests/mcp_pgsql_concurrency_stress-t.cpp
    • Added a new test file for PgSQL concurrency stress with MCP stats polling.
  • test/tap/tests/mcp_runtime_variables-t.cpp
    • Added a new test file to verify MCP runtime_global_variables table population.
  • test/tap/tests/test_ffto_bypass-t.cpp
    • Added a new test file to verify FFTO bypass functionality when packet size exceeds buffer limits.
  • test/tap/tests/test_ffto_mysql-t.cpp
    • Added a new test file to verify MySQL FFTO functionality for text and prepared statements.
  • test/tap/tests/test_ffto_pgsql-t.cpp
    • Added a new test file to verify PostgreSQL FFTO functionality for simple and extended queries.
  • test/tap/tests/test_noise_injection-t.cpp
    • Added a new test file to verify the Noise Injection Framework.
  • test/tap/tests/test_tsdb_api-t.cpp
    • Added noise_utils.h include.
    • Spawns internal_noise_admin_pinger and internal_noise_stats_poller.
    • Updated plan count to account for noise tools.
  • test/tap/tests/test_tsdb_variables-t.cpp
    • Added noise_utils.h include.
    • Spawns internal_noise_admin_pinger and internal_noise_stats_poller.
    • Updated plan count to account for noise tools.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a substantial amount of documentation for the upcoming ProxySQL 3.0.6 / 4.0.6 releases, including detailed changelogs and specifications for new features. Beyond documentation, it delivers significant feature enhancements and refactoring. Key changes include the renaming of the MCP observe endpoint to stats for better clarity, the introduction of the Fast Forward Traffic Observer (FFTO) for high-performance monitoring, a new noise testing framework to improve test robustness, and various enhancements to the query processor and GenAI capabilities. The code quality is high, with notable improvements in robustness, such as better error handling in module initialization and the use of prepared statements to prevent potential SQL injection vulnerabilities. My feedback is focused on improving the clarity and safety of the new documentation and specifications.

AND (username = ? OR ? IS NULL)
AND (schemaname = ? OR ? IS NULL) -- database parameter maps to schemaname column
AND (digest = ? OR ? IS NULL)
AND (sum_time / count_star >= ? OR ? IS NULL)


medium

The division sum_time / count_star could lead to a division-by-zero error if count_star is 0. While SQLite returns NULL in this case, which might be handled by the rest of the WHERE clause, it's safer and clearer to explicitly prevent this. Consider adding a WHERE count_star > 0 condition or using a CASE statement to handle this edge case, similar to how it's handled in the C++ snippet on line 579 (count > 0 ? ...).
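The guard suggested here can be expressed as a simple conditional; a sketch of the C++-side equivalent of the count > 0 ? ... pattern the comment refers to (names illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Average latency per execution, returning 0 instead of dividing by zero
// when a digest row has not recorded any executions yet.
uint64_t avg_time_us(uint64_t sum_time, uint64_t count_star) {
    return count_star > 0 ? sum_time / count_star : 0;
}
```

In SQL the same intent can be written with a CASE expression or a count_star > 0 predicate, avoiding reliance on SQLite's NULL-on-division-by-zero behavior.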

AND (username = ? OR ? IS NULL)
AND (database = ? OR ? IS NULL) -- database parameter maps to database column
AND (digest = ? OR ? IS NULL)
AND (sum_time / count_star >= ? OR ? IS NULL)


medium

Similar to the MySQL query, the division sum_time / count_star here could result in a division-by-zero error. To improve robustness, I recommend adding a guard to ensure count_star is greater than zero before performing the division.

| `limit` | int | No | `100` | Maximum number of results |
| `offset` | int | No | `0` | Skip first N results |
| `min_count` | int | No | (none) | Only show queries executed at least N times |
| `min_time_us` | int | No | (none) | Only show queries with avg time >= N microseconds |


medium

The parameter name min_time_us is a bit misleading given its description: "Only show queries with avg time >= N microseconds". To improve API clarity and user understanding, I suggest renaming this parameter to min_avg_time_us to more accurately reflect its function.

@sonarqubecloud

Quality Gate Failed Quality Gate failed

Failed conditions
29 Security Hotspots

See analysis details on SonarQube Cloud

@renecannao renecannao changed the title docs: Add draft release notes for ProxySQL 3.0.6 / 4.0.6 [skip-ci] docs: Add draft release notes for ProxySQL 3.0.6 / 4.0.6 Feb 23, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
test/tap/tests/mcp_query_run_sql_readonly-t.cpp (1)

212-225: ⚠️ Potential issue | 🟡 Minor

Cleanup doesn't restore MCP runtime variables, leaking test state.

configure_mcp_for_test() modifies three runtime variables via SET (mcp-port, mcp-use_ssl, mcp-enabled). These are in-memory mutations — they are not rolled back by LOAD MCP PROFILES TO RUNTIME. The cleanup block omits LOAD MCP VARIABLES TO RUNTIME, so those settings persist after the test completes, potentially leaving MCP enabled on a modified port for any test that runs next.

🛠️ Proposed fix
 	if (admin) {
 		run_q(admin, ("DELETE FROM mcp_target_profiles WHERE target_id='" + std::string(k_target_id) + "'").c_str());
 		run_q(admin, ("DELETE FROM mcp_auth_profiles WHERE auth_profile_id='" + std::string(k_auth_profile_id) + "'").c_str());
 		run_q(admin, ("DELETE FROM mysql_servers WHERE hostgroup_id=" + std::to_string(k_hostgroup_id)).c_str());
+		run_q(admin, "LOAD MCP VARIABLES TO RUNTIME");
 		run_q(admin, "LOAD MCP PROFILES TO RUNTIME");
 		run_q(admin, "LOAD MYSQL SERVERS TO RUNTIME");
 		mysql_close(admin);
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tests/mcp_query_run_sql_readonly-t.cpp` around lines 212 - 225, The
cleanup section fails to restore MCP runtime variables changed by
configure_mcp_for_test(); modify the cleanup block that uses admin and run_q to
also execute "LOAD MCP VARIABLES TO RUNTIME" (after deleting rows and
before/after the existing "LOAD MCP PROFILES TO RUNTIME" as appropriate) so the
in-memory mcp-port, mcp-use_ssl and mcp-enabled settings are reverted; ensure
this new run_q call is guarded by the same if (admin) check and placed alongside
the other runtime LOAD commands referenced in the cleanup block.
lib/c_tokenizer.cpp (1)

598-625: ⚠️ Potential issue | 🟡 Minor

LGTM — /*+ detection and discard are correct; minor pre-existing keep_comment gap now also covers optimizer hints.

The boundary check on line 605 (q_cur_pos != (q_len-2)) correctly guards the q[2] access before testing for +, and lines 618-619 already use 2 + c_t_1_st->is_cmd so the 3-char discard works for both /*! and /*+ without any additional changes.

One pre-existing issue that now also affects /*+: when opts->keep_comment == true, the keep_comment block (lines 610-615) copies only q[0]='/' and q[1]='*' into cur_cmd_cmnt, while q[2] (the + or !) is consumed silently by the discard step. As a result, /*+ INDEX(t) */ is reconstructed in the preserved output as /* INDEX(t) */, stripping the optimizer-hint marker entirely. This same loss of ! already existed for /*! comments. While this PR is consistent with that prior behavior, the gap is now observable for optimizer hints too — clients relying on keep_comment=true to round-trip /*+ directives will receive a semantically different (regular) comment back.

🛠️ Proposed fix to preserve `!`/`+` when `keep_comment` is enabled
 		// copy the initial mark "/*" if comment preserving is enabled
 		if (opts->keep_comment) {
 			cur_cmd_cmnt[c_t_1_st->cur_cmd_cmnt_len] = *(shared_st->q);
 			c_t_1_st->cur_cmd_cmnt_len++;
 			cur_cmd_cmnt[c_t_1_st->cur_cmd_cmnt_len] = *(shared_st->q + 1);
 			c_t_1_st->cur_cmd_cmnt_len++;
+			if (c_t_1_st->is_cmd) {
+				cur_cmd_cmnt[c_t_1_st->cur_cmd_cmnt_len] = *(shared_st->q + 2); // preserve '!' or '+'
+				c_t_1_st->cur_cmd_cmnt_len++;
+			}
 		}
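The intended round-trip can be checked in isolation. Below is a minimal sketch — a hypothetical helper, not the actual tokenizer state machine — that copies the same number of bytes the tokenizer later discards (`2 + is_cmd`), so the `!`/`+` marker survives preservation:

```cpp
#include <cstring>
#include <string>

// Hypothetical helper, NOT the real tokenizer: copy the opening comment
// marker, keeping the '!' of /*! and the '+' of /*+ so that keep_comment
// round-trips the command/hint marker. The copy width matches the discard
// width used later (2 + is_cmd).
static std::string preserved_comment_prefix(const char* q) {
    size_t q_len = std::strlen(q);
    if (q_len < 2 || q[0] != '/' || q[1] != '*') return "";
    // mirrors the tokenizer's is_cmd flag: set for /*! and /*+
    int is_cmd = (q_len >= 3 && (q[2] == '!' || q[2] == '+')) ? 1 : 0;
    return std::string(q, 2 + is_cmd);
}
```

With this width, `/*+ INDEX(t) */` preserves `/*+` instead of the demoted `/*`.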
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/c_tokenizer.cpp` around lines 598 - 625, When opts->keep_comment is true
we currently copy only q[0] and q[1] into cur_cmd_cmnt then later advance
shared_st->q by 2 + c_t_1_st->is_cmd, which drops the third character for `/*!`
or `/*+`; change the keep_comment block in the same section (near shared_st->q
handling, c_t_1_st->is_cmd and cur_cmd_cmnt) to, if c_t_1_st->is_cmd == 1, also
copy *(shared_st->q + 2) into cur_cmd_cmnt and increment cur_cmd_cmnt_len before
advancing shared_st->q/shared_st->q_cur_pos so the `!` or `+` is preserved for
`cur_cmd_cmnt`.
lib/pgsql_tokenizer.cpp (1)

655-688: ⚠️ Potential issue | 🟡 Minor

Optimizer hint + character is silently dropped from preserved comments before it's checked.

The keep_comment and fst_cmnt blocks (lines 662–678) copy the opening /* delimiter before the is_optimizer_hint check executes (lines 680–684). When the pointer advances by 3 on line 687, the + is consumed but never written to either buffer. With keep_comment=true, a query like /*+ HASH_JOIN(t1 t2) */ is incorrectly preserved as /* HASH_JOIN(t1 t2) */—a plain comment rather than an optimizer hint.

While no existing code in the codebase was found that checks fst_cmnt[0] for the + character, the logic is incorrect and would break any future code that relies on detecting optimizer hints via this marker (either in PostgreSQL extensions or when this code is reused for other databases).

Move the is_optimizer_hint check before both copy blocks and include the + in the respective buffers when present.

🐛 Proposed fix — detect is_optimizer_hint before copying the opening delimiter
+    // check for optimizer hint marker /*+  (must precede keep_comment/fst_cmnt copies)
+    int is_optimizer_hint = 0;
+    if (shared_st->q_cur_pos <= (shared_st->q_len-3) && *(shared_st->q+2) == '+') {
+        is_optimizer_hint = 1;
+    }
+
     // copy the initial mark "/*" if comment preserving is enabled
     if (opts->keep_comment) {
         cur_cmd_cmnt[c_t_1_st->cur_cmd_cmnt_len] = *(shared_st->q);
         c_t_1_st->cur_cmd_cmnt_len++;
         cur_cmd_cmnt[c_t_1_st->cur_cmd_cmnt_len] = *(shared_st->q + 1);
         c_t_1_st->cur_cmd_cmnt_len++;
+        if (is_optimizer_hint && c_t_1_st->cur_cmd_cmnt_len < FIRST_COMMENT_MAX_LENGTH - 1) {
+            cur_cmd_cmnt[c_t_1_st->cur_cmd_cmnt_len] = '+';
+            c_t_1_st->cur_cmd_cmnt_len++;
+        }
     }

     if (c_t_1_st->fst_cmnt_end == 0 && c_t_1_st->nest_level > 1 &&
         c_t_1_st->fst_cmnt_len < FIRST_COMMENT_MAX_LENGTH - 2) {
         assert(*fst_cmnt);
         char* next_fst_cmnt_char = *fst_cmnt + c_t_1_st->fst_cmnt_len;
         *next_fst_cmnt_char = *(shared_st->q);
         next_fst_cmnt_char++;
         *next_fst_cmnt_char = *(shared_st->q + 1);
         next_fst_cmnt_char++;
         c_t_1_st->fst_cmnt_len += 2;
+        if (is_optimizer_hint && c_t_1_st->fst_cmnt_len < FIRST_COMMENT_MAX_LENGTH - 1) {
+            *next_fst_cmnt_char = '+';
+            c_t_1_st->fst_cmnt_len++;
+        }
     }
-
-    // check for optimizer hint marker /*+
-    int is_optimizer_hint = 0;
-    if (shared_st->q_cur_pos <= (shared_st->q_len-3) && *(shared_st->q+2) == '+') {
-        is_optimizer_hint = 1;
-    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/pgsql_tokenizer.cpp` around lines 655 - 688, The optimizer-hint '+' is
being consumed before it's copied into preserved buffers; move the
is_optimizer_hint detection (the check using shared_st->q and shared_st->q_len)
to run before the opts->keep_comment and fst_cmnt copying blocks, then when
is_optimizer_hint==1 include the '+' character in both cur_cmd_cmnt (used when
opts->keep_comment) and the fst_cmnt buffer (used when fst_cmnt is set) so the
'+' is preserved; keep the existing pointer advance logic (shared_st->q += 2 +
is_optimizer_hint and shared_st->q_cur_pos += 2 + is_optimizer_hint) unchanged.
doc/MCP/VARIABLES.md (1)

109-118: ⚠️ Potential issue | 🟡 Minor

Missing documentation for mcp-rag_endpoint_auth and stats configuration variables.

The config file (src/proxysql.cfg) introduces mcp_rag_endpoint_auth, mcp_stats_show_queries_max_rows, mcp_stats_show_processlist_max_rows, and mcp_stats_enable_debug_tools, but none of these are documented in this file. Since this is the authoritative variable reference, they should be added for completeness.

Would you like me to draft the missing variable documentation sections?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/MCP/VARIABLES.md` around lines 109 - 118, Add documentation entries for
the missing variables using the same format as mcp-ai_endpoint_auth: create
sections for mcp-rag_endpoint_auth, mcp_stats_show_queries_max_rows,
mcp_stats_show_processlist_max_rows, and mcp_stats_enable_debug_tools that list
Type, Default, Description, Runtime, and an Example (mirroring the SET/LOAD
pattern). For mcp-rag_endpoint_auth document it as the bearer token for the
/mcp/rag endpoint; for mcp_stats_show_queries_max_rows and
mcp_stats_show_processlist_max_rows document them as integer limits controlling
the maximum rows returned for the respective stats commands; for
mcp_stats_enable_debug_tools document it as a boolean that toggles enabling of
debug stats tools. Ensure the variable names match exactly
(mcp-rag_endpoint_auth, mcp_stats_show_queries_max_rows,
mcp_stats_show_processlist_max_rows, mcp_stats_enable_debug_tools) and follow
the same casing and example style as the existing entries.
doc/MCP/Architecture.md (1)

505-505: ⚠️ Potential issue | 🟡 Minor

Stale "Last Updated" date.

Line 505 shows Last Updated: 2026-01-19 but this file has been modified as part of the current PR (2026-02-23). Consider updating the date.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/MCP/Architecture.md` at line 505, Update the stale "Last Updated"
metadata in the document by changing the string "**Last Updated:** 2026-01-19"
to reflect the current PR date (e.g., "**Last Updated:** 2026-02-23"); locate
the literal "Last Updated" line in Architecture.md (the "**Last Updated:**
2026-01-19" entry) and replace the date accordingly.
lib/MySQL_Session.cpp (1)

765-771: ⚠️ Potential issue | 🟡 Minor

Reset clears lock state before stats decrement.
reset() sets locked_on_hostgroup = -1, so st_var_hostgroup_locked is never decremented in the destructor. Capture the flag before resetting (or move reset() after the decrement).

Proposed fix
 MySQL_Session::~MySQL_Session() {
-	reset();
+	const bool was_locked_on_hostgroup = (locked_on_hostgroup >= 0);
+	reset();
 	// we moved this out to allow CHANGE_USER
 
-	if (locked_on_hostgroup >= 0) {
+	if (was_locked_on_hostgroup) {
 		thread->status_variables.stvar[st_var_hostgroup_locked]--;
 	}
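A reduced model of the ordering bug (names are illustrative, not ProxySQL's real classes) shows why the flag must be captured before `reset()` runs:

```cpp
// Reduced model: reset() clears locked_on_hostgroup, so a destructor that
// calls reset() first can never observe the lock and skips the decrement.
struct Stats { int hostgroup_locked = 0; };

struct SessionModel {
    Stats* stats;
    int locked_on_hostgroup;
    SessionModel(Stats* s, int hg) : stats(s), locked_on_hostgroup(hg) {
        if (locked_on_hostgroup >= 0) stats->hostgroup_locked++;
    }
    void reset() { locked_on_hostgroup = -1; }  // wipes the lock state
    ~SessionModel() {
        // capture BEFORE reset() clears locked_on_hostgroup
        const bool was_locked = (locked_on_hostgroup >= 0);
        reset();
        if (was_locked) stats->hostgroup_locked--;  // balances the ctor
    }
};
```

Without the capture, the counter would stay at 1 after destruction, the leak described in the finding.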
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/MySQL_Session.cpp` around lines 765 - 771, The destructor
MySQL_Session::~MySQL_Session currently calls reset() before decrementing the
hostgroup lock counter, but reset() clears locked_on_hostgroup so the decrement
never happens; fix by capturing the current locked_on_hostgroup flag (or the
boolean condition) into a local variable before calling reset(), then if that
captured flag indicates a lock, decrement
thread->status_variables.stvar[st_var_hostgroup_locked]; alternatively move the
decrement to occur immediately before reset() so the decrement uses the original
locked_on_hostgroup value.
🟡 Minor comments (19)
.gitignore-71-71 (1)

71-71: ⚠️ Potential issue | 🟡 Minor

Duplicate pattern — line 71 is already present at line 184.

test/tap/tests/test_cluster_sync_config/proxysql*.pem is listed again verbatim at line 184 under #files generated during CI run. One of the two entries should be removed to keep the file tidy.

🧹 Proposed fix
 # generated test pems
 test/tap/tests/*.pem
-test/tap/tests/test_cluster_sync_config/proxysql*.pem
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore at line 71, The .gitignore contains a duplicate pattern
"test/tap/tests/test_cluster_sync_config/proxysql*.pem" (present at line 71 and
again under the "#files generated during CI run" section); remove one of the
duplicate entries so the pattern appears only once (keep the occurrence that
best matches the intended section, e.g., under "#files generated during CI run"
or the original grouping) to tidy the file and avoid redundant lines.
test/tap/tests/mcp_query_run_sql_readonly-t.cpp-159-165 (1)

159-165: ⚠️ Potential issue | 🟡 Minor

Stale mention of WITH (CTE) in the diagnostic banner.

Line 164 still advertises CTE as an allowed statement, but the CTE test case was disabled on line 47. This makes the banner misleading — a reader running the test and looking at the diag output would incorrectly believe CTEs are being validated.

📝 Proposed fix
-	diag("Allowed statements include SELECT, WITH (CTE), EXPLAIN SELECT, SHOW, and DESCRIBE.");
+	diag("Allowed statements include SELECT, EXPLAIN SELECT, SHOW, and DESCRIBE.");
+	diag("Note: WITH clause (CTE) is excluded because MySQL 5.7 does not support CTEs.");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tests/mcp_query_run_sql_readonly-t.cpp` around lines 159 - 165, The
diagnostic banner in the MCP query test still claims "WITH (CTE)" is allowed
even though the CTE test case was disabled; update the diag invocation that
includes "Allowed statements include SELECT, WITH (CTE), EXPLAIN SELECT, SHOW,
and DESCRIBE." to remove or modify the "WITH (CTE)" text so the banner
accurately reflects the current enabled tests (e.g., remove "WITH (CTE)" or
replace it with a note that CTE is skipped), keeping the rest of the diag
strings unchanged and targeting the diag call that contains that exact phrase.
test/tap/tests/mcp_rules_testing/configure_mcp.sh-462-462 (1)

462-462: ⚠️ Potential issue | 🟡 Minor

LGTM — /mcp/stats endpoint rename is consistent with the broader observe→stats migration.

Two pre-existing nits worth fixing while touching this file:

  1. Password logged to stdout (Line 401): echo " MYSQL_PASSWORD=${MYSQL_PASSWORD}" prints the credential in plain text. Consider masking or omitting it.
  2. Docs/code mismatch (Lines 354–355): The help text documents PROXYSQL_ADMIN_USER/PROXYSQL_ADMIN_PASSWORD defaults as admin/admin, but lines 38–39 default to radmin/radmin.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tests/mcp_rules_testing/configure_mcp.sh` at line 462, Remove or
mask plain-text password output by changing the echo that prints MYSQL_PASSWORD
to either omit the variable or print a masked value (e.g. show only first/last
chars or replace with ****) so the credential is not logged to stdout; locate
the echo referencing MYSQL_PASSWORD in configure_mcp.sh. Also ensure help text
defaults for PROXYSQL_ADMIN_USER and PROXYSQL_ADMIN_PASSWORD match the actual
default variables (PROXYSQL_ADMIN_USER and PROXYSQL_ADMIN_PASSWORD) used in the
script—update the help text to reflect the real defaults (radmin/radmin) or
change the default variables to match the documented admin/admin values so docs
and code are consistent.
test/tap/noise/noise_admin_pinger.sh-10-10 (1)

10-10: ⚠️ Potential issue | 🟡 Minor

Quote all variable expansions to prevent word-splitting and glob expansion.

If $HOST, $USER, or $PASS ever contain spaces or shell metacharacters (common in CI environments that inject credentials), the mysql invocation will silently misparse arguments.

🛡️ Proposed fix
-    mysql -h $HOST -P $PORT -u $USER -p$PASS -e "SELECT 1;" > /dev/null 2>&1
+    mysql -h "$HOST" -P "$PORT" -u "$USER" -p"$PASS" -e "SELECT 1;" > /dev/null 2>&1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/noise/noise_admin_pinger.sh` at line 10, The mysql invocation in
noise_admin_pinger.sh uses unquoted variable expansions which can lead to
word-splitting or globbing when HOST, PORT, USER, or PASS contain spaces/special
chars; update the command in that script to quote each expansion (e.g. use "-h"
"$HOST", "-P" "$PORT", "-u" "$USER", and "-p" "$PASS" form) so arguments are
passed intact and the redirection stays the same.
doc/release-notes/CHANGELOG-3.0.6-4.0.6-commits.md-3-18 (1)

3-18: ⚠️ Potential issue | 🟡 Minor

Commit totals are inconsistent in the summary.

The header lists 725 total commits, but the summary table totals 623; please reconcile or note that uncategorized commits are excluded. Also consider fixing obvious typos (e.g., “Handledwq”) before publishing.

Also applies to: 699-700

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/release-notes/CHANGELOG-3.0.6-4.0.6-commits.md` around lines 3 - 18, The
changelog shows inconsistent commit counts: the header string "**Total
commits:** 725" does not match the table sum "**Total: 623 commits**" and
contains typos like "Handledwq"; update the document to either reconcile the
numbers (recompute and replace "**Total commits:** 725" or adjust the table rows
so their sum equals 725) or add a clear note that uncategorized commits are
excluded, and fix obvious typos (search for and correct "Handledwq" and similar
misspellings); ensure the two strings "**Total commits:** 725" and "**Total: 623
commits**" are consistent after your change.
test/tap/tests/mcp_mysql_concurrency_stress-t.cpp-175-208 (1)

175-208: ⚠️ Potential issue | 🟡 Minor

Restore all MCP variables modified by the test.

configure_mcp_runtime() changes mcp-port, mcp-use_ssl, and mcp-stats_endpoint_auth, but restore_mcp_runtime() only resets a subset (and to hard-coded defaults). If the same ProxySQL instance is reused across TAP tests, this can leak state. Consider snapshotting original MCP variables and restoring them after the test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tests/mcp_mysql_concurrency_stress-t.cpp` around lines 175 - 208,
The test mutates MCP variables in configure_mcp_runtime (mcp-port, mcp-use_ssl,
mcp-stats_endpoint_auth, mcp-enabled and the stats caps) but restore_mcp_runtime
only resets a few to hard-coded values, risking state leakage; modify
configure_mcp_runtime to snapshot the current values (query and store the
existing mcp-port, mcp-use_ssl, mcp-stats_endpoint_auth, mcp-enabled,
mcp-stats_show_processlist_max_rows and mcp-stats_show_queries_max_rows into a
small struct or map) before applying changes, keep that snapshot accessible to
the caller (or a file/static test-context), and update restore_mcp_runtime to
restore each variable from the saved snapshot using run_admin_stmt/run_q rather
than hard-coded defaults so the original ProxySQL state is always reverted.
include/PgSQL_Session.h-538-540 (1)

538-540: ⚠️ Potential issue | 🟡 Minor

Add in-class initializer to ffto_bypassed.

The boolean member lacks an in-class initializer. While the constructors currently set it to false, an explicit in-class initializer provides a safety net against future changes where a constructor might omit the initialization, and aligns with modern C++ best practices.

-    bool ffto_bypassed;
+    bool ffto_bypassed { false };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@include/PgSQL_Session.h` around lines 538 - 540, The member boolean
ffto_bypassed in the class declaration lacks an in-class initializer; update its
declaration to initialize it to false (e.g., change the declaration of
ffto_bypassed so it has an in-class initializer) to ensure a safe default even
if constructors are modified, refer to the nearby members Session_Regex**
match_regexes and std::unique_ptr<TrafficObserver> m_ffto to locate the correct
declaration.
test/tap/tap/tap.cpp-370-378 (1)

370-378: ⚠️ Potential issue | 🟡 Minor

Early return on noise failure skips the remaining test summary diagnostics.

When noise_failures is non-empty, the function returns EXIT_FAILURE at Line 376 without emitting the plan (for NO_PLAN tests) or the "Failed N tests!" diagnostic. This means TAP harnesses won't get a proper plan line, which could cause confusing parse errors in CI. Consider moving the noise failure check after the plan emission, or at least emitting the plan before the early return.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/tap.cpp` around lines 370 - 378, The early return on noise
failures skips emitting the TAP plan/summary; adjust the logic around
noise_failures/noise_failure_mutex so the plan and summary diagnostics are
always emitted before exiting: either move the block that checks noise_failures
(the mutex-guarded loop that calls diag for each failed_routine) to after the
code that prints the plan/"Failed N tests!" diagnostics, or ensure you call the
existing plan-emission function(s) immediately before returning EXIT_FAILURE;
keep using noise_failures, noise_failure_mutex, diag and preserve the same
failure messages but guarantee the plan line is emitted first.
doc/release-notes/DRAFT-3.0.6-4.0.6.md-88-93 (1)

88-93: ⚠️ Potential issue | 🟡 Minor

Minor: Verify MCP configuration variable naming conventions.

The release notes list simplified names (mcp_enabled, mcp_port, mcp_ssl_enabled) while the code uses ProxySQL's hyphenated convention (e.g., mcp-port, mcp_use_ssl). Ensure the user-facing variable names match what users will actually configure. This is tracked in the TODO at Line 239 but worth flagging explicitly.

#!/bin/bash
# Check actual MCP variable names used in the codebase
rg -n 'mcp.enabled|mcp.port|mcp.ssl|mcp_use_ssl|mcp_enabled' --type=cpp -C1 | head -60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/release-notes/DRAFT-3.0.6-4.0.6.md` around lines 88 - 93, Release notes
list simplified variables (mcp_enabled, mcp_port, mcp_socket_dir,
mcp_ssl_enabled) but the code uses ProxySQL-style names (e.g., mcp-port,
mcp_use_ssl); verify the actual config keys in the codebase (search for
mcp-port, mcp_use_ssl, mcp_use_ssl, mcp_socket_dir, mcp_enabled patterns) and
update the release notes to use the exact user-facing variable names or add a
clear mapping between the simplified names and the real keys (also reconcile the
TODO about MCP naming). Ensure the release note entries (the four mcp_* items
and the TODO mention) exactly match the config symbols used in the code.
doc/MCP/MCP_Stats_Implementation_Plan.md-740-756 (1)

740-756: ⚠️ Potential issue | 🟡 Minor

Cluster health logic has two edge-case bugs.

  1. Empty cluster (total == 0): healthy == total → 0 == 0 → returns "healthy", which is misleading for a cluster with no visible nodes.
  2. Single failing node (total == 1, healthy == 0): healthy >= total/2 → 0 >= 0 (integer division) → returns "degraded" instead of "unhealthy".
🐛 Proposed fix
 std::string calculate_cluster_health(const json& nodes) {
     int total = nodes.size();
+    if (total == 0) return "unhealthy";
     int healthy = 0;
     // ... (health counting) ...
     if (healthy == total) return "healthy";
-    if (healthy >= total / 2) return "degraded";
+    if (healthy > total / 2) return "degraded";   // strict majority required
     return "unhealthy";
 }
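The two edge cases are easy to pin down with the thresholds alone. A sketch that takes pre-computed counts (the health-counting loop and the real `nodes` JSON are elided):

```cpp
#include <string>

// Sketch of the corrected thresholds; inputs are pre-computed totals,
// not the real JSON payload.
static std::string cluster_health(int total, int healthy) {
    if (total == 0) return "unhealthy";          // no visible nodes
    if (healthy == total) return "healthy";
    if (healthy > total / 2) return "degraded";  // strict majority required
    return "unhealthy";
}
```

With the strict-majority comparison, a single failing node (total=1, healthy=0) correctly reports "unhealthy", since 0 > 0 is false.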
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/MCP/MCP_Stats_Implementation_Plan.md` around lines 740 - 756, The
calculate_cluster_health function mishandles two edge cases: when total == 0 it
incorrectly returns "healthy", and the degraded/unhealthy boundary uses integer
division allowing a single failing node to be classified as "degraded". Fix by
first handling an empty cluster (total == 0) and return an explicit state such
as "unknown" or "unhealthy" as per spec; then change the degraded threshold
check to use a proper fraction comparison (e.g., compare healthy * 2 >= total or
use double arithmetic like healthy >= ceil(total * 0.5)) instead of relying on
integer division; update calculate_cluster_health accordingly so the final
checks (healthy == total, degraded threshold, else "unhealthy") behave
correctly.
doc/MCP/MCP_Stats_Implementation_Plan.md-96-114 (1)

96-114: ⚠️ Potential issue | 🟡 Minor

error pointer is not freed on the success path.

When rc == SQLITE_OK, error is never checked or freed. In practice SQLite sets error only on failure, but for correctness and to match the pattern used in the error branch, add a guard:

🛡️ Defensive fix
     if (rc != SQLITE_OK) {
         std::string err_msg = error ? error : "Query execution failed";
         if (error) free(error);
         return create_error_response(err_msg);
     }
+    if (error) free(error);   // defensive: sqlite3 shouldn't set this on OK, but be safe

     json rows = resultset_to_json(resultset, cols);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/MCP/MCP_Stats_Implementation_Plan.md` around lines 96 - 114, The success
path in Stats_Tool_Handler::execute_query currently never frees the char* error
buffer (allocated by db->execute_statement) which is only freed on the error
branch; after calling db->execute_statement and before returning rows, check if
error is non-null and call free(error) to avoid leaking memory—i.e., add a guard
freeing error in the rc == SQLITE_OK path (near variables error and
resultset_to_json/delete resultset) so both success and failure paths
consistently free the error buffer.
test/tap/tap/utils.cpp-2428-2451 (1)

2428-2451: ⚠️ Potential issue | 🟡 Minor

background_noise_pids is accessed from multiple call sites without synchronisation.

spawn_noise pushes to the vector and stop_noise_tools (which may run via atexit on a different logical path) iterates and clears it, both without a mutex. If either function is ever reachable from more than one thread concurrently, there is a data race.

For current single-threaded TAP test usage this is harmless, but adding a mutex (or at minimum a [[clang::no_sanitize("thread")]] annotation) would make intent explicit and safe-guard future multi-threaded callers of spawn_noise.

🔒 Minimal fix
+static std::mutex noise_pids_mutex;
 static std::vector<pid_t> background_noise_pids;

 extern "C" void stop_noise_tools() {
     static std::atomic<bool> already_stopped{false};
     if (already_stopped.exchange(true)) {
         return;
     }
     stop_internal_noise_threads();
+    std::lock_guard<std::mutex> lock(noise_pids_mutex);
     for (pid_t pid : background_noise_pids) {
         ...

 // in spawn_noise(), after a successful fork():
 } else if (pid > 0) {
+    std::lock_guard<std::mutex> lock(noise_pids_mutex);
     background_noise_pids.push_back(pid);
     ...

 extern "C" int get_noise_tools_count() {
+    std::lock_guard<std::mutex> lock(noise_pids_mutex);
     return (int)background_noise_pids.size() + get_internal_noise_threads_count();
 }
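The guarded-vector pattern can be exercised stand-alone. A sketch with illustrative names (plain ints stand in for the pid_t values the real code stores from fork()):

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Stand-alone model of the mutex-guarded pid list; names are illustrative.
static std::mutex noise_pids_mutex;
static std::vector<int> background_noise_pids;

static void record_pid(int pid) {
    std::lock_guard<std::mutex> lock(noise_pids_mutex);
    background_noise_pids.push_back(pid);
}

static int pid_count() {
    std::lock_guard<std::mutex> lock(noise_pids_mutex);
    return (int)background_noise_pids.size();
}

// Push from several threads at once; with the mutex, no pushes are lost
// and the vector's internal state cannot be corrupted by a race.
static void record_from_threads(int n_threads, int per_thread) {
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; t++)
        workers.emplace_back([per_thread] {
            for (int i = 0; i < per_thread; i++) record_pid(i);
        });
    for (auto& w : workers) w.join();
}
```

Running the same exercise on the unguarded version under TSan (or with enough iterations) surfaces the race the finding describes.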
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/utils.cpp` around lines 2428 - 2451, background_noise_pids is
accessed concurrently by spawn_noise, stop_noise_tools and get_noise_tools_count
without synchronization, causing a potential data race; add a dedicated
std::mutex (e.g. background_noise_mutex) and protect all accesses/modifications
to background_noise_pids with std::lock_guard<std::mutex> in spawn_noise,
stop_noise_tools and get_noise_tools_count (lock before iterating, pushing,
clearing or reading size) to ensure thread-safety; alternatively, if you
intentionally restrict to single-threaded use, annotate the functions with
[[clang::no_sanitize("thread")]] and document the single-threaded assumption,
but the preferred fix is adding the mutex and locking around
background_noise_pids.
test/tap/tap/utils.cpp-2463-2496 (1)

2463-2496: ⚠️ Potential issue | 🟡 Minor

Exec failure is silently swallowed after stderr is redirected to /dev/null.

At line 2471 stderr is redirected to /dev/null. If ::execvp fails, the fprintf(stderr, ...) diagnostic at line 2489 is discarded. The parent only sees _exit(1) with no indication of what went wrong, making misconfigured tool paths hard to diagnose.

Consider writing the error to a pipe or the original stderr fd (saved before dup2) so the parent can surface the failure via diag().

💡 Suggested approach
+        // Save original stderr before redirecting
+        int orig_stderr = dup(STDERR_FILENO);
         int fd_null = open("/dev/null", O_RDWR);
         if (fd_null != -1) {
             dup2(fd_null, STDIN_FILENO);
             dup2(fd_null, STDOUT_FILENO);
             dup2(fd_null, STDERR_FILENO);
             if (fd_null > 2) close(fd_null);
         } else {
             _exit(1);
         }
         // ...
         ::execvp(tool_path.c_str(), c_args.data());
-        fprintf(stderr, "Failed to exec noise tool: %s\n", tool_path.c_str());
+        // stderr is /dev/null here; write to original stderr so the error is visible
+        if (orig_stderr >= 0) {
+            dprintf(orig_stderr, "Failed to exec noise tool '%s': %s\n",
+                    tool_path.c_str(), strerror(errno));
+        }
         _exit(1);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/utils.cpp` around lines 2463 - 2496, When forking the background
noise tool (the block that calls fork(), setpgid(), redirects stdio, builds
c_args and calls ::execvp(tool_path.c_str(), c_args.data())), save the original
stderr FD before dup2’ing it to /dev/null (or create a pipe from child->parent)
and on exec failure write the perror/message to that saved FD (or send error
back over the pipe) instead of writing to the redirected stderr; then ensure the
parent reads that FD/pipe and emits the error via diag() and records the failure
alongside background_noise_pids and tool_path so misconfigured tool paths are
visible.
doc/Stats_Tables.md-1475-1478 (1)

1475-1478: ⚠️ Potential issue | 🟡 Minor

Reset pattern explanation is inaccurate.

The described workflow (read → insert into _reset → reset counters) doesn't match the actual ProxySQL behavior. Querying a *_reset table atomically returns current stats and resets the counters in a single operation. Consider revising to match the actual semantics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/Stats_Tables.md` around lines 1475 - 1478, The "Reset pattern" text is
inaccurate: instead of describing a three-step read → insert into *_reset →
reset counters workflow, update the explanation to state that querying the
*_reset table (e.g., hostgroup_stats_reset, mysql_query_rules_reset) atomically
returns the current stats and resets the counters in a single operation; remove
the read/insert/reset sequence and clarify that a single SELECT on the *_reset
table both retrieves and clears metrics.
doc/Stats_Tables.md-91-91 (1)

91-91: ⚠️ Potential issue | 🟡 Minor

Clarify cnt_INFs description.

"Count of queries with unknown/undefined latency" is misleading. This is typically the overflow bucket for queries exceeding the highest named threshold (>10s). Consider: "Count of queries with latency >10s."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/Stats_Tables.md` at line 91, Update the documentation entry for the
metric named cnt_INFs: replace the misleading "Count of queries with
unknown/undefined latency" with a clear description such as "Count of queries
with latency >10s" (or "overflow bucket for queries exceeding the highest named
threshold (>10s)") so readers understand cnt_INFs is the overflow bucket for
very high-latency queries.
doc/Stats_Tables.md-741-741 (1)

741-741: ⚠️ Potential issue | 🟡 Minor

Clarify the column naming direction in the introductory note.

The parenthetical "(e.g., schemaname instead of db, sqlstate instead of errno)" is ambiguous about which is MySQL and which is PostgreSQL. Per the mapping table at line 1107, MySQL uses schemaname/errno while PostgreSQL uses database/sqlstate. Consider rewriting to something like:

"…with appropriate name changes (e.g., PostgreSQL uses database where MySQL uses schemaname, and sqlstate where MySQL uses errno)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/Stats_Tables.md` at line 741, The intro sentence in Stats_Tables.md is
ambiguous about which column names belong to PostgreSQL vs MySQL; update the
parenthetical to explicitly state the direction of the mapping so readers aren't
confused (for example, change "(e.g., `schemaname` instead of `db`, `sqlstate`
instead of `errno`)" to something like "PostgreSQL uses `database` where MySQL
uses `schemaname`, and PostgreSQL uses `sqlstate` where MySQL uses `errno`");
edit the sentence near the current note referencing `schemaname`, `db`,
`sqlstate`, and `errno` to follow that clearer phrasing.
lib/PgSQL_Thread.cpp-4965-5015 (1)

4965-5015: ⚠️ Potential issue | 🟡 Minor

Handle limit == 0 edge case in pagination logic.

The limit field defaults to std::numeric_limits<uint32_t>::max() to represent "no limit", but if explicitly set to 0, the pagination logic returns an empty result. Add a check to treat limit == 0 the same as disable_pagination for defensive robustness.

Note: This same issue exists in the identical apply_mysql_processlist_query_options() function in lib/MySQL_Thread.cpp and should be fixed there as well.

Suggested fix
-	if (opts.disable_pagination) {
+	if (opts.disable_pagination || opts.limit == 0) {
 		begin = 0;
 		end = filtered_rows.size();
 	} else {
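A sketch of the bounds computation with the guard in place (the options struct is illustrative; the real code slices filtered_rows):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <utility>

// Illustrative options struct mirroring the defaults described above.
struct PageOpts {
    bool disable_pagination = false;
    uint32_t offset = 0;
    uint32_t limit = std::numeric_limits<uint32_t>::max();  // "no limit"
};

// Returns the [begin, end) window over n_rows; limit == 0 is treated as
// "no limit" instead of producing an empty page.
static std::pair<size_t, size_t> page_bounds(size_t n_rows, const PageOpts& opts) {
    if (opts.disable_pagination || opts.limit == 0) {
        return {0, n_rows};
    }
    size_t begin = std::min<size_t>(opts.offset, n_rows);
    size_t end = std::min<size_t>(begin + opts.limit, n_rows);
    return {begin, end};
}
```

An offset past the end clamps to an empty window rather than reading out of range, matching the defensive intent of the finding.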
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/PgSQL_Thread.cpp` around lines 4965 - 5015, The pagination logic in
apply_pgsql_processlist_query_options currently treats an explicit opts.limit ==
0 as a normal limit (yielding an empty page); update the pagination branch so
that if opts.disable_pagination is true OR opts.limit == 0 you treat it as "no
pagination" (set begin = 0 and end = filtered_rows.size()); adjust the
requested_end/end calculation to only run when pagination is enabled and
opts.limit > 0. Apply the identical fix in apply_mysql_processlist_query_options
in lib/MySQL_Thread.cpp to mirror behavior.
test/tap/tap/noise_utils.cpp-340-344 (1)

340-344: ⚠️ Potential issue | 🟡 Minor

mysql_fetch_row(res) called without checking res for NULL.

mysql_store_result can return NULL even after a successful query (e.g., out-of-memory). Calling mysql_fetch_row(NULL) is undefined behaviour. mysql_free_result(NULL) is safe, but the fetch is not.

🛡️ Proposed fix
 MYSQL_RES* res = mysql_store_result(setup_conn);
-MYSQL_ROW row = mysql_fetch_row(res);
-long current_rows = row ? std::stol(row[0]) : 0;
-mysql_free_result(res);
+long current_rows = 0;
+if (res) {
+    MYSQL_ROW row = mysql_fetch_row(res);
+    if (row && row[0]) current_rows = std::stol(row[0]);
+    mysql_free_result(res);
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/noise_utils.cpp` around lines 340 - 344, The code calls
mysql_fetch_row(res) without verifying that MYSQL_RES* res returned from
mysql_store_result(setup_conn) is non-NULL; update the block around
mysql_query/count_sql/setup_conn so you first check if res != nullptr before
calling mysql_fetch_row(res), and only call mysql_free_result(res) when res is
non-NULL; if res is NULL, set current_rows to 0 (or handle the error) and avoid
invoking mysql_fetch_row to prevent undefined behavior.
test/tap/tap/noise_utils.cpp-300-306 (1)

300-306: ⚠️ Potential issue | 🟡 Minor

Missing NULL check on mysql_init result before mysql_real_connect.

If mysql_init(NULL) returns NULL (OOM), the subsequent mysql_real_connect(setup_conn, ...) at line 301 dereferences a null pointer. Every other out-of-loop mysql_init call in this file guards against NULL (e.g., line 478).

🛡️ Proposed fix
 MYSQL* setup_conn = mysql_init(NULL);
+if (!setup_conn) {
+    noise_log("[NOISE] MySQL Traffic v2: mysql_init failed\n");
+    register_noise_failure("MySQL Traffic v2 (Setup)");
+    return;
+}
 if (!mysql_real_connect(setup_conn, cl.host, my_user, my_pass, NULL, cl.port, NULL, 0)) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/noise_utils.cpp` around lines 300 - 306, The code calls
mysql_real_connect using setup_conn without checking if mysql_init returned
NULL; add a NULL check immediately after MYSQL* setup_conn = mysql_init(NULL)
and handle the OOM/alloc-failure by logging a clear message (use noise_log with
a generic "mysql_init returned NULL" message rather than mysql_error since
setup_conn is NULL), call register_noise_failure("MySQL Traffic v2 (Setup)") and
return, only calling mysql_real_connect, mysql_error and mysql_close if
setup_conn is non-NULL.

Comment on lines +576 to +581
```cpp
for (auto& query : queries) {
    int count = query["count_star"].get<int>();
    int sum_time = query["sum_time_us"].get<int>();
    query["avg_time_us"] = count > 0 ? sum_time / count : 0;
}
```

⚠️ Potential issue | 🟠 Major

Integer overflow: sum_time_us must be int64_t, not int.

sum_time_us is the sum of microseconds across every execution of a digest. At modest load (e.g., 35 minutes of cumulative query time), this exceeds INT_MAX (~2.1 billion µs). Using int silently truncates the value, producing a wildly incorrect avg_time_us.

🐛 Fix
-    int count = query["count_star"].get<int>();
-    int sum_time = query["sum_time_us"].get<int>();
-    query["avg_time_us"] = count > 0 ? sum_time / count : 0;
+    int64_t count    = query["count_star"].get<int64_t>();
+    int64_t sum_time = query["sum_time_us"].get<int64_t>();
+    query["avg_time_us"] = count > 0 ? sum_time / count : INT64_C(0);
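To see the failure mode concretely, a standalone sketch (the 40-minute figure is illustrative, and two's-complement wrapping on the narrowing cast is assumed, as on GCC/Clang):

```cpp
#include <cassert>
#include <cstdint>

// 40 minutes of cumulative query time in microseconds: 2,400,000,000 us,
// which is above INT_MAX (2,147,483,647).
constexpr int64_t sum_time_us = 40LL * 60 * 1000 * 1000;
constexpr int64_t count = 1000000;

// Correct: 64-bit arithmetic throughout.
constexpr int64_t avg_ok = sum_time_us / count;

// Broken: truncating to 32 bits first (what .get<int>() amounts to)
// turns the sum into a negative value, and the average into garbage.
constexpr int32_t truncated = static_cast<int32_t>(sum_time_us);
constexpr int64_t avg_bad = truncated / count;
```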
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/MCP/MCP_Stats_Implementation_Plan.md` around lines 576 - 581, The loop
computing avg_time_us is using 32-bit ints and can overflow; change the
retrieval and variables to 64-bit integers: replace
query["sum_time_us"].get<int>() with get<int64_t>() and store into an int64_t
sum_time (and make count and the assigned query["avg_time_us"] use int64_t or
compatible 64-bit type) so the division sum_time / count is performed in 64-bit
to avoid truncation/overflow in the avg_time_us computation inside the for
(auto& query : queries) block.

Comment on lines +45 to +47
**New tables:**
- `ffto_metrics` - Traffic observation statistics
- `ffto_sessions` - Active session tracking

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for FFTO-related table names and CREATE TABLE statements
echo "=== Searching for ffto_metrics and ffto_sessions references ==="
rg -i 'ffto_metrics|ffto_sessions' --type=cpp --type=c --type=sql -C2

echo ""
echo "=== Searching for CREATE TABLE statements with ffto ==="
rg -i 'CREATE.*TABLE.*ffto|ffto.*CREATE.*TABLE' --type=cpp --type=c --type=sql -C2

echo ""
echo "=== Checking doc/ffto_design.md for table references ==="
if [ -f "doc/ffto_design.md" ]; then
  cat -n doc/ffto_design.md | sed -n '75,85p'
else
  echo "doc/ffto_design.md not found"
fi

echo ""
echo "=== Searching for stats_mysql_query_digest and stats_pgsql_query_digest ==="
rg -i 'stats_mysql_query_digest|stats_pgsql_query_digest' --type=cpp --type=c --type=sql -C1 | head -30

Repository: sysown/proxysql

Length of output: 4937


Correct FFTO table names in release notes.

The release notes list ffto_metrics and ffto_sessions as new tables, but these do not exist in the codebase. FFTO metrics are accessed through the existing stats_mysql_query_digest and stats_pgsql_query_digest tables, as confirmed in doc/ffto_design.md (lines 79-82). Update lines 45-47 to reflect the correct tables.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/release-notes/DRAFT-3.0.6-4.0.6.md` around lines 45 - 47, Replace the
incorrect FFTO table names `ffto_metrics` and `ffto_sessions` in the release
notes with the actual tables used by FFTO: `stats_mysql_query_digest` and
`stats_pgsql_query_digest`; update the descriptive text so it states that FFTO
metrics are served via those existing digest tables (and remove or correct any
claim that new `ffto_*` tables were added).

Comment on lines +31 to +34
```bash
# From the ProxySQL root directory
./scripts/release-tools/generate-changelog.sh 7e9e00997d7d9fa4811c86c3a3bec9c886386e1f HEAD
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

fd "generate-changelog.sh" scripts/

Repository: sysown/proxysql

Length of output: 41


🏁 Script executed:

# Check if scripts directory exists and list its contents
ls -la scripts/ 2>/dev/null || echo "scripts/ directory not found"

# Search the entire repository for the script
rg "generate-changelog\.sh" --files

Repository: sysown/proxysql

Length of output: 1591


🏁 Script executed:

ls -la scripts/release-tools/

Repository: sysown/proxysql

Length of output: 1109


Update the changelog generation command to reference the correct script.

The README references ./scripts/release-tools/generate-changelog.sh, but the actual script is scripts/release-tools/generate_changelog.py. The command as written will fail. Update the example to use the correct Python script filename.
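A corrected example, assuming the Python script takes the same start/end revision arguments as the shell command it replaces:

```bash
# From the ProxySQL root directory
python3 scripts/release-tools/generate_changelog.py 7e9e00997d7d9fa4811c86c3a3bec9c886386e1f HEAD
```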

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/release-notes/README.md` around lines 31 - 34, Update the README example
so it references the actual script name generate_changelog.py instead of
generate-changelog.sh and invoke it with the Python interpreter (e.g., python3)
from the ProxySQL root; adjust the example line in README.md that currently
mentions ./scripts/release-tools/generate-changelog.sh to call
scripts/release-tools/generate_changelog.py so the command runs successfully.

Comment on lines +305 to +311
query_digest_topk_result_t get_query_digests_topk(
const query_digest_filter_opts_t& filters,
query_digest_sort_by_t sort_by,
uint32_t limit,
uint32_t offset,
uint32_t max_window
);

⚠️ Potential issue | 🟠 Major

uint32_t limit + offset can silently overflow, producing an empty Top-K result.

The implementation in lib/Query_Processor.cpp (line ~1179 per the provided snippet) computes:

window_size = min(max_window, limit + offset);

With limit's documented default of std::numeric_limits<uint32_t>::max() (include/proxysql_admin.h:283) and any non-zero offset, limit + offset wraps to a value ≤ offset - 1, so window_size collapses to a tiny or zero value and the call silently returns an empty/truncated result set instead of all rows.

Consider widening the arithmetic to uint64_t at the call site, or clamping before the add:

🐛 Suggested fix in lib/Query_Processor.cpp
-  uint32_t window_size = std::min(max_window, limit + offset);
+  uint64_t lo = static_cast<uint64_t>(limit) + static_cast<uint64_t>(offset);
+  uint32_t window_size = static_cast<uint32_t>(
+      std::min(static_cast<uint64_t>(max_window), lo));

Alternatively, promote the parameter types to uint64_t in this declaration to make the API self-documenting about its range.
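The wrap is easy to reproduce in isolation:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>

constexpr uint32_t limit      = std::numeric_limits<uint32_t>::max();  // "no limit"
constexpr uint32_t offset     = 5;
constexpr uint32_t max_window = 1000;

// 32-bit addition wraps: limit + offset == 4, so the window collapses.
constexpr uint32_t wrapped = std::min(max_window, limit + offset);

// Widening first keeps the intended value (clamped by max_window).
constexpr uint64_t widened =
    std::min<uint64_t>(max_window, static_cast<uint64_t>(limit) + offset);
```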

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@include/query_processor.h` around lines 305 - 311, The addition of limit +
offset in get_query_digests_topk can overflow uint32_t and produce a too-small
window_size; update the call-site calculation in the implementation (where
window_size = min(max_window, limit + offset)) to perform the addition in a
wider integer (e.g., cast limit and offset to uint64_t, compute sum, clamp to
max_window and to UINT32_MAX if needed) or change the API by promoting the
parameters to uint64_t in the declaration of get_query_digests_topk and adjust
the implementation accordingly so no silent 32-bit wrap occurs; ensure
comparisons with max_window use the same widened type and then safely
cast/truncate when passing to downstream functions.

Comment on lines +212 to +222
} else if (m_state == READING_COLUMNS) {
if (first_byte == 0xFE && len < 9) { // EOF after columns
m_state = READING_ROWS;
} else if (first_byte == 0xFE && len >= 9 && deprecate_eof) {
m_state = READING_ROWS;
} else if (first_byte == 0xFF) { // ERR while reading column metadata
unsigned long long duration = monotonic_time() - m_query_start_time;
report_query_stats(m_current_query, duration);
m_state = IDLE;
clear_active_query();
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check how CLIENT_DEPRECATE_EOF is used across the codebase
rg -n 'CLIENT_DEPRECATE_EOF' --type=cpp -C2

Repository: sysown/proxysql

Length of output: 36569


🏁 Script executed:

#!/bin/bash
# Also search for deprecate_eof variable usage to understand how it flows
rg -n 'deprecate_eof' --type=cpp -C2 | head -100

Repository: sysown/proxysql

Length of output: 10940


🏁 Script executed:

#!/bin/bash
# Find and examine the MySQLFFTO.cpp file to understand the full state machine
fd -e cpp -n MySQLFFTO | head -5

Repository: sysown/proxysql

Length of output: 289


🏁 Script executed:

fd MySQLFFTO.cpp

Repository: sysown/proxysql

Length of output: 76


🏁 Script executed:

# Find MySQLFFTO.cpp and read it
find . -name "MySQLFFTO.cpp" -type f

Repository: sysown/proxysql

Length of output: 78


🏁 Script executed:

# Read the MySQLFFTO.cpp file to understand the state machine
cat -n lib/MySQLFFTO.cpp | head -250

Repository: sysown/proxysql

Length of output: 12200


🏁 Script executed:

# Read more of MySQLFFTO.cpp to understand the complete state machine
cat -n lib/MySQLFFTO.cpp | sed -n '174,243p'

Repository: sysown/proxysql

Length of output: 3743


🏁 Script executed:

# Search for how deprecate_eof affects result set handling in test files
rg -n "deprecate_eof" lib/MySQLFFTO.cpp -A 5 -B 5

Repository: sysown/proxysql

Length of output: 1900


🏁 Script executed:

# Look at how the FFTO state machine is supposed to handle result sets
rg -n "READING_ROWS\|READING_COLUMNS" lib/MySQLFFTO.cpp -B 2 -A 2

Repository: sysown/proxysql

Length of output: 41


State machine loses query stats with CLIENT_DEPRECATE_EOF enabled.

The code correctly detects deprecate-EOF mode (line 180) and transitions states at lines 215-216. However, there's a critical flaw: when the final OK packet (0xFE, len >= 9) arrives in READING_COLUMNS, it transitions to READING_ROWS (line 216), but in deprecate-EOF mode this OK packet is the last packet for that query. No subsequent packet arrives to trigger the stats reporting at lines 229-233. The next client command then overwrites m_state to AWAITING_RESPONSE (line 148), silently discarding the completed query without reporting stats.

Query stats should be reported when the final OK packet is processed in READING_COLUMNS state with deprecate-EOF active. The proper fix requires recognizing the final OK packet as the query completion point and reporting stats immediately at line 216, rather than deferring until another packet arrives in READING_ROWS.
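A self-contained sketch of that completion handling; the enum and `Observer` are simplified stand-ins for the FFTO state machine, not the actual patch:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

enum State { IDLE, READING_COLUMNS, READING_ROWS };

// Simplified stand-in for the FFTO reader: only the READING_COLUMNS
// transitions discussed above are modeled.
struct Observer {
    State state = READING_COLUMNS;
    bool stats_reported = false;

    void on_packet(uint8_t first_byte, size_t len, bool deprecate_eof) {
        if (state != READING_COLUMNS) return;
        if (first_byte == 0xFE && len < 9) {
            state = READING_ROWS;          // classic EOF: row packets follow
        } else if (first_byte == 0xFE && len >= 9 && deprecate_eof) {
            stats_reported = true;         // final OK: report stats immediately
            state = IDLE;
        } else if (first_byte == 0xFF) {
            stats_reported = true;         // ERR: query over, report stats
            state = IDLE;
        }
    }
};
```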

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/MySQLFFTO.cpp` around lines 212 - 222, When CLIENT_DEPRECATE_EOF
(deprecate_eof) is enabled the OK packet (first_byte == 0xFE && len >= 9)
received while in READING_COLUMNS is actually the final packet for the query,
but the current logic merely transitions m_state to READING_ROWS and never
reports query stats; modify the branch handling first_byte == 0xFE && len >= 9
in the READING_COLUMNS block to treat this packet as query completion: call
report_query_stats(m_current_query, duration) using monotonic_time() -
m_query_start_time, set m_state = IDLE, and call clear_active_query() (same
sequence used for ERR handling) instead of switching to READING_ROWS so stats
aren’t lost when the client uses CLIENT_DEPRECATE_EOF.

Comment on lines 430 to 431
update_string_var_if_set(&global.gr_bootstrap_uri, opt, "--bootstrap");
global.gr_bootstrap_mode = opt->isSet("--bootstrap");

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg -nP --type=cpp "gr_bootstrap_uri" -A3 -B1

Repository: sysown/proxysql

Length of output: 1950


🏁 Script executed:

rg -nP --type=cpp "update_string_var_if_set" -A10 -B2

Repository: sysown/proxysql

Length of output: 4233


🏁 Script executed:

rg -nP --type=cpp "gr_bootstrap_uri" --type=h | head -20

Repository: sysown/proxysql

Length of output: 552


🏁 Script executed:

rg -nP --type=cpp "parse_conn_uri" -B5 -A15 | head -80

Repository: sysown/proxysql

Length of output: 2522


🏁 Script executed:

rg -nP --type=cpp "parse_conn_uri\s*\(" -A30 src/main.cpp | head -100

Repository: sysown/proxysql

Length of output: 2498


🏁 Script executed:

sed -n '2700,2750p' src/main.cpp

Repository: sysown/proxysql

Length of output: 2056


🏁 Script executed:

rg -nP "bootstrap" src/main.cpp | grep -i "help\|socket" | head -20

Repository: sysown/proxysql

Length of output: 41


🏁 Script executed:

rg -nP "CONN_URI_REGEX" -B2 -A5

Repository: sysown/proxysql

Length of output: 1758


🏁 Script executed:

rg -nP "bootstrap" src/main.cpp | grep -i "help\|default\|socket" | head -30

Repository: sysown/proxysql

Length of output: 175


🏁 Script executed:

rg -nP "\-\-bootstrap" -B5 -A10 | head -50

Repository: sysown/proxysql

Length of output: 3891


🏁 Script executed:

rg -nP "help.*bootstrap|bootstrap.*help|default.*socket" -i | head -30

Repository: sysown/proxysql

Length of output: 202


🏁 Script executed:

rg -nP "default.*socket|socket.*default" -i -B3 -A3

Repository: sysown/proxysql

Length of output: 3330


🏁 Script executed:

rg -nP "gr_bootstrap_uri.*nullptr|gr_bootstrap_uri.*empty" -B2 -A2

Repository: sysown/proxysql

Length of output: 678


🏁 Script executed:

sed -n '1999,2070p' src/main.cpp

Repository: sysown/proxysql

Length of output: 2281


The documented "default socket" fallback for --bootstrap without URI is not implemented; the code instead aborts with a parse error.

When --bootstrap is provided without an argument, gr_bootstrap_uri does transition from nullptr to a pointer to an empty string (via strdup("")). However, the actual consumer behavior differs from the review comment's prediction: the code passes the empty string to parse_conn_uri(), which attempts regex matching against CONN_URI_REGEX. Since an empty string cannot match the regex pattern, parse_conn_uri() returns EXIT_FAILURE, causing the program to exit with an error message at line 2721.

The help text promises "if no URI is provided, a connection to the default local socket will be attempted" (ProxySQL_GloVars.cpp:287), but this fallback behavior is not implemented. The code should provide a default socket URI (e.g., "mysql://localhost" or equivalent) when --bootstrap is set without an argument, rather than failing with a parse error.
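One way to implement the fallback, sketched as a standalone helper ("mysql://localhost" is a placeholder; the real default socket URI is for the maintainers to choose):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// If --bootstrap was given with no argument, gr_bootstrap_uri ends up as "".
// Substitute a default URI before parse_conn_uri() sees it, instead of
// letting the empty string fail the CONN_URI_REGEX match.
char* bootstrap_uri_or_default(char* uri) {
    if (uri != nullptr && uri[0] == '\0') {
        free(uri);
        return strdup("mysql://localhost");  // placeholder default
    }
    return uri;
}
```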

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/ProxySQL_GloVars.cpp` around lines 430 - 431, The current handling sets
global.gr_bootstrap_uri to an empty string when --bootstrap is passed without an
argument, which then fails in parse_conn_uri() against CONN_URI_REGEX; change
the logic after update_string_var_if_set so that if opt->isSet("--bootstrap")
and global.gr_bootstrap_uri is non-null but empty (""), replace it with a
sensible default URI (e.g., "mysql://localhost" or your project's default socket
URI) before calling parse_conn_uri(), and ensure global.gr_bootstrap_mode
remains true; reference update_string_var_if_set, global.gr_bootstrap_uri,
global.gr_bootstrap_mode, parse_conn_uri, and CONN_URI_REGEX when locating where
to insert this substitution.

Comment on lines +490 to +496
// Port defaults to 6070
int port = get_opt_int(opt, "port", 6070);
std::string endpoint = "http://localhost:" + std::to_string(port) + "/metrics";
std::string auth = std::string(cl.admin_username) + ":" + std::string(cl.admin_password);
if (opt.find("endpoint") != opt.end()) {
endpoint = opt.at("endpoint");
}

⚠️ Potential issue | 🟠 Major

Hardcoded localhost ignores cl.host for the REST endpoint.

The default endpoint uses localhost instead of cl.host. Every other noise function in this file connects via cl.host; this one silently diverges, causing the poller to target the wrong address when ProxySQL runs on a non-loopback host.

🐛 Proposed fix
-std::string endpoint = "http://localhost:" + std::to_string(port) + "/metrics";
+std::string endpoint = "http://" + std::string(cl.host) + ":" + std::to_string(port) + "/metrics";
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Port defaults to 6070
int port = get_opt_int(opt, "port", 6070);
std::string endpoint = "http://localhost:" + std::to_string(port) + "/metrics";
std::string auth = std::string(cl.admin_username) + ":" + std::string(cl.admin_password);
if (opt.find("endpoint") != opt.end()) {
endpoint = opt.at("endpoint");
}
// Port defaults to 6070
int port = get_opt_int(opt, "port", 6070);
std::string endpoint = "http://" + std::string(cl.host) + ":" + std::to_string(port) + "/metrics";
std::string auth = std::string(cl.admin_username) + ":" + std::string(cl.admin_password);
if (opt.find("endpoint") != opt.end()) {
endpoint = opt.at("endpoint");
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/noise_utils.cpp` around lines 490 - 496, The default REST
endpoint construction in noise_utils.cpp currently hardcodes "localhost" which
ignores the configured ProxySQL host (cl.host); update the endpoint
initialization to use cl.host (e.g., build endpoint as "http://" +
std::string(cl.host) + ":" + std::to_string(port) + "/metrics") while preserving
the existing get_opt_int("port", 6070) usage and the opt override logic
(opt.at("endpoint")), and leave auth construction
(std::string(cl.admin_username) + ":" + std::string(cl.admin_password))
unchanged.

Comment on lines +367 to +381
int noise_count = get_noise_tools_count();
stop_noise_tools();

{
std::lock_guard<std::mutex> lock(noise_failure_mutex);
if (!noise_failures.empty()) {
for (const auto& failed_routine : noise_failures) {
diag("Noise failure detected in: %s", failed_routine.c_str());
}
return EXIT_FAILURE;
}
}

// Add noise tools to the count of executed tests if they didn't fail
__sync_add_and_fetch(&g_test.last, noise_count);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the actual code in test/tap/tap/tap.cpp around lines 367-389
cat -n test/tap/tap/tap.cpp | sed -n '360,395p'

Repository: sysown/proxysql

Length of output: 1203


🏁 Script executed:

# Find all test files using spawn_noise
echo "=== Test files using spawn_noise ==="
rg -l 'spawn_noise' --type=cpp | head -20

Repository: sysown/proxysql

Length of output: 139


🏁 Script executed:

# Search more broadly for where spawn_noise is actually called
echo "=== Files calling spawn_noise ==="
rg 'spawn_noise\s*\(' --type=cpp | head -30

Repository: sysown/proxysql

Length of output: 355


🏁 Script executed:

# Let's search for actual test files that use plan() to understand the testing pattern
echo "=== Test files that call plan() ==="
rg 'plan\s*\(' --type=cpp test/ | head -30

echo ""
echo "=== Check for NO_PLAN usage ==="
rg 'NO_PLAN' --type=cpp test/ | head -20

Repository: sysown/proxysql

Length of output: 2382


🏁 Script executed:

# Check if any test files that call plan() also use spawn_noise
echo "=== Test files with both plan() and spawn_noise ==="
for f in $(rg -l 'plan\s*\(' --type=cpp test/tap/tests/ 2>/dev/null); do
  if rg 'spawn_noise' "$f" > /dev/null 2>&1; then
    echo "Found in: $f"
  fi
done

echo ""
echo "=== Search broader for spawn_noise invocations ==="
rg 'spawn_noise' --type=cpp test/tap/tests/ -A 2 -B 2

Repository: sysown/proxysql

Length of output: 162


🏁 Script executed:

# Let's understand what spawn_noise does and if it's actually used anywhere
echo "=== spawn_noise function definition ==="
rg -A 20 'void spawn_noise' test/tap/tap/utils.cpp

echo ""
echo "=== Search for any calls to spawn_noise in entire repo ==="
rg 'spawn_noise\s*\(' --type=cpp -g '!utils' --type-list | head -20

Repository: sysown/proxysql

Length of output: 1088


🏁 Script executed:

# Search for actual calls to spawn_noise
echo "=== Any actual calls to spawn_noise (excluding definitions) ==="
rg 'spawn_noise\s*\(' --type=cpp -v 'utils\.(cpp|h)' | head -30

echo ""
echo "=== Get more context on the utils functions around spawn_noise ==="
rg -A 5 'get_noise_tools_count\|stop_noise_tools' test/tap/tap/utils.cpp | head -40

Repository: sysown/proxysql

Length of output: 262


🏁 Script executed:

# Search for the noise-related functions
echo "=== Search for get_noise_tools_count ==="
rg 'get_noise_tools_count' --type=cpp -B 3 -A 5

echo ""
echo "=== Search for stop_noise_tools ==="
rg 'stop_noise_tools' --type=cpp -B 3 -A 5

Repository: sysown/proxysql

Length of output: 4401


🏁 Script executed:

# Let's examine the test_noise_injection-t.cpp file
echo "=== test_noise_injection-t.cpp ==="
cat -n test/tap/tests/test_noise_injection-t.cpp | head -100

Repository: sysown/proxysql

Length of output: 2087


🏁 Script executed:

# Let's check if there's any mechanism to enable noise globally
echo "=== Check for global noise enablement ==="
rg 'use_noise' --type=cpp test/tap/ | grep -v 'test_noise_injection'

echo ""
echo "=== Check the CommandLine class to understand noise flag ==="
rg -A 10 'class CommandLine' test/tap/tap/command_line.h | head -30

Repository: sysown/proxysql

Length of output: 3149


Document the implicit contract that tests spawning noise tools must account for them in their plan() count.

The concern about line 381 is valid: adding noise_count to g_test.last will cause the test to fail if plan(N) was called with a fixed count, since the final check at line 389 (g_test.plan != g_test.last) will fail because g_test.last becomes N + noise_count.

However, this is not a bug: it's an architectural contract that existing noise-aware tests already follow. Both test_noise_injection-t.cpp (which calls plan(get_noise_tools_count()) to determine the plan count first) and other noise-using tests (which account for noise conditionally in their plan count, e.g. plan(1 + (cl.use_noise ? 3 : 0))) demonstrate this pattern.

The real issue is that this implicit contract is undocumented. Add a clarifying comment at line 381 explaining that any test spawning noise tools must either:

  1. Compute plan() count using get_noise_tools_count() before spawning, or
  2. Explicitly account for use_noise in their plan count, or
  3. Use NO_PLAN and let the framework reconcile the count
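As a sketch, pattern 2 looks like this in a test's setup; the stubs below stand in for the real harness symbols `cl.use_noise` and `get_noise_tools_count()`:

```cpp
#include <cassert>

// Hypothetical stubs for the TAP harness symbols referenced above.
static bool use_noise = true;
static int get_noise_tools_count() { return 3; }

// Pattern 2: fold the noise-tool count into plan() up front, so that after
// the framework adds noise_count to g_test.last, the final
// g_test.plan == g_test.last check still holds.
int compute_plan(int base_tests) {
    return base_tests + (use_noise ? get_noise_tools_count() : 0);
}
```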
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tap/tap.cpp` around lines 367 - 381, Add a clarifying comment
immediately before the line that increments g_test.last with noise_count (near
the call to __sync_add_and_fetch(&g_test.last, noise_count)) documenting the
implicit contract: tests that spawn noise tools must include those tools in
their plan() count by either calling plan(get_noise_tools_count()) before
spawning, accounting for use_noise in their plan expression (e.g., plan(1 +
(cl.use_noise ? 3 : 0))), or using NO_PLAN so the framework can reconcile
counts; also note that stop_noise_tools() and checking noise_failures (protected
by noise_failure_mutex) occurs before this adjustment so failures will cause an
early exit instead of adding to the plan.

Comment on lines +35 to +41
MYSQL* admin = init_mysql_conn(cl.host, cl.admin_port, cl.admin_username, cl.admin_password);
if (!admin) {
return exit_status();
}

MYSQL* proxy = init_mysql_conn(cl.host, cl.port, cl.username, cl.password);
if (!proxy) {

⚠️ Potential issue | 🟠 Major

Use the admin host for the admin connection.
cl.host can point to the proxy frontend, while the admin interface may be bound to cl.admin_host.

🔧 Suggested fix
-	MYSQL* admin = init_mysql_conn(cl.host, cl.admin_port, cl.admin_username, cl.admin_password);
+	MYSQL* admin = init_mysql_conn(cl.admin_host, cl.admin_port, cl.admin_username, cl.admin_password);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
MYSQL* admin = init_mysql_conn(cl.host, cl.admin_port, cl.admin_username, cl.admin_password);
if (!admin) {
return exit_status();
}
MYSQL* proxy = init_mysql_conn(cl.host, cl.port, cl.username, cl.password);
if (!proxy) {
MYSQL* admin = init_mysql_conn(cl.admin_host, cl.admin_port, cl.admin_username, cl.admin_password);
if (!admin) {
return exit_status();
}
MYSQL* proxy = init_mysql_conn(cl.host, cl.port, cl.username, cl.password);
if (!proxy) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tests/issue5384-t.cpp` around lines 35 - 41, The admin connection is
using cl.host which may point to the proxy frontend; change the init_mysql_conn
call that creates the admin MYSQL* (symbol: admin) to use cl.admin_host (and
keep cl.admin_port, cl.admin_username, cl.admin_password) instead of cl.host so
the admin interface binds to the correct host; leave the proxy connection
(symbol: proxy) using cl.host/cl.port as-is.

Comment on lines +128 to +195
// Test 1: Verify runtime_global_variables has at least 10 MCP variables after LOAD
diag("Test 1: Verifying runtime_global_variables is populated with MCP variables");

// First, ensure we have a clean state and load to runtime
run_q(admin, "SET mcp-enabled=false");
run_q(admin, "LOAD MCP VARIABLES TO RUNTIME");

int count = count_mcp_runtime_variables(admin);
ok(count >= 10, "runtime_global_variables contains at least 10 MCP variables (got %d)", count);

// Test 2: Change several variables and verify they update in runtime_global_variables
diag("Test 2: Changing variables and verifying runtime_global_variables updates");

// Record original values
string orig_timeout = get_runtime_variable(admin, "mcp-timeout_ms");
string orig_queries_max = get_runtime_variable(admin, "mcp-stats_show_queries_max_rows");
string orig_processlist_max = get_runtime_variable(admin, "mcp-stats_show_processlist_max_rows");

diag("Original values: timeout_ms=%s, queries_max=%s, processlist_max=%s",
orig_timeout.c_str(), orig_queries_max.c_str(), orig_processlist_max.c_str());

// Change the variables
run_q(admin, "SET mcp-timeout_ms=45000");
run_q(admin, "SET mcp-stats_show_queries_max_rows=500");
run_q(admin, "SET mcp-stats_show_processlist_max_rows=300");

// Load to runtime
run_q(admin, "LOAD MCP VARIABLES TO RUNTIME");

// Verify the values changed in runtime_global_variables
string new_timeout = get_runtime_variable(admin, "mcp-timeout_ms");
string new_queries_max = get_runtime_variable(admin, "mcp-stats_show_queries_max_rows");
string new_processlist_max = get_runtime_variable(admin, "mcp-stats_show_processlist_max_rows");

diag("New values: timeout_ms=%s, queries_max=%s, processlist_max=%s",
new_timeout.c_str(), new_queries_max.c_str(), new_processlist_max.c_str());

ok(new_timeout == "45000", "mcp-timeout_ms updated to 45000 in runtime_global_variables (got '%s')", new_timeout.c_str());
ok(new_queries_max == "500", "mcp-stats_show_queries_max_rows updated to 500 in runtime_global_variables (got '%s')", new_queries_max.c_str());
ok(new_processlist_max == "300", "mcp-stats_show_processlist_max_rows updated to 300 in runtime_global_variables (got '%s')", new_processlist_max.c_str());

// Test 3: Verify runtime values match global_variables
diag("Test 3: Verifying runtime_global_variables matches global_variables");

// Get values from global_variables
string global_timeout = get_mcp_variable(admin, "timeout_ms");
string global_queries = get_mcp_variable(admin, "stats_show_queries_max_rows");
string global_processlist = get_mcp_variable(admin, "stats_show_processlist_max_rows");

ok(new_timeout == global_timeout, "runtime timeout_ms (%s) matches global_variables (%s)",
new_timeout.c_str(), global_timeout.c_str());
ok(new_queries_max == global_queries, "runtime queries_max (%s) matches global_variables (%s)",
new_queries_max.c_str(), global_queries.c_str());
ok(new_processlist_max == global_processlist, "runtime processlist_max (%s) matches global_variables (%s)",
new_processlist_max.c_str(), global_processlist.c_str());

// Cleanup: restore original values
diag("Cleanup: Restoring original values");
if (!orig_timeout.empty()) {
run_q(admin, ("SET mcp-timeout_ms=" + orig_timeout).c_str());
}
if (!orig_queries_max.empty()) {
run_q(admin, ("SET mcp-stats_show_queries_max_rows=" + orig_queries_max).c_str());
}
if (!orig_processlist_max.empty()) {
run_q(admin, ("SET mcp-stats_show_processlist_max_rows=" + orig_processlist_max).c_str());
}
run_q(admin, "LOAD MCP VARIABLES TO RUNTIME");

⚠️ Potential issue | 🟠 Major

Restore mcp-enabled after the test.
The test sets mcp-enabled=false but never restores the prior value, which can leak state into later TAP runs.

🔧 Suggested fix
@@
-// Test 1: Verify runtime_global_variables has at least 10 MCP variables after LOAD
+// Test 1: Verify runtime_global_variables has at least 10 MCP variables after LOAD
+string orig_enabled = get_mcp_variable(admin, "enabled");
@@
-// Cleanup: restore original values
+// Cleanup: restore original values
+if (!orig_enabled.empty()) {
+	run_q(admin, ("SET mcp-enabled=" + orig_enabled).c_str());
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tap/tests/mcp_runtime_variables-t.cpp` around lines 128 - 195, The test
disables MCP with run_q(admin, "SET mcp-enabled=false") but never restores the
original mcp-enabled value; update the test to read and store the original
mcp-enabled value (using get_runtime_variable or get_mcp_variable as
appropriate) before changing it, and restore it in the cleanup section (call
run_q(admin, "SET mcp-enabled=<original_value>") and then run_q(admin, "LOAD MCP
VARIABLES TO RUNTIME")) so the global state is returned to its prior setting;
ensure you reference the stored variable (e.g., orig_mcp_enabled) and perform
the restore alongside the existing restores for timeout/queries/processlist.
