diff --git a/doc/proxysql_cluster/pgsql_cluster_sync_pr5297_status.md b/doc/proxysql_cluster/pgsql_cluster_sync_pr5297_status.md new file mode 100644 index 0000000000..4c4a8f3dcf --- /dev/null +++ b/doc/proxysql_cluster/pgsql_cluster_sync_pr5297_status.md @@ -0,0 +1,520 @@ +# PostgreSQL Cluster Sync Parity Branch Status + +## Purpose + +This document explains what the `fix/postgresql-cluster-sync_2` branch and PR +`#5297` are trying to achieve, what was actually implemented, and why the branch +should still be treated as work in progress. + +The goal of the branch is PostgreSQL cluster-sync parity with the existing MySQL +cluster-sync framework. In practical terms, the branch extends ProxySQL cluster +monitoring, checksum handling, peer selection, pull/apply logic, admin +variables, and TAP coverage so PostgreSQL configuration can be synchronized +between cluster nodes with the same overall model already used for MySQL. + +## Current Status + +As of 2026-03-18: + +- PR `#5297` is still open against `v3.0`. +- GitHub reports `mergeStateStatus: DIRTY`. +- GitHub reports `reviewDecision: REVIEW_REQUIRED`. +- The latest pushed branch head on GitHub is commit `5c7e616f9` + (`PR5297: resolve remaining actionable review findings for PGSQL checksum sync and TAP assertions`). +- The local branch also contains additional unpushed follow-up commits beyond + `5c7e616f9`, including: + - `bf9fc81f4` (`test: strengthen pgsql cluster sync TAP follow-up`) + - `d702e7d3e` (`doc: summarize pgsql cluster sync branch status`) + +This means the branch is functionally substantial, but it is not yet in a clean +and finished state for merge. + +## Non-CI TODO + +This TODO intentionally excludes the failing CI jobs, which are being worked on +separately. 
+ +- [x] resolve the immediate merge conflict against `v3.0` locally and verify + mergeability by simulation +- [x] harden the PostgreSQL TAP follow-up so optional replica validation covers + `pgsql_servers_v2`, `pgsql_users`, and `pgsql_query_rules` +- [x] review the branch changes against real two-node behavior instead of only + checksum presence and table accessibility +- [x] write a maintainer-facing status document for PR `#5297` +- [x] add a module-by-module implementation summary for the PostgreSQL sync + paths +- [x] add a non-CI merge checklist to make branch handoff easier +- [x] run end-to-end PostgreSQL multi-node validation with a real replica + topology and both `save_to_disk=true` and `save_to_disk=false` +- [ ] decide whether the long-lived admin-session visibility quirk seen during + replica polling needs a dedicated follow-up before merge +- [ ] get final maintainer review on whether this should merge as one branch or + be split into smaller follow-ups + +## What The Branch Adds + +The branch changes the following tracked files compared to `v3.0`: + +- `include/ProxySQL_Cluster.hpp` +- `include/proxysql_admin.h` +- `include/proxysql_glovars.hpp` +- `lib/Admin_FlushVariables.cpp` +- `lib/PgSQL_Variables_Validator.cpp` +- `lib/ProxySQL_Admin.cpp` +- `lib/ProxySQL_Cluster.cpp` +- `test/tap/tap/Makefile` +- `test/tap/tests/test_cluster_sync_pgsql-t.cpp` + +At a high level, the branch adds PostgreSQL support in the same cluster-sync +areas where MySQL already had support: + +- checksum tracking +- diff counters and sync thresholds +- peer selection +- configuration pull/apply flows +- optional save-to-disk behavior +- admin variable exposure +- TAP coverage + +## PostgreSQL Modules Covered + +The implementation is centered around these PostgreSQL cluster-sync modules: + +- `pgsql_query_rules` +- `pgsql_servers` +- `pgsql_servers_v2` +- `pgsql_users` +- `pgsql_variables` + +An important architectural point is that `pgsql_replication_hostgroups` and 
+`pgsql_hostgroup_attributes` are not treated as independent cluster modules. +They are synchronized as part of the broader PostgreSQL servers flow and are +included in the combined checksum and apply logic for PostgreSQL servers. + +## Cluster Query And Pull Support + +The branch adds PostgreSQL cluster query definitions and pull paths for: + +- `runtime_pgsql_servers` +- `pgsql_servers_v2` +- `pgsql_users` +- `pgsql_query_rules` +- `pgsql_query_rules_fast_routing` +- `pgsql_variables` +- `pgsql_replication_hostgroups` +- `pgsql_hostgroup_attributes` + +Relevant entry points now present in the code include: + +- `pull_runtime_pgsql_servers_from_peer()` +- `pull_pgsql_servers_v2_from_peer()` +- `pull_pgsql_users_from_peer()` +- `pull_pgsql_query_rules_from_peer()` +- `pull_pgsql_variables_from_peer()` + +The servers path is the most complex one. It fetches and validates: + +- `pgsql_servers_v2` +- `pgsql_replication_hostgroups` +- `pgsql_hostgroup_attributes` +- `runtime_pgsql_servers` when runtime sync is requested + +This is important because PostgreSQL server sync is not just one table copy. It +needs the related topology and hostgroup metadata to move together. + +## Checksum And Admin Variable Integration + +The branch extends cluster state and admin state so PostgreSQL modules participate +in the same synchronization decision process as MySQL modules. + +### Added PostgreSQL diff controls + +- `cluster_pgsql_query_rules_diffs_before_sync` +- `cluster_pgsql_servers_diffs_before_sync` +- `cluster_pgsql_users_diffs_before_sync` +- `cluster_pgsql_variables_diffs_before_sync` + +### Added PostgreSQL persistence controls + +- `cluster_pgsql_query_rules_save_to_disk` +- `cluster_pgsql_servers_save_to_disk` +- `cluster_pgsql_users_save_to_disk` +- `cluster_pgsql_variables_save_to_disk` + +### Added PostgreSQL checksum gate + +- `checksum_pgsql_variables` + +This checksum gate became important during review. 
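Taken together, these controls reduce to a small decision rule. The sketch below is a simplified Python model of that rule, not the actual C++ implementation; it assumes the usual ProxySQL convention that a `*_diffs_before_sync` value of `0` disables sync for a module, and that a disabled checksum gate suppresses sync triggers entirely.

```python
# Illustrative model only: mirrors the admin variables named above, but the
# decision logic is a simplification of what ProxySQL_Cluster.cpp does.

def should_sync(local_checksum: str, peer_checksum: str,
                observed_diffs: int, diffs_before_sync: int,
                checksum_enabled: bool = True) -> bool:
    """Sync only when the module checksum is enabled, checksums diverge,
    and the divergence persisted for at least diffs_before_sync checks.
    A threshold of 0 disables sync for the module (assumed convention)."""
    if not checksum_enabled or diffs_before_sync == 0:
        return False
    if local_checksum == peer_checksum:
        return False
    return observed_diffs >= diffs_before_sync

# e.g. with cluster_pgsql_users_diffs_before_sync = 3:
print(should_sync("0xAAAA", "0xBBBB", observed_diffs=3, diffs_before_sync=3))  # True
# disabling the checksum gate must prevent any sync trigger:
print(should_sync("0xAAAA", "0xBBBB", 10, 3, checksum_enabled=False))  # False
```

In this model, the review fix discussed next corresponds to keeping the threshold inputs consistent with the gate, so a disabled gate can never leave a stale trigger armed.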
Later fixes ensured that +disabling `checksum_pgsql_variables` resets all PostgreSQL +`*_diffs_before_sync` thresholds rather than leaving stale non-zero sync +triggers behind. + +## Runtime Checksum Visibility + +The branch also makes PostgreSQL checksums visible through +`runtime_checksums_values`, including: + +- `pgsql_query_rules` +- `pgsql_servers` +- `pgsql_servers_v2` +- `pgsql_users` +- `pgsql_variables` + +Without this, cluster nodes can not reason correctly about PostgreSQL module +drift using the existing cluster monitoring loop. + +## Module-By-Module Implementation Summary + +This section describes the actual PostgreSQL synchronization behavior now +present in the code. + +### `pgsql_users` + +Synchronization source: + +- `CLUSTER_QUERY_PGSQL_USERS` +- runtime table: `runtime_pgsql_users` + +Checksum behavior: + +- the fetched resultset is checksummed with `get_pgsql_users_checksum()` +- that helper delegates to PostgreSQL authentication runtime checksum logic +- the resulting hash is compared to the peer checksum before any apply step + +Apply behavior: + +- the code reuses `update_mysql_users_mutex` +- accepted rows are written back into the replica `pgsql_users` admin-memory + table +- the accepted resultset is converted to `SQLite3_result` +- `GloAdmin->init_pgsql_users(..., expected_checksum, epoch)` loads the runtime + PostgreSQL users state +- when `cluster_pgsql_users_save_to_disk` is enabled, the branch calls + `flush_pgsql_users__from_memory_to_disk()` + +### `pgsql_variables` + +Synchronization source: + +- `CLUSTER_QUERY_PGSQL_VARIABLES` +- runtime table: `runtime_pgsql_variables` + +Checksum behavior: + +- the fetched resultset is checksummed with `mysql_raw_checksum()` +- the computed checksum must match the peer checksum before variables are loaded + +Apply behavior: + +- the code reuses `update_mysql_variables_mutex` +- current `pgsql-%` rows are deleted from `global_variables` +- when `cluster_sync_interfaces` is disabled, 
interface-related PostgreSQL + variables listed in `CLUSTER_SYNC_INTERFACES_PGSQL` are preserved +- accepted rows are inserted into `global_variables` +- `GloAdmin->load_pgsql_variables_to_runtime(expected_checksum, epoch)` applies + them to runtime +- when `cluster_pgsql_variables_save_to_disk` is enabled, the branch calls + `flush_pgsql_variables__from_memory_to_disk()` + +### `pgsql_query_rules` + +Synchronization source: + +- `CLUSTER_QUERY_PGSQL_QUERY_RULES` +- `CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING` +- runtime tables: + - `runtime_pgsql_query_rules` + - `runtime_pgsql_query_rules_fast_routing` + +Checksum behavior: + +- both resultsets are fetched +- both resultsets are converted to `SQLite3_result` +- the raw checksums of those converted resultsets are summed using the same + runtime-facing representation that the loader consumes +- the final combined checksum must match the peer checksum before apply + +Apply behavior: + +- the code reuses `update_mysql_query_rules_mutex` +- the fetched query-rules and fast-routing resultsets are passed directly into + `GloAdmin->load_pgsql_query_rules_to_runtime(...)` +- after runtime loading, the branch writes runtime state back into the replica + `pgsql_query_rules` and `pgsql_query_rules_fast_routing` admin-memory tables +- when `cluster_pgsql_query_rules_save_to_disk` is enabled, the branch calls + `flush_GENERIC__from_to("pgsql_query_rules", "memory_to_disk")` + +### `runtime_pgsql_servers` + +Synchronization source: + +- `CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS` +- runtime table: `runtime_pgsql_servers` + +Checksum behavior: + +- the fetched runtime rows are checked with `mysql_raw_checksum()` +- the computed runtime checksum is compared with the peer runtime checksum + +Apply behavior: + +- the code reuses `update_runtime_mysql_servers_mutex` +- accepted rows are converted to `SQLite3_result` +- `PgHGM->servers_add(...)` loads them into the incoming manager state +- `PgHGM->commit(..., 
only_commit_runtime_pgsql_servers=true)` applies runtime + PostgreSQL server state +- when `cluster_pgsql_servers_save_to_disk` is enabled, runtime state is first + persisted through `save_pgsql_servers_runtime_to_database(false)` and then + written to disk through `flush_GENERIC__from_to(ClusterModules::PGSQL_SERVERS, "memory_to_disk")` + +### `pgsql_servers_v2` plus dependent PostgreSQL server tables + +Synchronization source: + +- `CLUSTER_QUERY_PGSQL_SERVERS_V2` +- `CLUSTER_QUERY_PGSQL_REPLICATION_HOSTGROUPS` +- `CLUSTER_QUERY_PGSQL_HOSTGROUP_ATTRIBUTES` +- optionally `CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS` + +Checksum behavior: + +- the branch fetches the static PostgreSQL server resultset plus the dependent + topology tables +- those resultsets are checked together using + `compute_servers_tables_raw_checksum(...)` +- when runtime PostgreSQL server rows are fetched in the same operation, the + runtime checksum is also checked before apply + +Apply behavior: + +- the code reuses `update_mysql_servers_v2_mutex` +- resultsets are converted with `convert_pgsql_servers_resultsets(...)` +- before runtime load, the replica admin-memory tables are updated for: + - `pgsql_servers` + - `pgsql_replication_hostgroups` + - `pgsql_hostgroup_attributes` +- `GloAdmin->load_pgsql_servers_to_runtime(...)` applies: + - `pgsql_servers_v2` + - `pgsql_replication_hostgroups` + - `pgsql_hostgroup_attributes` + - optional runtime PostgreSQL server state +- when `cluster_pgsql_servers_save_to_disk` is enabled, the accepted state is + persisted through `flush_GENERIC__from_to(ClusterModules::PGSQL_SERVERS, "memory_to_disk")` + +## Refactoring Done In The Branch + +This branch is not just a feature patch. It also performs a large refactor of +cluster synchronization internals while PostgreSQL support is being added. 
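The direction of that refactor can be sketched as follows. This is a hypothetical Python reduction of the data-driven pattern; `ModuleSpec`, `MODULES`, and `pull_module` are invented names, and the real branch implements this in C++ inside `ProxySQL_Cluster.cpp`.

```python
# Hypothetical sketch: instead of one hand-written pull path per module,
# each module is described once in a table and a generic loop dispatches on
# it. The apply callables stand in for the real runtime loaders such as
# init_pgsql_users(...) and load_pgsql_query_rules_to_runtime(...).

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class ModuleSpec:
    fetch_query: str                       # CLUSTER_QUERY_* sent to the peer
    apply: Callable[[List[Tuple]], int]    # loads rows, returns rows applied

def apply_pgsql_users(rows: List[Tuple]) -> int:
    return len(rows)                       # stand-in for the users loader

def apply_pgsql_query_rules(rows: List[Tuple]) -> int:
    return len(rows)                       # stand-in for the rules loader

MODULES = {
    "pgsql_users": ModuleSpec(
        "SELECT ... FROM runtime_pgsql_users", apply_pgsql_users),
    "pgsql_query_rules": ModuleSpec(
        "SELECT ... FROM runtime_pgsql_query_rules ORDER BY rule_id",
        apply_pgsql_query_rules),
}

def pull_module(name: str, peer_rows: List[Tuple]) -> int:
    """Generic pull: one table lookup replaces per-module if/else chains."""
    return MODULES[name].apply(peer_rows)

print(pull_module("pgsql_users", [("u1",), ("u2",)]))  # 2
```

The benefit is that adding a PostgreSQL module becomes a table entry rather than a new code path; the cost, as noted below, is that behavior and structure change in the same review.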
+
+Major refactoring themes:
+
+- large sections of repetitive checksum handling were replaced with
+  data-driven tables and loops
+- multiple `get_peer_to_sync_*()` paths were unified
+- a generic pull framework was introduced for module synchronization
+- repetitive memory management patterns were replaced with helper utilities
+- many string literals were moved into central constant namespaces
+
+These refactors reduced code duplication and made it easier to insert
+PostgreSQL modules into the same control flow as MySQL modules, but they also
+increased the review burden because the branch changes behavior and structure
+at the same time.
+
+## Review-Driven Fixes Already Applied
+
+The branch history shows repeated follow-up commits that addressed review
+feedback and reproducible defects. Examples include:
+
+- fixing PostgreSQL checksum field usage
+- fixing the `CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS` status handling
+- fixing review findings around `checksum_pgsql_variables`
+- restoring active PostgreSQL variable checksum generation in
+  `Admin_FlushVariables`
+- making TAP checksum-loop failure paths emit explicit failing assertions
+- correcting the PostgreSQL servers TAP tuple shape
+
+By February 23, 2026, the branch had already gone through several rounds of
+review cleanup rather than remaining in its original implementation shape.
+ +The additional March 18, 2026 local validation also exposed and fixed a real +branch bug that was not just a test issue: + +- the data-driven variable-sync dispatcher passed `"pgsql_variables"` into + `pull_global_variables_from_peer()`, but that shared function only accepted + `"pgsql"` +- the same refactor also used `"ldap_variables"` instead of `"ldap"` +- in a real two-node run, changing a PostgreSQL variable caused the replica to + abort on an assertion +- the local fix was to restore the short dispatcher keys expected by the shared + variable pull path + +## TAP And Build Work + +The branch includes two separate testing/build-related efforts. + +### 1. PostgreSQL cluster-sync TAP coverage + +`test/tap/tests/test_cluster_sync_pgsql-t.cpp` started as a basic presence and +accessibility test. It was later extended to cover: + +- PostgreSQL checksum presence in `runtime_checksums_values` +- accessibility of PostgreSQL admin tables +- optional replica-based synchronization checks + +The local follow-up commit `bf9fc81f4` extends that further by: + +- documenting the test more accurately +- adding safer backup/restore helpers for temporary table mutation +- checking optional replica sync for: + - `pgsql_servers_v2` + - `pgsql_users` + - `pgsql_query_rules` +- checking disk persistence on the replica when the corresponding + `admin-cluster_pgsql_*_save_to_disk` flag is enabled +- failing the TAP assertions when those admin flags can not be read + +### 2. TAP library archive handling + +`test/tap/tap/Makefile` was updated in this branch to avoid stale static archive +members being reused in `libtap*.a`, which had caused linker problems in TAP +builds. A local follow-up also resolves the `v3.0` merge conflict in this file +while keeping the stale-archive workaround in a separate helper rule so the +branch remains mergeable into `v3.0`. 
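The variable-sync dispatcher bug described in the review-fixes section above is easy to model. The sketch below is illustrative Python, not the real C++ path; the function name mirrors `pull_global_variables_from_peer()`, but the key set and dispatch map are assumptions for the example.

```python
# Model of the dispatcher-key mismatch: the shared variable-pull path accepts
# only short module prefixes, while the data-driven refactor initially passed
# full module names, causing the replica to abort on an assertion.

SHORT_KEYS = {"mysql", "admin", "ldap", "pgsql"}  # assumed accepted keys

def pull_global_variables_from_peer(vars_type: str) -> str:
    # mirrors the shared path rejecting unknown variable types
    if vars_type not in SHORT_KEYS:
        raise AssertionError(f"unknown variables type: {vars_type}")
    return f"pulled {vars_type} variables"

# buggy dispatch: the full module name crashes the shared path
try:
    pull_global_variables_from_peer("pgsql_variables")
except AssertionError as exc:
    print("replica would abort:", exc)

# fixed dispatch: map module names back to the short keys the shared path expects
DISPATCH_KEY = {"pgsql_variables": "pgsql", "ldap_variables": "ldap"}
print(pull_global_variables_from_peer(DISPATCH_KEY["pgsql_variables"]))  # pulled pgsql variables
```

This is why the bug only surfaced in a real two-node run: the mismatch sits on the replica's pull path, which compilation and single-node checks never exercise.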
+ +## Multi-Node Validation Plan And Results + +On 2026-03-18 the branch was validated locally with a real two-node ProxySQL +topology, not just unit-style compilation checks. + +Validation plan: + +1. Build the current source tree with `make build_src -j2`. +2. Build the PostgreSQL cluster-sync TAP binary. +3. Run the TAP two-node case with `cluster_pgsql_*_save_to_disk=false`. +4. Run the TAP two-node case with `cluster_pgsql_*_save_to_disk=true`. +5. Run manual two-node mutation checks for: + - `pgsql_servers` + - `pgsql_replication_hostgroups` + - `pgsql_hostgroup_attributes` + - one real PostgreSQL variable +6. Confirm replica runtime, replica admin-memory, and when enabled replica + disk-table state. + +Observed results after the latest local fixes: + +- source build succeeded +- PostgreSQL cluster-sync TAP passed with `save_to_disk=false` +- PostgreSQL cluster-sync TAP passed with `save_to_disk=true` +- manual two-node validation passed with `save_to_disk=false` +- manual two-node validation passed with `save_to_disk=true` + +What the manual run added beyond the TAP file: + +- it verified that `pgsql_replication_hostgroups` moved with the server sync + path +- it verified that `pgsql_hostgroup_attributes` moved with the server sync path +- it verified that changing a PostgreSQL variable on the primary caused a real + cluster pull on the replica +- it exposed the variable-sync dispatcher crash described above + +## MySQL Parity Assessment + +Parity with MySQL should be judged by behavioral guarantees, not by identical +table count. MySQL has several server-related modules that do not have a +PostgreSQL counterpart. 
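The replica-side convergence checks used in that validation reduce to a simple polling loop. The helper below is an illustrative sketch with invented names; a real harness would query the replica's admin interface and sleep between attempts rather than iterate over an in-memory sequence.

```python
# Sketch of the convergence check: after mutating config on the primary,
# poll the replica's reported checksum until it matches the primary's, or
# give up after max_polls attempts.

import itertools
from typing import Callable

def wait_for_sync(get_primary_checksum: Callable[[], str],
                  get_replica_checksum: Callable[[], str],
                  max_polls: int = 30) -> bool:
    """Return True once the replica checksum matches the primary's."""
    for _ in range(max_polls):
        if get_replica_checksum() == get_primary_checksum():
            return True
        # a real harness would time.sleep(...) here between polls
    return False

# simulate a replica that converges after three stale reads
replica = itertools.chain(["0x0"] * 3, itertools.repeat("0xCAFE"))
print(wait_for_sync(lambda: "0xCAFE", lambda: next(replica)))  # True
```

Opening a fresh connection per poll, as the TAP follow-up does, matters here: it ensures the loop asserts what new admin clients actually observe rather than what one long-lived session caches.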
+ +Behavior that is now acceptably aligned for the PostgreSQL modules that do +exist: + +- `pgsql_users`: runtime checksum source, admin-memory update, runtime load, + optional disk persistence +- `pgsql_query_rules`: combined rules plus fast-routing checksum, runtime load, + admin-memory update, optional disk persistence +- `pgsql_servers_v2`: combined checksum across server rows and dependent + topology tables, admin-memory update, runtime load, optional disk persistence +- `pgsql_variables`: shared cluster variable pull model with PostgreSQL-specific + checksum and interface filtering + +Intentional non-parity with MySQL: + +- no PostgreSQL equivalent for MySQL-only modules such as + `mysql_group_replication_hostgroups`, `mysql_galera_hostgroups`, + `mysql_aws_aurora_hostgroups`, or `mysql_servers_ssl_params` +- no PostgreSQL equivalent for MySQL-only user fields such as + `default_schema` and `schema_locked` + +Residual caveat: + +- during replica polling, a long-lived admin session did not reliably observe + synced PostgreSQL rows, while fresh admin sessions and the manual harness did + observe them +- the current TAP follow-up uses fresh replica admin sessions for polling, so + the test asserts the externally observable state that new admin clients see +- this looks like a real visibility quirk worth understanding better, but it is + separate from the correctness of the underlying cluster-sync apply path + +## Why The Branch Is Not Finished Yet + +Even though the implementation is broad and many review items were already +fixed, the branch should still be treated as incomplete. 
+ +Reasons: + +- the PR is still open +- GitHub still marks it `DIRTY` +- GitHub still requires review +- the latest local fixes and validation results are not yet reflected on GitHub +- there is still one observed admin-session visibility quirk that has not been + root-caused + +Separately, local validation in this workspace is limited by missing vendored +dependencies, so not every verification step can be reproduced here. + +## Recommended Remaining Work + +Before this branch should be considered complete, the following should happen: + +1. Decide whether the long-lived admin-session visibility quirk needs a code + fix or only a documented test expectation. +2. Push the local follow-up commits when explicitly requested so the branch on + GitHub matches the working tree used for current analysis. +3. Re-check mergeability into `v3.0` after the latest follow-up is pushed. +4. Get another maintainer review now that the branch has both feature work and + refactoring work. + +## Non-CI Merge Checklist + +Use this list when finishing the branch, ignoring the unrelated CI breakage that +is being handled in parallel: + +1. Confirm whether the admin-session visibility quirk needs a follow-up patch. +2. Push the local follow-up commits when explicitly requested. +3. Confirm the branch merges cleanly into `v3.0`. +4. Verify PostgreSQL sync for: + - `pgsql_users` + - `pgsql_query_rules` + - `pgsql_servers_v2` + - `runtime_pgsql_servers` + - `pgsql_variables` +5. Verify both `save_to_disk=true` and `save_to_disk=false` behaviors. +6. Verify `checksum_pgsql_variables=false` disables all PostgreSQL + `*_diffs_before_sync` triggers. +7. Re-read the TAP test and Makefile diffs and confirm the branch still carries + the intended local fixes after the final rebase or merge refresh. + +## Bottom Line + +This branch is no longer a partial experiment. It already contains the core +PostgreSQL cluster-sync implementation and a significant amount of review-driven +hardening. 
However, it is still not complete from a release or merge +perspective. + +The right current description is: + +- feature-complete in intent +- substantially implemented in code +- still unfinished operationally and procedurally + +That is why it should be continued as a work-in-progress branch rather than +treated as closed work. diff --git a/doc/proxysql_cluster/proxysql_cluster_working.md b/doc/proxysql_cluster/proxysql_cluster_working.md index bb91a5089b..67ef566377 100644 --- a/doc/proxysql_cluster/proxysql_cluster_working.md +++ b/doc/proxysql_cluster/proxysql_cluster_working.md @@ -1,6 +1,9 @@ # Introduction This documentation provides an in-depth look at the internal workings of the ProxySQL Cluster feature. It is intended for readers who are already familiar with the basic concepts and functionality of ProxySQL Cluster. +For PostgreSQL-specific branch status and implementation notes related to PR +`#5297`, see `doc/proxysql_cluster/pgsql_cluster_sync_pr5297_status.md`. + # Prerequisites Before reading this documentation, it is mandatory that the reader has gone through the official ProxySQL Cluster documentation available at [https://proxysql.com/documentation/proxysql-cluster/](https://proxysql.com/documentation/proxysql-cluster/). This will provide the necessary background knowledge and understanding of terminologies to understand the internal workings of the feature. 
@@ -219,4 +222,4 @@ graph TD CLOSE_CONNECTION[Close Connection] --> END END[End] end -``` \ No newline at end of file +``` diff --git a/include/ProxySQL_Cluster.hpp b/include/ProxySQL_Cluster.hpp index 05621f543e..fbcdd045d3 100644 --- a/include/ProxySQL_Cluster.hpp +++ b/include/ProxySQL_Cluster.hpp @@ -5,6 +5,7 @@ #include "thread.h" #include "wqueue.h" #include +#include #include "prometheus/counter.h" #include "prometheus/gauge.h" @@ -12,6 +13,15 @@ #define PROXYSQL_NODE_METRICS_LEN 5 /** + * @file ProxySQL_Cluster.hpp + * @brief ProxySQL Cluster synchronization and management definitions. + * + * This file contains definitions for ProxySQL's clustering functionality, including: + * - Cluster query definitions for MySQL and PostgreSQL module synchronization + * - Node management and metrics collection + * - Checksum computation and comparison algorithms + * - Peer selection and synchronization logic + * * CLUSTER QUERIES DEFINITION * ========================== * @@ -19,11 +29,21 @@ * the queries issued for generating the checksum for each of the target modules, for simpler reasoning, they should * also represent the actual resultset being received when issuing them, since this resultset is used for computing the * 'expected checksum' for the fetched config before loading it to runtime. This is done for the following modules: + * + * MySQL modules: * - 'runtime_mysql_servers': tables 'mysql_servers' * - 'runtime_mysql_users'. * - 'runtime_mysql_query_rules'. * - 'mysql_servers_v2': tables admin 'mysql_servers', 'mysql_replication_hostgroups', 'mysql_group_replication_hostroups', * 'mysql_galera_hostgroups', 'mysql_aws_aurora_hostgroups', 'mysql_hostgroup_attributes'. 
+ * + * PostgreSQL modules: + * - 'runtime_pgsql_servers': runtime PostgreSQL server status and configuration + * - 'runtime_pgsql_users': runtime PostgreSQL user authentication settings + * - 'runtime_pgsql_query_rules': runtime PostgreSQL query routing rules + * - 'pgsql_servers_v2': static PostgreSQL server configuration + * - 'pgsql_variables': PostgreSQL server variables configuration + * * IMPORTANT: For further clarify this means that it's important that the actual resultset produced by the intercepted * query preserve the filtering and ordering expressed in this queries. */ @@ -61,6 +81,136 @@ /* @brief Query to be intercepted by 'ProxySQL_Admin' for 'runtime_mysql_query_rules_fast_routing'. See top comment for details. */ #define CLUSTER_QUERY_MYSQL_QUERY_RULES_FAST_ROUTING "PROXY_SELECT username, schemaname, flagIN, destination_hostgroup, comment FROM runtime_mysql_query_rules_fast_routing ORDER BY username, schemaname, flagIN" +/** + * @brief Query to be intercepted by 'ProxySQL_Admin' for 'runtime_pgsql_servers'. + * + * This query retrieves the current operational status and configuration of PostgreSQL servers + * from the runtime_pgsql_servers table. It includes server health metrics, connection settings, + * and current operational status. The query filters out OFFLINE_HARD servers and converts + * numeric status values to human-readable format. 
+ * + * Result columns: + * - hostgroup_id: Logical grouping identifier for PostgreSQL servers + * - hostname: Server hostname or IP address + * - port: PostgreSQL server port number + * - status: Converted status string (ONLINE, OFFLINE_SOFT, OFFLINE_HARD) + * - weight: Load balancing weight for the server + * - compression: Whether compression is enabled + * - max_connections: Maximum allowed connections + * - max_replication_lag: Maximum acceptable replication lag + * - use_ssl: SSL/TLS connection requirement + * - max_latency_ms: Maximum acceptable latency + * - comment: Administrative comments + * + * @see runtime_pgsql_servers + * @see pull_runtime_pgsql_servers_from_peer() + */ +#define CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS "PROXY_SELECT hostgroup_id, hostname, port, CASE status WHEN 'ONLINE' THEN 'ONLINE' WHEN 'OFFLINE_SOFT' THEN 'OFFLINE_SOFT' WHEN 'OFFLINE_HARD' THEN 'OFFLINE_HARD' END status, weight, compression, max_connections, max_replication_lag, use_ssl, max_latency_ms, comment FROM runtime_pgsql_servers WHERE status<>'OFFLINE_HARD' ORDER BY hostgroup_id, hostname, port" + +/** + * @brief Query to be intercepted by 'ProxySQL_Admin' for 'pgsql_servers_v2'. + * + * This query retrieves the static configuration of PostgreSQL servers from the pgsql_servers_v2 table. + * It includes connection parameters, load balancing settings, and server metadata. The query + * filters out OFFLINE_HARD servers and converts SHUNNED status to ONLINE for cluster synchronization. 
+ * + * Result columns: + * - hostgroup_id: Logical grouping identifier for PostgreSQL servers + * - hostname: Server hostname or IP address + * - port: PostgreSQL server port number + * - status: Server status (SHUNNED converted to ONLINE for sync) + * - weight: Load balancing weight for the server + * - compression: Whether compression is enabled + * - max_connections: Maximum allowed connections + * - max_replication_lag: Maximum acceptable replication lag + * - use_ssl: SSL/TLS connection requirement + * - max_latency_ms: Maximum acceptable latency + * - comment: Administrative comments + * + * @see pgsql_servers_v2 + * @see pull_pgsql_servers_v2_from_peer() + */ +#define CLUSTER_QUERY_PGSQL_SERVERS_V2 "PROXY_SELECT hostgroup_id, hostname, port, CASE WHEN status='SHUNNED' THEN 'ONLINE' ELSE status END AS status, weight, compression, max_connections, max_replication_lag, use_ssl, max_latency_ms, comment FROM pgsql_servers_v2 WHERE status<>'OFFLINE_HARD' ORDER BY hostgroup_id, hostname, port" + +/** + * @brief Query to be intercepted by 'ProxySQL_Admin' for 'runtime_pgsql_users'. + * + * This query retrieves PostgreSQL user authentication configuration from the runtime_pgsql_users table. + * It includes credentials, connection settings, and user behavior preferences that are used for + * authenticating and managing PostgreSQL client connections. 
+ * + * Result columns: + * - username: PostgreSQL username + * - password: Authentication password/hash + * - use_ssl: SSL/TLS connection requirement + * - default_hostgroup: Default hostgroup for routing + * - transaction_persistent: Whether transactions persist across connections + * - fast_forward: Fast forwarding mode setting + * - backend: Backend connection settings + * - frontend: Frontend connection settings + * - max_connections: Maximum connections per user + * - attributes: Additional user attributes (JSON) + * - comment: Administrative comments + * + * @see runtime_pgsql_users + * @see pull_pgsql_users_from_peer() + */ +#define CLUSTER_QUERY_PGSQL_USERS "PROXY_SELECT username, password, use_ssl, default_hostgroup, transaction_persistent, fast_forward, backend, frontend, max_connections, attributes, comment FROM runtime_pgsql_users" + +/** + * @brief Query to be intercepted by 'ProxySQL_Admin' for 'runtime_pgsql_query_rules'. + * + * This query retrieves PostgreSQL query routing rules from the runtime_pgsql_query_rules table. + * It includes comprehensive rule definitions for query matching, routing, caching, and behavior + * control. Rules are ordered by rule_id to ensure consistent processing and checksum generation. 
+ * + * Key result columns: + * - rule_id: Unique identifier for the rule + * - username: Filter by PostgreSQL username + * - database: Filter by database name (PostgreSQL-specific, replaces schemaname) + * - flagIN, flagOUT: Rule processing flags + * - match_digest, match_pattern: Query matching criteria + * - destination_hostgroup: Target hostgroup for matching queries + * - cache_ttl, cache_timeout: Query caching settings + * - timeout, retries, delay: Query execution parameters + * - mirror_hostgroup: Query mirroring destination + * - error_msg, ok_msg: Custom response messages + * - sticky_conn, multiplex: Connection pooling behavior + * - log, apply: Logging and application flags + * - attributes: Additional rule attributes (JSON) + * - comment: Administrative comments + * + * @see runtime_pgsql_query_rules + * @see pull_pgsql_query_rules_from_peer() + */ +#define CLUSTER_QUERY_PGSQL_QUERY_RULES "PROXY_SELECT rule_id, username, database, flagIN, client_addr, proxy_addr, proxy_port, digest, match_digest, match_pattern, negate_match_pattern, re_modifiers, flagOUT, replace_pattern, destination_hostgroup, cache_ttl, cache_empty_result, cache_timeout, reconnect, timeout, retries, delay, next_query_flagIN, mirror_flagOUT, mirror_hostgroup, error_msg, ok_msg, sticky_conn, multiplex, log, apply, attributes, comment FROM runtime_pgsql_query_rules ORDER BY rule_id" + +/** + * @brief Query to be intercepted by 'ProxySQL_Admin' for 'runtime_pgsql_query_rules_fast_routing'. + * + * This query retrieves PostgreSQL fast routing rules from the runtime_pgsql_query_rules_fast_routing table. + * Fast routing provides a lightweight mechanism for direct query routing based on username, database, + * and processing flags without complex pattern matching. This enables efficient routing for common + * use cases and reduces processing overhead. 
+ * + * Result columns: + * - username: PostgreSQL username for routing rule + * - database: Database name for routing rule (PostgreSQL-specific) + * - flagIN: Input processing flag for rule matching + * - destination_hostgroup: Target hostgroup for direct routing + * - comment: Administrative comments + * + * @see runtime_pgsql_query_rules_fast_routing + * @see pull_pgsql_query_rules_from_peer() + * @see CLUSTER_QUERY_PGSQL_QUERY_RULES + */ +#define CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING "PROXY_SELECT username, database, flagIN, destination_hostgroup, comment FROM runtime_pgsql_query_rules_fast_routing ORDER BY username, database, flagIN" + +#define CLUSTER_QUERY_PGSQL_VARIABLES "PROXY_SELECT variable_name, variable_value FROM runtime_pgsql_variables ORDER BY variable_name" + +#define CLUSTER_QUERY_PGSQL_REPLICATION_HOSTGROUPS "PROXY_SELECT writer_hostgroup, reader_hostgroup, check_type, comment FROM runtime_pgsql_replication_hostgroups ORDER BY writer_hostgroup" +#define CLUSTER_QUERY_PGSQL_HOSTGROUP_ATTRIBUTES "PROXY_SELECT hostgroup_id, max_num_online_servers, autocommit, free_connections_pct, init_connect, multiplex, connection_warming, throttle_connections_per_sec, ignore_session_variables, hostgroup_settings, servers_defaults, comment FROM runtime_pgsql_hostgroup_attributes ORDER BY hostgroup_id" + class ProxySQL_Checksum_Value_2: public ProxySQL_Checksum_Value { public: time_t last_updated; @@ -161,6 +311,11 @@ class ProxySQL_Node_Entry { ProxySQL_Checksum_Value_2 mysql_users; ProxySQL_Checksum_Value_2 proxysql_servers; ProxySQL_Checksum_Value_2 mysql_servers_v2; + ProxySQL_Checksum_Value_2 pgsql_query_rules; + ProxySQL_Checksum_Value_2 pgsql_servers; + ProxySQL_Checksum_Value_2 pgsql_users; + ProxySQL_Checksum_Value_2 pgsql_servers_v2; + ProxySQL_Checksum_Value_2 pgsql_variables; } checksums_values; uint64_t global_checksum; }; @@ -255,10 +410,17 @@ class ProxySQL_Cluster_Nodes { void get_peer_to_sync_mysql_servers_v2(char** host, uint16_t* port, 
char** peer_mysql_servers_v2_checksum, char** peer_runtime_mysql_servers_checksum, char** ip_address); void get_peer_to_sync_mysql_users(char **host, uint16_t *port, char** ip_address); + void get_peer_to_sync_variables_module(const char* module_name, char **host, uint16_t *port, char** ip_address, char **peer_checksum, char **peer_secondary_checksum); void get_peer_to_sync_mysql_variables(char **host, uint16_t *port, char** ip_address); void get_peer_to_sync_admin_variables(char **host, uint16_t* port, char** ip_address); void get_peer_to_sync_ldap_variables(char **host, uint16_t *port, char** ip_address); + void get_peer_to_sync_pgsql_variables(char **host, uint16_t *port, char** ip_address); void get_peer_to_sync_proxysql_servers(char **host, uint16_t *port, char ** ip_address); + void get_peer_to_sync_pgsql_query_rules(char **host, uint16_t *port, char** ip_address); + void get_peer_to_sync_runtime_pgsql_servers(char **host, uint16_t *port, char **peer_checksum, char** ip_address); + void get_peer_to_sync_pgsql_servers_v2(char** host, uint16_t* port, char** peer_pgsql_servers_v2_checksum, + char** peer_runtime_pgsql_servers_checksum, char** ip_address); + void get_peer_to_sync_pgsql_users(char **host, uint16_t *port, char** ip_address); }; struct p_cluster_counter { @@ -298,6 +460,15 @@ struct p_cluster_counter { pulled_ldap_variables_success, pulled_ldap_variables_failure, + pulled_pgsql_query_rules_success, + pulled_pgsql_query_rules_failure, + pulled_pgsql_servers_success, + pulled_pgsql_servers_failure, + pulled_pgsql_users_success, + pulled_pgsql_users_failure, + pulled_pgsql_variables_success, + pulled_pgsql_variables_failure, + pulled_mysql_ldap_mapping_success, pulled_mysql_ldap_mapping_failure, @@ -308,6 +479,10 @@ struct p_cluster_counter { sync_conflict_mysql_variables_share_epoch, sync_conflict_admin_variables_share_epoch, sync_conflict_ldap_variables_share_epoch, + sync_conflict_pgsql_query_rules_share_epoch, + 
sync_conflict_pgsql_servers_share_epoch, + sync_conflict_pgsql_users_share_epoch, + sync_conflict_pgsql_variables_share_epoch, sync_delayed_mysql_query_rules_version_one, sync_delayed_mysql_servers_version_one, @@ -316,6 +491,10 @@ struct p_cluster_counter { sync_delayed_mysql_variables_version_one, sync_delayed_admin_variables_version_one, sync_delayed_ldap_variables_version_one, + sync_delayed_pgsql_query_rules_version_one, + sync_delayed_pgsql_servers_version_one, + sync_delayed_pgsql_users_version_one, + sync_delayed_pgsql_variables_version_one, __size }; @@ -418,13 +597,17 @@ class ProxySQL_Cluster { char* admin_mysql_ifaces; int cluster_check_interval_ms; int cluster_check_status_frequency; - int cluster_mysql_query_rules_diffs_before_sync; - int cluster_mysql_servers_diffs_before_sync; - int cluster_mysql_users_diffs_before_sync; - int cluster_proxysql_servers_diffs_before_sync; - int cluster_mysql_variables_diffs_before_sync; - int cluster_ldap_variables_diffs_before_sync; - int cluster_admin_variables_diffs_before_sync; + std::atomic<int> cluster_mysql_query_rules_diffs_before_sync; + std::atomic<int> cluster_mysql_servers_diffs_before_sync; + std::atomic<int> cluster_mysql_users_diffs_before_sync; + std::atomic<int> cluster_proxysql_servers_diffs_before_sync; + std::atomic<int> cluster_mysql_variables_diffs_before_sync; + std::atomic<int> cluster_ldap_variables_diffs_before_sync; + std::atomic<int> cluster_admin_variables_diffs_before_sync; + std::atomic<int> cluster_pgsql_query_rules_diffs_before_sync; + std::atomic<int> cluster_pgsql_servers_diffs_before_sync; + std::atomic<int> cluster_pgsql_users_diffs_before_sync; + std::atomic<int> cluster_pgsql_variables_diffs_before_sync; int cluster_mysql_servers_sync_algorithm; bool cluster_mysql_query_rules_save_to_disk; bool cluster_mysql_servers_save_to_disk; @@ -433,6 +616,10 @@ class ProxySQL_Cluster { bool cluster_mysql_variables_save_to_disk; bool cluster_ldap_variables_save_to_disk; bool cluster_admin_variables_save_to_disk; + bool
cluster_pgsql_query_rules_save_to_disk; + bool cluster_pgsql_servers_save_to_disk; + bool cluster_pgsql_users_save_to_disk; + bool cluster_pgsql_variables_save_to_disk; ProxySQL_Cluster(); ~ProxySQL_Cluster(); void init() {}; @@ -491,5 +678,68 @@ class ProxySQL_Cluster { */ void pull_global_variables_from_peer(const std::string& type, const std::string& expected_checksum, const time_t epoch); void pull_proxysql_servers_from_peer(const std::string& expected_checksum, const time_t epoch); + void pull_pgsql_query_rules_from_peer(const std::string& expected_checksum, const time_t epoch); + void pull_runtime_pgsql_servers_from_peer(const runtime_pgsql_servers_checksum_t& peer_runtime_pgsql_server); + void pull_pgsql_servers_v2_from_peer(const pgsql_servers_v2_checksum_t& peer_pgsql_server_v2, + const runtime_pgsql_servers_checksum_t& peer_runtime_pgsql_server = {}, bool fetch_runtime_pgsql_servers = false); + void pull_pgsql_users_from_peer(const std::string& expected_checksum, const time_t epoch); + void pull_pgsql_variables_from_peer(const std::string& expected_checksum, const time_t epoch); + + // Configuration structure for unified pull operations + struct PullOperationConfig { + // Basic function info + const char* module_name; + + // Peer selection callback + std::function peer_selector; + + // Query configuration + std::vector<fetch_query> queries; + bool use_multiple_queries; + + // Configuration callbacks + std::function checksum_validator; + std::function data_loader; + std::function runtime_loader; + std::function save_to_disk_checker; + + // Metrics tracking + p_cluster_counter::metric success_metric; + p_cluster_counter::metric failure_metric; + + // Mutex for thread safety + pthread_mutex_t* operation_mutex; + + // Logging callbacks + std::function get_module_display_name; + std::function get_description; + + // Optional additional parameters for complex operations + std::function custom_setup; + std::function custom_processor; + void* custom_context; + }; + + //
Unified pull framework for data-driven operations + void pull_from_peer_unified(const PullOperationConfig& config, + const std::string& expected_checksum, + const time_t epoch, + void* complex_context = nullptr); + bool fetch_query_with_metrics(MYSQL* conn, const fetch_query& query, MYSQL_RES** result); + std::string compute_single_checksum(MYSQL_RES* result); + std::string compute_combined_checksum(const std::vector& results); + + // Memory management utilities for safe allocation and cleanup + char* safe_strdup(const char* source); + void* safe_malloc(size_t size); + char** safe_string_array_alloc(size_t count); + bool safe_update_string_array(char*** target_array, size_t count, const char** new_values); + bool safe_update_string(char** target, const char* new_value); + char* safe_query_construct(const char* format, ...); + void safe_cleanup_strings(char** str1, char** str2, char** str3); + + // RAII wrappers for automatic memory management + struct ScopedCharPointer; + struct ScopedCharArrayPointer; }; #endif /* CLASS_PROXYSQL_CLUSTER_H */ diff --git a/include/proxysql_admin.h b/include/proxysql_admin.h index dd1800d802..cb4778c257 100644 --- a/include/proxysql_admin.h +++ b/include/proxysql_admin.h @@ -313,6 +313,10 @@ class ProxySQL_Admin { int cluster_mysql_variables_diffs_before_sync; int cluster_admin_variables_diffs_before_sync; int cluster_ldap_variables_diffs_before_sync; + int cluster_pgsql_variables_diffs_before_sync; + int cluster_pgsql_query_rules_diffs_before_sync; + int cluster_pgsql_servers_diffs_before_sync; + int cluster_pgsql_users_diffs_before_sync; int cluster_mysql_servers_sync_algorithm; bool cluster_mysql_query_rules_save_to_disk; bool cluster_mysql_servers_save_to_disk; @@ -321,6 +325,10 @@ class ProxySQL_Admin { bool cluster_mysql_variables_save_to_disk; bool cluster_admin_variables_save_to_disk; bool cluster_ldap_variables_save_to_disk; + bool cluster_pgsql_variables_save_to_disk; + bool cluster_pgsql_query_rules_save_to_disk; + bool 
cluster_pgsql_servers_save_to_disk; + bool cluster_pgsql_users_save_to_disk; int stats_mysql_connection_pool; int stats_mysql_connections; int stats_mysql_query_cache; @@ -528,6 +536,7 @@ class ProxySQL_Admin { bool checksum_mysql_variables; bool checksum_admin_variables; bool checksum_ldap_variables; + bool checksum_pgsql_variables; } checksum_variables; template void public_add_active_users(enum cred_username_type usertype, char *user=NULL) { @@ -639,6 +648,7 @@ class ProxySQL_Admin { // void flush_admin_variables__from_disk_to_memory(); // commented in 2.3 because unused void flush_admin_variables__from_memory_to_disk(); void flush_ldap_variables__from_memory_to_disk(); + void flush_pgsql_variables__from_memory_to_disk(); void load_mysql_servers_to_runtime(const incoming_servers_t& incoming_servers = {}, const runtime_mysql_servers_checksum_t& peer_runtime_mysql_server = {}, const mysql_servers_v2_checksum_t& peer_mysql_server_v2 = {}); void save_mysql_servers_from_runtime(); diff --git a/include/proxysql_glovars.hpp b/include/proxysql_glovars.hpp index 8792e94b05..6f8e091ef5 100644 --- a/include/proxysql_glovars.hpp +++ b/include/proxysql_glovars.hpp @@ -3,6 +3,7 @@ #define CLUSTER_SYNC_INTERFACES_ADMIN "('admin-mysql_ifaces','admin-restapi_port','admin-telnet_admin_ifaces','admin-telnet_stats_ifaces','admin-web_port','admin-pgsql_ifaces')" #define CLUSTER_SYNC_INTERFACES_MYSQL "('mysql-interfaces')" +#define CLUSTER_SYNC_INTERFACES_PGSQL "('pgsql-interfaces')" #include #include diff --git a/lib/Admin_FlushVariables.cpp b/lib/Admin_FlushVariables.cpp index d994872732..9795abec8e 100644 --- a/lib/Admin_FlushVariables.cpp +++ b/lib/Admin_FlushVariables.cpp @@ -400,6 +400,10 @@ void ProxySQL_Admin::flush_GENERIC_variables__checksum__database_to_runtime(cons if (GloVars.cluster_sync_interfaces == false) { q += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_ADMIN); } + } else if (modname == "pgsql") { + if (GloVars.cluster_sync_interfaces == false) { 
+ q += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_PGSQL); + } } q += " ORDER BY variable_name"; admindb->execute_statement(q.c_str(), &error , &cols , &affected_rows , &resultset); @@ -415,6 +419,8 @@ void ProxySQL_Admin::flush_GENERIC_variables__checksum__database_to_runtime(cons checkvar = &GloVars.checksums_values.mysql_variables; } else if (modname == "ldap") { checkvar = &GloVars.checksums_values.ldap_variables; + } else if (modname == "pgsql") { + checkvar = &GloVars.checksums_values.pgsql_variables; } assert(checkvar != NULL); checkvar->set_checksum(buf); @@ -889,49 +895,14 @@ void ProxySQL_Admin::flush_pgsql_variables___database_to_runtime(SQLite3DB* db, GloPTH->commit(); GloPTH->wrunlock(); - /* Checksums are always generated - 'admin-checksum_*' deprecated - { - // NOTE: 'GloPTH->wrunlock()' should have been called before this point to avoid possible - // deadlocks. See issue #3847. - pthread_mutex_lock(&GloVars.checksum_mutex); - // generate checksum for cluster - flush_mysql_variables___runtime_to_database(admindb, false, false, false, true, true); - char* error = NULL; - int cols = 0; - int affected_rows = 0; - SQLite3_result* resultset = NULL; - std::string q; - q = "SELECT variable_name, variable_value FROM runtime_global_variables WHERE variable_name LIKE 'mysql-\%' AND variable_name NOT IN ('mysql-threads')"; - if (GloVars.cluster_sync_interfaces == false) { - q += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_MYSQL); - } - q += " ORDER BY variable_name"; - admindb->execute_statement(q.c_str(), &error, &cols, &affected_rows, &resultset); - uint64_t hash1 = resultset->raw_checksum(); - uint32_t d32[2]; - char buf[20]; - memcpy(&d32, &hash1, sizeof(hash1)); - snprintf(buf, sizeof(buf), "0x%0X%0X", d32[0], d32[1]); - GloVars.checksums_values.mysql_variables.set_checksum(buf); - GloVars.checksums_values.mysql_variables.version++; - time_t t = time(NULL); - if (epoch != 0 && checksum != "" && 
GloVars.checksums_values.mysql_variables.checksum == checksum) { - GloVars.checksums_values.mysql_variables.epoch = epoch; - } - else { - GloVars.checksums_values.mysql_variables.epoch = t; + { + // NOTE: 'GloPTH->wrunlock()' should have been called before this point to avoid possible + // deadlocks. See issue #3847. + pthread_mutex_lock(&GloVars.checksum_mutex); + flush_pgsql_variables___runtime_to_database(admindb, false, false, false, true, true); + flush_GENERIC_variables__checksum__database_to_runtime("pgsql", checksum, epoch); + pthread_mutex_unlock(&GloVars.checksum_mutex); } - GloVars.epoch_version = t; - GloVars.generate_global_checksum(); - GloVars.checksums_values.updates_cnt++; - pthread_mutex_unlock(&GloVars.checksum_mutex); - delete resultset; - } - proxy_info( - "Computed checksum for 'LOAD MYSQL VARIABLES TO RUNTIME' was '%s', with epoch '%llu'\n", - GloVars.checksums_values.mysql_variables.checksum, GloVars.checksums_values.mysql_variables.epoch - ); - */ /** * @brief Check and warn if TCP keepalive is disabled for PostgreSQL connections. 
diff --git a/lib/Admin_Handler.cpp b/lib/Admin_Handler.cpp index 1b39d8f140..faa9be8bcd 100644 --- a/lib/Admin_Handler.cpp +++ b/lib/Admin_Handler.cpp @@ -3340,6 +3340,53 @@ void admin_session_handler(S* sess, void *_pa, PtrSize_t *pkt) { } } + if (sess->session_type == PROXYSQL_SESSION_ADMIN) { // no stats + string tn = ""; + if (!strncasecmp(CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS, query_no_space, strlen(CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS))) { + tn = "cluster_pgsql_servers"; + } else if (!strncasecmp(CLUSTER_QUERY_PGSQL_REPLICATION_HOSTGROUPS, query_no_space, strlen(CLUSTER_QUERY_PGSQL_REPLICATION_HOSTGROUPS))) { + tn = "pgsql_replication_hostgroups"; + } else if (!strncasecmp(CLUSTER_QUERY_PGSQL_HOSTGROUP_ATTRIBUTES, query_no_space, strlen(CLUSTER_QUERY_PGSQL_HOSTGROUP_ATTRIBUTES))) { + tn = "pgsql_hostgroup_attributes"; + } else if (!strncasecmp(CLUSTER_QUERY_PGSQL_SERVERS_V2, query_no_space, strlen(CLUSTER_QUERY_PGSQL_SERVERS_V2))) { + tn = "pgsql_servers_v2"; + } + if (tn != "") { + GloAdmin->pgsql_servers_wrlock(); + resultset = PgHGM->get_current_pgsql_table(tn); + GloAdmin->pgsql_servers_wrunlock(); + + if (resultset == nullptr) { + if (tn == "pgsql_servers_v2") { + const string query_empty_resultset { + string { PGHGM_GEN_CLUSTER_ADMIN_PGSQL_SERVERS } + " LIMIT 0" + }; + + char* error = NULL; + int cols = 0; + int affected_rows = 0; + proxy_debug(PROXY_DEBUG_MYSQL_CONNPOOL, 4, "%s\n", query); + GloAdmin->pgsql_servers_wrlock(); + GloAdmin->admindb->execute_statement(query_empty_resultset.c_str(), &error, &cols, &affected_rows, &resultset); + GloAdmin->pgsql_servers_wrunlock(); + } else { + resultset = PgHGM->dump_table_pgsql(tn); + } + + if (resultset) { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + delete resultset; + run_query = false; + goto __run_query; + } + } else { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + run_query = false; + goto __run_query; + } + } + } + if 
(!strncasecmp(CLUSTER_QUERY_MYSQL_USERS, query_no_space, strlen(CLUSTER_QUERY_MYSQL_USERS))) { if (sess->session_type == PROXYSQL_SESSION_ADMIN) { pthread_mutex_lock(&users_mutex); @@ -3353,6 +3400,19 @@ void admin_session_handler(S* sess, void *_pa, PtrSize_t *pkt) { } } + if (!strncasecmp(CLUSTER_QUERY_PGSQL_USERS, query_no_space, strlen(CLUSTER_QUERY_PGSQL_USERS))) { + if (sess->session_type == PROXYSQL_SESSION_ADMIN) { + pthread_mutex_lock(&users_mutex); + resultset = GloPgAuth->get_current_pgsql_users(); + pthread_mutex_unlock(&users_mutex); + if (resultset != nullptr) { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + run_query = false; + goto __run_query; + } + } + } + if (sess->session_type == PROXYSQL_SESSION_ADMIN) { // no stats if (!strncasecmp(CLUSTER_QUERY_MYSQL_QUERY_RULES, query_no_space, strlen(CLUSTER_QUERY_MYSQL_QUERY_RULES))) { GloMyQPro->wrlock(); @@ -3396,6 +3456,69 @@ void admin_session_handler(S* sess, void *_pa, PtrSize_t *pkt) { } } + if (sess->session_type == PROXYSQL_SESSION_ADMIN) { // no stats + if (!strncasecmp(CLUSTER_QUERY_PGSQL_QUERY_RULES, query_no_space, strlen(CLUSTER_QUERY_PGSQL_QUERY_RULES))) { + GloPgQPro->wrlock(); + resultset = GloPgQPro->get_current_query_rules_inner(); + if (resultset == NULL) { + GloPgQPro->wrunlock(); // unlock first + resultset = GloPgQPro->get_current_query_rules(); + if (resultset) { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + delete resultset; + run_query = false; + goto __run_query; + } + } else { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + // DO NOT DELETE: this is the inner resultset of Query_Processor + GloPgQPro->wrunlock(); + run_query = false; + goto __run_query; + } + } + if (!strncasecmp(CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING, query_no_space, strlen(CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING))) { + GloPgQPro->wrlock(); + resultset = 
GloPgQPro->get_current_query_rules_fast_routing_inner(); + if (resultset == NULL) { + GloPgQPro->wrunlock(); // unlock first + resultset = GloPgQPro->get_current_query_rules_fast_routing(); + if (resultset) { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + delete resultset; + run_query = false; + goto __run_query; + } + } else { + sess->SQLite3_to_MySQL(resultset, error, affected_rows, &sess->client_myds->myprot); + // DO NOT DELETE: this is the inner resultset of Query_Processor + GloPgQPro->wrunlock(); + run_query = false; + goto __run_query; + } + } + } + + if (!strncasecmp(CLUSTER_QUERY_PGSQL_VARIABLES, query_no_space, strlen(CLUSTER_QUERY_PGSQL_VARIABLES))) { + if (sess->session_type == PROXYSQL_SESSION_ADMIN) { + pthread_mutex_lock(&GloVars.checksum_mutex); + GloAdmin->flush_pgsql_variables___runtime_to_database(GloAdmin->admindb, false, false, false, true, true); + pthread_mutex_unlock(&GloVars.checksum_mutex); + + l_free(query_length, query); + string q { + "SELECT variable_name, variable_value FROM runtime_global_variables WHERE variable_name LIKE 'pgsql-%'" + }; + if (GloVars.cluster_sync_interfaces == false) { + q += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_PGSQL); + } + q += " ORDER BY variable_name"; + query = l_strdup(q.c_str()); + query_length = strlen(query) + 1; + goto __run_query; + } + } + // if the client simply executes: // SELECT COUNT(*) FROM runtime_mysql_query_rules_fast_routing // we just return the count diff --git a/lib/PgSQL_Variables_Validator.cpp b/lib/PgSQL_Variables_Validator.cpp index 7afe1b94d8..f6806ed00c 100644 --- a/lib/PgSQL_Variables_Validator.cpp +++ b/lib/PgSQL_Variables_Validator.cpp @@ -639,3 +639,33 @@ const pgsql_variable_validator pgsql_variable_validator_search_path = { .validate = &pgsql_variable_validate_search_path, .params = {} }; + +/** + * @brief Validates an integer variable for PostgreSQL. 
+ * + * This function checks whether the provided value is a valid integer + * representation and whether it falls within the range defined by the + * params argument. + * + * @param value The value to validate. + * @param params The parameter structure containing the integer range. + * @param session Unused parameter. + * @param transformed_value Output parameter; when non-null it is reset to nullptr, since no transformation is performed. + * @return true if the value is a valid integer within the specified range, false otherwise. + */ +bool pgsql_variable_validate_integer(const char* value, const params_t* params, PgSQL_Session* session, char** transformed_value) { + (void)session; + if (transformed_value) *transformed_value = nullptr; + char* end = nullptr; + errno = 0; + long num = strtol(value, &end, 10); + if (end == value || *end != '\0' || errno == ERANGE) return false; + if (num < params->int_range.min || num > params->int_range.max) return false; + return true; +} + +const pgsql_variable_validator pgsql_variable_validator_integer = { + .type = VARIABLE_TYPE_INT, + .validate = &pgsql_variable_validate_integer, + .params = {} +}; diff --git a/lib/ProxySQL_Admin.cpp index a0f2e73ed2..bfc7befa63 100644 --- a/lib/ProxySQL_Admin.cpp +++ b/lib/ProxySQL_Admin.cpp @@ -401,6 +401,10 @@ static char * admin_variables_names[]= { (char *)"cluster_mysql_variables_diffs_before_sync", (char *)"cluster_admin_variables_diffs_before_sync", (char *)"cluster_ldap_variables_diffs_before_sync", + (char *)"cluster_pgsql_variables_diffs_before_sync", + (char *)"cluster_pgsql_query_rules_diffs_before_sync", + (char *)"cluster_pgsql_servers_diffs_before_sync", + (char *)"cluster_pgsql_users_diffs_before_sync", (char *)"cluster_mysql_query_rules_save_to_disk", (char *)"cluster_mysql_servers_save_to_disk", (char *)"cluster_mysql_users_save_to_disk", @@ -408,13 +412,18 @@ static char * admin_variables_names[]= { (char *)"cluster_mysql_variables_save_to_disk", (char
*)"cluster_admin_variables_save_to_disk", (char *)"cluster_ldap_variables_save_to_disk", - (char *)"cluster_mysql_servers_sync_algorithm", + (char *)"cluster_pgsql_variables_save_to_disk", + (char *)"cluster_pgsql_query_rules_save_to_disk", + (char *)"cluster_pgsql_servers_save_to_disk", + (char *)"cluster_pgsql_users_save_to_disk", + (char *)"cluster_mysql_servers_sync_algorithm", (char *)"checksum_mysql_query_rules", (char *)"checksum_mysql_servers", (char *)"checksum_mysql_users", (char *)"checksum_mysql_variables", (char *)"checksum_admin_variables", (char *)"checksum_ldap_variables", + (char *)"checksum_pgsql_variables", (char *)"restapi_enabled", (char *)"restapi_port", (char *)"web_enabled", @@ -1491,7 +1500,7 @@ bool ProxySQL_Admin::GenericRefreshStatistics(const char *query_no_space, unsign runtime_mysql_ldap_mapping=true; refresh=true; } if (strstr(query_no_space, "runtime_pgsql_ldap_mapping")) { - runtime_mysql_ldap_mapping = true; refresh = true; + runtime_pgsql_ldap_mapping = true; refresh = true; } } if (strstr(query_no_space,"runtime_mysql_query_rules")) { @@ -2831,13 +2840,18 @@ ProxySQL_Admin::ProxySQL_Admin() : variables.cluster_mysql_variables_diffs_before_sync = 3; variables.cluster_admin_variables_diffs_before_sync = 3; variables.cluster_ldap_variables_diffs_before_sync = 3; - variables.cluster_mysql_servers_sync_algorithm = 1; + variables.cluster_pgsql_variables_diffs_before_sync = 3; + variables.cluster_pgsql_query_rules_diffs_before_sync = 3; + variables.cluster_pgsql_servers_diffs_before_sync = 3; + variables.cluster_pgsql_users_diffs_before_sync = 3; + variables.cluster_mysql_servers_sync_algorithm = 1; checksum_variables.checksum_mysql_query_rules = true; checksum_variables.checksum_mysql_servers = true; checksum_variables.checksum_mysql_users = true; checksum_variables.checksum_mysql_variables = true; checksum_variables.checksum_admin_variables = true; checksum_variables.checksum_ldap_variables = true; + 
checksum_variables.checksum_pgsql_variables = true; variables.cluster_mysql_query_rules_save_to_disk = true; variables.cluster_mysql_servers_save_to_disk = true; variables.cluster_mysql_users_save_to_disk = true; @@ -2845,7 +2859,11 @@ ProxySQL_Admin::ProxySQL_Admin() : variables.cluster_mysql_variables_save_to_disk = true; variables.cluster_admin_variables_save_to_disk = true; variables.cluster_ldap_variables_save_to_disk = true; - variables.stats_mysql_connection_pool = 60; + variables.cluster_pgsql_variables_save_to_disk = true; + variables.cluster_pgsql_query_rules_save_to_disk = true; + variables.cluster_pgsql_servers_save_to_disk = true; + variables.cluster_pgsql_users_save_to_disk = true; + variables.stats_mysql_connection_pool = 60; variables.stats_mysql_connections = 60; variables.stats_mysql_query_cache = 60; variables.stats_mysql_query_digest_to_disk = 0; @@ -3840,6 +3858,22 @@ char * ProxySQL_Admin::get_variable(char *name) { snprintf(intbuf, sizeof(intbuf),"%d",variables.cluster_ldap_variables_diffs_before_sync); return strdup(intbuf); } + if (!strcasecmp(name,"cluster_pgsql_query_rules_diffs_before_sync")) { + snprintf(intbuf, sizeof(intbuf),"%d",variables.cluster_pgsql_query_rules_diffs_before_sync); + return strdup(intbuf); + } + if (!strcasecmp(name,"cluster_pgsql_servers_diffs_before_sync")) { + snprintf(intbuf, sizeof(intbuf),"%d",variables.cluster_pgsql_servers_diffs_before_sync); + return strdup(intbuf); + } + if (!strcasecmp(name,"cluster_pgsql_users_diffs_before_sync")) { + snprintf(intbuf, sizeof(intbuf),"%d",variables.cluster_pgsql_users_diffs_before_sync); + return strdup(intbuf); + } + if (!strcasecmp(name,"cluster_pgsql_variables_diffs_before_sync")) { + snprintf(intbuf, sizeof(intbuf),"%d",variables.cluster_pgsql_variables_diffs_before_sync); + return strdup(intbuf); + } if (!strcasecmp(name,"cluster_mysql_servers_sync_algorithm")) { snprintf(intbuf, sizeof(intbuf), "%d", variables.cluster_mysql_servers_sync_algorithm); return strdup(intbuf); @@ -3865,6 +3899,18 @@ char *
ProxySQL_Admin::get_variable(char *name) { if (!strcasecmp(name,"cluster_ldap_variables_save_to_disk")) { return strdup((variables.cluster_ldap_variables_save_to_disk ? "true" : "false")); } + if (!strcasecmp(name,"cluster_pgsql_query_rules_save_to_disk")) { + return strdup((variables.cluster_pgsql_query_rules_save_to_disk ? "true" : "false")); + } + if (!strcasecmp(name,"cluster_pgsql_servers_save_to_disk")) { + return strdup((variables.cluster_pgsql_servers_save_to_disk ? "true" : "false")); + } + if (!strcasecmp(name,"cluster_pgsql_users_save_to_disk")) { + return strdup((variables.cluster_pgsql_users_save_to_disk ? "true" : "false")); + } + if (!strcasecmp(name,"cluster_pgsql_variables_save_to_disk")) { + return strdup((variables.cluster_pgsql_variables_save_to_disk ? "true" : "false")); + } if (!strcasecmp(name,"refresh_interval")) { snprintf(intbuf, sizeof(intbuf),"%d",variables.refresh_interval); return strdup(intbuf); @@ -3898,6 +3944,9 @@ char * ProxySQL_Admin::get_variable(char *name) { if (!strcasecmp(name,"checksum_ldap_variables")) { return strdup((checksum_variables.checksum_ldap_variables ? "true" : "false")); } + if (!strcasecmp(name,"checksum_pgsql_variables")) { + return strdup((checksum_variables.checksum_pgsql_variables ? "true" : "false")); + } if (!strcasecmp(name,"restapi_enabled")) { return strdup((variables.restapi_enabled ? "true" : "false")); } @@ -4285,11 +4334,11 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (intv >= 0 && intv <= 1000) { intv = checksum_variables.checksum_mysql_query_rules ? intv : 0; if (variables.cluster_mysql_query_rules_diffs_before_sync == 0 && intv != 0) { - proxy_info("Re-enabled previously disabled 'admin-cluster_admin_variables_diffs_before_sync'. Resetting global checksums to force Cluster re-sync.\n"); + proxy_info("Re-enabled previously disabled 'admin-cluster_mysql_query_rules_diffs_before_sync'. 
Resetting global checksums to force Cluster re-sync.\n"); GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_mysql_query_rules_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync, intv); + GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync = intv; return true; } else { return false; @@ -4304,7 +4353,7 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_mysql_servers_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_servers_diffs_before_sync, intv); + GloProxyCluster->cluster_mysql_servers_diffs_before_sync = intv; return true; } else { return false; @@ -4319,7 +4368,7 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_mysql_users_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_users_diffs_before_sync, intv); + GloProxyCluster->cluster_mysql_users_diffs_before_sync = intv; return true; } else { return false; @@ -4333,7 +4382,7 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_proxysql_servers_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_proxysql_servers_diffs_before_sync, intv); + GloProxyCluster->cluster_proxysql_servers_diffs_before_sync = intv; return true; } else { return false; @@ -4348,7 +4397,7 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_mysql_variables_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_variables_diffs_before_sync, intv); + GloProxyCluster->cluster_mysql_variables_diffs_before_sync = intv; return true; } else { return false; @@ 
-4363,7 +4412,7 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_admin_variables_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_admin_variables_diffs_before_sync, intv); + GloProxyCluster->cluster_admin_variables_diffs_before_sync = intv; return true; } else { return false; @@ -4378,7 +4427,67 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this GloProxyCluster->Reset_Global_Checksums(lock); } variables.cluster_ldap_variables_diffs_before_sync=intv; - __sync_lock_test_and_set(&GloProxyCluster->cluster_ldap_variables_diffs_before_sync, intv); + GloProxyCluster->cluster_ldap_variables_diffs_before_sync = intv; + return true; + } else { + return false; + } + } + if (!strcasecmp(name,"cluster_pgsql_query_rules_diffs_before_sync")) { + int intv=atoi(value); + if (intv >= 0 && intv <= 1000) { + intv = checksum_variables.checksum_pgsql_variables ? intv : 0; + if (variables.cluster_pgsql_query_rules_diffs_before_sync == 0 && intv != 0) { + proxy_info("Re-enabled previously disabled 'admin-cluster_pgsql_query_rules_diffs_before_sync'. Resetting global checksums to force Cluster re-sync.\n"); + GloProxyCluster->Reset_Global_Checksums(lock); + } + variables.cluster_pgsql_query_rules_diffs_before_sync=intv; + GloProxyCluster->cluster_pgsql_query_rules_diffs_before_sync = intv; + return true; + } else { + return false; + } + } + if (!strcasecmp(name,"cluster_pgsql_servers_diffs_before_sync")) { + int intv=atoi(value); + if (intv >= 0 && intv <= 1000) { + intv = checksum_variables.checksum_pgsql_variables ? intv : 0; + if (variables.cluster_pgsql_servers_diffs_before_sync == 0 && intv != 0) { + proxy_info("Re-enabled previously disabled 'admin-cluster_pgsql_servers_diffs_before_sync'. 
Resetting global checksums to force Cluster re-sync.\n"); + GloProxyCluster->Reset_Global_Checksums(lock); + } + variables.cluster_pgsql_servers_diffs_before_sync=intv; + GloProxyCluster->cluster_pgsql_servers_diffs_before_sync = intv; + return true; + } else { + return false; + } + } + if (!strcasecmp(name,"cluster_pgsql_users_diffs_before_sync")) { + int intv=atoi(value); + if (intv >= 0 && intv <= 1000) { + intv = checksum_variables.checksum_pgsql_variables ? intv : 0; + if (variables.cluster_pgsql_users_diffs_before_sync == 0 && intv != 0) { + proxy_info("Re-enabled previously disabled 'admin-cluster_pgsql_users_diffs_before_sync'. Resetting global checksums to force Cluster re-sync.\n"); + GloProxyCluster->Reset_Global_Checksums(lock); + } + variables.cluster_pgsql_users_diffs_before_sync=intv; + GloProxyCluster->cluster_pgsql_users_diffs_before_sync = intv; + return true; + } else { + return false; + } + } + if (!strcasecmp(name,"cluster_pgsql_variables_diffs_before_sync")) { + int intv=atoi(value); + if (intv >= 0 && intv <= 1000) { + intv = checksum_variables.checksum_pgsql_variables ? intv : 0; + if (variables.cluster_pgsql_variables_diffs_before_sync == 0 && intv != 0) { + proxy_info("Re-enabled previously disabled 'admin-cluster_pgsql_variables_diffs_before_sync'. 
Resetting global checksums to force Cluster re-sync.\n"); + GloProxyCluster->Reset_Global_Checksums(lock); + } + variables.cluster_pgsql_variables_diffs_before_sync=intv; + GloProxyCluster->cluster_pgsql_variables_diffs_before_sync = intv; return true; } else { return false; @@ -4583,6 +4692,62 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this } return rt; } + if (!strcasecmp(name,"cluster_pgsql_query_rules_save_to_disk")) { + bool rt = false; + if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) { + variables.cluster_pgsql_query_rules_save_to_disk=true; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_query_rules_save_to_disk, true); + return true; + } + if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { + variables.cluster_pgsql_query_rules_save_to_disk=false; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_query_rules_save_to_disk, false); + return true; + } + return rt; + } + if (!strcasecmp(name,"cluster_pgsql_servers_save_to_disk")) { + bool rt = false; + if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) { + variables.cluster_pgsql_servers_save_to_disk=true; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_servers_save_to_disk, true); + return true; + } + if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { + variables.cluster_pgsql_servers_save_to_disk=false; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_servers_save_to_disk, false); + return true; + } + return rt; + } + if (!strcasecmp(name,"cluster_pgsql_users_save_to_disk")) { + bool rt = false; + if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) { + variables.cluster_pgsql_users_save_to_disk=true; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_users_save_to_disk, true); + return true; + } + if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { + variables.cluster_pgsql_users_save_to_disk=false; + rt = 
__sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_users_save_to_disk, false); + return true; + } + return rt; + } + if (!strcasecmp(name,"cluster_pgsql_variables_save_to_disk")) { + bool rt = false; + if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) { + variables.cluster_pgsql_variables_save_to_disk=true; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_variables_save_to_disk, true); + return true; + } + if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { + variables.cluster_pgsql_variables_save_to_disk=false; + rt = __sync_lock_test_and_set(&GloProxyCluster->cluster_pgsql_variables_save_to_disk, false); + return true; + } + return rt; + } if (!strcasecmp(name,"checksum_mysql_query_rules")) { if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) { checksum_variables.checksum_mysql_query_rules=true; @@ -4591,8 +4756,10 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { checksum_variables.checksum_mysql_query_rules=false; variables.cluster_mysql_query_rules_diffs_before_sync = 0; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync, 0); - proxy_warning("Disabling deprecated 'admin-checksum_mysql_query_rules', setting 'admin-cluster_mysql_query_rules_diffs_before_sync=0'\n"); + GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync = 0; + variables.cluster_pgsql_query_rules_diffs_before_sync = 0; + GloProxyCluster->cluster_pgsql_query_rules_diffs_before_sync = 0; + proxy_warning("Disabling deprecated 'admin-checksum_mysql_query_rules', setting 'admin-cluster_mysql_query_rules_diffs_before_sync=0' and 'admin-cluster_pgsql_query_rules_diffs_before_sync=0'\n"); return true; } return false; @@ -4605,8 +4772,10 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { 
checksum_variables.checksum_mysql_servers=false; variables.cluster_mysql_servers_diffs_before_sync = 0; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_servers_diffs_before_sync, 0); - proxy_warning("Disabling deprecated 'admin-checksum_mysql_servers', setting 'admin-cluster_mysql_servers_diffs_before_sync=0'\n"); + GloProxyCluster->cluster_mysql_servers_diffs_before_sync = 0; + variables.cluster_pgsql_servers_diffs_before_sync = 0; + GloProxyCluster->cluster_pgsql_servers_diffs_before_sync = 0; + proxy_warning("Disabling deprecated 'admin-checksum_mysql_servers', setting 'admin-cluster_mysql_servers_diffs_before_sync=0' and 'admin-cluster_pgsql_servers_diffs_before_sync=0'\n"); return true; } @@ -4620,8 +4789,10 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { checksum_variables.checksum_mysql_users=false; variables.cluster_mysql_users_diffs_before_sync = 0; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_users_diffs_before_sync, 0); - proxy_warning("Disabling deprecated 'admin-checksum_mysql_users', setting 'admin-cluster_mysql_users_diffs_before_sync=0'\n"); + GloProxyCluster->cluster_mysql_users_diffs_before_sync = 0; + variables.cluster_pgsql_users_diffs_before_sync = 0; + GloProxyCluster->cluster_pgsql_users_diffs_before_sync = 0; + proxy_warning("Disabling deprecated 'admin-checksum_mysql_users', setting 'admin-cluster_mysql_users_diffs_before_sync=0' and 'admin-cluster_pgsql_users_diffs_before_sync=0'\n"); return true; } return false; @@ -4634,8 +4805,10 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { checksum_variables.checksum_mysql_variables=false; variables.cluster_mysql_variables_diffs_before_sync = 0; - __sync_lock_test_and_set(&GloProxyCluster->cluster_mysql_variables_diffs_before_sync, 0); - proxy_warning("Disabling deprecated 
'admin-checksum_mysql_variables', setting 'admin-cluster_mysql_variables_diffs_before_sync=0'\n"); + GloProxyCluster->cluster_mysql_variables_diffs_before_sync = 0; + variables.cluster_pgsql_variables_diffs_before_sync = 0; + GloProxyCluster->cluster_pgsql_variables_diffs_before_sync = 0; + proxy_warning("Disabling deprecated 'admin-checksum_mysql_variables', setting 'admin-cluster_mysql_variables_diffs_before_sync=0' and 'admin-cluster_pgsql_variables_diffs_before_sync=0'\n"); return true; } return false; @@ -4648,7 +4821,7 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { checksum_variables.checksum_admin_variables=false; variables.cluster_admin_variables_diffs_before_sync = 0; - __sync_lock_test_and_set(&GloProxyCluster->cluster_admin_variables_diffs_before_sync, 0); + GloProxyCluster->cluster_admin_variables_diffs_before_sync = 0; proxy_warning("Disabling deprecated 'admin-checksum_admin_variables', setting 'admin-cluster_admin_variables_diffs_before_sync=0'\n"); return true; } @@ -4662,12 +4835,32 @@ bool ProxySQL_Admin::set_variable(char *name, char *value, bool lock) { // this if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { checksum_variables.checksum_ldap_variables=false; variables.cluster_ldap_variables_diffs_before_sync = 0; - __sync_lock_test_and_set(&GloProxyCluster->cluster_ldap_variables_diffs_before_sync, 0); + GloProxyCluster->cluster_ldap_variables_diffs_before_sync = 0; proxy_warning("Disabling deprecated 'admin-checksum_ldap_variables', setting 'admin-cluster_ldap_variables_diffs_before_sync=0'\n"); return true; } return false; } + if (!strcasecmp(name,"checksum_pgsql_variables")) { + if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) { + checksum_variables.checksum_pgsql_variables=true; + return true; + } + if (strcasecmp(value,"false")==0 || strcasecmp(value,"0")==0) { + checksum_variables.checksum_pgsql_variables=false; + 
variables.cluster_pgsql_variables_diffs_before_sync = 0;
+			variables.cluster_pgsql_query_rules_diffs_before_sync = 0;
+			variables.cluster_pgsql_servers_diffs_before_sync = 0;
+			variables.cluster_pgsql_users_diffs_before_sync = 0;
+			GloProxyCluster->cluster_pgsql_variables_diffs_before_sync = 0;
+			GloProxyCluster->cluster_pgsql_query_rules_diffs_before_sync = 0;
+			GloProxyCluster->cluster_pgsql_servers_diffs_before_sync = 0;
+			GloProxyCluster->cluster_pgsql_users_diffs_before_sync = 0;
+			proxy_warning("Disabling deprecated 'admin-checksum_pgsql_variables', setting all 'admin-cluster_pgsql_*_diffs_before_sync=0'\n");
+			return true;
+		}
+		return false;
+	}
 	if (!strcasecmp(name,"read_only")) {
 		if (strcasecmp(value,"true")==0 || strcasecmp(value,"1")==0) {
 			variables.admin_read_only=true;
@@ -5840,6 +6033,14 @@ void ProxySQL_Admin::flush_ldap_variables__from_memory_to_disk() {
 	admindb->wrunlock();
 }
 
+void ProxySQL_Admin::flush_pgsql_variables__from_memory_to_disk() {
+	admindb->wrlock();
+	admindb->execute("PRAGMA foreign_keys = OFF");
+	admindb->execute("INSERT OR REPLACE INTO disk.global_variables SELECT * FROM main.global_variables WHERE variable_name LIKE 'pgsql-%'");
+	admindb->execute("PRAGMA foreign_keys = ON");
+	admindb->wrunlock();
+}
+
 void ProxySQL_Admin::__attach_db(SQLite3DB *db1, SQLite3DB *db2, char *alias) {
 	const char *a="ATTACH DATABASE '%s' AS %s";
 	int l=strlen(a)+strlen(db2->get_url())+strlen(alias)+5;
@@ -6613,6 +6814,14 @@ void ProxySQL_Admin::dump_checksums_values_table() {
 	SAFE_SQLITE3_STEP2(statement1);
 	rc = (*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, admindb);
 	rc = (*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, admindb);
+
+	rc = (*proxy_sqlite3_bind_text)(statement1, 1, "pgsql_servers_v2", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, admindb);
+	rc = (*proxy_sqlite3_bind_int64)(statement1, 2, GloVars.checksums_values.pgsql_servers_v2.version); ASSERT_SQLITE_OK(rc, admindb);
+	rc = 
(*proxy_sqlite3_bind_int64)(statement1, 3, GloVars.checksums_values.pgsql_servers_v2.epoch); ASSERT_SQLITE_OK(rc, admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 4, GloVars.checksums_values.pgsql_servers_v2.checksum, -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, admindb); + SAFE_SQLITE3_STEP2(statement1); + rc = (*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, admindb); + rc = (*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, admindb); // @@ -7524,7 +7733,7 @@ void ProxySQL_Admin::save_pgsql_servers_runtime_to_database(bool _runtime) { rc = (*proxy_sqlite3_bind_int64)(statement32, (idx * 11) + 1, atoi(r1->fields[0])); ASSERT_SQLITE_OK(rc, admindb); rc = (*proxy_sqlite3_bind_text)(statement32, (idx * 11) + 2, r1->fields[1], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, admindb); rc = (*proxy_sqlite3_bind_int64)(statement32, (idx * 11) + 3, atoi(r1->fields[2])); ASSERT_SQLITE_OK(rc, admindb); - rc = (*proxy_sqlite3_bind_text)(statement32, (idx * 11) + 4, (_runtime ? r1->fields[3] : (strcmp(r1->fields[4], "SHUNNED") == 0 ? "ONLINE" : r1->fields[3])), -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, admindb); + rc = (*proxy_sqlite3_bind_text)(statement32, (idx * 11) + 4, (_runtime ? r1->fields[3] : (strcmp(r1->fields[3], "SHUNNED") == 0 ? 
"ONLINE" : r1->fields[3])), -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, admindb);
 		rc = (*proxy_sqlite3_bind_int64)(statement32, (idx * 11) + 5, atoi(r1->fields[4])); ASSERT_SQLITE_OK(rc, admindb);
 		rc = (*proxy_sqlite3_bind_int64)(statement32, (idx * 11) + 6, atoi(r1->fields[5])); ASSERT_SQLITE_OK(rc, admindb);
 		rc = (*proxy_sqlite3_bind_int64)(statement32, (idx * 11) + 7, atoi(r1->fields[6])); ASSERT_SQLITE_OK(rc, admindb);
diff --git a/lib/ProxySQL_Cluster.cpp b/lib/ProxySQL_Cluster.cpp
index b1edbbfabe..451d6cae37 100644
--- a/lib/ProxySQL_Cluster.cpp
+++ b/lib/ProxySQL_Cluster.cpp
@@ -1,4 +1,6 @@
 #include
+#include
+#include
 #include "proxysql.h"
 #include "proxysql_utils.h"
@@ -13,6 +15,8 @@
 #include "ProxySQL_Cluster.hpp"
 #include "MySQL_Authentication.hpp"
 #include "MySQL_LDAP_Authentication.hpp"
+#include "PgSQL_Authentication.h"
+#include "PgSQL_Query_Processor.h"
 
 #ifdef DEBUG
 #define DEB "_DEBUG"
@@ -23,6 +27,75 @@
 
 #define QUERY_ERROR_RATE 20
 
+// Module name constants
+namespace ClusterModules {
+	const char* const MYSQL_QUERY_RULES = "mysql_query_rules";
+	const char* const MYSQL_SERVERS = "mysql_servers";
+	const char* const MYSQL_SERVERS_V2 = "mysql_servers_v2";
+	const char* const MYSQL_USERS = "mysql_users";
+	const char* const MYSQL_VARIABLES = "mysql_variables";
+	const char* const PROXYSQL_SERVERS = "proxysql_servers";
+	const char* const LDAP_VARIABLES = "ldap_variables";
+	const char* const PGSQL_QUERY_RULES = "pgsql_query_rules";
+	const char* const PGSQL_SERVERS = "pgsql_servers";
+	const char* const PGSQL_SERVERS_V2 = "pgsql_servers_v2";
+	const char* const PGSQL_USERS = "pgsql_users";
+	const char* const PGSQL_VARIABLES = "pgsql_variables";
+}
+
+// Error message templates
+namespace ErrorMessages {
+	const char* const CLUSTER_FETCH_FAILED = "Fetching %s from peer %s:%d failed: %s";
+	const char* const CLUSTER_FETCH_STARTED = "Fetching %s from peer %s:%d started. 
Expected checksum: %s";
+	const char* const CLUSTER_FETCH_COMPLETED = "Fetching %s from peer %s:%d completed";
+	const char* const MEMORY_ALLOCATION_FAILED = "Memory allocation failed in %s for %s\n";
+	const char* const MYSQL_INIT_FAILED = "Unable to run mysql_init()\n";
+	const char* const DIFFS_BEFORE_SYNC_FORMAT = "Not syncing due to 'admin-cluster_%s_diffs_before_sync=0'.\n";
+}
+
+// Runtime command templates
+namespace RuntimeCommands {
+	const char* const LOAD_ADMIN_VARIABLES = "LOAD ADMIN VARIABLES TO RUNTIME";
+	const char* const LOAD_MYSQL_VARIABLES = "LOAD MYSQL VARIABLES TO RUNTIME";
+	const char* const LOAD_LDAP_VARIABLES = "LOAD LDAP VARIABLES TO RUNTIME";
+	const char* const LOAD_PGSQL_VARIABLES = "LOAD PGSQL VARIABLES TO RUNTIME";
+}
+
+// Debug/Info message templates
+namespace DebugMessages {
+	const char* const CLUSTER_PEER_CONNECTED = "Connecting to peer %s:%d\n";
+	const char* const CLUSTER_PEER_THREAD_STARTED = "Thread started for peer %s:%d\n";
+	const char* const CLUSTER_PEER_THREAD_STARTING = "Cluster: starting thread for peer %s:%d\n";
+	const char* const CLUSTER_PEER_VERSION_MISMATCH = "Cluster: different ProxySQL version with peer %s:%d . Remote: %s . Self: %s\n";
+	const char* const CLUSTER_PEER_UUID_SENT = "Sending CLUSTER_NODE_UUID %s to peer %s:%d\n";
+	const char* const CLUSTER_PEER_UUID_SENT_INFO = "Cluster: sending CLUSTER_NODE_UUID %s to peer %s:%d\n";
+	const char* const CHECKSUM_DIFFERENT = "Checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s";
+	const char* const CHECKSUM_MATCHES = "Checksum for %s from peer %s:%d matches with local checksum %s";
+	const char* const COMPUTED_CHECKSUM = "Computed checksum for %s from peer %s:%d : %s";
+}
+
+// SQL query constants
+namespace SQLQueries {
+	const char* const GLOBAL_CHECKSUM = "SELECT GLOBAL_CHECKSUM()";
+	const char* const STATS_MYSQL_GLOBAL = "SELECT * FROM stats_mysql_global ORDER BY Variable_Name";
+	const char* const RUNTIME_CHECKSUMS_VALUES = "SELECT * FROM runtime_checksums_values ORDER BY name";
+	const char* const VERSION = "SELECT @@version";
+	const char* const DELETE_MYSQL_QUERY_RULES = "DELETE FROM mysql_query_rules";
+	const char* const DELETE_MYSQL_USERS = "DELETE FROM mysql_users";
+	const char* const DELETE_MYSQL_LDAP_MAPPING = "DELETE FROM mysql_ldap_mapping";
+	const char* const DELETE_MYSQL_SERVERS = "DELETE FROM mysql_servers";
+	const char* const DELETE_MYSQL_QUERY_RULES_FAST_ROUTING = "DELETE FROM mysql_query_rules_fast_routing";
+	const char* const DELETE_MYSQL_REPLICATION_HOSTGROUPS = "DELETE FROM mysql_replication_hostgroups";
+	const char* const DELETE_MYSQL_GROUP_REPLICATION_HOSTGROUPS = "DELETE FROM mysql_group_replication_hostgroups";
+	const char* const DELETE_MYSQL_GALERA_HOSTGROUPS = "DELETE FROM mysql_galera_hostgroups";
+	const char* const DELETE_MYSQL_AWS_AURORA_HOSTGROUPS = "DELETE FROM mysql_aws_aurora_hostgroups";
+	const char* const DELETE_MYSQL_HOSTGROUP_ATTRIBUTES = "DELETE FROM mysql_hostgroup_attributes";
+	const char* const DELETE_MYSQL_SERVERS_SSL_PARAMS = "DELETE FROM mysql_servers_ssl_params";
+	const char* const DELETE_PGSQL_SERVERS = "DELETE FROM pgsql_servers";
+	const char* const DELETE_PGSQL_REPLICATION_HOSTGROUPS = "DELETE FROM pgsql_replication_hostgroups";
+	const char* const DELETE_PGSQL_HOSTGROUP_ATTRIBUTES = "DELETE FROM pgsql_hostgroup_attributes";
+}
+
 #define SAFE_SQLITE3_STEP(_stmt) do {\
 	do {\
 		rc=(*proxy_sqlite3_step)(_stmt);\
@@ -39,10 +112,55 @@
 using std::string;
 
 static char *NODE_COMPUTE_DELIMITER=(char *)"-gtyw23a-"; // a random string used for hashing
 
+/**
+ * Safely updates peer information by freeing existing allocations and creating new ones
+ *
+ * @param existing_hostname Pointer to existing hostname allocation (may be NULL)
+ * @param existing_ip_addr Pointer to existing IP address allocation (may be NULL)
+ * @param new_hostname New hostname to allocate (may be NULL)
+ * @param new_ip_addr New IP address to allocate (may be NULL)
+ *
+ * @return Returns true if allocation succeeded, false on memory allocation failure
+ */
+static bool safe_update_peer_info(char** existing_hostname, char** existing_ip_addr,
+		const char* new_hostname, const char* new_ip_addr) {
+	// Free existing allocations
+	if (*existing_hostname) {
+		free(*existing_hostname);
+		*existing_hostname = NULL;
+	}
+	if (*existing_ip_addr) {
+		free(*existing_ip_addr);
+		*existing_ip_addr = NULL;
+	}
+
+	// Allocate new values
+	if (new_hostname) {
+		*existing_hostname = strdup(new_hostname);
+		if (*existing_hostname == NULL) {
+			return false; // Memory allocation failed
+		}
+	}
+	if (new_ip_addr) {
+		*existing_ip_addr = strdup(new_ip_addr);
+		if (*existing_ip_addr == NULL) {
+			if (*existing_hostname) {
+				free(*existing_hostname);
+				*existing_hostname = NULL;
+			}
+			return false; // Memory allocation failed
+		}
+	}
+
+	return true;
+}
+
 extern ProxySQL_Cluster * GloProxyCluster;
 extern ProxySQL_Admin *GloAdmin;
 extern MySQL_LDAP_Authentication* GloMyLdapAuth;
 extern MySQL_Authentication* GloMyAuth;
+extern PgSQL_Authentication *GloPgAuth;
+extern PgSQL_Query_Processor* GloPgQPro;
 
 void * 
ProxySQL_Cluster_Monitor_thread(void *args) {
 	pthread_attr_t thread_attr;
@@ -400,439 +518,315 @@ ProxySQL_Node_Metrics * ProxySQL_Node_Entry::get_metrics_prev() {
 	return m;
 }
 
+/**
+ * @brief Processes checksum updates from a cluster peer and triggers synchronization when needed.
+ *
+ * This function is the core of ProxySQL's cluster monitoring and synchronization system. It processes
+ * checksum data received from peer nodes, compares it with local checksums, and initiates synchronization
+ * when differences are detected and thresholds are met.
+ *
+ * The function processes checksums for the following modules:
+ * MySQL modules:
+ * - admin_variables: ProxySQL admin configuration
+ * - mysql_query_rules: MySQL query routing rules
+ * - mysql_servers_v2: MySQL server configuration
+ * - runtime_mysql_servers: Runtime MySQL server status
+ * - mysql_users: MySQL user credentials
+ * - mysql_variables: MySQL server variables
+ * - ldap_variables: LDAP authentication settings
+ * - proxysql_servers: ProxySQL cluster node configuration
+ *
+ * PostgreSQL modules:
+ * - pgsql_query_rules: PostgreSQL query routing rules
+ * - pgsql_servers_v2: PostgreSQL server configuration
+ * - runtime_pgsql_servers: Runtime PostgreSQL server status
+ * - pgsql_users: PostgreSQL user credentials
+ *
+ * Synchronization Logic:
+ * 1. For each module, it compares local and peer checksums
+ * 2. If checksums differ, it checks the epoch timestamp to determine recency
+ * 3. If the peer is more recent and diff_check exceeds configured thresholds, sync is triggered
+ * 4. Conflict resolution handles cases where multiple nodes have the same epoch
+ * 5. Appropriate pull functions are called to fetch updated configuration
+ *
+ * Thresholds and Configuration:
+ * Each module has a corresponding *_diffs_before_sync variable that controls how many
+ * consecutive differences must be observed before triggering synchronization. This prevents
+ * excessive network traffic due to transient changes.
+ *
+ * @param _r MySQL result set containing checksum data from a peer node
+ *
+ * @note This function is thread-safe and requires GloVars.checksum_mutex to be held
+ * @note The function logs detailed information about checksum changes and synchronization decisions
+ * @note Metrics are updated to track successful and failed synchronization attempts
+ * @see ProxySQL_Cluster::pull_mysql_query_rules_from_peer()
+ * @see ProxySQL_Cluster::pull_pgsql_query_rules_from_peer()
+ * @see ProxySQL_Cluster::pull_pgsql_users_from_peer()
+ * @see ProxySQL_Cluster::pull_pgsql_servers_v2_from_peer()
+ * @see cluster_*_diffs_before_sync variables
+ */
+/**
+ * @brief Helper function to process checksum updates for cluster components
+ *
+ * @param row MySQL row containing checksum data (row[0] contains component name)
+ * @param checksum Reference to the node's checksum value
+ * @param global_checksum Reference to the global checksum value
+ * @param now Current timestamp
+ * @param diff_flag True when the module's diffs_before_sync threshold is non-zero (sync enabled); false when sync is disabled
+ * @param diff_sync_msg Message logged when sync is disabled
+ * @param hostname Peer hostname for logging
+ * @param port Peer port for logging
+ */
+static void process_component_checksum(
+	MYSQL_ROW row,
+	ProxySQL_Checksum_Value_2& checksum,
+	ProxySQL_Checksum_Value& global_checksum,
+	time_t now,
+	bool diff_flag,
+	const char* diff_sync_msg,
+	const char* hostname,
+	int port
+) {
+	checksum.version = atoll(row[1]);
+	checksum.epoch = atoll(row[2]);
+	checksum.last_updated = now;
+
+	if (strcmp(checksum.checksum, row[3])) {
+		strcpy(checksum.checksum, row[3]);
+		checksum.last_changed = now;
+		checksum.diff_check = 1;
+		const char* no_sync_message = NULL;
+
+		if (diff_flag) {
+			no_sync_message = "Not syncing yet ...\n";
+		} else {
+			no_sync_message = diff_sync_msg;
+		}
+
+		proxy_info(
+			"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
+			row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
+		);
+
+		if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
+			proxy_info(
+				"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
+				row[0], hostname, port, global_checksum.checksum
+			);
+		}
+	} else {
+		checksum.diff_check++;
+		proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n",
+			row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, global_checksum.checksum, checksum.diff_check);
+	}
+
+	if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
+		checksum.diff_check = 0;
+		proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for %s from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n",
+			row[0], hostname, port, global_checksum.checksum);
+	}
+}
+
 void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 	MYSQL_ROW row;
 	time_t now = time(NULL);
-	// Fetch the cluster_*_diffs_before_sync variables to ensure consistency at local scope
-	unsigned int diff_av = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_admin_variables_diffs_before_sync,0);
-	unsigned int diff_mqr = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync,0);
-	unsigned int diff_ms = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_servers_diffs_before_sync,0);
-	unsigned int diff_mu = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_users_diffs_before_sync,0);
-	unsigned int diff_ps = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_proxysql_servers_diffs_before_sync,0);
-	unsigned int diff_mv = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_variables_diffs_before_sync,0);
-	unsigned int diff_lv = (unsigned 
int)__sync_fetch_and_add(&GloProxyCluster->cluster_ldap_variables_diffs_before_sync,0); - + pthread_mutex_lock(&GloVars.checksum_mutex); - while ( _r && (row = mysql_fetch_row(_r))) { - if (strcmp(row[0],"admin_variables")==0) { - ProxySQL_Checksum_Value_2& checksum = checksums_values.admin_variables; - ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.admin_variables; - checksums_values.admin_variables.version = atoll(row[1]); - checksums_values.admin_variables.epoch = atoll(row[2]); - checksums_values.admin_variables.last_updated = now; - if (strcmp(checksums_values.admin_variables.checksum, row[3])) { - strcpy(checksums_values.admin_variables.checksum, row[3]); - checksums_values.admin_variables.last_changed = now; - checksums_values.admin_variables.diff_check = 1; - const char* no_sync_message = NULL; - - if (diff_av) { - no_sync_message = "Not syncing yet ...\n"; - } else { - no_sync_message = "Not syncing due to 'admin-cluster_admin_variables_diffs_before_sync=0'.\n"; - } + // Data-driven mapping of module names to their checksum fields and sync configuration + struct ChecksumModuleInfo { + const char* module_name; + ProxySQL_Checksum_Value_2* local_checksum; + ProxySQL_Checksum_Value* global_checksum; + std::atomic ProxySQL_Cluster::*diff_member; - proxy_info( - "Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . 
%s", - row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message - ); + // Sync decision fields (used only for modules that need special sync processing) + const char* sync_command; // "admin", "mysql", "ldap" for pull_global_variables_from_peer() + const char* load_runtime_command; // Command name for warning messages + int sync_conflict_counter; // Counter for epoch conflicts + int sync_delayed_counter; // Counter for version=1 delays - if (strcmp(checksum.checksum, global_checksum.checksum) == 0) { - proxy_info( - "Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n", - row[0], hostname, port, global_checksum.checksum - ); - } - } else { - checksums_values.admin_variables.diff_check++; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for admin_variables from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.admin_variables.version, checksums_values.admin_variables.epoch, - checksums_values.admin_variables.checksum, GloVars.checksums_values.admin_variables.checksum, checksums_values.admin_variables.diff_check); - } - if (strcmp(checksums_values.admin_variables.checksum, GloVars.checksums_values.admin_variables.checksum) == 0) { - checksums_values.admin_variables.diff_check = 0; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for admin_variables from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.admin_variables.checksum); - } - continue; - } - if (strcmp(row[0],"mysql_query_rules")==0) { - ProxySQL_Checksum_Value_2& checksum = checksums_values.mysql_query_rules; - ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.mysql_query_rules; - checksums_values.mysql_query_rules.version = atoll(row[1]); - checksums_values.mysql_query_rules.epoch = atoll(row[2]); - checksums_values.mysql_query_rules.last_updated = now; - 
if (strcmp(checksums_values.mysql_query_rules.checksum, row[3])) { - strcpy(checksums_values.mysql_query_rules.checksum, row[3]); - checksums_values.mysql_query_rules.last_changed = now; - checksums_values.mysql_query_rules.diff_check = 1; - const char* no_sync_message = NULL; - - if (diff_mqr) { - no_sync_message = "Not syncing yet ...\n"; - } else { - no_sync_message = "Not syncing due to 'admin-cluster_mysql_query_rules_diffs_before_sync=0'.\n"; - } + bool (*enabled_check)(); // Function to check if module is enabled (nullptr for always enabled) + }; - proxy_info( - "Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s", - row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message - ); + // Initialize all supported modules with their respective checksum field pointers + ChecksumModuleInfo modules[] = { + {"admin_variables", &checksums_values.admin_variables, &GloVars.checksums_values.admin_variables, &ProxySQL_Cluster::cluster_admin_variables_diffs_before_sync, + "admin", RuntimeCommands::LOAD_ADMIN_VARIABLES, + static_cast(p_cluster_counter::sync_conflict_admin_variables_share_epoch), + static_cast(p_cluster_counter::sync_delayed_admin_variables_version_one), nullptr}, + {ClusterModules::MYSQL_QUERY_RULES, &checksums_values.mysql_query_rules, &GloVars.checksums_values.mysql_query_rules, &ProxySQL_Cluster::cluster_mysql_query_rules_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::MYSQL_SERVERS, &checksums_values.mysql_servers, &GloVars.checksums_values.mysql_servers, &ProxySQL_Cluster::cluster_mysql_servers_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::MYSQL_SERVERS_V2, &checksums_values.mysql_servers_v2, &GloVars.checksums_values.mysql_servers_v2, &ProxySQL_Cluster::cluster_mysql_servers_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::MYSQL_USERS, &checksums_values.mysql_users, 
&GloVars.checksums_values.mysql_users, &ProxySQL_Cluster::cluster_mysql_users_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::MYSQL_VARIABLES, &checksums_values.mysql_variables, &GloVars.checksums_values.mysql_variables, &ProxySQL_Cluster::cluster_mysql_variables_diffs_before_sync, + "mysql", RuntimeCommands::LOAD_MYSQL_VARIABLES, + static_cast(p_cluster_counter::sync_conflict_mysql_variables_share_epoch), + static_cast(p_cluster_counter::sync_delayed_mysql_variables_version_one), nullptr}, + {ClusterModules::PROXYSQL_SERVERS, &checksums_values.proxysql_servers, &GloVars.checksums_values.proxysql_servers, &ProxySQL_Cluster::cluster_proxysql_servers_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::LDAP_VARIABLES, &checksums_values.ldap_variables, &GloVars.checksums_values.ldap_variables, &ProxySQL_Cluster::cluster_ldap_variables_diffs_before_sync, + "ldap", RuntimeCommands::LOAD_LDAP_VARIABLES, + static_cast(p_cluster_counter::sync_conflict_ldap_variables_share_epoch), + static_cast(p_cluster_counter::sync_delayed_ldap_variables_version_one), + []() { return GloMyLdapAuth != nullptr; }}, + {ClusterModules::PGSQL_QUERY_RULES, &checksums_values.pgsql_query_rules, &GloVars.checksums_values.pgsql_query_rules, &ProxySQL_Cluster::cluster_pgsql_query_rules_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::PGSQL_SERVERS, &checksums_values.pgsql_servers, &GloVars.checksums_values.pgsql_servers, &ProxySQL_Cluster::cluster_pgsql_servers_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::PGSQL_SERVERS_V2, &checksums_values.pgsql_servers_v2, &GloVars.checksums_values.pgsql_servers_v2, &ProxySQL_Cluster::cluster_pgsql_servers_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + {ClusterModules::PGSQL_USERS, &checksums_values.pgsql_users, &GloVars.checksums_values.pgsql_users, &ProxySQL_Cluster::cluster_pgsql_users_diffs_before_sync, + nullptr, nullptr, 0, 0, nullptr}, + 
+		{ClusterModules::PGSQL_VARIABLES, &checksums_values.pgsql_variables, &GloVars.checksums_values.pgsql_variables, &ProxySQL_Cluster::cluster_pgsql_variables_diffs_before_sync,
+		 "pgsql", RuntimeCommands::LOAD_PGSQL_VARIABLES,
+		 static_cast<size_t>(p_cluster_counter::sync_conflict_pgsql_variables_share_epoch),
+		 static_cast<size_t>(p_cluster_counter::sync_delayed_pgsql_variables_version_one), nullptr}
+	};
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
+	while ( _r && (row = mysql_fetch_row(_r))) {
+		// Data-driven approach: find the matching module and process it
+		bool module_found = false;
+
+		// Search for the module in our data structure and check if it's enabled
+		for (const auto& module : modules) {
+			if (strcmp(row[0], module.module_name) == 0) {
+				// Skip module if not enabled (for modules with optional dependencies like LDAP)
+				if (module.enabled_check && !module.enabled_check()) {
+					module_found = true;
+					break;
+				}
-			} else {
-				checksums_values.mysql_query_rules.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_query_rules from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.mysql_query_rules.version, checksums_values.mysql_query_rules.epoch,
-					checksums_values.mysql_query_rules.checksum, GloVars.checksums_values.mysql_query_rules.checksum, checksums_values.mysql_query_rules.diff_check);
-			}
-			if (strcmp(checksums_values.mysql_query_rules.checksum, GloVars.checksums_values.mysql_query_rules.checksum) == 0) {
-				checksums_values.mysql_query_rules.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_query_rules from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.mysql_query_rules.checksum);
-			}
+				module_found = true;
+				break;
+			}
+		}
-			continue;
-		}
-		if (strcmp(row[0],"mysql_servers")==0) {
-			ProxySQL_Checksum_Value_2& checksum = checksums_values.mysql_servers;
-			ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.mysql_servers;
-			checksums_values.mysql_servers.version = atoll(row[1]);
-			checksums_values.mysql_servers.epoch = atoll(row[2]);
-			checksums_values.mysql_servers.last_updated = now;
-			if (strcmp(checksums_values.mysql_servers.checksum, row[3])) {
-				strcpy(checksums_values.mysql_servers.checksum, row[3]);
-				checksums_values.mysql_servers.last_changed = now;
-				checksums_values.mysql_servers.diff_check = 1;
-				const char* no_sync_message = NULL;
-
-				if (diff_ms) {
-					no_sync_message = "Not syncing yet ...\n";
-				} else {
-					no_sync_message = "Not syncing due to 'admin-cluster_mysql_servers_diffs_before_sync=0'.\n";
-				}
-
-				proxy_info(
-					"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
-					row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
-				);
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
+		if (module_found) {
+			// Find the module and get its diff threshold
+			for (const auto& module : modules) {
+				if (strcmp(row[0], module.module_name) == 0) {
+					// Get diff threshold using member pointer with atomic load
+					unsigned int diff_threshold = (unsigned int)(GloProxyCluster->*(module.diff_member)).load();
+
+					// Generate generalized sync message
+					char sync_msg[256];
+					snprintf(sync_msg, sizeof(sync_msg), ErrorMessages::DIFFS_BEFORE_SYNC_FORMAT, module.module_name);
+
+					process_component_checksum(
+						row,
+						*module.local_checksum,
+						*module.global_checksum,
+						now, diff_threshold,
+						sync_msg,
+						hostname, port
+					);
+					break;
+				}
+			}
+		}
-			} else {
-				checksums_values.mysql_servers.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_servers from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.mysql_servers.version, checksums_values.mysql_servers.epoch,
-					checksums_values.mysql_servers.checksum, GloVars.checksums_values.mysql_servers.checksum, checksums_values.mysql_servers.diff_check);
-			}
-			if (strcmp(checksums_values.mysql_servers.checksum, GloVars.checksums_values.mysql_servers.checksum) == 0) {
-				checksums_values.mysql_servers.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_servers from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.mysql_servers.checksum);
-			}
-			continue;
-		}
-		if (strcmp(row[0], "mysql_servers_v2")==0) {
-			ProxySQL_Checksum_Value_2& checksum = checksums_values.mysql_servers_v2;
-			ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.mysql_servers_v2;
-			checksums_values.mysql_servers_v2.version = atoll(row[1]);
-			checksums_values.mysql_servers_v2.epoch = atoll(row[2]);
-			checksums_values.mysql_servers_v2.last_updated = now;
-			if (strcmp(checksums_values.mysql_servers_v2.checksum, row[3])) {
-				strcpy(checksums_values.mysql_servers_v2.checksum, row[3]);
-				checksums_values.mysql_servers_v2.last_changed = now;
-				checksums_values.mysql_servers_v2.diff_check = 1;
-				const char* no_sync_message = NULL;
-
-				if (diff_ms) {
-					no_sync_message = "Not syncing yet ...\n";
-				} else {
-					no_sync_message = "Not syncing due to 'admin-cluster_mysql_servers_diffs_before_sync=0'.\n";
-				}
+	}
+	if (_r == NULL) {
+		// Update diff_check counters for all modules using data-driven approach
+		size_t module_count = sizeof(modules) / sizeof(modules[0]);
+		for (size_t i = 0; i < module_count; i++) {
+			ProxySQL_Checksum_Value_2* local_v = modules[i].local_checksum;
+			ProxySQL_Checksum_Value* global_v = modules[i].global_checksum;
+
+			if (local_v && global_v) {
+				local_v->last_updated = now;
+				if (strcmp(local_v->checksum, global_v->checksum) == 0) {
+					local_v->diff_check = 0;
+				}
-
-				proxy_info(
-					"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
-					row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
-				);
-
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
-				}
+				if (local_v->diff_check) {
+					local_v->diff_check++;
+				}
+			}
+		}
-			} else {
-				checksums_values.mysql_servers_v2.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_servers_v2 from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.mysql_servers_v2.version, checksums_values.mysql_servers_v2.epoch,
-					checksums_values.mysql_servers_v2.checksum, GloVars.checksums_values.mysql_servers_v2.checksum, checksums_values.mysql_servers_v2.diff_check);
-			}
-			if (strcmp(checksums_values.mysql_servers_v2.checksum, GloVars.checksums_values.mysql_servers_v2.checksum) == 0) {
-				checksums_values.mysql_servers_v2.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_servers_v2 from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.mysql_servers.checksum);
-			}
-			continue;
-		}
-		if (strcmp(row[0],"mysql_users")==0) {
-			ProxySQL_Checksum_Value_2& checksum = checksums_values.mysql_users;
-			ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.mysql_users;
-			checksums_values.mysql_users.version = atoll(row[1]);
-			checksums_values.mysql_users.epoch = atoll(row[2]);
-			checksums_values.mysql_users.last_updated = now;
-			if (strcmp(checksums_values.mysql_users.checksum, row[3])) {
-				strcpy(checksums_values.mysql_users.checksum, row[3]);
-				checksums_values.mysql_users.last_changed = now;
-				checksums_values.mysql_users.diff_check = 1;
-				const char* no_sync_message = NULL;
-
-				if (diff_mu) {
-					no_sync_message = "Not syncing yet ...\n";
-				} else {
-					no_sync_message = "Not syncing due to 'admin-cluster_mysql_users_diffs_before_sync=0'.\n";
-				}
+	}
+	pthread_mutex_unlock(&GloVars.checksum_mutex);
+	// we now do a series of checks, and we take action
+	// note that this is done outside the critical section
+	// as mutex on GloVars.checksum_mutex is already released
-
-				proxy_info(
-					"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
-					row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
-				);
+	// Set of modules that need special sync decision processing (admin_variables, mysql_variables, ldap_variables, pgsql_variables)
+	const std::unordered_set<std::string> sync_enabled_modules = {
+		"admin_variables",
+		"mysql_variables",
+		"ldap_variables",
+		"pgsql_variables"
+	};
-
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
-				}
-			} else {
-				checksums_values.mysql_users.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_users from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.mysql_users.version, checksums_values.mysql_users.epoch,
-					checksums_values.mysql_users.checksum, GloVars.checksums_values.mysql_users.checksum, checksums_values.mysql_users.diff_check);
-			}
-			if (strcmp(checksums_values.mysql_users.checksum, GloVars.checksums_values.mysql_users.checksum) == 0) {
-				checksums_values.mysql_users.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_users from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.mysql_users.checksum);
-			}
+	// Process sync decisions for modules that need special sync processing
+	for (const auto& module : modules) {
+		// Only process modules that are in the sync_enabled_modules set
+		if (sync_enabled_modules.find(module.module_name) == sync_enabled_modules.end()) {
+			continue;
+		}
-			continue;
-		}
-		if (strcmp(row[0],"mysql_variables")==0) {
-			ProxySQL_Checksum_Value_2& checksum = checksums_values.mysql_variables;
-			ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.mysql_variables;
-			checksums_values.mysql_variables.version = atoll(row[1]);
-			checksums_values.mysql_variables.epoch = atoll(row[2]);
-			checksums_values.mysql_variables.last_updated = now;
-			if (strcmp(checksums_values.mysql_variables.checksum, row[3])) {
-				strcpy(checksums_values.mysql_variables.checksum, row[3]);
-				checksums_values.mysql_variables.last_changed = now;
-				checksums_values.mysql_variables.diff_check = 1;
-				const char* no_sync_message = NULL;
-
-				if (diff_mv) {
-					no_sync_message = "Not syncing yet ...\n";
-				} else {
-					no_sync_message = "Not syncing due to 'admin-cluster_mysql_variables_diffs_before_sync=0'.\n";
-				}
-
-				proxy_info(
-					"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
-					row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
-				);
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
-				}
-			} else {
-				checksums_values.mysql_variables.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_variables from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.mysql_variables.version, checksums_values.mysql_variables.epoch,
-					checksums_values.mysql_variables.checksum, GloVars.checksums_values.mysql_variables.checksum, checksums_values.mysql_variables.diff_check);
-			}
-			if (strcmp(checksums_values.mysql_variables.checksum, GloVars.checksums_values.mysql_variables.checksum) == 0) {
-				checksums_values.mysql_variables.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for mysql_variables from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.mysql_variables.checksum);
-			}
-			continue;
-		}
+		// Skip module if not enabled (for modules with optional dependencies like LDAP)
+		if (module.enabled_check && !module.enabled_check()) {
+			continue;
+		}
-		if (strcmp(row[0],"proxysql_servers")==0) {
-			ProxySQL_Checksum_Value_2& checksum = checksums_values.proxysql_servers;
-			ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.proxysql_servers;
-			checksums_values.proxysql_servers.version = atoll(row[1]);
-			checksums_values.proxysql_servers.epoch = atoll(row[2]);
-			checksums_values.proxysql_servers.last_updated = now;
-			if (strcmp(checksums_values.proxysql_servers.checksum, row[3])) {
-				strcpy(checksums_values.proxysql_servers.checksum, row[3]);
-				checksums_values.proxysql_servers.last_changed = now;
-				checksums_values.proxysql_servers.diff_check = 1;
-				const char* no_sync_message = NULL;
-
-				if (diff_ps) {
-					no_sync_message = "Not syncing yet ...\n";
-				} else {
-					no_sync_message = "Not syncing due to 'admin-cluster_proxysql_servers_diffs_before_sync=0'.\n";
-				}
-
-				proxy_info(
-					"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
-					row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
-				);
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
-				}
-			} else {
-				checksums_values.proxysql_servers.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for proxysql_servers from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.proxysql_servers.version, checksums_values.proxysql_servers.epoch,
-					checksums_values.proxysql_servers.checksum, GloVars.checksums_values.proxysql_servers.checksum, checksums_values.proxysql_servers.diff_check);
-			}
-			if (strcmp(checksums_values.proxysql_servers.checksum, GloVars.checksums_values.proxysql_servers.checksum) == 0) {
-				checksums_values.proxysql_servers.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for proxysql_servers from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.proxysql_servers.checksum);
-			}
-			continue;
-		}
+		// Skip modules that don't have sync configuration (those with nullptr sync_command)
+		if (!module.sync_command) {
+			continue;
+		}
-		if (GloMyLdapAuth && strcmp(row[0],"ldap_variables")==0) {
-			ProxySQL_Checksum_Value_2& checksum = checksums_values.ldap_variables;
-			ProxySQL_Checksum_Value& global_checksum = GloVars.checksums_values.ldap_variables;
-			checksums_values.ldap_variables.version = atoll(row[1]);
-			checksums_values.ldap_variables.epoch = atoll(row[2]);
-			checksums_values.ldap_variables.last_updated = now;
-			if (strcmp(checksums_values.ldap_variables.checksum, row[3])) {
-				strcpy(checksums_values.ldap_variables.checksum, row[3]);
-				checksums_values.ldap_variables.last_changed = now;
-				checksums_values.ldap_variables.diff_check = 1;
-				const char* no_sync_message = NULL;
-
-				if (diff_lv) {
-					no_sync_message = "Not syncing yet ...\n";
-				} else {
-					no_sync_message = "Not syncing due to 'admin-cluster_ldap_variables_diffs_before_sync=0'.\n";
-				}
-				proxy_info(
-					"Cluster: detected a new checksum for %s from peer %s:%d, version %llu, epoch %llu, checksum %s . %s",
-					row[0], hostname, port, checksum.version, checksum.epoch, checksum.checksum, no_sync_message
-				);
+		// Get diff threshold using member pointer with atomic load
+		unsigned int diff_threshold = (unsigned int)(GloProxyCluster->*(module.diff_member)).load();
-
-				if (strcmp(checksum.checksum, global_checksum.checksum) == 0) {
-					proxy_info(
-						"Cluster: checksum for %s from peer %s:%d matches with local checksum %s , we won't sync.\n",
-						row[0], hostname, port, global_checksum.checksum
-					);
-				}
+		if (diff_threshold > 0) {
+			ProxySQL_Checksum_Value_2 *v = module.local_checksum;
+			unsigned long long own_version = __sync_fetch_and_add(&module.global_checksum->version, 0);
+			unsigned long long own_epoch = __sync_fetch_and_add(&module.global_checksum->epoch, 0);
+			char* own_checksum = __sync_fetch_and_add(&module.global_checksum->checksum, 0);
+			const string expected_checksum { v->checksum };
+
+			if (v->version > 1) {
+				if (
+					(own_version == 1) // we just booted
+					||
+					(v->epoch > own_epoch) // epoch is newer
+				) {
+					if (v->diff_check >= diff_threshold) {
+						proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with %s version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, module.module_name, v->version, v->epoch, v->diff_check, own_version, own_epoch);
+						proxy_info("Cluster: detected a peer %s:%d with %s version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, module.module_name, v->version, v->epoch, v->diff_check, own_version, own_epoch);
+						GloProxyCluster->pull_global_variables_from_peer(module.sync_command, expected_checksum, v->epoch);
+					}
+				}
+				if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_threshold*10)) == 0)) {
+					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with %s version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until %s is executed on candidate master.\n", hostname, port, module.module_name, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_threshold * 10), module.load_runtime_command);
+					proxy_error("Cluster: detected a peer %s:%d with %s version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until %s is executed on candidate master.\n", hostname, port, module.module_name, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_threshold*10), module.load_runtime_command);
+					GloProxyCluster->metrics.p_counter_array[module.sync_conflict_counter]->Increment();
+				}
-			} else {
-				checksums_values.ldap_variables.diff_check++;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for ldap_variables from peer %s:%d, version %llu, epoch %llu, checksum %s is different from local checksum %s. Incremented diff_check %d ...\n", hostname, port, checksums_values.ldap_variables.version, checksums_values.ldap_variables.epoch,
-					checksums_values.ldap_variables.checksum, GloVars.checksums_values.ldap_variables.checksum, checksums_values.ldap_variables.diff_check);
-			}
-			if (strcmp(checksums_values.ldap_variables.checksum, GloVars.checksums_values.ldap_variables.checksum) == 0) {
-				checksums_values.ldap_variables.diff_check = 0;
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum for ldap_variables from peer %s:%d matches with local checksum %s, reset diff_check to 0.\n", hostname, port, GloVars.checksums_values.ldap_variables.checksum);
-			}
-			continue;
-		}
-	}
-	if (_r == NULL) {
-		ProxySQL_Checksum_Value_2 *v = NULL;
-		v = &checksums_values.admin_variables;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.admin_variables.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.mysql_query_rules;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.mysql_query_rules.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.mysql_servers;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.mysql_servers.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.mysql_servers_v2;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.mysql_servers_v2.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.mysql_users;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.mysql_users.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.mysql_variables;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.mysql_variables.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.proxysql_servers;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.proxysql_servers.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-		v = &checksums_values.ldap_variables;
-		v->last_updated = now;
-		if (strcmp(v->checksum, GloVars.checksums_values.ldap_variables.checksum) == 0) {
-			v->diff_check = 0;
-		}
-		if (v->diff_check)
-			v->diff_check++;
-	}
-	pthread_mutex_unlock(&GloVars.checksum_mutex);
-	// we now do a series of checks, and we take action
-	// note that this is done outside the critical section
-	// as mutex on GloVars.checksum_mutex is already released
-	ProxySQL_Checksum_Value_2 *v = NULL;
-	if (diff_av) {
-		v = &checksums_values.admin_variables;
-		unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.admin_variables.version, 0);
-		unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.admin_variables.epoch, 0);
-		char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.admin_variables.checksum, 0);
-		const string expected_checksum { v->checksum };
-
-		if (v->version > 1) {
-			if (
-				(own_version == 1) // we just booted
-				||
-				(v->epoch > own_epoch) // epoch is newer
-			) {
-				if (v->diff_check >= diff_av) {
-					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with admin_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
-					proxy_info("Cluster: detected a peer %s:%d with admin_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
-					GloProxyCluster->pull_global_variables_from_peer("admin", expected_checksum, v->epoch);
-				}
-			}
+			} else {
+				if (v->diff_check && (v->diff_check % (diff_threshold*10)) == 0) {
+					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with %s version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until %s is executed on candidate master.\n", hostname, port, module.module_name, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_threshold * 10), module.load_runtime_command);
+					proxy_warning("Cluster: detected a peer %s:%d with %s version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until %s is executed on candidate master.\n", hostname, port, module.module_name, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_threshold*10), module.load_runtime_command);
+					GloProxyCluster->metrics.p_counter_array[module.sync_delayed_counter]->Increment();
+				}
+			}
+		}
+	}
-			if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_av*10)) == 0)) {
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with admin_variables version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD ADMIN VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_av * 10));
-				proxy_error("Cluster: detected a peer %s:%d with admin_variables version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD ADMIN VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_av*10));
-				GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_admin_variables_share_epoch]->Increment();
-			}
-		} else {
-			if (v->diff_check && (v->diff_check % (diff_av*10)) == 0) {
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with admin_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD ADMIN VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_av * 10));
-				proxy_warning("Cluster: detected a peer %s:%d with admin_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD ADMIN VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_av*10));
-				GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_admin_variables_version_one]->Increment();
-			}
-		}
-	}
+
+	// Synchronization for all non-variable modules is handled below.
+	ProxySQL_Checksum_Value_2 *v = nullptr;
+
+	unsigned int diff_mqr = (unsigned int)GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync.load();
+	unsigned int diff_ms = (unsigned int)GloProxyCluster->cluster_mysql_servers_diffs_before_sync.load();
+	unsigned int diff_mu = (unsigned int)GloProxyCluster->cluster_mysql_users_diffs_before_sync.load();
+	unsigned int diff_pqr = (unsigned int)GloProxyCluster->cluster_pgsql_query_rules_diffs_before_sync.load();
+	unsigned int diff_ms_pgsql = (unsigned int)GloProxyCluster->cluster_pgsql_servers_diffs_before_sync.load();
+	unsigned int diff_mu_pgsql = (unsigned int)GloProxyCluster->cluster_pgsql_users_diffs_before_sync.load();
+	unsigned int diff_ps = (unsigned int)GloProxyCluster->cluster_proxysql_servers_diffs_before_sync.load();
+
 	if (diff_mqr) {
 		unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.mysql_query_rules.version,0);
 		unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.mysql_query_rules.epoch,0);
@@ -841,11 +835,7 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 		const std::string v_exp_checksum { v->checksum };
 
 		if (v->version > 1) {
-			if (
-				(own_version == 1) // we just booted
-				||
-				(v->epoch > own_epoch) // epoch is newer
-			) {
+			if ((own_version == 1) || (v->epoch > own_epoch)) {
 				if (v->diff_check >= diff_mqr) {
 					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
 					proxy_info("Cluster: detected a peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
@@ -853,21 +843,21 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 				}
 			}
 			if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_mqr*10)) == 0)) {
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mqr * 10));
-				proxy_error("Cluster: detected a peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mqr*10));
+				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mqr * 10));
+				proxy_error("Cluster: detected a peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mqr*10));
 				GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_mysql_query_rules_share_epoch]->Increment();
 			}
 		} else {
 			if (v->diff_check && (v->diff_check % (diff_mqr*10)) == 0) {
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected a peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mqr * 10));
+				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mqr * 10));
 				proxy_warning("Cluster: detected a peer %s:%d with mysql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mqr*10));
 				GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_mysql_query_rules_version_one]->Increment();
 			}
 		}
 	}
+
 	if (diff_ms) {
 		mysql_servers_sync_algorithm mysql_server_sync_algo = (mysql_servers_sync_algorithm)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_servers_sync_algorithm, 0);
-
 		if (mysql_server_sync_algo == mysql_servers_sync_algorithm::auto_select) {
 			mysql_server_sync_algo = (GloVars.global.my_monitor == false) ? mysql_servers_sync_algorithm::runtime_mysql_servers_and_mysql_servers_v2 : mysql_servers_sync_algorithm::mysql_servers_v2;
@@ -880,23 +870,16 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 		bool runtime_mysql_servers_already_loaded = false;
 		if (v->version > 1) {
-			if (
-				(own_version == 1) // we just booted
-				||
-				(v->epoch > own_epoch) // epoch is newer
-			) {
+			if ((own_version == 1) || (v->epoch > own_epoch)) {
 				if (v->diff_check >= diff_ms) {
 					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
 					proxy_info("Cluster: detected a peer %s:%d with mysql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
 					ProxySQL_Checksum_Value_2* runtime_mysql_server_checksum = &checksums_values.mysql_servers;
-
 					const bool fetch_runtime = (mysql_server_sync_algo == mysql_servers_sync_algorithm::runtime_mysql_servers_and_mysql_servers_v2);
-					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetch mysql_servers_v2:'YES', mysql_servers:'%s' from peer %s:%d\n", (fetch_runtime ? "YES" : "NO"),
-						hostname, port);
-					proxy_info("Cluster: Fetch mysql_servers_v2:'YES', mysql_servers:'%s' from peer %s:%d\n", (fetch_runtime ? "YES" : "NO"),
-						hostname, port);
+					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetch mysql_servers_v2:'YES', mysql_servers:'%s' from peer %s:%d\n", (fetch_runtime ? "YES" : "NO"), hostname, port);
+					proxy_info("Cluster: Fetch mysql_servers_v2:'YES', mysql_servers:'%s' from peer %s:%d\n", (fetch_runtime ? "YES" : "NO"), hostname, port);
 					GloProxyCluster->pull_mysql_servers_v2_from_peer({ v->checksum, static_cast<time_t>(v->epoch) },
 						{ runtime_mysql_server_checksum->checksum, static_cast<time_t>(runtime_mysql_server_checksum->epoch) }, fetch_runtime);
@@ -911,25 +894,20 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 				}
 			} else {
 				if (v->diff_check && (v->diff_check % (diff_ms * 10)) == 0) {
-					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_servers version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_ms * 10));
-					proxy_warning("Cluster: detected a peer %s:%d with mysql_servers version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_ms * 10));
+					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_ms * 10));
+					proxy_warning("Cluster: detected a peer %s:%d with mysql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_ms * 10));
 					GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_mysql_servers_version_one]->Increment();
 				}
 			}
-			if (mysql_server_sync_algo == mysql_servers_sync_algorithm::runtime_mysql_servers_and_mysql_servers_v2 &&
-				runtime_mysql_servers_already_loaded == false) {
+			if (mysql_server_sync_algo == mysql_servers_sync_algorithm::runtime_mysql_servers_and_mysql_servers_v2 && runtime_mysql_servers_already_loaded == false) {
 				v = &checksums_values.mysql_servers;
 				unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.mysql_servers.version, 0);
 				unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.mysql_servers.epoch, 0);
 				char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.mysql_servers.checksum, 0);
 				if (v->version > 1) {
-					if (
-						(own_version == 1) // we just booted
-						||
-						(v->epoch > own_epoch) // epoch is newer
-					) {
+					if ((own_version == 1) || (v->epoch > own_epoch)) {
 						if (v->diff_check >= diff_ms) {
 							proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_servers version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
 							proxy_info("Cluster: detected a peer %s:%d with mysql_servers version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
@@ -948,8 +926,9 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 					GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_mysql_servers_version_one]->Increment();
 				}
 			}
-		}
+		}
 	}
+
 	if (diff_mu) {
 		v = &checksums_values.mysql_users;
 		unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.mysql_users.version,0);
@@ -958,11 +937,7 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 		const std::string v_exp_checksum { v->checksum };
 
 		if (v->version > 1) {
-			if (
-				(own_version == 1) // we just booted
-				||
-				(v->epoch > own_epoch) // epoch is newer
-			) {
+			if ((own_version == 1) || (v->epoch > own_epoch)) {
 				if (v->diff_check >= diff_mu) {
 					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
 					proxy_info("Cluster: detected a peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch);
@@ -970,101 +945,127 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) {
 				}
 			}
 			if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_mu*10)) == 0)) {
-				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync.
This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mu * 10)); - proxy_error("Cluster: detected a peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mu*10)); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mu * 10)); + proxy_error("Cluster: detected a peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. 
This message will be repeated every %u checks until LOAD MYSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mu*10)); GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_mysql_users_share_epoch]->Increment(); } } else { if (v->diff_check && (v->diff_check % (diff_mu*10)) == 0) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected a peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mu * 10)); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mu * 10)); proxy_warning("Cluster: detected a peer %s:%d with mysql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. 
This message will be repeated every %u checks until LOAD MYSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mu*10)); GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_mysql_users_version_one]->Increment(); } } } - if (diff_mv) { - v = &checksums_values.mysql_variables; - unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.mysql_variables.version, 0); - unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.mysql_variables.epoch, 0); - char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.mysql_variables.checksum, 0); - const string expected_checksum { v->checksum }; + + if (diff_pqr) { + v = &checksums_values.pgsql_query_rules; + unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_query_rules.version,0); + unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_query_rules.epoch,0); + char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_query_rules.checksum,0); + const std::string v_exp_checksum { v->checksum }; + + if (v->version > 1) { + if ((own_version == 1) || (v->epoch > own_epoch)) { + if (v->diff_check >= diff_pqr) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); + proxy_info("Cluster: detected a peer %s:%d with pgsql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. 
Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); + GloProxyCluster->pull_pgsql_query_rules_from_peer(v_exp_checksum, v->epoch); + } + } + if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_pqr*10)) == 0)) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_query_rules version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PGSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_pqr * 10)); + proxy_error("Cluster: detected a peer %s:%d with pgsql_query_rules version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PGSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_pqr*10)); + GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_pgsql_query_rules_share_epoch]->Increment(); + } + } else { + if (v->diff_check && (v->diff_check % (diff_pqr*10)) == 0) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. 
This message will be repeated every %u checks until LOAD PGSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_pqr * 10)); + proxy_warning("Cluster: detected a peer %s:%d with pgsql_query_rules version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD PGSQL QUERY RULES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_pqr*10)); + GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_pgsql_query_rules_version_one]->Increment(); + } + } + } + + if (diff_mu_pgsql) { + v = &checksums_values.pgsql_users; + unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_users.version,0); + unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_users.epoch,0); + char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_users.checksum,0); + const std::string v_exp_checksum { v->checksum }; if (v->version > 1) { - if ( - (own_version == 1) // we just booted - || - (v->epoch > own_epoch) // epoch is newer - ) { - if (v->diff_check >= diff_mv) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); - proxy_info("Cluster: detected a peer %s:%d with mysql_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. 
Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); - GloProxyCluster->pull_global_variables_from_peer("mysql", expected_checksum, v->epoch); + if ((own_version == 1) || (v->epoch > own_epoch)) { + if (v->diff_check >= diff_mu_pgsql) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); + proxy_info("Cluster: detected a peer %s:%d with pgsql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); + GloProxyCluster->pull_pgsql_users_from_peer(v_exp_checksum, v->epoch); } } - if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_mv*10)) == 0)) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_variables version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mv * 10)); - proxy_error("Cluster: detected a peer %s:%d with mysql_variables version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. 
This message will be repeated every %u checks until LOAD MYSQL VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mv*10)); - GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_mysql_variables_share_epoch]->Increment(); + if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_mu_pgsql*10)) == 0)) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_users version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PGSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mu_pgsql * 10)); + proxy_error("Cluster: detected a peer %s:%d with pgsql_users version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PGSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_mu_pgsql*10)); + GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_pgsql_users_share_epoch]->Increment(); } } else { - if (v->diff_check && (v->diff_check % (diff_mv*10)) == 0) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. 
This message will be repeated every %u checks until LOAD MYSQL VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mv * 10)); - proxy_warning("Cluster: detected a peer %s:%d with mysql_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD MYSQL VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mv*10)); - GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_mysql_variables_version_one]->Increment(); + if (v->diff_check && (v->diff_check % (diff_mu_pgsql*10)) == 0) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD PGSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mu_pgsql * 10)); + proxy_warning("Cluster: detected a peer %s:%d with pgsql_users version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. 
This message will be repeated every %u checks until LOAD PGSQL USERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_mu_pgsql*10)); + GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_pgsql_users_version_one]->Increment(); } } } - if (GloMyLdapAuth && diff_lv) { - v = &checksums_values.ldap_variables; - unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.ldap_variables.version, 0); - unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.ldap_variables.epoch, 0); - char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.ldap_variables.checksum, 0); - const string expected_checksum { v->checksum }; + + if (diff_ms_pgsql) { + v = &checksums_values.pgsql_servers_v2; + unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_servers_v2.version,0); + unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_servers_v2.epoch,0); + char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.pgsql_servers_v2.checksum,0); + const std::string v_exp_checksum { v->checksum }; if (v->version > 1) { - if ( - (own_version == 1) // we just booted - || - (v->epoch > own_epoch) // epoch is newer - ) { - if (v->diff_check >= diff_lv) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with ldap_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); - proxy_info("Cluster: detected a peer %s:%d with ldap_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. 
Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); - GloProxyCluster->pull_global_variables_from_peer("ldap", expected_checksum, v->epoch); + if ((own_version == 1) || (v->epoch > own_epoch)) { + if (v->diff_check >= diff_ms_pgsql) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); + proxy_info("Cluster: detected a peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); + + pgsql_servers_v2_checksum_t pgsql_servers_v2_checksum{v_exp_checksum, static_cast<time_t>(v->epoch)}; + ProxySQL_Checksum_Value_2* runtime_pgsql_server_checksum = &checksums_values.pgsql_servers; + runtime_pgsql_servers_checksum_t runtime_pgsql_servers_checksum{ + runtime_pgsql_server_checksum->checksum, static_cast<time_t>(runtime_pgsql_server_checksum->epoch) + }; + GloProxyCluster->pull_pgsql_servers_v2_from_peer(pgsql_servers_v2_checksum, runtime_pgsql_servers_checksum, true); } } - if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_lv*10)) == 0)) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with ldap_variables version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. 
This message will be repeated every %u checks until LOAD LDAP VARIABLES is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_lv * 10)); - proxy_error("Cluster: detected a peer %s:%d with ldap_variables version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD LDAP VARIABLES is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_lv*10)); - GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_ldap_variables_share_epoch]->Increment(); + if ((v->epoch == own_epoch) && v->diff_check && ((v->diff_check % (diff_ms_pgsql*10)) == 0)) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PGSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_ms_pgsql * 10)); + proxy_error("Cluster: detected a peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. 
This message will be repeated every %u checks until LOAD PGSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_ms_pgsql*10)); + GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_pgsql_servers_share_epoch]->Increment(); } } else { - if (v->diff_check && (v->diff_check % (diff_lv*10)) == 0) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with ldap_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD LDAP VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_lv * 10)); - proxy_warning("Cluster: detected a peer %s:%d with ldap_variables version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD LDAP VARIABLES TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_lv*10)); - GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_ldap_variables_version_one]->Increment(); + if (v->diff_check && (v->diff_check % (diff_ms_pgsql*10)) == 0) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. 
This message will be repeated every %u checks until LOAD PGSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_ms_pgsql * 10)); + proxy_warning("Cluster: detected a peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. diff_check is increasing, but version 1 doesn't allow sync. This message will be repeated every %u checks until LOAD PGSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch, (diff_ms_pgsql*10)); + GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_delayed_pgsql_servers_version_one]->Increment(); } } } - // IMPORTANT-NOTE: This action should ALWAYS be performed the last, since the 'checksums_values' gets - // invalidated by 'pull_proxysql_servers_from_peer' and further memory accesses would be invalid. + + // IMPORTANT-NOTE: This action should ALWAYS be performed the last, since + // the 'checksums_values' gets invalidated by 'pull_proxysql_servers_from_peer'. if (diff_ps) { v = &checksums_values.proxysql_servers; unsigned long long own_version = __sync_fetch_and_add(&GloVars.checksums_values.proxysql_servers.version,0); unsigned long long own_epoch = __sync_fetch_and_add(&GloVars.checksums_values.proxysql_servers.epoch,0); char* own_checksum = __sync_fetch_and_add(&GloVars.checksums_values.proxysql_servers.checksum,0); if (v->version > 1) { - // NOTE: Backup values: 'v' gets invalidated by 'pull_proxysql_servers_from_peer()' + // Backup values: 'v' gets invalidated by 'pull_proxysql_servers_from_peer()'. 
unsigned long long v_epoch = v->epoch; unsigned long long v_version = v->version; unsigned int v_diff_check = v->diff_check; const string v_exp_checksum { v->checksum }; - if ( - (own_version == 1) // we just booted - || - (v->epoch > own_epoch) // epoch is newer - ) { + if ((own_version == 1) || (v->epoch > own_epoch)) { if (v->diff_check >= diff_ps) { proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with proxysql_servers version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); proxy_info("Cluster: detected a peer %s:%d with proxysql_servers version %llu, epoch %llu, diff_check %u. Own version: %llu, epoch: %llu. Proceeding with remote sync\n", hostname, port, v->version, v->epoch, v->diff_check, own_version, own_epoch); @@ -1072,8 +1073,8 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) { } } if ((v_epoch == own_epoch) && v_diff_check && ((v_diff_check % (diff_ps*10)) == 0)) { - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with proxysql_servers version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v_version, v_epoch, v_diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_ps * 10)); - proxy_error("Cluster: detected a peer %s:%d with proxysql_servers version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. 
This message will be repeated every %u checks until LOAD MYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v_version, v_epoch, v_diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_ps*10)); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with proxysql_servers version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PROXYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v_version, v_epoch, v_diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_ps * 10)); + proxy_error("Cluster: detected a peer %s:%d with proxysql_servers version %llu, epoch %llu, diff_check %u, checksum %s. Own version: %llu, epoch: %llu, checksum %s. Sync conflict, epoch times are EQUAL, can't determine which server holds the latest config, we won't sync. This message will be repeated every %u checks until LOAD PROXYSQL SERVERS TO RUNTIME is executed on candidate master.\n", hostname, port, v_version, v_epoch, v_diff_check, v->checksum, own_version, own_epoch, own_checksum, (diff_ps*10)); GloProxyCluster->metrics.p_counter_array[p_cluster_counter::sync_conflict_proxysql_servers_share_epoch]->Increment(); } } else { @@ -1084,6 +1085,7 @@ void ProxySQL_Node_Entry::set_checksums(MYSQL_RES *_r) { } } } + } /** @@ -1123,6 +1125,33 @@ uint64_t mysql_raw_checksum(MYSQL_RES* resultset) { return res_hash; } +/** + * @brief Pulls MySQL query rules configuration from a cluster peer node. + * + * This function fetches MySQL query rules from a peer ProxySQL instance when the peer's + * checksum differs from the local checksum and the difference exceeds the configured + * threshold (cluster_mysql_query_rules_diffs_before_sync). It retrieves both regular query + * rules and fast routing rules. 
+ * + * The function performs the following steps: + * 1. Identifies the optimal peer to sync from using get_peer_to_sync_mysql_query_rules() + * 2. Establishes a MySQL connection to the peer's admin interface + * 3. Executes CLUSTER_QUERY_MYSQL_QUERY_RULES and CLUSTER_QUERY_MYSQL_QUERY_RULES_FAST_ROUTING + * 4. Computes checksums for the fetched data + * 5. Validates checksums match the expected values + * 6. Loads the query rules to runtime via load_mysql_query_rules_to_runtime() + * 7. Optionally saves configuration to disk if cluster_mysql_query_rules_save_to_disk is enabled + * + * @param expected_checksum The expected checksum of the query rules on the peer + * @param epoch The epoch timestamp of the query rules on the peer + * + * @note This function is thread-safe: it serializes concurrent pulls via update_mysql_query_rules_mutex + * @note The function will sleep(1) if the fetch operation fails to prevent busy loops + * @see get_peer_to_sync_mysql_query_rules() + * @see CLUSTER_QUERY_MYSQL_QUERY_RULES + * @see CLUSTER_QUERY_MYSQL_QUERY_RULES_FAST_ROUTING + * @see load_mysql_query_rules_to_runtime() + */ void ProxySQL_Cluster::pull_mysql_query_rules_from_peer(const string& expected_checksum, const time_t epoch) { char * hostname = NULL; char * ip_address = NULL; uint16_t port = 0; @@ -1183,8 +1212,8 @@ void ProxySQL_Cluster::pull_mysql_query_rules_from_peer(const string& expected_c proxy_info("Cluster: Loading to runtime MySQL Query Rules from peer %s:%d\n", hostname, port); pthread_mutex_lock(&GloAdmin->sql_query_global_mutex); //GloAdmin->admindb->execute("PRAGMA quick_check"); - GloAdmin->admindb->execute("DELETE FROM mysql_query_rules"); - GloAdmin->admindb->execute("DELETE FROM mysql_query_rules_fast_routing"); + GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_QUERY_RULES); + GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_QUERY_RULES_FAST_ROUTING); MYSQL_ROW row; char *q = (char *)"INSERT INTO mysql_query_rules (rule_id, active, username, schemaname, flagIN, client_addr, 
proxy_addr, proxy_port, digest, match_digest, match_pattern, negate_match_pattern, re_modifiers, flagOUT, replace_pattern, destination_hostgroup, cache_ttl, cache_empty_result, cache_timeout, reconnect, timeout, retries, delay, next_query_flagIN, mirror_flagOUT, mirror_hostgroup, error_msg, ok_msg, sticky_conn, multiplex, gtid_from_hostgroup, log, apply, attributes, comment) VALUES (?1 , ?2 , ?3 , ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15, ?16, ?17, ?18, ?19, ?20, ?21, ?22, ?23, ?24, ?25, ?26, ?27, ?28, ?29, ?30, ?31, ?32, ?33, ?34, ?35)"; auto [rc1, statement1_unique] = GloAdmin->admindb->prepare_v2(q); @@ -1359,8 +1388,12 @@ uint64_t get_mysql_users_checksum( return raw_users_checksum; } +uint64_t get_pgsql_users_checksum(MYSQL_RES* resultset, unique_ptr<SQLite3_result>& all_users) { + return GloPgAuth->get_runtime_checksum(resultset, all_users); +} + void update_mysql_users(MYSQL_RES* result) { - GloAdmin->admindb->execute("DELETE FROM mysql_users"); + GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_USERS); char* q = (char *)"INSERT INTO mysql_users (username, password, active, use_ssl, default_hostgroup, default_schema," " schema_locked, transaction_persistent, fast_forward, backend, frontend, max_connections, attributes, comment)" " VALUES (?1 , ?2 , ?3 , ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14)"; @@ -1393,54 +1426,185 @@ void update_mysql_users(MYSQL_RES* result) { // RAII auto-finalizes statement1 (fixes memory leak) } -void update_ldap_mappings(MYSQL_RES* result) { - GloAdmin->admindb->execute("DELETE FROM mysql_ldap_mapping"); - char* q = const_cast<char*>( - "INSERT INTO mysql_ldap_mapping (priority, frontend_entity, backend_entity, comment)" - " VALUES (?1 , ?2 , ?3 , ?4)" - ); +void update_pgsql_users(MYSQL_RES* result) { + GloAdmin->admindb->execute("DELETE FROM pgsql_users"); + char* q = (char*)"INSERT INTO pgsql_users (username, password, active, use_ssl, default_hostgroup," + " transaction_persistent, fast_forward, backend, frontend, 
max_connections, attributes, comment)" + " VALUES (?1 , ?2 , ?3 , ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)"; auto [rc1, statement1_unique] = GloAdmin->admindb->prepare_v2(q); ASSERT_SQLITE_OK(rc1, GloAdmin->admindb); - sqlite3_stmt *statement1 = statement1_unique.get(); + sqlite3_stmt* statement1 = statement1_unique.get(); int rc; while (MYSQL_ROW row = mysql_fetch_row(result)) { - rc=(*proxy_sqlite3_bind_int64)(statement1, 1, atoll(row[0])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // priority - rc=(*proxy_sqlite3_bind_text)(statement1, 2, row[1], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // frontend_entity - rc=(*proxy_sqlite3_bind_text)(statement1, 3, row[2], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // backend_entity - rc=(*proxy_sqlite3_bind_text)(statement1, 4, row[3], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // comment + rc = (*proxy_sqlite3_bind_text)(statement1, 1, row[0], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // username + rc = (*proxy_sqlite3_bind_text)(statement1, 2, row[1], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // password + rc = (*proxy_sqlite3_bind_int64)(statement1, 3, 1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // active + rc = (*proxy_sqlite3_bind_int64)(statement1, 4, atoll(row[2])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // use_ssl + rc = (*proxy_sqlite3_bind_int64)(statement1, 5, atoll(row[3])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // default_hostgroup + rc = (*proxy_sqlite3_bind_int64)(statement1, 6, atoll(row[4])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // transaction_persistent + rc = (*proxy_sqlite3_bind_int64)(statement1, 7, atoll(row[5])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // fast_forward + rc = (*proxy_sqlite3_bind_int64)(statement1, 8, atoll(row[6])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // backend + rc = (*proxy_sqlite3_bind_int64)(statement1, 9, atoll(row[7])); ASSERT_SQLITE_OK(rc, 
GloAdmin->admindb); // frontend + rc = (*proxy_sqlite3_bind_int64)(statement1, 10, atoll(row[8])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // max_connections + rc = (*proxy_sqlite3_bind_text)(statement1, 11, row[9], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // attributes + rc = (*proxy_sqlite3_bind_text)(statement1, 12, row[10], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // comment SAFE_SQLITE3_STEP2(statement1); - rc=(*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); - rc=(*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); } - // RAII auto-finalizes statement1 (fixes memory leak) } -void ProxySQL_Cluster::pull_mysql_users_from_peer(const string& expected_checksum, const time_t epoch) { - char * hostname = NULL; - char * ip_address = NULL; - uint16_t port = 0; - bool fetch_failed = false; - pthread_mutex_lock(&GloProxyCluster->update_mysql_users_mutex); - nodes.get_peer_to_sync_mysql_users(&hostname, &port, &ip_address); - if (hostname) { - cluster_creds_t creds {}; +void update_pgsql_servers(SQLite3_result* resultset) { + GloAdmin->admindb->execute(SQLQueries::DELETE_PGSQL_SERVERS); + if (resultset == nullptr) { + return; + } - MYSQL *conn = mysql_init(NULL); - if (conn==NULL) { - proxy_error("Unable to run mysql_init()\n"); - goto __exit_pull_mysql_users_from_peer; - } + const char* q = + "INSERT INTO pgsql_servers (hostgroup_id, hostname, port, status, weight, compression, max_connections," + " max_replication_lag, use_ssl, max_latency_ms, comment)" + " VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11)"; - creds = GloProxyCluster->get_credentials(); - if (creds.user.size()) { // do not monitor if the username is empty - // READ/WRITE timeouts were enforced as an attempt to prevent 
deadlocks in the original - // implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'. - unsigned int timeout = 1; - mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout); - { + auto [rc1, statement1_unique] = GloAdmin->admindb->prepare_v2(q); + ASSERT_SQLITE_OK(rc1, GloAdmin->admindb); + sqlite3_stmt* statement1 = statement1_unique.get(); + int rc; + + for (auto* row : resultset->rows) { + rc = (*proxy_sqlite3_bind_int64)(statement1, 1, atoll(row->fields[0])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 2, row->fields[1] ? row->fields[1] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 3, atoll(row->fields[2])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 4, row->fields[3] ? row->fields[3] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 5, atoll(row->fields[4])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 6, atoll(row->fields[5])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 7, atoll(row->fields[6])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 8, atoll(row->fields[7])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 9, atoll(row->fields[8])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 10, atoll(row->fields[9])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 11, row->fields[10] ? 
row->fields[10] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + + SAFE_SQLITE3_STEP2(statement1); + rc = (*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + } +} + +void update_pgsql_replication_hostgroups(SQLite3_result* resultset) { + GloAdmin->admindb->execute(SQLQueries::DELETE_PGSQL_REPLICATION_HOSTGROUPS); + if (resultset == nullptr) { + return; + } + + const char* q = + "INSERT INTO pgsql_replication_hostgroups (writer_hostgroup, reader_hostgroup, check_type, comment)" + " VALUES (?1, ?2, ?3, ?4)"; + + auto [rc1, statement1_unique] = GloAdmin->admindb->prepare_v2(q); + ASSERT_SQLITE_OK(rc1, GloAdmin->admindb); + sqlite3_stmt* statement1 = statement1_unique.get(); + int rc; + + for (auto* row : resultset->rows) { + rc = (*proxy_sqlite3_bind_int64)(statement1, 1, atoll(row->fields[0])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 2, atoll(row->fields[1])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 3, row->fields[2] ? row->fields[2] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 4, row->fields[3] ? 
row->fields[3] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + + SAFE_SQLITE3_STEP2(statement1); + rc = (*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + } +} + +void update_pgsql_hostgroup_attributes(SQLite3_result* resultset) { + GloAdmin->admindb->execute(SQLQueries::DELETE_PGSQL_HOSTGROUP_ATTRIBUTES); + if (resultset == nullptr) { + return; + } + + const char* q = + "INSERT INTO pgsql_hostgroup_attributes (" + "hostgroup_id, max_num_online_servers, autocommit, free_connections_pct, init_connect, multiplex," + " connection_warming, throttle_connections_per_sec, ignore_session_variables, hostgroup_settings," + " servers_defaults, comment" + ") VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)"; + + auto [rc1, statement1_unique] = GloAdmin->admindb->prepare_v2(q); + ASSERT_SQLITE_OK(rc1, GloAdmin->admindb); + sqlite3_stmt* statement1 = statement1_unique.get(); + int rc; + + for (auto* row : resultset->rows) { + rc = (*proxy_sqlite3_bind_int64)(statement1, 1, atoll(row->fields[0])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 2, atoll(row->fields[1])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 3, atoll(row->fields[2])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 4, atoll(row->fields[3])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 5, row->fields[4] ? 
row->fields[4] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 6, atoll(row->fields[5])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 7, atoll(row->fields[6])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_int64)(statement1, 8, atoll(row->fields[7])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 9, row->fields[8] ? row->fields[8] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 10, row->fields[9] ? row->fields[9] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 11, row->fields[10] ? row->fields[10] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_bind_text)(statement1, 12, row->fields[11] ? row->fields[11] : "", -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + + SAFE_SQLITE3_STEP2(statement1); + rc = (*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc = (*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + } +} + +void update_ldap_mappings(MYSQL_RES* result) { + GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_LDAP_MAPPING); + char* q = const_cast( + "INSERT INTO mysql_ldap_mapping (priority, frontend_entity, backend_entity, comment)" + " VALUES (?1 , ?2 , ?3 , ?4)" + ); + + auto [rc1, statement1_unique] = GloAdmin->admindb->prepare_v2(q); + ASSERT_SQLITE_OK(rc1, GloAdmin->admindb); + sqlite3_stmt *statement1 = statement1_unique.get(); + int rc; + + while (MYSQL_ROW row = mysql_fetch_row(result)) { + rc=(*proxy_sqlite3_bind_int64)(statement1, 1, atoll(row[0])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // priority + rc=(*proxy_sqlite3_bind_text)(statement1, 2, row[1], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // 
frontend_entity + rc=(*proxy_sqlite3_bind_text)(statement1, 3, row[2], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // backend_entity + rc=(*proxy_sqlite3_bind_text)(statement1, 4, row[3], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // comment + + SAFE_SQLITE3_STEP2(statement1); + rc=(*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc=(*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + } + // RAII auto-finalizes statement1 (fixes memory leak) +} + +void ProxySQL_Cluster::pull_mysql_users_from_peer(const string& expected_checksum, const time_t epoch) { + char * hostname = NULL; + char * ip_address = NULL; + uint16_t port = 0; + bool fetch_failed = false; + pthread_mutex_lock(&GloProxyCluster->update_mysql_users_mutex); + nodes.get_peer_to_sync_mysql_users(&hostname, &port, &ip_address); + if (hostname) { + cluster_creds_t creds {}; + + MYSQL *conn = mysql_init(NULL); + if (conn==NULL) { + proxy_error("Unable to run mysql_init()\n"); + goto __exit_pull_mysql_users_from_peer; + } + + creds = GloProxyCluster->get_credentials(); + if (creds.user.size()) { // do not monitor if the username is empty + // READ/WRITE timeouts were enforced as an attempt to prevent deadlocks in the original + // implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'. 
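Every pull function in this patch follows the same checksum-gated pull/apply cycle: fetch rows from the peer, recompute the checksum locally, and apply only on a match. The following is a minimal standalone sketch of that gate, not the ProxySQL implementation: it uses FNV-1a as a stand-in for ProxySQL's table-checksum hashing and plain string rows instead of `MYSQL_RES`, and the names `rows_checksum` and `apply_if_checksum_matches` are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// FNV-1a over all rows: a simple stand-in for the real table checksum.
static uint64_t rows_checksum(const std::vector<std::string>& rows) {
	uint64_t h = 1469598103934665603ULL;
	for (const std::string& r : rows) {
		for (unsigned char c : r) {
			h ^= c;
			h *= 1099511628211ULL;
		}
		h ^= '\n'; // separate rows so {"ab"} and {"a","b"} differ
		h *= 1099511628211ULL;
	}
	return h;
}

// Apply the fetched rows only if the recomputed checksum still matches the
// checksum the peer advertised; otherwise reject so the caller can retry.
static bool apply_if_checksum_matches(const std::vector<std::string>& fetched,
                                      uint64_t expected,
                                      std::vector<std::string>& local_table) {
	if (rows_checksum(fetched) != expected) {
		return false; // peer changed between advertise and fetch
	}
	local_table = fetched; // analogous to the DELETE + INSERT rewrite
	return true;
}
```

The rejection path is what the `Checksum changed from %s to %s` log lines above report: the peer's table mutated between the checksum announcement and the fetch.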
+			unsigned int timeout = 1;
+			mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout);
+			{
 				unsigned char val = 1; mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val);
 				mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback);
 			}
@@ -1698,6 +1862,19 @@ incoming_servers_t convert_mysql_servers_resultsets(const std::vector<MYSQL_RES*>& results) {
+	if (results.size() != sizeof(incoming_pgsql_servers_t) / sizeof(void*)) {
+		return incoming_pgsql_servers_t {};
+	} else {
+		return incoming_pgsql_servers_t {
+			get_SQLite3_resulset(results[0]).release(),
+			get_SQLite3_resulset(results[1]).release(),
+			get_SQLite3_resulset(results[2]).release(),
+			get_SQLite3_resulset(results[3]).release(),
+		};
+	}
+}
+
 /**
  * @brief mysql_servers records will be fetched from remote peer and saved locally.
  *
@@ -1751,13 +1928,13 @@ void ProxySQL_Cluster::pull_runtime_mysql_servers_from_peer(const runtime_mysql_
 	std::string fetch_servers_err;
 	string_format("Cluster: Fetching 'MySQL Servers' from peer %s:%d failed: \n", fetch_servers_err, hostname, port);
 
-	// Create fetching query
-	fetch_query query = {
-		CLUSTER_QUERY_RUNTIME_MYSQL_SERVERS,
-		p_cluster_counter::pulled_mysql_servers_success,
-		p_cluster_counter::pulled_mysql_servers_failure,
-		{ "", fetch_servers_done, fetch_servers_err }
-	};
+	// Create fetching query
+	fetch_query query = {
+		CLUSTER_QUERY_RUNTIME_MYSQL_SERVERS,
+		p_cluster_counter::pulled_mysql_servers_success,
+		p_cluster_counter::pulled_mysql_servers_failure,
+		{ "", fetch_servers_done, fetch_servers_err }
+	};
 
 	MYSQL_RES* result = nullptr;
@@ -1940,13 +2117,13 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 	 * @details All the queries defined here require to be updated if their target table definition is
 	 * changed. More details on 'CLUSTER_QUERY_MYSQL_REPLICATION_HOSTGROUPS' definition.
 	 */
-	fetch_query queries[] = {
-		{
-			CLUSTER_QUERY_MYSQL_SERVERS_V2,
-			p_cluster_counter::pulled_mysql_servers_success,
-			p_cluster_counter::pulled_mysql_servers_failure,
-			{ "", fetch_servers_done, fetch_servers_err }
-		},
+	fetch_query queries[] = {
+		{
+			CLUSTER_QUERY_MYSQL_SERVERS_V2,
+			p_cluster_counter::pulled_mysql_servers_success,
+			p_cluster_counter::pulled_mysql_servers_failure,
+			{ "", fetch_servers_done, fetch_servers_err }
+		},
 		{
 			CLUSTER_QUERY_MYSQL_REPLICATION_HOSTGROUPS,
 			p_cluster_counter::pulled_mysql_servers_replication_hostgroups_success,
@@ -2007,12 +2184,12 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 	string_format("Cluster: Fetching 'MySQL Servers' from peer %s:%d failed: \n", fetch_runtime_servers_err, hostname, port);
 
 	// Query definition used to fetch data from a peer.
-	fetch_query query = {
-		CLUSTER_QUERY_RUNTIME_MYSQL_SERVERS,
-		p_cluster_counter::pulled_mysql_servers_success,
-		p_cluster_counter::pulled_mysql_servers_failure,
-		{ "", fetch_runtime_servers_done, fetch_runtime_servers_err }
-	};
+	fetch_query query = {
+		CLUSTER_QUERY_RUNTIME_MYSQL_SERVERS,
+		p_cluster_counter::pulled_mysql_servers_success,
+		p_cluster_counter::pulled_mysql_servers_failure,
+		{ "", fetch_runtime_servers_done, fetch_runtime_servers_err }
+	};
 
 	MYSQL_RES* fetch_res = nullptr;
 	if (fetch_and_store(conn, query, &fetch_res) == 0) {
@@ -2048,7 +2225,7 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_servers table\n");
 			proxy_info("Cluster: Writing mysql_servers table\n");
 			GloAdmin->mysql_servers_wrlock();
-			GloAdmin->admindb->execute("DELETE FROM mysql_servers");
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_SERVERS);
 			MYSQL_ROW row;
 			char* q = (char*)"INSERT INTO mysql_servers (hostgroup_id, hostname, port, gtid_port, status, weight, compression, max_connections, max_replication_lag, use_ssl, max_latency_ms, comment) VALUES (%s, \"%s\", %s, %s, \"%s\", %s, %s, %s, %s, %s, %s, '%s')";
 			while ((row = mysql_fetch_row(results[0]))) {
@@ -2070,7 +2247,7 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			// sync mysql_replication_hostgroups
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_replication_hostgroups table\n");
 			proxy_info("Cluster: Writing mysql_replication_hostgroups table\n");
-			GloAdmin->admindb->execute("DELETE FROM mysql_replication_hostgroups");
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_REPLICATION_HOSTGROUPS);
 			q = (char*)"INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, check_type, comment) VALUES (%s, %s, '%s', '%s')";
 			while ((row = mysql_fetch_row(results[1]))) {
 				int l = 0;
@@ -2090,7 +2267,7 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			// sync mysql_group_replication_hostgroups
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_group_replication_hostgroups table\n");
 			proxy_info("Cluster: Writing mysql_group_replication_hostgroups table\n");
-			GloAdmin->admindb->execute("DELETE FROM mysql_group_replication_hostgroups");
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_GROUP_REPLICATION_HOSTGROUPS);
 			q = (char*)"INSERT INTO mysql_group_replication_hostgroups ( "
 				"writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, active, "
 				"max_writers, writer_is_also_reader, max_transactions_behind, comment) ";
@@ -2136,7 +2313,7 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			// sync mysql_galera_hostgroups
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_galera_hostgroups table\n");
 			proxy_info("Cluster: Writing mysql_galera_hostgroups table\n");
-			GloAdmin->admindb->execute("DELETE FROM mysql_galera_hostgroups");
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_GALERA_HOSTGROUPS);
 			q = (char*)"INSERT INTO mysql_galera_hostgroups ( "
 				"writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, active, "
 				"max_writers, writer_is_also_reader, max_transactions_behind, comment) ";
@@ -2178,7 +2355,7 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			// sync mysql_aws_aurora_hostgroups
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_aws_aurora_hostgroups table\n");
 			proxy_info("Cluster: Writing mysql_aws_aurora_hostgroups table\n");
-			GloAdmin->admindb->execute("DELETE FROM mysql_aws_aurora_hostgroups");
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_AWS_AURORA_HOSTGROUPS);
 			q = (char*)"INSERT INTO mysql_aws_aurora_hostgroups ( "
 				"writer_hostgroup, reader_hostgroup, active, aurora_port, domain_name, max_lag_ms, check_interval_ms, "
 				"check_timeout_ms, writer_is_also_reader, new_reader_weight, add_lag_ms, min_lag_ms, lag_num_checks, comment) ";
@@ -2220,16 +2397,16 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			// sync mysql_hostgroup_attributes
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_hostgroup_attributes table\n");
 			proxy_info("Cluster: Writing mysql_hostgroup_attributes table\n");
-			GloAdmin->admindb->execute("DELETE FROM mysql_hostgroup_attributes");
-			{
-				const char* q = (const char*)"INSERT INTO mysql_hostgroup_attributes ( "
-					"hostgroup_id, max_num_online_servers, autocommit, free_connections_pct, "
-					"init_connect, multiplex, connection_warming, throttle_connections_per_sec, "
-					"ignore_session_variables, hostgroup_settings, servers_defaults, comment) VALUES "
-					"(?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)";
-				auto [rc, statement1_unique] = GloAdmin->admindb->prepare_v2(q);
-				ASSERT_SQLITE_OK(rc, GloAdmin->admindb);
-				sqlite3_stmt *statement1 = statement1_unique.get();
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_HOSTGROUP_ATTRIBUTES);
+			{
+				const char* q = (const char*)"INSERT INTO mysql_hostgroup_attributes ( "
+					"hostgroup_id, max_num_online_servers, autocommit, free_connections_pct, "
+					"init_connect, multiplex, connection_warming, throttle_connections_per_sec, "
+					"ignore_session_variables, hostgroup_settings, servers_defaults, comment) VALUES "
+					"(?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)";
+				auto [rc, statement1_unique] = GloAdmin->admindb->prepare_v2(q);
+				ASSERT_SQLITE_OK(rc, GloAdmin->admindb);
+				sqlite3_stmt *statement1 = statement1_unique.get();
 				while ((row = mysql_fetch_row(results[5]))) {
 					rc=(*proxy_sqlite3_bind_int64)(statement1, 1, atol(row[0])); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // hostgroup_id
@@ -2259,12 +2436,12 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 			// sync mysql_servers_ssl_params
 			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Writing mysql_servers_ssl_params table\n");
 			proxy_info("Cluster: Writing mysql_servers_ssl_params table\n");
-			GloAdmin->admindb->execute("DELETE FROM mysql_servers_ssl_params");
-			{
-				const char* q = (const char*)"INSERT INTO mysql_servers_ssl_params (hostname, port, username, ssl_ca, ssl_cert, ssl_key, ssl_capath, ssl_crl, ssl_crlpath, ssl_cipher, tls_version, comment) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)";
-				auto [rc, statement1_unique] = GloAdmin->admindb->prepare_v2(q);
-				ASSERT_SQLITE_OK(rc, GloAdmin->admindb);
-				sqlite3_stmt *statement1 = statement1_unique.get();
+			GloAdmin->admindb->execute(SQLQueries::DELETE_MYSQL_SERVERS_SSL_PARAMS);
+			{
+				const char* q = (const char*)"INSERT INTO mysql_servers_ssl_params (hostname, port, username, ssl_ca, ssl_cert, ssl_key, ssl_capath, ssl_crl, ssl_crlpath, ssl_cipher, tls_version, comment) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)";
+				auto [rc, statement1_unique] = GloAdmin->admindb->prepare_v2(q);
+				ASSERT_SQLITE_OK(rc, GloAdmin->admindb);
+				sqlite3_stmt *statement1 = statement1_unique.get();
 				while ((row = mysql_fetch_row(results[6]))) {
 					rc=(*proxy_sqlite3_bind_text)(statement1, 1, row[0], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // hostname
@@ -2315,7 +2492,7 @@ void ProxySQL_Cluster::pull_mysql_servers_v2_from_peer(const mysql_servers_v2_ch
 				"Cluster: Fetching MySQL Servers v2 from peer %s:%d failed: Checksum changed from %s to %s\n",
 				hostname, port, peer_mysql_servers_v2_checksum, computed_checksum.c_str()
 			);
-			metrics.p_counter_array[p_cluster_counter::pulled_mysql_variables_failure]->Increment();
+			metrics.p_counter_array[p_cluster_counter::pulled_mysql_servers_failure]->Increment();
 			fetch_failed = true;
 		}
@@ -2374,6 +2551,10 @@ void ProxySQL_Cluster::pull_global_variables_from_peer(const string& var_type, c
 		vars_type_str = const_cast<char*>("LDAP");
 		success_metric = p_cluster_counter::pulled_ldap_variables_success;
 		failure_metric = p_cluster_counter::pulled_ldap_variables_failure;
+	} else if (var_type == "pgsql") {
+		vars_type_str = const_cast<char*>("PostgreSQL");
+		success_metric = p_cluster_counter::pulled_pgsql_variables_success;
+		failure_metric = p_cluster_counter::pulled_pgsql_variables_failure;
 	} else {
 		proxy_error("Invalid parameter supplied to 'pull_global_variables_from_peer': var_type=%s\n", var_type.c_str());
 		assert(0);
@@ -2387,6 +2568,8 @@ void ProxySQL_Cluster::pull_global_variables_from_peer(const string& var_type, c
 		nodes.get_peer_to_sync_admin_variables(&hostname, &port, &ip_address);
 	} else if (var_type == "ldap"){
 		nodes.get_peer_to_sync_ldap_variables(&hostname, &port, &ip_address);
+	} else if (var_type == "pgsql") {
+		nodes.get_peer_to_sync_pgsql_variables(&hostname, &port, &ip_address);
 	} else {
 		proxy_error("Invalid parameter supplied to 'pull_global_variables_from_peer': var_type=%s\n", var_type.c_str());
 		assert(0);
@@ -2429,6 +2612,8 @@ void ProxySQL_Cluster::pull_global_variables_from_peer(const string& var_type, c
 					s_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_ADMIN);
 				} else if (var_type == "mysql") {
 					s_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_MYSQL);
+				} else if (var_type == "pgsql") {
+					s_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_PGSQL);
 				}
 			}
 			s_query += " ORDER BY variable_name";
@@ -2450,13 +2635,15 @@ void ProxySQL_Cluster::pull_global_variables_from_peer(const string& var_type, c
 				// remember that we read from runtime_global_variables but write into global_variables
 				string_format("DELETE FROM global_variables WHERE variable_name LIKE '%s-%%'", d_query, var_type.c_str());
 				if (var_type == "mysql") {
-					s_query += " AND variable_name NOT IN ('mysql-threads')";
+					d_query += " AND variable_name NOT IN ('mysql-threads')";
 				}
 				if (GloVars.cluster_sync_interfaces == false) {
 					if (var_type == "admin") {
 						d_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_ADMIN);
 					} else if (var_type == "mysql") {
 						d_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_MYSQL);
+					} else if (var_type == "pgsql") {
+						d_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_PGSQL);
 					}
 				}
 				GloAdmin->admindb->execute(d_query.c_str());
@@ -2505,6 +2692,14 @@ void ProxySQL_Cluster::pull_global_variables_from_peer(const string& var_type, c
 						proxy_info("Cluster: Saving to disk LDAP Variables from peer %s:%d\n", hostname, port);
 						GloAdmin->flush_ldap_variables__from_memory_to_disk();
 					}
+				} else if (var_type == "pgsql") {
+					GloAdmin->load_pgsql_variables_to_runtime(expected_checksum, epoch);
+
+					if (GloProxyCluster->cluster_pgsql_variables_save_to_disk == true) {
+						proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving to disk PostgreSQL Variables from peer %s:%d\n", hostname, port);
+						proxy_info("Cluster: Saving to disk PostgreSQL Variables from peer %s:%d\n", hostname, port);
+						GloAdmin->flush_pgsql_variables__from_memory_to_disk();
+					}
 				} else {
 					proxy_error("Invalid parameter supplied to 'pull_global_variables_from_peer': var_type=%s\n", var_type.c_str());
 					assert(0);
@@ -2517,7 +2712,7 @@ void ProxySQL_Cluster::pull_global_variables_from_peer(const string& var_type, c
 					"Cluster: Fetching %s Variables from peer %s:%d failed: Checksum changed from %s to %s\n",
 					vars_type_str, hostname, port, expected_checksum.c_str(), computed_checksum.c_str()
 				);
-				metrics.p_counter_array[p_cluster_counter::pulled_mysql_variables_failure]->Increment();
+				metrics.p_counter_array[failure_metric]->Increment();
 				fetch_failed = true;
 			}
 		} else {
@@ -2674,128 +2869,923 @@ void ProxySQL_Cluster::pull_proxysql_servers_from_peer(const std::string& expect
 	if (fetch_failed == true) sleep(1);
 }
 
-void ProxySQL_Node_Entry::set_metrics(MYSQL_RES *_r, unsigned long long _response_time) {
-	MYSQL_ROW row;
-	metrics_idx_prev = metrics_idx;
-	metrics_idx++;
-	if (metrics_idx == PROXYSQL_NODE_METRICS_LEN) {
-		metrics_idx = 0;
-	}
-	ProxySQL_Node_Metrics *m = metrics[metrics_idx];
-	m->reset();
-	m->read_time_us = monotonic_time();
-	m->response_time_us = _response_time;
-	while ((row = mysql_fetch_row(_r))) {
-		char c = row[0][0];
-		switch (c) {
-			case 'C':
-				if (strcmp(row[0],"Client_Connections_connected")==0) {
-					m->Client_Connections_connected = atoll(row[1]);
-					break;
-				}
-				if (strcmp(row[0],"Client_Connections_created")==0) {
-					m->Client_Connections_created = atoll(row[1]);
-					break;
-				}
-				break;
-			case 'P':
-				if (strcmp(row[0],"ProxySQL_Uptime")==0) {
-					m->ProxySQL_Uptime = atoll(row[1]);
-				}
-				break;
-			case 'Q':
-				if (strcmp(row[0],"Questions")==0) {
-					m->Questions = atoll(row[1]);
-				}
-				break;
-			case 'S':
-				if (strcmp(row[0],"Servers_table_version")==0) {
-					m->Servers_table_version = atoll(row[1]);
-				}
-				break;
-			default:
-				break
+/**
+ * @brief Pulls PostgreSQL users configuration from a cluster peer node.
+ *
+ * This function fetches PostgreSQL users from a peer ProxySQL instance when the peer's
+ * checksum differs from the local checksum and the difference exceeds the configured
+ * threshold (cluster_pgsql_users_diffs_before_sync). It retrieves PostgreSQL user credentials
+ * including usernames, passwords, and connection settings for PostgreSQL authentication.
+ *
+ * The function performs the following steps:
+ * 1. Identifies the optimal peer to sync from using get_peer_to_sync_pgsql_users()
+ * 2. Establishes a MySQL connection to the peer's admin interface
+ * 3. Executes CLUSTER_QUERY_PGSQL_USERS to fetch user configuration
+ * 4. Computes checksums for the fetched data using PostgreSQL authentication logic
+ * 5. Validates checksums match the expected values
+ * 6. Loads the users to runtime via init_pgsql_users()
+ * 7. Optionally saves configuration to disk if cluster_pgsql_users_save_to_disk is enabled
+ *
+ * This function provides PostgreSQL-specific cluster synchronization for user credentials,
+ * ensuring consistent PostgreSQL authentication across all cluster nodes.
+ *
+ * @param expected_checksum The expected checksum of the PostgreSQL users on the peer
+ * @param epoch The epoch timestamp of the PostgreSQL users on the peer
+ *
+ * @note This function is thread-safe and reuses the update_mysql_users_mutex
+ * @note The function will sleep(1) if the fetch operation fails to prevent busy loops
+ * @see get_peer_to_sync_pgsql_users()
+ * @see CLUSTER_QUERY_PGSQL_USERS
+ * @see init_pgsql_users()
+ * @see get_pgsql_users_checksum()
+ */
+void ProxySQL_Cluster::pull_pgsql_users_from_peer(const std::string& expected_checksum, const time_t epoch) {
+	char * hostname = NULL;
+	char * ip_address = NULL;
+	uint16_t port = 0;
+	bool fetch_failed = false;
+	pthread_mutex_lock(&GloProxyCluster->update_mysql_users_mutex); // Reuse mysql_users mutex for pgsql_users
+	nodes.get_peer_to_sync_pgsql_users(&hostname, &port, &ip_address);
+	if (hostname) {
+		cluster_creds_t creds {};
+
+		MYSQL *conn = mysql_init(NULL);
+		if (conn==NULL) {
+			proxy_error("Unable to run mysql_init()\n");
+			goto __exit_pull_pgsql_users_from_peer;
+		}
-	}
-}
-
-using metric_name = std::string;
-using metric_help = std::string;
-using metric_tags = std::map<std::string, std::string>;
+
+		creds = GloProxyCluster->get_credentials();
+		if (creds.user.size()) { // do not monitor if the username is empty
+			// READ/WRITE timeouts were enforced as an attempt to prevent deadlocks in the original
+			// implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'.
+			unsigned int timeout = 1;
+			mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout);
+			{
+				unsigned char val = 1; mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val);
+				mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback);
+			}
+			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Users from peer %s:%d started. Expected checksum: %s\n", hostname, port, expected_checksum.c_str());
+			proxy_info("Cluster: Fetching PostgreSQL Users from peer %s:%d started. Expected checksum: %s\n", hostname, port, expected_checksum.c_str());
-
-using cluster_nodes_counter_tuple =
-	std::tuple<
-		p_cluster_nodes_counter::metric,
-		metric_name,
-		metric_help,
-		metric_tags
-	>;
+
+			MYSQL* rc_conn = mysql_real_connect(conn, ip_address ? ip_address : hostname, creds.user.c_str(), creds.pass.c_str(), NULL, port, NULL, 0);
+			if (rc_conn == nullptr) {
+				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Users from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn));
+				proxy_info("Cluster: Fetching PostgreSQL Users from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn));
+				metrics.p_counter_array[p_cluster_counter::pulled_pgsql_users_failure]->Increment();
+				fetch_failed = true;
+				goto __exit_pull_pgsql_users_from_peer;
+			}
-
-using cluster_nodes_gauge_tuple =
-	std::tuple<
-		p_cluster_nodes_gauge::metric,
-		metric_name,
-		metric_help,
-		metric_tags
-	>;
+
+			MySQL_Monitor::update_dns_cache_from_mysql_conn(conn);
-
-using cluster_nodes_dyn_counter_tuple =
-	std::tuple<
-		p_cluster_nodes_dyn_counter::metric,
-		metric_name,
-		metric_help,
-		metric_tags
-	>;
+
+			int rc_query = mysql_query(conn, CLUSTER_QUERY_PGSQL_USERS);
+			if (rc_query == 0) {
+				MYSQL_RES* pgsql_users_result = mysql_store_result(conn);
-
-using cluster_nodes_dyn_gauge_tuple =
-	std::tuple<
-		p_cluster_nodes_dyn_gauge::metric,
-		metric_name,
-		metric_help,
-		metric_tags
-	>;
+
+				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Users from peer %s:%d completed\n", hostname, port);
+				proxy_info("Cluster: Fetching PostgreSQL Users from peer %s:%d completed\n", hostname, port);
-
-using cluster_nodes_counter_vector = std::vector<cluster_nodes_counter_tuple>;
-using cluster_nodes_gauge_vector = std::vector<cluster_nodes_gauge_tuple>;
-using cluster_nodes_dyn_counter_vector = std::vector<cluster_nodes_dyn_counter_tuple>;
-using cluster_nodes_dyn_gauge_vector = std::vector<cluster_nodes_dyn_gauge_tuple>;
+
+				unique_ptr<SQLite3_result> pgsql_users_resultset { nullptr };
+				const uint64_t users_raw_checksum = get_pgsql_users_checksum(pgsql_users_result, pgsql_users_resultset);
+				const string computed_checksum { get_checksum_from_hash(users_raw_checksum) };
+				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Computed checksum for PostgreSQL Users from peer %s:%d : %s\n", hostname, port, computed_checksum.c_str());
+				proxy_info("Cluster: Computed checksum for PostgreSQL Users from peer %s:%d : %s\n", hostname, port, computed_checksum.c_str());
-
-const std::tuple<
-	cluster_nodes_counter_vector,
-	cluster_nodes_gauge_vector,
-	cluster_nodes_dyn_counter_vector,
-	cluster_nodes_dyn_gauge_vector
->
-cluster_nodes_metrics_map = std::make_tuple(
-	cluster_nodes_counter_vector{},
-	cluster_nodes_gauge_vector {},
-	cluster_nodes_dyn_counter_vector {
-		std::make_tuple (
-			p_cluster_nodes_dyn_counter::proxysql_servers_checksums_version_total,
-			"proxysql_servers_checksums_version_total",
-			"Number of times the configuration has been loaded locally.",
-			metric_tags {}
-		),
-		std::make_tuple (
-			p_cluster_nodes_dyn_counter::proxysql_servers_metrics_uptime_s,
-			"proxysql_servers_metrics_uptime_s_total",
-			"Current uptime of the Cluster node, in seconds.",
-			metric_tags {}
-		),
-		std::make_tuple (
-			p_cluster_nodes_dyn_counter::proxysql_servers_metrics_queries,
-			"proxysql_servers_metrics_queries_total",
-			"Number of queries the Cluster node has processed.",
-			metric_tags {}
-		),
-		std::make_tuple (
-			p_cluster_nodes_dyn_counter::proxysql_servers_metrics_client_conns_created,
-			"proxysql_servers_metrics_client_conns_created_total",
-			"Number of frontend client connections created over time on the Cluster node.",
-			metric_tags {}
-		),
-	},
-	cluster_nodes_dyn_gauge_vector {
+
+				if (expected_checksum == computed_checksum) {
+					update_pgsql_users(pgsql_users_result);
+					mysql_free_result(pgsql_users_result);
+
+					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Loading to runtime PostgreSQL Users from peer %s:%d\n", hostname, port);
+					proxy_info("Cluster: Loading to runtime PostgreSQL Users from peer %s:%d\n", hostname, port);
+
+					GloAdmin->init_pgsql_users(std::move(pgsql_users_resultset), expected_checksum, epoch);
+					if (GloProxyCluster->cluster_pgsql_users_save_to_disk == true) {
+						proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving to disk PostgreSQL Users from peer %s:%d\n", hostname, port);
+						proxy_info("Cluster: Saving to disk PostgreSQL Users from peer %s:%d\n", hostname, port);
+						GloAdmin->flush_pgsql_users__from_memory_to_disk();
+					} else {
+						proxy_debug(PROXY_DEBUG_CLUSTER, 5, "NOT saving to disk PostgreSQL Users from peer %s:%d\n", hostname, port);
+						proxy_info("Cluster: NOT saving to disk PostgreSQL Users from peer %s:%d\n", hostname, port);
+					}
+
+					metrics.p_counter_array[p_cluster_counter::pulled_pgsql_users_success]->Increment();
+				} else {
+					if (pgsql_users_result) {
+						mysql_free_result(pgsql_users_result);
+					}
+
+					proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Users from peer %s:%d failed: Checksum changed from %s to %s\n",
+						hostname, port, expected_checksum.c_str(), computed_checksum.c_str());
+					proxy_info(
+						"Cluster: Fetching PostgreSQL Users from peer %s:%d failed: Checksum changed from %s to %s\n",
+						hostname, port, expected_checksum.c_str(), computed_checksum.c_str()
+					);
+					metrics.p_counter_array[p_cluster_counter::pulled_pgsql_users_failure]->Increment();
+					fetch_failed = true;
+				}
+			} else {
+				proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Users from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn));
+				proxy_info("Cluster: Fetching PostgreSQL Users from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn));
+				metrics.p_counter_array[p_cluster_counter::pulled_pgsql_users_failure]->Increment();
+				fetch_failed = true;
+			}
+		}
+__exit_pull_pgsql_users_from_peer:
+		if (conn) {
+			if (conn->net.pvio) {
+				mysql_close(conn);
+			}
+		}
+		free(hostname);
+
+		if (ip_address)
+			free(ip_address);
+	}
+	pthread_mutex_unlock(&GloProxyCluster->update_mysql_users_mutex);
+	if (fetch_failed == true) sleep(1);
+}
+
+void ProxySQL_Cluster::pull_pgsql_variables_from_peer(const std::string& expected_checksum, const time_t epoch) {
+	char * hostname = NULL;
+	char * ip_address = NULL;
+	uint16_t port = 0;
+	bool fetch_failed = false;
+	pthread_mutex_lock(&GloProxyCluster->update_mysql_variables_mutex); // Reuse mysql_variables mutex for pgsql_variables
+	nodes.get_peer_to_sync_pgsql_variables(&hostname, &port, &ip_address);
+	if (hostname) {
+		cluster_creds_t creds {};
+
+		MYSQL *conn = mysql_init(NULL);
+		if (conn==NULL) {
+			proxy_error("Unable to run mysql_init()\n");
+			goto __exit_pull_pgsql_variables_from_peer;
+		}
+
+		creds = GloProxyCluster->get_credentials();
+		if (creds.user.size()) { // do not monitor if the username is empty
+			// READ/WRITE timeouts were enforced as an attempt to prevent deadlocks in the original
+			// implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'.
+			unsigned int timeout = 1;
+			mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout);
+			{
+				unsigned char val = 1; mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val);
+				mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback);
+			}
+			proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Variables from peer %s:%d started. Expected checksum: %s\n", hostname, port, expected_checksum.c_str());
+			proxy_info("Cluster: Fetching PostgreSQL Variables from peer %s:%d started. 
Expected checksum: %s\n", hostname, port, expected_checksum.c_str()); + + MYSQL* rc_conn = mysql_real_connect(conn, ip_address ? ip_address : hostname, creds.user.c_str(), creds.pass.c_str(), NULL, port, NULL, 0); + if (rc_conn == nullptr) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Variables from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching PostgreSQL Variables from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_variables_failure]->Increment(); + fetch_failed = true; + goto __exit_pull_pgsql_variables_from_peer; + } + + MySQL_Monitor::update_dns_cache_from_mysql_conn(conn); + + int rc_query = mysql_query(conn, CLUSTER_QUERY_PGSQL_VARIABLES); + if (rc_query == 0) { + MYSQL_RES* pgsql_variables_result = mysql_store_result(conn); + + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Variables from peer %s:%d completed\n", hostname, port); + proxy_info("Cluster: Fetching PostgreSQL Variables from peer %s:%d completed\n", hostname, port); + + uint64_t pgsql_variables_hash = mysql_raw_checksum(pgsql_variables_result); + const string computed_checksum { get_checksum_from_hash(pgsql_variables_hash) }; + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Computed checksum for PostgreSQL Variables from peer %s:%d : %s\n", hostname, port, computed_checksum.c_str()); + proxy_info("Cluster: Computed checksum for PostgreSQL Variables from peer %s:%d : %s\n", hostname, port, computed_checksum.c_str()); + + if (expected_checksum == computed_checksum) { + // Clear existing pgsql variables + string d_query; + string_format("DELETE FROM global_variables WHERE variable_name LIKE '%s-%%'", d_query, "pgsql"); + if (GloVars.cluster_sync_interfaces == false) { + d_query += " AND variable_name NOT IN " + string(CLUSTER_SYNC_INTERFACES_PGSQL); + } + GloAdmin->admindb->execute(d_query.c_str()); + + // Insert new pgsql variables + MYSQL_ROW row; + char *q = 
(char *)"INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES (?1 , ?2)"; + sqlite3_stmt *statement1 = NULL; + int rc = GloAdmin->admindb->prepare_v2(q, &statement1); + ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + + while ((row = mysql_fetch_row(pgsql_variables_result))) { + rc=(*proxy_sqlite3_bind_text)(statement1, 1, row[0], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // variable_name + rc=(*proxy_sqlite3_bind_text)(statement1, 2, row[1], -1, SQLITE_TRANSIENT); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); // variable_value + + SAFE_SQLITE3_STEP2(statement1); + rc=(*proxy_sqlite3_clear_bindings)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + rc=(*proxy_sqlite3_reset)(statement1); ASSERT_SQLITE_OK(rc, GloAdmin->admindb); + } + + mysql_free_result(pgsql_variables_result); + + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Loading to runtime PostgreSQL Variables from peer %s:%d\n", hostname, port); + proxy_info("Cluster: Loading to runtime PostgreSQL Variables from peer %s:%d\n", hostname, port); + + GloAdmin->load_pgsql_variables_to_runtime(expected_checksum, epoch); + + if (GloProxyCluster->cluster_pgsql_variables_save_to_disk == true) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving to disk PostgreSQL Variables from peer %s:%d\n", hostname, port); + proxy_info("Cluster: Saving to disk PostgreSQL Variables from peer %s:%d\n", hostname, port); + GloAdmin->flush_pgsql_variables__from_memory_to_disk(); + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "NOT saving to disk PostgreSQL Variables from peer %s:%d\n", hostname, port); + proxy_info("Cluster: NOT saving to disk PostgreSQL Variables from peer %s:%d\n", hostname, port); + } + + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_variables_success]->Increment(); + } else { + if (pgsql_variables_result) { + mysql_free_result(pgsql_variables_result); + } + + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Variables from peer %s:%d failed: Checksum changed from %s to 
%s\n", + hostname, port, expected_checksum.c_str(), computed_checksum.c_str()); + proxy_info( + "Cluster: Fetching PostgreSQL Variables from peer %s:%d failed: Checksum changed from %s to %s\n", + hostname, port, expected_checksum.c_str(), computed_checksum.c_str() + ); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_variables_failure]->Increment(); + fetch_failed = true; + } + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Variables from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching PostgreSQL Variables from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_variables_failure]->Increment(); + fetch_failed = true; + } + } +__exit_pull_pgsql_variables_from_peer: + if (conn) { + if (conn->net.pvio) { + mysql_close(conn); + } + } + free(hostname); + + if (ip_address) + free(ip_address); + } + pthread_mutex_unlock(&GloProxyCluster->update_mysql_variables_mutex); + if (fetch_failed == true) sleep(1); +} + +/** + * @brief Pulls PostgreSQL query rules configuration from a cluster peer node. + * + * This function fetches PostgreSQL query rules from a peer ProxySQL instance when the peer's + * checksum differs from the local checksum and the difference exceeds the configured + * threshold (cluster_pgsql_query_rules_diffs_before_sync). It retrieves both regular query + * rules and fast routing rules, providing PostgreSQL-specific cluster synchronization. + * + * The function performs the following steps: + * 1. Identifies the optimal peer to sync from using get_peer_to_sync_pgsql_query_rules() + * 2. Establishes a MySQL connection to the peer's admin interface + * 3. Executes CLUSTER_QUERY_PGSQL_QUERY_RULES and CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING + * 4. Computes checksums for the fetched data using the same combined resultset hash as runtime loading + * 5. Validates checksums match the expected values + * 6. 
Loads the query rules to runtime via load_pgsql_query_rules_to_runtime() + * 7. Optionally saves configuration to disk if cluster_pgsql_query_rules_save_to_disk is enabled + * + * This function implements the same synchronization pattern as MySQL query rules but + * for PostgreSQL-specific tables and query rule processing. + * + * @param expected_checksum The expected checksum of the PostgreSQL query rules on the peer + * @param epoch The epoch timestamp of the PostgreSQL query rules on the peer + * + * @note This function is thread-safe and reuses the update_mysql_query_rules_mutex + * @note The function will sleep(1) if the fetch operation fails to prevent busy loops + * @see get_peer_to_sync_pgsql_query_rules() + * @see CLUSTER_QUERY_PGSQL_QUERY_RULES + * @see CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING + * @see load_pgsql_query_rules_to_runtime() + */ +void ProxySQL_Cluster::pull_pgsql_query_rules_from_peer(const std::string& expected_checksum, const time_t epoch) { + char * hostname = NULL; + char * ip_address = NULL; + uint16_t port = 0; + bool fetch_failed = false; + pthread_mutex_lock(&GloProxyCluster->update_mysql_query_rules_mutex); // Reuse mysql_query_rules mutex for pgsql_query_rules + nodes.get_peer_to_sync_pgsql_query_rules(&hostname, &port, &ip_address); + if (hostname) { + cluster_creds_t creds {}; + + MYSQL *conn = mysql_init(NULL); + if (conn==NULL) { + proxy_error("Unable to run mysql_init()\n"); + goto __exit_pull_pgsql_query_rules_from_peer; + } + + creds = GloProxyCluster->get_credentials(); + if (creds.user.size()) { // do not monitor if the username is empty + // READ/WRITE timeouts were enforced as an attempt to prevent deadlocks in the original + // implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'. 
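The combined checksum described in step 4 of the comment block above can be sketched as follows. This is a minimal illustrative sketch, not ProxySQL's implementation: `combine_table_hashes` is a hypothetical helper standing in for the pattern where the raw checksums of `pgsql_query_rules` and the fast-routing resultset are summed before formatting and comparison.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch only: the query-rules pull path adds the per-table raw
// checksums together before rendering the total for comparison. uint64_t
// addition wraps modulo 2^64, so the combined value is well-defined and
// independent of the order in which tables are hashed.
static uint64_t combine_table_hashes(const std::vector<uint64_t>& table_hashes) {
    uint64_t combined = 0;
    for (uint64_t h : table_hashes) {
        combined += h; // wrapping addition
    }
    return combined;
}
```

Because the combination is a plain sum, both peers arrive at the same value regardless of which table is hashed first, which keeps the comparison stable across nodes.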
+ unsigned int timeout = 1; + mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout); + { + unsigned char val = 1; mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val); + mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback); + } + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Query Rules from peer %s:%d started. Expected checksum: %s\n", hostname, port, expected_checksum.c_str()); + proxy_info("Cluster: Fetching PostgreSQL Query Rules from peer %s:%d started. Expected checksum: %s\n", hostname, port, expected_checksum.c_str()); + + MYSQL* rc_conn = mysql_real_connect(conn, ip_address ? ip_address : hostname, creds.user.c_str(), creds.pass.c_str(), NULL, port, NULL, 0); + if (rc_conn == nullptr) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Query Rules from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching PostgreSQL Query Rules from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_query_rules_failure]->Increment(); + fetch_failed = true; + goto __exit_pull_pgsql_query_rules_from_peer; + } + + MySQL_Monitor::update_dns_cache_from_mysql_conn(conn); + + int rc_query = mysql_query(conn, CLUSTER_QUERY_PGSQL_QUERY_RULES); + if (rc_query == 0) { + MYSQL_RES* query_rules_result = mysql_store_result(conn); + MYSQL_RES* fast_routing_result = nullptr; + + // Fetch fast routing rules + int rc_query_fast = mysql_query(conn, CLUSTER_QUERY_PGSQL_QUERY_RULES_FAST_ROUTING); + if (rc_query_fast == 0) { + fast_routing_result = mysql_store_result(conn); + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Query Rules fast routing from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching PostgreSQL Query Rules fast routing from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + if (query_rules_result) { + 
mysql_free_result(query_rules_result); + } + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_query_rules_failure]->Increment(); + fetch_failed = true; + goto __exit_pull_pgsql_query_rules_from_peer; + } + + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Query Rules from peer %s:%d completed\n", hostname, port); + proxy_info("Cluster: Fetching PostgreSQL Query Rules from peer %s:%d completed\n", hostname, port); + + std::unique_ptr<SQLite3_result> query_rules_resultset { get_SQLite3_resulset(query_rules_result) }; + std::unique_ptr<SQLite3_result> fast_routing_resultset { get_SQLite3_resulset(fast_routing_result) }; + + const uint64_t rules_raw_checksum = + query_rules_resultset->raw_checksum() + fast_routing_resultset->raw_checksum(); + + const string computed_checksum { get_checksum_from_hash(rules_raw_checksum) }; + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Computed checksum for PostgreSQL Query Rules from peer %s:%d : %s\n", hostname, port, computed_checksum.c_str()); + proxy_info("Cluster: Computed checksum for PostgreSQL Query Rules from peer %s:%d : %s\n", hostname, port, computed_checksum.c_str()); + + if (expected_checksum == computed_checksum) { + mysql_free_result(query_rules_result); + mysql_free_result(fast_routing_result); + + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Loading to runtime PostgreSQL Query Rules from peer %s:%d\n", hostname, port); + proxy_info("Cluster: Loading to runtime PostgreSQL Query Rules from peer %s:%d\n", hostname, port); + + pthread_mutex_lock(&GloAdmin->sql_query_global_mutex); + GloAdmin->load_pgsql_query_rules_to_runtime( + query_rules_resultset.release(), + fast_routing_resultset.release(), + expected_checksum, + epoch + ); + GloAdmin->save_pgsql_query_rules_from_runtime(false); + GloAdmin->save_pgsql_query_rules_fast_routing_from_runtime(false); + if (GloProxyCluster->cluster_pgsql_query_rules_save_to_disk == true) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving to disk PostgreSQL Query Rules from peer %s:%d\n", hostname, port); + 
proxy_info("Cluster: Saving to disk PostgreSQL Query Rules from peer %s:%d\n", hostname, port); + GloAdmin->flush_GENERIC__from_to("pgsql_query_rules", "memory_to_disk"); + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "NOT saving to disk PostgreSQL Query Rules from peer %s:%d\n", hostname, port); + proxy_info("Cluster: NOT saving to disk PostgreSQL Query Rules from peer %s:%d\n", hostname, port); + } + pthread_mutex_unlock(&GloAdmin->sql_query_global_mutex); + + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_query_rules_success]->Increment(); + } else { + if (query_rules_result) { + mysql_free_result(query_rules_result); + } + if (fast_routing_result) { + mysql_free_result(fast_routing_result); + } + + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Query Rules from peer %s:%d failed: Checksum changed from %s to %s\n", + hostname, port, expected_checksum.c_str(), computed_checksum.c_str()); + proxy_info( + "Cluster: Fetching PostgreSQL Query Rules from peer %s:%d failed: Checksum changed from %s to %s\n", + hostname, port, expected_checksum.c_str(), computed_checksum.c_str() + ); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_query_rules_failure]->Increment(); + fetch_failed = true; + } + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Query Rules from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching PostgreSQL Query Rules from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_query_rules_failure]->Increment(); + fetch_failed = true; + } + } +__exit_pull_pgsql_query_rules_from_peer: + if (conn) { + if (conn->net.pvio) { + mysql_close(conn); + } + } + free(hostname); + + if (ip_address) + free(ip_address); + } + pthread_mutex_unlock(&GloProxyCluster->update_mysql_query_rules_mutex); + if (fetch_failed == true) sleep(1); +} + +/** + * @brief Pulls runtime PostgreSQL servers configuration from a cluster 
peer node. + * + * This function fetches runtime PostgreSQL servers status from a peer ProxySQL instance when the peer's + * checksum differs from the local checksum and the difference exceeds the configured + * threshold. It retrieves the current operational status, health metrics, and runtime statistics + * for PostgreSQL servers in the cluster. + * + * The function performs the following steps: + * 1. Identifies the optimal peer to sync from using get_peer_to_sync_runtime_pgsql_servers() + * 2. Establishes a MySQL connection to the peer's admin interface + * 3. Executes CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS to fetch runtime server status + * 4. Computes checksum for the fetched data using mysql_raw_checksum() + * 5. Validates checksum matches the expected value from peer_runtime_pgsql_server.value + * 6. Loads runtime PostgreSQL servers status into PgHGM and commits runtime state + * 7. Optionally saves configuration to disk if save settings are enabled + * + * Runtime data includes: + * - Server status (ONLINE, OFFLINE, SHUNNED, etc.) 
+ * - Current connections and load metrics + * - Health check results and response times + * - Error counts and statistics + * + * @param peer_runtime_pgsql_server The checksum structure containing expected checksum value and epoch timestamp for runtime PostgreSQL servers + * + * @note This function is thread-safe and acquires the update_runtime_mysql_servers_mutex internally (reused for pgsql_servers) + * @note The function will sleep(1) if the fetch operation fails to prevent busy loops + * @note The function reuses MySQL servers counters for metrics tracking + * @note Runtime records are committed through PgHGM::commit(..., only_commit_runtime_pgsql_servers=true) + * @see get_peer_to_sync_runtime_pgsql_servers() + * @see CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS + * @see mysql_raw_checksum() + * @see get_checksum_from_hash() + */ +void ProxySQL_Cluster::pull_runtime_pgsql_servers_from_peer(const runtime_pgsql_servers_checksum_t& peer_runtime_pgsql_server) { + char * hostname = NULL; + char * ip_address = NULL; + uint16_t port = 0; + char* peer_runtime_pgsql_servers_checksum = NULL; + bool fetch_failed = false; + pthread_mutex_lock(&GloProxyCluster->update_runtime_mysql_servers_mutex); // Reuse mysql_servers mutex for pgsql_servers + nodes.get_peer_to_sync_runtime_pgsql_servers(&hostname, &port, &peer_runtime_pgsql_servers_checksum, &ip_address); + if (hostname) { + cluster_creds_t creds {}; + + MYSQL *conn = mysql_init(NULL); + if (conn==NULL) { + proxy_error("Unable to run mysql_init()\n"); + goto __exit_pull_runtime_pgsql_servers_from_peer; + } + + creds = GloProxyCluster->get_credentials(); + if (creds.user.size()) { // do not monitor if the username is empty + // READ/WRITE timeouts were enforced as an attempt to prevent deadlocks in the original + // implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'.
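The checksum validation described in the comment block above (steps 4 and 5) relies on both peers rendering a raw 64-bit hash to the same string before comparing. A minimal sketch of that pattern, with hypothetical helper names: the `"0x%016llX"` format used here is an assumption for illustration, not necessarily the exact format of ProxySQL's `get_checksum_from_hash()`.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>

// Illustrative stand-in for rendering a raw hash as a checksum string.
// The format is an assumption; the point is that both peers must use the
// same rendering for the string comparison below to be meaningful.
static std::string checksum_from_hash(uint64_t hash) {
    char buf[20]; // "0x" + 16 hex digits + NUL
    std::snprintf(buf, sizeof(buf), "0x%016llX",
                  static_cast<unsigned long long>(hash));
    return std::string(buf);
}

// A fetched resultset is applied only when the advertised checksum is
// non-empty and matches the checksum computed over the fetched rows.
static bool should_apply(const std::string& expected, uint64_t computed_hash) {
    return !expected.empty() && expected == checksum_from_hash(computed_hash);
}
```

Comparing fixed-width, zero-padded strings avoids any ambiguity between peers about leading zeros or case, which is why the mismatch branches above can safely log and count a failure on plain string inequality.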
+ unsigned int timeout = 1; + mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout); + { + unsigned char val = 1; mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val); + mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback); + } + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching Runtime PostgreSQL Servers from peer %s:%d started.\n", hostname, port); + proxy_info("Cluster: Fetching Runtime PostgreSQL Servers from peer %s:%d started.\n", hostname, port); + + MYSQL* rc_conn = mysql_real_connect(conn, ip_address ? ip_address : hostname, creds.user.c_str(), creds.pass.c_str(), NULL, port, NULL, 0); + if (rc_conn == nullptr) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching Runtime PostgreSQL Servers from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching Runtime PostgreSQL Servers from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_failure]->Increment(); + fetch_failed = true; + goto __exit_pull_runtime_pgsql_servers_from_peer; + } + + MySQL_Monitor::update_dns_cache_from_mysql_conn(conn); + + fetch_query query = { + CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS, + p_cluster_counter::metric(-1), + p_cluster_counter::metric(-1), + { + "Cluster: Fetching Runtime PostgreSQL Servers from peer " + string(hostname) + ":" + std::to_string(port) + " completed.", + "", + "Cluster: Fetching Runtime PostgreSQL Servers from peer " + string(hostname) + ":" + std::to_string(port) + " failed: " + } + }; + + MYSQL_RES* result = nullptr; + + const int rc_query = fetch_and_store(conn, query, &result); + if (rc_query != 0 || result == nullptr) { + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_failure]->Increment(); + fetch_failed = true; + goto __exit_pull_runtime_pgsql_servers_from_peer; + } + + const uint64_t hash_val = mysql_raw_checksum(result); + const string computed_checksum = get_checksum_from_hash(hash_val); + 
const string expected_runtime_checksum = peer_runtime_pgsql_servers_checksum + ? string(peer_runtime_pgsql_servers_checksum) : peer_runtime_pgsql_server.value; + + if (!expected_runtime_checksum.empty() && computed_checksum == expected_runtime_checksum) { + GloAdmin->pgsql_servers_wrlock(); + std::unique_ptr<SQLite3_result> runtime_pgsql_servers_resultset = get_SQLite3_resulset(result); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Loading runtime_pgsql_servers from peer %s:%d into pgsql_servers_incoming\n", hostname, port); + PgHGM->servers_add(runtime_pgsql_servers_resultset.get()); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Updating runtime_pgsql_servers from peer %s:%d\n", hostname, port); + PgHGM->commit( + { runtime_pgsql_servers_resultset.release(), { expected_runtime_checksum, peer_runtime_pgsql_server.epoch } }, + { nullptr, {} }, true, true + ); + + if (GloProxyCluster->cluster_pgsql_servers_save_to_disk == true) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving Runtime PostgreSQL Servers to Database\n"); + GloAdmin->save_pgsql_servers_runtime_to_database(false); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving to disk PostgreSQL Servers from peer %s:%d\n", hostname, port); + proxy_info("Cluster: Saving to disk PostgreSQL Servers from peer %s:%d\n", hostname, port); + GloAdmin->flush_GENERIC__from_to(ClusterModules::PGSQL_SERVERS, "memory_to_disk"); + } + GloAdmin->pgsql_servers_wrunlock(); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_success]->Increment(); + } else { + proxy_debug( + PROXY_DEBUG_CLUSTER, 5, + "Checksum mismatch while syncing Runtime PostgreSQL Servers. Expected: %s, Computed: %s\n", + expected_runtime_checksum.empty() ? "" : expected_runtime_checksum.c_str(), + computed_checksum.c_str() + ); + proxy_info( + "Cluster: Checksum mismatch while syncing Runtime PostgreSQL Servers. Expected: %s, Computed: %s\n", + expected_runtime_checksum.empty() ? 
"" : expected_runtime_checksum.c_str(), + computed_checksum.c_str() + ); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_failure]->Increment(); + fetch_failed = true; + } + + if (result) { + mysql_free_result(result); + result = nullptr; + } + } +__exit_pull_runtime_pgsql_servers_from_peer: + if (conn) { + if (conn->net.pvio) { + mysql_close(conn); + } + } + free(hostname); + + if (ip_address) + free(ip_address); + + if (peer_runtime_pgsql_servers_checksum) + free(peer_runtime_pgsql_servers_checksum); + } + pthread_mutex_unlock(&GloProxyCluster->update_runtime_mysql_servers_mutex); + if (fetch_failed == true) sleep(1); +} + +/** + * @brief Pulls PostgreSQL servers v2 configuration from a cluster peer node. + * + * This function fetches PostgreSQL servers configuration from a peer ProxySQL instance when the peer's + * checksum differs from the local checksum and the difference exceeds the configured + * threshold (cluster_pgsql_servers_diffs_before_sync). It retrieves PostgreSQL server definitions + * including hostgroup mappings, connection parameters, and server status. + * + * The function performs the following steps: + * 1. Identifies the optimal peer to sync from using get_peer_to_sync_pgsql_servers_v2() + * 2. Establishes a MySQL connection to the peer's admin interface + * 3. Executes CLUSTER_QUERY_PGSQL_SERVERS_V2 plus dependent pgsql server tables + * 4. Computes and validates the combined checksum for pgsql_servers_v2 + dependencies + * 5. Optionally fetches and validates runtime pgsql servers checksum in the same operation + * 6. Loads fetched pgsql servers datasets into runtime via load_pgsql_servers_to_runtime() + * 7. 
Optionally saves configuration to disk if cluster_pgsql_servers_save_to_disk is enabled + * + * @param peer_pgsql_server_v2 The checksum structure containing expected checksum value and epoch timestamp for pgsql_servers_v2 + * @param peer_runtime_pgsql_server The checksum structure containing runtime PostgreSQL servers checksum + * @param fetch_runtime_pgsql_servers Boolean flag indicating whether to fetch runtime PostgreSQL servers data + * + * @note This function is thread-safe and acquires the update_mysql_servers_v2_mutex internally (reused for pgsql_servers_v2) + * @note The function will sleep(1) if the fetch operation fails to prevent busy loops + * @note The function reuses MySQL servers counters for metrics tracking + * @note Runtime data can be fetched and validated in the same operation when requested + * @see get_peer_to_sync_pgsql_servers_v2() + * @see CLUSTER_QUERY_PGSQL_SERVERS_V2 + * @see mysql_raw_checksum() + * @see get_checksum_from_hash() + */ +void ProxySQL_Cluster::pull_pgsql_servers_v2_from_peer(const pgsql_servers_v2_checksum_t& peer_pgsql_server_v2, + const runtime_pgsql_servers_checksum_t& peer_runtime_pgsql_server, bool fetch_runtime_pgsql_servers) { + char * hostname = NULL; + char * ip_address = NULL; + uint16_t port = 0; + char* peer_pgsql_servers_v2_checksum = NULL; + char* peer_runtime_pgsql_servers_checksum = NULL; + bool fetch_failed = false; + pthread_mutex_lock(&GloProxyCluster->update_mysql_servers_v2_mutex); // Reuse mysql_servers_v2 mutex for pgsql_servers_v2 + nodes.get_peer_to_sync_pgsql_servers_v2(&hostname, &port, &peer_pgsql_servers_v2_checksum, &peer_runtime_pgsql_servers_checksum, &ip_address); + if (hostname) { + cluster_creds_t creds {}; + + MYSQL *conn = mysql_init(NULL); + if (conn==NULL) { + proxy_error("Unable to run mysql_init()\n"); + goto __exit_pull_pgsql_servers_v2_from_peer; + } + + creds = GloProxyCluster->get_credentials(); + if (creds.user.size()) { // do not monitor if the username is empty + // 
READ/WRITE timeouts were enforced as an attempt to prevent deadlocks in the original + // implementation. They were proven unnecessary, leaving only 'CONNECT_TIMEOUT'. + unsigned int timeout = 1; + mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout); + { + unsigned char val = 1; mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val); + mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback); + } + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Servers v2 from peer %s:%d started.\n", hostname, port); + proxy_info("Cluster: Fetching PostgreSQL Servers v2 from peer %s:%d started.\n", hostname, port); + + MYSQL* rc_conn = mysql_real_connect(conn, ip_address ? ip_address : hostname, creds.user.c_str(), creds.pass.c_str(), NULL, port, NULL, 0); + if (rc_conn == nullptr) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching PostgreSQL Servers v2 from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + proxy_info("Cluster: Fetching PostgreSQL Servers v2 from peer %s:%d failed: %s\n", hostname, port, mysql_error(conn)); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_failure]->Increment(); + fetch_failed = true; + goto __exit_pull_pgsql_servers_v2_from_peer; + } + + MySQL_Monitor::update_dns_cache_from_mysql_conn(conn); + + fetch_query queries[] = { + { + CLUSTER_QUERY_PGSQL_SERVERS_V2, + p_cluster_counter::metric(-1), + p_cluster_counter::metric(-1), + { + "Cluster: Fetching PostgreSQL Servers v2 from peer " + string(hostname) + ":" + std::to_string(port) + " completed.", + "", + "Cluster: Fetching PostgreSQL Servers v2 from peer " + string(hostname) + ":" + std::to_string(port) + " failed: " + } + }, + { + CLUSTER_QUERY_PGSQL_REPLICATION_HOSTGROUPS, + p_cluster_counter::metric(-1), + p_cluster_counter::metric(-1), + { + "Cluster: Fetching PostgreSQL Replication Hostgroups from peer " + string(hostname) + ":" + std::to_string(port) + " completed.", + "", + "Cluster: Fetching PostgreSQL Replication 
Hostgroups from peer " + string(hostname) + ":" + std::to_string(port) + " failed: " + } + }, + { + CLUSTER_QUERY_PGSQL_HOSTGROUP_ATTRIBUTES, + p_cluster_counter::metric(-1), + p_cluster_counter::metric(-1), + { + "Cluster: Fetching PostgreSQL Hostgroup Attributes from peer " + string(hostname) + ":" + std::to_string(port) + " completed.", + "", + "Cluster: Fetching PostgreSQL Hostgroup Attributes from peer " + string(hostname) + ":" + std::to_string(port) + " failed: " + } + } + }; + + std::vector<MYSQL_RES*> results(4, nullptr); + bool fetching_error = false; + for (size_t i = 0; i < sizeof(queries) / sizeof(fetch_query); i++) { + MYSQL_RES* fetch_res = nullptr; + if (fetch_and_store(conn, queries[i], &fetch_res) == 0) { + results[i] = fetch_res; + } else { + fetching_error = true; + fetch_failed = true; + break; + } + } + + if (fetching_error == false && fetch_runtime_pgsql_servers == true) { + fetch_query runtime_query = { + CLUSTER_QUERY_RUNTIME_PGSQL_SERVERS, + p_cluster_counter::metric(-1), + p_cluster_counter::metric(-1), + { + "Cluster: Fetching Runtime PostgreSQL Servers from peer " + string(hostname) + ":" + std::to_string(port) + " completed.", + "", + "Cluster: Fetching Runtime PostgreSQL Servers from peer " + string(hostname) + ":" + std::to_string(port) + " failed: " + } + }; + + MYSQL_RES* fetch_res = nullptr; + if (fetch_and_store(conn, runtime_query, &fetch_res) == 0) { + results[3] = fetch_res; + } else { + fetching_error = true; + fetch_failed = true; + } + } + + if (fetching_error == false) { + const string expected_pgsql_v2_checksum = peer_pgsql_servers_v2_checksum + ? string(peer_pgsql_servers_v2_checksum) : peer_pgsql_server_v2.value; + const uint64_t servers_hash = compute_servers_tables_raw_checksum(results, 3); + const string computed_pgsql_v2_checksum = get_checksum_from_hash(servers_hash); + + bool runtime_checksum_matches = true; + string expected_runtime_pgsql_checksum = peer_runtime_pgsql_servers_checksum + ?
string(peer_runtime_pgsql_servers_checksum) : peer_runtime_pgsql_server.value; + string computed_runtime_pgsql_checksum; + + if (fetch_runtime_pgsql_servers == true) { + if (results[3] == nullptr || expected_runtime_pgsql_checksum.empty()) { + runtime_checksum_matches = false; + } else { + const uint64_t runtime_hash = mysql_raw_checksum(results[3]); + computed_runtime_pgsql_checksum = get_checksum_from_hash(runtime_hash); + runtime_checksum_matches = (computed_runtime_pgsql_checksum == expected_runtime_pgsql_checksum); + } + } + + if (!expected_pgsql_v2_checksum.empty() && + computed_pgsql_v2_checksum == expected_pgsql_v2_checksum && + runtime_checksum_matches == true) { + const incoming_pgsql_servers_t incoming_pgsql_servers { convert_pgsql_servers_resultsets(results) }; + const runtime_pgsql_servers_checksum_t expected_runtime_pgsql_server { + expected_runtime_pgsql_checksum, peer_runtime_pgsql_server.epoch + }; + const pgsql_servers_v2_checksum_t expected_pgsql_server_v2 { + expected_pgsql_v2_checksum, peer_pgsql_server_v2.epoch + }; + + proxy_info("Cluster: Loading to runtime PostgreSQL Servers from peer %s:%d.\n", hostname, port); + GloAdmin->pgsql_servers_wrlock(); + update_pgsql_servers(incoming_pgsql_servers.incoming_pgsql_servers_v2); + update_pgsql_replication_hostgroups(incoming_pgsql_servers.incoming_replication_hostgroups); + update_pgsql_hostgroup_attributes(incoming_pgsql_servers.incoming_hostgroup_attributes); + GloAdmin->load_pgsql_servers_to_runtime( + incoming_pgsql_servers, + fetch_runtime_pgsql_servers ? 
expected_runtime_pgsql_server : runtime_pgsql_servers_checksum_t {}, + expected_pgsql_server_v2 + ); + + if (GloProxyCluster->cluster_pgsql_servers_save_to_disk == true) { + if (fetch_runtime_pgsql_servers == true) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving Runtime PostgreSQL Servers to Database\n"); + GloAdmin->save_pgsql_servers_runtime_to_database(false); + } + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Saving to disk PostgreSQL Servers from peer %s:%d\n", hostname, port); + proxy_info("Cluster: Saving to disk PostgreSQL Servers from peer %s:%d\n", hostname, port); + GloAdmin->flush_GENERIC__from_to(ClusterModules::PGSQL_SERVERS, "memory_to_disk"); + } + GloAdmin->pgsql_servers_wrunlock(); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_success]->Increment(); + } else { + proxy_debug( + PROXY_DEBUG_CLUSTER, 5, + "Checksum mismatch while syncing PostgreSQL Servers. Expected v2: %s, Computed v2: %s, Expected runtime: %s, Computed runtime: %s\n", + expected_pgsql_v2_checksum.empty() ? "" : expected_pgsql_v2_checksum.c_str(), + computed_pgsql_v2_checksum.c_str(), + expected_runtime_pgsql_checksum.empty() ? "" : expected_runtime_pgsql_checksum.c_str(), + computed_runtime_pgsql_checksum.empty() ? "" : computed_runtime_pgsql_checksum.c_str() + ); + proxy_info( + "Cluster: Checksum mismatch while syncing PostgreSQL Servers. Expected v2: %s, Computed v2: %s, Expected runtime: %s, Computed runtime: %s\n", + expected_pgsql_v2_checksum.empty() ? "" : expected_pgsql_v2_checksum.c_str(), + computed_pgsql_v2_checksum.c_str(), + expected_runtime_pgsql_checksum.empty() ? "" : expected_runtime_pgsql_checksum.c_str(), + computed_runtime_pgsql_checksum.empty() ? 
"" : computed_runtime_pgsql_checksum.c_str() + ); + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_failure]->Increment(); + fetch_failed = true; + } + } else { + metrics.p_counter_array[p_cluster_counter::pulled_pgsql_servers_failure]->Increment(); + fetch_failed = true; + } + + for (MYSQL_RES* result : results) { + if (result) { + mysql_free_result(result); + } + } + } +__exit_pull_pgsql_servers_v2_from_peer: + if (conn) { + if (conn->net.pvio) { + mysql_close(conn); + } + } + free(hostname); + + if (ip_address) + free(ip_address); + + if (peer_pgsql_servers_v2_checksum) + free(peer_pgsql_servers_v2_checksum); + + if (peer_runtime_pgsql_servers_checksum) + free(peer_runtime_pgsql_servers_checksum); + } + pthread_mutex_unlock(&GloProxyCluster->update_mysql_servers_v2_mutex); + if (fetch_failed == true) sleep(1); +} + +using metric_name = std::string; +using metric_help = std::string; +using metric_tags = std::map<string, string>; + +using cluster_nodes_counter_tuple = + std::tuple< + p_cluster_nodes_counter::metric, + metric_name, + metric_help, + metric_tags + >; + +using cluster_nodes_gauge_tuple = + std::tuple< + p_cluster_nodes_gauge::metric, + metric_name, + metric_help, + metric_tags + >; + +using cluster_nodes_dyn_counter_tuple = + std::tuple< + p_cluster_nodes_dyn_counter::metric, + metric_name, + metric_help, + metric_tags + >; + +using cluster_nodes_dyn_gauge_tuple = + std::tuple< + p_cluster_nodes_dyn_gauge::metric, + metric_name, + metric_help, + metric_tags + >; + +using cluster_nodes_counter_vector = std::vector<cluster_nodes_counter_tuple>; +using cluster_nodes_gauge_vector = std::vector<cluster_nodes_gauge_tuple>; +using cluster_nodes_dyn_counter_vector = std::vector<cluster_nodes_dyn_counter_tuple>; +using cluster_nodes_dyn_gauge_vector = std::vector<cluster_nodes_dyn_gauge_tuple>; + +const std::tuple< + cluster_nodes_counter_vector, + cluster_nodes_gauge_vector, + cluster_nodes_dyn_counter_vector, + cluster_nodes_dyn_gauge_vector +> +cluster_nodes_metrics_map = std::make_tuple( + cluster_nodes_counter_vector{}, + cluster_nodes_gauge_vector {}, + 
cluster_nodes_dyn_counter_vector { + std::make_tuple ( + p_cluster_nodes_dyn_counter::proxysql_servers_checksums_version_total, + "proxysql_servers_checksums_version_total", + "Number of times the configuration has been loaded locally.", + metric_tags {} + ), + std::make_tuple ( + p_cluster_nodes_dyn_counter::proxysql_servers_metrics_uptime_s, + "proxysql_servers_metrics_uptime_s_total", + "Current uptime of the Cluster node, in seconds.", + metric_tags {} + ), + std::make_tuple ( + p_cluster_nodes_dyn_counter::proxysql_servers_metrics_queries, + "proxysql_servers_metrics_queries_total", + "Number of queries the Cluster node has processed.", + metric_tags {} + ), + std::make_tuple ( + p_cluster_nodes_dyn_counter::proxysql_servers_metrics_client_conns_created, + "proxysql_servers_metrics_client_conns_created_total", + "Number of frontend client connections created over time on the Cluster node.", + metric_tags {} + ), + }, + cluster_nodes_dyn_gauge_vector { std::make_tuple ( p_cluster_nodes_dyn_gauge::proxysql_servers_checksums_epoch, "proxysql_servers_checksums_epoch", @@ -2993,6 +3983,51 @@ void ProxySQL_Cluster_Nodes::Reset_Global_Checksums(bool lock) { } } +void ProxySQL_Node_Entry::set_metrics(MYSQL_RES *_r, unsigned long long _response_time) { + MYSQL_ROW row; + metrics_idx_prev = metrics_idx; + metrics_idx++; + if (metrics_idx == PROXYSQL_NODE_METRICS_LEN) { + metrics_idx = 0; + } + ProxySQL_Node_Metrics *m = metrics[metrics_idx]; + m->reset(); + m->read_time_us = monotonic_time(); + m->response_time_us = _response_time; + while ((row = mysql_fetch_row(_r))) { + char c = row[0][0]; + switch (c) { + case 'C': + if (strcmp(row[0],"Client_Connections_connected")==0) { + m->Client_Connections_connected = atoll(row[1]); + break; + } + if (strcmp(row[0],"Client_Connections_created")==0) { + m->Client_Connections_created = atoll(row[1]); + break; + } + break; + case 'P': + if (strcmp(row[0],"ProxySQL_Uptime")==0) { + m->ProxySQL_Uptime = atoll(row[1]); + } + break; 
+ case 'Q': + if (strcmp(row[0],"Questions")==0) { + m->Questions = atoll(row[1]); + } + break; + case 'S': + if (strcmp(row[0],"Servers_table_version")==0) { + m->Servers_table_version = atoll(row[1]); + } + break; + default: + break; + } + } +} + // if it returns false , the node doesn't exist anymore and the monitor should stop bool ProxySQL_Cluster_Nodes::Update_Node_Metrics(char * _h, uint16_t _p, MYSQL_RES *_r, unsigned long long _response_time) { bool ret = false; @@ -3015,408 +4050,301 @@ bool ProxySQL_Cluster_Nodes::Update_Node_Metrics(char * _h, uint16_t _p, MYSQL_R } void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_query_rules(char **host, uint16_t *port, char** ip_address) { - unsigned long long version = 0; - unsigned long long epoch = 0; - unsigned long long max_epoch = 0; - char *hostname = NULL; - char *ip_addr = NULL; - uint16_t p = 0; -// pthread_mutex_lock(&mutex); - //unsigned long long curtime = monotonic_time(); - unsigned int diff_mqr = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_query_rules_diffs_before_sync,0); - for( std::unordered_map::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end(); ) { - ProxySQL_Node_Entry * node = it->second; - ProxySQL_Checksum_Value_2 * v = &node->checksums_values.mysql_query_rules; - if (v->version > 1) { - if ( v->epoch > epoch ) { - max_epoch = v->epoch; - if (v->diff_check >= diff_mqr) { - epoch = v->epoch; - version = v->version; - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } - hostname=strdup(node->get_hostname()); - - const char* ip = node->get_ipaddress(); - if (ip) - ip_addr= strdup(ip); - - p = node->get_port(); - } - } - } - it++; - } -// pthread_mutex_unlock(&mutex); - if (epoch) { - if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with mysql_query_rules epoch %llu , but not enough diff_check. 
We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } - } - } - if (hostname) { - *host = hostname; - *port = p; - *ip_address = ip_addr; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_query_rules version %llu, epoch %llu\n", hostname, p, version, epoch); - proxy_info("Cluster: detected peer %s:%d with mysql_query_rules version %llu, epoch %llu\n", hostname, p, version, epoch); - } + get_peer_to_sync_variables_module("mysql_query_rules", host, port, ip_address, nullptr, nullptr); } void ProxySQL_Cluster_Nodes::get_peer_to_sync_runtime_mysql_servers(char **host, uint16_t *port, char **peer_checksum, char** ip_address) { - unsigned long long version = 0; - unsigned long long epoch = 0; - unsigned long long max_epoch = 0; - char *hostname = NULL; - char *ip_addr = NULL; - uint16_t p = 0; - char *pc = NULL; -// pthread_mutex_lock(&mutex); - //unsigned long long curtime = monotonic_time(); - unsigned int diff_ms = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_servers_diffs_before_sync,0); - for( std::unordered_map::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end(); ) { - ProxySQL_Node_Entry * node = it->second; - ProxySQL_Checksum_Value_2 * v = &node->checksums_values.mysql_servers; - if (v->version > 1) { - if ( v->epoch > epoch ) { - max_epoch = v->epoch; - if (v->diff_check >= diff_ms) { - epoch = v->epoch; - version = v->version; - if (pc) { - free(pc); - } - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } - pc = strdup(v->checksum); - hostname=strdup(node->get_hostname()); - const char* ip = node->get_ipaddress(); - if (ip) - ip_addr=strdup(ip); - p = node->get_port(); - } - } - } - it++; - } -// pthread_mutex_unlock(&mutex); - if (epoch) { - if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with mysql_servers epoch %llu 
, but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (pc) { - free(pc); - pc = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } - } - } - if (hostname) { - *host = hostname; - *port = p; - *ip_address = ip_addr; - *peer_checksum = pc; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_servers version %llu, epoch %llu, checksum %s\n", hostname, p, version, epoch, pc); - proxy_info("Cluster: detected peer %s:%d with mysql_servers version %llu, epoch %llu\n", hostname, p, version, epoch); - } -} - -void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_servers_v2(char** host, uint16_t* port, - char** peer_mysql_servers_v2_checksum, char** peer_runtime_mysql_servers_checksum, char** ip_address) { - unsigned long long version = 0; - unsigned long long epoch = 0; - unsigned long long max_epoch = 0; - char* hostname = NULL; - char* ip_addr = NULL; - uint16_t p = 0; - char* mysql_servers_v2_checksum = NULL; - char* runtime_mysql_servers_checksum = NULL; - //pthread_mutex_lock(&mutex); - //unsigned long long curtime = monotonic_time(); - unsigned int diff_ms = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_servers_diffs_before_sync, 0); - for (std::unordered_map::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end(); ) { - ProxySQL_Node_Entry* node = it->second; - ProxySQL_Checksum_Value_2* v = &node->checksums_values.mysql_servers_v2; - if (v->version > 1) { - if (v->epoch > epoch) { - max_epoch = v->epoch; - if (v->diff_check >= diff_ms) { - epoch = v->epoch; - version = v->version; - if (mysql_servers_v2_checksum) { - free(mysql_servers_v2_checksum); - } - if (runtime_mysql_servers_checksum) { - free(runtime_mysql_servers_checksum); - } - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } - mysql_servers_v2_checksum = strdup(v->checksum); - 
runtime_mysql_servers_checksum = strdup(node->checksums_values.mysql_servers.checksum); - hostname = strdup(node->get_hostname()); - const char* ip = node->get_ipaddress(); - if (ip) - ip_addr = strdup(ip); - p = node->get_port(); - } - } - } - it++; - } - // pthread_mutex_unlock(&mutex); - if (epoch) { - if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with mysql_servers_v2 epoch %llu , but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (mysql_servers_v2_checksum) { - free(mysql_servers_v2_checksum); - mysql_servers_v2_checksum = NULL; - } - if (runtime_mysql_servers_checksum) { - free(runtime_mysql_servers_checksum); - runtime_mysql_servers_checksum = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } - } - } - if (hostname) { - *host = hostname; - *port = p; - *ip_address = ip_addr; - *peer_mysql_servers_v2_checksum = mysql_servers_v2_checksum; - *peer_runtime_mysql_servers_checksum = runtime_mysql_servers_checksum; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_servers_v2 version %llu, epoch %llu, mysql_servers_v2 checksum %s, runtime_mysql_servers %s\n", hostname, p, version, epoch, mysql_servers_v2_checksum, runtime_mysql_servers_checksum); - proxy_info("Cluster: detected peer %s:%d with mysql_servers_v2 version %llu, epoch %llu\n", hostname, p, version, epoch); - } -} - -void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_users(char **host, uint16_t *port, char** ip_address) { - unsigned long long version = 0; - unsigned long long epoch = 0; - unsigned long long max_epoch = 0; - char *hostname = NULL; - char *ip_addr = NULL; - uint16_t p = 0; -// pthread_mutex_lock(&mutex); - //unsigned long long curtime = monotonic_time(); - unsigned int diff_mu = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_users_diffs_before_sync,0); - for( std::unordered_map::iterator it = 
umap_proxy_nodes.begin(); it != umap_proxy_nodes.end(); ) { - ProxySQL_Node_Entry * node = it->second; - ProxySQL_Checksum_Value_2 * v = &node->checksums_values.mysql_users; - if (v->version > 1) { - if ( v->epoch > epoch ) { - max_epoch = v->epoch; - if (v->diff_check >= diff_mu) { - epoch = v->epoch; - version = v->version; - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } - hostname=strdup(node->get_hostname()); - const char* ip = node->get_ipaddress(); - if (ip) - ip_addr = strdup(ip); - p = node->get_port(); - } - } - } - it++; - } -// pthread_mutex_unlock(&mutex); - if (epoch) { - if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with mysql_users epoch %llu , but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } - } - } - if (hostname) { - *host = hostname; - *port = p; - *ip_address = ip_addr; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_users version %llu, epoch %llu\n", hostname, p, version, epoch); - proxy_info("Cluster: detected peer %s:%d with mysql_users version %llu, epoch %llu\n", hostname, p, version, epoch); - } -} - -void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_variables(char **host, uint16_t *port, char** ip_address) { - unsigned long long version = 0; - unsigned long long epoch = 0; - unsigned long long max_epoch = 0; - char *hostname = NULL; - char* ip_addr = NULL; - uint16_t p = 0; - unsigned int diff_mu = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_mysql_variables_diffs_before_sync,0); - for (std::unordered_map::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end();) { - ProxySQL_Node_Entry * node = it->second; - ProxySQL_Checksum_Value_2 * v = &node->checksums_values.mysql_variables; - if (v->version > 1) { - if ( v->epoch > epoch ) { - max_epoch = v->epoch; - if 
(v->diff_check >= diff_mu) { - epoch = v->epoch; - version = v->version; - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } - hostname=strdup(node->get_hostname()); - const char* ip = node->get_ipaddress(); - if (ip) - ip_addr = strdup(ip); - p = node->get_port(); - } - } - } - it++; - } - if (epoch) { - if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with mysql_variables epoch %llu, but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } + get_peer_to_sync_variables_module("runtime_mysql_servers", host, port, ip_address, peer_checksum, nullptr); +} + +void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_servers_v2(char** host, uint16_t* port, + char** peer_mysql_servers_v2_checksum, char** peer_runtime_mysql_servers_checksum, char** ip_address) { + get_peer_to_sync_variables_module("mysql_servers_v2", host, port, ip_address, peer_mysql_servers_v2_checksum, peer_runtime_mysql_servers_checksum); +} + +void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_users(char **host, uint16_t *port, char** ip_address) { + get_peer_to_sync_variables_module("mysql_users", host, port, ip_address, nullptr, nullptr); +} + +/** + * @brief Unified function to find optimal peer for sync operations + * + * Data-driven implementation that replaces separate functions for various modules. 
+ * Uses module configuration to determine: + * - Which cluster variable to check for diff threshold + * - Which checksum field to examine in each node + * - Whether to return checksum data + * - Module name for debug logging + * + * @param module_name The name of the module ("mysql_query_rules", "mysql_users", "proxysql_servers", + * "pgsql_users", "pgsql_query_rules", "runtime_mysql_servers", "runtime_pgsql_servers", + * "mysql_servers_v2", "pgsql_servers_v2", "mysql_variables", "admin_variables", + * "ldap_variables", "pgsql_variables") + * @param host Pointer to store the selected peer's hostname + * @param port Pointer to store the selected peer's port + * @param ip_address Pointer to store the selected peer's IP address + * @param peer_checksum Optional: pointer to store checksum (NULL if not needed) + * @param peer_secondary_checksum Optional: pointer to store secondary checksum for V2 functions (NULL if not needed) + */ +void ProxySQL_Cluster_Nodes::get_peer_to_sync_variables_module(const char* module_name, char **host, uint16_t *port, char** ip_address, char **peer_checksum, char **peer_secondary_checksum) { + // Data-driven mapping of module names to their cluster configurations + struct ModuleConfig { + const char* name; + std::atomic<unsigned int> ProxySQL_Cluster::*diff_member; + std::function<ProxySQL_Checksum_Value_2* (ProxySQL_Node_Entry*)> checksum_getter; + std::function<ProxySQL_Checksum_Value_2* (ProxySQL_Node_Entry*)> secondary_checksum_getter; + bool has_checksum; + bool has_secondary_checksum; + const char* secondary_module_name; + }; + + // Initialize all supported modules with their configuration + const ModuleConfig modules[] = { + // Basic 3-param modules (no checksum) + {"mysql_query_rules", &ProxySQL_Cluster::cluster_mysql_query_rules_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.mysql_query_rules; }, nullptr, false, false, nullptr}, + {"mysql_users", &ProxySQL_Cluster::cluster_mysql_users_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.mysql_users; }, nullptr, false, false, 
nullptr}, + {"proxysql_servers", &ProxySQL_Cluster::cluster_proxysql_servers_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.proxysql_servers; }, nullptr, false, false, nullptr}, + {"pgsql_users", &ProxySQL_Cluster::cluster_pgsql_users_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.pgsql_users; }, nullptr, false, false, nullptr}, + {"pgsql_query_rules", &ProxySQL_Cluster::cluster_pgsql_query_rules_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.pgsql_query_rules; }, nullptr, false, false, nullptr}, + + // Runtime 4-param modules (with checksum) + {"runtime_mysql_servers", &ProxySQL_Cluster::cluster_mysql_servers_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.mysql_servers; }, nullptr, true, false, nullptr}, + {"runtime_pgsql_servers", &ProxySQL_Cluster::cluster_pgsql_servers_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.pgsql_servers; }, nullptr, true, false, nullptr}, + + // V2 5-param modules (with dual checksums) + {"mysql_servers_v2", &ProxySQL_Cluster::cluster_mysql_servers_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.mysql_servers_v2; }, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.mysql_servers; }, true, true, "runtime_mysql_servers"}, + {"pgsql_servers_v2", &ProxySQL_Cluster::cluster_pgsql_servers_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.pgsql_servers_v2; }, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.pgsql_servers; }, true, true, "runtime_pgsql_servers"}, + + // Variables modules (already unified) + {"mysql_variables", &ProxySQL_Cluster::cluster_mysql_variables_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.mysql_variables; }, nullptr, false, false, nullptr}, + {"admin_variables", 
&ProxySQL_Cluster::cluster_admin_variables_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.admin_variables; }, nullptr, false, false, nullptr}, + {"ldap_variables", &ProxySQL_Cluster::cluster_ldap_variables_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.ldap_variables; }, nullptr, false, false, nullptr}, + {"pgsql_variables", &ProxySQL_Cluster::cluster_pgsql_variables_diffs_before_sync, + [](ProxySQL_Node_Entry* node) { return &node->checksums_values.pgsql_variables; }, nullptr, false, false, nullptr} + }; + + // Find the matching module configuration + const ModuleConfig* config = nullptr; + for (const auto& module : modules) { + if (strcmp(module_name, module.name) == 0) { + config = &module; + break; } } - if (hostname) { - *host = hostname; - *port = p; - *ip_address = ip_addr; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with mysql_variables version %llu, epoch %llu\n", hostname, p, version, epoch); - proxy_info("Cluster: detected peer %s:%d with mysql_variables version %llu, epoch %llu\n", hostname, p, version, epoch); - } -} + if (!config) { + proxy_error("Invalid module name supplied to get_peer_to_sync_variables_module: %s\n", module_name); + return; + } -void ProxySQL_Cluster_Nodes::get_peer_to_sync_admin_variables(char **host, uint16_t *port, char** ip_address) { unsigned long long version = 0; unsigned long long epoch = 0; unsigned long long max_epoch = 0; char *hostname = NULL; - char *ip_addr = NULL; + char* ip_addr = NULL; uint16_t p = 0; - unsigned int diff_mu = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_admin_variables_diffs_before_sync,0); + char *checksum = NULL; + char *secondary_checksum = NULL; + + // Get diff threshold using member pointer with atomic load + unsigned int diff_threshold = (unsigned int)(GloProxyCluster->*(config->diff_member)).load(); + for (std::unordered_map<uint64_t, ProxySQL_Node_Entry *>::iterator it = umap_proxy_nodes.begin(); it != 
umap_proxy_nodes.end();) { ProxySQL_Node_Entry * node = it->second; - ProxySQL_Checksum_Value_2 * v = &node->checksums_values.admin_variables; + // Use function pointer to access the correct checksum field + ProxySQL_Checksum_Value_2 * v = config->checksum_getter(node); + if (v->version > 1) { if ( v->epoch > epoch ) { max_epoch = v->epoch; - if (v->diff_check >= diff_mu) { + if (v->diff_check >= diff_threshold) { epoch = v->epoch; version = v->version; - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } + + // Clean up existing allocations + if (hostname) free(hostname); + if (ip_addr) free(ip_addr); + if (checksum) free(checksum); + if (secondary_checksum) free(secondary_checksum); + + // Allocate new values hostname=strdup(node->get_hostname()); const char* ip = node->get_ipaddress(); if (ip) ip_addr = strdup(ip); p = node->get_port(); + + if (config->has_checksum) { + checksum = strdup(v->checksum); + } + + if (config->has_secondary_checksum && config->secondary_checksum_getter) { + ProxySQL_Checksum_Value_2 * secondary_v = config->secondary_checksum_getter(node); + if (secondary_v) { + secondary_checksum = strdup(secondary_v->checksum); + } + } } } } - it++; + + ++it; } + if (epoch) { if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with admin_variables epoch %llu, but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } + proxy_warning("Cluster: detected a peer with %s epoch %llu, but not enough diff_check. 
We won't sync from epoch %llu: temporarily skipping sync\n", config->name, max_epoch, epoch); + + // Clean up allocated memory + if (hostname) { free(hostname); hostname = NULL; } + if (ip_addr) { free(ip_addr); ip_addr = NULL; } + if (checksum) { free(checksum); checksum = NULL; } + if (secondary_checksum) { free(secondary_checksum); secondary_checksum = NULL; } } } + if (hostname) { *host = hostname; *port = p; - *ip_address = ip_addr; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with admin_variables version %llu, epoch %llu\n", hostname, p, version, epoch); - proxy_info("Cluster: detected peer %s:%d with admin_variables version %llu, epoch %llu\n", hostname, p, version, epoch); + if (ip_address) { + *ip_address = ip_addr; + } + if (peer_checksum) { + *peer_checksum = checksum; + } else if (checksum) { + free(checksum); // Free if not requested + } + if (peer_secondary_checksum) { + *peer_secondary_checksum = secondary_checksum; + } else if (secondary_checksum) { + free(secondary_checksum); // Free if not requested + } + + const char* log_name = config->secondary_module_name ? 
config->secondary_module_name : config->name; + if (config->has_secondary_checksum && secondary_checksum) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with %s version %llu, epoch %llu, primary checksum %s, secondary checksum %s\n", hostname, p, config->name, version, epoch, checksum, secondary_checksum); + proxy_info("Cluster: detected peer %s:%d with %s version %llu, epoch %llu\n", hostname, p, config->name, version, epoch); + } else if (config->has_checksum) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with %s version %llu, epoch %llu, checksum %s\n", hostname, p, log_name, version, epoch, checksum); + proxy_info("Cluster: detected peer %s:%d with %s version %llu, epoch %llu\n", hostname, p, log_name, version, epoch); + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with %s version %llu, epoch %llu\n", hostname, p, log_name, version, epoch); + proxy_info("Cluster: detected peer %s:%d with %s version %llu, epoch %llu\n", hostname, p, log_name, version, epoch); + } } } +void ProxySQL_Cluster_Nodes::get_peer_to_sync_mysql_variables(char **host, uint16_t *port, char** ip_address) { + get_peer_to_sync_variables_module("mysql_variables", host, port, ip_address, nullptr, nullptr); +} + +void ProxySQL_Cluster_Nodes::get_peer_to_sync_admin_variables(char **host, uint16_t *port, char** ip_address) { + get_peer_to_sync_variables_module("admin_variables", host, port, ip_address, nullptr, nullptr); +} + void ProxySQL_Cluster_Nodes::get_peer_to_sync_ldap_variables(char **host, uint16_t *port, char** ip_address) { + get_peer_to_sync_variables_module("ldap_variables", host, port, ip_address, nullptr, nullptr); +} + +void ProxySQL_Cluster_Nodes::get_peer_to_sync_pgsql_variables(char **host, uint16_t *port, char** ip_address) { + get_peer_to_sync_variables_module("pgsql_variables", host, port, ip_address, nullptr, nullptr); +} + +void ProxySQL_Cluster_Nodes::get_peer_to_sync_proxysql_servers(char **host, uint16_t *port, char** 
ip_address) { + get_peer_to_sync_variables_module("proxysql_servers", host, port, ip_address, nullptr, nullptr); +} + + + +/** + * @brief Identifies the optimal cluster peer for PostgreSQL users synchronization. + * + * This function scans all available cluster nodes to find the best peer for synchronizing + * PostgreSQL users configuration. It selects a peer based on the following criteria: + * 1. The peer must have a valid pgsql_users checksum (version > 1) + * 2. The peer should have the latest epoch timestamp + * 3. The peer's diff_check count must exceed cluster_pgsql_users_diffs_before_sync threshold + * + * The algorithm prioritizes nodes with the most recent configuration changes while ensuring + * sufficient differences have accumulated to justify synchronization. This prevents excessive + * network traffic and unnecessary synchronization operations. + * + * @param host Pointer to store the hostname of the selected peer (caller must free) + * @param port Pointer to store the port number of the selected peer + * @param ip_address Pointer to store the IP address of the selected peer (caller must free) + * + * @note If no suitable peer is found, *host will be set to NULL + * @note If a peer has the maximum epoch but insufficient diff_check, a warning is logged and sync is skipped + * @note The function performs memory allocation for hostname and ip_address that must be freed by the caller + * @note This is a PostgreSQL counterpart to get_peer_to_sync_mysql_users() + * @see cluster_pgsql_users_diffs_before_sync + * @see ProxySQL_Checksum_Value_2::pgsql_users + */ +void ProxySQL_Cluster_Nodes::get_peer_to_sync_pgsql_users(char **host, uint16_t *port, char** ip_address) { + get_peer_to_sync_variables_module("pgsql_users", host, port, ip_address, nullptr, nullptr); +} + +/** + * @brief Identifies the optimal cluster peer for PostgreSQL query rules synchronization. 
+ * + * This function scans all available cluster nodes to find the best peer for synchronizing + * PostgreSQL query rules configuration. It selects a peer based on the following criteria: + * 1. The peer must have a valid pgsql_query_rules checksum (version > 1) + * 2. The peer should have the latest epoch timestamp + * 3. The peer's diff_check count must exceed cluster_pgsql_query_rules_diffs_before_sync threshold + * + * The algorithm prioritizes nodes with the most recent query rules changes while ensuring + * sufficient differences have accumulated to justify synchronization. This prevents excessive + * network traffic and unnecessary synchronization operations for query rules that haven't + * changed significantly. + * + * @param host Pointer to store the hostname of the selected peer (caller must free) + * @param port Pointer to store the port number of the selected peer + * @param ip_address Pointer to store the IP address of the selected peer (caller must free) + * + * @note If no suitable peer is found, *host will be set to NULL + * @note If a peer has the maximum epoch but insufficient diff_check, a warning is logged and sync is skipped + * @note The function performs memory allocation for hostname and ip_address that must be freed by the caller + * @note This is a PostgreSQL counterpart to get_peer_to_sync_mysql_query_rules() + * @see cluster_pgsql_query_rules_diffs_before_sync + * @see ProxySQL_Checksum_Value_2::pgsql_query_rules + */ +void ProxySQL_Cluster_Nodes::get_peer_to_sync_pgsql_query_rules(char **host, uint16_t *port, char** ip_address) { unsigned long long version = 0; unsigned long long epoch = 0; unsigned long long max_epoch = 0; char *hostname = NULL; - char* ip_addr = NULL; + char *ip_addr = NULL; uint16_t p = 0; - unsigned int diff_mu = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_ldap_variables_diffs_before_sync,0); - for (std::unordered_map::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end();) { + 
unsigned int diff_mu = (unsigned int)GloProxyCluster->cluster_pgsql_query_rules_diffs_before_sync; + for( std::unordered_map<uint64_t, ProxySQL_Node_Entry *>::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end(); ) { ProxySQL_Node_Entry * node = it->second; - ProxySQL_Checksum_Value_2 * v = &node->checksums_values.ldap_variables; + ProxySQL_Checksum_Value_2 * v = &node->checksums_values.pgsql_query_rules; if (v->version > 1) { if ( v->epoch > epoch ) { max_epoch = v->epoch; if (v->diff_check >= diff_mu) { epoch = v->epoch; version = v->version; - if (hostname) { - free(hostname); - } - if (ip_addr) { - free(ip_addr); - } - hostname=strdup(node->get_hostname()); const char* ip = node->get_ipaddress(); - if (ip) - ip_addr = strdup(ip); + if (!safe_update_peer_info(&hostname, &ip_addr, node->get_hostname(), ip)) { + proxy_error("Memory allocation failed while updating pgsql_query_rules peer info\n"); + return; + } p = node->get_port(); } } @@ -3425,43 +4353,96 @@ void ProxySQL_Cluster_Nodes::get_peer_to_sync_ldap_variables(char **host, uint16 } if (epoch) { if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with ldap_variables epoch %llu, but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); - if (hostname) { - free(hostname); - hostname = NULL; - } - if (ip_addr) { - free(ip_addr); - ip_addr = NULL; - } + proxy_warning("Cluster: detected a peer with pgsql_query_rules epoch %llu , but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch);
+ // Clean up allocated memory using helper function + safe_update_peer_info(&hostname, &ip_addr, NULL, NULL); } } if (hostname) { *host = hostname; *port = p; *ip_address = ip_addr; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with ldap_variables version %llu, epoch %llu\n", hostname, p, version, epoch); - proxy_info("Cluster: detected peer %s:%d with ldap_variables version %llu, epoch %llu\n", hostname, p, version, epoch); + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_query_rules version %llu, epoch %llu\n", hostname, p, version, epoch); + proxy_info("Cluster: detected peer %s:%d with pgsql_query_rules version %llu, epoch %llu\n", hostname, p, version, epoch); } } -void ProxySQL_Cluster_Nodes::get_peer_to_sync_proxysql_servers(char **host, uint16_t *port, char** ip_address) { +/** + * @brief Identifies the optimal cluster peer for runtime PostgreSQL servers synchronization. + * + * This function scans all available cluster nodes to find the best peer for synchronizing + * runtime PostgreSQL servers status and metrics. It selects a peer based on the following criteria: + * 1. The peer must have a valid pgsql_servers runtime checksum (version > 1) + * 2. The peer should have the latest epoch timestamp + * 3. The peer's diff_check count must exceed cluster_pgsql_servers_diffs_before_sync threshold + * + * The function focuses on runtime data synchronization, which includes server status, + * health metrics, connection counts, and other operational statistics. This enables + * cluster nodes to maintain consistent views of PostgreSQL server operational states. 
+ * + * @param host Pointer to store the hostname of the selected peer (caller must free) + * @param port Pointer to store the port number of the selected peer + * @param peer_checksum Pointer to store the runtime checksum string of the selected peer (caller must free, optional) + * @param ip_address Pointer to store the IP address of the selected peer (caller must free) + * + * @note If no suitable peer is found, *host will be set to NULL + * @note If a peer has the maximum epoch but insufficient diff_check, a warning is logged and sync is skipped + * @note The function performs memory allocation for hostname, ip_address, and checksum that must be freed by the caller + * @note This is a PostgreSQL counterpart to get_peer_to_sync_runtime_mysql_servers() + * @see cluster_pgsql_servers_diffs_before_sync + * @see ProxySQL_Checksum_Value_2::pgsql_servers + */ +void ProxySQL_Cluster_Nodes::get_peer_to_sync_runtime_pgsql_servers(char **host, uint16_t *port, char **peer_checksum, char** ip_address) { + get_peer_to_sync_variables_module("runtime_pgsql_servers", host, port, ip_address, peer_checksum, nullptr); +} + +/** + * @brief Identifies the optimal cluster peer for PostgreSQL servers v2 synchronization. + * + * This function scans all available cluster nodes to find the best peer for synchronizing + * PostgreSQL servers configuration. It selects a peer based on the following criteria: + * 1. The peer must have a valid pgsql_servers_v2 checksum (version > 1) + * 2. The peer should have the latest epoch timestamp + * 3. The peer's diff_check count must exceed cluster_pgsql_servers_diffs_before_sync threshold + * + * In addition to connection information, this function also provides checksums for both + * the static configuration (pgsql_servers_v2) and runtime status (pgsql_servers) if requested. + * This enables the caller to perform comprehensive synchronization of both configuration + * and runtime data. 
+ * + * @param host Pointer to store the hostname of the selected peer (caller must free) + * @param port Pointer to store the port number of the selected peer + * @param peer_pgsql_servers_v2_checksum Pointer to store the pgsql_servers_v2 checksum string (caller must free, optional) + * @param peer_runtime_pgsql_servers_checksum Pointer to store the runtime pgsql_servers checksum string (caller must free, optional) + * @param ip_address Pointer to store the IP address of the selected peer (caller must free) + * + * @note If no suitable peer is found, *host will be set to NULL + * @note If a peer has the maximum epoch but insufficient diff_check, a warning is logged and sync is skipped + * @note The function performs memory allocation for all returned strings that must be freed by the caller + * @note Runtime checksum is only provided if the peer has valid runtime data (version > 1) + * @note This is a PostgreSQL counterpart to get_peer_to_sync_mysql_servers_v2() + * @see cluster_pgsql_servers_diffs_before_sync + * @see ProxySQL_Checksum_Value_2::pgsql_servers_v2 + * @see ProxySQL_Checksum_Value_2::pgsql_servers + */ +void ProxySQL_Cluster_Nodes::get_peer_to_sync_pgsql_servers_v2(char** host, uint16_t* port, char** peer_pgsql_servers_v2_checksum, + char** peer_runtime_pgsql_servers_checksum, char** ip_address) { unsigned long long version = 0; unsigned long long epoch = 0; unsigned long long max_epoch = 0; char *hostname = NULL; char *ip_addr = NULL; uint16_t p = 0; -// pthread_mutex_lock(&mutex); - //unsigned long long curtime = monotonic_time(); - unsigned int diff_ps = (unsigned int)__sync_fetch_and_add(&GloProxyCluster->cluster_proxysql_servers_diffs_before_sync,0); + char *checksum_v2 = NULL; + char *checksum_runtime = NULL; + unsigned int diff_ms = (unsigned int)GloProxyCluster->cluster_pgsql_servers_diffs_before_sync; for( std::unordered_map::iterator it = umap_proxy_nodes.begin(); it != umap_proxy_nodes.end(); ) { ProxySQL_Node_Entry * node = it->second; - 
ProxySQL_Checksum_Value_2 * v = &node->checksums_values.proxysql_servers; + ProxySQL_Checksum_Value_2 * v = &node->checksums_values.pgsql_servers_v2; if (v->version > 1) { if ( v->epoch > epoch ) { max_epoch = v->epoch; - if (v->diff_check >= diff_ps) { + if (v->diff_check >= diff_ms) { epoch = v->epoch; version = v->version; if (hostname) { @@ -3470,20 +4451,31 @@ void ProxySQL_Cluster_Nodes::get_peer_to_sync_proxysql_servers(char **host, uint if (ip_addr) { free(ip_addr); } + if (checksum_v2) { + free(checksum_v2); + } + if (checksum_runtime) { + free(checksum_runtime); + } hostname=strdup(node->get_hostname()); const char* ip = node->get_ipaddress(); if (ip) ip_addr = strdup(ip); p = node->get_port(); + checksum_v2 = strdup(v->checksum); + // Get runtime checksum as well + ProxySQL_Checksum_Value_2 * v_runtime = &node->checksums_values.pgsql_servers; + if (v_runtime->version > 1) { + checksum_runtime = strdup(v_runtime->checksum); + } } } } it++; } -// pthread_mutex_unlock(&mutex); if (epoch) { if (max_epoch > epoch) { - proxy_warning("Cluster: detected a peer with proxysql_servers epoch %llu , but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch); + proxy_warning("Cluster: detected a peer with pgsql_servers_v2 epoch %llu , but not enough diff_check. We won't sync from epoch %llu: temporarily skipping sync\n", max_epoch, epoch);
if (hostname) { free(hostname); hostname = NULL; @@ -3492,17 +4484,38 @@ void ProxySQL_Cluster_Nodes::get_peer_to_sync_proxysql_servers(char **host, uint free(ip_addr); ip_addr = NULL; } + if (checksum_v2) { + free(checksum_v2); + checksum_v2 = NULL; + } + if (checksum_runtime) { + free(checksum_runtime); + checksum_runtime = NULL; + } } } if (hostname) { *host = hostname; *port = p; *ip_address = ip_addr; - proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with proxysql_servers version %llu, epoch %llu\n", hostname, p, version, epoch); - proxy_info("Cluster: detected peer %s:%d with proxysql_servers version %llu, epoch %llu\n", hostname, p, version, epoch); + if (peer_pgsql_servers_v2_checksum) { + *peer_pgsql_servers_v2_checksum = checksum_v2; + } else { + if (checksum_v2) + free(checksum_v2); + } + if (peer_runtime_pgsql_servers_checksum) { + *peer_runtime_pgsql_servers_checksum = checksum_runtime; + } else { + if (checksum_runtime) + free(checksum_runtime); + } + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Detected peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu\n", hostname, p, version, epoch); + proxy_info("Cluster: detected peer %s:%d with pgsql_servers_v2 version %llu, epoch %llu\n", hostname, p, version, epoch); } } + SQLite3_result * ProxySQL_Cluster_Nodes::stats_proxysql_servers_checksums() { const int colnum=9; SQLite3_result *result=new SQLite3_result(colnum); @@ -4168,21 +5181,95 @@ cluster_metrics_map = std::make_tuple( { "status", "success" } } ), - std::make_tuple ( - p_cluster_counter::pulled_ldap_variables_failure, - "proxysql_cluster_pulled_total", - "Number of times a 'module' have been pulled from a peer.", - metric_tags { - { "module_name", "ldap_variables" }, - { "status", "failure" } - } - ), + std::make_tuple ( + p_cluster_counter::pulled_ldap_variables_failure, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.",
+ metric_tags { + { "module_name", "ldap_variables" }, + { "status", "failure" } + } + ), + + // pgsql modules + std::make_tuple ( + p_cluster_counter::pulled_pgsql_query_rules_success, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_query_rules" }, + { "status", "success" } + } + ), + std::make_tuple ( + p_cluster_counter::pulled_pgsql_query_rules_failure, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_query_rules" }, + { "status", "failure" } + } + ), + std::make_tuple ( + p_cluster_counter::pulled_pgsql_servers_success, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_servers" }, + { "status", "success" } + } + ), + std::make_tuple ( + p_cluster_counter::pulled_pgsql_servers_failure, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_servers" }, + { "status", "failure" } + } + ), + std::make_tuple ( + p_cluster_counter::pulled_pgsql_users_success, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_users" }, + { "status", "success" } + } + ), + std::make_tuple ( + p_cluster_counter::pulled_pgsql_users_failure, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_users" }, + { "status", "failure" } + } + ), + std::make_tuple ( + p_cluster_counter::pulled_pgsql_variables_success, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_variables" }, + { "status", "success" } + } + ), + std::make_tuple ( 
p_cluster_counter::pulled_pgsql_variables_failure, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", + metric_tags { + { "module_name", "pgsql_variables" }, + { "status", "failure" } + } + ), - // mysql_ldap_mappings_* - std::make_tuple ( - p_cluster_counter::pulled_mysql_ldap_mapping_success, - "proxysql_cluster_pulled_total", - "Number of times a 'module' have been pulled from a peer.", + // mysql_ldap_mappings_* + std::make_tuple ( + p_cluster_counter::pulled_mysql_ldap_mapping_success, + "proxysql_cluster_pulled_total", + "Number of times a 'module' have been pulled from a peer.", metric_tags { { "module_name", "mysql_ldap_mapping" }, { "status", "success" } @@ -4262,6 +5349,42 @@ cluster_metrics_map = std::make_tuple( { "module_name", "ldap_variables" }, { "reason", "servers_share_epoch" } } + ), + std::make_tuple ( + p_cluster_counter::sync_conflict_pgsql_query_rules_share_epoch, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_query_rules" }, + { "reason", "servers_share_epoch" } + } + ), + std::make_tuple ( + p_cluster_counter::sync_conflict_pgsql_servers_share_epoch, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_servers" }, + { "reason", "servers_share_epoch" } + } + ), + std::make_tuple ( + p_cluster_counter::sync_conflict_pgsql_users_share_epoch, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_users" }, + { "reason", "servers_share_epoch" } + } + ), + std::make_tuple ( + p_cluster_counter::sync_conflict_pgsql_variables_share_epoch, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_variables" }, + { "reason", 
"servers_share_epoch" } + } ), // ==================================================================== @@ -4330,7 +5453,42 @@ cluster_metrics_map = std::make_tuple( { "reason", "version_one" } } ), - // ==================================================================== + std::make_tuple ( + p_cluster_counter::sync_delayed_pgsql_query_rules_version_one, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_query_rules" }, + { "reason", "version_one" } + } + ), + std::make_tuple ( + p_cluster_counter::sync_delayed_pgsql_servers_version_one, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_servers" }, + { "reason", "version_one" } + } + ), + std::make_tuple ( + p_cluster_counter::sync_delayed_pgsql_users_version_one, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_users" }, + { "reason", "version_one" } + } + ), + std::make_tuple ( + p_cluster_counter::sync_delayed_pgsql_variables_version_one, + "proxysql_cluster_syn_conflict_total", + "Number of times a 'module' has not been able to be synced.", + metric_tags { + { "module_name", "pgsql_variables" }, + { "reason", "version_one" } + } + ), }, cluster_gauge_vector {} ); @@ -4353,6 +5511,13 @@ ProxySQL_Cluster::ProxySQL_Cluster() : proxysql_servers_to_monitor(NULL) { cluster_mysql_servers_diffs_before_sync = 3; cluster_mysql_users_diffs_before_sync = 3; cluster_proxysql_servers_diffs_before_sync = 3; + cluster_mysql_variables_diffs_before_sync = 3; + cluster_ldap_variables_diffs_before_sync = 3; + cluster_admin_variables_diffs_before_sync = 3; + cluster_pgsql_query_rules_diffs_before_sync = 3; + cluster_pgsql_servers_diffs_before_sync = 3; + cluster_pgsql_users_diffs_before_sync = 3; + cluster_pgsql_variables_diffs_before_sync = 3; 
cluster_mysql_query_rules_save_to_disk = true; cluster_mysql_servers_save_to_disk = true; cluster_mysql_users_save_to_disk = true; @@ -4461,6 +5626,630 @@ const char* ProxySQL_Node_Address::get_host_address() const { return host_address; } +/* + * Unified Pull Framework for ProxySQL Cluster Synchronization + * + * This framework provides a data-driven approach to unify all pull_*_from_peer functions, + * reducing code duplication and improving maintainability. + */ + +/** + * Configuration structure for unified pull operations + */ +struct PullOperationConfig { + // Basic function info + const char* module_name; + + // Peer selection callback + std::function<void(char**, uint16_t*, char**)> peer_selector; + + // Query configuration + std::vector<fetch_query> queries; + bool use_multiple_queries; + + // Configuration callbacks + std::function<bool(const std::string&, time_t)> checksum_validator; + std::function<void(const std::string&, bool)> data_loader; + std::function<void()> runtime_loader; + std::function<bool()> save_to_disk_checker; + + // Metrics tracking + p_cluster_counter::metric success_metric; + p_cluster_counter::metric failure_metric; + + // Mutex for thread safety + pthread_mutex_t* operation_mutex; + + // Logging callbacks + std::function<const char*()> get_module_display_name; + std::function<const char*()> get_description; + + // Optional additional parameters for complex operations + std::function<void(void*, MYSQL*)> custom_setup; + std::function<void(void*, MYSQL_RES**)> custom_processor; + void* custom_context; +}; + +/** + * Unified pull operation handler that abstracts common patterns + * + * @param config Configuration structure defining the pull operation + * @param expected_checksum Expected checksum for validation (simple operations) + * @param epoch Expected epoch timestamp (simple operations) + * @param complex_context Additional context for complex operations (checksum structs, etc.) 
+ */ +void ProxySQL_Cluster::pull_from_peer_unified(const PullOperationConfig& config, + const string& expected_checksum, + const time_t epoch, + void* complex_context) { + char *hostname = NULL; + char *ip_address = NULL; + uint16_t port = 0; + bool fetch_failed = false; + + // Acquire mutex for thread safety + pthread_mutex_lock(config.operation_mutex); + + // Select peer using the configured callback + config.peer_selector(&hostname, &port, &ip_address); + + if (hostname) { + cluster_creds_t creds {}; + + MYSQL *conn = mysql_init(NULL); + if (conn == NULL) { + proxy_error("Unable to initialize MySQL connection for %s\n", config.module_name); + goto exit_unified_pull; + } + + // Setup credentials + creds = GloProxyCluster->get_credentials(); + if (creds.user.size()) { + unsigned int timeout = 1; + mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, &timeout); + { + unsigned char val = 1; + mysql_options(conn, MYSQL_OPT_SSL_ENFORCE, &val); + mysql_options(conn, MARIADB_OPT_SSL_KEYLOG_CALLBACK, (void*)proxysql_keylog_write_line_callback); + } + + // Log operation start + const char* display_name = config.get_module_display_name ? config.get_module_display_name() : config.module_name; + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Fetching %s from peer %s:%d started. Expected checksum: %s\n", + display_name, hostname, port, expected_checksum.c_str()); + proxy_info("Cluster: Fetching %s from peer %s:%d started. Expected checksum: %s\n", + display_name, hostname, port, expected_checksum.c_str()); + + // Establish connection + MYSQL* rc_conn = mysql_real_connect( + conn, ip_address ? ip_address : hostname, creds.user.c_str(), creds.pass.c_str(), NULL, port, NULL, 0 + ); + + if (rc_conn == NULL) { + proxy_error("Cluster: unable to fetch %s from peer %s:%d. Error: %s\n", + display_name, hostname, port, mysql_error(conn));
+ fetch_failed = true; + } else { + // Update DNS cache + MySQL_Monitor::update_dns_cache_from_mysql_conn(conn); + + // Custom setup callback if provided + if (config.custom_setup) { + config.custom_setup(config.custom_context, conn); + } + + // Execute queries and validate checksums + bool checksum_valid = false; + + if (config.use_multiple_queries) { + // Handle multiple queries with combined checksum computation + vector<MYSQL_RES*> results; + results.reserve(config.queries.size()); + + bool all_queries_succeeded = true; + for (const auto& query : config.queries) { + MYSQL_RES* result = NULL; + if (!fetch_query_with_metrics(conn, query, &result)) { + all_queries_succeeded = false; + break; + } + results.push_back(result); + } + + if (all_queries_succeeded) { + // Compute combined checksum for all results + string combined_checksum = compute_combined_checksum(results); + + // Validate using the configured validator + if (config.checksum_validator) { + checksum_valid = config.checksum_validator(combined_checksum, epoch); + } else { + checksum_valid = (combined_checksum == expected_checksum); + } + + // Process results using custom processor if provided + if (config.custom_processor) { + config.custom_processor(config.custom_context, results.data()); + } + + // Cleanup results + for (auto result : results) { + if (result) mysql_free_result(result); + } + } + } else { + // Single query operation + MYSQL_RES* result = NULL; + fetch_query single_query = config.queries[0]; + + if (fetch_query_with_metrics(conn, single_query, &result)) { + string computed_checksum = compute_single_checksum(result); + + if (config.checksum_validator) { + checksum_valid = config.checksum_validator(computed_checksum, epoch); + } else { + checksum_valid = (computed_checksum == expected_checksum); + } + + if (checksum_valid) { + // Process result using custom processor if provided + if (config.custom_processor) { + MYSQL_RES* 
single_array[] = {result}; + config.custom_processor(config.custom_context, single_array); + } + } + + mysql_free_result(result); + } else { + fetch_failed = true; + } + } + + // Load data to runtime if checksum is valid + if (checksum_valid && config.data_loader) { + config.data_loader(expected_checksum, true); + } + + // Load to runtime if loader provided + if (checksum_valid && config.runtime_loader) { + config.runtime_loader(); + } + + // Save to disk if enabled + if (checksum_valid && config.save_to_disk_checker && config.save_to_disk_checker()) { + // Save operation would be implemented by the specific module + } + + if (!checksum_valid) { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "Checksum validation failed for %s from peer %s:%d\n", + display_name, hostname, port); + fetch_failed = true; + } + } + + mysql_close(conn); + } else { + proxy_error("Cluster: unable to fetch %s - empty credentials\n", config.module_name); + fetch_failed = true; + } + } else { + proxy_debug(PROXY_DEBUG_CLUSTER, 5, "No peer available for %s sync\n", config.module_name); + fetch_failed = true; + } + +exit_unified_pull: + // Cleanup resources + if (hostname) free(hostname); + if (ip_address) free(ip_address); + + // Release mutex + pthread_mutex_unlock(config.operation_mutex); + + // Handle fetch failure with delay to prevent busy loops + if (fetch_failed) { + sleep(1); + } + + // Update metrics based on success/failure + if (fetch_failed) { + if (config.failure_metric != p_cluster_counter::__size) { + GloProxyCluster->metrics.p_counter_array[config.failure_metric]->Increment(); + } + } else { + if (config.success_metric != p_cluster_counter::__size) { + GloProxyCluster->metrics.p_counter_array[config.success_metric]->Increment(); + } + } +} + +/** + * Helper function to execute a single query with metrics tracking + * + * @param conn MySQL connection + * @param query Query configuration + * @param result Output parameter for query result + * @return true if successful, false otherwise + 
*/ +bool ProxySQL_Cluster::fetch_query_with_metrics(MYSQL* conn, const fetch_query& query, MYSQL_RES** result) { + if (mysql_query(conn, query.query) != 0) { + proxy_error("Cluster: query failed for %s: %s\n", query.msgs[1].c_str(), mysql_error(conn)); + if (query.failure_counter != p_cluster_counter::__size) { + GloProxyCluster->metrics.p_counter_array[query.failure_counter]->Increment(); + } + return false; + } + + *result = mysql_store_result(conn); + if (*result == NULL) { + proxy_error("Cluster: unable to store result for %s: %s\n", query.msgs[1].c_str(), mysql_error(conn)); + if (query.failure_counter != p_cluster_counter::__size) { + GloProxyCluster->metrics.p_counter_array[query.failure_counter]->Increment(); + } + return false; + } + + if (query.success_counter != p_cluster_counter::__size) { + GloProxyCluster->metrics.p_counter_array[query.success_counter]->Increment(); + } + return true; +} + +/** + * Helper function to compute checksum for a single result set + * + * @param result MySQL result set + * @return Computed checksum string + */ +string ProxySQL_Cluster::compute_single_checksum(MYSQL_RES* result) { + uint64_t checksum_value = mysql_raw_checksum(result); + return std::to_string(checksum_value); +} + +/** + * Helper function to compute combined checksum for multiple result sets + * + * @param results Vector of MySQL result sets + * @return Combined checksum string + */ +string ProxySQL_Cluster::compute_combined_checksum(const vector<MYSQL_RES*>& results) { + string combined_checksum; + combined_checksum.reserve(256); + + for (const auto& result : results) { + uint64_t partial_checksum = mysql_raw_checksum(result); + combined_checksum += std::to_string(partial_checksum) + "|"; + } + + // Remove trailing pipe + if (!combined_checksum.empty() && combined_checksum.back() == '|') { + combined_checksum.pop_back(); + } + + return combined_checksum; +} + +/* + * Memory Management Framework for ProxySQL Cluster Synchronization + * + * This framework provides safe, 
standardized memory management patterns + * to prevent memory leaks and ensure consistent error handling throughout + * the cluster synchronization codebase. + */ + +/** + * Safe string allocation with error checking + * + * @param source Source string to duplicate (can be NULL) + * @return Allocated string or NULL if allocation failed or source is NULL + */ +char* safe_strdup(const char* source) { + if (source == nullptr) { + return nullptr; + } + + char* result = strdup(source); + if (result == nullptr) { + proxy_error("Memory allocation failed in safe_strdup for string: %s\n", source); + } + return result; +} + +/** + * Safe memory allocation with error checking + * + * @param size Size to allocate + * @return Pointer to allocated memory or NULL if allocation failed + */ +void* safe_malloc(size_t size) { + void* result = malloc(size); + if (result == nullptr) { + proxy_error("Memory allocation failed in safe_malloc for size: %zu\n", size); + } + return result; +} + +/** + * Safe array of strings allocation with error checking + * + * @param count Number of strings to allocate + * @return Array of char pointers or NULL if allocation failed + */ +char** safe_string_array_alloc(size_t count) { + if (count == 0) { + return nullptr; + } + + char** result = (char**)safe_malloc(sizeof(char*) * count); + if (result == nullptr) { + return nullptr; + } + + // Initialize all pointers to NULL for safe cleanup + for (size_t i = 0; i < count; i++) { + result[i] = nullptr; + } + + return result; +} + +/** + * Safe string array update with automatic cleanup + * + * @param target_array Array to update (pointer to char**) + * @param count Number of elements in array + * @param new_values Array of new string values (can contain NULL) + * @return true if successful, false if allocation failed + */ +bool safe_update_string_array(char*** target_array, size_t count, const char** new_values) { + if (target_array == nullptr || count == 0) { + return false; + } + + // Allocate new array 
+ char** new_array = safe_string_array_alloc(count); + if (new_array == nullptr) { + return false; + } + + // Copy strings with error checking + for (size_t i = 0; i < count; i++) { + if (new_values[i] != nullptr) { + new_array[i] = safe_strdup(new_values[i]); + if (new_array[i] == nullptr) { + // If allocation fails, clean up what we've allocated so far + for (size_t j = 0; j < i; j++) { + if (new_array[j] != nullptr) { + free(new_array[j]); + new_array[j] = nullptr; + } + } + free(new_array); + return false; + } + } + } + + // Clean up old array and replace with new one + char** old_array = *target_array; + *target_array = new_array; + + if (old_array != nullptr) { + for (size_t i = 0; i < count; i++) { + if (old_array[i] != nullptr) { + free(old_array[i]); + old_array[i] = nullptr; + } + } + free(old_array); + } + + return true; +} + +/** + * RAII wrapper for char* strings to ensure automatic cleanup + */ +struct ScopedCharPointer { + char* ptr; + + explicit ScopedCharPointer(char* p = nullptr) : ptr(p) {} + + ~ScopedCharPointer() { + if (ptr != nullptr) { + free(ptr); + ptr = nullptr; + } + } + + // Disable copy constructor and assignment to prevent double-free + ScopedCharPointer(const ScopedCharPointer&) = delete; + ScopedCharPointer& operator=(const ScopedCharPointer&) = delete; + + // Enable move constructor and assignment + ScopedCharPointer(ScopedCharPointer&& other) noexcept : ptr(other.ptr) { + other.ptr = nullptr; + } + + ScopedCharPointer& operator=(ScopedCharPointer&& other) noexcept { + if (this != &other) { + if (ptr != nullptr) { + free(ptr); + } + ptr = other.ptr; + other.ptr = nullptr; + } + return *this; + } + + // Release ownership of the pointer + char* release() { + char* result = ptr; + ptr = nullptr; + return result; + } + + // Get the raw pointer + char* get() const { return ptr; } + + // Boolean conversion + explicit operator bool() const { return ptr != nullptr; } +}; + +/** + * RAII wrapper for char** arrays to ensure automatic cleanup 
+ */ +struct ScopedCharArrayPointer { + char** ptr; + size_t count; + + explicit ScopedCharArrayPointer(char** p = nullptr, size_t c = 0) : ptr(p), count(c) {} + + ~ScopedCharArrayPointer() { + if (ptr != nullptr) { + for (size_t i = 0; i < count; i++) { + if (ptr[i] != nullptr) { + free(ptr[i]); + ptr[i] = nullptr; + } + } + free(ptr); + ptr = nullptr; + } + } + + // Disable copy constructor and assignment to prevent double-free + ScopedCharArrayPointer(const ScopedCharArrayPointer&) = delete; + ScopedCharArrayPointer& operator=(const ScopedCharArrayPointer&) = delete; + + // Enable move constructor and assignment + ScopedCharArrayPointer(ScopedCharArrayPointer&& other) noexcept : ptr(other.ptr), count(other.count) { + other.ptr = nullptr; + other.count = 0; + } + + ScopedCharArrayPointer& operator=(ScopedCharArrayPointer&& other) noexcept { + if (this != &other) { + if (ptr != nullptr) { + for (size_t i = 0; i < count; i++) { + if (ptr[i] != nullptr) { + free(ptr[i]); + } + } + free(ptr); + } + ptr = other.ptr; + count = other.count; + other.ptr = nullptr; + other.count = 0; + } + return *this; + } + + // Release ownership of the pointer + char** release() { + char** result = ptr; + ptr = nullptr; + count = 0; + return result; + } + + // Get the raw pointer + char** get() const { return ptr; } + + // Get the count + size_t size() const { return count; } + + // Boolean conversion + explicit operator bool() const { return ptr != nullptr; } +}; + +/** + * Enhanced safe string update with better error handling + * + * @param target Pointer to target string pointer + * @param new_value New string value (can be NULL) + * @return true if successful, false if allocation failed + */ +bool safe_update_string(char** target, const char* new_value) { + if (target == nullptr) { + return false; + } + + ScopedCharPointer new_string(safe_strdup(new_value)); + if (new_value != nullptr && !new_string) { + return false; + } + + // Clean up old string and replace with new one + char* 
old_string = *target; + *target = new_string.release(); + + if (old_string != nullptr) { + free(old_string); + } + + return true; +} + +/** + * Safe query string construction with error checking + * + * @param format Format string (printf-style) + * @param ... Variable arguments for format + * @return Allocated query string or NULL if allocation failed + */ +char* safe_query_construct(const char* format, ...) { + if (format == nullptr) { + return nullptr; + } + + // First, calculate the required buffer size + va_list args1, args2; + va_start(args1, format); + va_copy(args2, args1); + + int size = vsnprintf(nullptr, 0, format, args1); + va_end(args1); + + if (size < 0) { + va_end(args2); + proxy_error("Query format string error in safe_query_construct\n"); + return nullptr; + } + + // Allocate buffer with extra space for null terminator + char* buffer = (char*)safe_malloc(size + 1); + if (buffer == nullptr) { + va_end(args2); + return nullptr; + } + + // Format the string into the buffer + int result = vsnprintf(buffer, size + 1, format, args2); + va_end(args2); + + if (result < 0 || result > size) { + free(buffer); + proxy_error("Query formatting error in safe_query_construct\n"); + return nullptr; + } + + return buffer; +} + +/** + * Clean up string pointers with NULL assignment (for backward compatibility) + * + * @param str1 First string pointer + * @param str2 Second string pointer + * @param str3 Third string pointer + */ +void safe_cleanup_strings(char** str1, char** str2, char** str3) { + if (str1 && *str1) { free(*str1); *str1 = nullptr; } + if (str2 && *str2) { free(*str2); *str2 = nullptr; } + if (str3 && *str3) { free(*str3); *str3 = nullptr; } +} + void ProxySQL_Node_Address::resolve_hostname() { if (ip_addr) { free(ip_addr); diff --git a/test/tap/tap/Makefile b/test/tap/tap/Makefile index 75bd9ec5cc..d1af8add3d 100644 --- a/test/tap/tap/Makefile +++ b/test/tap/tap/Makefile @@ -12,11 +12,13 @@ IDIRS := -I$(PROXYSQL_IDIR) \ -I${CURL_IDIR} \ 
-I${SQLITE3_IDIR} \ -I$(DOTENV_IDIR) \ + -I$(POSTGRESQL_IDIR) \ -I$(RE2_IDIR) LIBPROXYSQLAR := $(PROXYSQL_LDIR)/libproxysql.a +AR ?= ar OPT := $(STDCPP) -O2 -ggdb -Wl,--no-as-needed $(WASAN) @@ -26,21 +28,21 @@ OPT := $(STDCPP) -O2 -ggdb -Wl,--no-as-needed $(WASAN) # being used inside ProxySQL linked 'SQLite3', which is also used by `libtap.so`. LWGCOV := ifeq ($(WITHGCOV),1) - LWGCOV := -lgcov + LWGCOV := -lgcov --coverage endif ### main targets -.PHONY: default +.PHONY: default debug default: all -.PHONY: all +.PHONY: all debug all: libtap_mariadb.a libtap_mysql57.a libtap_mysql8.a \ - libtap.so libcpp_dotenv.so libre2.so + libtap.a libtap.so libcpp_dotenv.so libre2.so debug: OPT := $(STDCPP) -O0 -DDEBUG -ggdb -Wl,--no-as-needed $(WASAN) -debug: libtap_mariadb.a libtap_mysql57.a libtap_mysql8.a libtap.so +debug: libtap_mariadb.a libtap_mysql57.a libtap_mysql8.a libtap.a libtap.so ### helper targets @@ -62,17 +64,20 @@ tap.o: tap.cpp cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a libcurl.so -lssl -lc mcp_client.o: mcp_client.cpp mcp_client.h libcurl.so $(CXX) -fPIC -c mcp_client.cpp $(IDIRS) $(OPT) -libtap_mariadb.a: tap.o command_line.o utils_mariadb.o mcp_client.o cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a - ar rcs libtap_mariadb.a tap.o command_line.o utils_mariadb.o mcp_client.o $(SQLITE3_LDIR)/sqlite3.o $(PROXYSQL_LDIR)/obj/sha256crypt.oo +libtap_mariadb.a: tap.o command_line.o utils_mariadb.o noise_utils_mariadb.o mcp_client.o cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a + $(AR) rcs libtap_mariadb.a tap.o command_line.o utils_mariadb.o noise_utils_mariadb.o mcp_client.o $(SQLITE3_LDIR)/sqlite3.o $(PROXYSQL_LDIR)/obj/sha256crypt.oo -libtap_mysql57.a: tap.o command_line.o utils_mysql57.o mcp_client.o cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a - ar rcs libtap_mysql57.a tap.o command_line.o utils_mysql57.o mcp_client.o $(SQLITE3_LDIR)/sqlite3.o $(PROXYSQL_LDIR)/obj/sha256crypt.oo +libtap_mysql57.a: tap.o command_line.o utils_mysql57.o noise_utils_mysql57.o 
mcp_client.o cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a + $(AR) rcs libtap_mysql57.a tap.o command_line.o utils_mysql57.o noise_utils_mysql57.o mcp_client.o $(SQLITE3_LDIR)/sqlite3.o $(PROXYSQL_LDIR)/obj/sha256crypt.oo -libtap_mysql8.a: tap.o command_line.o utils_mysql8.o mcp_client.o cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a - ar rcs libtap_mysql8.a tap.o command_line.o utils_mysql8.o mcp_client.o $(SQLITE3_LDIR)/sqlite3.o $(PROXYSQL_LDIR)/obj/sha256crypt.oo +libtap_mysql8.a: tap.o command_line.o utils_mysql8.o noise_utils_mysql8.o mcp_client.o cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a + $(AR) rcs libtap_mysql8.a tap.o command_line.o utils_mysql8.o noise_utils_mysql8.o mcp_client.o $(SQLITE3_LDIR)/sqlite3.o $(PROXYSQL_LDIR)/obj/sha256crypt.oo + +libtap.a: libtap_mariadb.a + cp libtap_mariadb.a libtap.a libtap.so: libtap_mariadb.a cpp-dotenv/dynamic/cpp-dotenv/libcpp_dotenv.so libre2.so - $(CXX) -shared -o libtap.so -Wl,--whole-archive libtap_mariadb.a -Wl,--no-whole-archive $(LWGCOV) + $(CXX) -shared -o libtap.so -Wl,--whole-archive libtap_mariadb.a -Wl,--no-whole-archive -L$(POSTGRESQL_PATH)/interfaces/libpq -lpq -L$(RE2_LDIR)/so -lre2 -Wl,-rpath,$(POSTGRESQL_PATH)/interfaces/libpq -Wl,-rpath,$(RE2_LDIR)/so $(LWGCOV) ### tap deps targets @@ -92,7 +97,7 @@ cpp-dotenv/static/cpp-dotenv/libcpp_dotenv.a: cd cpp-dotenv/static/cpp-dotenv && patch src/dotenv.cpp < ../../dotenv.cpp.patch cd cpp-dotenv/static/cpp-dotenv && patch include/dotenv.h < ../../dotenv.h.patch cd cpp-dotenv/static/cpp-dotenv && patch -p0 < ../../nm_clang_fix.patch - cd cpp-dotenv/static/cpp-dotenv && cmake . -DBUILD_TESTING=OFF -DBUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Debug + cd cpp-dotenv/static/cpp-dotenv && cmake . 
-DBUILD_TESTING=OFF -DBUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Debug -DCMAKE_POLICY_VERSION_MINIMUM=3.5 cd cpp-dotenv/static/cpp-dotenv && CC=${CC} CXX=${CXX} ${MAKE} cpp-dotenv/dynamic/cpp-dotenv/libcpp_dotenv.so: @@ -101,7 +106,7 @@ cpp-dotenv/dynamic/cpp-dotenv/libcpp_dotenv.so: cd cpp-dotenv/dynamic/cpp-dotenv && patch src/dotenv.cpp < ../../dotenv.cpp.patch cd cpp-dotenv/dynamic/cpp-dotenv && patch include/dotenv.h < ../../dotenv.h.patch cd cpp-dotenv/dynamic/cpp-dotenv && patch -p0 < ../../nm_clang_fix.patch - cd cpp-dotenv/dynamic/cpp-dotenv && cmake . -DBUILD_TESTING=OFF -DBUILD_SHARED_LIBS=ON -DCMAKE_BUILD_RPATH="../tap:../../tap" -DCMAKE_BUILD_TYPE=Debug + cd cpp-dotenv/dynamic/cpp-dotenv && cmake . -DBUILD_TESTING=OFF -DBUILD_SHARED_LIBS=ON -DCMAKE_BUILD_RPATH="../tap:../../tap" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_POLICY_VERSION_MINIMUM=3.5 cd cpp-dotenv/dynamic/cpp-dotenv && CC=${CC} CXX=${CXX} ${MAKE} @@ -110,8 +115,9 @@ cpp-dotenv/dynamic/cpp-dotenv/libcpp_dotenv.so: .SILENT: clean_utils .PHONY: clean_utils clean_utils: - find . -name 'utils_*.*' -delete || true - find . -name 'libtap_*.*' -delete || true + find . -name 'utils_*.o' -delete || true + find . -name 'noise_utils_*.o' -delete || true + find . -name 'libtap_*.a' -delete || true find . -name 'libtap.so' -delete || true .SILENT: clean @@ -126,3 +132,20 @@ cleanall: clean # Remove cpp-dotenv source directories (213MB) cd cpp-dotenv/static && rm -rf cpp-dotenv-*/ || true cd cpp-dotenv/dynamic && rm -rf cpp-dotenv-*/ || true + +# Keep the v3.0 archive recipes intact so future merges stay clean, but +# preserve this branch's stale-archive workaround in separate helper rules. 
+LIBTAP_ARCHIVES := libtap_mariadb.a libtap_mysql57.a libtap_mysql8.a + +$(LIBTAP_ARCHIVES): preclean-libtap-archives | Makefile + +.PHONY: preclean-libtap-archives +preclean-libtap-archives: + rm -f $(LIBTAP_ARCHIVES) + +ifeq ($(wildcard noise_utils.cpp noise_utils.h),) +NOISE_UTILS_STUBS := noise_utils_mariadb.o noise_utils_mysql57.o noise_utils_mysql8.o + +$(NOISE_UTILS_STUBS): + $(CXX) -x c++ -fPIC -c /dev/null $(IDIRS) $(OPT) -o $@ +endif diff --git a/test/tap/tests/test_cluster_sync_pgsql-t.cpp b/test/tap/tests/test_cluster_sync_pgsql-t.cpp new file mode 100644 index 0000000000..0646b3b74b --- /dev/null +++ b/test/tap/tests/test_cluster_sync_pgsql-t.cpp @@ -0,0 +1,822 @@ +/** + * @file test_cluster_sync_pgsql-t.cpp + * @brief Checks that ProxySQL PostgreSQL tables are properly syncing between cluster instances. + * @details This test checks PostgreSQL cluster sync for: + * - 'pgsql_servers' changes propagating through the 'pgsql_servers_v2' cluster sync path + * - 'pgsql_users' sync between cluster nodes + * - 'pgsql_query_rules' sync between cluster nodes + * - PostgreSQL modules checksums appear in runtime_checksums_values + * - Basic PostgreSQL admin tables and cluster variables are accessible + * + * Optional replica validation: + * ---------------------------- + * When 'TAP_PGSQL_SYNC_REPLICA_PORT' is set, the test temporarily backs up and restores + * modified PostgreSQL admin tables on the primary, then verifies that runtime state and + * replica main-table state are updated on the target replica. If the corresponding + * '*_save_to_disk' variable is enabled, the test also verifies persistence into the replica + * disk tables. 
+ */
+
+#include <unistd.h>
+#include <strings.h>
+
+#include <cstdint>
+#include <cstdio>
+#include <cstdlib>
+#include <cstring>
+#include <string>
+#include <tuple>
+#include <vector>
+
+#include "libconfig.h"
+
+#include "proxysql_utils.h"
+
+#include "mysql.h"
+#ifndef SPOOKYV2
+#include "SpookyV2.h"
+#define SPOOKYV2
+#endif
+#include "tap.h"
+#include "command_line.h"
+#include "utils.h"
+
+using std::vector;
+using std::string;
+
+const uint32_t SYNC_TIMEOUT = 10;
+using pgsql_server_tuple = std::tuple<int, string, int, string, int, int, int, int, int, int, string>;
+
+bool parse_bool_value(const string& value) {
+	return value == "1" || strcasecmp(value.c_str(), "true") == 0;
+}
+
+int get_admin_bool_value(MYSQL* admin, const string& variable_name, bool& value) {
+	string variable_value {};
+	const int rc = get_variable_value(admin, variable_name, variable_value);
+	if (rc != EXIT_SUCCESS) {
+		return rc;
+	}
+
+	value = parse_bool_value(variable_value);
+	return EXIT_SUCCESS;
+}
+
+int backup_admin_table(MYSQL* admin, const string& table_name, const string& backup_table_name) {
+	string drop_query {};
+	string create_query {};
+
+	string_format("DROP TABLE IF EXISTS %s", drop_query, backup_table_name.c_str());
+	if (mysql_query_t(admin, drop_query)) {
+		return EXIT_FAILURE;
+	}
+
+	string_format(
+		"CREATE TABLE %s AS SELECT * FROM %s",
+		create_query,
+		backup_table_name.c_str(),
+		table_name.c_str()
+	);
+	if (mysql_query_t(admin, create_query)) {
+		return EXIT_FAILURE;
+	}
+
+	return EXIT_SUCCESS;
+}
+
+int restore_admin_table(
+	MYSQL* admin, const string& table_name, const string& backup_table_name, const string& load_query = ""
+) {
+	string delete_query {};
+	string restore_query {};
+	string drop_query {};
+	int rc = EXIT_SUCCESS;
+
+	string_format("DELETE FROM %s", delete_query, table_name.c_str());
+	if (mysql_query_t(admin, delete_query)) {
+		rc = EXIT_FAILURE;
+		goto cleanup;
+	}
+
+	string_format(
+		"INSERT INTO %s SELECT * FROM %s",
+		restore_query,
+		table_name.c_str(),
+		backup_table_name.c_str()
+	);
+	if (mysql_query_t(admin, 
restore_query)) { + rc = EXIT_FAILURE; + goto cleanup; + } + + if (!load_query.empty() && mysql_query_t(admin, load_query)) { + rc = EXIT_FAILURE; + } + +cleanup: + string_format("DROP TABLE IF EXISTS %s", drop_query, backup_table_name.c_str()); + if (mysql_query_t(admin, drop_query)) { + rc = EXIT_FAILURE; + } + + return rc; +} + +int fetch_single_count(MYSQL* admin, const string& query, int& count, bool fresh_connection = false) { + MYSQL* query_admin = admin; + MYSQL* fresh_admin = nullptr; + + if (fresh_connection) { + fresh_admin = mysql_init(NULL); + if (!fresh_admin) { + diag("Failed to initialize fresh admin connection for query: %s", query.c_str()); + return EXIT_FAILURE; + } + + if (!mysql_real_connect( + fresh_admin, + admin->host, + admin->user, + admin->passwd, + admin->db, + admin->port, + admin->unix_socket, + admin->client_flag + )) { + diag("Failed to connect fresh admin session for query '%s': %s", query.c_str(), mysql_error(fresh_admin)); + mysql_close(fresh_admin); + return EXIT_FAILURE; + } + + query_admin = fresh_admin; + } + + if (mysql_query_t(query_admin, query)) { + if (fresh_admin) { + mysql_close(fresh_admin); + } + return EXIT_FAILURE; + } + + MYSQL_RES* result = mysql_store_result(query_admin); + if (!result) { + diag("Failed to store result from query: %s", query.c_str()); + if (fresh_admin) { + mysql_close(fresh_admin); + } + return EXIT_FAILURE; + } + + MYSQL_ROW row = mysql_fetch_row(result); + if (!row || !row[0]) { + diag("Failed to fetch count row from query: %s", query.c_str()); + mysql_free_result(result); + if (fresh_admin) { + mysql_close(fresh_admin); + } + return EXIT_FAILURE; + } + + count = atoi(row[0]); + mysql_free_result(result); + if (fresh_admin) { + mysql_close(fresh_admin); + } + + return EXIT_SUCCESS; +} + +int wait_for_expected_count( + MYSQL* admin, const string& query, int expected_count, const string& label, bool fresh_connection = false +) { + for (uint32_t waited = 0; waited < SYNC_TIMEOUT; ++waited) { + 
int count = 0;
+		if (fetch_single_count(admin, query, count, fresh_connection) != EXIT_SUCCESS) {
+			return EXIT_FAILURE;
+		}
+		if (count == expected_count) {
+			return EXIT_SUCCESS;
+		}
+		sleep(1);
+	}
+
+	diag("Timed out waiting for %s using query: %s", label.c_str(), query.c_str());
+	return EXIT_FAILURE;
+}
+
+int check_pgsql_servers_v2_sync(
+	MYSQL* proxy_admin, MYSQL* replica_admin, bool save_to_disk,
+	const vector<pgsql_server_tuple>& insert_pgsql_servers_values
+) {
+	const string backup_table_name { "pgsql_servers_sync_test_backup_5297" };
+	const char* t_insert_pgsql_servers =
+		"INSERT INTO pgsql_servers ("
+		" hostgroup_id, hostname, port, status, weight, compression, max_connections,"
+		" max_replication_lag, use_ssl, max_latency_ms, comment"
+		") VALUES (%d, '%s', %d, '%s', %d, %d, %d, %d, %d, %d, '%s')";
+	vector<string> insert_pgsql_servers_queries {};
+	int rc = EXIT_FAILURE;
+
+	for (const auto& values : insert_pgsql_servers_values) {
+		string insert_pgsql_servers_query {};
+		string_format(
+			t_insert_pgsql_servers,
+			insert_pgsql_servers_query,
+			std::get<0>(values),
+			std::get<1>(values).c_str(),
+			std::get<2>(values),
+			std::get<3>(values).c_str(),
+			std::get<4>(values),
+			std::get<5>(values),
+			std::get<6>(values),
+			std::get<7>(values),
+			std::get<8>(values),
+			std::get<9>(values),
+			std::get<10>(values).c_str()
+		);
+		insert_pgsql_servers_queries.push_back(insert_pgsql_servers_query);
+	}
+
+	if (backup_admin_table(proxy_admin, "pgsql_servers", backup_table_name) != EXIT_SUCCESS) {
+		return EXIT_FAILURE;
+	}
+	if (mysql_query_t(proxy_admin, "DELETE FROM pgsql_servers")) {
+		goto cleanup;
+	}
+
+	for (const auto& query : insert_pgsql_servers_queries) {
+		if (mysql_query_t(proxy_admin, query)) {
+			goto cleanup;
+		}
+	}
+	if (mysql_query_t(proxy_admin, "LOAD PGSQL SERVERS TO RUNTIME")) {
+		goto cleanup;
+	}
+
+	for (const auto& values : insert_pgsql_servers_values) {
+		const char* t_runtime_pgsql_servers_query =
+			"SELECT COUNT(*) FROM runtime_pgsql_servers WHERE 
hostgroup_id=%d AND hostname='%s'" + " AND port=%d AND status='%s' AND weight=%d AND" + " compression=%d AND max_connections=%d AND max_replication_lag=%d" + " AND use_ssl=%d AND max_latency_ms=%d AND comment='%s'"; + const char* t_main_pgsql_servers_query = + "SELECT COUNT(*) FROM pgsql_servers WHERE hostgroup_id=%d AND hostname='%s'" + " AND port=%d AND status='%s' AND weight=%d AND" + " compression=%d AND max_connections=%d AND max_replication_lag=%d" + " AND use_ssl=%d AND max_latency_ms=%d AND comment='%s'"; + string runtime_pgsql_servers_query {}; + string main_pgsql_servers_query {}; + string_format( + t_runtime_pgsql_servers_query, + runtime_pgsql_servers_query, + std::get<0>(values), + std::get<1>(values).c_str(), + std::get<2>(values), + std::get<3>(values).c_str(), + std::get<4>(values), + std::get<5>(values), + std::get<6>(values), + std::get<7>(values), + std::get<8>(values), + std::get<9>(values), + std::get<10>(values).c_str() + ); + if (wait_for_expected_count(replica_admin, runtime_pgsql_servers_query, 1, "runtime_pgsql_servers sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + string_format( + t_main_pgsql_servers_query, + main_pgsql_servers_query, + std::get<0>(values), + std::get<1>(values).c_str(), + std::get<2>(values), + std::get<3>(values).c_str(), + std::get<4>(values), + std::get<5>(values), + std::get<6>(values), + std::get<7>(values), + std::get<8>(values), + std::get<9>(values), + std::get<10>(values).c_str() + ); + if (wait_for_expected_count(replica_admin, main_pgsql_servers_query, 1, "pgsql_servers main sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + if (save_to_disk) { + string disk_pgsql_servers_query = main_pgsql_servers_query; + const string from_table { "FROM pgsql_servers" }; + const string to_table { "FROM disk.pgsql_servers" }; + const size_t from_pos = disk_pgsql_servers_query.find(from_table); + if (from_pos == string::npos) { + diag("Failed to rewrite pgsql_servers query for disk validation"); + goto cleanup; 
+ } + disk_pgsql_servers_query.replace(from_pos, from_table.length(), to_table); + if (wait_for_expected_count(replica_admin, disk_pgsql_servers_query, 1, "pgsql_servers disk sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + } + } + + rc = EXIT_SUCCESS; + +cleanup: + if (restore_admin_table(proxy_admin, "pgsql_servers", backup_table_name, "LOAD PGSQL SERVERS TO RUNTIME") != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + return rc; +} + +int check_pgsql_users_sync(MYSQL* proxy_admin, MYSQL* replica_admin, bool save_to_disk) { + const string backup_table_name { "pgsql_users_sync_test_backup_5297" }; + const string username { "cluster_sync_pgsql_user_5297" }; + const string password { "cluster_sync_pgsql_pass_5297" }; + const string attributes { "" }; + const string comment { "cluster_sync_pgsql_user_5297" }; + const int default_hostgroup = 801; + const int max_connections = 33; + int rc = EXIT_FAILURE; + string insert_user_query {}; + string runtime_user_query {}; + string main_user_query {}; + + if (backup_admin_table(proxy_admin, "pgsql_users", backup_table_name) != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + if (mysql_query_t(proxy_admin, "DELETE FROM pgsql_users")) { + goto cleanup; + } + + string_format( + "INSERT INTO pgsql_users (username, password, active, use_ssl, default_hostgroup, transaction_persistent, fast_forward, backend, frontend, max_connections, attributes, comment) " + "VALUES ('%s', '%s', 1, 0, %d, 1, 0, 0, 1, %d, '%s', '%s')", + insert_user_query, + username.c_str(), + password.c_str(), + default_hostgroup, + max_connections, + attributes.c_str(), + comment.c_str() + ); + if (mysql_query_t(proxy_admin, insert_user_query)) { + goto cleanup; + } + if (mysql_query_t(proxy_admin, "LOAD PGSQL USERS TO RUNTIME")) { + goto cleanup; + } + + string_format( + "SELECT COUNT(*) FROM runtime_pgsql_users WHERE username='%s' AND password='%s' AND active=1 AND use_ssl=0 AND default_hostgroup=%d " + "AND transaction_persistent=1 AND fast_forward=0 AND 
backend=0 AND frontend=1 AND max_connections=%d " + "AND attributes='%s' AND comment='%s'", + runtime_user_query, + username.c_str(), + password.c_str(), + default_hostgroup, + max_connections, + attributes.c_str(), + comment.c_str() + ); + if (wait_for_expected_count(replica_admin, runtime_user_query, 1, "runtime_pgsql_users sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + string_format( + "SELECT COUNT(*) FROM pgsql_users WHERE username='%s' AND password='%s' AND active=1 AND use_ssl=0 AND default_hostgroup=%d " + "AND transaction_persistent=1 AND fast_forward=0 AND backend=0 AND frontend=1 AND max_connections=%d " + "AND attributes='%s' AND comment='%s'", + main_user_query, + username.c_str(), + password.c_str(), + default_hostgroup, + max_connections, + attributes.c_str(), + comment.c_str() + ); + if (wait_for_expected_count(replica_admin, main_user_query, 1, "pgsql_users main sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + if (save_to_disk) { + string disk_user_query = main_user_query; + const string from_table { "FROM pgsql_users" }; + const string to_table { "FROM disk.pgsql_users" }; + const size_t from_pos = disk_user_query.find(from_table); + if (from_pos == string::npos) { + diag("Failed to rewrite pgsql_users query for disk validation"); + goto cleanup; + } + disk_user_query.replace(from_pos, from_table.length(), to_table); + if (wait_for_expected_count(replica_admin, disk_user_query, 1, "pgsql_users disk sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + } + + rc = EXIT_SUCCESS; + +cleanup: + if (restore_admin_table(proxy_admin, "pgsql_users", backup_table_name, "LOAD PGSQL USERS TO RUNTIME") != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + return rc; +} + +int check_pgsql_query_rules_sync(MYSQL* proxy_admin, MYSQL* replica_admin, bool save_to_disk) { + const string rules_backup_table_name { "pgsql_query_rules_sync_test_backup_5297" }; + const string fast_routing_backup_table_name { 
"pgsql_query_rules_fast_routing_sync_test_backup_5297" }; + const int rule_id = 98001; + const int destination_hostgroup = 801; + const int fast_routing_flag_in = 902; + const string match_pattern { "^SELECT 42$" }; + const string database_name { "cluster_sync_pgsql_db_5297" }; + const string fast_routing_comment { "cluster_sync_pgsql_fast_routing_5297" }; + const string comment { "cluster_sync_pgsql_rule_5297" }; + int rc = EXIT_FAILURE; + string insert_rule_query {}; + string insert_fast_routing_query {}; + string runtime_query_rules_query {}; + string runtime_fast_routing_query {}; + string main_query_rules_query {}; + string main_fast_routing_query {}; + + if (backup_admin_table(proxy_admin, "pgsql_query_rules", rules_backup_table_name) != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + if (backup_admin_table(proxy_admin, "pgsql_query_rules_fast_routing", fast_routing_backup_table_name) != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + if (mysql_query_t(proxy_admin, "DELETE FROM pgsql_query_rules")) { + goto cleanup; + } + if (mysql_query_t(proxy_admin, "DELETE FROM pgsql_query_rules_fast_routing")) { + goto cleanup; + } + + string_format( + "INSERT INTO pgsql_query_rules (rule_id, active, database, match_pattern, destination_hostgroup, apply, comment) " + "VALUES (%d, 1, '%s', '%s', %d, 1, '%s')", + insert_rule_query, + rule_id, + database_name.c_str(), + match_pattern.c_str(), + destination_hostgroup, + comment.c_str() + ); + if (mysql_query_t(proxy_admin, insert_rule_query)) { + goto cleanup; + } + + string_format( + "INSERT INTO pgsql_query_rules_fast_routing (username, database, flagIN, destination_hostgroup, comment) " + "VALUES ('%s', '%s', %d, %d, '%s')", + insert_fast_routing_query, + "", + database_name.c_str(), + fast_routing_flag_in, + destination_hostgroup, + fast_routing_comment.c_str() + ); + if (mysql_query_t(proxy_admin, insert_fast_routing_query)) { + goto cleanup; + } + if (mysql_query_t(proxy_admin, "LOAD PGSQL QUERY RULES TO RUNTIME")) { + goto 
cleanup; + } + + string_format( + "SELECT COUNT(*) FROM runtime_pgsql_query_rules WHERE rule_id=%d AND match_pattern='%s' " + "AND destination_hostgroup=%d AND apply=1 AND comment='%s' AND database='%s'", + runtime_query_rules_query, + rule_id, + match_pattern.c_str(), + destination_hostgroup, + comment.c_str(), + database_name.c_str() + ); + if (wait_for_expected_count(replica_admin, runtime_query_rules_query, 1, "runtime_pgsql_query_rules sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + string_format( + "SELECT COUNT(*) FROM runtime_pgsql_query_rules_fast_routing WHERE username='%s' AND database='%s' " + "AND flagIN=%d AND destination_hostgroup=%d AND comment='%s'", + runtime_fast_routing_query, + "", + database_name.c_str(), + fast_routing_flag_in, + destination_hostgroup, + fast_routing_comment.c_str() + ); + if (wait_for_expected_count(replica_admin, runtime_fast_routing_query, 1, "runtime_pgsql_query_rules_fast_routing sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + string_format( + "SELECT COUNT(*) FROM pgsql_query_rules WHERE rule_id=%d AND active=1 AND match_pattern='%s' " + "AND destination_hostgroup=%d AND apply=1 AND comment='%s' AND database='%s'", + main_query_rules_query, + rule_id, + match_pattern.c_str(), + destination_hostgroup, + comment.c_str(), + database_name.c_str() + ); + if (wait_for_expected_count(replica_admin, main_query_rules_query, 1, "pgsql_query_rules main sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + string_format( + "SELECT COUNT(*) FROM pgsql_query_rules_fast_routing WHERE username='%s' AND database='%s' " + "AND flagIN=%d AND destination_hostgroup=%d AND comment='%s'", + main_fast_routing_query, + "", + database_name.c_str(), + fast_routing_flag_in, + destination_hostgroup, + fast_routing_comment.c_str() + ); + if (wait_for_expected_count(replica_admin, main_fast_routing_query, 1, "pgsql_query_rules_fast_routing main sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + + if (save_to_disk) { + string 
disk_query_rules_query = main_query_rules_query; + string disk_fast_routing_query = main_fast_routing_query; + const string rules_from_table { "FROM pgsql_query_rules" }; + const string rules_to_table { "FROM disk.pgsql_query_rules" }; + const string fast_from_table { "FROM pgsql_query_rules_fast_routing" }; + const string fast_to_table { "FROM disk.pgsql_query_rules_fast_routing" }; + const size_t rules_from_pos = disk_query_rules_query.find(rules_from_table); + const size_t fast_from_pos = disk_fast_routing_query.find(fast_from_table); + if (rules_from_pos == string::npos || fast_from_pos == string::npos) { + diag("Failed to rewrite pgsql query rules queries for disk validation"); + goto cleanup; + } + disk_query_rules_query.replace(rules_from_pos, rules_from_table.length(), rules_to_table); + disk_fast_routing_query.replace(fast_from_pos, fast_from_table.length(), fast_to_table); + if (wait_for_expected_count(replica_admin, disk_query_rules_query, 1, "pgsql_query_rules disk sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + if (wait_for_expected_count(replica_admin, disk_fast_routing_query, 1, "pgsql_query_rules_fast_routing disk sync", true) != EXIT_SUCCESS) { + goto cleanup; + } + } + + rc = EXIT_SUCCESS; + +cleanup: + if (restore_admin_table(proxy_admin, "pgsql_query_rules_fast_routing", fast_routing_backup_table_name) != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + if (restore_admin_table(proxy_admin, "pgsql_query_rules", rules_backup_table_name, "LOAD PGSQL QUERY RULES TO RUNTIME") != EXIT_SUCCESS) { + return EXIT_FAILURE; + } + return rc; +} + +int check_pgsql_checksums_in_runtime_table(MYSQL* admin) { + const char* pgsql_checksums[] = { + "pgsql_query_rules", + "pgsql_servers", + "pgsql_servers_v2", + "pgsql_users", + "pgsql_variables" + }; + + for (const char* checksum_name : pgsql_checksums) { + const char* t_check_checksum = + "SELECT COUNT(*) FROM runtime_checksums_values WHERE name='%s'"; + + char query[256]; + snprintf(query, sizeof(query), 
t_check_checksum, checksum_name); + + MYSQL_QUERY(admin, query); + MYSQL_RES* result = mysql_store_result(admin); + if (!result) { + diag("Failed to store result from query: %s", query); + return EXIT_FAILURE; + } + if (mysql_num_rows(result) == 0) { + diag("No results returned from query: %s", query); + mysql_free_result(result); + return EXIT_FAILURE; + } + MYSQL_ROW row = mysql_fetch_row(result); + if (!row) { + diag("Failed to fetch row from result"); + mysql_free_result(result); + return EXIT_FAILURE; + } + int count = atoi(row[0]); + mysql_free_result(result); + + if (count != 1) { + diag("PostgreSQL checksum '%s' not found in runtime_checksums_values", checksum_name); + return EXIT_FAILURE; + } + } + + return EXIT_SUCCESS; +} + +int main(int argc, char** argv) { + CommandLine cl; + + if (cl.getEnv()) { + diag("Failed to get configuration from environment"); + return EXIT_FAILURE; + } + + plan(13); + + // Connect to admin interfaces + MYSQL* proxysql_admin = mysql_init(NULL); + if (!proxysql_admin) { + diag("mysql_init() failed"); + return exit_status(); + } + + if (!mysql_real_connect(proxysql_admin, cl.host, cl.admin_username, cl.admin_password, NULL, cl.admin_port, NULL, 0)) { + diag("Failed to connect to primary admin: %s", mysql_error(proxysql_admin)); + return exit_status(); + } + + // Check each PostgreSQL checksum individually + const char* pgsql_checksums[] = { + "pgsql_query_rules", + "pgsql_servers", + "pgsql_servers_v2", + "pgsql_users", + "pgsql_variables" + }; + + for (const char* checksum_name : pgsql_checksums) { + const char* t_check_checksum = + "SELECT COUNT(*) FROM runtime_checksums_values WHERE name='%s'"; + + char query[256]; + snprintf(query, sizeof(query), t_check_checksum, checksum_name); + + MYSQL_QUERY(proxysql_admin, query); + MYSQL_RES* result = mysql_store_result(proxysql_admin); + if (!result) { + diag("Failed to store result from query: %s", query); + ok(false, "PostgreSQL checksum '%s' found in runtime_checksums_values", 
checksum_name); + continue; + } + if (mysql_num_rows(result) == 0) { + diag("No results returned from query: %s", query); + mysql_free_result(result); + ok(false, "PostgreSQL checksum '%s' found in runtime_checksums_values", checksum_name); + continue; + } + MYSQL_ROW row = mysql_fetch_row(result); + if (!row) { + diag("Failed to fetch row from result"); + mysql_free_result(result); + ok(false, "PostgreSQL checksum '%s' found in runtime_checksums_values", checksum_name); + continue; + } + int count = atoi(row[0]); + mysql_free_result(result); + + ok(count == 1, "PostgreSQL checksum '%s' found in runtime_checksums_values", checksum_name); + } + + int res = check_pgsql_checksums_in_runtime_table(proxysql_admin); + ok(res == EXIT_SUCCESS, "PostgreSQL checksum validation passed"); + + // Test basic PostgreSQL configuration is supported + MYSQL_QUERY(proxysql_admin, "SELECT 1 FROM pgsql_servers LIMIT 1"); + MYSQL_RES* pgsql_servers_result = mysql_store_result(proxysql_admin); + ok(mysql_errno(proxysql_admin) == 0, "PostgreSQL servers table is accessible"); + if (pgsql_servers_result) { + mysql_free_result(pgsql_servers_result); + } + + MYSQL_QUERY(proxysql_admin, "SELECT 1 FROM pgsql_users LIMIT 1"); + MYSQL_RES* pgsql_users_result = mysql_store_result(proxysql_admin); + ok(mysql_errno(proxysql_admin) == 0, "PostgreSQL users table is accessible"); + if (pgsql_users_result) { + mysql_free_result(pgsql_users_result); + } + + MYSQL_QUERY(proxysql_admin, "SELECT 1 FROM pgsql_query_rules LIMIT 1"); + MYSQL_RES* pgsql_query_rules_result = mysql_store_result(proxysql_admin); + ok(mysql_errno(proxysql_admin) == 0, "PostgreSQL query rules table is accessible"); + if (pgsql_query_rules_result) { + mysql_free_result(pgsql_query_rules_result); + } + + // Check cluster variables exist + MYSQL_QUERY(proxysql_admin, "SHOW VARIABLES LIKE 'cluster_pgsql_%'"); + MYSQL_RES* pgsql_cluster_vars_result = mysql_store_result(proxysql_admin); + ok(mysql_errno(proxysql_admin) == 0, "PostgreSQL 
cluster variables are accessible");
+	if (pgsql_cluster_vars_result) {
+		mysql_free_result(pgsql_cluster_vars_result);
+	}
+
+	{
+		bool servers_save_to_disk = false;
+		bool users_save_to_disk = false;
+		bool query_rules_save_to_disk = false;
+		const char* replica_port_env = getenv("TAP_PGSQL_SYNC_REPLICA_PORT");
+
+		if (!replica_port_env || strlen(replica_port_env) == 0) {
+			ok(true, "PostgreSQL servers_v2 sync check skipped (set TAP_PGSQL_SYNC_REPLICA_PORT to enable)");
+			ok(true, "PostgreSQL users sync check skipped (set TAP_PGSQL_SYNC_REPLICA_PORT to enable)");
+			ok(true, "PostgreSQL query rules sync check skipped (set TAP_PGSQL_SYNC_REPLICA_PORT to enable)");
+		} else {
+			MYSQL* replica_admin = mysql_init(NULL);
+			if (!replica_admin) {
+				ok(false, "Failed to initialize replica admin connection for PostgreSQL servers_v2 sync check");
+				ok(false, "Failed to initialize replica admin connection for PostgreSQL users sync check");
+				ok(false, "Failed to initialize replica admin connection for PostgreSQL query rules sync check");
+			} else if (!mysql_real_connect(
+				replica_admin,
+				cl.host,
+				cl.admin_username,
+				cl.admin_password,
+				NULL,
+				static_cast<unsigned int>(atoi(replica_port_env)),
+				NULL,
+				0
+			)) {
+				ok(false, "Failed to connect to replica admin for PostgreSQL servers_v2 sync check");
+				ok(false, "Failed to connect to replica admin for PostgreSQL users sync check");
+				ok(false, "Failed to connect to replica admin for PostgreSQL query rules sync check");
+			} else {
+				const int servers_save_to_disk_rc = get_admin_bool_value(
+					proxysql_admin, "admin-cluster_pgsql_servers_save_to_disk", servers_save_to_disk
+				);
+				if (servers_save_to_disk_rc != EXIT_SUCCESS) {
+					diag("Failed to retrieve admin-cluster_pgsql_servers_save_to_disk");
+				}
+				const int users_save_to_disk_rc = get_admin_bool_value(
+					proxysql_admin, "admin-cluster_pgsql_users_save_to_disk", users_save_to_disk
+				);
+				if (users_save_to_disk_rc != EXIT_SUCCESS) {
+					diag("Failed to retrieve 
admin-cluster_pgsql_users_save_to_disk");
+				}
+				const int query_rules_save_to_disk_rc = get_admin_bool_value(
+					proxysql_admin, "admin-cluster_pgsql_query_rules_save_to_disk", query_rules_save_to_disk
+				);
+				if (query_rules_save_to_disk_rc != EXIT_SUCCESS) {
+					diag("Failed to retrieve admin-cluster_pgsql_query_rules_save_to_disk");
+				}
+
+				const vector<pgsql_server_tuple> pgsql_servers_values {
+					{ 801, "127.0.0.1", 15432, "ONLINE", 1, 0, 200, 0, 0, 1000, "cluster_sync_pgsql_test_5297" }
+				};
+				const int servers_sync_res = (servers_save_to_disk_rc == EXIT_SUCCESS)
+					? check_pgsql_servers_v2_sync(
+						proxysql_admin, replica_admin, servers_save_to_disk, pgsql_servers_values
+					)
+					: EXIT_FAILURE;
+				ok(
+					servers_sync_res == EXIT_SUCCESS,
+					"PostgreSQL servers_v2 synced to replica%s",
+					(servers_save_to_disk ? " and disk persisted" : "")
+				);
+
+				const int users_sync_res = (users_save_to_disk_rc == EXIT_SUCCESS)
+					? check_pgsql_users_sync(
+						proxysql_admin, replica_admin, users_save_to_disk
+					)
+					: EXIT_FAILURE;
+				ok(
+					users_sync_res == EXIT_SUCCESS,
+					"PostgreSQL users synced to replica%s",
+					(users_save_to_disk ? " and disk persisted" : "")
+				);
+
+				const int query_rules_sync_res = (query_rules_save_to_disk_rc == EXIT_SUCCESS)
+					? check_pgsql_query_rules_sync(
+						proxysql_admin, replica_admin, query_rules_save_to_disk
+					)
+					: EXIT_FAILURE;
+				ok(
+					query_rules_sync_res == EXIT_SUCCESS,
+					"PostgreSQL query rules synced to replica%s",
+					(query_rules_save_to_disk ? " and disk persisted" : "")
+				);
+			}
+
+			if (replica_admin) {
+				mysql_close(replica_admin);
+			}
+		}
+	}
+
+	mysql_close(proxysql_admin);
+
+	return exit_status();
+}