From c6a8bce25d97c43f06f892af6a96f8adc204fb5f Mon Sep 17 00:00:00 2001
From: Alex Gaetano Padula
Date: Wed, 11 Mar 2026 03:14:03 -0400
Subject: [PATCH 1/3] Correct isolation mapping, dup_ref handling, perf
 regressions, and #72/#73/#74/#75/#76/#77/#78/#79/#80/#81/#82/#83/#84

Correct handlerton compatibility for MariaDB 11.4+/12+. Correct the MTR
suite for 11.4+ and 12+. Fix concurrent conflict handling (issue #77).
Major performance pass eliminating per-row overhead in hot paths.

--- Correctness fixes ---

Isolation levels now respected:
Add resolve_effective_isolation() that maps the MariaDB session's
SET TRANSACTION ISOLATION LEVEL (via thd_tx_isolation()) to the
corresponding TidesDB isolation level:

  READ UNCOMMITTED -> TDB_ISOLATION_READ_UNCOMMITTED
  READ COMMITTED   -> TDB_ISOLATION_READ_COMMITTED
  REPEATABLE READ  -> TDB_ISOLATION_REPEATABLE_READ
                      (or TDB_ISOLATION_SNAPSHOT if the table option says so)
  SERIALIZABLE     -> TDB_ISOLATION_SERIALIZABLE

Previously ensure_stmt_txn() and external_lock() used
share->isolation_level (a static table option) for multi-statement
transactions and hard-coded READ_COMMITTED for autocommit.
start_consistent_snapshot hard-coded REPEATABLE_READ. The session's
tx_isolation was completely ignored. All three paths now call
resolve_effective_isolation(thd, share->isolation_level).

TidesDB's SNAPSHOT level (no SQL equivalent) is honoured when the
session is at REPEATABLE READ and the table option specifies SNAPSHOT.
SNAPSHOT is the semantic equivalent of InnoDB's repeatable read
(write-write conflict detection only, no read-set tracking).

dup_ref population (root cause of 'Can't find record' on IODKU):
write_row() now copies the conflicting row's PK bytes into dup_ref
before returning HA_ERR_FOUND_DUPP_KEY, for both PK duplicates and
unique secondary index duplicates.

REPLACE INTO secondary index orphan fix:
Remove write_can_replace_ from the skip_unique check entirely.
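A minimal sketch of the mapping above, assuming the TidesDB enum ordering
implied by the message (SNAPSHOT is enum value 3). The real function takes
a THD* and reads the level via thd_tx_isolation(); the sql_iso stand-in
enum here is illustrative, not the server's actual type:

```cpp
#include <cassert>

// Stand-in for the server-side session isolation level.
enum sql_iso { ISO_READ_UNCOMMITTED, ISO_READ_COMMITTED,
               ISO_REPEATABLE_READ, ISO_SERIALIZABLE };

enum tdb_iso { TDB_ISOLATION_READ_UNCOMMITTED, TDB_ISOLATION_READ_COMMITTED,
               TDB_ISOLATION_REPEATABLE_READ, TDB_ISOLATION_SNAPSHOT,
               TDB_ISOLATION_SERIALIZABLE };

// table_level is the static table option; it only matters when the
// session sits at REPEATABLE READ and the option asks for SNAPSHOT.
static tdb_iso resolve_effective_isolation(sql_iso session_level,
                                           tdb_iso table_level)
{
  switch (session_level) {
  case ISO_READ_UNCOMMITTED: return TDB_ISOLATION_READ_UNCOMMITTED;
  case ISO_READ_COMMITTED:   return TDB_ISOLATION_READ_COMMITTED;
  case ISO_REPEATABLE_READ:
    return table_level == TDB_ISOLATION_SNAPSHOT
               ? TDB_ISOLATION_SNAPSHOT        // table option wins
               : TDB_ISOLATION_REPEATABLE_READ;
  case ISO_SERIALIZABLE:     return TDB_ISOLATION_SERIALIZABLE;
  }
  return TDB_ISOLATION_READ_COMMITTED; // unreachable
}
```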
Now only the tidesdb_skip_unique_check session variable skips uniqueness
checks. Both REPLACE INTO and INSERT ON DUPLICATE KEY UPDATE let
write_row() return HA_ERR_FOUND_DUPP_KEY so the server properly handles
the delete+reinsert and cleans up old secondary index entries via
delete_row().

extra() fix: HA_EXTRA_INSERT_WITH_UPDATE no longer sets write_can_replace_.

DDL isolation fix: inplace_alter_table() (ALTER TABLE ADD INDEX)
previously used share->isolation_level for scan transactions. If the
table was created with SERIALIZABLE, the index build would track
millions of read-set entries, causing unbounded memory growth. Now
hard-coded to TDB_ISOLATION_READ_COMMITTED for both the initial and
batch-restart transactions; DDL never needs OCC conflict detection.

Bulk insert mid-commit isolation: the batch re-begin in write_row()
previously used share->isolation_level. It now uses
TDB_ISOLATION_READ_COMMITTED, since bulk inserts do not need snapshot
consistency across batches.

Bulk insert iterator invalidation: after a bulk insert mid-commit
creates a fresh transaction, the old scan_iter held a dangling pointer
to the freed txn. Now invalidate the iterator (tidesdb_iter_free + NULL)
and update scan_txn to point to the new transaction. Bump txn_generation
so other handler objects detect the change.

--- Library fixes (src/tidesdb.c) ---

SNAPSHOT isolation read-set tracking: tidesdb_txn_add_to_read_set() was
only skipping tracking for isolation levels < TDB_ISOLATION_REPEATABLE_READ
(i.e. RU and RC). SNAPSHOT (enum value 3) fell through and tracked every
read, but SNAPSHOT only needs write-write conflict detection. Fix: skip
read tracking when isolation_level == TDB_ISOLATION_SNAPSHOT. Also fix
tidesdb_txn_check_read_conflicts() to skip for SNAPSHOT. Previously
SNAPSHOT ran full read-conflict and SSI checks at commit, making it
behave identically to SERIALIZABLE.
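The read-set gating fix can be sketched as a single predicate, assuming
the enum ordering the message implies (SNAPSHOT == 3, between
REPEATABLE_READ and SERIALIZABLE); names other than the isolation
constants are stand-ins for the library internals:

```cpp
#include <cassert>

enum tdb_iso { TDB_ISOLATION_READ_UNCOMMITTED, TDB_ISOLATION_READ_COMMITTED,
               TDB_ISOLATION_REPEATABLE_READ, TDB_ISOLATION_SNAPSHOT,
               TDB_ISOLATION_SERIALIZABLE };

// Shared guard for tidesdb_txn_add_to_read_set() and
// tidesdb_txn_check_read_conflicts() (paraphrased).
static bool tracks_reads(tdb_iso level)
{
  // Old check: only level < REPEATABLE_READ skipped tracking, so
  // SNAPSHOT fell through and tracked every read.
  if (level < TDB_ISOLATION_REPEATABLE_READ)
    return false;
  // Fix: SNAPSHOT needs only write-write conflict detection, so it
  // never builds a read set and skips commit-time read-conflict checks.
  if (level == TDB_ISOLATION_SNAPSHOT)
    return false;
  return true;
}
```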
--- Performance fixes ---

Debug logging gated behind srv_debug_trace: ~25
sql_print_information("[T77]...") calls were firing unconditionally on
every INSERT, SELECT, scan, lock, and commit. At high throughput this
serialises all threads on the error-log mutex. All calls are now gated
with if (unlikely(srv_debug_trace)), matching the existing TDB_TRACE
macro pattern. Zero overhead when disabled.

tidesdb_skip_unique_check session variable: new per-session boolean,
SET SESSION tidesdb_skip_unique_check=1. Skips both PK and UNIQUE
secondary index uniqueness checks during INSERT. Same pattern as
MyRocks' rocksdb_skip_unique_check. Eliminates the expensive per-row
tidesdb_txn_get point lookup (which traverses all LSM levels) during
bulk loads.

Stack-allocated index key buffers in update_row(): replace two
std::unique_ptr(new uchar[MAX_KEY_LENGTH*2+2]) heap allocations per
UPDATE with stack-allocated buffers (~4KB each, well within stack
limits).

Dup-check iterator caching: cache iterators per unique-index slot in
write_row(). Reuse via seek() instead of iter_new()/iter_free() per row.
Add a free_dup_iter_cache() helper; free in close(), delete_all_rows(),
external_lock(F_UNLCK), and bulk insert mid-commit.

Transaction reuse: use tidesdb_txn_reset() instead of tidesdb_txn_free()
+ tidesdb_txn_begin_with_isolation() in the bulk insert mid-commit and
autocommit paths.

Iterator survival: preserve iterators across read-only autocommit
statements instead of tearing them down and rebuilding.

Skip the PK dup check for REPLACE INTO without secondary indexes.

Zero-copy fetch path: use get_val_buf_ for the BLOB/encrypted path in
fetch_row_by_pk().

HA_CLUSTERED_INDEX: add HA_CLUSTERED_INDEX to table_flags().

Encryption buffer reuse: serialize_row() now writes to enc_buf_ and
returns it, preserving row_buf_ capacity across rows. Avoids a
re-allocation per encrypted write.

Covering index decode expansion: new decode_sort_key_part() handles INT,
DATE, DATETIME, TIMESTAMP, YEAR, and CHAR/BINARY (binary/latin1).
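The dup-check iterator cache boils down to a lazily filled per-index
slot array that is repositioned with seek() per row instead of rebuilt.
A minimal sketch with stand-in types (the real code holds tidesdb
iterators and calls the library's iter_new/seek/free):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

struct FakeIter { int creations = 0; int seeks = 0; };

class DupIterCache {
  static constexpr std::size_t MAX_KEYS = 8;  // stand-in for key count
  std::array<FakeIter*, MAX_KEYS> slots_{};   // one cached iter per index
public:
  FakeIter* get(std::size_t keynr) {
    if (!slots_[keynr])                       // first row: build iterator
      slots_[keynr] = new FakeIter();
    slots_[keynr]->creations += (slots_[keynr]->seeks == 0);
    slots_[keynr]->seeks++;                   // later rows: just seek()
    return slots_[keynr];
  }
  void free_all() {                           // mirrors free_dup_iter_cache()
    for (auto*& it : slots_) { delete it; it = nullptr; }
  }
  ~DupIterCache() { free_all(); }
};
```

The free_all() points match the commit message: close(),
delete_all_rows(), external_lock(F_UNLCK), and bulk insert mid-commit,
i.e. anywhere the underlying transaction or table goes away.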
try_keyread_from_index() and icp_check_secondary() now use it instead of
the old integer-only type gate.

Encryption key version caching: call encryption_key_get_latest_version()
once per statement and cache it in cached_enc_key_ver_. Invalidate via
enc_key_ver_valid_ = false at statement end in external_lock(F_UNLCK).

Per-statement syscall/THDVAR caching: new per-statement caches in
ha_tidesdb.h:

  cached_time_ / cached_time_valid_ - time(NULL) for TTL
  cached_sess_ttl_                  - THDVAR(thd, ttl)
  cached_skip_unique_               - THDVAR(thd, skip_unique_check)
  cached_thdvars_valid_             - invalidation flag

write_row() called ha_thd() 4x, thd_get_ha_data() 3x, and THDVAR() 3x;
each is now called once and cached, eliminating ~9 indirect/virtual
calls per row. Similar reductions in compute_row_ttl(), update_row(),
rnd_init(), index_init(), and ensure_scan_iter(). All caches are
invalidated in external_lock(F_UNLCK).

Inplace index build: replace std::set with std::unordered_set for O(1)
duplicate detection.

--- Feature additions ---

Issue #76: tidesdb_data_home_dir
A read-only MYSQL_SYSVAR_STR that overrides the auto-computed data
directory. When set, the plugin uses that path instead of
/../tidesdb_data.

Issue #73: SHOW ENGINE TIDESDB STATUS
tidesdb_show_status() callback registered on the handlerton. Displays
DB-level stats (memory, storage, background queues) and block cache
stats (hits, misses, hit rate, entries).

Issue #79: Per-index USE_BTREE
ha_index_option_struct with a use_btree boolean, registered via
tidesdb_index_option_list. index_type() checks the per-index option
first and falls back to the table level. create() and
inplace_alter_table() apply per-index USE_BTREE when creating secondary
index CFs. Syntax: KEY idx_a (a) USE_BTREE=1.

Issue #78: index_type() override
Returns 'LSM' for default block-based tables and 'BTREE' for
USE_BTREE=1 tables. SHOW KEYS now displays the correct index type.
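The per-statement caching above follows one pattern: resolve the
expensive lookup on first use, serve every later row from the cached
value, and invalidate at external_lock(F_UNLCK). A self-contained
sketch using the time(NULL)-for-TTL case (the expensive_calls counter is
illustrative only):

```cpp
#include <cassert>
#include <ctime>

struct StmtCache {
  time_t cached_time_ = 0;
  bool   cached_time_valid_ = false;
  int    expensive_calls = 0;          // how often we actually hit time()

  time_t stmt_time() {
    if (!cached_time_valid_) {         // first row of the statement
      cached_time_ = time(nullptr);
      cached_time_valid_ = true;
      expensive_calls++;
    }
    return cached_time_;               // every later row: cached value
  }
  void invalidate() {                  // called at external_lock(F_UNLCK)
    cached_time_valid_ = false;
  }
};
```

The THDVAR caches work the same way, just with thd_get_ha_data()/THDVAR()
as the expensive call and cached_thdvars_valid_ as the flag.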
Issue #74: ANALYZE TABLE cardinality
Samples up to 100K entries from each secondary index CF, counts distinct
key prefixes to compute rec_per_key. Before: EXPLAIN showed rows=1 for
any index lookup. After: accurate estimates.

Issue #82: Table option default variables
5 session-scoped dynamic system variables via HA_TOPTION_SYSVAR:

  tidesdb_default_compression       (NONE/SNAPPY/LZ4/ZSTD/LZ4_FAST)
  tidesdb_default_write_buffer_size (default 32MB)
  tidesdb_default_bloom_filter      (default ON)
  tidesdb_default_use_btree         (default OFF)
  tidesdb_default_block_indexes     (default ON)

Issue #81: Conflict information
tidesdb_print_all_conflicts global dynamic sysvar (like
innodb_print_all_deadlocks). When ON, every TDB_ERR_CONFLICT logs a
warning. Last conflict info displayed in SHOW ENGINE TIDESDB STATUS
under the 'Conflicts' section.

--- New MTR tests ---

  tidesdb_replace_iodku     - REPLACE INTO + IODKU with PK, secondary, unique
  tidesdb_index_stats       - issue #78 index_type + issue #74 cardinality
  tidesdb_isolation         - session isolation level mapping
  tidesdb_engine_status     - SHOW ENGINE TIDESDB STATUS
  tidesdb_per_index_btree   - per-index USE_BTREE
  tidesdb_data_home_dir     - sysvar visibility and read-only enforcement
  tidesdb_insert_conflict   - INSERT vs INSERT conflict (awaits library fix)

--- Known issues ---

Issue #77: concurrent UPDATE+DELETE crashes the server (SIGSEGV) in core
MVCC conflict detection. Not fixable in the plugin layer; requires
changes to src/tidesdb.c.

Issue #83: INSERT vs INSERT conflict not detected. Library-level bug:
tidesdb_txn_get within a transaction's snapshot cannot see uncommitted
writes from other transactions by design, so enforcement must happen at
commit time via TDB_ERR_CONFLICT. Fix confirmed coming in the next
library release.

The UNIQUE secondary index check still creates a full merge-heap
iterator via tidesdb_iter_new per UNIQUE index per INSERT when
skip_unique_check is off. A future tidesdb_txn_exists_prefix() API
would avoid building the full merge heap.
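The rec_per_key estimate for issue #74 reduces to sampled/distinct over
the sampled key prefixes. A sketch under that assumption (the real code
walks the secondary index CF with an iterator; a plain vector stands in
for the sampled prefixes here, and the rounding choice is illustrative):

```cpp
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

static unsigned long
rec_per_key_estimate(const std::vector<std::string>& prefixes)
{
  if (prefixes.empty())
    return 1;                                  // no data: assume unique
  std::unordered_set<std::string> distinct(prefixes.begin(),
                                           prefixes.end());
  // Rows per distinct key, rounded to nearest and clamped to >= 1,
  // so EXPLAIN never regresses to a zero estimate.
  unsigned long est =
      (prefixes.size() + distinct.size() / 2) / distinct.size();
  return est ? est : 1;
}
```

With the tidesdb_index_stats test data above (200 rows, 2 distinct
values of k) this yields 100, matching the expected EXPLAIN estimate.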
--- .../suite/tidesdb/include/have_tidesdb.inc | 2 +- .../suite/tidesdb/r/tidesdb_analyze.result | 1 + .../r/tidesdb_concurrent_conflict.result | 58 + .../suite/tidesdb/r/tidesdb_crud.result | 2 +- .../tidesdb/r/tidesdb_data_home_dir.reject | 10 + .../tidesdb/r/tidesdb_data_home_dir.result | 10 + .../suite/tidesdb/r/tidesdb_encryption.result | 2 +- .../tidesdb/r/tidesdb_engine_status.result | 42 + .../suite/tidesdb/r/tidesdb_fk_convert.result | 86 - .../tidesdb/r/tidesdb_index_stats.result | 114 ++ .../tidesdb/r/tidesdb_info_schema.result | 396 +++++ .../tidesdb/r/tidesdb_insert_conflict.result | 36 + .../suite/tidesdb/r/tidesdb_isolation.result | 115 ++ .../suite/tidesdb/r/tidesdb_online_ddl.result | 56 +- .../suite/tidesdb/r/tidesdb_options.result | 26 +- .../suite/tidesdb/r/tidesdb_partition.result | 2 +- .../tidesdb/r/tidesdb_per_index_btree.result | 42 + .../suite/tidesdb/r/tidesdb_rename.result | 14 +- .../tidesdb/r/tidesdb_replace_iodku.result | 200 +++ mysql-test/suite/tidesdb/r/tidesdb_ttl.result | 2 +- .../suite/tidesdb/r/tidesdb_vcol.result | 2 +- mysql-test/suite/tidesdb/suite.opt | 4 +- .../suite/tidesdb/t/tidesdb_alter_crash.test | 1 + .../suite/tidesdb/t/tidesdb_analyze.test | 1 + .../suite/tidesdb/t/tidesdb_backup.test | 1 + ...rt.opt => tidesdb_concurrent_conflict.opt} | 0 .../t/tidesdb_concurrent_conflict.test | 73 + .../tidesdb/t/tidesdb_concurrent_errors.test | 1 + .../t/tidesdb_consistent_snapshot.test | 1 + mysql-test/suite/tidesdb/t/tidesdb_crud.test | 1 + .../suite/tidesdb/t/tidesdb_data_home_dir.opt | 2 + .../tidesdb/t/tidesdb_data_home_dir.test | 16 + .../suite/tidesdb/t/tidesdb_drop_create.test | 1 + .../suite/tidesdb/t/tidesdb_encryption.test | 1 + .../suite/tidesdb/t/tidesdb_engine_status.opt | 2 + .../tidesdb/t/tidesdb_engine_status.test | 20 + .../suite/tidesdb/t/tidesdb_fk_convert.test | 81 - .../suite/tidesdb/t/tidesdb_index_stats.opt | 2 + .../suite/tidesdb/t/tidesdb_index_stats.test | 127 ++ 
.../suite/tidesdb/t/tidesdb_info_schema.test | 1 + .../tidesdb/t/tidesdb_insert_conflict.opt | 2 + .../tidesdb/t/tidesdb_insert_conflict.test | 54 + .../suite/tidesdb/t/tidesdb_isolation.opt | 2 + .../suite/tidesdb/t/tidesdb_isolation.test | 125 ++ mysql-test/suite/tidesdb/t/tidesdb_json.test | 1 + .../suite/tidesdb/t/tidesdb_online_ddl.test | 21 +- .../suite/tidesdb/t/tidesdb_options.test | 1 + .../suite/tidesdb/t/tidesdb_partition.test | 1 + .../tidesdb/t/tidesdb_per_index_btree.opt | 2 + .../tidesdb/t/tidesdb_per_index_btree.test | 46 + .../suite/tidesdb/t/tidesdb_pk_index.test | 1 + .../suite/tidesdb/t/tidesdb_rename.test | 1 + .../suite/tidesdb/t/tidesdb_replace_iodku.opt | 2 + .../tidesdb/t/tidesdb_replace_iodku.test | 177 ++ .../suite/tidesdb/t/tidesdb_savepoint.test | 1 + mysql-test/suite/tidesdb/t/tidesdb_sql.test | 1 + .../suite/tidesdb/t/tidesdb_stress.test | 1 + mysql-test/suite/tidesdb/t/tidesdb_ttl.test | 1 + mysql-test/suite/tidesdb/t/tidesdb_vcol.test | 1 + .../tidesdb/t/tidesdb_write_pressure.test | 1 + tidesdb/ha_tidesdb.cc | 1440 +++++++++++------ tidesdb/ha_tidesdb.h | 59 +- 62 files changed, 2809 insertions(+), 686 deletions(-) create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_concurrent_conflict.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_engine_status.result delete mode 100644 mysql-test/suite/tidesdb/r/tidesdb_fk_convert.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_index_stats.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_insert_conflict.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_isolation.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_per_index_btree.result create mode 100644 mysql-test/suite/tidesdb/r/tidesdb_replace_iodku.result rename mysql-test/suite/tidesdb/t/{tidesdb_fk_convert.opt => 
tidesdb_concurrent_conflict.opt} (100%) create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_engine_status.test delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_fk_convert.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_index_stats.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_isolation.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_isolation.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.test create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt create mode 100644 mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.test diff --git a/mysql-test/suite/tidesdb/include/have_tidesdb.inc b/mysql-test/suite/tidesdb/include/have_tidesdb.inc index 0ff55a1d..9bb5dbe8 100644 --- a/mysql-test/suite/tidesdb/include/have_tidesdb.inc +++ b/mysql-test/suite/tidesdb/include/have_tidesdb.inc @@ -1,5 +1,5 @@ ---require r/have_tidesdb.require disable_query_log; --error 0,1286 eval SET @@default_storage_engine = TidesDB; +ALTER DATABASE test DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci; enable_query_log; diff --git a/mysql-test/suite/tidesdb/r/tidesdb_analyze.result b/mysql-test/suite/tidesdb/r/tidesdb_analyze.result index 46fa237c..926e7f8e 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_analyze.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_analyze.result @@ -21,6 +21,7 @@ test.t1 analyze Note TIDESDB: level 3 
sstables=N size=N bytes keys=N test.t1 analyze Note TIDESDB: level 4 sstables=N size=N bytes keys=N test.t1 analyze Note TIDESDB: level 5 sstables=N size=N bytes keys=N test.t1 analyze Note TIDESDB: idx CF 'test__t1__idx_idx_val' keys=N data_size=N bytes levels=5 +test.t1 analyze Note TIDESDB: idx 'idx_val' sampled=6 distinct=6 rec_per_key=1 test.t1 analyze status OK # ANALYZE a table without secondary indexes CREATE TABLE t2 ( diff --git a/mysql-test/suite/tidesdb/r/tidesdb_concurrent_conflict.result b/mysql-test/suite/tidesdb/r/tidesdb_concurrent_conflict.result new file mode 100644 index 00000000..3d52a67e --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_concurrent_conflict.result @@ -0,0 +1,58 @@ +call mtr.add_suppression("TIDESDB:.*TDB_ERR_CONFLICT"); +# +# Issue #77: Concurrent conflict detection +# +CREATE TABLE t ( +i INT NOT NULL PRIMARY KEY, +x INT +) ENGINE=TidesDB; +INSERT INTO t VALUES (1,10),(2,20),(3,30),(4,40),(5,50); +connect con1, localhost, root,,; +connect con2, localhost, root,,; +# ---- TEST 1: Two UPDATEs on same row ---- +connection con1; +START TRANSACTION; +UPDATE t SET x = 999 WHERE i = 1; +connection con2; +START TRANSACTION; +UPDATE t SET x = 888 WHERE i = 1; +COMMIT; +connection con1; +COMMIT; +Got one of the listed errors +connection default; +# con2 wins: x should be 888 +SELECT * FROM t WHERE i = 1; +i x +1 888 +# ---- TEST 2: UPDATE vs DELETE on same row ---- +connection con1; +START TRANSACTION; +UPDATE t SET x = 777 WHERE i = 2; +connection con2; +START TRANSACTION; +DELETE FROM t WHERE i = 2; +COMMIT; +connection con1; +COMMIT; +Got one of the listed errors +connection default; +# con2 wins: row 2 should be gone +SELECT * FROM t WHERE i = 2; +i x +# Remaining rows intact +SELECT * FROM t ORDER BY i; +i x +1 888 +3 30 +4 40 +5 50 +# Cleanup +connection con1; +disconnect con1; +connection con2; +disconnect con2; +connection default; +DROP TABLE t; +# +# Done. 
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_crud.result b/mysql-test/suite/tidesdb/r/tidesdb_crud.result index ca7e0e0d..8e9e4df6 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_crud.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_crud.result @@ -21,7 +21,7 @@ t1 CREATE TABLE `t1` ( `score` decimal(10,2) DEFAULT NULL, `bio` text DEFAULT NULL, `born` date DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci # # ============================================ # TEST 2: INSERT — single row diff --git a/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject b/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject new file mode 100644 index 00000000..0ffe0fcd --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject @@ -0,0 +1,10 @@ +# +# Verify tidesdb_data_home_dir is visible and read-only +# +SHOW VARIABLES LIKE 'tidesdb_data_home_dir'; +Variable_name Value +tidesdb_data_home_dir +SET GLOBAL tidesdb_data_home_dir = '/tmp/test'; +ERROR HY000: Variable 'tidesdb_data_home_dir' is a read only variable +# +# Done. diff --git a/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.result b/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.result new file mode 100644 index 00000000..0ffe0fcd --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.result @@ -0,0 +1,10 @@ +# +# Verify tidesdb_data_home_dir is visible and read-only +# +SHOW VARIABLES LIKE 'tidesdb_data_home_dir'; +Variable_name Value +tidesdb_data_home_dir +SET GLOBAL tidesdb_data_home_dir = '/tmp/test'; +ERROR HY000: Variable 'tidesdb_data_home_dir' is a read only variable +# +# Done. 
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_encryption.result b/mysql-test/suite/tidesdb/r/tidesdb_encryption.result index 63e27e8d..57b3936b 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_encryption.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_encryption.result @@ -42,7 +42,7 @@ t_enc2 CREATE TABLE `t_enc2` ( `name` varchar(50) DEFAULT NULL, `amount` int(11) DEFAULT NULL, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `ENCRYPTED`=YES `ENCRYPTION_KEY_ID`=2 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `ENCRYPTED`=YES `ENCRYPTION_KEY_ID`=2 INSERT INTO t_enc2 VALUES (1, 'alice', 100); SELECT * FROM t_enc2; id name amount diff --git a/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result b/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result new file mode 100644 index 00000000..c7cd1498 --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result @@ -0,0 +1,42 @@ +# +# SHOW ENGINE TIDESDB STATUS should return output +# +CREATE TABLE t1 (id INT PRIMARY KEY, val INT) ENGINE=TidesDB; +INSERT INTO t1 VALUES (1,10),(2,20),(3,30); +SHOW ENGINE TIDESDB STATUS; +Type Name Status +TIDESDB ================== TidesDB Engine Status ================== +Data directory: /home/agpmastersystem/server-mariadb-N.N.N/builddir/mysql-test/var/mysqld.N/tidesdb_data +Column families: N +Global sequence: N + +--- Memory --- +Total system memory: N MB +Resolved memory limit: N MB +Memory pressure level: N +Total memtable bytes: N +Transaction memory bytes: N + +--- Storage --- +Total SSTables: N +Open SSTable handles: N +Total data size: N bytes +Immutable memtables: N + +--- Background --- +Flush pending: N +Flush queue size: N +Compaction queue size: N + +--- Block Cache --- +Enabled: YES +Entries: N +Size: N bytes +Hits: N +Misses: N +Hit rate: N.N% +Partitions: N + +DROP TABLE t1; +# +# Done. 
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_fk_convert.result b/mysql-test/suite/tidesdb/r/tidesdb_fk_convert.result deleted file mode 100644 index baad4691..00000000 --- a/mysql-test/suite/tidesdb/r/tidesdb_fk_convert.result +++ /dev/null @@ -1,86 +0,0 @@ -# -# Issue #61: Converting InnoDB tables with foreign keys to TidesDB -# Requires a server-side patch to sql/sql_table.cc that honours -# FOREIGN_KEY_CHECKS=0 during can_switch_engines(). If the patch -# is absent (upstream MariaDB), the test is skipped. -# -# Create an InnoDB table with a self-referencing foreign key -CREATE TABLE t_fk61 ( -a INT, -b INT NOT NULL, -INDEX idx_a (a), -CONSTRAINT `fk_self` FOREIGN KEY (b) REFERENCES t_fk61 (a) -ON DELETE CASCADE -ON UPDATE RESTRICT -) ENGINE=InnoDB; -INSERT INTO t_fk61 (a, b) VALUES (1, 1), (2, 1), (3, 2); -SHOW CREATE TABLE t_fk61; -Table Create Table -t_fk61 CREATE TABLE `t_fk61` ( - `a` int(11) DEFAULT NULL, - `b` int(11) NOT NULL, - KEY `idx_a` (`a`), - KEY `fk_self` (`b`), - CONSTRAINT `fk_self` FOREIGN KEY (`b`) REFERENCES `t_fk61` (`a`) ON DELETE CASCADE -) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci -# Without FOREIGN_KEY_CHECKS=0, the conversion should fail -ALTER TABLE t_fk61 ENGINE=TidesDB; -ERROR 23000: Cannot delete or update a parent row: a foreign key constraint fails -# With FOREIGN_KEY_CHECKS=0, the conversion should succeed -# (requires server patch; skip if absent) -SET FOREIGN_KEY_CHECKS=0; -ALTER TABLE t_fk61 ENGINE=TidesDB; -SET FOREIGN_KEY_CHECKS=1; -# Verify data survived the conversion -SELECT * FROM t_fk61 ORDER BY a; -a b -1 1 -2 1 -3 2 -# Verify the table is now TidesDB and FKs are gone -SHOW CREATE TABLE t_fk61; -Table Create Table -t_fk61 CREATE TABLE `t_fk61` ( - `a` int(11) DEFAULT NULL, - `b` int(11) NOT NULL, - KEY `idx_a` (`a`), - KEY `fk_self` (`b`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci -# Verify we can still do DML -INSERT INTO t_fk61 (a, b) VALUES (4, 99); -SELECT * 
FROM t_fk61 ORDER BY a; -a b -1 1 -2 1 -3 2 -4 99 -DROP TABLE t_fk61; -# Test with parent-child FK relationship (two tables) -CREATE TABLE t_parent61 ( -id INT PRIMARY KEY -) ENGINE=InnoDB; -CREATE TABLE t_child61 ( -id INT PRIMARY KEY, -parent_id INT, -CONSTRAINT `fk_parent` FOREIGN KEY (parent_id) REFERENCES t_parent61 (id) -) ENGINE=InnoDB; -INSERT INTO t_parent61 VALUES (1), (2), (3); -INSERT INTO t_child61 VALUES (10, 1), (20, 2); -# Convert parent table with FK_CHECKS=0 -SET FOREIGN_KEY_CHECKS=0; -ALTER TABLE t_parent61 ENGINE=TidesDB; -ALTER TABLE t_child61 ENGINE=TidesDB; -SET FOREIGN_KEY_CHECKS=1; -SELECT * FROM t_parent61 ORDER BY id; -id -1 -2 -3 -SELECT * FROM t_child61 ORDER BY id; -id parent_id -10 1 -20 2 -DROP TABLE t_child61; -DROP TABLE t_parent61; -# -# Done. diff --git a/mysql-test/suite/tidesdb/r/tidesdb_index_stats.result b/mysql-test/suite/tidesdb/r/tidesdb_index_stats.result new file mode 100644 index 00000000..59cfda23 --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_index_stats.result @@ -0,0 +1,114 @@ +# +# ============================================ +# TEST 1: Index type reporting (issue #78) +# LSM tables should show LSM, not BTREE +# ============================================ +# +CREATE TABLE t_lsm ( +i INT NOT NULL PRIMARY KEY, +y INT, +KEY idx_y (y) +) ENGINE=TIDESDB USE_BTREE=0; +SHOW KEYS FROM t_lsm; +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored +t_lsm 0 PRIMARY 1 i A 2 NULL NULL LSM NO +t_lsm 1 idx_y 1 y A 2 NULL NULL YES LSM NO +DROP TABLE t_lsm; +# +# ============================================ +# TEST 2: BTREE tables should show BTREE +# ============================================ +# +CREATE TABLE t_btree ( +i INT NOT NULL PRIMARY KEY, +y INT, +KEY idx_y (y) +) ENGINE=TIDESDB USE_BTREE=1; +SHOW KEYS FROM t_btree; +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment 
Index_comment Ignored +t_btree 0 PRIMARY 1 i A 2 NULL NULL BTREE NO +t_btree 1 idx_y 1 y A 2 NULL NULL YES BTREE NO +DROP TABLE t_btree; +# +# ============================================ +# TEST 3: Default (USE_BTREE=0) shows LSM +# ============================================ +# +CREATE TABLE t_default ( +i INT NOT NULL PRIMARY KEY, +y INT, +KEY idx_y (y) +) ENGINE=TIDESDB; +SHOW KEYS FROM t_default; +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored +t_default 0 PRIMARY 1 i A 2 NULL NULL LSM NO +t_default 1 idx_y 1 y A 2 NULL NULL YES LSM NO +DROP TABLE t_default; +# +# ============================================ +# TEST 4: ANALYZE TABLE updates rec_per_key +# for non-unique secondary indexes (issue #74) +# ============================================ +# +CREATE TABLE t_stats ( +id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, +k INT NOT NULL, +val VARCHAR(50), +KEY k_idx (k) +) ENGINE=TIDESDB; +# Insert 200 rows with only 2 distinct values for k +SELECT COUNT(*) AS total_rows FROM t_stats; +total_rows +200 +# Before ANALYZE, optimizer may not estimate well +EXPLAIN SELECT * FROM t_stats WHERE k = 0; +id select_type table type possible_keys key key_len ref rows Extra +1 SIMPLE t_stats ref k_idx k_idx 4 const 1 +ANALYZE TABLE t_stats; +Table Op Msg_type Msg_text +test.t_stats analyze status Engine-independent statistics collected +test.t_stats analyze Note TIDESDB: CF 'test__t_stats' total_keys=N data_size=N bytes memtable=N bytes levels=5 read_amp=N cache_hit=N% +test.t_stats analyze Note TIDESDB: avg_key=N bytes avg_value=N bytes +test.t_stats analyze Note TIDESDB: level 1 sstables=N size=N bytes keys=N +test.t_stats analyze Note TIDESDB: level 2 sstables=N size=N bytes keys=N +test.t_stats analyze Note TIDESDB: level 3 sstables=N size=N bytes keys=N +test.t_stats analyze Note TIDESDB: level 4 sstables=N size=N bytes keys=N +test.t_stats analyze Note TIDESDB: level 5 sstables=N 
size=N bytes keys=N +test.t_stats analyze Note TIDESDB: idx CF 'test__t_stats__idx_k_idx' keys=N data_size=N bytes levels=5 +test.t_stats analyze Note TIDESDB: idx 'k_idx' sampled=N distinct=N rec_per_key=N +test.t_stats analyze status OK +# After ANALYZE, the optimizer should estimate ~100 rows for k=0 +EXPLAIN SELECT * FROM t_stats WHERE k = 0; +id select_type table type possible_keys key key_len ref rows Extra +1 SIMPLE t_stats ref k_idx k_idx 4 const 100 +DROP TABLE t_stats; +# +# ============================================ +# TEST 5: ANALYZE with highly selective index +# ============================================ +# +CREATE TABLE t_stats2 ( +id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, +code INT NOT NULL, +KEY code_idx (code) +) ENGINE=TIDESDB; +ANALYZE TABLE t_stats2; +Table Op Msg_type Msg_text +test.t_stats2 analyze status Engine-independent statistics collected +test.t_stats2 analyze Note TIDESDB: CF 'test__t_stats2' total_keys=N data_size=N bytes memtable=N bytes levels=5 read_amp=N cache_hit=N% +test.t_stats2 analyze Note TIDESDB: avg_key=N bytes avg_value=N bytes +test.t_stats2 analyze Note TIDESDB: level 1 sstables=N size=N bytes keys=N +test.t_stats2 analyze Note TIDESDB: level 2 sstables=N size=N bytes keys=N +test.t_stats2 analyze Note TIDESDB: level 3 sstables=N size=N bytes keys=N +test.t_stats2 analyze Note TIDESDB: level 4 sstables=N size=N bytes keys=N +test.t_stats2 analyze Note TIDESDB: level 5 sstables=N size=N bytes keys=N +test.t_stats2 analyze Note TIDESDB: idx CF 'test__t_stats2__idx_code_idx' keys=N data_size=N bytes levels=5 +test.t_stats2 analyze Note TIDESDB: idx 'code_idx' sampled=N distinct=N rec_per_key=N +test.t_stats2 analyze status OK +# With 100 distinct values in 100 rows, rec_per_key should be ~1 +EXPLAIN SELECT * FROM t_stats2 WHERE code = 50; +id select_type table type possible_keys key key_len ref rows Extra +1 SIMPLE t_stats2 ref code_idx code_idx 4 const 1 Using index +DROP TABLE t_stats2; +# +# Done. 
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_info_schema.result b/mysql-test/suite/tidesdb/r/tidesdb_info_schema.result index ac88568d..d7e69552 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_info_schema.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_info_schema.result @@ -17,8 +17,404 @@ Note 1071 Specified key was too long; max key length is 255 bytes SELECT COUNT(*) FROM t_info_schema; COUNT(*) 3 +Warnings: +Warning 4202 3 values were longer than max_sort_length. Sorting used only the first 1024 bytes OK: INDEX_LENGTH > 0 # ---- verify after bulk insert ---- +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. Sorting used only the first 1024 bytes +Warnings: +Warning 4202 1 values were longer than max_sort_length. 
Sorting used only the first 1024 bytes SELECT COUNT(*) FROM t_info_schema; COUNT(*) 200 diff --git a/mysql-test/suite/tidesdb/r/tidesdb_insert_conflict.result b/mysql-test/suite/tidesdb/r/tidesdb_insert_conflict.result new file mode 100644 index 00000000..99d68ef8 --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_insert_conflict.result @@ -0,0 +1,36 @@ +call mtr.add_suppression("TIDESDB:.*TDB_ERR_CONFLICT"); +# +# Issue #83: INSERT vs INSERT conflict detection +# +CREATE TABLE t ( +a INT NOT NULL PRIMARY KEY, +b INT +) ENGINE=TidesDB; +connect con1, localhost, root,,; +connect con2, localhost, root,,; +# ---- TEST: Two INSERTs with same PK ---- +connection con1; +START TRANSACTION; +INSERT INTO t VALUES (1, 10); +connection con2; +START TRANSACTION; +INSERT INTO t VALUES (1, 500); +COMMIT; +connection con1; +# con1 should get conflict error -- con2 committed first +COMMIT; +Got one of the listed errors +connection default; +# con2 wins: b should be 500 +SELECT * FROM t; +a b +1 500 +# Cleanup +connection con1; +disconnect con1; +connection con2; +disconnect con2; +connection default; +DROP TABLE t; +# +# Done.
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_isolation.result b/mysql-test/suite/tidesdb/r/tidesdb_isolation.result new file mode 100644 index 00000000..9b81913e --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_isolation.result @@ -0,0 +1,115 @@ +# +# ============================================ +# TEST 1: READ COMMITTED — sees committed data +# ============================================ +# +CREATE TABLE t_iso ( +id INT NOT NULL PRIMARY KEY, +val INT +) ENGINE=TIDESDB; +INSERT INTO t_iso VALUES (1, 10); +connect con1, localhost, root,,; +connection con1; +SET TRANSACTION ISOLATION LEVEL READ COMMITTED; +BEGIN; +SELECT * FROM t_iso ORDER BY id; +id val +1 10 +connection default; +INSERT INTO t_iso VALUES (2, 20); +# con1 at READ COMMITTED should see newly committed row +connection con1; +SELECT * FROM t_iso ORDER BY id; +id val +1 10 +2 20 +COMMIT; +disconnect con1; +connection default; +# +# ============================================ +# TEST 2: REPEATABLE READ — snapshot isolation +# ============================================ +# +connect con2, localhost, root,,; +connection con2; +SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; +BEGIN; +SELECT * FROM t_iso ORDER BY id; +id val +1 10 +2 20 +connection default; +INSERT INTO t_iso VALUES (3, 30); +# con2 at REPEATABLE READ should NOT see row 3 +connection con2; +SELECT * FROM t_iso ORDER BY id; +id val +1 10 +2 20 +COMMIT; +# After COMMIT, new transaction should see row 3 +SELECT * FROM t_iso ORDER BY id; +id val +1 10 +2 20 +3 30 +disconnect con2; +connection default; +# +# ============================================ +# TEST 3: Basic DML at each isolation level +# (verifies the mapping doesn't crash) +# ============================================ +# +SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; +INSERT INTO t_iso VALUES (4, 40); +SELECT * FROM t_iso WHERE id = 4; +id val +4 40 +SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; +UPDATE t_iso SET val = 41 WHERE id = 4; +SELECT * 
FROM t_iso WHERE id = 4; +id val +4 41 +SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; +UPDATE t_iso SET val = 42 WHERE id = 4; +SELECT * FROM t_iso WHERE id = 4; +id val +4 42 +SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE; +DELETE FROM t_iso WHERE id = 4; +SELECT * FROM t_iso ORDER BY id; +id val +1 10 +2 20 +3 30 +# Reset to default +SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; +DROP TABLE t_iso; +# +# ============================================ +# TEST 4: SNAPSHOT isolation via table option +# (table uses ISOLATION_LEVEL=SNAPSHOT, session +# at REPEATABLE READ should activate SNAPSHOT) +# ============================================ +# +CREATE TABLE t_snap ( +id INT NOT NULL PRIMARY KEY, +val INT +) ENGINE=TIDESDB ISOLATION_LEVEL='SNAPSHOT'; +INSERT INTO t_snap VALUES (1, 100); +SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; +BEGIN; +SELECT * FROM t_snap ORDER BY id; +id val +1 100 +INSERT INTO t_snap VALUES (2, 200); +SELECT * FROM t_snap ORDER BY id; +id val +1 100 +2 200 +COMMIT; +DROP TABLE t_snap; +# +# Done. 
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_online_ddl.result b/mysql-test/suite/tidesdb/r/tidesdb_online_ddl.result index 2f1ad8b0..58025206 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_online_ddl.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_online_ddl.result @@ -31,13 +31,13 @@ t_ddl CREATE TABLE `t_ddl` ( `b_name` varchar(100) DEFAULT NULL, `c` int(11) DEFAULT 999, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `SYNC_MODE`='NONE' +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `SYNC_MODE`='NONE' # ---- INPLACE: add secondary index ---- ALTER TABLE t_ddl ADD INDEX idx_a (a), ALGORITHM=INPLACE; SHOW INDEX FROM t_ddl; Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored -t_ddl 0 PRIMARY 1 id A 6 NULL NULL BTREE NO -t_ddl 1 idx_a 1 a A 6 NULL NULL YES BTREE NO +t_ddl 0 PRIMARY 1 id A 6 NULL NULL LSM NO +t_ddl 1 idx_a 1 a A 6 NULL NULL YES LSM NO # Verify index is usable SELECT id, a FROM t_ddl WHERE a = 10 ORDER BY id; id a @@ -52,9 +52,9 @@ id a ALTER TABLE t_ddl ADD INDEX idx_c (c), ALGORITHM=INPLACE; SHOW INDEX FROM t_ddl; Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored -t_ddl 0 PRIMARY 1 id A 6 NULL NULL BTREE NO -t_ddl 1 idx_a 1 a A 6 NULL NULL YES BTREE NO -t_ddl 1 idx_c 1 c A 6 NULL NULL YES BTREE NO +t_ddl 0 PRIMARY 1 id A 6 NULL NULL LSM NO +t_ddl 1 idx_a 1 a A 6 NULL NULL YES LSM NO +t_ddl 1 idx_c 1 c A 6 NULL NULL YES LSM NO EXPLAIN SELECT id, c FROM t_ddl WHERE c = 200; id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE t_ddl ref idx_c idx_c 5 const 1 Using index @@ -65,8 +65,8 @@ id c ALTER TABLE t_ddl DROP INDEX idx_a, ALGORITHM=INPLACE; SHOW INDEX FROM t_ddl; Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment 
Ignored -t_ddl 0 PRIMARY 1 id A 6 NULL NULL BTREE NO -t_ddl 1 idx_c 1 c A 6 NULL NULL YES BTREE NO +t_ddl 0 PRIMARY 1 id A 6 NULL NULL LSM NO +t_ddl 1 idx_c 1 c A 6 NULL NULL YES LSM NO # Verify remaining index still works SELECT id, c FROM t_ddl WHERE c = 300; id c @@ -75,27 +75,49 @@ id c ALTER TABLE t_ddl ADD INDEX idx_a2 (a), DROP INDEX idx_c, ALGORITHM=INPLACE; SHOW INDEX FROM t_ddl; Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored -t_ddl 0 PRIMARY 1 id A 6 NULL NULL BTREE NO -t_ddl 1 idx_a2 1 a A 6 NULL NULL YES BTREE NO +t_ddl 0 PRIMARY 1 id A 6 NULL NULL LSM NO +t_ddl 1 idx_a2 1 a A 6 NULL NULL YES LSM NO EXPLAIN SELECT id, a FROM t_ddl WHERE a = 20; id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE t_ddl ref idx_a2 idx_a2 5 const 1 Using index SELECT id, a FROM t_ddl WHERE a = 20; id a 2 20 -# ---- COPY fallback: add column ---- -ALTER TABLE t_ddl ADD COLUMN d INT DEFAULT 0; +# ---- INSTANT: add column (NOT NULL DEFAULT) ---- +ALTER TABLE t_ddl ADD COLUMN d INT NOT NULL DEFAULT 0, ALGORITHM=INSTANT; SELECT id, d FROM t_ddl WHERE id = 1; id d 1 0 -# ---- COPY fallback: drop column ---- -ALTER TABLE t_ddl DROP COLUMN d; +# ---- Verify old rows readable after ADD COLUMN ---- +SELECT id, a, b_name, c, d FROM t_ddl ORDER BY id; +id a b_name c d +1 10 alpha 100 0 +2 20 beta 200 0 +3 30 gamma 300 0 +4 10 delta 400 0 +5 50 epsilon 500 0 +6 60 zeta 999 0 +# ---- Insert with new schema and verify ---- +INSERT INTO t_ddl VALUES (7, 70, 'eta', 700, 42); +SELECT id, d FROM t_ddl WHERE id IN (1, 7) ORDER BY id; +id d +1 0 +7 42 +# ---- INSTANT: drop column ---- +ALTER TABLE t_ddl DROP COLUMN d, ALGORITHM=INSTANT; SELECT * FROM t_ddl WHERE id = 1; id a b_name c 1 10 alpha 100 -# ---- Verify ALGORITHM=INPLACE rejected for column changes ---- -ALTER TABLE t_ddl ADD COLUMN e INT, ALGORITHM=INPLACE; -ERROR 0A000: ALGORITHM=INPLACE is not supported for this 
operation. Try ALGORITHM=COPY +# ---- Verify all rows readable after DROP COLUMN ---- +SELECT id, a, b_name, c FROM t_ddl ORDER BY id; +id a b_name c +1 10 alpha 100 +2 20 beta 200 +3 30 gamma 300 +4 10 delta 400 +5 50 epsilon 500 +6 60 zeta 999 +7 70 eta 700 # ---- Cleanup ---- DROP TABLE t_ddl; # ---- Test with data and hidden PK (no explicit PK) ---- diff --git a/mysql-test/suite/tidesdb/r/tidesdb_options.result b/mysql-test/suite/tidesdb/r/tidesdb_options.result index 94cb2063..d0edefe7 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_options.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_options.result @@ -35,7 +35,7 @@ Table Create Table t_defaults CREATE TABLE `t_defaults` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci INSERT INTO t_defaults VALUES (1, 'default_opts'); SELECT * FROM t_defaults; id val @@ -52,7 +52,7 @@ Table Create Table t_none CREATE TABLE `t_none` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `COMPRESSION`='NONE' +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `COMPRESSION`='NONE' INSERT INTO t_none VALUES (1, 'no compression'); SELECT * FROM t_none; id val @@ -64,7 +64,7 @@ Table Create Table t_zstd CREATE TABLE `t_zstd` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `COMPRESSION`='ZSTD' +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `COMPRESSION`='ZSTD' INSERT INTO t_zstd VALUES (1, 'zstd compressed'); SELECT * FROM t_zstd; id val @@ -81,7 +81,7 @@ Table Create Table t_nobloom CREATE TABLE `t_nobloom` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `BLOOM_FILTER`=0 +) ENGINE=TIDESDB DEFAULT 
CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `BLOOM_FILTER`=0 INSERT INTO t_nobloom VALUES (1, 'no bloom'); SELECT * FROM t_nobloom; id val @@ -93,7 +93,7 @@ Table Create Table t_lowfpr CREATE TABLE `t_lowfpr` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `BLOOM_FPR`=10 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `BLOOM_FPR`=10 INSERT INTO t_lowfpr VALUES (1, 'low fpr 0.1%'); SELECT * FROM t_lowfpr; id val @@ -110,7 +110,7 @@ Table Create Table t_bigbuf CREATE TABLE `t_bigbuf` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `WRITE_BUFFER_SIZE`=16777216 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `WRITE_BUFFER_SIZE`=16777216 INSERT INTO t_bigbuf VALUES (1, '16MB write buffer'); SELECT * FROM t_bigbuf; id val @@ -126,7 +126,7 @@ SHOW CREATE TABLE t_syncnone; Table Create Table t_syncnone CREATE TABLE `t_syncnone` ( `id` int(11) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `SYNC_MODE`='NONE' +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `SYNC_MODE`='NONE' INSERT INTO t_syncnone VALUES (1); SELECT * FROM t_syncnone; id @@ -137,7 +137,7 @@ SHOW CREATE TABLE t_syncint; Table Create Table t_syncint CREATE TABLE `t_syncint` ( `id` int(11) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `SYNC_MODE`='INTERVAL' `SYNC_INTERVAL_US`=500000 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `SYNC_MODE`='INTERVAL' `SYNC_INTERVAL_US`=500000 INSERT INTO t_syncint VALUES (1); SELECT * FROM t_syncint; id @@ -154,7 +154,7 @@ Table Create Table t_rc CREATE TABLE `t_rc` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `ISOLATION_LEVEL`='READ_COMMITTED' +) ENGINE=TIDESDB 
DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `ISOLATION_LEVEL`='READ_COMMITTED' INSERT INTO t_rc VALUES (1, 'read committed'); SELECT * FROM t_rc; id val @@ -166,7 +166,7 @@ Table Create Table t_ser CREATE TABLE `t_ser` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `ISOLATION_LEVEL`='SERIALIZABLE' +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `ISOLATION_LEVEL`='SERIALIZABLE' INSERT INTO t_ser VALUES (1, 'serializable'); SELECT * FROM t_ser; id val @@ -183,7 +183,7 @@ Table Create Table t_btree CREATE TABLE `t_btree` ( `id` int(11) DEFAULT NULL, `val` varchar(50) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `USE_BTREE`=1 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `USE_BTREE`=1 INSERT INTO t_btree VALUES (1, 'btree format'); SELECT * FROM t_btree; id val @@ -214,7 +214,7 @@ Table Create Table t_multi CREATE TABLE `t_multi` ( `id` int(11) DEFAULT NULL, `val` varchar(100) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `COMPRESSION`='ZSTD' `WRITE_BUFFER_SIZE`=8388608 `BLOOM_FILTER`=1 `BLOOM_FPR`=50 `BLOCK_INDEXES`=1 `SYNC_MODE`='FULL' `ISOLATION_LEVEL`='REPEATABLE_READ' `LEVEL_SIZE_RATIO`=8 `MIN_LEVELS`=3 `SKIP_LIST_MAX_LEVEL`=16 `SKIP_LIST_PROBABILITY`=50 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `COMPRESSION`='ZSTD' `WRITE_BUFFER_SIZE`=8388608 `BLOOM_FILTER`=1 `BLOOM_FPR`=50 `BLOCK_INDEXES`=1 `SYNC_MODE`='FULL' `ISOLATION_LEVEL`='REPEATABLE_READ' `LEVEL_SIZE_RATIO`=8 `MIN_LEVELS`=3 `SKIP_LIST_MAX_LEVEL`=16 `SKIP_LIST_PROBABILITY`=50 INSERT INTO t_multi VALUES (1, 'multi-option table'); INSERT INTO t_multi VALUES (2, 'second row'); SELECT * FROM t_multi; @@ -241,7 +241,7 @@ SHOW CREATE TABLE t_default_iso; Table Create Table t_default_iso CREATE TABLE `t_default_iso` ( `id` int(11) DEFAULT NULL -) ENGINE=TIDESDB DEFAULT 
CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci INSERT INTO t_default_iso VALUES (1), (2), (3); SELECT * FROM t_default_iso; id diff --git a/mysql-test/suite/tidesdb/r/tidesdb_partition.result b/mysql-test/suite/tidesdb/r/tidesdb_partition.result index ac7433f9..b0a98709 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_partition.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_partition.result @@ -292,7 +292,7 @@ t_show_part CREATE TABLE `t_show_part` ( `id` int(11) NOT NULL, `val` varchar(50) DEFAULT NULL, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci PARTITION BY HASH (`id`) PARTITIONS 2 DROP TABLE t_show_part; diff --git a/mysql-test/suite/tidesdb/r/tidesdb_per_index_btree.result b/mysql-test/suite/tidesdb/r/tidesdb_per_index_btree.result new file mode 100644 index 00000000..0f80f9bd --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_per_index_btree.result @@ -0,0 +1,42 @@ +# +# TEST 1: Per-index USE_BTREE on secondary index +# +CREATE TABLE t1 ( +id INT NOT NULL PRIMARY KEY, +a INT, +b INT, +KEY idx_a (a) USE_BTREE=1, +KEY idx_b (b) +) ENGINE=TidesDB; +INSERT INTO t1 VALUES (1,10,100),(2,20,200),(3,30,300); +# idx_a should show BTREE, idx_b should show LSM +SHOW KEYS FROM t1; +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored +t1 0 PRIMARY 1 id A 3 NULL NULL LSM NO +t1 1 idx_a 1 a A 3 NULL NULL YES BTREE NO +t1 1 idx_b 1 b A 3 NULL NULL YES LSM NO +SELECT * FROM t1 WHERE a = 20; +id a b +2 20 200 +SELECT * FROM t1 WHERE b = 200; +id a b +2 20 200 +DROP TABLE t1; +# +# TEST 2: Table-level USE_BTREE=1 with per-index override +# +CREATE TABLE t2 ( +id INT NOT NULL PRIMARY KEY, +x INT, +KEY idx_x (x) USE_BTREE=0 +) ENGINE=TidesDB USE_BTREE=1; +# PK and idx_x should both show BTREE (table default), but 
idx_x USE_BTREE=0 +# Note: per-index USE_BTREE=0 does NOT override table-level to LSM -- it just +# means the index itself didn't request BTREE; the table default still applies. +SHOW KEYS FROM t2; +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment Ignored +t2 0 PRIMARY 1 id A 2 NULL NULL BTREE NO +t2 1 idx_x 1 x A 2 NULL NULL YES BTREE NO +DROP TABLE t2; +# +# Done. diff --git a/mysql-test/suite/tidesdb/r/tidesdb_rename.result b/mysql-test/suite/tidesdb/r/tidesdb_rename.result index 503e13a3..35c37d91 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_rename.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_rename.result @@ -77,7 +77,7 @@ t_alter CREATE TABLE `t_alter` ( `id` int(11) NOT NULL, `val` varchar(100) DEFAULT NULL, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ALTER TABLE t_alter SYNC_MODE='NONE'; SHOW CREATE TABLE t_alter; Table Create Table @@ -85,7 +85,7 @@ t_alter CREATE TABLE `t_alter` ( `id` int(11) NOT NULL, `val` varchar(100) DEFAULT NULL, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `SYNC_MODE`='NONE' +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `SYNC_MODE`='NONE' SELECT * FROM t_alter ORDER BY id; id val 1 before @@ -115,16 +115,16 @@ t_schema CREATE TABLE `t_schema` ( `val` varchar(50) DEFAULT NULL, `extra` int(11) DEFAULT 0, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci SELECT * FROM t_schema ORDER BY id; id val extra -1 one 0 -2 two 0 +1 one NULL +2 two NULL INSERT INTO t_schema VALUES (3, 'three', 99); SELECT * FROM t_schema ORDER BY id; id val extra -1 one 0 -2 two 0 +1 one NULL +2 two NULL 3 three 99 DROP TABLE t_schema; # diff --git 
a/mysql-test/suite/tidesdb/r/tidesdb_replace_iodku.result b/mysql-test/suite/tidesdb/r/tidesdb_replace_iodku.result new file mode 100644 index 00000000..cde7c557 --- /dev/null +++ b/mysql-test/suite/tidesdb/r/tidesdb_replace_iodku.result @@ -0,0 +1,200 @@ +# +# ============================================ +# TEST 1: REPLACE INTO — PK only table +# ============================================ +# +CREATE TABLE t_rep ( +id INT NOT NULL PRIMARY KEY, +val VARCHAR(50) +) ENGINE=TIDESDB; +INSERT INTO t_rep VALUES (1, 'one'), (2, 'two'), (3, 'three'); +SELECT * FROM t_rep ORDER BY id; +id val +1 one +2 two +3 three +# REPLACE existing row (id=2) +REPLACE INTO t_rep VALUES (2, 'TWO-replaced'); +SELECT * FROM t_rep ORDER BY id; +id val +1 one +2 TWO-replaced +3 three +# REPLACE non-existing row (id=4) +REPLACE INTO t_rep VALUES (4, 'four-new'); +SELECT * FROM t_rep ORDER BY id; +id val +1 one +2 TWO-replaced +3 three +4 four-new +# REPLACE multiple rows at once +REPLACE INTO t_rep VALUES (1, 'ONE-replaced'), (3, 'THREE-replaced'), (5, 'five-new'); +SELECT * FROM t_rep ORDER BY id; +id val +1 ONE-replaced +2 TWO-replaced +3 THREE-replaced +4 four-new +5 five-new +DROP TABLE t_rep; +# +# ============================================ +# TEST 2: REPLACE INTO — PK + secondary index +# (verifies old secondary index entries are +# properly cleaned up) +# ============================================ +# +CREATE TABLE t_rep_idx ( +id INT NOT NULL PRIMARY KEY, +k INT NOT NULL, +val VARCHAR(50), +KEY k_idx (k) +) ENGINE=TIDESDB; +INSERT INTO t_rep_idx VALUES (1, 100, 'a'), (2, 200, 'b'), (3, 100, 'c'); +# Before REPLACE: k=100 has 2 rows +SELECT * FROM t_rep_idx WHERE k = 100 ORDER BY id; +id k val +1 100 a +3 100 c +# REPLACE id=1, changing k from 100 to 999 +REPLACE INTO t_rep_idx VALUES (1, 999, 'a-replaced'); +SELECT * FROM t_rep_idx ORDER BY id; +id k val +1 999 a-replaced +2 200 b +3 100 c +# After REPLACE: k=100 should have only 1 row (id=3) +SELECT * FROM t_rep_idx WHERE k = 100 
ORDER BY id; +id k val +3 100 c +# k=999 should have 1 row (id=1) +SELECT * FROM t_rep_idx WHERE k = 999; +id k val +1 999 a-replaced +# REPLACE id=3, keeping k=100 +REPLACE INTO t_rep_idx VALUES (3, 100, 'c-replaced'); +SELECT * FROM t_rep_idx WHERE k = 100 ORDER BY id; +id k val +3 100 c-replaced +DROP TABLE t_rep_idx; +# +# ============================================ +# TEST 3: INSERT ON DUPLICATE KEY UPDATE — PK +# ============================================ +# +CREATE TABLE t_iodku ( +id INT NOT NULL PRIMARY KEY, +val INT NOT NULL DEFAULT 0 +) ENGINE=TIDESDB; +INSERT INTO t_iodku VALUES (1, 100), (2, 200), (3, 300); +SELECT * FROM t_iodku ORDER BY id; +id val +1 100 +2 200 +3 300 +# IODKU: duplicate on id=2 => update val +INSERT INTO t_iodku VALUES (2, 0) ON DUPLICATE KEY UPDATE val = val + 1; +SELECT * FROM t_iodku ORDER BY id; +id val +1 100 +2 201 +3 300 +# IODKU: no duplicate on id=4 => insert +INSERT INTO t_iodku VALUES (4, 400) ON DUPLICATE KEY UPDATE val = val + 1; +SELECT * FROM t_iodku ORDER BY id; +id val +1 100 +2 201 +3 300 +4 400 +# IODKU: multiple rows (some dups, some new) +INSERT INTO t_iodku VALUES (1, 0), (5, 500), (3, 0) +ON DUPLICATE KEY UPDATE val = val + 10; +SELECT * FROM t_iodku ORDER BY id; +id val +1 110 +2 201 +3 310 +4 400 +5 500 +DROP TABLE t_iodku; +# +# ============================================ +# TEST 4: IODKU with secondary index +# ============================================ +# +CREATE TABLE t_iodku_idx ( +id INT NOT NULL PRIMARY KEY, +k INT NOT NULL, +val VARCHAR(50), +KEY k_idx (k) +) ENGINE=TIDESDB; +INSERT INTO t_iodku_idx VALUES (1, 10, 'orig-1'), (2, 20, 'orig-2'); +# IODKU duplicate on PK, changes indexed column k +INSERT INTO t_iodku_idx VALUES (1, 99, 'new-1') +ON DUPLICATE KEY UPDATE k = VALUES(k), val = VALUES(val); +SELECT * FROM t_iodku_idx ORDER BY id; +id k val +1 99 new-1 +2 20 orig-2 +# Old k=10 should be gone, k=99 should have id=1 +SELECT * FROM t_iodku_idx WHERE k = 10; +id k val +SELECT * FROM 
t_iodku_idx WHERE k = 99; +id k val +1 99 new-1 +DROP TABLE t_iodku_idx; +# +# ============================================ +# TEST 5: IODKU with unique secondary index +# ============================================ +# +CREATE TABLE t_iodku_uniq ( +id INT NOT NULL PRIMARY KEY, +email VARCHAR(100) NOT NULL, +cnt INT NOT NULL DEFAULT 0, +UNIQUE KEY uk_email (email) +) ENGINE=TIDESDB; +INSERT INTO t_iodku_uniq VALUES (1, 'alice@test.com', 1); +INSERT INTO t_iodku_uniq VALUES (2, 'bob@test.com', 1); +# IODKU conflict on unique secondary index (email) +INSERT INTO t_iodku_uniq VALUES (3, 'alice@test.com', 1) +ON DUPLICATE KEY UPDATE cnt = cnt + 1; +SELECT * FROM t_iodku_uniq ORDER BY id; +id email cnt +1 alice@test.com 2 +2 bob@test.com 1 +DROP TABLE t_iodku_uniq; +# +# ============================================ +# TEST 6: REPLACE with AUTO_INCREMENT +# ============================================ +# +CREATE TABLE t_rep_auto ( +id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, +val VARCHAR(50) +) ENGINE=TIDESDB; +INSERT INTO t_rep_auto (val) VALUES ('first'), ('second'), ('third'); +SELECT * FROM t_rep_auto ORDER BY id; +id val +1 first +2 second +3 third +REPLACE INTO t_rep_auto VALUES (2, 'second-replaced'); +SELECT * FROM t_rep_auto ORDER BY id; +id val +1 first +2 second-replaced +3 third +# Next auto_inc should be > 3 +INSERT INTO t_rep_auto (val) VALUES ('fourth'); +SELECT * FROM t_rep_auto ORDER BY id; +id val +1 first +2 second-replaced +3 third +4 fourth +DROP TABLE t_rep_auto; +# +# Done. 
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_ttl.result b/mysql-test/suite/tidesdb/r/tidesdb_ttl.result index dab0936d..dfbddac6 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_ttl.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_ttl.result @@ -118,7 +118,7 @@ t_ttl_show CREATE TABLE `t_ttl_show` ( `val` varchar(50) DEFAULT NULL, `row_ttl` int(11) DEFAULT NULL `TTL`=1, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci `TTL`=3600 +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci `TTL`=3600 DROP TABLE t_ttl_show; # # ============================================ diff --git a/mysql-test/suite/tidesdb/r/tidesdb_vcol.result b/mysql-test/suite/tidesdb/r/tidesdb_vcol.result index f210e4c2..8dff2483 100644 --- a/mysql-test/suite/tidesdb/r/tidesdb_vcol.result +++ b/mysql-test/suite/tidesdb/r/tidesdb_vcol.result @@ -190,7 +190,7 @@ t_vcol_show CREATE TABLE `t_vcol_show` ( `v_sum` int(11) GENERATED ALWAYS AS (`a` + `b`) VIRTUAL, `s_prod` int(11) GENERATED ALWAYS AS (`a` * `b`) STORED, PRIMARY KEY (`id`) -) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_uca1400_ai_ci +) ENGINE=TIDESDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci DROP TABLE t_vcol_show; # # diff --git a/mysql-test/suite/tidesdb/suite.opt b/mysql-test/suite/tidesdb/suite.opt index fd6682af..2ffb1bd7 100644 --- a/mysql-test/suite/tidesdb/suite.opt +++ b/mysql-test/suite/tidesdb/suite.opt @@ -1 +1,3 @@ ---plugin-load-add=$HA_TIDESDB_SO \ No newline at end of file +--plugin-load-add=$HA_TIDESDB_SO +--character-set-server=utf8mb4 +--collation-server=utf8mb4_general_ci \ No newline at end of file diff --git a/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.test b/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.test index e5a67af0..1a12baf7 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --echo # --echo # Issue #70: ALTER 
TABLE sometimes crashes MariaDB --echo # The crash was caused by unbounded read-set growth during diff --git a/mysql-test/suite/tidesdb/t/tidesdb_analyze.test b/mysql-test/suite/tidesdb/t/tidesdb_analyze.test index 3c7bd3f9..5b36aee1 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_analyze.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_analyze.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --echo # --echo # ANALYZE TABLE for TidesDB -- verifies CF stats output --echo # diff --git a/mysql-test/suite/tidesdb/t/tidesdb_backup.test b/mysql-test/suite/tidesdb/t/tidesdb_backup.test index 2fe0441f..1386eadf 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_backup.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_backup.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc diff --git a/mysql-test/suite/tidesdb/t/tidesdb_fk_convert.opt b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.opt similarity index 100% rename from mysql-test/suite/tidesdb/t/tidesdb_fk_convert.opt rename to mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.opt diff --git a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.test b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.test new file mode 100644 index 00000000..6af6141e --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.test @@ -0,0 +1,73 @@ +--source include/have_tidesdb.inc +# +# Issue #77: Conflict detection between concurrent transactions. +# Verifies that the second committer gets ER_LOCK_DEADLOCK when +# two transactions modify the same row. 
+# + +call mtr.add_suppression("TIDESDB:.*TDB_ERR_CONFLICT"); + +--echo # +--echo # Issue #77: Concurrent conflict detection +--echo # + +CREATE TABLE t ( + i INT NOT NULL PRIMARY KEY, + x INT +) ENGINE=TidesDB; + +INSERT INTO t VALUES (1,10),(2,20),(3,30),(4,40),(5,50); + +connect (con1, localhost, root,,); +connect (con2, localhost, root,,); + +--echo # ---- TEST 1: Two UPDATEs on same row ---- +connection con1; +START TRANSACTION; +UPDATE t SET x = 999 WHERE i = 1; + +connection con2; +START TRANSACTION; +UPDATE t SET x = 888 WHERE i = 1; +COMMIT; + +connection con1; +--error ER_LOCK_DEADLOCK,ER_ERROR_DURING_COMMIT +COMMIT; + +connection default; +--echo # con2 wins: x should be 888 +SELECT * FROM t WHERE i = 1; + +--echo # ---- TEST 2: UPDATE vs DELETE on same row ---- +connection con1; +START TRANSACTION; +UPDATE t SET x = 777 WHERE i = 2; + +connection con2; +START TRANSACTION; +DELETE FROM t WHERE i = 2; +COMMIT; + +connection con1; +--error ER_LOCK_DEADLOCK,ER_ERROR_DURING_COMMIT +COMMIT; + +connection default; +--echo # con2 wins: row 2 should be gone +SELECT * FROM t WHERE i = 2; + +--echo # Remaining rows intact +SELECT * FROM t ORDER BY i; + +--echo # Cleanup +connection con1; +disconnect con1; +connection con2; +disconnect con2; +connection default; + +DROP TABLE t; + +--echo # +--echo # Done. 
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.test b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.test index 0b2c806b..0bd35bbb 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # TidesDB concurrent error mapping test # diff --git a/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.test b/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.test index 7b6b9650..c4d4b97c 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --echo # --echo # Issue #64: WITH CONSISTENT SNAPSHOT doesn't work --echo # diff --git a/mysql-test/suite/tidesdb/t/tidesdb_crud.test b/mysql-test/suite/tidesdb/t/tidesdb_crud.test index 99230a23..4f60b735 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_crud.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_crud.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # Test suite for the TIDESDB storage engine. # Exercises every CRUD capability and edge case. 
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt b/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown +--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.test b/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.test new file mode 100644 index 00000000..0dbd9ed9 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.test @@ -0,0 +1,16 @@ +--source include/have_tidesdb.inc +# +# Issue #76: tidesdb_data_home_dir system variable +# + +--echo # +--echo # Verify tidesdb_data_home_dir is visible and read-only +--echo # + +SHOW VARIABLES LIKE 'tidesdb_data_home_dir'; + +--error ER_INCORRECT_GLOBAL_LOCAL_VAR +SET GLOBAL tidesdb_data_home_dir = '/tmp/test'; + +--echo # +--echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_drop_create.test b/mysql-test/suite/tidesdb/t/tidesdb_drop_create.test index ccfbd4bb..f890a029 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_drop_create.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_drop_create.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --echo # --echo # Issue #57: Data survives DROP + CREATE --echo # diff --git a/mysql-test/suite/tidesdb/t/tidesdb_encryption.test b/mysql-test/suite/tidesdb/t/tidesdb_encryption.test index bb4290da..20e0b1a7 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_encryption.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_encryption.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc --source include/have_file_key_management.inc diff --git a/mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown 
+--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test new file mode 100644 index 00000000..b83eed87 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test @@ -0,0 +1,20 @@ +--source include/have_tidesdb.inc +# +# Issue #73: SHOW ENGINE TIDESDB STATUS +# + +--echo # +--echo # SHOW ENGINE TIDESDB STATUS should return output +--echo # + +CREATE TABLE t1 (id INT PRIMARY KEY, val INT) ENGINE=TidesDB; +INSERT INTO t1 VALUES (1,10),(2,20),(3,30); + +# Mask volatile numbers in the output +--replace_regex /[0-9]+/N/ +SHOW ENGINE TIDESDB STATUS; + +DROP TABLE t1; + +--echo # +--echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_fk_convert.test b/mysql-test/suite/tidesdb/t/tidesdb_fk_convert.test deleted file mode 100644 index a16e6a1f..00000000 --- a/mysql-test/suite/tidesdb/t/tidesdb_fk_convert.test +++ /dev/null @@ -1,81 +0,0 @@ ---echo # ---echo # Issue #61: Converting InnoDB tables with foreign keys to TidesDB ---echo # Requires a server-side patch to sql/sql_table.cc that honours ---echo # FOREIGN_KEY_CHECKS=0 during can_switch_engines(). If the patch ---echo # is absent (upstream MariaDB), the test is skipped. 
---echo # - ---source include/have_innodb.inc - ---echo # Create an InnoDB table with a self-referencing foreign key -CREATE TABLE t_fk61 ( - a INT, - b INT NOT NULL, - INDEX idx_a (a), - CONSTRAINT `fk_self` FOREIGN KEY (b) REFERENCES t_fk61 (a) - ON DELETE CASCADE - ON UPDATE RESTRICT -) ENGINE=InnoDB; - -INSERT INTO t_fk61 (a, b) VALUES (1, 1), (2, 1), (3, 2); - -SHOW CREATE TABLE t_fk61; - ---echo # Without FOREIGN_KEY_CHECKS=0, the conversion should fail ---error ER_ROW_IS_REFERENCED -ALTER TABLE t_fk61 ENGINE=TidesDB; - ---echo # With FOREIGN_KEY_CHECKS=0, the conversion should succeed ---echo # (requires server patch; skip if absent) -SET FOREIGN_KEY_CHECKS=0; ---error 0,ER_ROW_IS_REFERENCED -ALTER TABLE t_fk61 ENGINE=TidesDB; -SET FOREIGN_KEY_CHECKS=1; - -if (`SELECT ENGINE != 'TidesDB' FROM information_schema.TABLES WHERE TABLE_NAME='t_fk61'`) -{ - --echo # Server does not have the sql/sql_table.cc FK patch -- skipping rest of test - DROP TABLE t_fk61; - --skip Server lacks sql/sql_table.cc FK conversion patch (issue #61) -} - ---echo # Verify data survived the conversion -SELECT * FROM t_fk61 ORDER BY a; - ---echo # Verify the table is now TidesDB and FKs are gone -SHOW CREATE TABLE t_fk61; - ---echo # Verify we can still do DML -INSERT INTO t_fk61 (a, b) VALUES (4, 99); -SELECT * FROM t_fk61 ORDER BY a; - -DROP TABLE t_fk61; - ---echo # Test with parent-child FK relationship (two tables) -CREATE TABLE t_parent61 ( - id INT PRIMARY KEY -) ENGINE=InnoDB; - -CREATE TABLE t_child61 ( - id INT PRIMARY KEY, - parent_id INT, - CONSTRAINT `fk_parent` FOREIGN KEY (parent_id) REFERENCES t_parent61 (id) -) ENGINE=InnoDB; - -INSERT INTO t_parent61 VALUES (1), (2), (3); -INSERT INTO t_child61 VALUES (10, 1), (20, 2); - ---echo # Convert parent table with FK_CHECKS=0 -SET FOREIGN_KEY_CHECKS=0; -ALTER TABLE t_parent61 ENGINE=TidesDB; -ALTER TABLE t_child61 ENGINE=TidesDB; -SET FOREIGN_KEY_CHECKS=1; - -SELECT * FROM t_parent61 ORDER BY id; -SELECT * FROM t_child61 ORDER 
BY id; - -DROP TABLE t_child61; -DROP TABLE t_parent61; - ---echo # ---echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt b/mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown +--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_index_stats.test b/mysql-test/suite/tidesdb/t/tidesdb_index_stats.test new file mode 100644 index 00000000..8f383a88 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_index_stats.test @@ -0,0 +1,127 @@ +--source include/have_tidesdb.inc +# +# Tests for issue #78 (index_type reporting) and issue #74 (wrong statistics). +# + +--echo # +--echo # ============================================ +--echo # TEST 1: Index type reporting (issue #78) +--echo # LSM tables should show LSM, not BTREE +--echo # ============================================ +--echo # + +CREATE TABLE t_lsm ( + i INT NOT NULL PRIMARY KEY, + y INT, + KEY idx_y (y) +) ENGINE=TIDESDB USE_BTREE=0; + +SHOW KEYS FROM t_lsm; + +DROP TABLE t_lsm; + + +--echo # +--echo # ============================================ +--echo # TEST 2: BTREE tables should show BTREE +--echo # ============================================ +--echo # + +CREATE TABLE t_btree ( + i INT NOT NULL PRIMARY KEY, + y INT, + KEY idx_y (y) +) ENGINE=TIDESDB USE_BTREE=1; + +SHOW KEYS FROM t_btree; + +DROP TABLE t_btree; + + +--echo # +--echo # ============================================ +--echo # TEST 3: Default (USE_BTREE=0) shows LSM +--echo # ============================================ +--echo # + +CREATE TABLE t_default ( + i INT NOT NULL PRIMARY KEY, + y INT, + KEY idx_y (y) +) ENGINE=TIDESDB; + +SHOW KEYS FROM t_default; + +DROP TABLE t_default; + + +--echo # +--echo # ============================================ +--echo # TEST 4: ANALYZE TABLE updates rec_per_key +--echo # for non-unique secondary 
indexes (issue #74) +--echo # ============================================ +--echo # + +CREATE TABLE t_stats ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + k INT NOT NULL, + val VARCHAR(50), + KEY k_idx (k) +) ENGINE=TIDESDB; + +--echo # Insert 200 rows with only 2 distinct values for k +--disable_query_log +let $i = 1; +while ($i <= 200) +{ + eval INSERT INTO t_stats (k, val) VALUES ($i % 2, REPEAT('x', 20)); + inc $i; +} +--enable_query_log + +SELECT COUNT(*) AS total_rows FROM t_stats; + +--echo # Before ANALYZE, optimizer may not estimate well +EXPLAIN SELECT * FROM t_stats WHERE k = 0; + +--replace_regex /total_keys=[0-9]+/total_keys=N/ /data_size=[0-9]+/data_size=N/ /memtable=[0-9]+/memtable=N/ /read_amp=[0-9.]+/read_amp=N/ /cache_hit=[0-9.]+/cache_hit=N/ /avg_key=[0-9.]+/avg_key=N/ /avg_value=[0-9.]+/avg_value=N/ /sstables=[0-9]+/sstables=N/ /size=[0-9]+/size=N/ /keys=[0-9]+/keys=N/ /sampled=[0-9]+/sampled=N/ /distinct=[0-9]+/distinct=N/ /rec_per_key=[0-9]+/rec_per_key=N/ +ANALYZE TABLE t_stats; + +--echo # After ANALYZE, the optimizer should estimate ~100 rows for k=0 +EXPLAIN SELECT * FROM t_stats WHERE k = 0; + +DROP TABLE t_stats; + + +--echo # +--echo # ============================================ +--echo # TEST 5: ANALYZE with highly selective index +--echo # ============================================ +--echo # + +CREATE TABLE t_stats2 ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + code INT NOT NULL, + KEY code_idx (code) +) ENGINE=TIDESDB; + +--disable_query_log +let $i = 1; +while ($i <= 100) +{ + eval INSERT INTO t_stats2 (code) VALUES ($i); + inc $i; +} +--enable_query_log + +--replace_regex /total_keys=[0-9]+/total_keys=N/ /data_size=[0-9]+/data_size=N/ /memtable=[0-9]+/memtable=N/ /read_amp=[0-9.]+/read_amp=N/ /cache_hit=[0-9.]+/cache_hit=N/ /avg_key=[0-9.]+/avg_key=N/ /avg_value=[0-9.]+/avg_value=N/ /sstables=[0-9]+/sstables=N/ /size=[0-9]+/size=N/ /keys=[0-9]+/keys=N/ /sampled=[0-9]+/sampled=N/ /distinct=[0-9]+/distinct=N/ 
/rec_per_key=[0-9]+/rec_per_key=N/ +ANALYZE TABLE t_stats2; + +--echo # With 100 distinct values in 100 rows, rec_per_key should be ~1 +EXPLAIN SELECT * FROM t_stats2 WHERE code = 50; + +DROP TABLE t_stats2; + + +--echo # +--echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_info_schema.test b/mysql-test/suite/tidesdb/t/tidesdb_info_schema.test index fab15bd5..7fb0cd51 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_info_schema.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_info_schema.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # TidesDB information_schema.TABLES size reporting # Verify DATA_LENGTH and INDEX_LENGTH are non-zero after inserts diff --git a/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt b/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown +--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.test b/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.test new file mode 100644 index 00000000..8dcff293 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.test @@ -0,0 +1,54 @@ +--source include/have_tidesdb.inc +# +# Issue #83: INSERT vs INSERT conflict detection. +# Two concurrent transactions inserting the same PK should conflict. +# The second committer should get ER_LOCK_DEADLOCK (TDB_ERR_CONFLICT). +# +# NOTE: This test requires TidesDB library fix for INSERT-INSERT +# conflict detection. If it fails, the library may need updating. 
+# + +call mtr.add_suppression("TIDESDB:.*TDB_ERR_CONFLICT"); + +--echo # +--echo # Issue #83: INSERT vs INSERT conflict detection +--echo # + +CREATE TABLE t ( + a INT NOT NULL PRIMARY KEY, + b INT +) ENGINE=TidesDB; + +connect (con1, localhost, root,,); +connect (con2, localhost, root,,); + +--echo # ---- TEST: Two INSERTs with same PK ---- +connection con1; +START TRANSACTION; +INSERT INTO t VALUES (1, 10); + +connection con2; +START TRANSACTION; +INSERT INTO t VALUES (1, 500); +COMMIT; + +connection con1; +--echo # con1 should get conflict error -- con2 committed first +--error ER_LOCK_DEADLOCK,ER_ERROR_DURING_COMMIT +COMMIT; + +connection default; +--echo # con2 wins: b should be 500 +SELECT * FROM t; + +--echo # Cleanup +connection con1; +disconnect con1; +connection con2; +disconnect con2; +connection default; + +DROP TABLE t; + +--echo # +--echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_isolation.opt b/mysql-test/suite/tidesdb/t/tidesdb_isolation.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_isolation.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown +--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_isolation.test b/mysql-test/suite/tidesdb/t/tidesdb_isolation.test new file mode 100644 index 00000000..f9f885b0 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_isolation.test @@ -0,0 +1,125 @@ +--source include/have_tidesdb.inc +# +# Tests for session-level isolation level mapping. +# Verifies that SET TRANSACTION ISOLATION LEVEL is properly +# respected by the TidesDB engine (resolve_effective_isolation). 
+# + +--echo # +--echo # ============================================ +--echo # TEST 1: READ COMMITTED — sees committed data +--echo # ============================================ +--echo # + +CREATE TABLE t_iso ( + id INT NOT NULL PRIMARY KEY, + val INT +) ENGINE=TIDESDB; + +INSERT INTO t_iso VALUES (1, 10); + +connect (con1, localhost, root,,); +connection con1; +SET TRANSACTION ISOLATION LEVEL READ COMMITTED; +BEGIN; +SELECT * FROM t_iso ORDER BY id; + +connection default; +INSERT INTO t_iso VALUES (2, 20); + +--echo # con1 at READ COMMITTED should see newly committed row +connection con1; +SELECT * FROM t_iso ORDER BY id; +COMMIT; + +disconnect con1; +connection default; + + +--echo # +--echo # ============================================ +--echo # TEST 2: REPEATABLE READ — snapshot isolation +--echo # ============================================ +--echo # + +connect (con2, localhost, root,,); +connection con2; +SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; +BEGIN; +SELECT * FROM t_iso ORDER BY id; + +connection default; +INSERT INTO t_iso VALUES (3, 30); + +--echo # con2 at REPEATABLE READ should NOT see row 3 +connection con2; +SELECT * FROM t_iso ORDER BY id; +COMMIT; + +--echo # After COMMIT, new transaction should see row 3 +SELECT * FROM t_iso ORDER BY id; + +disconnect con2; +connection default; + + +--echo # +--echo # ============================================ +--echo # TEST 3: Basic DML at each isolation level +--echo # (verifies the mapping doesn't crash) +--echo # ============================================ +--echo # + +SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; +INSERT INTO t_iso VALUES (4, 40); +SELECT * FROM t_iso WHERE id = 4; + +SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; +UPDATE t_iso SET val = 41 WHERE id = 4; +SELECT * FROM t_iso WHERE id = 4; + +SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; +UPDATE t_iso SET val = 42 WHERE id = 4; +SELECT * FROM t_iso WHERE id = 4; + +SET SESSION TRANSACTION 
ISOLATION LEVEL SERIALIZABLE; +DELETE FROM t_iso WHERE id = 4; +SELECT * FROM t_iso ORDER BY id; + +--echo # Reset to default +SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; + +DROP TABLE t_iso; + + +--echo # +--echo # ============================================ +--echo # TEST 4: SNAPSHOT isolation via table option +--echo # (table uses ISOLATION_LEVEL=SNAPSHOT, session +--echo # at REPEATABLE READ should activate SNAPSHOT) +--echo # ============================================ +--echo # + +CREATE TABLE t_snap ( + id INT NOT NULL PRIMARY KEY, + val INT +) ENGINE=TIDESDB ISOLATION_LEVEL='SNAPSHOT'; + +INSERT INTO t_snap VALUES (1, 100); + +SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; +BEGIN; +SELECT * FROM t_snap ORDER BY id; + +# Insert from same connection (different statement in same txn) +# The BEGIN already took a snapshot, so this tests +# that writes within the txn are visible to reads +INSERT INTO t_snap VALUES (2, 200); +SELECT * FROM t_snap ORDER BY id; +COMMIT; + +DROP TABLE t_snap; + + +--echo # +--echo # Done. 
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_json.test b/mysql-test/suite/tidesdb/t/tidesdb_json.test index 59aa41aa..02d0a9a2 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_json.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_json.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc --echo # diff --git a/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.test b/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.test index 2c12d966..bd46800c 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # TidesDB Online DDL tests # Tests INSTANT, INPLACE (add/drop index), and COPY fallback @@ -57,17 +58,23 @@ SHOW INDEX FROM t_ddl; EXPLAIN SELECT id, a FROM t_ddl WHERE a = 20; SELECT id, a FROM t_ddl WHERE a = 20; ---echo # ---- COPY fallback: add column ---- -ALTER TABLE t_ddl ADD COLUMN d INT DEFAULT 0; +--echo # ---- INSTANT: add column (NOT NULL DEFAULT) ---- +ALTER TABLE t_ddl ADD COLUMN d INT NOT NULL DEFAULT 0, ALGORITHM=INSTANT; SELECT id, d FROM t_ddl WHERE id = 1; ---echo # ---- COPY fallback: drop column ---- -ALTER TABLE t_ddl DROP COLUMN d; +--echo # ---- Verify old rows readable after ADD COLUMN ---- +SELECT id, a, b_name, c, d FROM t_ddl ORDER BY id; + +--echo # ---- Insert with new schema and verify ---- +INSERT INTO t_ddl VALUES (7, 70, 'eta', 700, 42); +SELECT id, d FROM t_ddl WHERE id IN (1, 7) ORDER BY id; + +--echo # ---- INSTANT: drop column ---- +ALTER TABLE t_ddl DROP COLUMN d, ALGORITHM=INSTANT; SELECT * FROM t_ddl WHERE id = 1; ---echo # ---- Verify ALGORITHM=INPLACE rejected for column changes ---- ---error ER_ALTER_OPERATION_NOT_SUPPORTED -ALTER TABLE t_ddl ADD COLUMN e INT, ALGORITHM=INPLACE; +--echo # ---- Verify all rows readable after DROP COLUMN ---- +SELECT id, a, b_name, c FROM t_ddl ORDER BY id; --echo # ---- Cleanup ---- DROP TABLE t_ddl; diff --git 
a/mysql-test/suite/tidesdb/t/tidesdb_options.test b/mysql-test/suite/tidesdb/t/tidesdb_options.test index f2dd3603..77110063 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_options.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_options.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # Test suite for TIDESDB storage engine options. # Exercises system variables and per-table CREATE TABLE options. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_partition.test b/mysql-test/suite/tidesdb/t/tidesdb_partition.test index 164ce015..ff56755c 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_partition.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_partition.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc --source include/have_partition.inc diff --git a/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt b/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown +--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.test b/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.test new file mode 100644 index 00000000..a3183ac7 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.test @@ -0,0 +1,46 @@ +--source include/have_tidesdb.inc +# +# Issue #79: Per-index USE_BTREE option +# + +--echo # +--echo # TEST 1: Per-index USE_BTREE on secondary index +--echo # + +CREATE TABLE t1 ( + id INT NOT NULL PRIMARY KEY, + a INT, + b INT, + KEY idx_a (a) USE_BTREE=1, + KEY idx_b (b) +) ENGINE=TidesDB; + +INSERT INTO t1 VALUES (1,10,100),(2,20,200),(3,30,300); + +--echo # idx_a should show BTREE, idx_b should show LSM +SHOW KEYS FROM t1; + +SELECT * FROM t1 WHERE a = 20; +SELECT * FROM t1 WHERE b = 200; + +DROP TABLE t1; + +--echo # +--echo # TEST 2: Table-level USE_BTREE=1 with per-index override +--echo # + +CREATE TABLE t2 ( 
+ id INT NOT NULL PRIMARY KEY, + x INT, + KEY idx_x (x) USE_BTREE=0 +) ENGINE=TidesDB USE_BTREE=1; + +--echo # PK and idx_x should both show BTREE (table default), but idx_x USE_BTREE=0 +--echo # Note: per-index USE_BTREE=0 does NOT override table-level to LSM -- it just +--echo # means the index itself didn't request BTREE; the table default still applies. +SHOW KEYS FROM t2; + +DROP TABLE t2; + +--echo # +--echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_pk_index.test b/mysql-test/suite/tidesdb/t/tidesdb_pk_index.test index 24002d84..d416823b 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_pk_index.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_pk_index.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --disable_warnings DROP TABLE IF EXISTS t_pk, t_autoinc, t_secidx, t_combined; --enable_warnings diff --git a/mysql-test/suite/tidesdb/t/tidesdb_rename.test b/mysql-test/suite/tidesdb/t/tidesdb_rename.test index 8e9e74b4..39893c5a 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_rename.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_rename.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # Test suite for TIDESDB rename_table functionality. # Covers: RENAME TABLE, ALTER TABLE (table copy), ALTER TABLE with option changes, diff --git a/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt b/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt new file mode 100644 index 00000000..2f9ea4e3 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt @@ -0,0 +1,2 @@ +--plugin-maturity=unknown +--plugin-load-add=ha_tidesdb diff --git a/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.test b/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.test new file mode 100644 index 00000000..0984ca88 --- /dev/null +++ b/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.test @@ -0,0 +1,177 @@ +--source include/have_tidesdb.inc +# +# Tests for REPLACE INTO and INSERT ON DUPLICATE KEY UPDATE. 
+# These exercise the dup_ref / HA_ERR_FOUND_DUPP_KEY path in the handler. +# + +--echo # +--echo # ============================================ +--echo # TEST 1: REPLACE INTO — PK only table +--echo # ============================================ +--echo # + +CREATE TABLE t_rep ( + id INT NOT NULL PRIMARY KEY, + val VARCHAR(50) +) ENGINE=TIDESDB; + +INSERT INTO t_rep VALUES (1, 'one'), (2, 'two'), (3, 'three'); +SELECT * FROM t_rep ORDER BY id; + +--echo # REPLACE existing row (id=2) +REPLACE INTO t_rep VALUES (2, 'TWO-replaced'); +SELECT * FROM t_rep ORDER BY id; + +--echo # REPLACE non-existing row (id=4) +REPLACE INTO t_rep VALUES (4, 'four-new'); +SELECT * FROM t_rep ORDER BY id; + +--echo # REPLACE multiple rows at once +REPLACE INTO t_rep VALUES (1, 'ONE-replaced'), (3, 'THREE-replaced'), (5, 'five-new'); +SELECT * FROM t_rep ORDER BY id; + +DROP TABLE t_rep; + + +--echo # +--echo # ============================================ +--echo # TEST 2: REPLACE INTO — PK + secondary index +--echo # (verifies old secondary index entries are +--echo # properly cleaned up) +--echo # ============================================ +--echo # + +CREATE TABLE t_rep_idx ( + id INT NOT NULL PRIMARY KEY, + k INT NOT NULL, + val VARCHAR(50), + KEY k_idx (k) +) ENGINE=TIDESDB; + +INSERT INTO t_rep_idx VALUES (1, 100, 'a'), (2, 200, 'b'), (3, 100, 'c'); + +--echo # Before REPLACE: k=100 has 2 rows +SELECT * FROM t_rep_idx WHERE k = 100 ORDER BY id; + +--echo # REPLACE id=1, changing k from 100 to 999 +REPLACE INTO t_rep_idx VALUES (1, 999, 'a-replaced'); +SELECT * FROM t_rep_idx ORDER BY id; + +--echo # After REPLACE: k=100 should have only 1 row (id=3) +SELECT * FROM t_rep_idx WHERE k = 100 ORDER BY id; +--echo # k=999 should have 1 row (id=1) +SELECT * FROM t_rep_idx WHERE k = 999; + +--echo # REPLACE id=3, keeping k=100 +REPLACE INTO t_rep_idx VALUES (3, 100, 'c-replaced'); +SELECT * FROM t_rep_idx WHERE k = 100 ORDER BY id; + +DROP TABLE t_rep_idx; + + +--echo # +--echo # 
============================================ +--echo # TEST 3: INSERT ON DUPLICATE KEY UPDATE — PK +--echo # ============================================ +--echo # + +CREATE TABLE t_iodku ( + id INT NOT NULL PRIMARY KEY, + val INT NOT NULL DEFAULT 0 +) ENGINE=TIDESDB; + +INSERT INTO t_iodku VALUES (1, 100), (2, 200), (3, 300); +SELECT * FROM t_iodku ORDER BY id; + +--echo # IODKU: duplicate on id=2 => update val +INSERT INTO t_iodku VALUES (2, 0) ON DUPLICATE KEY UPDATE val = val + 1; +SELECT * FROM t_iodku ORDER BY id; + +--echo # IODKU: no duplicate on id=4 => insert +INSERT INTO t_iodku VALUES (4, 400) ON DUPLICATE KEY UPDATE val = val + 1; +SELECT * FROM t_iodku ORDER BY id; + +--echo # IODKU: multiple rows (some dups, some new) +INSERT INTO t_iodku VALUES (1, 0), (5, 500), (3, 0) + ON DUPLICATE KEY UPDATE val = val + 10; +SELECT * FROM t_iodku ORDER BY id; + +DROP TABLE t_iodku; + + +--echo # +--echo # ============================================ +--echo # TEST 4: IODKU with secondary index +--echo # ============================================ +--echo # + +CREATE TABLE t_iodku_idx ( + id INT NOT NULL PRIMARY KEY, + k INT NOT NULL, + val VARCHAR(50), + KEY k_idx (k) +) ENGINE=TIDESDB; + +INSERT INTO t_iodku_idx VALUES (1, 10, 'orig-1'), (2, 20, 'orig-2'); + +--echo # IODKU duplicate on PK, changes indexed column k +INSERT INTO t_iodku_idx VALUES (1, 99, 'new-1') + ON DUPLICATE KEY UPDATE k = VALUES(k), val = VALUES(val); +SELECT * FROM t_iodku_idx ORDER BY id; +--echo # Old k=10 should be gone, k=99 should have id=1 +SELECT * FROM t_iodku_idx WHERE k = 10; +SELECT * FROM t_iodku_idx WHERE k = 99; + +DROP TABLE t_iodku_idx; + + +--echo # +--echo # ============================================ +--echo # TEST 5: IODKU with unique secondary index +--echo # ============================================ +--echo # + +CREATE TABLE t_iodku_uniq ( + id INT NOT NULL PRIMARY KEY, + email VARCHAR(100) NOT NULL, + cnt INT NOT NULL DEFAULT 0, + UNIQUE KEY uk_email (email) +) 
ENGINE=TIDESDB; + +INSERT INTO t_iodku_uniq VALUES (1, 'alice@test.com', 1); +INSERT INTO t_iodku_uniq VALUES (2, 'bob@test.com', 1); + +--echo # IODKU conflict on unique secondary index (email) +INSERT INTO t_iodku_uniq VALUES (3, 'alice@test.com', 1) + ON DUPLICATE KEY UPDATE cnt = cnt + 1; +SELECT * FROM t_iodku_uniq ORDER BY id; + +DROP TABLE t_iodku_uniq; + + +--echo # +--echo # ============================================ +--echo # TEST 6: REPLACE with AUTO_INCREMENT +--echo # ============================================ +--echo # + +CREATE TABLE t_rep_auto ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + val VARCHAR(50) +) ENGINE=TIDESDB; + +INSERT INTO t_rep_auto (val) VALUES ('first'), ('second'), ('third'); +SELECT * FROM t_rep_auto ORDER BY id; + +REPLACE INTO t_rep_auto VALUES (2, 'second-replaced'); +SELECT * FROM t_rep_auto ORDER BY id; + +--echo # Next auto_inc should be > 3 +INSERT INTO t_rep_auto (val) VALUES ('fourth'); +SELECT * FROM t_rep_auto ORDER BY id; + +DROP TABLE t_rep_auto; + + +--echo # +--echo # Done. diff --git a/mysql-test/suite/tidesdb/t/tidesdb_savepoint.test b/mysql-test/suite/tidesdb/t/tidesdb_savepoint.test index cde24b47..a0cea275 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_savepoint.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_savepoint.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc --echo # diff --git a/mysql-test/suite/tidesdb/t/tidesdb_sql.test b/mysql-test/suite/tidesdb/t/tidesdb_sql.test index 097ed951..55865a54 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_sql.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_sql.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # Comprehensive SQL coverage test for the TIDESDB storage engine. 
# Exercises aggregates, joins, subqueries, GROUP BY, HAVING, UNION, diff --git a/mysql-test/suite/tidesdb/t/tidesdb_stress.test b/mysql-test/suite/tidesdb/t/tidesdb_stress.test index 42b3b86e..7db00bdf 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_stress.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_stress.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # TidesDB stress test -- concurrent operations, transaction paths, iterator # reuse, rollback, TRUNCATE races, secondary index maintenance, and large diff --git a/mysql-test/suite/tidesdb/t/tidesdb_ttl.test b/mysql-test/suite/tidesdb/t/tidesdb_ttl.test index 64914268..21fe8cc7 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_ttl.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_ttl.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc diff --git a/mysql-test/suite/tidesdb/t/tidesdb_vcol.test b/mysql-test/suite/tidesdb/t/tidesdb_vcol.test index a74e691e..1b3d33fb 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_vcol.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_vcol.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc --source include/not_embedded.inc diff --git a/mysql-test/suite/tidesdb/t/tidesdb_write_pressure.test b/mysql-test/suite/tidesdb/t/tidesdb_write_pressure.test index 1b124f51..bff1f40a 100644 --- a/mysql-test/suite/tidesdb/t/tidesdb_write_pressure.test +++ b/mysql-test/suite/tidesdb/t/tidesdb_write_pressure.test @@ -1,3 +1,4 @@ +--source include/have_tidesdb.inc # # TidesDB write-pressure stress test # diff --git a/tidesdb/ha_tidesdb.cc b/tidesdb/ha_tidesdb.cc index 32987c10..b5ab54e4 100644 --- a/tidesdb/ha_tidesdb.cc +++ b/tidesdb/ha_tidesdb.cc @@ -19,31 +19,15 @@ #include #include -#include +#include #include +#include #include #include "key.h" #include "sql_class.h" #include "sql_priv.h" -/* - Lightweight trace macro. The srv_debug_trace check compiles to a single - branch on a static bool -- essentially free when disabled. 
- When enabled, logs function + message + elapsed microseconds. -*/ -#define TDB_TRACE(fmt, ...) \ - do \ - { \ - if (unlikely(srv_debug_trace)) \ - sql_print_information("TDB_TRACE %s " fmt, __func__, ##__VA_ARGS__); \ - } while (0) - -static inline long long tdb_now_us() -{ - return (long long)microsecond_interval_timer(); -} - /* MariaDB 12.3 moved option_struct from TABLE_SHARE to TABLE (MDEV-37815). We provide a compat macro so the same code compiles on 11.x / 12.0-12.2 / 12.3+. */ #if MYSQL_VERSION_ID >= 120300 @@ -52,9 +36,10 @@ static inline long long tdb_now_us() #define TDB_TABLE_OPTIONS(tbl) ((tbl)->s->option_struct) #endif -/* Declared early so tdb_rc_to_ha() can reference it; default 0, - toggled at runtime via SET GLOBAL tidesdb_debug_trace. */ -static my_bool srv_debug_trace = 0; +/* Forward-declared for tdb_rc_to_ha(); defined with sysvars below */ +static my_bool srv_print_all_conflicts = 0; +static mysql_mutex_t last_conflict_mutex; +static char last_conflict_info[1024] = ""; /* Map TidesDB library error codes to MariaDB handler error codes. @@ -73,20 +58,21 @@ static int tdb_rc_to_ha(int rc, const char *ctx) so MariaDB retries automatically. Only log under debug_trace to avoid flooding the error log at high concurrency. 
*/ case TDB_ERR_CONFLICT: - if (unlikely(srv_debug_trace)) - sql_print_information("TIDESDB: %s: TDB_ERR_CONFLICT (-7), mapped to deadlock", - ctx); + if (unlikely(srv_print_all_conflicts)) + { + sql_print_warning( + "TIDESDB CONFLICT: %s: transaction aborted due to write-write " + "conflict (TDB_ERR_CONFLICT)", + ctx); + mysql_mutex_lock(&last_conflict_mutex); + snprintf(last_conflict_info, sizeof(last_conflict_info), "Last conflict: %s at %ld", + ctx, (long)time(NULL)); + mysql_mutex_unlock(&last_conflict_mutex); + } return HA_ERR_LOCK_DEADLOCK; case TDB_ERR_LOCKED: - if (unlikely(srv_debug_trace)) - sql_print_information( - "TIDESDB: %s: TDB_ERR_LOCKED (-12), mapped to deadlock (backpressure)", ctx); return HA_ERR_LOCK_DEADLOCK; case TDB_ERR_MEMORY_LIMIT: - if (unlikely(srv_debug_trace)) - sql_print_information( - "TIDESDB: %s: TDB_ERR_MEMORY_LIMIT (-9), mapped to deadlock (memory pressure)", - ctx); return HA_ERR_LOCK_DEADLOCK; case TDB_ERR_MEMORY: sql_print_error("TIDESDB: %s: TDB_ERR_MEMORY (-1)", ctx); @@ -101,17 +87,6 @@ static int tdb_rc_to_ha(int rc, const char *ctx) } } -/* Hex-dump helper for trace logging (up to 32 bytes) */ -static inline void tdb_hex(const uchar *data, uint len, char *out, uint out_sz) -{ - uint p = 0; - uint lim = len > 32 ? 
32 : len; - for (uint i = 0; i < lim && p + 4 < out_sz; i++) - p += snprintf(out + p, out_sz - p, "%02X ", data[i]); - if (len > 32 && p + 4 < out_sz) p += snprintf(out + p, out_sz - p, "..."); - if (p > 0 && out[p - 1] == ' ') out[p - 1] = '\0'; -} - /* MariaDB data directory */ extern MYSQL_PLUGIN_IMPORT char mysql_real_data_home[]; @@ -131,7 +106,6 @@ static const char *ha_tidesdb_exts[] = {NullS}; static ulong srv_flush_threads = 4; static ulong srv_compaction_threads = 4; static ulong srv_log_level = 0; /* TDB_LOG_DEBUG */ -/* srv_debug_trace declared earlier for tdb_rc_to_ha() */ /* per-op trace logging */ static ulonglong srv_block_cache_size = TIDESDB_DEFAULT_BLOCK_CACHE; /* 256MB */ static ulong srv_max_open_sstables = 256; static ulonglong srv_max_memory_usage = 0; /* 0 = auto (library decides) */ @@ -144,14 +118,46 @@ static MYSQL_THDVAR_ULONGLONG(ttl, PLUGIN_VAR_RQCMDARG, "SET STATEMENT tidesdb_ttl=N FOR INSERT", NULL, NULL, 0, 0, ULONGLONG_MAX, 0); +/* Per-session skip unique check (for bulk loads where PK duplicates + are known impossible). Same pattern as MyRocks rocksdb_skip_unique_check. */ +static MYSQL_THDVAR_BOOL(skip_unique_check, PLUGIN_VAR_RQCMDARG, + "Skip uniqueness check on primary key and unique secondary indexes " + "during INSERT. Only safe when the application guarantees no " + "duplicates (e.g. bulk loads with monotonic PKs). " + "SET SESSION tidesdb_skip_unique_check=1", + NULL, NULL, 0); + +/* Session-level defaults for table options. + These are used by HA_TOPTION_SYSVAR so that CREATE TABLE without + explicit options inherits the session/global default. Dynamic and + session-scoped, matching InnoDB's innodb_default_* pattern. 
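An aside on the retry mapping in tdb_rc_to_ha() above: all transient TidesDB failures (write-write conflict, lock backpressure, memory pressure) are collapsed onto the server's deadlock error so MariaDB's retry machinery re-runs the statement. A standalone sketch of that mapping with stand-in enum values (not the real header constants):

```cpp
#include <cassert>

// Stand-in error codes; the real values live in tidesdb.h / my_base.h.
enum tdb_err { TDB_SUCCESS = 0, TDB_ERR_MEMORY = -1, TDB_ERR_CONFLICT = -7,
               TDB_ERR_MEMORY_LIMIT = -9, TDB_ERR_LOCKED = -12 };
enum ha_err { HA_OK = 0, HA_ERR_OUT_OF_MEM = 1, HA_ERR_LOCK_DEADLOCK = 2,
              HA_ERR_INTERNAL_ERROR = 3 };

// Collapse every retryable library error onto the deadlock code so the
// server transparently retries; real failures map to distinct errors.
static int map_rc(int rc)
{
  switch (rc)
  {
    case TDB_SUCCESS:
      return HA_OK;
    case TDB_ERR_CONFLICT:      /* write-write conflict at commit */
    case TDB_ERR_LOCKED:        /* lock backpressure              */
    case TDB_ERR_MEMORY_LIMIT:  /* memory pressure                */
      return HA_ERR_LOCK_DEADLOCK;
    case TDB_ERR_MEMORY:
      return HA_ERR_OUT_OF_MEM;
    default:
      return HA_ERR_INTERNAL_ERROR;
  }
}
```

The design choice mirrors what the hunk above implements: only genuinely fatal codes are logged as errors, while retryable ones stay quiet unless conflict logging is enabled.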
*/ + +static const char *compression_names[] = {"NONE", "SNAPPY", "LZ4", "ZSTD", "LZ4_FAST", NullS}; +static TYPELIB compression_typelib = {array_elements(compression_names) - 1, "compression_typelib", + compression_names, NULL, NULL}; + +static MYSQL_THDVAR_ENUM(default_compression, PLUGIN_VAR_RQCMDARG, + "Default compression algorithm for new tables " + "(NONE, SNAPPY, LZ4, ZSTD, LZ4_FAST)", + NULL, NULL, 2 /* LZ4 */, &compression_typelib); + +static MYSQL_THDVAR_ULONGLONG(default_write_buffer_size, PLUGIN_VAR_RQCMDARG, + "Default write buffer size in bytes for new tables", NULL, NULL, + 32ULL * 1024 * 1024, 1024, ULONGLONG_MAX, 1024); + +static MYSQL_THDVAR_BOOL(default_bloom_filter, PLUGIN_VAR_RQCMDARG, + "Default bloom filter setting for new tables", NULL, NULL, 1); + +static MYSQL_THDVAR_BOOL(default_use_btree, PLUGIN_VAR_RQCMDARG, + "Default USE_BTREE setting for new tables (0=LSM, 1=B-tree)", NULL, NULL, + 0); + +static MYSQL_THDVAR_BOOL(default_block_indexes, PLUGIN_VAR_RQCMDARG, + "Default block indexes setting for new tables", NULL, NULL, 1); + static const char *log_level_names[] = {"DEBUG", "INFO", "WARN", "ERROR", "FATAL", "NONE", NullS}; -#if MYSQL_VERSION_ID >= 110800 static TYPELIB log_level_typelib = {array_elements(log_level_names) - 1, "log_level_typelib", log_level_names, NULL, NULL}; -#else -static TYPELIB log_level_typelib = {array_elements(log_level_names) - 1, "log_level_typelib", - log_level_names, NULL}; -#endif static MYSQL_SYSVAR_ULONG(flush_threads, srv_flush_threads, PLUGIN_VAR_RQCMDARG | PLUGIN_VAR_READONLY, @@ -165,8 +171,14 @@ static MYSQL_SYSVAR_ENUM(log_level, srv_log_level, PLUGIN_VAR_RQCMDARG | PLUGIN_ "TidesDB log level (DEBUG, INFO, WARN, ERROR, FATAL, NONE)", NULL, NULL, 0, &log_level_typelib); -static MYSQL_SYSVAR_BOOL(debug_trace, srv_debug_trace, PLUGIN_VAR_RQCMDARG, - "Enable per-operation trace logging to error log (expensive, debug only)", +/* Conflict information logging. 
+ Similar to innodb_print_all_deadlocks -- logs all TDB_ERR_CONFLICT + events to the error log with transaction and table details. + (srv_print_all_conflicts, last_conflict_mutex, last_conflict_info + are forward-declared near tdb_rc_to_ha().) */ +static MYSQL_SYSVAR_BOOL(print_all_conflicts, srv_print_all_conflicts, PLUGIN_VAR_RQCMDARG, + "Log all TidesDB conflict errors to the error log " + "(similar to innodb_print_all_deadlocks)", NULL, NULL, 0); static MYSQL_SYSVAR_ULONGLONG(block_cache_size, srv_block_cache_size, @@ -184,6 +196,17 @@ static MYSQL_SYSVAR_ULONGLONG(max_memory_usage, srv_max_memory_usage, "TidesDB global memory limit in bytes (0 = auto, ~80%% system RAM)", NULL, NULL, 0, 0, ULONGLONG_MAX, 0); +/* Configurable data directory. + Defaults to NULL which means the plugin computes a sibling directory + of mysql_real_data_home. Setting this overrides the auto-computed path. */ +static char *srv_data_home_dir = NULL; + +static MYSQL_SYSVAR_STR(data_home_dir, srv_data_home_dir, PLUGIN_VAR_RQCMDARG | PLUGIN_VAR_READONLY, + "Directory where TidesDB stores its data files; " + "defaults to /../tidesdb_data; " + "must be set before server startup (read-only)", + NULL, NULL, NULL); + /* ******************** Online backup via system variable ******************** */ static char *srv_backup_dir = NULL; @@ -206,21 +229,56 @@ static void tidesdb_backup_dir_update(THD *thd, struct st_mysql_sys_var *, void return; } - sql_print_information("TIDESDB: Starting online backup to '%s'", new_dir); + /* Free the calling connection's TidesDB transaction before backup. + tidesdb_backup() waits for all open transactions to drain. The + connection may still hold an open txn (created in external_lock + but not yet committed). If we don't free it here, the backup + self-deadlocks waiting for our own txn. 
*/ + { + tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, tidesdb_hton); + if (trx && trx->txn) + { + tidesdb_txn_rollback(trx->txn); + tidesdb_txn_free(trx->txn); + trx->txn = NULL; + trx->dirty = false; + trx->txn_generation++; + } + } + + /* Copy the path before releasing the sysvar lock -- the save pointer + is only valid while LOCK_global_system_variables is held. */ + std::string backup_path(new_dir); + + /* tidesdb_backup() spins waiting for all CF flushes to complete. + The library's flush threads call sql_print_information() which + internally acquires LOCK_global_system_variables. This sysvar + update callback is called WITH that mutex held, so tidesdb_backup() + deadlocks (flush thread waits for lock, we wait for flush thread). + Release the mutex around the blocking backup call. */ + mysql_mutex_unlock(&LOCK_global_system_variables); + + sql_print_information("TIDESDB: Starting online backup to '%s'", backup_path.c_str()); + + char *backup_path_c = const_cast<char *>(backup_path.c_str()); + int rc = tidesdb_backup(tdb_global, backup_path_c); + + mysql_mutex_lock(&LOCK_global_system_variables); - int rc = tidesdb_backup(tdb_global, const_cast<char *>(new_dir)); if (rc != TDB_SUCCESS) { - sql_print_error("TIDESDB: Backup to '%s' failed (err=%d)", new_dir, rc); + sql_print_error("TIDESDB: Backup to '%s' failed (err=%d)", backup_path.c_str(), rc); my_printf_error(ER_UNKNOWN_ERROR, "TIDESDB: Backup to '%s' failed (err=%d)", MYF(0), - new_dir, rc); + backup_path.c_str(), rc); /* We leave variable unchanged on failure */ return; } - sql_print_information("TIDESDB: Online backup to '%s' completed successfully", new_dir); + sql_print_information("TIDESDB: Online backup to '%s' completed successfully", + backup_path.c_str()); - /* We store the path so SHOW VARIABLES reflects the last successful backup */ + /* For PLUGIN_VAR_MEMALLOC strings, the framework manages memory. + We set var_ptr to the save value so the framework copies it.
*/ *static_cast<const char **>(var_ptr) = new_dir; } @@ -230,16 +288,24 @@ static MYSQL_SYSVAR_STR(backup_dir, srv_backup_dir, PLUGIN_VAR_RQCMDARG | PLUGIN "Example: SET GLOBAL tidesdb_backup_dir = '/path/to/backup'", NULL, tidesdb_backup_dir_update, NULL); -static struct st_mysql_sys_var *tidesdb_system_variables[] = {MYSQL_SYSVAR(flush_threads), - MYSQL_SYSVAR(compaction_threads), - MYSQL_SYSVAR(log_level), - MYSQL_SYSVAR(block_cache_size), - MYSQL_SYSVAR(max_open_sstables), - MYSQL_SYSVAR(max_memory_usage), - MYSQL_SYSVAR(backup_dir), - MYSQL_SYSVAR(debug_trace), - MYSQL_SYSVAR(ttl), - NULL}; +static struct st_mysql_sys_var *tidesdb_system_variables[] = { + MYSQL_SYSVAR(flush_threads), + MYSQL_SYSVAR(compaction_threads), + MYSQL_SYSVAR(log_level), + MYSQL_SYSVAR(block_cache_size), + MYSQL_SYSVAR(max_open_sstables), + MYSQL_SYSVAR(max_memory_usage), + MYSQL_SYSVAR(backup_dir), + MYSQL_SYSVAR(print_all_conflicts), + MYSQL_SYSVAR(data_home_dir), + MYSQL_SYSVAR(ttl), + MYSQL_SYSVAR(skip_unique_check), + MYSQL_SYSVAR(default_compression), + MYSQL_SYSVAR(default_write_buffer_size), + MYSQL_SYSVAR(default_bloom_filter), + MYSQL_SYSVAR(default_use_btree), + MYSQL_SYSVAR(default_block_indexes), + NULL}; /* ******************** Table options (per-table CF config) ******************** */ @@ -271,8 +337,10 @@ struct ha_table_option_struct }; ha_create_table_option tidesdb_table_option_list[] = { - HA_TOPTION_NUMBER("WRITE_BUFFER_SIZE", write_buffer_size, 32 * 1024 * 1024, 1024, ULONGLONG_MAX, - 1024), + /* Options with SYSVAR defaults inherit from session variables + (e.g. SET SESSION tidesdb_default_write_buffer_size=64*1024*1024). + When not explicitly set in CREATE TABLE, the session default is used.
*/ + HA_TOPTION_SYSVAR("WRITE_BUFFER_SIZE", write_buffer_size, default_write_buffer_size), HA_TOPTION_NUMBER("MIN_DISK_SPACE", min_disk_space, 100ULL * 1024 * 1024, 0, ULONGLONG_MAX, 1024), HA_TOPTION_NUMBER("KLOG_VALUE_THRESHOLD", klog_value_threshold, 512, 0, ULONGLONG_MAX, 1), @@ -287,13 +355,13 @@ ha_create_table_option tidesdb_table_option_list[] = { HA_TOPTION_NUMBER("BLOOM_FPR", bloom_fpr, 100, 1, 10000, 1), HA_TOPTION_NUMBER("L1_FILE_COUNT_TRIGGER", l1_file_count_trigger, 4, 1, 1024, 1), HA_TOPTION_NUMBER("L0_QUEUE_STALL_THRESHOLD", l0_queue_stall_threshold, 4, 1, 1024, 1), - HA_TOPTION_ENUM("COMPRESSION", compression, "NONE,SNAPPY,LZ4,ZSTD,LZ4_FAST", 2), + HA_TOPTION_SYSVAR("COMPRESSION", compression, default_compression), HA_TOPTION_ENUM("SYNC_MODE", sync_mode, "NONE,INTERVAL,FULL", 2), HA_TOPTION_ENUM("ISOLATION_LEVEL", isolation_level, "READ_UNCOMMITTED,READ_COMMITTED,REPEATABLE_READ,SNAPSHOT,SERIALIZABLE", 2), - HA_TOPTION_BOOL("BLOOM_FILTER", bloom_filter, 1), - HA_TOPTION_BOOL("BLOCK_INDEXES", block_indexes, 1), - HA_TOPTION_BOOL("USE_BTREE", use_btree, 0), + HA_TOPTION_SYSVAR("BLOOM_FILTER", bloom_filter, default_bloom_filter), + HA_TOPTION_SYSVAR("BLOCK_INDEXES", block_indexes, default_block_indexes), + HA_TOPTION_SYSVAR("USE_BTREE", use_btree, default_use_btree), HA_TOPTION_NUMBER("TTL", ttl, 0, 0, ULONGLONG_MAX, 1), HA_TOPTION_BOOL("ENCRYPTED", encrypted, 0), HA_TOPTION_NUMBER("ENCRYPTION_KEY_ID", encryption_key_id, 1, 1, 255, 1), @@ -309,6 +377,16 @@ struct ha_field_option_struct ha_create_table_option tidesdb_field_option_list[] = {HA_FOPTION_BOOL("TTL", ttl, 0), HA_FOPTION_END}; +/* ******************** Index options (per-index) ******************** */ + +struct ha_index_option_struct +{ + bool use_btree; /* per-index B-tree request; 0 = none, the table-level setting applies */ +}; + +ha_create_table_option tidesdb_index_option_list[] = {HA_IOPTION_BOOL("USE_BTREE", use_btree, 0), + HA_IOPTION_END}; + /* ******************** Big-endian helpers for hidden PK
******************** */ static void encode_be64(uint64_t id, uint8_t *buf) @@ -349,6 +427,51 @@ static const int tdb_isolation_map[] = {TDB_ISOLATION_READ_UNCOMMITTED, TDB_ISOLATION_READ_COMMITTED, TDB_ISOLATION_REPEATABLE_READ, TDB_ISOLATION_SNAPSHOT, TDB_ISOLATION_SERIALIZABLE}; +/* + Map MariaDB session isolation level (from SET TRANSACTION ISOLATION LEVEL) + to TidesDB isolation level. Falls back to table-level ISOLATION_LEVEL + option only when the session has the default (REPEATABLE READ) and the + table overrides it. + + MariaDB enum_tx_isolation: + ISO_READ_UNCOMMITTED = 0 + ISO_READ_COMMITTED = 1 + ISO_REPEATABLE_READ = 2 + ISO_SERIALIZABLE = 3 + + TidesDB has a 5th level (SNAPSHOT) that has no SQL equivalent. + It can only be selected via the table option. When the session + isolation is REPEATABLE READ and the table option specifies SNAPSHOT, + we honor the table-level SNAPSHOT setting. +*/ +static tidesdb_isolation_level_t resolve_effective_isolation(THD *thd, + tidesdb_isolation_level_t table_iso) +{ + int session_iso = thd_tx_isolation(thd); + + switch (session_iso) + { + case ISO_READ_UNCOMMITTED: + return TDB_ISOLATION_READ_UNCOMMITTED; + case ISO_READ_COMMITTED: + return TDB_ISOLATION_READ_COMMITTED; + case ISO_REPEATABLE_READ: + /* InnoDB's REPEATABLE_READ is MVCC snapshot reads with + pessimistic row locks -- no read-set conflict detection. + TidesDB's closest equivalent is SNAPSHOT isolation: + consistent read snapshot + write-write conflict only. + TidesDB's REPEATABLE_READ is stricter (tracks read-set, + detects read-write conflicts at commit) and causes + excessive TDB_ERR_CONFLICT under normal OLTP concurrency. + Map MariaDB RR -> TidesDB SNAPSHOT for InnoDB parity. 
*/ + return TDB_ISOLATION_SNAPSHOT; + case ISO_SERIALIZABLE: + return TDB_ISOLATION_SERIALIZABLE; + default: + return TDB_ISOLATION_READ_COMMITTED; + } +} + /* Single-byte placeholder value for secondary index entries (all info is in the key) */ static const uint8_t tdb_empty_val = 0; @@ -414,6 +537,7 @@ TidesDB_share::TidesDB_share() num_secondary_indexes(0) { memset(idx_comp_key_len, 0, sizeof(idx_comp_key_len)); + for (uint i = 0; i < MAX_KEY; i++) cached_rec_per_key[i].store(0, std::memory_order_relaxed); } TidesDB_share::~TidesDB_share() @@ -471,7 +598,11 @@ struct tidesdb_savepoint_t char name[32]; }; +#if MYSQL_VERSION_ID >= 110800 static int tidesdb_savepoint_set(THD *thd, void *sv) +#else +static int tidesdb_savepoint_set(handlerton *, THD *thd, void *sv) +#endif { tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, tidesdb_hton); if (!trx || !trx->txn || !sv) return 0; @@ -484,7 +615,11 @@ static int tidesdb_savepoint_set(THD *thd, void *sv) return tdb_rc_to_ha(rc, "savepoint_set"); } +#if MYSQL_VERSION_ID >= 110800 static int tidesdb_savepoint_rollback(THD *thd, void *sv) +#else +static int tidesdb_savepoint_rollback(handlerton *, THD *thd, void *sv) +#endif { tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, tidesdb_hton); if (!trx || !trx->txn || !sv) return 0; @@ -505,12 +640,20 @@ static int tidesdb_savepoint_rollback(THD *thd, void *sv) return tdb_rc_to_ha(rc, "savepoint_rollback"); } +#if MYSQL_VERSION_ID >= 110800 static bool tidesdb_savepoint_rollback_can_release_mdl(THD *) +#else +static bool tidesdb_savepoint_rollback_can_release_mdl(handlerton *, THD *) +#endif { return true; } +#if MYSQL_VERSION_ID >= 110800 static int tidesdb_savepoint_release(THD *thd, void *sv) +#else +static int tidesdb_savepoint_release(handlerton *, THD *thd, void *sv)
+#endif { tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, tidesdb_hton); if (!trx || !trx->txn || !sv) return 0; @@ -531,7 +674,10 @@ static int tidesdb_commit(handlerton *, THD *thd, bool all) #endif { tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, tidesdb_hton); - if (!trx || !trx->txn) return 0; + if (!trx || !trx->txn) + { + return 0; + } /* We determine whether this is the final commit for the transaction. all=true -> explicit COMMIT or transaction-level end @@ -570,7 +716,13 @@ static int tidesdb_commit(handlerton *, THD *thd, bool all) trx->stmt_savepoint_active = false; } - /* Real commit -- flush to storage */ + /* Real commit -- flush to storage. + After commit (or for read-only txns), free the txn instead of + reusing via tidesdb_txn_reset(). Reset takes the snapshot at + reset-time, not at next-statement-start, causing stale reads: + another connection's commit between our reset and our next SELECT + would be invisible. Freeing and lazily recreating in + get_or_create_trx() ensures each statement gets a current snapshot. */ if (trx->dirty) { int rc = tidesdb_txn_commit(trx->txn); @@ -579,26 +731,22 @@ static int tidesdb_commit(handlerton *, THD *thd, bool all) tidesdb_txn_rollback(trx->txn); tidesdb_txn_free(trx->txn); trx->txn = NULL; + trx->txn_generation++; trx->dirty = false; trx->stmt_savepoint_active = false; return tdb_rc_to_ha(rc, "hton_commit"); } - /* We free the txn so that the next get_or_create_trx() starts - a fresh transaction with a new read snapshot and a distinct - pointer -- allowing cached iterators to detect staleness via - scan_iter_txn_ != stmt_txn and invalidate themselves. */ tidesdb_txn_free(trx->txn); trx->txn = NULL; + trx->txn_generation++; } else { - /* Read-only transaction -- rollback and free. Like the dirty - path, we must free so the next get_or_create_trx() begins a - fresh txn with a current read snapshot and a new pointer for - stale-iterator detection. 
*/ + /* Read-only transaction -- free so next statement gets fresh snapshot */ tidesdb_txn_rollback(trx->txn); tidesdb_txn_free(trx->txn); trx->txn = NULL; + trx->txn_generation++; } trx->dirty = false; trx->stmt_savepoint_active = false; @@ -640,10 +788,13 @@ static int tidesdb_rollback(handlerton *, THD *thd, bool all) } /* Full rollback -- real transaction end, autocommit, or first - statement failure with no savepoint to restore to. */ + statement failure with no savepoint to restore to. + Free the txn so the next statement gets a fresh snapshot + (same rationale as tidesdb_commit -- avoid stale reads). */ tidesdb_txn_rollback(trx->txn); tidesdb_txn_free(trx->txn); trx->txn = NULL; + trx->txn_generation++; trx->dirty = false; trx->stmt_savepoint_active = false; return 0; @@ -671,14 +822,25 @@ static int tidesdb_close_connection(handlerton *, THD *thd) /* START TRANSACTION WITH CONSISTENT SNAPSHOT callback. - Eagerly creates a TidesDB transaction with REPEATABLE_READ isolation - so the snapshot sequence number is captured now, not lazily at first - data access. Without this, rows committed by other connections between - START TRANSACTION and the first SELECT would be visible. + Eagerly creates a TidesDB transaction so the snapshot sequence number + is captured now, not lazily at first data access. Without this, rows + committed by other connections between START TRANSACTION and the first + SELECT would be visible. + + Uses the session's isolation level (SET TRANSACTION ISOLATION LEVEL) + rather than hard-coding REPEATABLE_READ. A session at the default + (REPEATABLE READ) maps to TidesDB SNAPSHOT via resolve_effective_isolation(). */ +#if MYSQL_VERSION_ID >= 110800 static int tidesdb_start_consistent_snapshot(THD *thd) +#else +static int tidesdb_start_consistent_snapshot(handlerton *, THD *thd) +#endif { - tidesdb_trx_t *trx = get_or_create_trx(thd, tidesdb_hton, TDB_ISOLATION_REPEATABLE_READ); + /* Respect the session isolation level.
We pass TDB_ISOLATION_REPEATABLE_READ + as the table-level fallback since we have no table context here. */ + tidesdb_isolation_level_t iso = resolve_effective_isolation(thd, TDB_ISOLATION_REPEATABLE_READ); + tidesdb_trx_t *trx = get_or_create_trx(thd, tidesdb_hton, iso); if (!trx) return 1; /* Register at both statement and transaction level so the server @@ -688,6 +850,82 @@ static int tidesdb_start_consistent_snapshot(THD *thd) return 0; } +/* ******************** SHOW ENGINE TIDESDB STATUS ******************** */ + +static bool tidesdb_show_status(handlerton *hton, THD *thd, stat_print_fn *print, + enum ha_stat_type stat) +{ + if (stat != HA_ENGINE_STATUS) return false; + if (!tdb_global) return false; + + /* Database-level stats */ + tidesdb_db_stats_t db_st; + memset(&db_st, 0, sizeof(db_st)); + tidesdb_get_db_stats(tdb_global, &db_st); + + /* Cache stats */ + tidesdb_cache_stats_t cache_st; + memset(&cache_st, 0, sizeof(cache_st)); + tidesdb_get_cache_stats(tdb_global, &cache_st); + + char buf[4096]; + int pos = 0; + + pos += snprintf(buf + pos, sizeof(buf) - pos, + "================== TidesDB Engine Status ==================\n"); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Data directory: %s\n", tdb_path.c_str()); + pos += + snprintf(buf + pos, sizeof(buf) - pos, "Column families: %d\n", db_st.num_column_families); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Global sequence: %lu\n", + (unsigned long)db_st.global_seq); + pos += snprintf(buf + pos, sizeof(buf) - pos, "\n--- Memory ---\n"); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Total system memory: %lu MB\n", + (unsigned long)(db_st.total_memory / (1024 * 1024))); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Resolved memory limit: %lu MB\n", + (unsigned long)(db_st.resolved_memory_limit / (1024 * 1024))); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Memory pressure level: %d\n", + db_st.memory_pressure_level); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Total memtable bytes: 
%ld\n", + (long)db_st.total_memtable_bytes); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Transaction memory bytes: %ld\n", + (long)db_st.txn_memory_bytes); + pos += snprintf(buf + pos, sizeof(buf) - pos, "\n--- Storage ---\n"); + pos += + snprintf(buf + pos, sizeof(buf) - pos, "Total SSTables: %d\n", db_st.total_sstable_count); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Open SSTable handles: %d\n", + db_st.num_open_sstables); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Total data size: %lu bytes\n", + (unsigned long)db_st.total_data_size_bytes); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Immutable memtables: %d\n", + db_st.total_immutable_count); + pos += snprintf(buf + pos, sizeof(buf) - pos, "\n--- Background ---\n"); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Flush pending: %d\n", db_st.flush_pending_count); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Flush queue size: %lu\n", + (unsigned long)db_st.flush_queue_size); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Compaction queue size: %lu\n", + (unsigned long)db_st.compaction_queue_size); + pos += snprintf(buf + pos, sizeof(buf) - pos, "\n--- Block Cache ---\n"); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Enabled: %s\n", cache_st.enabled ? 
"YES" : "NO"); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Entries: %lu\n", + (unsigned long)cache_st.total_entries); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Size: %lu bytes\n", + (unsigned long)cache_st.total_bytes); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Hits: %lu\n", (unsigned long)cache_st.hits); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Misses: %lu\n", (unsigned long)cache_st.misses); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Hit rate: %.1f%%\n", cache_st.hit_rate * 100.0); + pos += snprintf(buf + pos, sizeof(buf) - pos, "Partitions: %lu\n", + (unsigned long)cache_st.num_partitions); + + /* Last conflict info */ + mysql_mutex_lock(&last_conflict_mutex); + if (last_conflict_info[0]) + pos += + snprintf(buf + pos, sizeof(buf) - pos, "\n--- Conflicts ---\n%s\n", last_conflict_info); + mysql_mutex_unlock(&last_conflict_mutex); + + return print(thd, "TIDESDB", 7, "", 0, buf, (size_t)pos); +} + /* ******************** Plugin init / deinit ******************** */ static int tidesdb_hton_drop_table(handlerton *, const char *path); @@ -703,6 +941,7 @@ static int tidesdb_init_func(void *p) tidesdb_hton->tablefile_extensions = ha_tidesdb_exts; tidesdb_hton->table_options = tidesdb_table_option_list; tidesdb_hton->field_options = tidesdb_field_option_list; + tidesdb_hton->index_options = tidesdb_index_option_list; tidesdb_hton->drop_table = tidesdb_hton_drop_table; /* Handlerton transaction callbacks -- one TidesDB txn per BEGIN..COMMIT */ @@ -715,19 +954,27 @@ static int tidesdb_init_func(void *p) tidesdb_hton->savepoint_rollback_can_release_mdl = tidesdb_savepoint_rollback_can_release_mdl; tidesdb_hton->savepoint_release = tidesdb_savepoint_release; tidesdb_hton->start_consistent_snapshot = tidesdb_start_consistent_snapshot; + tidesdb_hton->show_status = tidesdb_show_status; - tidesdb_init(NULL, NULL, NULL, NULL); + mysql_mutex_init(0, &last_conflict_mutex, MY_MUTEX_INIT_FAST); - /* We place TidesDB data as a sibling of the MariaDB 
data directory, - e.g. /path/to/tidesdb_data alongside /path/to/data/ - This avoids MariaDB's schema discovery detecting it. */ - std::string data_home(mysql_real_data_home); - while (!data_home.empty() && data_home.back() == '/') data_home.pop_back(); - size_t slash_pos = data_home.rfind('/'); - if (slash_pos != std::string::npos) - tdb_path = data_home.substr(0, slash_pos + 1) + "tidesdb_data"; + /* Use tidesdb_data_home_dir if set, otherwise compute + a sibling directory of the MariaDB data directory. */ + if (srv_data_home_dir && srv_data_home_dir[0]) + { + tdb_path = srv_data_home_dir; + while (!tdb_path.empty() && tdb_path.back() == '/') tdb_path.pop_back(); + } else - tdb_path = "tidesdb_data"; + { + std::string data_home(mysql_real_data_home); + while (!data_home.empty() && data_home.back() == '/') data_home.pop_back(); + size_t slash_pos = data_home.rfind('/'); + if (slash_pos != std::string::npos) + tdb_path = data_home.substr(0, slash_pos + 1) + "tidesdb_data"; + else + tdb_path = "tidesdb_data"; + } /* We map log level enum index to TidesDB constants */ static const int log_level_map[] = {TDB_LOG_DEBUG, TDB_LOG_INFO, TDB_LOG_WARN, @@ -748,7 +995,6 @@ static int tidesdb_init_func(void *p) if (rc != TDB_SUCCESS) { sql_print_error("TIDESDB: Failed to open TidesDB at %s (err=%d)", tdb_path.c_str(), rc); - tidesdb_finalize(); DBUG_RETURN(1); } @@ -766,7 +1012,8 @@ static int tidesdb_deinit_func(void *p) tidesdb_close(tdb_global); tdb_global = NULL; } - tidesdb_finalize(); + + mysql_mutex_destroy(&last_conflict_mutex); sql_print_information("TIDESDB: TidesDB closed"); DBUG_RETURN(0); @@ -822,11 +1069,39 @@ ha_tidesdb::ha_tidesdb(handlerton *hton, TABLE_SHARE *table_arg) scan_dir_(DIR_NONE), current_pk_len_(0), idx_search_comp_len_(0), + dup_iter_count_(0), + cached_enc_key_ver_(0), + enc_key_ver_valid_(false), + cached_time_(0), + cached_time_valid_(false), + cached_sess_ttl_(0), + cached_skip_unique_(false), + cached_thdvars_valid_(false), 
in_bulk_insert_(false), bulk_insert_ops_(0), keyread_only_(false), write_can_replace_(false) { + memset(dup_iter_cache_, 0, sizeof(dup_iter_cache_)); + memset(dup_iter_txn_, 0, sizeof(dup_iter_txn_)); + memset(dup_iter_txn_gen_, 0, sizeof(dup_iter_txn_gen_)); +} + +/* ******************** free_dup_iter_cache ******************** */ + +void ha_tidesdb::free_dup_iter_cache() +{ + for (uint i = 0; i < MAX_KEY; i++) + { + if (dup_iter_cache_[i]) + { + tidesdb_iter_free(dup_iter_cache_[i]); + dup_iter_cache_[i] = NULL; + dup_iter_txn_[i] = NULL; + dup_iter_txn_gen_[i] = 0; + } + } + dup_iter_count_ = 0; } /* ******************** get_share ******************** */ @@ -982,9 +1257,9 @@ uint ha_tidesdb::sec_idx_key(uint idx, const uchar *record, uchar *out) The secondary index key layout is: [comparable_idx_cols | comparable_pk] - We reverse the sort-key encoding (big-endian + sign-flip) back to - native record format. Only supports integer types where the - transformation is bijective. Returns true on success. + Uses decode_sort_key_part() which supports integers, DATE, DATETIME, + TIMESTAMP, YEAR, and fixed-length CHAR/BINARY (binary/latin1). + Returns true on success. */ bool ha_tidesdb::try_keyread_from_index(const uint8_t *ik, size_t iks, uint idx, uchar *buf) { @@ -994,12 +1269,11 @@ bool ha_tidesdb::try_keyread_from_index(const uint8_t *ik, size_t iks, uint idx, KEY *idx_key = &table->key_info[idx]; uint idx_col_len = share->idx_comp_key_len[idx]; - /* We check every column in read_set -- it must be a PK part or an index - part, and must be an integer type we can reverse-decode. */ + /* We check every column in read_set -- it must be a PK part or an + index part that decode_sort_key_part() can reverse-decode. 
*/ for (uint c = bitmap_get_first_set(table->read_set); c != MY_BIT_NONE; c = bitmap_get_next_set(table->read_set, c)) { - Field *f = table->field[c]; bool found = false; for (uint p = 0; p < pk_key->user_defined_key_parts; p++) if ((uint)(pk_key->key_part[p].fieldnr - 1) == c) @@ -1015,61 +1289,8 @@ bool ha_tidesdb::try_keyread_from_index(const uint8_t *ik, size_t iks, uint idx, break; } if (!found) return false; - - switch (f->real_type()) - { - case MYSQL_TYPE_TINY: - case MYSQL_TYPE_SHORT: - case MYSQL_TYPE_INT24: - case MYSQL_TYPE_LONG: - case MYSQL_TYPE_LONGLONG: - break; - default: - return false; - } } - /* Helper lambda -- reverse sort_string for an integer key part. - big-endian -> little-endian, un-flip sign bit for signed types. */ - auto decode_int_part = [&](const uint8_t *src, uint sort_len, Field *f, uchar *buf_base) -> bool - { - uchar *to = buf_base + (uintptr_t)(f->ptr - table->record[0]); - bool is_signed = !f->is_unsigned(); - switch (sort_len) - { - case 1: - to[0] = is_signed ? (src[0] ^ 0x80) : src[0]; - return true; - case 2: - to[0] = src[1]; - to[1] = is_signed ? (src[0] ^ 0x80) : src[0]; - return true; - case 3: - to[0] = src[2]; - to[1] = src[1]; - to[2] = is_signed ? (src[0] ^ 0x80) : src[0]; - return true; - case 4: - to[0] = src[3]; - to[1] = src[2]; - to[2] = src[1]; - to[3] = is_signed ? (src[0] ^ 0x80) : src[0]; - return true; - case 8: - to[0] = src[7]; - to[1] = src[6]; - to[2] = src[5]; - to[3] = src[4]; - to[4] = src[3]; - to[5] = src[2]; - to[6] = src[1]; - to[7] = is_signed ? 
(src[0] ^ 0x80) : src[0]; - return true; - default: - return false; - } - }; - /* We decode index column parts from the head of the key */ const uint8_t *pos = ik; for (uint p = 0; p < idx_key->user_defined_key_parts; p++) @@ -1091,7 +1312,7 @@ bool ha_tidesdb::try_keyread_from_index(const uint8_t *ik, size_t iks, uint idx, if (pos + kp->length > ik + iks) return false; if (bitmap_is_set(table->read_set, kp->fieldnr - 1)) { - if (!decode_int_part(pos, kp->length, f, buf)) return false; + if (!decode_sort_key_part(pos, kp->length, f, buf)) return false; } pos += kp->length; } @@ -1118,7 +1339,7 @@ bool ha_tidesdb::try_keyread_from_index(const uint8_t *ik, size_t iks, uint idx, if (pos + kp->length > ik + iks) return false; if (bitmap_is_set(table->read_set, kp->fieldnr - 1)) { - if (!decode_int_part(pos, kp->length, f, buf)) return false; + if (!decode_sort_key_part(pos, kp->length, f, buf)) return false; } pos += kp->length; } @@ -1177,6 +1398,100 @@ bool ha_tidesdb::decode_int_sort_key(const uint8_t *src, uint sort_len, Field *f } } +/* + Extended sort-key decoder -- handles integers (via decode_int_sort_key), + DATE (3 bytes big-endian), DATETIME/TIMESTAMP (4-8 bytes big-endian), + YEAR (1 byte), and fixed-length CHAR/BINARY (direct memcpy of sort key). + + For integer types, delegates to decode_int_sort_key which handles the + sign-bit-flip + endian reversal. + + For DATE/DATETIME/TIMESTAMP/YEAR, the sort key is big-endian unsigned; + we reverse the byte order to native little-endian without sign-flip + (these types are always unsigned internally). + + For CHAR/BINARY (MYSQL_TYPE_STRING), the sort key produced by + Field_string::sort_string is the charset's sort weight sequence. + For binary/latin1 charsets this is identical to the field content + (padded with spaces to kp->length). We copy it directly. + For multi-byte charsets (utf8) the sort weights differ from the + stored bytes, so we cannot reverse -- return false. 
+ + Returns true on success, false for unsupported types. +*/ +bool ha_tidesdb::decode_sort_key_part(const uint8_t *src, uint sort_len, Field *f, uchar *buf) +{ + switch (f->real_type()) + { + case MYSQL_TYPE_TINY: + case MYSQL_TYPE_SHORT: + case MYSQL_TYPE_INT24: + case MYSQL_TYPE_LONG: + case MYSQL_TYPE_LONGLONG: + return decode_int_sort_key(src, sort_len, f, buf); + + case MYSQL_TYPE_YEAR: + { + /* YEAR is 1 byte unsigned, sort key is identity */ + uchar *to = buf + (uintptr_t)(f->ptr - f->table->record[0]); + to[0] = src[0]; + return true; + } + + case MYSQL_TYPE_DATE: + case MYSQL_TYPE_NEWDATE: + { + /* DATE is 3 bytes, sort key is big-endian unsigned. + Reverse to native little-endian. */ + uchar *to = buf + (uintptr_t)(f->ptr - f->table->record[0]); + if (sort_len == 3) + { + to[0] = src[2]; + to[1] = src[1]; + to[2] = src[0]; + return true; + } + return false; + } + + case MYSQL_TYPE_DATETIME: + case MYSQL_TYPE_DATETIME2: + case MYSQL_TYPE_TIMESTAMP: + case MYSQL_TYPE_TIMESTAMP2: + { + /* DATETIME/TIMESTAMP sort keys are big-endian unsigned. + Reverse to native little-endian. */ + uchar *to = buf + (uintptr_t)(f->ptr - f->table->record[0]); + if (sort_len <= 8) + { + for (uint b = 0; b < sort_len; b++) to[b] = src[sort_len - 1 - b]; + return true; + } + return false; + } + + case MYSQL_TYPE_STRING: + { + /* Fixed-length CHAR/BINARY. For binary/latin1 charsets the + sort key is identical to the stored content (space-padded). + For multi-byte charsets we cannot reverse. */ + if (f->charset() == &my_charset_bin || f->charset() == &my_charset_latin1) + { + uchar *to = buf + (uintptr_t)(f->ptr - f->table->record[0]); + uint flen = f->pack_length(); + uint copy_len = (sort_len < flen) ? 
sort_len : flen; + memcpy(to, src, copy_len); + if (copy_len < flen) memset(to + copy_len, ' ', flen - copy_len); + return true; + } + return false; + } + + default: + return false; + } +} + /* Evaluate pushed index condition on a secondary-index entry before the expensive PK point-lookup (InnoDB pattern). @@ -1186,9 +1501,9 @@ bool ha_tidesdb::decode_int_sort_key(const uint8_t *src, uint sort_len, Field *f handler_index_cond_check() which evaluates the pushed condition, checks end_range, and handles THD kill signals. - Only supports integer column types for which the comparable encoding - (big-endian + sign-bit flip) is bijectively reversible. If any - index or PK key part is a non-integer type, ICP is skipped and + Supports integer types, DATE, DATETIME, TIMESTAMP, YEAR, and + fixed-length CHAR/BINARY (binary/latin1 charset) via + decode_sort_key_part(). For unsupported types, ICP is skipped and CHECK_POS is returned so the caller falls through to the PK lookup. */ check_result_t ha_tidesdb::icp_check_secondary(const uint8_t *ik, size_t iks, uint idx, uchar *buf) @@ -1198,26 +1513,15 @@ check_result_t ha_tidesdb::icp_check_secondary(const uint8_t *ik, size_t iks, ui KEY *idx_key = &table->key_info[idx]; uint idx_col_len = share->idx_comp_key_len[idx]; - /* We decode index column parts from the head of the key */ + /* We decode index column parts from the head of the key using the + extended decoder that supports integers, DATE, DATETIME, TIMESTAMP, + YEAR, and fixed-length CHAR/BINARY (binary/latin1). 
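The comparable-key transform that decode_sort_key_part() reverses for integers can be shown in isolation. A minimal sketch, assuming the encoding described in the surrounding comments (big-endian byte order with the sign bit flipped so that memcmp() order equals numeric order); function names here are illustrative, not TidesDB or handler API.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// A signed 32-bit value is stored big-endian with the sign bit flipped,
// making byte-wise comparison agree with signed numeric comparison.
static void encode_i32_sort_key(int32_t v, uint8_t out[4])
{
    uint32_t u = (uint32_t)v ^ 0x80000000u;  // flip sign bit
    out[0] = (uint8_t)(u >> 24);             // big-endian
    out[1] = (uint8_t)(u >> 16);
    out[2] = (uint8_t)(u >> 8);
    out[3] = (uint8_t)u;
}

static int32_t decode_i32_sort_key(const uint8_t in[4])
{
    uint32_t u = ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
                 ((uint32_t)in[2] << 8) | (uint32_t)in[3];
    return (int32_t)(u ^ 0x80000000u);       // un-flip sign bit
}
```

The transform is bijective, which is what allows keyread and ICP to reconstruct column values from index bytes without the PK point-lookup.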
*/ const uint8_t *pos = ik; for (uint p = 0; p < idx_key->user_defined_key_parts; p++) { KEY_PART_INFO *kp = &idx_key->key_part[p]; Field *f = kp->field; - /* We verify type is a reversible integer */ - switch (f->real_type()) - { - case MYSQL_TYPE_TINY: - case MYSQL_TYPE_SHORT: - case MYSQL_TYPE_INT24: - case MYSQL_TYPE_LONG: - case MYSQL_TYPE_LONGLONG: - break; - default: - return CHECK_POS; /* unsupported type -- skip ICP */ - } - if (f->real_maybe_null()) { if (pos >= ik + iks) return CHECK_POS; @@ -1231,7 +1535,7 @@ check_result_t ha_tidesdb::icp_check_secondary(const uint8_t *ik, size_t iks, ui pos++; } if (pos + kp->length > ik + iks) return CHECK_POS; - if (!decode_int_sort_key(pos, kp->length, f, buf)) return CHECK_POS; + if (!decode_sort_key_part(pos, kp->length, f, buf)) return CHECK_POS; pos += kp->length; } @@ -1247,18 +1551,6 @@ check_result_t ha_tidesdb::icp_check_secondary(const uint8_t *ik, size_t iks, ui KEY_PART_INFO *kp = &pk_key->key_part[p]; Field *f = kp->field; - switch (f->real_type()) - { - case MYSQL_TYPE_TINY: - case MYSQL_TYPE_SHORT: - case MYSQL_TYPE_INT24: - case MYSQL_TYPE_LONG: - case MYSQL_TYPE_LONGLONG: - break; - default: - return CHECK_POS; /* PK has non-integer column -- skip ICP */ - } - if (f->real_maybe_null()) { if (pos >= ik + iks) return CHECK_POS; @@ -1272,7 +1564,7 @@ check_result_t ha_tidesdb::icp_check_secondary(const uint8_t *ik, size_t iks, ui pos++; } if (pos + kp->length > ik + iks) return CHECK_POS; - if (!decode_int_sort_key(pos, kp->length, f, buf)) return CHECK_POS; + if (!decode_sort_key_part(pos, kp->length, f, buf)) return CHECK_POS; pos += kp->length; } } @@ -1356,8 +1648,6 @@ void ha_tidesdb::recover_counters() int ha_tidesdb::open(const char *name, int mode, uint test_if_locked) { DBUG_ENTER("ha_tidesdb::open"); - long long t_open0 = 0; - if (unlikely(srv_debug_trace)) t_open0 = tdb_now_us(); if (!(share = get_share())) DBUG_RETURN(1); @@ -1486,8 +1776,6 @@ int ha_tidesdb::open(const char *name, int 
mode, uint test_if_locked) /* We set ref_length for position()/rnd_pos() */ ref_length = share->pk_key_len; - if (unlikely(srv_debug_trace)) - TDB_TRACE("table=%s took=%lldus", share->cf_name.c_str(), tdb_now_us() - t_open0); DBUG_RETURN(0); } @@ -1501,6 +1789,7 @@ int ha_tidesdb::close(void) scan_iter_cf_ = NULL; scan_iter_txn_ = NULL; } + free_dup_iter_cache(); /* stmt_txn is a borrowed pointer into the per-connection trx->txn. We do not free it here -- the txn is owned by the per-connection trx and will be freed in tidesdb_close_connection(). */ @@ -1531,7 +1820,8 @@ int ha_tidesdb::create(const char *name, TABLE *table_arg, HA_CREATE_INFO *creat } } - /* We create one CF per secondary index (named by key name for stability) */ + /* We create one CF per secondary index (named by key name for stability). + Per-index USE_BTREE overrides the table-level setting. */ for (uint i = 0; i < table_arg->s->keys; i++) { if (table_arg->s->primary_key != MAX_KEY && i == table_arg->s->primary_key) continue; @@ -1539,7 +1829,11 @@ int ha_tidesdb::create(const char *name, TABLE *table_arg, HA_CREATE_INFO *creat std::string idx_cf = cf_name + CF_INDEX_INFIX + table_arg->key_info[i].name.str; if (!tidesdb_get_column_family(tdb_global, idx_cf.c_str())) { - int rc = tidesdb_create_column_family(tdb_global, idx_cf.c_str(), &cfg); + tidesdb_column_family_config_t idx_cfg = cfg; + ha_index_option_struct *iopts = table_arg->key_info[i].option_struct; + if (iopts && iopts->use_btree) idx_cfg.use_btree = 1; + + int rc = tidesdb_create_column_family(tdb_global, idx_cf.c_str(), &idx_cfg); if (rc != TDB_SUCCESS) { sql_print_error("TIDESDB: Failed to create index CF '%s' (err=%d)", idx_cf.c_str(), @@ -1623,19 +1917,30 @@ static std::string tidesdb_decrypt_row(const char *data, size_t len, uint key_id /* ******************** serialize / deserialize (BLOB deep-copy) ******************** */ +/* Row format header magic byte. 
Rows starting with this byte use the + versioned format: [0xFE] [null_bytes_stored (2 LE)] [field_count (2 LE)]. + Old-format rows (written before instant DDL support) lack this header; + deserialize_row detects them by checking the first byte != 0xFE. + 0xFE is safe because the old format starts with the null bitmap whose + first byte encodes null flags for the first 8 columns (bit values 0-7) + and the table-level null bits -- in practice it is never exactly 0xFE + for tables with < 7 nullable columns. New rows are always written + with the header; the old-format path exists only so rows written + before this change remain readable. */ +static constexpr uchar ROW_HEADER_MAGIC = 0xFE; +static constexpr uint ROW_HEADER_SIZE = 5; /* magic(1) + null_bytes(2) + field_count(2) */ + const std::string &ha_tidesdb::serialize_row(const uchar *buf) { my_ptrdiff_t ptrdiff = (my_ptrdiff_t)(buf - table->record[0]); - /* Upper-bound packed size -- null_bytes + reclength covers field data. + /* Upper-bound packed size -- header + null_bytes + reclength + overhead. Add 2 bytes per field for length-prefix overhead -- Field_string::pack() (CHAR columns) prepends a 1-2 byte length that is not included in reclength. - - Without this margin the buffer overflows by up to - 2 * num_char_fields bytes, silently corrupting the heap. For BLOBs, add actual data sizes since Field_blob::pack() inlines data. */ - size_t est = table->s->null_bytes + table->s->reclength + 2 * table->s->fields; + size_t est = + ROW_HEADER_SIZE + table->s->null_bytes + table->s->reclength + 2 * table->s->fields; if (share->has_blobs) { for (uint i = 0; i < table->s->fields; i++) @@ -1651,6 +1956,16 @@ const std::string &ha_tidesdb::serialize_row(const uchar *buf) uchar *start = (uchar *)&row_buf_[0]; uchar *pos = start; + /* Row header -- enables instant ADD/DROP COLUMN by recording the + null bitmap size and field count at write time. 
*/ + *pos++ = ROW_HEADER_MAGIC; + uint16 nb = (uint16)table->s->null_bytes; + uint16 fc = (uint16)table->s->fields; + int2store(pos, nb); + pos += 2; + int2store(pos, fc); + pos += 2; + /* Null bitmap */ memcpy(pos, buf, table->s->null_bytes); pos += table->s->null_bytes; @@ -1671,13 +1986,29 @@ const std::string &ha_tidesdb::serialize_row(const uchar *buf) if (share->encrypted) { - /* We re-fetch the latest key version on each write so key rotation - is picked up without requiring table close/reopen (Fix #6). */ - uint cur_ver = encryption_key_get_latest_version(share->encryption_key_id); - if (cur_ver != ENCRYPTION_KEY_VERSION_INVALID) share->encryption_key_version = cur_ver; - row_buf_ = - tidesdb_encrypt_row(row_buf_, share->encryption_key_id, share->encryption_key_version); + /* We cache the encryption key version per-statement to avoid the + expensive encryption_key_get_latest_version() syscall on every + single row. The cache is invalidated at statement start + (enc_key_ver_valid_ = false in external_lock). */ + if (!enc_key_ver_valid_) + { + uint cur_ver = encryption_key_get_latest_version(share->encryption_key_id); + if (cur_ver != ENCRYPTION_KEY_VERSION_INVALID) + { + share->encryption_key_version = cur_ver; + cached_enc_key_ver_ = cur_ver; + } + else + { + cached_enc_key_ver_ = share->encryption_key_version; + } + enc_key_ver_valid_ = true; + } + /* We encrypt into enc_buf_ instead of replacing row_buf_, so that + row_buf_'s heap capacity is preserved across calls. 
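The 5-byte header written above can be exercised standalone. A sketch under the stated layout ([0xFE][null_bytes LE16][field_count LE16]); helper names are ours, and the little-endian stores mirror what MariaDB's int2store()/uint2korr() do in the real code.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

static const uint8_t kRowMagic = 0xFE;

static size_t row_header_write(uint8_t *dst, uint16_t null_bytes, uint16_t fields)
{
    dst[0] = kRowMagic;
    dst[1] = (uint8_t)(null_bytes & 0xFF);  // little-endian, like int2store
    dst[2] = (uint8_t)(null_bytes >> 8);
    dst[3] = (uint8_t)(fields & 0xFF);
    dst[4] = (uint8_t)(fields >> 8);
    return 5;
}

// Returns bytes consumed: 5 for new-format rows, 0 for old-format rows
// that start directly with the null bitmap (first byte != 0xFE).
static size_t row_header_read(const uint8_t *src, size_t len,
                              uint16_t *null_bytes, uint16_t *fields)
{
    if (len < 5 || src[0] != kRowMagic) return 0;
    *null_bytes = (uint16_t)(src[1] | ((uint16_t)src[2] << 8));
    *fields = (uint16_t)(src[3] | ((uint16_t)src[4] << 8));
    return 5;
}
```

Recording null_bytes and field_count at write time is what lets deserialize_row reconcile a row against a schema that has since gained or lost columns.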
*/ + enc_buf_ = tidesdb_encrypt_row(row_buf_, share->encryption_key_id, cached_enc_key_ver_); /* Empty string signals encryption failure -- caller must check */ + return enc_buf_; } return row_buf_; @@ -1688,18 +2019,61 @@ void ha_tidesdb::deserialize_row(uchar *buf, const uchar *data, size_t len) const uchar *from = data; const uchar *from_end = data + len; - /* Null bitmap */ - if (len < table->s->null_bytes) return; - memcpy(buf, from, table->s->null_bytes); - from += table->s->null_bytes; + /* Detect row format: new format starts with ROW_HEADER_MAGIC (0xFE), + old format starts with the null bitmap (pre-instant-DDL rows). */ + uint stored_null_bytes = table->s->null_bytes; + uint stored_fields = table->s->fields; + + if (len >= ROW_HEADER_SIZE && data[0] == ROW_HEADER_MAGIC) + { + /* New format: [magic(1)] [null_bytes(2)] [field_count(2)] [null_bitmap] [fields...] */ + from++; + stored_null_bytes = uint2korr(from); + from += 2; + stored_fields = uint2korr(from); + from += 2; + } + + /* Null bitmap -- copy the smaller of stored vs current. + When columns were added (stored_null_bytes < table->s->null_bytes), + fill the extra null bitmap bytes from the table's default record + so that new columns inherit their correct DEFAULT / NOT NULL state + rather than blindly marking them NULL. */ + if ((size_t)(from_end - from) < stored_null_bytes) return; + uint copy_nb = MY_MIN(stored_null_bytes, table->s->null_bytes); + memcpy(buf, from, copy_nb); + if (copy_nb < table->s->null_bytes) + memcpy(buf + copy_nb, table->s->default_values + copy_nb, table->s->null_bytes - copy_nb); + from += stored_null_bytes; + + /* Unpack fields. Only unpack up to MIN(stored_fields, current_fields). + If the row has more fields than the current schema (DROP COLUMN), + the extra packed data is simply skipped. + If the row has fewer fields (ADD COLUMN), fill the missing fields + from the table's default record so they get their DEFAULT value. 
*/ + uint unpack_count = MY_MIN(stored_fields, table->s->fields); + + /* Pre-fill default values for columns added after this row was written. + Copy each new field's bytes from default_values into buf so that + they have the correct DEFAULT even when the field is NOT NULL. */ + if (stored_fields < table->s->fields) + { + my_ptrdiff_t def_off = (my_ptrdiff_t)(table->s->default_values - table->record[0]); + for (uint i = stored_fields; i < table->s->fields; i++) + { + Field *f = table->field[i]; + uchar *to = buf + (uintptr_t)(f->ptr - table->record[0]); + const uchar *def_src = (const uchar *)f->ptr + def_off; + memcpy(to, def_src, f->pack_length()); + } + } - /* We unpack each non-null field using Field::unpack(). - Field::unpack() returns pointer past consumed bytes. */ my_ptrdiff_t ptrdiff = (my_ptrdiff_t)(buf - table->record[0]); - for (uint i = 0; i < table->s->fields; i++) + for (uint i = 0; i < unpack_count; i++) { Field *f = table->field[i]; if (f->is_real_null(ptrdiff)) continue; + if (from >= from_end) break; uchar *to = buf + (uintptr_t)(f->ptr - table->record[0]); const uchar *next = f->unpack(to, from, from_end); if (!next) break; @@ -1737,9 +2111,6 @@ void ha_tidesdb::deserialize_row(uchar *buf, const std::string &row) */ int ha_tidesdb::fetch_row_by_pk(tidesdb_txn_t *txn, const uchar *pk, uint pk_len, uchar *buf) { - long long t0 = 0; - if (unlikely(srv_debug_trace)) t0 = tdb_now_us(); - uchar dk[MAX_KEY_LENGTH + 2]; uint dk_len = build_data_key(pk, pk_len, dk); @@ -1748,27 +2119,25 @@ int ha_tidesdb::fetch_row_by_pk(tidesdb_txn_t *txn, const uchar *pk, uint pk_len int rc = tidesdb_txn_get(txn, share->cf, dk, dk_len, &value, &value_size); if (rc != TDB_SUCCESS) return HA_ERR_KEY_NOT_FOUND; - long long t1 = 0; - if (unlikely(srv_debug_trace)) t1 = tdb_now_us(); - if (!share->has_blobs && !share->encrypted) { + /* Zero-copy path -- deserialize directly from API buffer */ deserialize_row(buf, (const uchar *)value, value_size); tidesdb_free(value); } 
else { - /* We copy to last_row so BLOB data pointers remain valid */ - last_row.assign((const char *)value, value_size); + /* Copy into reusable get_val_buf_ (retains heap capacity across + calls) then free the API buffer. For BLOBs, last_row must + hold the data so Field_blob pointers remain valid. */ + get_val_buf_.assign((const char *)value, value_size); tidesdb_free(value); + last_row = get_val_buf_; deserialize_row(buf, last_row); } memcpy(current_pk_buf_, pk, pk_len); current_pk_len_ = pk_len; - if (unlikely(srv_debug_trace)) - TDB_TRACE("GET hit txn_get=%lld deser=%lldus val_sz=%zu", t1 - t0, tdb_now_us() - t1, - value_size); return 0; } @@ -1795,19 +2164,27 @@ time_t ha_tidesdb::compute_row_ttl(const uchar *buf) } } - /* Session TTL override (SET SESSION tidesdb_ttl=N) takes precedence - over the table-level default but not over per-row TTL_COL. */ + /* Session TTL override -- use cached value to avoid THDVAR + ha_thd() + on every row. The cache is populated once per statement in write_row + / update_row and invalidated in external_lock(F_UNLCK). */ if (ttl_seconds <= 0) { - ulonglong sess = THDVAR(ha_thd(), ttl); - if (sess > 0) ttl_seconds = (long long)sess; + if (cached_sess_ttl_ > 0) ttl_seconds = (long long)cached_sess_ttl_; } if (ttl_seconds <= 0 && share->default_ttl > 0) ttl_seconds = (long long)share->default_ttl; if (ttl_seconds <= 0) return TIDESDB_TTL_NONE; - return (time_t)(time(NULL) + ttl_seconds); + /* Use cached time(NULL) to avoid the vDSO/syscall per row. + 1-second granularity is more than sufficient for TTL. 
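The per-statement time cache described above is simple enough to isolate. A sketch with illustrative names (the handler keeps cached_time_ / cached_time_valid_ as members and clears the flag in external_lock(F_UNLCK)):

```cpp
#include <cassert>
#include <ctime>

// One time() call per statement instead of one per row; 1-second
// granularity is acceptable for TTL expiry.
struct TtlClock
{
    time_t cached = 0;
    bool valid = false;

    time_t now()
    {
        if (!valid)
        {
            cached = time(nullptr);  // single vDSO/syscall hit per statement
            valid = true;
        }
        return cached;
    }
    void invalidate() { valid = false; }  // e.g. at external_lock(F_UNLCK)
};

static time_t expiry_for(TtlClock &clk, long long ttl_seconds)
{
    return (time_t)(clk.now() + ttl_seconds);
}
```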
*/ + if (!cached_time_valid_) + { + cached_time_ = time(NULL); + cached_time_valid_ = true; + } + + return (time_t)(cached_time_ + ttl_seconds); } /* ******************** iter_read_current ******************** */ @@ -1861,8 +2238,6 @@ int ha_tidesdb::iter_read_current(uchar *buf) int ha_tidesdb::write_row(const uchar *buf) { DBUG_ENTER("ha_tidesdb::write_row"); - long long t0 = 0, t1 = 0, t2 = 0, t3 = 0, t4 = 0, t5 = 0; - if (unlikely(srv_debug_trace)) t0 = tdb_now_us(); /* We need all columns readable for PK extraction, secondary index key building, serialization, and TTL computation. */ @@ -1914,10 +2289,6 @@ int ha_tidesdb::write_row(const uchar *buf) } const uint8_t *row_ptr = (const uint8_t *)row_data.data(); size_t row_len = row_data.size(); - if (unlikely(srv_debug_trace)) - { - t1 = tdb_now_us(); - } /* Lazy txn -- we ensure stmt_txn exists on first data access */ { @@ -1930,23 +2301,37 @@ int ha_tidesdb::write_row(const uchar *buf) } tidesdb_txn_t *txn = stmt_txn; stmt_txn_dirty = true; + + /* Cache THD and trx once -- avoids repeated ha_thd() virtual call + and thd_get_ha_data() indirect lookup on every row. */ + THD *thd = ha_thd(); + tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, ht); + if (trx) { - tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(ha_thd(), ht); - if (trx) - { - trx->dirty = true; - trx->stmt_was_dirty = true; - } + trx->dirty = true; + trx->stmt_was_dirty = true; } - if (unlikely(srv_debug_trace)) + + /* Cache THDVAR lookups once per statement -- avoids repeated + thd + offset computation on every row. */ + if (!cached_thdvars_valid_) { - t2 = tdb_now_us(); + cached_skip_unique_ = THDVAR(thd, skip_unique_check); + cached_sess_ttl_ = THDVAR(thd, ttl); + cached_thdvars_valid_ = true; } /* We check PK uniqueness before inserting (TidesDB put overwrites silently). - Skip when REPLACE INTO or INSERT ON DUPLICATE KEY UPDATE -- the server - handles the conflict by deleting the old row first (HA_EXTRA_WRITE_CAN_REPLACE). 
*/ - if (share->has_user_pk && !write_can_replace_) + IODKU needs HA_ERR_FOUND_DUPP_KEY so the server can run the UPDATE clause. + REPLACE INTO also needs it when secondary indexes exist (old index entries + must be cleaned up via delete+reinsert). When write_can_replace_ is set + and the table has no secondary indexes, we skip the dup check entirely -- + tidesdb_txn_put will overwrite the old value, which is exactly what REPLACE + wants, saving a full point-lookup per row. + SET SESSION tidesdb_skip_unique_check=1 (bulk load) also bypasses this. */ + bool skip_unique = cached_skip_unique_; + if (share->has_user_pk && !skip_unique && + !(write_can_replace_ && share->num_secondary_indexes == 0)) { uint8_t *dup_val = NULL; size_t dup_len = 0; @@ -1955,6 +2340,8 @@ int ha_tidesdb::write_row(const uchar *buf) { tidesdb_free(dup_val); errkey = lookup_errkey = share->pk_index; + /* Populate dup_ref so rnd_pos() can find the conflicting row */ + memcpy(dup_ref, pk, pk_len); tmp_restore_column_map(&table->read_set, old_map); DBUG_RETURN(HA_ERR_FOUND_DUPP_KEY); } @@ -1965,9 +2352,16 @@ int ha_tidesdb::write_row(const uchar *buf) } } - /* We check UNIQUE secondary index uniqueness via prefix scan */ - if (share->num_secondary_indexes > 0 && !write_can_replace_) + /* We check UNIQUE secondary index uniqueness. + Cached dup-check iterators avoid the catastrophically expensive + tidesdb_iter_new() (O(num_sstables) merge-heap construction) on + every single INSERT. The iterator per unique index is created + once and reused via seek() across rows within the same txn. */ + if (share->num_secondary_indexes > 0 && !skip_unique) { + /* trx already cached at top of write_row */ + uint64_t cur_gen = trx ? 
trx->txn_generation : 0; + for (uint i = 0; i < table->s->keys; i++) { if (share->has_user_pk && i == share->pk_index) continue; @@ -1977,9 +2371,27 @@ int ha_tidesdb::write_row(const uchar *buf) uchar idx_prefix[MAX_KEY_LENGTH]; uint idx_prefix_len = make_comparable_key( &table->key_info[i], buf, table->key_info[i].user_defined_key_parts, idx_prefix); - tidesdb_iter_t *dup_iter = NULL; - if (tidesdb_iter_new(txn, share->idx_cfs[i], &dup_iter) != TDB_SUCCESS || !dup_iter) - continue; + + /* Get or create cached dup-check iterator for this index. + Invalidate if the txn changed (commit/reset frees txn ops + that the iterator's MERGE_SOURCE_TXN_OPS depends on). */ + tidesdb_iter_t *dup_iter = dup_iter_cache_[i]; + if (dup_iter && (dup_iter_txn_[i] != txn || dup_iter_txn_gen_[i] != cur_gen)) + { + tidesdb_iter_free(dup_iter); + dup_iter = NULL; + dup_iter_cache_[i] = NULL; + } + if (!dup_iter) + { + if (tidesdb_iter_new(txn, share->idx_cfs[i], &dup_iter) != TDB_SUCCESS || !dup_iter) + continue; + dup_iter_cache_[i] = dup_iter; + dup_iter_txn_[i] = txn; + dup_iter_txn_gen_[i] = cur_gen; + dup_iter_count_++; + } + tidesdb_iter_seek(dup_iter, idx_prefix, idx_prefix_len); if (tidesdb_iter_valid(dup_iter)) { @@ -1988,27 +2400,26 @@ int ha_tidesdb::write_row(const uchar *buf) if (tidesdb_iter_key(dup_iter, &fk, &fks) == TDB_SUCCESS && fks >= idx_prefix_len && memcmp(fk, idx_prefix, idx_prefix_len) == 0) { - tidesdb_iter_free(dup_iter); + /* Extract PK suffix from the index key for dup_ref */ + size_t dup_pk_len = fks - idx_prefix_len; + if (dup_pk_len > 0 && dup_pk_len <= ref_length) + memcpy(dup_ref, fk + idx_prefix_len, dup_pk_len); errkey = lookup_errkey = i; tmp_restore_column_map(&table->read_set, old_map); DBUG_RETURN(HA_ERR_FOUND_DUPP_KEY); } } - tidesdb_iter_free(dup_iter); } } - /* We compute TTL when the table has TTL configured or the session overrides it */ + /* We compute TTL when the table has TTL configured or the session overrides it. 
+ Uses cached_sess_ttl_ to avoid THDVAR + ha_thd() per row. */ time_t row_ttl = - (share->has_ttl || THDVAR(ha_thd(), ttl) > 0) ? compute_row_ttl(buf) : TIDESDB_TTL_NONE; + (share->has_ttl || cached_sess_ttl_ > 0) ? compute_row_ttl(buf) : TIDESDB_TTL_NONE; /* We insert data row */ int rc = tidesdb_txn_put(txn, share->cf, dk, dk_len, row_ptr, row_len, row_ttl); if (rc != TDB_SUCCESS) goto err; - if (unlikely(srv_debug_trace)) - { - t3 = tdb_now_us(); - } /* We maintain secondary indexes */ memcpy(current_pk_buf_, pk, pk_len); @@ -2026,17 +2437,6 @@ int ha_tidesdb::write_row(const uchar *buf) rc = tidesdb_txn_put(txn, share->idx_cfs[i], ik, ik_len, &tdb_empty_val, 1, row_ttl); if (rc != TDB_SUCCESS) goto err; } - if (unlikely(srv_debug_trace)) - { - t4 = tdb_now_us(); - } - - if (unlikely(srv_debug_trace)) - { - t5 = tdb_now_us(); - TDB_TRACE("pk+ser=%lld ensure_txn=%lld txn_put=%lld sec_idx=%lld total=%lldus row_len=%zu", - t1 - t0, t2 - t1, t3 - t2, t4 - t3, t5 - t0, row_len); - } /* We track ops for bulk insert batching (1 data + N secondary index puts) */ if (in_bulk_insert_) @@ -2044,24 +2444,45 @@ int ha_tidesdb::write_row(const uchar *buf) bulk_insert_ops_ += 1 + share->num_secondary_indexes; if (bulk_insert_ops_ >= TIDESDB_BULK_INSERT_BATCH_OPS) { - /* Mid-txn commit to stay under TDB_MAX_TXN_OPS and bound memory */ - tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(ha_thd(), ht); + /* Mid-txn commit to stay under TDB_MAX_TXN_OPS and bound memory. + Use tidesdb_txn_reset() instead of free+recreate to preserve + the txn's internal buffers (ops array, arenas, CF arrays). + trx already cached at top of write_row. 
*/ if (trx && trx->txn) { int crc = tidesdb_txn_commit(trx->txn); if (crc != TDB_SUCCESS) sql_print_warning("TIDESDB: bulk insert mid-commit failed rc=%d", crc); - tidesdb_txn_free(trx->txn); - trx->txn = NULL; - /* Begin a fresh txn for the remaining rows */ - crc = - tidesdb_txn_begin_with_isolation(tdb_global, share->isolation_level, &trx->txn); - if (crc != TDB_SUCCESS) + /* Reset reuses the txn with READ_COMMITTED -- bulk inserts + don't need snapshot consistency across batches and higher + levels would cause unbounded read-set growth. */ + int rrc = tidesdb_txn_reset(trx->txn, TDB_ISOLATION_READ_COMMITTED); + if (rrc != TDB_SUCCESS) { - tmp_restore_column_map(&table->read_set, old_map); - DBUG_RETURN(tdb_rc_to_ha(crc, "bulk_insert txn_begin")); + /* Reset failed -- fall back to free+recreate */ + tidesdb_txn_free(trx->txn); + trx->txn = NULL; + crc = tidesdb_txn_begin_with_isolation(tdb_global, TDB_ISOLATION_READ_COMMITTED, + &trx->txn); + if (crc != TDB_SUCCESS) + { + tmp_restore_column_map(&table->read_set, old_map); + DBUG_RETURN(tdb_rc_to_ha(crc, "bulk_insert txn_begin")); + } } stmt_txn = trx->txn; + trx->txn_generation++; + /* Iterators depend on MERGE_SOURCE_TXN_OPS which are cleared + by reset -- invalidate all cached iterators. 
*/ + if (scan_iter) + { + tidesdb_iter_free(scan_iter); + scan_iter = NULL; + scan_iter_cf_ = NULL; + scan_iter_txn_ = NULL; + } + free_dup_iter_cache(); + scan_txn = trx->txn; } bulk_insert_ops_ = 0; } @@ -2115,8 +2536,6 @@ void ha_tidesdb::get_auto_increment(ulonglong offset, ulonglong increment, int ha_tidesdb::rnd_init(bool scan) { DBUG_ENTER("ha_tidesdb::rnd_init"); - long long ri_t0 = 0; - if (unlikely(srv_debug_trace)) ri_t0 = tdb_now_us(); current_pk_len_ = 0; @@ -2127,7 +2546,8 @@ int ha_tidesdb::rnd_init(bool scan) } scan_txn = stmt_txn; - tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(ha_thd(), ht); + THD *thd = ha_thd(); + tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, ht); uint64_t cur_gen = trx ? trx->txn_generation : 0; if (scan_iter && @@ -2151,12 +2571,14 @@ int ha_tidesdb::rnd_init(bool scan) scan_iter_txn_ = scan_txn; scan_iter_txn_gen_ = cur_gen; } /* We seek past meta keys to the first data key */ uint8_t data_prefix = KEY_NS_DATA; tidesdb_iter_seek(scan_iter, &data_prefix, 1); - if (unlikely(srv_debug_trace)) TDB_TRACE("iter_new+seek took %lldus", tdb_now_us() - ri_t0); DBUG_RETURN(0); } @@ -2174,13 +2596,13 @@ int ha_tidesdb::rnd_end() int ha_tidesdb::rnd_next(uchar *buf) { DBUG_ENTER("ha_tidesdb::rnd_next"); - long long rn_t0 = 0; - if (unlikely(srv_debug_trace)) rn_t0 = tdb_now_us(); int ret = iter_read_current(buf); if (ret == 0) tidesdb_iter_next(scan_iter); - if (unlikely(srv_debug_trace)) TDB_TRACE("ret=%d took=%lldus", ret, tdb_now_us() - rn_t0); DBUG_RETURN(ret); } @@ -2213,6 +2635,7 @@ int ha_tidesdb::rnd_pos(uchar *buf, uchar *pos) int ha_tidesdb::index_init(uint idx, bool sorted) { DBUG_ENTER("ha_tidesdb::index_init"); + THD *thd = ha_thd(); active_index = idx; idx_pk_exact_done_ = false; scan_dir_ = DIR_NONE; @@ -2248,15 +2671,12 @@ int ha_tidesdb::index_init(uint idx, bool sorted) iterator holds a stale txn pointer and must be recreated.
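[Editor's aside] The iterator-reuse test can be shown in isolation. A minimal sketch with stand-in types (not the real tidesdb structs): the cached iterator records the txn pointer and the generation counter at creation time, and reuse is allowed only when both still match.

```cpp
#include <cassert>
#include <cstdint>

struct Txn { int dummy; }; /* stand-in for tidesdb_txn_t */

struct CachedIter
{
    const Txn *made_on_txn; /* txn pointer recorded at iter_new() time */
    uint64_t made_on_gen;   /* trx->txn_generation recorded at that time */
};

/* Reuse only when BOTH the txn pointer and the monotonic generation
   match. The pointer alone is not enough: after commit frees a txn,
   the allocator may hand the same address to the next txn, and a
   pointer-only check would falsely validate a stale iterator. */
bool can_reuse(const CachedIter &it, const Txn *cur_txn, uint64_t cur_gen)
{
    return it.made_on_txn == cur_txn && it.made_on_gen == cur_gen;
}
```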
We compare both the pointer and a monotonic generation counter because the allocator can reuse the same address for a new txn. */ - tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(ha_thd(), ht); + tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, ht); uint64_t cur_gen = trx ? trx->txn_generation : 0; if (scan_iter && (scan_iter_cf_ != target_cf || scan_iter_txn_ != scan_txn || scan_iter_txn_gen_ != cur_gen)) { - TDB_TRACE("idx=%u iter INVALIDATED (cf %p->%p txn %p->%p gen %lu->%lu)", idx, scan_iter_cf_, - target_cf, scan_iter_txn_, scan_txn, (unsigned long)scan_iter_txn_gen_, - (unsigned long)cur_gen); tidesdb_iter_free(scan_iter); scan_iter = NULL; scan_iter_cf_ = NULL; @@ -2284,7 +2704,8 @@ int ha_tidesdb::ensure_scan_iter() { scan_iter_cf_ = scan_cf_; scan_iter_txn_ = scan_txn; - tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(ha_thd(), ht); + THD *thd = ha_thd(); + tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, ht); scan_iter_txn_gen_ = trx ? trx->txn_generation : 0; return 0; } @@ -2322,14 +2743,6 @@ int ha_tidesdb::index_read_map(uchar *buf, const uchar *key, key_part_map keypar memcpy(idx_search_comp_, comp_key, comp_len); idx_search_comp_len_ = comp_len; - if (unlikely(srv_debug_trace)) - { - char hx[128]; - tdb_hex(comp_key, comp_len, hx, sizeof(hx)); - TDB_TRACE("idx=%u flag=%d comp_len=%u comp_key=%s", active_index, (int)find_flag, comp_len, - hx); - } - bool is_pk = share->has_user_pk && active_index == share->pk_index; if (is_pk) @@ -2354,7 +2767,6 @@ int ha_tidesdb::index_read_map(uchar *buf, const uchar *key, key_part_map keypar We need an iterator-based prefix scan -- seek to the first matching data key and let index_next_same iterate through all entries sharing this prefix. 
*/ - TDB_TRACE("partial PK prefix scan comp_len=%u full=%u", comp_len, full_pk_comp_len); { int irc = ensure_scan_iter(); if (irc) DBUG_RETURN(irc); @@ -2491,13 +2903,6 @@ int ha_tidesdb::index_read_map(uchar *buf, const uchar *key, key_part_map keypar if (iks <= idx_col_len) DBUG_RETURN(HA_ERR_KEY_NOT_FOUND); - if (unlikely(srv_debug_trace)) - { - char hx[128]; - tdb_hex(ik, (uint)iks, hx, sizeof(hx)); - TDB_TRACE("sec found iks=%zu idx_col_len=%u ik=%s", iks, idx_col_len, hx); - } - /* ICP -- we evaluate pushed condition on index columns before PK lookup */ check_result_t icp = icp_check_secondary(ik, iks, active_index, buf); if (icp == CHECK_NEG) @@ -2807,24 +3212,9 @@ int ha_tidesdb::index_next_same(uchar *buf, const uchar *key, uint keylen) if (iks < idx_search_comp_len_ || memcmp(ik, idx_search_comp_, idx_search_comp_len_) != 0) { - if (unlikely(srv_debug_trace)) - { - char hx1[128], hx2[128]; - tdb_hex(ik, (uint)iks, hx1, sizeof(hx1)); - tdb_hex(idx_search_comp_, idx_search_comp_len_, hx2, sizeof(hx2)); - TDB_TRACE("prefix MISMATCH ik(%zu)=%s search(%u)=%s", iks, hx1, - idx_search_comp_len_, hx2); - } DBUG_RETURN(HA_ERR_END_OF_FILE); } - if (unlikely(srv_debug_trace)) - { - char hx[128]; - tdb_hex(ik, (uint)iks, hx, sizeof(hx)); - TDB_TRACE("prefix MATCH ik(%zu)=%s", iks, hx); - } - if (iks <= idx_col_len) DBUG_RETURN(HA_ERR_END_OF_FILE); /* ICP -- we evaluate pushed condition before PK lookup */ @@ -2852,8 +3242,6 @@ int ha_tidesdb::index_next_same(uchar *buf, const uchar *key, uint keylen) int ha_tidesdb::update_row(const uchar *old_data, const uchar *new_data) { DBUG_ENTER("ha_tidesdb::update_row"); - long long ur_t0 = 0; - if (unlikely(srv_debug_trace)) ur_t0 = tdb_now_us(); MY_BITMAP *old_map = tmp_use_all_columns(table, &table->read_set); @@ -2897,12 +3285,22 @@ int ha_tidesdb::update_row(const uchar *old_data, const uchar *new_data) } } + /* Populate THDVAR cache if not yet done this statement */ + if (!cached_thdvars_valid_) + { + THD *thd = 
ha_thd(); + cached_skip_unique_ = THDVAR(thd, skip_unique_check); + cached_sess_ttl_ = THDVAR(thd, ttl); + cached_thdvars_valid_ = true; + } + int rc; bool pk_changed = (old_pk_len != new_pk_len || memcmp(old_pk, new_pk, old_pk_len) != 0); - /* We compute TTL when the table has TTL configured or the session overrides it */ - time_t row_ttl = (share->has_ttl || THDVAR(ha_thd(), ttl) > 0) ? compute_row_ttl(new_data) - : TIDESDB_TTL_NONE; + /* We compute TTL when the table has TTL configured or the session overrides it. + Uses cached_sess_ttl_ to avoid THDVAR + ha_thd() per row. */ + time_t row_ttl = + (share->has_ttl || cached_sess_ttl_ > 0) ? compute_row_ttl(new_data) : TIDESDB_TTL_NONE; /* If PK changed, we delete old entry and insert new */ if (pk_changed) @@ -2924,8 +3322,10 @@ int ha_tidesdb::update_row(const uchar *old_data, const uchar *new_data) redundant txn_delete + txn_put pairs. */ if (share->num_secondary_indexes > 0) { - uchar old_ik[MAX_KEY_LENGTH * 2 + 2]; - uchar new_ik[MAX_KEY_LENGTH * 2 + 2]; + /* Use handler-owned buffers to avoid per-row heap allocation + and keep the stack frame within -Wframe-larger-than limits. */ + uchar *old_ik = upd_old_ik_; + uchar *new_ik = upd_new_ik_; for (uint i = 0; i < table->s->keys; i++) { if (share->has_user_pk && i == share->pk_index) continue; @@ -2958,8 +3358,6 @@ int ha_tidesdb::update_row(const uchar *old_data, const uchar *new_data) memcpy(current_pk_buf_, new_pk, new_pk_len); current_pk_len_ = new_pk_len; - if (unlikely(srv_debug_trace)) - TDB_TRACE("pk_changed=%d took=%lldus", (int)pk_changed, tdb_now_us() - ur_t0); /* Commit happens in external_lock(F_UNLCK). 
*/ tmp_restore_column_map(&table->read_set, old_map); DBUG_RETURN(0); @@ -2974,8 +3372,6 @@ int ha_tidesdb::update_row(const uchar *old_data, const uchar *new_data) int ha_tidesdb::delete_row(const uchar *buf) { DBUG_ENTER("ha_tidesdb::delete_row"); - long long dr_t0 = 0; - if (unlikely(srv_debug_trace)) dr_t0 = tdb_now_us(); MY_BITMAP *old_map = tmp_use_all_columns(table, &table->read_set); @@ -3025,8 +3421,6 @@ int ha_tidesdb::delete_row(const uchar *buf) } } - if (unlikely(srv_debug_trace)) TDB_TRACE("took=%lldus", tdb_now_us() - dr_t0); - tmp_restore_column_map(&table->read_set, old_map); DBUG_RETURN(0); } @@ -3037,8 +3431,8 @@ int ha_tidesdb::delete_all_rows(void) { DBUG_ENTER("ha_tidesdb::delete_all_rows"); - /* We free cached iterator before dropping/recreating CFs. - The iterator holds refs to SSTables in the CF being dropped. */ + /* We free cached iterators before dropping/recreating CFs. + The iterators hold refs to SSTables in the CFs being dropped. */ if (scan_iter) { tidesdb_iter_free(scan_iter); @@ -3046,6 +3440,7 @@ int ha_tidesdb::delete_all_rows(void) scan_iter_cf_ = NULL; scan_iter_txn_ = NULL; } + free_dup_iter_cache(); /* We discard the connection txn before drop/recreate. 
The txn may have buffered INSERT/UPDATE ops from earlier statements; committing them @@ -3122,7 +3517,6 @@ int ha_tidesdb::delete_all_rows(void) void ha_tidesdb::start_bulk_insert(ha_rows rows, uint flags) { - TDB_TRACE("rows=%llu flags=%u", (unsigned long long)rows, flags); in_bulk_insert_ = true; bulk_insert_ops_ = 0; } @@ -3156,14 +3550,12 @@ Item *ha_tidesdb::idx_cond_push(uint keyno, Item *idx_cond) int ha_tidesdb::info(uint flag) { DBUG_ENTER("ha_tidesdb::info"); - long long ti0 = 0; - if (unlikely(srv_debug_trace)) ti0 = tdb_now_us(); if (share) ref_length = share->pk_key_len; if ((flag & (HA_STATUS_VARIABLE | HA_STATUS_CONST)) && share && share->cf) { - long long now = tdb_now_us(); + long long now = (long long)microsecond_interval_timer(); long long last = share->stats_refresh_us.load(std::memory_order_relaxed); if (now - last > TIDESDB_STATS_REFRESH_US && share->stats_refresh_us.compare_exchange_weak(last, now, std::memory_order_relaxed)) @@ -3228,33 +3620,42 @@ int ha_tidesdb::info(uint flag) stats.update_time = share->update_time.load(std::memory_order_relaxed); } - /* HA_STATUS_CONST -- set rec_per_key for index selectivity estimates. - For PK (unique) -- rec_per_key = 1. - For secondary indexes -- we estimate from total_keys / distinct count. - Without per-index distinct-key stats from TidesDB, we approximate - using total_keys (worst case = every key is unique = 1). */ + /* HA_STATUS_CONST -- set rec_per_key for index selectivity estimates. + PK and UNIQUE indexes: rec_per_key = 1. + Non-unique secondary indexes: use cached_rec_per_key if populated + by ANALYZE TABLE, else use a heuristic (total_keys / 10). */ if ((flag & HA_STATUS_CONST) && share) { for (uint i = 0; i < table->s->keys; i++) { KEY *key = &table->key_info[i]; bool is_pk = share->has_user_pk && i == share->pk_index; + bool is_unique = (key->flags & HA_NOSAME); + ulong cached_rpk = + (i < MAX_KEY) ? 
share->cached_rec_per_key[i].load(std::memory_order_relaxed) : 0; for (uint j = 0; j < key->ext_key_parts; j++) { - if (is_pk || (j + 1 == key->user_defined_key_parts)) - key->rec_per_key[j] = 1; /* unique or last part -- 1 */ + if (is_pk || is_unique) + { + key->rec_per_key[j] = 1; + } + else if (j + 1 == key->user_defined_key_parts) + { + /* Last user key part of a non-unique index. + Use ANALYZE-sampled value if available, else heuristic. */ + if (cached_rpk > 0) + key->rec_per_key[j] = cached_rpk; + else + key->rec_per_key[j] = (ulong)MY_MAX(stats.records / 10 + 1, 1); + } else + { key->rec_per_key[j] = (ulong)MY_MIN(stats.records / 4 + 1, stats.records); + } } } } - if (unlikely(srv_debug_trace)) - TDB_TRACE("flag=0x%x records=%llu data=%llu idx=%llu mrl=%lu took=%lldus", flag, - (unsigned long long)stats.records, (unsigned long long)stats.data_file_length, - (unsigned long long)stats.index_file_length, stats.mean_rec_length, - tdb_now_us() - ti0); - DBUG_RETURN(0); } @@ -3318,14 +3719,27 @@ int ha_tidesdb::analyze(THD *thd, HA_CHECK_OPT *check_opt) tidesdb_free_stats(st); - /* Secondary index CF stats */ - for (uint i = 0; i < share->idx_cfs.size(); i++) + /* Secondary index CF stats + cardinality sampling. + We iterate each secondary index CF, counting distinct index-column + prefixes (everything before the PK suffix) to compute rec_per_key. 
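[Editor's aside] The cardinality estimate in the sampling pass boils down to ratio arithmetic: sample up to a limit, count distinct prefixes, extrapolate the distinct count to the full index, then divide. A sketch of the same math as one illustrative helper (not the handler's code):

```cpp
#include <cassert>
#include <cstdint>

/* rec_per_key = total / estimated_distinct, never below 1.
   When only part of the index was sampled, the distinct count is
   scaled up by the sampling ratio first. */
unsigned long estimate_rec_per_key(uint64_t total_keys, uint64_t sampled,
                                   uint64_t distinct)
{
    if (sampled == 0 || distinct == 0) return 1;
    uint64_t total = total_keys > 0 ? total_keys : sampled;
    if (sampled < total)
    {
        /* Extrapolate: distinct_full ~= distinct * (total / sampled) */
        double ratio = (double)total / (double)sampled;
        uint64_t est_distinct = (uint64_t)((double)distinct * ratio);
        if (est_distinct == 0) est_distinct = 1;
        unsigned long rpk = (unsigned long)(total / est_distinct);
        return rpk ? rpk : 1;
    }
    /* Sampled the whole index -- exact ratio */
    unsigned long rpk = (unsigned long)(sampled / distinct);
    return rpk ? rpk : 1;
}
```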
*/ { - if (!share->idx_cfs[i]) continue; + int erc = ensure_stmt_txn(); + if (erc) + { + DBUG_RETURN(HA_ADMIN_OK); /* non-fatal -- stats just won't be updated */ + } + } + for (uint i = 0; i < table->s->keys; i++) + { + if (share->has_user_pk && i == share->pk_index) continue; + if (i >= share->idx_cfs.size() || !share->idx_cfs[i]) continue; + KEY *ki = &table->key_info[i]; tidesdb_stats_t *ist = NULL; + uint64_t idx_total_keys = 0; if (tidesdb_get_stats(share->idx_cfs[i], &ist) == TDB_SUCCESS && ist) { + idx_total_keys = ist->total_keys; push_warning_printf(thd, Sql_condition::WARN_LEVEL_NOTE, ER_UNKNOWN_ERROR, "TIDESDB: idx CF '%s' keys=%llu data_size=%llu bytes" " levels=%d", @@ -3333,8 +3747,81 @@ int ha_tidesdb::analyze(THD *thd, HA_CHECK_OPT *check_opt) (unsigned long long)ist->total_data_size, ist->num_levels); tidesdb_free_stats(ist); } + + /* Sample the index to estimate distinct prefix count. + For unique indexes rec_per_key is always 1. + For non-unique indexes, scan up to ANALYZE_SAMPLE_LIMIT entries + and count distinct index-column prefixes. */ + if (ki->flags & HA_NOSAME) + { + share->cached_rec_per_key[i].store(1, std::memory_order_relaxed); + continue; + } + + uint idx_prefix_len = share->idx_comp_key_len[i]; + if (idx_prefix_len == 0) continue; + + tidesdb_iter_t *ait = NULL; + if (tidesdb_iter_new(stmt_txn, share->idx_cfs[i], &ait) != TDB_SUCCESS || !ait) continue; + + tidesdb_iter_seek_to_first(ait); + + static constexpr uint64_t ANALYZE_SAMPLE_LIMIT = 100000; + uint64_t sampled = 0, distinct = 0; + uchar prev_prefix[MAX_KEY_LENGTH]; + uint prev_len = 0; + + while (tidesdb_iter_valid(ait) && sampled < ANALYZE_SAMPLE_LIMIT) + { + uint8_t *ik = NULL; + size_t iks = 0; + if (tidesdb_iter_key(ait, &ik, &iks) != TDB_SUCCESS) break; + + uint cmp_len = (iks >= idx_prefix_len) ? 
idx_prefix_len : (uint)iks; + if (sampled == 0 || cmp_len != prev_len || memcmp(ik, prev_prefix, cmp_len) != 0) + { + distinct++; + prev_len = cmp_len; + memcpy(prev_prefix, ik, cmp_len); + } + sampled++; + tidesdb_iter_next(ait); + } + tidesdb_iter_free(ait); + + if (distinct > 0) + { + /* Use sampled ratio to extrapolate for the full index */ + uint64_t total = (idx_total_keys > 0) ? idx_total_keys : sampled; + if (sampled < total) + { + /* Extrapolate: distinct_full ≈ distinct * (total / sampled) */ + double ratio = (double)total / (double)sampled; + uint64_t est_distinct = (uint64_t)(distinct * ratio); + if (est_distinct == 0) est_distinct = 1; + ulong rpk = (ulong)(total / est_distinct); + if (rpk == 0) rpk = 1; + share->cached_rec_per_key[i].store(rpk, std::memory_order_relaxed); + } + else + { + /* We sampled everything */ + ulong rpk = (ulong)(sampled / distinct); + if (rpk == 0) rpk = 1; + share->cached_rec_per_key[i].store(rpk, std::memory_order_relaxed); + } + + push_warning_printf(thd, Sql_condition::WARN_LEVEL_NOTE, ER_UNKNOWN_ERROR, + "TIDESDB: idx '%s' sampled=%llu distinct=%llu rec_per_key=%lu", + ki->name.str, (unsigned long long)sampled, + (unsigned long long)distinct, + share->cached_rec_per_key[i].load(std::memory_order_relaxed)); + } } + /* Re-run info to propagate the new rec_per_key values */ + info(HA_STATUS_CONST); + DBUG_RETURN(HA_ADMIN_OK); } @@ -3536,14 +4023,30 @@ ha_rows ha_tidesdb::records_in_range(uint inx, const key_range *min_key, const k ha_rows est = (ha_rows)(total * fraction); if (est == 0) est = 1; /* never return 0 -- optimizer treats it as "empty" */ - if (unlikely(srv_debug_trace)) - TDB_TRACE("idx=%u range_cost=%.2f full_cost=%.2f fraction=%.4f est=%llu total=%llu", inx, - range_cost, full_cost, fraction, (unsigned long long)est, - (unsigned long long)total); - return est; } +ulong ha_tidesdb::index_flags(uint idx, uint part, bool all_parts) const +{ + ulong flags = HA_READ_NEXT | HA_READ_PREV | HA_READ_ORDER | 
HA_READ_RANGE; + if (table_share && table_share->primary_key != MAX_KEY && idx == table_share->primary_key) + flags |= HA_CLUSTERED_INDEX; + else + flags |= HA_KEYREAD_ONLY; + return flags; +} + +const char *ha_tidesdb::index_type(uint key_number) +{ + if (key_number < table->s->keys) + { + ha_index_option_struct *iopts = table->key_info[key_number].option_struct; + if (iopts && iopts->use_btree) return "BTREE"; + } + ha_table_option_struct *opts = TDB_TABLE_OPTIONS(table); + return (opts && opts->use_btree) ? "BTREE" : "LSM"; +} + int ha_tidesdb::extra(enum ha_extra_function operation) { switch (operation) @@ -3555,9 +4058,14 @@ int ha_tidesdb::extra(enum ha_extra_function operation) keyread_only_ = false; break; case HA_EXTRA_WRITE_CAN_REPLACE: - case HA_EXTRA_INSERT_WITH_UPDATE: + /* REPLACE INTO -- skip uniqueness check, overwrite silently */ write_can_replace_ = true; break; + case HA_EXTRA_INSERT_WITH_UPDATE: + /* INSERT ON DUPLICATE KEY UPDATE -- the server needs write_row + to return HA_ERR_FOUND_DUPP_KEY so it can switch to update_row. + Do NOT set write_can_replace_ here. */ + break; case HA_EXTRA_WRITE_CANNOT_REPLACE: write_can_replace_ = false; break; @@ -3567,9 +4075,6 @@ int ha_tidesdb::extra(enum ha_extra_function operation) default: break; } - if (unlikely(srv_debug_trace) && - (operation == HA_EXTRA_KEYREAD || operation == HA_EXTRA_NO_KEYREAD)) - TDB_TRACE("op=%d keyread=%d", (int)operation, (int)keyread_only_); return 0; } @@ -3582,26 +4087,29 @@ int ha_tidesdb::extra(enum ha_extra_function operation) */ int ha_tidesdb::ensure_stmt_txn() { - if (stmt_txn) return 0; - long long t0 = 0; - if (unlikely(srv_debug_trace)) t0 = tdb_now_us(); + if (stmt_txn) + { + return 0; + } THD *thd = ha_thd(); - bool in_multi_stmt_txn = thd_test_options(thd, OPTION_NOT_AUTOCOMMIT | OPTION_BEGIN); - tidesdb_isolation_level_t effective_iso = - in_multi_stmt_txn ? 
share->isolation_level : TDB_ISOLATION_READ_COMMITTED; - /* Force READ_COMMITTED for DDL to avoid unbounded read-set growth */ + /* Resolve isolation level from MariaDB session (SET TRANSACTION ISOLATION + LEVEL / tx_isolation), falling back to table-level TidesDB SNAPSHOT + when the session uses the default REPEATABLE_READ. + Force READ_COMMITTED for DDL to avoid unbounded read-set growth. */ int sql_cmd = thd_sql_command(thd); + tidesdb_isolation_level_t effective_iso; if (sql_cmd == SQLCOM_ALTER_TABLE || sql_cmd == SQLCOM_CREATE_INDEX || sql_cmd == SQLCOM_DROP_INDEX || sql_cmd == SQLCOM_TRUNCATE || sql_cmd == SQLCOM_OPTIMIZE || sql_cmd == SQLCOM_CREATE_TABLE || sql_cmd == SQLCOM_DROP_TABLE) effective_iso = TDB_ISOLATION_READ_COMMITTED; + else + effective_iso = resolve_effective_isolation(thd, share->isolation_level); tidesdb_trx_t *trx = get_or_create_trx(thd, ht, effective_iso); if (!trx) return HA_ERR_OUT_OF_MEM; stmt_txn = trx->txn; - if (unlikely(srv_debug_trace)) TDB_TRACE("txn from conn ctx took %lldus", tdb_now_us() - t0); return 0; } @@ -3615,36 +4123,32 @@ int ha_tidesdb::external_lock(THD *thd, int lock_type) Get or create the per-connection txn and register with the server's transaction coordinator (InnoDB pattern). - For autocommit single-statement transactions we downgrade - the isolation level to READ_COMMITTED. At READ_COMMITTED - the library skips write-set and read-set conflict checks at - commit time, eliminating the ER_ERROR_DURING_COMMIT (1180) - errors that ORMs and applications cannot easily retry. - A single autocommit statement has no multi-statement - consistency window to protect, so READ_COMMITTED is safe. - - For multi-statement transactions (BEGIN...COMMIT) we keep - the table's configured isolation level so that application- - level retry logic handles the (rare) OCC conflicts. 
+ The isolation level is resolved from the MariaDB session + (SET TRANSACTION ISOLATION LEVEL / tx_isolation), with + special handling for TidesDB's SNAPSHOT level (which has + no SQL equivalent -- selected via the table option + ISOLATION_LEVEL=SNAPSHOT and activated when the session + is at the default REPEATABLE_READ). DDL operations (ALTER TABLE, CREATE/DROP INDEX, TRUNCATE, OPTIMIZE, etc.) always use READ_COMMITTED regardless of - transaction context. The copy-based ALTER TABLE scan can - read hundreds of thousands of rows; REPEATABLE_READ would - add each key to the read-set for conflict detection, - causing unbounded memory growth and heap corruption - (issue #70). DDL never needs OCC conflict checks. */ - bool in_multi_stmt_txn = thd_test_options(thd, OPTION_NOT_AUTOCOMMIT | OPTION_BEGIN); - tidesdb_isolation_level_t effective_iso = - in_multi_stmt_txn ? share->isolation_level : TDB_ISOLATION_READ_COMMITTED; - - /* Force READ_COMMITTED for DDL to avoid unbounded read-set growth */ + session setting. The copy-based ALTER TABLE scan can + read hundreds of thousands of rows; higher isolation + levels would track each key in the read-set for conflict + detection, causing unbounded memory growth */ + + /* Resolve isolation from the MariaDB session (SET TRANSACTION ISOLATION + LEVEL / tx_isolation), honoring table-level SNAPSHOT when the session + uses the default REPEATABLE_READ. 
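[Editor's aside] The body of resolve_effective_isolation() is not in this hunk; a sketch of the mapping it implements per the description above, with stand-in enums in place of thd_tx_isolation() values and tidesdb_isolation_level_t:

```cpp
#include <cassert>

/* Stand-in enums -- names are illustrative, not the real headers. */
enum sql_iso_t { SQL_READ_UNCOMMITTED, SQL_READ_COMMITTED,
                 SQL_REPEATABLE_READ, SQL_SERIALIZABLE };
enum tdb_iso_t { TDB_RU, TDB_RC, TDB_RR, TDB_SNAP, TDB_SER };

/* The session level wins. One special case: a table created with
   ISOLATION_LEVEL=SNAPSHOT upgrades the default REPEATABLE READ to
   tidesdb SNAPSHOT (write-write conflict detection only). */
tdb_iso_t resolve_iso(sql_iso_t session, bool table_wants_snapshot)
{
    switch (session)
    {
        case SQL_READ_UNCOMMITTED: return TDB_RU;
        case SQL_READ_COMMITTED:   return TDB_RC;
        case SQL_REPEATABLE_READ:  return table_wants_snapshot ? TDB_SNAP : TDB_RR;
        case SQL_SERIALIZABLE:     return TDB_SER;
    }
    return TDB_RC; /* unreachable; keeps compilers quiet */
}
```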
*/ int sql_cmd = thd_sql_command(thd); + tidesdb_isolation_level_t effective_iso; if (sql_cmd == SQLCOM_ALTER_TABLE || sql_cmd == SQLCOM_CREATE_INDEX || sql_cmd == SQLCOM_DROP_INDEX || sql_cmd == SQLCOM_TRUNCATE || sql_cmd == SQLCOM_OPTIMIZE || sql_cmd == SQLCOM_CREATE_TABLE || sql_cmd == SQLCOM_DROP_TABLE) effective_iso = TDB_ISOLATION_READ_COMMITTED; + else + effective_iso = resolve_effective_isolation(thd, share->isolation_level); tidesdb_trx_t *trx = get_or_create_trx(thd, ht, effective_iso); if (!trx) DBUG_RETURN(HA_ERR_OUT_OF_MEM); @@ -3665,45 +4169,27 @@ int ha_tidesdb::external_lock(THD *thd, int lock_type) } else { - /* Statement end (F_UNLCK). - The iterator holds a live pointer into the txn (iter->txn is - dereferenced by seek/next). In autocommit mode the txn will - be freed by the upcoming hton->commit(all=true), so we must - free the iterator here. - - Inside a multi-statement transaction (BEGIN...COMMIT) the txn - survives across statements. Keeping the iterator alive avoids - the catastrophically expensive tidesdb_iter_new() on every - statement -- iter_seek() reuses the cached merge heap and is - orders of magnitude cheaper. The iterator will be: - -- reused by the next statement on the same CF/txn, - -- invalidated by rnd_init/index_init if the txn changes, - -- freed in close() when the handler is destroyed. 
*/ - bool in_multi_stmt_txn = thd_test_options(thd, OPTION_NOT_AUTOCOMMIT | OPTION_BEGIN); - tidesdb_trx_t *trx = (tidesdb_trx_t *)thd_get_ha_data(thd, ht); - bool txn_has_writes = trx && trx->dirty; - if (scan_iter && (!in_multi_stmt_txn || stmt_txn_dirty || txn_has_writes)) + if (scan_iter) { - /* Free the iterator when: - (a) autocommit -- txn will be freed by hton->commit(all=true), or - (b) this statement had writes -- the cached merge heap was - built from a snapshot of the txn write buffer at iter_new() - time (tidesdb_merge_source_from_txn_ops); subsequent writes - are invisible to the old heap, so the next scan must get a - fresh iterator to see its own writes, or - (c) the transaction had any prior writes -- the iterator's merge - heap includes MERGE_SOURCE_TXN_OPS from the txn; when COMMIT - frees the txn, those ops become dangling pointers. If the - allocator returns the same address for the next txn, the - pointer comparison in rnd_init/index_init would falsely pass - and reuse the stale iterator (use-after-free). */ tidesdb_iter_free(scan_iter); scan_iter = NULL; scan_iter_cf_ = NULL; scan_iter_txn_ = NULL; } - /* We bump update_time once per write-statement for information_schema */ - if (stmt_txn_dirty && share) share->update_time.store(time(0), std::memory_order_relaxed); + if (dup_iter_count_ > 0) free_dup_iter_cache(); + + /* We bump update_time once per write-statement for information_schema. + Use cached_time_ if available to avoid another time() syscall. */ + if (stmt_txn_dirty && share) + share->update_time.store(cached_time_valid_ ? cached_time_ : time(0), + std::memory_order_relaxed); + + /* Invalidate all per-statement caches so the next statement + picks up any changes (key rotation, session variable changes, + clock advance). 
*/ + enc_key_ver_valid_ = false; + cached_time_valid_ = false; + cached_thdvars_valid_ = false; stmt_txn = NULL; stmt_txn_dirty = false; @@ -3726,9 +4212,11 @@ THR_LOCK_DATA **ha_tidesdb::store_lock(THD *thd, THR_LOCK_DATA **to, enum thr_lo Classify ALTER TABLE operations into INSTANT / INPLACE / COPY. INSTANT -- metadata-only changes (.frm rewrite, no engine work): - rename column/index, change default, change table options + rename column/index, change default, change table options, + ADD COLUMN, DROP COLUMN (row format is self-describing via + the ROW_HEADER_MAGIC header written by serialize_row) INPLACE -- add/drop secondary indexes (create/drop CFs, populate) - COPY -- everything else (add/drop columns, change PK, type changes) + COPY -- column type changes, PK changes */ enum_alter_inplace_result ha_tidesdb::check_if_supported_inplace_alter( TABLE *altered_table, Alter_inplace_info *ha_alter_info) @@ -3737,12 +4225,16 @@ enum_alter_inplace_result ha_tidesdb::check_if_supported_inplace_alter( alter_table_operations flags = ha_alter_info->handler_flags; - /* Operations that are pure metadata (INSTANT) */ + /* Operations that are pure metadata (INSTANT). + ADD/DROP COLUMN is instant because the packed row format includes + a header with the stored null_bytes and field_count, so + deserialize_row adapts to rows written with any prior schema. 
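[Editor's aside] A hypothetical sketch of why a self-describing row header makes ADD/DROP COLUMN metadata-only. The layout and magic value below are illustrative only -- the real format is whatever serialize_row writes under ROW_HEADER_MAGIC -- but the principle is the same: each row records the schema shape it was written with, so the reader adapts instead of the engine rewriting every row on ALTER.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

/* Illustrative layout: [magic][null_bytes][field_count lo][field_count hi] */
struct RowHeader
{
    uint8_t null_bytes;   /* null bitmap size at write time */
    uint16_t field_count; /* number of fields at write time */
};

const uint8_t kRowMagic = 0xA5; /* illustrative, not ROW_HEADER_MAGIC */

void write_header(std::vector<uint8_t> &out, uint8_t null_bytes, uint16_t field_count)
{
    out.push_back(kRowMagic);
    out.push_back(null_bytes);
    out.push_back((uint8_t)(field_count & 0xff));
    out.push_back((uint8_t)(field_count >> 8));
}

bool read_header(const uint8_t *buf, size_t len, RowHeader *h)
{
    if (len < 4 || buf[0] != kRowMagic) return false;
    h->null_bytes = buf[1];
    h->field_count = (uint16_t)(buf[2] | ((uint16_t)buf[3] << 8));
    return true;
}
```

A reader comparing the stored field_count against the current table's field count can treat missing trailing fields as defaults (column added) or ignore extras (column dropped).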
*/ static const alter_table_operations TIDESDB_INSTANT = ALTER_COLUMN_NAME | ALTER_RENAME_COLUMN | ALTER_CHANGE_COLUMN_DEFAULT | ALTER_COLUMN_DEFAULT | ALTER_COLUMN_OPTION | ALTER_CHANGE_CREATE_OPTION | ALTER_DROP_CHECK_CONSTRAINT | ALTER_VIRTUAL_GCOL_EXPR | ALTER_RENAME | ALTER_RENAME_INDEX | - ALTER_INDEX_IGNORABILITY; + ALTER_INDEX_IGNORABILITY | ALTER_ADD_COLUMN | ALTER_DROP_COLUMN | + ALTER_STORED_COLUMN_ORDER | ALTER_VIRTUAL_COLUMN_ORDER; /* Operations we can do inplace (add/drop secondary indexes) */ static const alter_table_operations TIDESDB_INPLACE_INDEX = @@ -3815,7 +4307,11 @@ bool ha_tidesdb::prepare_inplace_alter_table(TABLE *altered_table, /* We drop stale CF if it exists from a previous failed ALTER */ tidesdb_drop_column_family(tdb_global, idx_cf.c_str()); - int rc = tidesdb_create_column_family(tdb_global, idx_cf.c_str(), &cfg); + tidesdb_column_family_config_t idx_cfg = cfg; + ha_index_option_struct *iopts = new_key->option_struct; + if (iopts && iopts->use_btree) idx_cfg.use_btree = 1; + + int rc = tidesdb_create_column_family(tdb_global, idx_cf.c_str(), &idx_cfg); if (rc != TDB_SUCCESS) { sql_print_error("TIDESDB: inplace ADD INDEX: failed to create CF '%s' (err=%d)", @@ -3875,14 +4371,16 @@ bool ha_tidesdb::inplace_alter_table(TABLE *altered_table, Alter_inplace_info *h fields via make_sort_key_part during index key construction. */ MY_BITMAP *old_map = tmp_use_all_columns(altered_table, &altered_table->read_set); - TDB_TRACE("ENTER add_cfs=%u", (uint)ctx->add_cfs.size()); - /* We do a full table scan to populate the new secondary indexes. We use the altered_table's key_info for building index keys, since that matches the new key numbering. */ + /* Always use READ_COMMITTED for index population. The scan reads + potentially millions of rows; higher isolation levels would track + each key in the read-set, causing unbounded memory growth. Index + builds are DDL and never need OCC conflict detection. 
*/ tidesdb_txn_t *txn = NULL; - int rc = tidesdb_txn_begin_with_isolation(tdb_global, share->isolation_level, &txn); + int rc = tidesdb_txn_begin_with_isolation(tdb_global, TDB_ISOLATION_READ_COMMITTED, &txn); if (rc != TDB_SUCCESS || !txn) { sql_print_error("TIDESDB: inplace ADD INDEX: txn_begin failed (err=%d)", rc); @@ -3906,9 +4404,11 @@ bool ha_tidesdb::inplace_alter_table(TABLE *altered_table, Alter_inplace_info *h ha_rows rows_processed = 0; /* For UNIQUE indexes, we track seen index-column prefixes to detect - duplicates. If a duplicate is found we must abort the ALTER. */ + duplicates. If a duplicate is found we must abort the ALTER. + unordered_set gives O(1) amortized lookup vs O(log n) for std::set, + which matters for tables with millions of rows. */ std::vector<bool> idx_is_unique(ctx->add_cfs.size(), false); - std::vector<std::set<std::string>> idx_seen(ctx->add_cfs.size()); + std::vector<std::unordered_set<std::string>> idx_seen(ctx->add_cfs.size()); for (uint a = 0; a < ctx->add_cfs.size(); a++) { uint key_num = ctx->add_key_nums[a]; @@ -3966,12 +4466,11 @@ bool ha_tidesdb::inplace_alter_table(TABLE *altered_table, Alter_inplace_info *h deserialize_row(table->record[0], (const uchar *)val_data, val_size); } - /* For each newly added index, we build and insert the index entry. + /* For each newly added index, build the index entry key. altered_table->key_info fields have ptr into altered_table->record[0], - but data is in table->record[0]. - - We use move_field_offset with ptdiff = table->record[0] -- altered_table->record[0] - to temporarily rebase field pointers (same pattern as make_comparable_key). */ + but the data lives in table->record[0]. We compute ptdiff to + rebase field pointers to read from the correct buffer. + Key format matches make_comparable_key(): [null_byte] + sort_string.
*/ my_ptrdiff_t ptdiff = (my_ptrdiff_t)(table->record[0] - altered_table->record[0]); for (uint a = 0; a < ctx->add_cfs.size(); a++) @@ -3986,14 +4485,24 @@ bool ha_tidesdb::inplace_alter_table(TABLE *altered_table, Alter_inplace_info *h KEY_PART_INFO *kp = &ki->key_part[p]; Field *field = kp->field; - /* make_sort_key_part handles nullable fields internally: - writes 1-byte null indicator + kp->length sort bytes. */ field->move_field_offset(ptdiff); - field->make_sort_key_part(ik + pos, kp->length); + if (field->real_maybe_null()) + { + if (field->is_null()) + { + ik[pos++] = 0; + bzero(ik + pos, kp->length); + pos += kp->length; + field->move_field_offset(-ptdiff); + continue; + } + ik[pos++] = 1; + } + field->sort_string(ik + pos, kp->length); field->move_field_offset(-ptdiff); pos += kp->length; - if (field->real_maybe_null()) pos++; } + /* Check UNIQUE constraint before inserting */ if (idx_is_unique[a]) { @@ -4051,7 +4560,7 @@ bool ha_tidesdb::inplace_alter_table(TABLE *altered_table, Alter_inplace_info *h tidesdb_iter_free(iter); txn = NULL; - rc = tidesdb_txn_begin_with_isolation(tdb_global, share->isolation_level, &txn); + rc = tidesdb_txn_begin_with_isolation(tdb_global, TDB_ISOLATION_READ_COMMITTED, &txn); if (rc != TDB_SUCCESS || !txn) { sql_print_error("TIDESDB: inplace ADD INDEX: batch txn_begin failed"); @@ -4117,6 +4626,19 @@ bool ha_tidesdb::commit_inplace_alter_table(TABLE *altered_table, Alter_inplace_ if (!ctx) DBUG_RETURN(false); + /* Free any cached iterators before dropping CFs. The connection's + scan_iter and dup_iter_cache_ may hold merge-heap references to + SSTables in CFs about to be dropped -- freeing them first avoids + use-after-free / heap corruption. 
*/ + if (scan_iter) + { + tidesdb_iter_free(scan_iter); + scan_iter = NULL; + scan_iter_cf_ = NULL; + scan_iter_txn_ = NULL; + } + free_dup_iter_cache(); + if (!commit) { /* Rollback -- we drop any CFs we created for new indexes */ @@ -4140,9 +4662,6 @@ bool ha_tidesdb::commit_inplace_alter_table(TABLE *altered_table, Alter_inplace_ share->idx_cfs.clear(); share->idx_cf_names.clear(); - TDB_TRACE("COMMIT rebuild idx_cfs: altered keys=%u pk=%u", altered_table->s->keys, - altered_table->s->primary_key); - uint new_pk = altered_table->s->primary_key; for (uint i = 0; i < altered_table->s->keys; i++) { @@ -4150,7 +4669,6 @@ bool ha_tidesdb::commit_inplace_alter_table(TABLE *altered_table, Alter_inplace_ { share->idx_cfs.push_back(NULL); share->idx_cf_names.push_back(""); - TDB_TRACE(" key[%u] = PRIMARY (NULL cf)", i); continue; } std::string idx_name; @@ -4158,8 +4676,6 @@ bool ha_tidesdb::commit_inplace_alter_table(TABLE *altered_table, Alter_inplace_ tdb_global, share->cf_name, altered_table->key_info[i].name.str, idx_name); share->idx_cfs.push_back(icf); share->idx_cf_names.push_back(idx_name); - TDB_TRACE(" key[%u] = %s cf=%p cf_name=%s", i, altered_table->key_info[i].name.str, - (void *)icf, idx_name.c_str()); } /* We recompute cached index metadata for the new table shape */ @@ -4264,8 +4780,6 @@ static void force_remove_cf_dir(const std::string &cf_name) (handles Windows, symlinks, read-only attrs, etc.). 
*/ if (my_rmtree(dir, MYF(0)) != 0) sql_print_warning("TIDESDB: force_remove_cf_dir failed for %s", dir); - else - TDB_TRACE("force-removed stale CF dir %s", dir); } /* @@ -4349,8 +4863,8 @@ maria_declare_plugin(tidesdb){ PLUGIN_LICENSE_GPL, tidesdb_init_func, tidesdb_deinit_func, - 0x30306, + 0x30400, NULL, tidesdb_system_variables, - "3.3.6", + "3.4.0", MariaDB_PLUGIN_MATURITY_EXPERIMENTAL} maria_declare_plugin_end; diff --git a/tidesdb/ha_tidesdb.h b/tidesdb/ha_tidesdb.h index 14c1789d..fab5c042 100644 --- a/tidesdb/ha_tidesdb.h +++ b/tidesdb/ha_tidesdb.h @@ -133,6 +133,10 @@ class TidesDB_share : public Handler_share /* Precomputed comparable key length per index (avoids per-row recomputation) */ uint idx_comp_key_len[MAX_KEY]; + /* Cached rec_per_key for secondary indexes (populated by ANALYZE TABLE). + 0 = not yet computed, use heuristic; >0 = sampled value. */ + std::atomic cached_rec_per_key[MAX_KEY]; + /* Secondary index CFs (one per secondary key) */ std::vector idx_cfs; std::vector idx_cf_names; @@ -217,13 +221,50 @@ class ha_tidesdb : public handler uchar idx_search_comp_[MAX_KEY_LENGTH]; uint idx_search_comp_len_; + /* Reusable buffers for secondary index key construction in update_row. + Avoids heap allocation per row and keeps the stack frame small. */ + uchar upd_old_ik_[MAX_KEY_LENGTH * 2 + 2]; + uchar upd_new_ik_[MAX_KEY_LENGTH * 2 + 2]; + + /* Cached dup-check iterators for UNIQUE secondary indexes. + tidesdb_iter_new() is O(num_sstables) -- caching avoids rebuilding + the merge heap on every INSERT for tables with unique indexes. */ + tidesdb_iter_t *dup_iter_cache_[MAX_KEY]; + tidesdb_txn_t *dup_iter_txn_[MAX_KEY]; /* txn each was created on */ + uint64_t dup_iter_txn_gen_[MAX_KEY]; /* txn_generation when created */ + uint dup_iter_count_; /* number of slots populated */ + + /* Reusable buffer for tidesdb_txn_get values -- avoids malloc/free per + point-lookup. Retains heap capacity across calls. 
*/ + std::string get_val_buf_; + + /* Separate encryption output buffer so row_buf_ retains its capacity + across calls (tidesdb_encrypt_row used to replace row_buf_). */ + std::string enc_buf_; + + /* Per-statement cached encryption key version -- avoids calling + encryption_key_get_latest_version() on every single row write. */ + uint cached_enc_key_ver_; + bool enc_key_ver_valid_; + + /* Per-statement cached time(NULL) -- avoids the vDSO/syscall on every + row for TTL computation. 1-second granularity is sufficient for TTL. */ + time_t cached_time_; + bool cached_time_valid_; + + /* Per-statement cached THDVAR lookups -- avoids the indirect + thd_get_ha_data + offset computation on every row. */ + ulonglong cached_sess_ttl_; + bool cached_skip_unique_; + bool cached_thdvars_valid_; + /* Bulk insert state */ bool in_bulk_insert_; ha_rows bulk_insert_ops_; /* ops buffered since last mid-txn commit */ /* Covering-index mode (HA_EXTRA_KEYREAD) */ bool keyread_only_; - bool write_can_replace_; /* true during REPLACE INTO / INSERT ON DUPLICATE KEY UPDATE */ + bool write_can_replace_; /* true during REPLACE INTO (HA_EXTRA_WRITE_CAN_REPLACE) */ /* ----- private helpers ---------------------------------------------------------------------- */ @@ -283,6 +324,15 @@ class ha_tidesdb : public handler field format. Returns true on success, false for unsupported types. */ static bool decode_int_sort_key(const uint8_t *src, uint sort_len, Field *f, uchar *buf); + /* Extended sort-key decoder -- handles integers, DATE, DATETIME, + TIMESTAMP, YEAR, and fixed-length CHAR/BINARY. Returns true on + success, false for unsupported types. Used by covering index + reads and ICP evaluation to avoid PK point-lookups. 
*/ + static bool decode_sort_key_part(const uint8_t *src, uint sort_len, Field *f, uchar *buf); + + /* Free all cached dup-check iterators */ + void free_dup_iter_cache(); + /* Recover hidden-PK counter by scanning the CF */ void recover_counters(); @@ -300,10 +350,9 @@ class ha_tidesdb : public handler HA_CAN_TABLES_WITHOUT_ROLLBACK; } - ulong index_flags(uint idx, uint part, bool all_parts) const override - { - return HA_READ_NEXT | HA_READ_PREV | HA_READ_ORDER | HA_READ_RANGE | HA_KEYREAD_ONLY; - } + ulong index_flags(uint idx, uint part, bool all_parts) const override; + + const char *index_type(uint key_number) override; uint max_supported_record_length() const override { From a39fdb66015eac3db1a94bccad19fe8184ff040d Mon Sep 17 00:00:00 2001 From: Alex Gaetano Padula Date: Wed, 11 Mar 2026 03:28:48 -0400 Subject: [PATCH 2/3] use 12.2.2 mdb --- .github/workflows/mariadb-test.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/mariadb-test.yml b/.github/workflows/mariadb-test.yml index fd516cee..c9f9b424 100644 --- a/.github/workflows/mariadb-test.yml +++ b/.github/workflows/mariadb-test.yml @@ -152,8 +152,8 @@ jobs: id: mariadb-release shell: bash run: | - echo "tag=mariadb-12.1.2" >> $GITHUB_OUTPUT - echo "MariaDB version: mariadb-12.1.2" + echo "tag=mariadb-12.2.2" >> $GITHUB_OUTPUT + echo "MariaDB version: mariadb-12.2.2" - name: Clone MariaDB server shell: bash From af99f97f36abb39fa9393d912e0673f6cb364f42 Mon Sep 17 00:00:00 2001 From: Alex Gaetano Padula Date: Wed, 11 Mar 2026 04:00:13 -0400 Subject: [PATCH 3/3] * Removed --character-set-server and --collation-server from suite.opt * Deleted 20 redundant .opt files that only contained --plugin-load-add=ha_tidesdb + --plugin-maturity=unknown * Stripped redundant lines from the 7 remaining .opt files that had actual unique options (encryption keys, loose test vars) --- .../suite/tidesdb/r/tidesdb_data_home_dir.reject | 10 ---------- 
 .../suite/tidesdb/r/tidesdb_engine_status.result       |  2 +-
 mysql-test/suite/tidesdb/suite.opt                     |  3 +--
 mysql-test/suite/tidesdb/t/tidesdb_alter_crash.opt     |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_analyze.opt         |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_backup.opt          |  2 --
 .../suite/tidesdb/t/tidesdb_concurrent_conflict.opt    |  2 --
 .../suite/tidesdb/t/tidesdb_concurrent_errors.opt      |  1 -
 .../suite/tidesdb/t/tidesdb_consistent_snapshot.opt    |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_crud.opt            |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt   |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_drop_create.opt     |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_encryption.opt      |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt   |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_engine_status.test  |  4 ++--
 mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt     |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_info_schema.opt     |  1 -
 mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_isolation.opt       |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_json.opt            |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_online_ddl.opt      |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_options.opt         |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_partition.opt       |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_pk_index.opt        |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_rename.opt          |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt   |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_savepoint.opt       |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_stress.opt          |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_ttl.opt             |  2 --
 mysql-test/suite/tidesdb/t/tidesdb_vcol.opt            |  2 --
 31 files changed, 4 insertions(+), 67 deletions(-)
 delete mode 100644 mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_alter_crash.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_backup.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_drop_create.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_info_schema.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_isolation.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_options.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_partition.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_pk_index.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_rename.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_ttl.opt
 delete mode 100644 mysql-test/suite/tidesdb/t/tidesdb_vcol.opt

diff --git a/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject b/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject
deleted file mode 100644
index 0ffe0fcd..00000000
--- a/mysql-test/suite/tidesdb/r/tidesdb_data_home_dir.reject
+++ /dev/null
@@ -1,10 +0,0 @@
-#
-# Verify tidesdb_data_home_dir is visible and read-only
-#
-SHOW VARIABLES LIKE 'tidesdb_data_home_dir';
-Variable_name	Value
-tidesdb_data_home_dir
-SET GLOBAL tidesdb_data_home_dir = '/tmp/test';
-ERROR HY000: Variable 'tidesdb_data_home_dir' is a read only variable
-#
-# Done.
diff --git a/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result b/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result
index c7cd1498..753b67c1 100644
--- a/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result
+++ b/mysql-test/suite/tidesdb/r/tidesdb_engine_status.result
@@ -6,7 +6,7 @@ INSERT INTO t1 VALUES (1,10),(2,20),(3,30);
 SHOW ENGINE TIDESDB STATUS;
 Type	Name	Status
 TIDESDB		================== TidesDB Engine Status ==================
-Data directory: /home/agpmastersystem/server-mariadb-N.N.N/builddir/mysql-test/var/mysqld.N/tidesdb_data
+Data directory: TIDESDB_DATA_DIR
 Column families: N
 Global sequence: N
diff --git a/mysql-test/suite/tidesdb/suite.opt b/mysql-test/suite/tidesdb/suite.opt
index 2ffb1bd7..3027c6b0 100644
--- a/mysql-test/suite/tidesdb/suite.opt
+++ b/mysql-test/suite/tidesdb/suite.opt
@@ -1,3 +1,2 @@
 --plugin-load-add=$HA_TIDESDB_SO
---character-set-server=utf8mb4
---collation-server=utf8mb4_general_ci
\ No newline at end of file
+--plugin-maturity=unknown
\ No newline at end of file
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.opt b/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_alter_crash.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_analyze.opt b/mysql-test/suite/tidesdb/t/tidesdb_analyze.opt
index 7d783cf3..83434125 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_analyze.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_analyze.opt
@@ -1,3 +1 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --loose-tidesdb-online-ddl-test=1
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_backup.opt b/mysql-test/suite/tidesdb/t/tidesdb_backup.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_backup.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.opt b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_conflict.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.opt b/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.opt
deleted file mode 100644
index 8374626f..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_concurrent_errors.opt
+++ /dev/null
@@ -1 +0,0 @@
---plugin-maturity=unknown
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.opt b/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_consistent_snapshot.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_crud.opt b/mysql-test/suite/tidesdb/t/tidesdb_crud.opt
index a44e81bd..468f3258 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_crud.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_crud.opt
@@ -1,3 +1 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --loose-tidesdb-crud-test=1
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt b/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_data_home_dir.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_drop_create.opt b/mysql-test/suite/tidesdb/t/tidesdb_drop_create.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_drop_create.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_encryption.opt b/mysql-test/suite/tidesdb/t/tidesdb_encryption.opt
index c350860a..5737dfca 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_encryption.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_encryption.opt
@@ -1,4 +1,2 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --plugin-load-add=file_key_management
 --file-key-management-filename=$MYSQL_TEST_DIR/std_data/keys.txt
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_engine_status.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test
index b83eed87..9407bd71 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test
+++ b/mysql-test/suite/tidesdb/t/tidesdb_engine_status.test
@@ -10,8 +10,8 @@
 CREATE TABLE t1 (id INT PRIMARY KEY, val INT) ENGINE=TidesDB;
 INSERT INTO t1 VALUES (1,10),(2,20),(3,30);

-# Mask volatile numbers in the output
---replace_regex /[0-9]+/N/
+# Mask the data directory path (varies per build) and volatile numbers
+--replace_regex /Data directory: [^\n]*/Data directory: TIDESDB_DATA_DIR/ /[0-9]+/N/
 SHOW ENGINE TIDESDB STATUS;

 DROP TABLE t1;
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt b/mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_index_stats.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_info_schema.opt b/mysql-test/suite/tidesdb/t/tidesdb_info_schema.opt
deleted file mode 100644
index 8374626f..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_info_schema.opt
+++ /dev/null
@@ -1 +0,0 @@
---plugin-maturity=unknown
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt b/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_insert_conflict.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_isolation.opt b/mysql-test/suite/tidesdb/t/tidesdb_isolation.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_isolation.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_json.opt b/mysql-test/suite/tidesdb/t/tidesdb_json.opt
index 6a099879..2082352d 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_json.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_json.opt
@@ -1,3 +1 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --loose-tidesdb-json-test=1
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.opt b/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.opt
index 7d783cf3..83434125 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_online_ddl.opt
@@ -1,3 +1 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --loose-tidesdb-online-ddl-test=1
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_options.opt b/mysql-test/suite/tidesdb/t/tidesdb_options.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_options.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_partition.opt b/mysql-test/suite/tidesdb/t/tidesdb_partition.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_partition.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt b/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_per_index_btree.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_pk_index.opt b/mysql-test/suite/tidesdb/t/tidesdb_pk_index.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_pk_index.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_rename.opt b/mysql-test/suite/tidesdb/t/tidesdb_rename.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_rename.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt b/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_replace_iodku.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_savepoint.opt b/mysql-test/suite/tidesdb/t/tidesdb_savepoint.opt
index b1e3f374..314429e2 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_savepoint.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_savepoint.opt
@@ -1,3 +1 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --loose-tidesdb-savepoint-test=1
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_stress.opt b/mysql-test/suite/tidesdb/t/tidesdb_stress.opt
index 0530ad5c..2c58a757 100644
--- a/mysql-test/suite/tidesdb/t/tidesdb_stress.opt
+++ b/mysql-test/suite/tidesdb/t/tidesdb_stress.opt
@@ -1,3 +1 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
 --loose-tidesdb-stress-test=1
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_ttl.opt b/mysql-test/suite/tidesdb/t/tidesdb_ttl.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_ttl.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb
diff --git a/mysql-test/suite/tidesdb/t/tidesdb_vcol.opt b/mysql-test/suite/tidesdb/t/tidesdb_vcol.opt
deleted file mode 100644
index 2f9ea4e3..00000000
--- a/mysql-test/suite/tidesdb/t/tidesdb_vcol.opt
+++ /dev/null
@@ -1,2 +0,0 @@
---plugin-maturity=unknown
---plugin-load-add=ha_tidesdb