Conversation
🎯 Auto PR: stage → main
📝 Walkthrough

This PR introduces comprehensive Microsoft SQL Server (MSSQL) support to the database management layer. It adds MSSQL dependencies, creates a complete suite of 13 data access objects for MSSQL-specific operations (CRUD, schema management, and transactions), extends the core db-manager with MSSQL pool lifecycle management, and includes data type mapping functions for SQL Server types.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Tip: Try Coding Plans. Let us write the prompt for your AI agent so you can ship faster (with fewer bugs). Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 17
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (4)
packages/server/src/dao/mssql/add-record.mssql.dao.ts-21-36 (1)

21-36: ⚠️ Potential issue | 🟡 Minor

Missing JSON serialization for object values.

Unlike `updateRecords`, which serializes object values to JSON strings (lines 60-62 in that file), `addRecord` passes object values directly. This inconsistency may cause issues when inserting JSON/object data.

🔧 Proposed fix to add JSON serialization

```diff
 const values = Object.values(data).map((value, index) => {
   const columnName = columns[index];
+  if (value !== null && typeof value === "object") {
+    return JSON.stringify(value);
+  }
   if (booleanColumns.has(columnName) && typeof value === "string") {
     return value === "true" ? 1 : 0;
   }
   return value;
 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/add-record.mssql.dao.ts` around lines 21-36: the values mapping currently returns object values raw, which breaks consistency with `updateRecords`; update the values computation (the `Object.values(data).map(...)` block that produces `values` and references `booleanColumns`) to detect when a value is a plain object or array and `JSON.stringify` it (while keeping the existing boolean string-to-1/0 logic), so objects are inserted as JSON strings just like `updateRecords` does.

packages/server/src/dao/mssql/database-list.mssql.dao.ts-83-87 (1)
83-87: ⚠️ Potential issue | 🟡 Minor

`parseDatabaseUrl` defaults to PostgreSQL port 5432, not MSSQL 1433.

The `parseDatabaseUrl` utility returns port 5432 as its default (PostgreSQL), but MSSQL uses port 1433. When `info.port` is null, this will report an incorrect default port.

🔧 Proposed fix

```diff
 const info = result.recordset[0] as Record<string, string | number>;
-const urlDefaults = parseDatabaseUrl();
+const MSSQL_DEFAULT_PORT = 1433;
 return {
-  host: String(info.host || urlDefaults.host),
-  port: Number(info.port || urlDefaults.port),
+  host: String(info.host || "localhost"),
+  port: Number(info.port || MSSQL_DEFAULT_PORT),
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/database-list.mssql.dao.ts` around lines 83-87: the MSSQL DAO is using `parseDatabaseUrl` (which defaults to PostgreSQL port 5432), causing incorrect default ports; in the return object built in database-list.mssql.dao (the urlDefaults and host/port lines), ensure the port falls back to MSSQL's default 1433 when neither `info.port` nor `urlDefaults.port` provides a value: update the port expression used in the returned object (referencing `urlDefaults` and `info.port`) to use 1433 as the final default (e.g., `Number(info.port || urlDefaults.port || 1433)`) so MSSQL records get the correct default port.

packages/server/src/dao/mssql/database-list.mssql.dao.ts-65-74 (1)
65-74: ⚠️ Potential issue | 🟡 Minor

DMV queries require VIEW SERVER STATE permission.

The queries against `sys.dm_exec_connections` and `sys.dm_exec_sessions` require the VIEW SERVER STATE permission. If the connected user lacks this permission, these queries will fail. Consider wrapping in try-catch with a fallback for non-privileged connections.

🔧 Proposed fallback handling

```diff
-const result = await pool.request().query(`
-  SELECT
-    @@VERSION AS version,
-    DB_NAME() AS database_name,
-    SUSER_NAME() AS [user],
-    @@SERVERNAME AS host,
-    (SELECT local_tcp_port FROM sys.dm_exec_connections WHERE session_id = @@SPID) AS port,
-    @@MAX_CONNECTIONS AS max_connections,
-    (SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1) AS active_connections
-`);
+let result;
+try {
+  result = await pool.request().query(`
+    SELECT
+      @@VERSION AS version,
+      DB_NAME() AS database_name,
+      SUSER_NAME() AS [user],
+      @@SERVERNAME AS host,
+      (SELECT local_tcp_port FROM sys.dm_exec_connections WHERE session_id = @@SPID) AS port,
+      @@MAX_CONNECTIONS AS max_connections,
+      (SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1) AS active_connections
+  `);
+} catch {
+  // Fallback for users without VIEW SERVER STATE permission
+  result = await pool.request().query(`
+    SELECT
+      @@VERSION AS version,
+      DB_NAME() AS database_name,
+      SUSER_NAME() AS [user],
+      @@SERVERNAME AS host,
+      NULL AS port,
+      @@MAX_CONNECTIONS AS max_connections,
+      NULL AS active_connections
+  `);
+}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/database-list.mssql.dao.ts` around lines 65-74: the DMV subqueries (`sys.dm_exec_connections` and `sys.dm_exec_sessions`) inside the `pool.request().query` call may throw without VIEW SERVER STATE permission; modify the code around the query that builds `result` so you wrap execution of DMV-dependent parts in a try/catch: run the full query if permitted, but if the query fails due to permission errors, catch the error and re-run a safer query that omits the DMV subqueries (or return fallback values like null/0 for port and active_connections), and ensure the catch logs the permission error; update the code that sets `result` (the query invocation using `pool.request().query`) to attempt the DMV version first and fall back to a non-DMV query or defaults on permission failure.

packages/server/src/dao/mssql/bulk-insert-records.mssql.dao.ts-71-77 (1)
71-77: ⚠️ Potential issue | 🟡 Minor

Rollback may fail if the transaction hasn't begun.

If an error occurs between lines 20-30 (before `transaction.begin()`), calling `transaction.rollback()` will throw because the transaction hasn't started. Consider moving `transaction.begin()` earlier or checking whether the transaction is active before rolling back.

🔧 Proposed fix

```diff
+let transactionStarted = false;
 try {
   const columns = Object.keys(records[0]);
   const columnNames = columns.map((col) => `[${col}]`).join(", ");
   const tableColumns = await getTableColumns({ tableName, db });
   const booleanColumns = new Set(
     tableColumns
       .filter((col) => col.dataTypeLabel === "boolean")
       .map((col) => col.columnName),
   );

   await transaction.begin();
+  transactionStarted = true;
   // ... rest of code
 } catch (error) {
-  await transaction.rollback();
+  if (transactionStarted) {
+    await transaction.rollback();
+  }
   if (error instanceof HTTPException) throw error;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/bulk-insert-records.mssql.dao.ts` around lines 71-77: the catch block currently always calls `transaction.rollback()`, which can throw if `transaction.begin()` was never called; update the error handling in the function that performs the bulk insert (the scope using `transaction.begin()` / `transaction.rollback()`) to only call `transaction.rollback()` when a transaction is active (e.g., check a boolean flag like `transactionStarted`, or use transaction.isActive()/transaction.finished if available), or move `transaction.begin()` to happen before any operations that can error, and ensure any thrown HTTPException is still rethrown unchanged; keep the existing behavior of throwing a new HTTPException(500, { message: `Failed to bulk insert records into "${tableName}"` }) for other errors.
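The guarded-rollback pattern proposed above can be sketched in isolation. This is an illustrative sketch, not the repository's code: `TxLike` and `runInTransaction` are invented names standing in for the `mssql` Transaction object and the DAO's try/catch scope.

```typescript
// Minimal shape of the transaction API this sketch assumes.
interface TxLike {
  begin(): Promise<void>;
  commit(): Promise<void>;
  rollback(): Promise<void>;
}

// Run `work` inside a transaction, rolling back only if begin() succeeded.
async function runInTransaction(tx: TxLike, work: () => Promise<void>): Promise<void> {
  let started = false;
  try {
    await tx.begin();
    started = true; // set only after begin() has resolved
    await work();
    await tx.commit();
  } catch (error) {
    if (started) {
      await tx.rollback(); // safe: the transaction is known to be active
    }
    throw error; // rethrow the original error unchanged
  }
}
```

Setting the flag only after `begin()` resolves is the key detail: a failure inside `begin()` itself then skips the rollback instead of masking the original error with a second one.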
🧹 Nitpick comments (7)
packages/server/src/db-manager.ts (1)
302-309: Use `Promise.allSettled` for best-effort pool teardown.

`Promise.all` rejects on the first failure, which can skip the full cleanup flow.

♻️ Proposed resilient close-all behavior

```diff
-await Promise.all([...pgClosePromises, ...mysqlClosePromises, ...mssqlClosePromises]);
+await Promise.allSettled([...pgClosePromises, ...mysqlClosePromises, ...mssqlClosePromises]);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/db-manager.ts` around lines 302-309: replace the current `Promise.all` usage with a best-effort teardown: use `Promise.allSettled` to await `[...pgClosePromises, ...mysqlClosePromises, ...mssqlClosePromises]`, inspect the results and log any failures from the settled promises (including which connectionString/pool failed), and ensure you still clear the pools maps (e.g., `this.pgPools.clear()`, `this.mssqlPools.clear()`, etc.) after awaiting allSettled so cleanup proceeds even if some `pool.close()` calls reject; reference mssqlClosePromises, this.mssqlPools, pool.close, pgClosePromises, mysqlClosePromises, and this.pgPools.clear when updating the implementation.

packages/server/src/dao/mssql/table-columns.mssql.dao.ts (1)
79-81: Consider parameterizing the schema name.

The schema is hardcoded to 'dbo', which will exclude tables in other schemas. If multi-schema support is needed in the future, consider accepting the schema as an optional parameter.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/table-columns.mssql.dao.ts` around lines 79-81: the query currently hardcodes the schema as 'dbo'; update the DAO method that builds this query (e.g., getTableColumns, the function that uses `@tableName` in table-columns.mssql.dao.ts) to accept an optional schema parameter (schema?: string) with a default of 'dbo', replace the literal 'dbo' in the WHERE clause with a bound parameter (e.g., `@schema`) and add that parameter to the query bindings, and then update any call sites to pass a schema when needed so multi-schema tables are returned.

packages/server/src/dao/mssql/create-table.mssql.dao.ts (1)
169-176: Add error handling for DDL execution.

If table creation fails (e.g., the table already exists, or column types are invalid), the raw MSSQL error propagates. Consider catching and wrapping the error with context.

🔧 Proposed error handling

```diff
-await pool.request().query(createTableSQL);
+try {
+  await pool.request().query(createTableSQL);
+} catch (error) {
+  throw new HTTPException(500, {
+    message: `Failed to create table "${tableName}": ${error instanceof Error ? error.message : String(error)}`,
+  });
+}
```

Don't forget to add the import:

```typescript
import { HTTPException } from "hono/http-exception";
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/create-table.mssql.dao.ts` around lines 169-176: wrap the DDL execution in a try/catch around the `pool.request().query(createTableSQL)` call so MSSQL errors are caught and re-thrown with context; on catch, import HTTPException from "hono/http-exception" and throw a new HTTPException (e.g., 500) that includes a descriptive message mentioning the target tableName and the original error message/stack, preserving the original error details for debugging while preventing raw DB errors from propagating.

packages/server/src/dao/mssql/query.mssql.dao.ts (2)
27-40: Potential issue when SELECT returns an empty recordset without column metadata.

When `result.recordset` exists but `result.recordset.columns` is undefined and `rows` is empty, `Object.keys(rows[0] ?? {})` returns an empty array. This is likely acceptable, but consider documenting this behavior or explicitly handling empty results.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/query.mssql.dao.ts` around lines 27 - 40, When result.recordset exists but has no columns metadata and rows is empty, explicitly set columns to an empty array instead of deriving from rows[0]; update the logic around the variables result.recordset, rows and columns in query.mssql.dao.ts so it becomes: if result.recordset.columns is present use Object.keys(result.recordset.columns), else if rows.length > 0 use Object.keys(rows[0]), otherwise set columns = []; keep the rest of the returned shape (rows, rowCount, duration, message) unchanged and consider setting message for empty results if desired.
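The fallback order described in this comment (driver column metadata, then the first row's keys, then an empty array) can be captured as a small pure function. A sketch under assumed names; `deriveColumns` does not exist in the codebase:

```typescript
// Derive result column names with an explicit three-step fallback.
function deriveColumns(
  columnsMeta: Record<string, unknown> | undefined,
  rows: Array<Record<string, unknown>>,
): string[] {
  if (columnsMeta) return Object.keys(columnsMeta); // prefer driver metadata
  if (rows.length > 0) return Object.keys(rows[0]); // fall back to the first row
  return []; // empty result with no metadata: no columns
}
```

Making the empty-result branch explicit, rather than relying on `Object.keys(rows[0] ?? {})`, documents the behavior the review comment asks about.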
22-24: Consider wrapping query execution in try-catch for better error messages.

If the query fails (syntax error, permissions, etc.), the raw MSSQL error will propagate. Wrapping in try-catch would allow returning a more user-friendly error with the query context.

🔧 Proposed error handling

```diff
 const startTime = performance.now();
-const result = await pool.request().query(cleanedQuery);
-const duration = performance.now() - startTime;
+let result;
+try {
+  result = await pool.request().query(cleanedQuery);
+} catch (error) {
+  throw new HTTPException(400, {
+    message: `Query execution failed: ${error instanceof Error ? error.message : String(error)}`,
+  });
+}
+const duration = performance.now() - startTime;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/query.mssql.dao.ts` around lines 22-24: wrap the call to `pool.request().query(cleanedQuery)` in a try-catch inside the query execution block (the section using startTime, result, duration and the cleanedQuery variable) so any MSSQL errors are caught; in the catch, compute duration as `performance.now() - startTime`, log or throw a new error that includes a short, user-friendly message plus the cleanedQuery and original error details (or attach the original error as a cause) to preserve context for debugging; ensure the DAO method performing the query still returns or rethrows consistent error types expected by callers.

packages/server/src/dao/mssql/tables-data.mssql.dao.ts (1)
16-16: Unused `filters` parameter.

The `filters` parameter is declared but never used in the implementation. Either implement filtering logic or remove the parameter to avoid confusion.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/tables-data.mssql.dao.ts` at line 16: the declared optional parameter/field `filters?: any[]` is unused; either remove this declaration from the DAO signature/shape or implement filtering logic where data is selected (e.g., inside the methods that read table data in this file, such as the table data retrieval function/class methods). If you choose to implement filtering, use the `filters` array to build the WHERE clause or apply in-memory filter predicates before returning results; if you choose to remove it, delete the `filters?: any[]` declaration and any references to it to avoid dead/unused API surface.

packages/server/src/dao/mssql/bulk-insert-records.mssql.dao.ts (1)
31-61: Performance: row-by-row inserts are inefficient for bulk operations.

Each record executes a separate INSERT statement within the transaction. For large batches, this is significantly slower than using MSSQL's Table-Valued Parameters (TVPs) or batched multi-row INSERT statements (SQL Server supports up to 1000 rows per INSERT).

🚀 Proposed batched insert approach

```diff
+const BATCH_SIZE = 1000; // SQL Server max rows per INSERT
+
 await transaction.begin();

-for (let i = 0; i < records.length; i++) {
-  const record = records[i];
-  const request = transaction.request();
-  const values = columns.map((col) => {
-    const value = record[col];
-    if (booleanColumns.has(col) && typeof value === "string") {
-      return value === "true" ? 1 : 0;
-    }
-    return value;
-  });
-
-  const paramNames = columns.map((_, idx) => `@p${i}_${idx}`).join(", ");
-  columns.forEach((_col, idx) => {
-    request.input(`p${i}_${idx}`, values[idx]);
-  });
-
-  const insertSQL = `
-    INSERT INTO [${tableName}] (${columnNames})
-    VALUES (${paramNames})
-  `;
-
-  try {
-    await request.query(insertSQL);
-  } catch (error) {
-    throw new HTTPException(500, {
-      message: `Failed to insert record ${i + 1}: ${error instanceof Error ? error.message : String(error)}`,
-    });
-  }
-}
+for (let batchStart = 0; batchStart < records.length; batchStart += BATCH_SIZE) {
+  const batch = records.slice(batchStart, batchStart + BATCH_SIZE);
+  const request = transaction.request();
+  const valuesClauses: string[] = [];
+
+  batch.forEach((record, recordIdx) => {
+    const paramNames = columns.map((col, colIdx) => {
+      const paramName = `p${batchStart + recordIdx}_${colIdx}`;
+      let value = record[col];
+      if (booleanColumns.has(col) && typeof value === "string") {
+        value = value === "true" ? 1 : 0;
+      }
+      request.input(paramName, value);
+      return `@${paramName}`;
+    });
+    valuesClauses.push(`(${paramNames.join(", ")})`);
+  });
+
+  const insertSQL = `INSERT INTO [${tableName}] (${columnNames}) VALUES ${valuesClauses.join(", ")}`;
+  await request.query(insertSQL);
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mssql/bulk-insert-records.mssql.dao.ts` around lines 31 - 61, Replace the row-by-row insert loop in the transaction with a batched insert using either MSSQL Table-Valued Parameters (sql.Table + request.input with a TVP type) or multi-row INSERT statements limited to 1000 rows per batch; build batches from records and columns, convert booleanColumns string values to 1/0 into the batch payload, and execute one request.query per batch (reusing a single transaction.request per batch and using unique parameter names if you choose multi-row parameterization). Specifically modify the code that currently iterates over records (the for loop using transaction.request(), values, paramNames, and insertSQL) to instead (a) create an sql.Table matching columnNames, push each record row into that table (ensuring boolean conversion), then call transaction.request().input('tvp', tvpType, table).query('INSERT INTO [${tableName}] SELECT * FROM `@tvp`') or (b) group up to 1000 records, generate a single INSERT ... VALUES (...) with batched parameter names and supply all parameters once per batch; keep the existing error handling pattern around the batch execution to throw the same HTTPException on failure.
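The parameter-naming scheme used in the proposed batched diff can also be factored into a pure, unit-testable helper. The following is an illustrative sketch, not the project's code; `buildBatchInsert` is an invented name, and a real implementation should validate `tableName` and each column against an identifier allowlist before interpolating them:

```typescript
// Build one parameterized multi-row INSERT for a batch of records.
// Returns the SQL text plus a map of parameter name -> value, so the
// caller can bind each entry via request.input(name, value).
function buildBatchInsert(
  tableName: string,
  columns: string[],
  batch: Array<Record<string, unknown>>,
): { sql: string; params: Record<string, unknown> } {
  const params: Record<string, unknown> = {};
  const valuesClauses = batch.map((record, rowIdx) => {
    const placeholders = columns.map((col, colIdx) => {
      const name = `p${rowIdx}_${colIdx}`; // unique per row/column pair
      params[name] = record[col];
      return `@${name}`;
    });
    return `(${placeholders.join(", ")})`;
  });
  // NOTE: bracket-wrapping alone is not sufficient escaping; validate
  // identifiers before calling this helper.
  const columnNames = columns.map((c) => `[${c}]`).join(", ");
  const sql = `INSERT INTO [${tableName}] (${columnNames}) VALUES ${valuesClauses.join(", ")}`;
  return { sql, params };
}
```

Keeping the SQL/parameter construction pure makes the 1000-row batching and boolean-conversion logic easy to test without a live SQL Server.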
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/server/src/dao/mssql/add-record.mssql.dao.ts`:
- Around line 30-36: The INSERT builds SQL by interpolating tableName and
columns which allows injection; before constructing
columnNames/paramNames/query, validate tableName and each column using the same
identifier whitelist used in update-records.mssql.dao.ts (e.g.,
isValidIdentifier or /^[A-Za-z_][A-Za-z0-9_]*$/) and throw on invalid
identifiers, then use the validated names when creating columnNames (still
wrapping with brackets) and the table name; keep parameterization for values
(param0..n) unchanged.
In `@packages/server/src/dao/mssql/create-table.mssql.dao.ts`:
- Around line 124-142: The DDL builder in create-table.mssql.dao.ts is
vulnerable because identifiers are interpolated directly when building
columnDefinitions from FieldDataType.field.columnName (and elsewhere for
tableName and constraint names); fix by validating or escaping every SQL
identifier before interpolation — reuse the repository's existing identifier
validation/escaping helper (e.g., validateIdentifier or similar used in other
files) when constructing column names, table names, and constraint names, and
ensure columnDefinitions generation (and any code that calls
mapColumnTypeToMssql or uses formatMssqlDefaultValue) only inserts identifiers
that pass validation or have been safely quoted/escaped.
In `@packages/server/src/dao/mssql/delete-column.mssql.dao.ts`:
- Around line 54-57: The ALTER TABLE call is vulnerable to malicious
identifiers; sanitize and escape identifiers before interpolation: replace any ]
with ]] for both tableName and columnName (and optionally validate they match an
allowed pattern/are non-empty) then use those escaped values in the ALTER TABLE
string used by pool.request().query; update the code around the
pool.request().query call (referencing variables tableName and columnName) to
use the escaped/validated identifiers to prevent bracket-injection.
In `@packages/server/src/dao/mssql/delete-records.mssql.dao.ts`:
- Around line 171-182: The recursion wrongly dedupes only by table name
(deletedTables) and by "table.column" (visited key) which lets different FK
columns or different sets of values be skipped; update the dedupe keys to
include the specific value set being deleted (e.g., include a stable
serialization/hash of the values array) so both deletedTables and visited track
per-(table,column,values) operations; modify deleteRelatedRecursively (and the
similar logic around lines 223-230) to compute a composite key like
`${targetTable}.${targetColumn}.${valuesHash}` and use that for visited and for
any table-level dedupe to ensure distinct value sets are processed instead of
being skipped.
- Around line 86-87: The SQL building code interpolates dynamic identifiers
(e.g., constraint.referencingTable, constraint.referencingColumn, tableName)
directly into queries which allows SQL injection; fix by validating/escaping
identifiers before interpolation — implement a helper (e.g.,
validateSqlIdentifier or escapeSqlIdentifier) that ensures names match a strict
whitelist/regex (e.g., /^[A-Za-z_][A-Za-z0-9_]*$/) and then wrap validated
identifiers in square brackets before using them in the query strings used in
functions/methods that build the DELETE/SELECT (references in
delete-records.mssql.dao.ts: uses of constraint.referencingTable,
constraint.referencingColumn, and tableName at the noted locations 86-87, 127,
217, 240); keep value inputs parameterized (reuse paramList) and throw or reject
if an identifier fails validation instead of interpolating it.
In `@packages/server/src/dao/mssql/delete-table.mssql.dao.ts`:
- Line 64: The dynamic SQL interpolates identifiers (tableName,
constraint.referencingTable, constraintName) directly into queries which is
unsafe; create and use a single helper (e.g. escapeMssqlIdentifier) that first
validates identifiers against a strict allowlist/regex (e.g. only letters,
numbers, underscores or explicit whitelist) and then escapes any closing bracket
by replacing ] with ]] before wrapping with square brackets, and replace all
direct uses of `[${constraint.referencingTable}]`, `[${tableName}]`, and
`[${constraintName}]` in the delete-table.mssql.dao.ts query builders with calls
to that helper so identifiers are validated/escaped consistently (do not try to
parameterize identifiers).
- Around line 31-35: The FK discovery selects only OBJECT_NAME(...) and later
emits unqualified DDL which mis-targets tables with the same name in other
schemas; update the SELECT that defines
constraint_name/referencing_table/referenced_table (the query producing fk.name
AS constraint_name, OBJECT_NAME(...) AS referencing_table, OBJECT_NAME(...) AS
referenced_table and related COL_NAME calls) to return schema-qualified
identifiers by using OBJECT_SCHEMA_NAME(...) (or OBJECT_SCHEMA_NAME(object_id))
and build schema-qualified names with QUOTENAME(OBJECT_SCHEMA_NAME(...)) + '.' +
QUOTENAME(OBJECT_NAME(...)), and ensure the code paths that generate the
DROP/ALTER statements (the code that consumes those columns) use those
schema-qualified names (and QUOTENAME) instead of the unqualified OBJECT_NAME
values so FK lookup and DDL target the correct schema-qualified objects; do the
same change for the other FK-related SELECTs referenced in this file (the other
occurrences that populate referencing/referenced table names).
- Around line 118-130: The cascade path in delete-table.mssql.dao.ts currently
drops FK constraints via getForeignKeyReferences and then runs DROP TABLE, which
can leave the schema partially modified if DROP TABLE fails; wrap the FK DROP
and the final DROP TABLE in a single database transaction (use the pool
transaction API you already use elsewhere), begin a transaction, execute the
ALTER TABLE DROP CONSTRAINT calls and the DROP TABLE inside that transaction,
commit on success and rollback on any error, and ensure the transaction is
always cleaned up in error paths (reference getForeignKeyReferences, cascade
flag, pool.request(), and tableName to locate where to apply the transaction).
In `@packages/server/src/dao/mssql/export-table.mssql.dao.ts`:
- Around line 13-23: The code is vulnerable because tableName is interpolated
directly into pool.request().query and it incorrectly throws a 404 for empty
tables; fix by validating/sanitizing the identifier and returning empty rows
instead of throwing. Specifically, in export-table.mssql.dao.ts validate
tableName against a strict whitelist/regex (e.g. only allow alphanumeric and
underscores) or resolve it via a safe metadata lookup
(INFORMATION_SCHEMA.TABLES) before using it in pool.request().query (identifiers
cannot be parameterized), then execute the SELECT using the validated
identifier, remove the HTTPException path for empty result.recordset (do not
throw on zero rows), and return { cols: [], rows: [] } or derive cols from
INFORMATION_SCHEMA.COLUMNS when no rows exist; update references to result,
pool.request().query, cols, rows, and HTTPException accordingly.
In `@packages/server/src/dao/mssql/table-list.mssql.dao.ts`:
- Around line 24-34: The current tablesList construction does an N+1 per-table
COUNT(*) using string interpolation of table.tableName (in functions around
tablesList, pool, and countResult), which is both injection-prone and expensive;
replace this by issuing a single safe metadata query (using
sys.tables/sys.partitions or sys.dm_db_partition_stats joined to sys.schemas)
that returns table names and row counts for all user tables in one call, then
map that single result set to TableInfoSchemaType entries instead of the
Promise.all loop—ensure no identifier interpolation, use the
pool.request().query(...) once, and convert the returned rows to the same shape
(tableName and rowCount) used by the rest of the code.
In `@packages/server/src/dao/mssql/table-schema.mssql.dao.ts`:
- Around line 91-120: The generated CREATE TABLE in table-schema.mssql.dao.ts
currently builds only column definitions (see columns, columnDef and
createTableSql) and omits primary key and foreign key constraints; update the
logic to collect constraints from the schema query results (or run the
constraint query already present) and build constraint clauses (CONSTRAINT ...
PRIMARY KEY (...), CONSTRAINT ... FOREIGN KEY (...) REFERENCES ... ) into a
constraints array, then include constraints.join(",\n") along with
columns.join(",\n") inside the CREATE TABLE body before the closing parenthesis;
ensure you reference the actual constraint names, referenced table/columns and
include them in the final createTableSql string construction so PKs/FKs are
preserved for round-tripping.
In `@packages/server/src/dao/mssql/tables-data.mssql.dao.ts`:
- Around line 60-72: The COUNT and SELECT queries interpolate the tableName
directly (see variables tableName, countResult and dataResult), which can lead
to SQL injection; validate or sanitize the table identifier before using it in
pool.request().query by reusing the identifier validation utility used elsewhere
in the DAO layer (e.g., call the project’s validateIdentifier/sanitizeIdentifier
function on tableName and throw or return an error if invalid), then use the
validated identifier in the COUNT and SELECT queries and remove direct
interpolation of the raw tableName.
- Around line 49-57: The ORDER BY construction in this function is vulnerable
because column names from sort and the string `sort` are interpolated directly;
update the logic that builds `sortClause` to validate identifiers before using
them: ensure each s.columnName (used when building `sortParts`), the
single-string `sort`, and `order`/s.direction are checked against a strict
whitelist or regex (e.g., only letters, numbers, underscores and optionally a
known set of table columns) and normalize direction to only "ASC" or "DESC"; if
validation fails for any entry, skip it (or fall back to the default `ORDER BY
(SELECT NULL)`). Keep references to the existing variables `sortClause`,
`sortParts`, `s.columnName`, `s.direction`, and `order` when making the change.
In `@packages/server/src/dao/mssql/update-records.mssql.dao.ts`:
- Around line 54-75: The code in update-records.mssql.dao.ts builds SQL using
interpolated identifiers (tableName, primaryKey, and u.columnName) which can be
escaped by malicious input; before constructing the UPDATE query (inside the
loop over updatesByRow / using rowUpdates, booleanColumns, request), validate
and whitelist identifiers: call getTableColumns(tableName) to retrieve allowed
column names and ensure primaryKey is among them and every u.columnName exists;
additionally enforce an identifier regex (e.g., /^[A-Za-z0-9_]+$/) to reject any
names containing ']' or other illegal chars and throw an error if validation
fails; only then use the validated names to build the SET clauses and WHERE
clause.
In `@packages/server/src/db-manager.ts`:
- Line 220: The current console.log prints the raw connectionString (variable
connectionString) which may include credentials; change the log to avoid leaking
secrets by removing the raw connection string and either (a) log a non-sensitive
summary like "Created SQL Server connection pool for host=<host> db=<database>"
after parsing connectionString to extract host and database, or (b) call a small
helper (e.g., redactConnectionString(connectionString)) that masks user/password
before logging; replace the existing console.log(`Created SQL Server connection
pool for: ${connectionString}`) with one of these safe alternatives.
- Around line 187-223: The code can create duplicate pools because the
this.mssqlPools.has(connectionString) check occurs before awaiting new
mssql.ConnectionPool(config).connect(); fix by introducing an in-flight map
(e.g., this.mssqlPoolPromises) and store the promise for new
mssql.ConnectionPool(config).connect() under connectionString before awaiting it
so concurrent callers await the same promise; on success set
this.mssqlPools.set(connectionString, pool) and delete the promise entry, and on
failure remove the promise entry and rethrow the error; keep existing
pool.on("error", ...) registration and return
this.mssqlPools.get(connectionString) as MssqlPool after awaiting the shared
promise.
- Around line 191-202: The mssql.config object built in the db-manager (the
const config) puts connectTimeout under options and disables TLS; update it so
connectionTimeout and requestTimeout are top-level properties on the config
object (not under options) and change options.encrypt to true and
options.trustServerCertificate to false; keep options.enableArithAbort and any
other options as-is and preserve using this.baseConfig values and the dbName
assignment so the rest of the code (e.g., functions that consume mssql.config)
continues to work.
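Several of the inline findings above (add-record, delete-column, delete-records, delete-table, tables-data, update-records) call for a shared validate-then-escape identifier helper. A minimal sketch, assuming the name `escapeMssqlIdentifier` and a conservative allowlist; the repository may already have an equivalent such as `isValidIdentifier`:

```typescript
// Validate an identifier against a strict allowlist, then bracket-quote it.
// Throws instead of silently passing unsafe names into dynamic SQL.
function escapeMssqlIdentifier(name: string): string {
  if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(name)) {
    throw new Error(`Invalid SQL identifier: ${JSON.stringify(name)}`);
  }
  // The allowlist already rejects "]", so doubling it here is defense in
  // depth in case the pattern is ever loosened.
  return `[${name.replace(/]/g, "]]")}]`;
}
```

Identifiers cannot be parameterized in T-SQL, so this validate-then-quote step is the standard substitute; values should still go through `request.input(...)`.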
full query if permitted, but if the query fails due to permission errors catch
the error and re-run a safer query that omits the DMV subqueries (or return
fallback values like null/0 for port and active_connections), and ensure the
catch logs the permission error; update the code that sets `result` (the query
invocation using pool.request().query) to attempt the DMV version first and fall
back to a non-DMV query or defaults on permission failure.
---
Nitpick comments:
In `@packages/server/src/dao/mssql/bulk-insert-records.mssql.dao.ts`:
- Around line 31-61: Replace the row-by-row insert loop in the transaction with
a batched insert using either MSSQL Table-Valued Parameters (sql.Table +
request.input with a TVP type) or multi-row INSERT statements limited to 1000
rows per batch; build batches from records and columns, convert booleanColumns
string values to 1/0 into the batch payload, and execute one request.query per
batch (reusing a single transaction.request per batch and using unique parameter
names if you choose multi-row parameterization). Specifically modify the code
that currently iterates over records (the for loop using transaction.request(),
values, paramNames, and insertSQL) to instead (a) create an sql.Table matching
columnNames, push each record row into that table (ensuring boolean conversion),
then call transaction.request().input('tvp', tvpType, table).query('INSERT INTO
[${tableName}] SELECT * FROM `@tvp`') or (b) group up to 1000 records, generate a
single INSERT ... VALUES (...) with batched parameter names and supply all
parameters once per batch; keep the existing error handling pattern around the
batch execution to throw the same HTTPException on failure.
In `@packages/server/src/dao/mssql/create-table.mssql.dao.ts`:
- Around line 169-176: Wrap the DDL execution in a try/catch around the
pool.request().query(createTableSQL) call so MSSQL errors are caught and
re-thrown with context; on catch, import HTTPException from
"hono/http-exception" and throw a new HTTPException (e.g., 500) that includes a
descriptive message mentioning the target tableName and the original error
message/stack, preserving the original error details for debugging while
preventing raw DB errors from propagating.
In `@packages/server/src/dao/mssql/query.mssql.dao.ts`:
- Around line 27-40: When result.recordset exists but has no columns metadata
and rows is empty, explicitly set columns to an empty array instead of deriving
from rows[0]; update the logic around the variables result.recordset, rows and
columns in query.mssql.dao.ts so it becomes: if result.recordset.columns is
present use Object.keys(result.recordset.columns), else if rows.length > 0 use
Object.keys(rows[0]), otherwise set columns = []; keep the rest of the returned
shape (rows, rowCount, duration, message) unchanged and consider setting message
for empty results if desired.
- Around line 22-24: Wrap the call to pool.request().query(cleanedQuery) in a
try-catch inside the query execution block (the section using startTime, result,
duration and the cleanedQuery variable) so any MSSQL errors are caught; in the
catch, compute duration as performance.now() - startTime, log or throw a new
error that includes a short, user-friendly message plus the cleanedQuery and
original error details (or attach the original error as a cause) to preserve
context for debugging; ensure the function (the DAO method performing the query)
still returns or rethrows consistent error types expected by callers.
In `@packages/server/src/dao/mssql/table-columns.mssql.dao.ts`:
- Around line 79-81: The query currently hardcodes the schema as 'dbo'; update
the DAO method that builds this query (e.g., getTableColumns / the function that
uses `@tableName` in table-columns.mssql.dao.ts) to accept an optional schema
parameter (schema?: string) with a default of 'dbo', replace the literal 'dbo'
in the WHERE clause with a bound parameter (e.g., `@schema`) and add that
parameter to the query bindings, and then update any call sites to pass a schema
when needed so multi-schema tables are returned.
In `@packages/server/src/dao/mssql/tables-data.mssql.dao.ts`:
- Line 16: The declared optional parameter/field "filters?: any[]" is unused;
either remove this declaration from the DAO signature/shape or implement
filtering logic where data is selected (e.g., inside the methods that read table
data in this file such as the table data retrieval function/class methods). If
you choose to implement filtering, use the "filters" array to build the WHERE
clause or apply in-memory filter predicates before returning results; if you
choose to remove it, delete the "filters?: any[]" declaration and any references
to it to avoid dead/unused API surface.
In `@packages/server/src/db-manager.ts`:
- Around line 302-309: Replace the current Promise.all usage with a best-effort
teardown: use Promise.allSettled to await [...pgClosePromises,
...mysqlClosePromises, ...mssqlClosePromises], inspect the results and log any
failures from the settled promises (including which connectionString/pool
failed), and ensure you still clear the pools maps (e.g., this.pgPools.clear(),
this.mssqlPools.clear(), etc.) after awaiting allSettled so cleanup proceeds
even if some pool.close() calls reject; reference mssqlClosePromises,
this.mssqlPools, pool.close, pgClosePromises, mysqlClosePromises, and
this.pgPools.clear when updating the implementation.
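The best-effort teardown described in the db-manager comment can be sketched as follows. `ClosablePool` and `closeAllPools` are simplified stand-ins; the real manager holds pg, mysql, and mssql pools in separate maps:

```typescript
// Best-effort teardown sketch (names and shapes are assumptions).
interface ClosablePool {
  close(): Promise<void>;
}

async function closeAllPools(pools: Map<string, ClosablePool>): Promise<void> {
  const entries = [...pools.entries()];
  // allSettled never rejects, so one failing close() cannot abort the rest.
  const results = await Promise.allSettled(entries.map(([, pool]) => pool.close()));

  results.forEach((result, i) => {
    if (result.status === "rejected") {
      // Log which pool failed instead of aborting the teardown.
      console.error(`Failed to close pool "${entries[i][0]}":`, result.reason);
    }
  });

  // Clear the map even when some close() calls rejected.
  pools.clear();
}
```

The key difference from `Promise.all` is that cleanup always runs to completion and the maps are always cleared.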
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
- packages/server/package.json
- packages/server/src/dao/mssql/add-record.mssql.dao.ts
- packages/server/src/dao/mssql/bulk-insert-records.mssql.dao.ts
- packages/server/src/dao/mssql/create-table.mssql.dao.ts
- packages/server/src/dao/mssql/database-list.mssql.dao.ts
- packages/server/src/dao/mssql/delete-column.mssql.dao.ts
- packages/server/src/dao/mssql/delete-records.mssql.dao.ts
- packages/server/src/dao/mssql/delete-table.mssql.dao.ts
- packages/server/src/dao/mssql/export-table.mssql.dao.ts
- packages/server/src/dao/mssql/query.mssql.dao.ts
- packages/server/src/dao/mssql/table-columns.mssql.dao.ts
- packages/server/src/dao/mssql/table-list.mssql.dao.ts
- packages/server/src/dao/mssql/table-schema.mssql.dao.ts
- packages/server/src/dao/mssql/tables-data.mssql.dao.ts
- packages/server/src/dao/mssql/update-records.mssql.dao.ts
- packages/server/src/db-manager.ts
- packages/shared/src/types/column.type.ts
- packages/shared/src/types/database.types.ts
```ts
const columnNames = columns.map((col) => `[${col}]`).join(", ");
const paramNames = columns.map((_col, idx) => `@param${idx}`).join(", ");

const query = `
  INSERT INTO [${tableName}] (${columnNames})
  VALUES (${paramNames})
`;
```
SQL injection risk: validate table and column names.
Same concern as noted in update-records.mssql.dao.ts — the tableName and column names are directly interpolated. Apply the same identifier validation pattern here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/add-record.mssql.dao.ts` around lines 30 - 36,
The INSERT builds SQL by interpolating tableName and columns which allows
injection; before constructing columnNames/paramNames/query, validate tableName
and each column using the same identifier whitelist used in
update-records.mssql.dao.ts (e.g., isValidIdentifier or
/^[A-Za-z_][A-Za-z0-9_]*$/) and throw on invalid identifiers, then use the
validated names when creating columnNames (still wrapping with brackets) and the
table name; keep parameterization for values (param0..n) unchanged.
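The identifier whitelist referenced in this prompt (and in the update-records comment it points to) can be tested in isolation. The helper name is an assumption; the regex is the one quoted in the prompt:

```typescript
// Minimal identifier whitelist sketch. In the DAOs the rejection would be an
// HTTPException(400, ...) rather than a plain Error.
const IDENTIFIER_RE = /^[A-Za-z_][A-Za-z0-9_]*$/;

function assertValidIdentifier(name: string, kind: string): string {
  if (!IDENTIFIER_RE.test(name)) {
    throw new Error(`Invalid ${kind} identifier: "${name}"`);
  }
  return name;
}

// Validated names can then be bracket-quoted and interpolated safely,
// while row values stay parameterized as @param0..n.
const table = assertValidIdentifier("users", "table");
const insertSql = `INSERT INTO [${table}] ([name]) VALUES (@param0)`;
```

Because the regex rejects `]` outright, bracket-escape tricks never reach the query string.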
```ts
const columnDefinitions = fields.map((field: FieldDataType) => {
  const mappedType = mapColumnTypeToMssql(field.columnType, field.isArray ?? false);
  let columnDef = `[${field.columnName}] ${mappedType}`;

  // NOT NULL
  if (!field.isNullable && !field.isPrimaryKey) {
    columnDef += " NOT NULL";
  }

  // Default value (skip for IDENTITY columns)
  if (field.defaultValue && !mappedType.includes("IDENTITY")) {
    const defaultValue = formatMssqlDefaultValue(field.defaultValue, mappedType);
    if (defaultValue !== null) {
      columnDef += ` DEFAULT ${defaultValue}`;
    }
  }

  return columnDef;
});
```
SQL injection risk in DDL generation.
The tableName, columnName, and constraint names are directly interpolated into the DDL statement. Apply identifier validation as noted in other files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/create-table.mssql.dao.ts` around lines 124 -
142, The DDL builder in create-table.mssql.dao.ts is vulnerable because
identifiers are interpolated directly when building columnDefinitions from
FieldDataType.field.columnName (and elsewhere for tableName and constraint
names); fix by validating or escaping every SQL identifier before interpolation
— reuse the repository's existing identifier validation/escaping helper (e.g.,
validateIdentifier or similar used in other files) when constructing column
names, table names, and constraint names, and ensure columnDefinitions
generation (and any code that calls mapColumnTypeToMssql or uses
formatMssqlDefaultValue) only inserts identifiers that pass validation or have
been safely quoted/escaped.
```ts
const result = await pool
  .request()
  .query(`ALTER TABLE [${tableName}] DROP COLUMN [${columnName}]`);
```
ALTER TABLE statement is vulnerable via unescaped identifiers.
[${tableName}] / [${columnName}] is not safe when input contains ] and can be abused.
🛡️ Proposed identifier escaping fix
```diff
+const escapedTableName = tableName.replaceAll("]", "]]");
+const escapedColumnName = columnName.replaceAll("]", "]]");
 const result = await pool
   .request()
-  .query(`ALTER TABLE [${tableName}] DROP COLUMN [${columnName}]`);
+  .query(`ALTER TABLE [${escapedTableName}] DROP COLUMN [${escapedColumnName}]`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/delete-column.mssql.dao.ts` around lines 54 -
57, The ALTER TABLE call is vulnerable to malicious identifiers; sanitize and
escape identifiers before interpolation: replace any ] with ]] for both
tableName and columnName (and optionally validate they match an allowed
pattern/are non-empty) then use those escaped values in the ALTER TABLE string
used by pool.request().query; update the code around the pool.request().query
call (referencing variables tableName and columnName) to use the
escaped/validated identifiers to prevent bracket-injection.
```
SELECT TOP 100 * FROM [${constraint.referencingTable}]
WHERE [${constraint.referencingColumn}] IN (${paramList})
```
Prevent SQL injection in dynamic DELETE/SELECT identifiers.
tableName/column names are interpolated directly into SQL text. This is exploitable unless identifiers are escaped/validated before interpolation.
🔧 Proposed fix
+const quoteIdentifier = (identifier: string) => `[${identifier.replace(/]/g, "]]")}]`;
...
- const relatedResult = await request.query(`
- SELECT TOP 100 * FROM [${constraint.referencingTable}]
- WHERE [${constraint.referencingColumn}] IN (${paramList})
- `);
+ const relatedResult = await request.query(`
+ SELECT TOP 100 * FROM ${quoteIdentifier(constraint.referencingTable)}
+ WHERE ${quoteIdentifier(constraint.referencingColumn)} IN (${paramList})
+ `);
...
- `DELETE FROM [${tableName}] WHERE [${pkColumn}] IN (${paramList})`,
+ `DELETE FROM ${quoteIdentifier(tableName)} WHERE ${quoteIdentifier(pkColumn)} IN (${paramList})`,
...
- `DELETE FROM [${targetTable}] WHERE [${targetColumn}] IN (${deleteParamList})`,
+ `DELETE FROM ${quoteIdentifier(targetTable)} WHERE ${quoteIdentifier(targetColumn)} IN (${deleteParamList})`,
...
- `DELETE FROM [${tableName}] WHERE [${pkColumn}] IN (${mainParamList})`,
+ `DELETE FROM ${quoteIdentifier(tableName)} WHERE ${quoteIdentifier(pkColumn)} IN (${mainParamList})`,Also applies to: 127-127, 217-217, 240-240
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/delete-records.mssql.dao.ts` around lines 86 -
87, The SQL building code interpolates dynamic identifiers (e.g.,
constraint.referencingTable, constraint.referencingColumn, tableName) directly
into queries which allows SQL injection; fix by validating/escaping identifiers
before interpolation — implement a helper (e.g., validateSqlIdentifier or
escapeSqlIdentifier) that ensures names match a strict whitelist/regex (e.g.,
/^[A-Za-z_][A-Za-z0-9_]*$/) and then wrap validated identifiers in square
brackets before using them in the query strings used in functions/methods that
build the DELETE/SELECT (references in delete-records.mssql.dao.ts: uses of
constraint.referencingTable, constraint.referencingColumn, and tableName at the
noted locations 86-87, 127, 217, 240); keep value inputs parameterized (reuse
paramList) and throw or reject if an identifier fails validation instead of
interpolating it.
```ts
const deletedTables = new Set<string>();
const visited = new Set<string>();

const deleteRelatedRecursively = async (
  targetTable: string,
  targetColumn: string,
  values: unknown[],
  visitedSet: Set<string>,
) => {
  const key = `${targetTable}.${targetColumn}`;
  if (visitedSet.has(key)) return;
  visitedSet.add(key);
```
Recursive dedupe logic can skip required deletions.
deletedTables dedupes by table only, and visited dedupes by table.column only. Different FK columns or different value sets can be skipped, leaving dependents undeleted and causing later FK failures.
🔧 Proposed fix
- const deletedTables = new Set<string>();
const visited = new Set<string>();
+ const processedRootConstraints = new Set<string>();
...
- const key = `${targetTable}.${targetColumn}`;
+ const valueSig = values.map((v) => String(v)).sort().join("|");
+ const key = `${targetTable}.${targetColumn}:${valueSig}`;
if (visitedSet.has(key)) return;
visitedSet.add(key);
...
- deletedTables.add(targetTable);
};
for (const constraint of fkConstraints) {
- if (deletedTables.has(constraint.referencingTable)) continue;
+ const rootKey = `${constraint.referencingTable}.${constraint.referencingColumn}`;
+ if (processedRootConstraints.has(rootKey)) continue;
+ processedRootConstraints.add(rootKey);
await deleteRelatedRecursively(
constraint.referencingTable,
constraint.referencingColumn,
pkValues,
visited,
);
}Also applies to: 223-230
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/delete-records.mssql.dao.ts` around lines 171 -
182, The recursion wrongly dedupes only by table name (deletedTables) and by
"table.column" (visited key) which lets different FK columns or different sets
of values be skipped; update the dedupe keys to include the specific value set
being deleted (e.g., include a stable serialization/hash of the values array) so
both deletedTables and visited track per-(table,column,values) operations;
modify deleteRelatedRecursively (and the similar logic around lines 223-230) to
compute a composite key like `${targetTable}.${targetColumn}.${valuesHash}` and
use that for visited and for any table-level dedupe to ensure distinct value
sets are processed instead of being skipped.
```ts
const countResult = await pool
  .request()
  .query(`SELECT COUNT(*) as total FROM [${tableName}]`);
const totalRows = Number(countResult.recordset[0]?.total ?? 0);

// Get paginated data
const dataResult = await pool.request().query(`
  SELECT *
  FROM [${tableName}]
  ${sortClause}
  OFFSET ${offset} ROWS
  FETCH NEXT ${limit + 1} ROWS ONLY
`);
```
Table name interpolation in queries.
The tableName is directly interpolated in both the COUNT and SELECT queries. Apply identifier validation consistent with other DAO files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/tables-data.mssql.dao.ts` around lines 60 - 72,
The COUNT and SELECT queries interpolate the tableName directly (see variables
tableName, countResult and dataResult), which can lead to SQL injection;
validate or sanitize the table identifier before using it in
pool.request().query by reusing the identifier validation utility used elsewhere
in the DAO layer (e.g., call the project’s validateIdentifier/sanitizeIdentifier
function on tableName and throw or return an error if invalid), then use the
validated identifier in the COUNT and SELECT queries and remove direct
interpolation of the raw tableName.
```ts
for (const [pkValue, rowUpdates] of updatesByRow.entries()) {
  const setClauses = rowUpdates.map((u, idx) => `[${u.columnName}] = @value${idx}`);
  const request = transaction.request();

  rowUpdates.forEach((u, idx) => {
    let value = u.value;
    if (value !== null && typeof value === "object") {
      value = JSON.stringify(value);
    }
    if (booleanColumns.has(u.columnName) && typeof value === "string") {
      value = value === "true" ? 1 : 0;
    }
    request.input(`value${idx}`, value);
  });

  request.input("pkValue", pkValue);

  const query = `
    UPDATE [${tableName}]
    SET ${setClauses.join(", ")}
    WHERE [${primaryKey}] = @pkValue
  `;
```
SQL injection risk: table and column names are interpolated directly.
The tableName, columnName, and primaryKey values are directly interpolated into the SQL query. While bracket notation [...] provides some protection, a malicious input containing ] could escape the brackets. Consider validating these identifiers against a whitelist (e.g., using getTableColumns to verify column names exist) or using a regex to ensure they contain only valid identifier characters.
🛡️ Proposed validation approach
```diff
+const validIdentifier = /^[a-zA-Z_][a-zA-Z0-9_]*$/;
+
+function validateIdentifier(name: string, type: string): void {
+  if (!validIdentifier.test(name)) {
+    throw new HTTPException(400, {
+      message: `Invalid ${type} name: "${name}"`,
+    });
+  }
+}
+
 export async function updateRecords({
   params,
   db,
 }: {
   params: UpdateRecordsSchemaType;
   db: DatabaseSchemaType["db"];
 }): Promise<{ updatedCount: number }> {
   const { tableName, updates, primaryKey } = params;
+  validateIdentifier(tableName, "table");
+  validateIdentifier(primaryKey, "column");
   const pool = await getMssqlPool(db);
```

And validate column names in the loop:

```diff
 const setClauses = rowUpdates.map((u, idx) => {
+  validateIdentifier(u.columnName, "column");
   return `[${u.columnName}] = @value${idx}`;
 });
```
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mssql/update-records.mssql.dao.ts` around lines 54 -
75, The code in update-records.mssql.dao.ts builds SQL using interpolated
identifiers (tableName, primaryKey, and u.columnName) which can be escaped by
malicious input; before constructing the UPDATE query (inside the loop over
updatesByRow / using rowUpdates, booleanColumns, request), validate and
whitelist identifiers: call getTableColumns(tableName) to retrieve allowed
column names and ensure primaryKey is among them and every u.columnName exists;
additionally enforce an identifier regex (e.g., /^[A-Za-z0-9_]+$/) to reject any
names containing ']' or other illegal chars and throw an error if validation
fails; only then use the validated names to build the SET clauses and WHERE
clause.
```ts
if (!this.mssqlPools.has(connectionString)) {
  const url = new URL(connectionString);
  const dbName = url.pathname.slice(1);

  const config: mssql.config = {
    server: this.baseConfig.host,
    port: this.baseConfig.port,
    user: this.baseConfig.user,
    password: this.baseConfig.password,
    database: dbName || undefined,
    options: {
      encrypt: false, // Use true for Azure
      trustServerCertificate: true,
      enableArithAbort: true,
      connectTimeout: 2000,
    },
    pool: {
      max: 10,
      min: 0,
      idleTimeoutMillis: 30000,
    },
  };

  const pool = await new mssql.ConnectionPool(config).connect();

  pool.on("error", (err) => {
    console.error(
      `Unexpected error on SQL Server pool for "${connectionString}":`,
      err.message,
    );
  });

  this.mssqlPools.set(connectionString, pool);
  console.log(`Created SQL Server connection pool for: ${connectionString}`);
}

return this.mssqlPools.get(connectionString) as MssqlPool;
```
Concurrent first-call race can create duplicate MSSQL pools.
The has check happens before await connect(), so parallel callers can both create/connect pools for the same key.
🔧 Proposed fix (track in-flight pool creation)
class DatabaseManager {
private pgPools: Map<string, Pool> = new Map();
private mysqlPools: Map<string, MysqlPool> = new Map();
private mssqlPools: Map<string, MssqlPool> = new Map();
+ private mssqlPoolPromises: Map<string, Promise<MssqlPool>> = new Map();
@@
async getMssqlPool(database?: string): Promise<MssqlPool> {
@@
- if (!this.mssqlPools.has(connectionString)) {
+ if (!this.mssqlPools.has(connectionString)) {
+ if (!this.mssqlPoolPromises.has(connectionString)) {
+ const promise = (async () => {
const url = new URL(connectionString);
const dbName = url.pathname.slice(1);
@@
- const pool = await new mssql.ConnectionPool(config).connect();
+ const pool = await new mssql.ConnectionPool(config).connect();
@@
this.mssqlPools.set(connectionString, pool);
console.log(`Created SQL Server connection pool for: ${connectionString}`);
- }
-
- return this.mssqlPools.get(connectionString) as MssqlPool;
+ return pool;
+ })();
+ this.mssqlPoolPromises.set(connectionString, promise);
+ }
+
+ try {
+ return await this.mssqlPoolPromises.get(connectionString)!;
+ } finally {
+ this.mssqlPoolPromises.delete(connectionString);
+ }
+ }
+
+ return this.mssqlPools.get(connectionString) as MssqlPool;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if (!this.mssqlPools.has(connectionString)) {
-  const url = new URL(connectionString);
-  const dbName = url.pathname.slice(1);
-  const config: mssql.config = {
-    server: this.baseConfig.host,
-    port: this.baseConfig.port,
-    user: this.baseConfig.user,
-    password: this.baseConfig.password,
-    database: dbName || undefined,
-    options: {
-      encrypt: false, // Use true for Azure
-      trustServerCertificate: true,
-      enableArithAbort: true,
-      connectTimeout: 2000,
-    },
-    pool: {
-      max: 10,
-      min: 0,
-      idleTimeoutMillis: 30000,
-    },
-  };
-  const pool = await new mssql.ConnectionPool(config).connect();
-  pool.on("error", (err) => {
-    console.error(
-      `Unexpected error on SQL Server pool for "${connectionString}":`,
-      err.message,
-    );
-  });
-  this.mssqlPools.set(connectionString, pool);
-  console.log(`Created SQL Server connection pool for: ${connectionString}`);
-}
-return this.mssqlPools.get(connectionString) as MssqlPool;
+class DatabaseManager {
+  private pgPools: Map<string, Pool> = new Map();
+  private mysqlPools: Map<string, MysqlPool> = new Map();
+  private mssqlPools: Map<string, MssqlPool> = new Map();
+  private mssqlPoolPromises: Map<string, Promise<MssqlPool>> = new Map();
+
+  async getMssqlPool(database?: string): Promise<MssqlPool> {
+    if (!this.mssqlPools.has(connectionString)) {
+      if (!this.mssqlPoolPromises.has(connectionString)) {
+        const promise = (async () => {
+          const url = new URL(connectionString);
+          const dbName = url.pathname.slice(1);
+          const config: mssql.config = {
+            server: this.baseConfig.host,
+            port: this.baseConfig.port,
+            user: this.baseConfig.user,
+            password: this.baseConfig.password,
+            database: dbName || undefined,
+            options: {
+              encrypt: false, // Use true for Azure
+              trustServerCertificate: true,
+              enableArithAbort: true,
+              connectTimeout: 2000,
+            },
+            pool: {
+              max: 10,
+              min: 0,
+              idleTimeoutMillis: 30000,
+            },
+          };
+          const pool = await new mssql.ConnectionPool(config).connect();
+          pool.on("error", (err) => {
+            console.error(
+              `Unexpected error on SQL Server pool for "${connectionString}":`,
+              err.message,
+            );
+          });
+          this.mssqlPools.set(connectionString, pool);
+          console.log(`Created SQL Server connection pool for: ${connectionString}`);
+          return pool;
+        })();
+        this.mssqlPoolPromises.set(connectionString, promise);
+      }
+
+      try {
+        return await this.mssqlPoolPromises.get(connectionString)!;
+      } finally {
+        this.mssqlPoolPromises.delete(connectionString);
+      }
+    }
+
+    return this.mssqlPools.get(connectionString) as MssqlPool;
+  }
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/db-manager.ts` around lines 187 - 223, The code can
create duplicate pools because the this.mssqlPools.has(connectionString) check
occurs before awaiting new mssql.ConnectionPool(config).connect(); fix by
introducing an in-flight map (e.g., this.mssqlPoolPromises) and store the
promise for new mssql.ConnectionPool(config).connect() under connectionString
before awaiting it so concurrent callers await the same promise; on success set
this.mssqlPools.set(connectionString, pool) and delete the promise entry, and on
failure remove the promise entry and rethrow the error; keep existing
pool.on("error", ...) registration and return
this.mssqlPools.get(connectionString) as MssqlPool after awaiting the shared
promise.
```ts
const config: mssql.config = {
  server: this.baseConfig.host,
  port: this.baseConfig.port,
  user: this.baseConfig.user,
  password: this.baseConfig.password,
  database: dbName || undefined,
  options: {
    encrypt: false, // Use true for Azure
    trustServerCertificate: true,
    enableArithAbort: true,
    connectTimeout: 2000,
  },
```
🧩 Analysis chain
🌐 Web query:
node-mssql v12 connectionTimeout requestTimeout configuration
💡 Result:
In node-mssql v12, both timeouts are set on the top-level config object (not under options):
```ts
import sql from "mssql";

const config = {
  user: "…",
  password: "…",
  server: "…",
  database: "…",
  // timeouts (milliseconds)
  connectionTimeout: 30_000, // default 15000
  requestTimeout: 120_000, // default 15000
  pool: { max: 10, min: 0, idleTimeoutMillis: 30_000 },
  options: { encrypt: true, trustServerCertificate: false },
};

await sql.connect(config);
```

- `connectionTimeout`: connection/login timeout in ms (default 15000). [1]
- `requestTimeout`: per-request/query timeout in ms (default 15000). For the msnodesqlv8 driver, timeouts < 1 s aren't supported; and if you pass it via a connection string, the key must be `request timeout` (with a space). [1]
- node-mssql v12.x is the current major line (e.g., v12.0.0 / v12.1.0). [2]
Sources:
[1] node-mssql README – configuration options (connectionTimeout, requestTimeout, defaults, msnodesqlv8 note) (github.com)
[2] node-mssql GitHub releases – v12.0.0 / v12.1.0 (github.com)
Citations:
- 1: https://github.com/tediousjs/node-mssql
- 2: https://github.com/tediousjs/node-mssql/releases?utm_source=openai
🌐 Web query:
mssql npm package encrypt trustServerCertificate production recommended settings
💡 Result:
For the mssql (node-mssql) npm package in production, the generally recommended TLS settings are:
- `options.encrypt: true` (enable TLS) (tediousjs.github.io)
- `options.trustServerCertificate: false` (validate the server's cert; don't blindly trust it) (tediousjs.github.io)
- If `trustServerCertificate: false`, ensure certificate validation can succeed:
  - The server presents a cert that chains to a trusted CA (OS trust store / container trust store), and
  - The hostname you connect to matches the certificate; with the underlying Tedious driver you may need to set `options.serverName` to the certificate DNS name to make hostname validation pass. (deepwiki.com)
- If you must use a private CA/self-signed cert, prefer trusting the CA explicitly (pin CA) rather than setting `trustServerCertificate: true`, by passing TLS materials via `options.cryptoCredentialsDetails` (e.g., `ca`). (deepwiki.com)
- For highest security (and if your SQL Server supports it), consider `encrypt: 'strict'` (Tedious supports a strict mode; requires newer SQL Server versions). (deepwiki.com)
const config = {
server: process.env.DB_HOST, // use DNS name that matches the cert
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
options: {
encrypt: true,
trustServerCertificate: false,
// serverName: process.env.DB_CERT_DNS_NAME, // if needed for hostname validation (Tedious)
// cryptoCredentialsDetails: { ca: [fs.readFileSync('ca.pem')] } // if using private CA
}
}
``` ([tediousjs.github.io](https://tediousjs.github.io/node-mssql/?utm_source=openai))
Citations:
- 1: https://tediousjs.github.io/node-mssql/?utm_source=openai
- 2: https://tediousjs.github.io/node-mssql/?utm_source=openai
- 3: https://deepwiki.com/tediousjs/tedious/7.2-tlsencryption?utm_source=openai
- 4: https://deepwiki.com/tediousjs/tedious/7.2-tlsencryption?utm_source=openai
- 5: https://deepwiki.com/tediousjs/tedious/7.2-tlsencryption?utm_source=openai
- 6: https://tediousjs.github.io/node-mssql/?utm_source=openai
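The private-CA recommendation above can be sketched as a small config builder. This is an illustrative helper, not code from the PR; the helper name, the `DbParams` shape, and the way the CA PEM is passed in are assumptions, while the option names (`encrypt`, `trustServerCertificate`, `cryptoCredentialsDetails`) follow the node-mssql/Tedious docs cited above:

```typescript
// Hypothetical helper (not from the PR): builds a node-mssql-style config
// object that pins a private CA instead of disabling certificate validation.
interface DbParams {
  host: string;
  user: string;
  password: string;
  database: string;
}

function buildPinnedCaConfig(params: DbParams, caPem: string) {
  return {
    server: params.host, // use the DNS name that matches the server certificate
    user: params.user,
    password: params.password,
    database: params.database,
    options: {
      encrypt: true,                 // TLS on
      trustServerCertificate: false, // still validate the chain,
      cryptoCredentialsDetails: { ca: [caPem] }, // but against the pinned CA
    },
  };
}
```

The point of the sketch is that pinning keeps validation enabled: the connection still fails for a cert that does not chain to the supplied CA, unlike `trustServerCertificate: true`, which accepts anything.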
---
🏁 Script executed:
```shell
# First, check the mssql package version in use
find . -name "package.json" -path "*/packages/server/*" | head -1 | xargs cat | grep -A 5 -B 5 '"mssql"'
```

Repository: husamql3/db-studio
Length of output: 345
🏁 Script executed:
```shell
# Check the full db-manager.ts file to understand context and other mssql config patterns
cat -n packages/server/src/db-manager.ts | head -250
```

Repository: husamql3/db-studio
Length of output: 8542
Move timeout configuration to top level and enable TLS with certificate validation.
In node-mssql v12, `connectionTimeout` and `requestTimeout` must be top-level config properties; `options.connectTimeout` is not recognized and will be ignored. Additionally, `encrypt: false` and `trustServerCertificate: true` are insecure for production; use `encrypt: true` and `trustServerCertificate: false` instead.
Configuration fix
```diff
 const config: mssql.config = {
   server: this.baseConfig.host,
   port: this.baseConfig.port,
   user: this.baseConfig.user,
   password: this.baseConfig.password,
   database: dbName || undefined,
+  connectionTimeout: 2000,
+  requestTimeout: 30000,
   options: {
-    encrypt: false, // Use true for Azure
-    trustServerCertificate: true,
+    encrypt: true,
+    trustServerCertificate: false,
     enableArithAbort: true,
-    connectTimeout: 2000,
   },
   pool: {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```ts
const config: mssql.config = {
  server: this.baseConfig.host,
  port: this.baseConfig.port,
  user: this.baseConfig.user,
  password: this.baseConfig.password,
  database: dbName || undefined,
  options: {
    encrypt: false, // Use true for Azure
    trustServerCertificate: true,
    enableArithAbort: true,
    connectTimeout: 2000,
  },
```

After:

```ts
const config: mssql.config = {
  server: this.baseConfig.host,
  port: this.baseConfig.port,
  user: this.baseConfig.user,
  password: this.baseConfig.password,
  database: dbName || undefined,
  connectionTimeout: 2000,
  requestTimeout: 30000,
  options: {
    encrypt: true,
    trustServerCertificate: false,
    enableArithAbort: true,
  },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/db-manager.ts` around lines 191 - 202, The mssql.config
object built in the db-manager (the const config) puts connectTimeout under
options and disables TLS; update it so connectionTimeout and requestTimeout are
top-level properties on the config object (not under options) and change
options.encrypt to true and options.trustServerCertificate to false; keep
options.enableArithAbort and any other options as-is and preserve using
this.baseConfig values and the dbName assignment so the rest of the code (e.g.,
functions that consume mssql.config) continues to work.
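The finding above can be double-checked with a tiny standalone builder that keeps both timeouts at the top level of the config. This is a sketch, not the PR's actual `db-manager` code; the helper name and the concrete timeout values (2000 ms / 30000 ms, taken from the suggested fix) are illustrative:

```typescript
// Illustrative sketch: in node-mssql v12, timeouts belong at the top level
// of the config object, not inside `options`.
function buildMssqlConfig(
  host: string,
  port: number,
  user: string,
  password: string,
  database?: string,
) {
  return {
    server: host,
    port,
    user,
    password,
    database: database || undefined,
    connectionTimeout: 2000, // top-level: ms allowed to establish a connection
    requestTimeout: 30000,   // top-level: ms allowed per request
    options: {
      encrypt: true,
      trustServerCertificate: false,
      enableArithAbort: true,
      // no connectTimeout here: under `options` it would be silently ignored
    },
  };
}
```

Keeping the construction in one helper also makes it easy to assert in a unit test that no timeout key ever ends up under `options`.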
```ts
});

this.mssqlPools.set(connectionString, pool);
console.log(`Created SQL Server connection pool for: ${connectionString}`);
```
Avoid logging raw connection strings.
This can leak usernames/passwords into logs.
🧹 Proposed safe logging change
```diff
+const redacted = new URL(connectionString);
+if (redacted.password) redacted.password = "***";
+const safeConnectionString = redacted.toString();
-console.log(`Created SQL Server connection pool for: ${connectionString}`);
+console.log(`Created SQL Server connection pool for: ${safeConnectionString}`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/db-manager.ts` at line 220, The current console.log
prints the raw connectionString (variable connectionString) which may include
credentials; change the log to avoid leaking secrets by removing the raw
connection string and either (a) log a non-sensitive summary like "Created SQL
Server connection pool for host=<host> db=<database>" after parsing
connectionString to extract host and database, or (b) call a small helper (e.g.,
redactConnectionString(connectionString)) that masks user/password before
logging; replace the existing console.log(`Created SQL Server connection pool
for: ${connectionString}`) with one of these safe alternatives.
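The prompt's option (b) can be sketched as a standalone helper. This assumes URL-style connection strings such as `mssql://user:pass@host:1433/db`; the exact format of the keys stored in `mssqlPools` is not shown in this excerpt, so the fallback branch guards against strings the WHATWG `URL` parser rejects:

```typescript
// Sketch of a redaction helper for URL-style connection strings.
// If the string is not a parseable URL, return a fixed placeholder so an
// unparseable (possibly secret-bearing) string is never logged raw.
function redactConnectionString(connectionString: string): string {
  try {
    const url = new URL(connectionString);
    if (url.password) url.password = "***"; // mask credentials, keep host/db
    return url.toString();
  } catch {
    return "<unparseable connection string>";
  }
}
```

With a helper like this, the log call becomes ``console.log(`Created SQL Server connection pool for: ${redactConnectionString(connectionString)}`)``, keeping the host and database visible for debugging while masking the password.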
Great work @Amirosagan ❤️ can you add script to initialize a SQL Server DB like what we have in |
This pull request adds comprehensive support for Microsoft SQL Server (MSSQL) to the server-side data access layer, enabling CRUD operations and schema management for MSSQL databases. It introduces new DAO modules for inserting, deleting, and bulk-inserting records, creating tables, deleting columns, and retrieving database/server information, along with the necessary package dependencies.
MSSQL Data Access Layer Enhancements:
Record insertion (`addRecord`), bulk insertion (`bulkInsertRecords`), record deletion (with foreign key handling and force delete), column deletion, and table creation with type mapping and constraint support. [1] [2] [3] [4] [5]
Database and Server Metadata:
Dependency Updates:
Added `mssql` and `@types/mssql` to `package.json` to enable MSSQL connectivity and type support.
Summary by CodeRabbit
Release Notes