Conversation
🎯 Auto PR: stage → main
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI, or disable this status message via the corresponding setting. Use the checkbox below for a quick retry:
📝 Walkthrough

This pull request introduces MySQL database support alongside the existing PostgreSQL functionality. Changes include dialect-specific DAO implementations for all data operations, runtime route dispatch based on database type, pool-management updates to handle both databases, enhanced date/time/UUID field handling in the UI, and supporting workflow and initialization-script modifications.
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant Route as Route Handler
    participant Context as RouteEnv Context
    participant Dispatch as DAO Dispatcher
    participant MySQL as MySQL DAO
    participant Postgres as PostgreSQL DAO
    participant DB as Database

    Client->>Route: Request (GET/POST/PATCH/DELETE)
    activate Route
    Route->>Context: Extract dbType from Variables
    activate Context
    Context-->>Route: dbType = "mysql" | "pg"
    deactivate Context

    alt dbType is "mysql"
        Route->>Dispatch: Select MySQL DAO
        activate Dispatch
        Dispatch->>MySQL: Call MySQL implementation
        activate MySQL
        MySQL->>DB: Execute MySQL Query
        activate DB
        DB-->>MySQL: Result Set
        deactivate DB
        MySQL-->>Dispatch: Response
        deactivate MySQL
        deactivate Dispatch
    else dbType is "pg"
        Route->>Dispatch: Select PostgreSQL DAO
        activate Dispatch
        Dispatch->>Postgres: Call PostgreSQL implementation
        activate Postgres
        Postgres->>DB: Execute PostgreSQL Query
        activate DB
        DB-->>Postgres: Result Set
        deactivate DB
        Postgres-->>Dispatch: Response
        deactivate Postgres
        deactivate Dispatch
    end

    Route-->>Client: HTTP Response
    deactivate Route
```
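The dispatch step in the diagram can be sketched in TypeScript. This is a minimal illustration only; the DAO names, context shape, and return values below are stand-ins, not the repository's actual symbols.

```typescript
// Illustrative sketch of per-dialect DAO dispatch driven by dbType.
type DbType = "mysql" | "pg";

interface TableListDao {
  getTables(db: string): Promise<string[]>;
}

// Stand-in DAOs; the real ones would query information_schema / pg_catalog.
const mysqlTableListDao: TableListDao = {
  getTables: async (db) => [`mysql-tables-of-${db}`],
};
const pgTableListDao: TableListDao = {
  getTables: async (db) => [`pg-tables-of-${db}`],
};

// The route handler picks the dialect-specific DAO from the request context.
async function handleListTables(ctx: { dbType: DbType; db: string }): Promise<string[]> {
  const dao = ctx.dbType === "mysql" ? mysqlTableListDao : pgTableListDao;
  return dao.getTables(ctx.db);
}
```

In this shape, adding a third dialect means adding one DAO object and one branch (or a lookup map), while route signatures stay unchanged.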
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
@CodeRabbit review
✅ Actions performed: Review triggered.
Actionable comments posted: 19
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (7)
packages/core/src/components/add-table/add-record/add-record-field.tsx-311-314 (1)
311-314: ⚠️ Potential issue | 🟡 Minor: Redundant `.padStart(2, "0")` calls.

The `pad()` function at line 26 already applies `padStart(2, "0")`, making the chained `.padStart(2, "0")` calls here unnecessary.

🧹 Proposed fix

```diff
 onClick={() => {
   const now = new Date();
   const yyyy = now.getFullYear();
-  const mm = pad(now.getMonth() + 1).padStart(2, "0");
-  const dd = pad(now.getDate()).padStart(2, "0");
-  const hh = pad(now.getHours()).padStart(2, "0");
-  const min = pad(now.getMinutes()).padStart(2, "0");
+  const mm = pad(now.getMonth() + 1);
+  const dd = pad(now.getDate());
+  const hh = pad(now.getHours());
+  const min = pad(now.getMinutes());
   field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
 }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/components/add-table/add-record/add-record-field.tsx` around lines 311 - 314, The mm, dd, hh, and min assignments redundantly call .padStart(2, "0") after using the helper pad() which already applies padding; update the lines that set mm, dd, hh, and min to call pad(...) alone (e.g., replace pad(...).padStart(2, "0") with pad(...)) and leave the pad() helper implementation as-is to avoid double-padding (refer to the variables mm, dd, hh, min and the pad() helper).

packages/server/src/dao/mysql/database-list.mysql.dao.ts-91-95 (1)
91-95: ⚠️ Potential issue | 🟡 Minor: MySQL connection info fallback can report PostgreSQL default port.

At line 95, the `parseDatabaseUrl()` fallback may use `5432` (per `packages/server/src/utils/parse-database-url.ts`), which is wrong for MySQL fallback behavior.

🔧 Proposed fix

```diff
 const urlDefaults = parseDatabaseUrl();
+const fallbackMysqlPort = 3306;
 return {
   host: String(info.host || urlDefaults.host),
-  port: Number(info.port || urlDefaults.port),
+  port: Number(info.port ?? fallbackMysqlPort),
   user: String(info.user),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts` around lines 91 - 95, The fallback for MySQL connection port uses parseDatabaseUrl() (urlDefaults.port) which can return PostgreSQL's 5432; update the port fallback logic in database-list.mysql.dao.ts (where host/port are constructed) to prefer info.port, then urlDefaults.port only if it's a MySQL-appropriate value, and finally use 3306 as the ultimate default; i.e., change the port expression around Number(info.port || urlDefaults.port) to coercively fall back to 3306 when urlDefaults.port is missing or equals 5432 so MySQL connections never default to Postgres port.

packages/server/src/dao/mysql/table-list.mysql.dao.ts-29-31 (1)
29-31: ⚠️ Potential issue | 🟡 Minor: SQL injection risk: `tableName` is directly interpolated into the COUNT query.

Table names come from `information_schema` queries in this case (so less exploitable), but the interpolation pattern is still unsafe and inconsistent with best practices. Escape backticks in the identifier.

🛡️ Proposed fix: escape backticks in identifier

```diff
 (tables as Array<{ tableName: string }>).map(async (table) => {
+  const escapedTableName = table.tableName.replace(/`/g, "``");
   const [countRows] = await pool.execute<RowDataPacket[]>(
-    `SELECT COUNT(*) as count FROM \`${table.tableName}\``,
+    `SELECT COUNT(*) as count FROM \`${escapedTableName}\``,
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/table-list.mysql.dao.ts` around lines 29 - 31, The COUNT query interpolates table.tableName directly (in the pool.execute call that assigns countRows), which risks injection; sanitize the identifier by escaping any backticks in table.tableName (e.g., replace ` with ``), then wrap the result in backticks and use that safe identifier in the template literal for the SELECT COUNT(*) query so pool.execute uses the escaped table name.

scripts/init-db-mysql.sh-29-29 (1)
29-29: ⚠️ Potential issue | 🟡 Minor: Security concern: passwordless root with remote access.

The script creates a MySQL container with no root password (`MYSQL_ALLOW_EMPTY_PASSWORD=yes`) and grants root access from any host. While acceptable for local development, this poses a risk if the container is accidentally exposed. Consider either:

- Setting a default development password
- Adding a prominent warning comment
- Binding only to localhost (already done via the `DB_HOST` default)

💡 Suggested improvement: Add warning comment

```diff
+# WARNING: This configuration is for LOCAL DEVELOPMENT ONLY.
+# Do not use in production - root has no password and remote access is enabled.
 echo "[init-db-mysql] granting remote access to root..."
 docker exec "${DB_CONTAINER}" mysql -uroot -e "
 CREATE USER IF NOT EXISTS 'root'@'%';
```

Also applies to: 39-44

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/init-db-mysql.sh` at line 29, The script currently sets MYSQL_ALLOW_EMPTY_PASSWORD=yes which creates a passwordless root user and grants remote access; update the container setup to avoid this by either replacing MYSQL_ALLOW_EMPTY_PASSWORD with a secure default env var (e.g., MYSQL_ROOT_PASSWORD=<dev-password>) or add a prominent warning comment above the MYSQL_ALLOW_EMPTY_PASSWORD line indicating this is insecure for anything but isolated local development and must not be used in production, and/or ensure DB_HOST is bound to localhost only; reference the MYSQL_ALLOW_EMPTY_PASSWORD and DB_HOST env settings in your change so reviewers can find and verify the fix.

packages/server/src/dao/mysql/tables-data.mysql.dao.ts-114-114 (1)
114-114: ⚠️ Potential issue | 🟡 Minor: Redundant ternary expression - both branches are identical.

The expression `Array.isArray(sort) ? sort : sort` evaluates to `sort` in both cases. This appears to be a copy-paste error.

🔧 Proposed fix

```diff
-const sortClause = buildSortClauseMysql(Array.isArray(sort) ? sort : sort, order);
+const sortClause = buildSortClauseMysql(sort, order);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/tables-data.mysql.dao.ts` at line 114, The ternary in the call to buildSortClauseMysql is redundant (Array.isArray(sort) ? sort : sort) — update the invocation in the tables-data.mysql.dao.ts code (where sortClause is computed) to pass sort directly to buildSortClauseMysql (i.e., replace the ternary with just sort) so the call becomes buildSortClauseMysql(sort, order); ensure you update the reference that computes sortClause accordingly.

packages/server/src/routes/records.routes.ts-160-160 (1)
160-160: ⚠️ Potential issue | 🟡 Minor: Remove debug `console.log` statement.

This debug statement should be removed before merging to production.

🧹 Proposed fix

```diff
-console.log("result", result);
 return c.json({ data: result }, 200);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/routes/records.routes.ts` at line 160, Remove the debug console.log call: delete the statement console.log("result", result) from records.routes.ts (where it appears in the request handler) so no debug prints reach production; if you need persisted logging instead, replace it with the project's logger (e.g., processLogger.error/info or the existing request logger) but otherwise simply remove the console.log line.

packages/server/tests/routes/mysql/tables.routes.mysql.test.ts-910-919 (1)
910-919: ⚠️ Potential issue | 🟡 Minor: Add a response status assertion in the underscore table-name edge case.

Line 913 stores `res` but the test never asserts the HTTP outcome, so route-level regressions can slip through.

✅ Suggested patch

```diff
 it("should handle table names with underscores", async () => {
   vi.mocked(mysqlTableColumnsDao.getTableColumns).mockResolvedValue([]);
   const res = await app.request("/mysql/tables/user_profiles/columns?db=testdb");
+  expect(res.status).toBe(200);
   expect(mysqlTableColumnsDao.getTableColumns).toHaveBeenCalledWith({
     tableName: "user_profiles",
     db: "testdb",
   });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/tests/routes/mysql/tables.routes.mysql.test.ts` around lines 910 - 919, The test "should handle table names with underscores" stores the HTTP response in the variable res but doesn't assert the HTTP status; update the test (in packages/server/tests/routes/mysql/tables.routes.mysql.test.ts) to assert the response status on res (e.g., expect(res.status).toBe(200) or appropriate code) after calling app.request("/mysql/tables/user_profiles/columns?db=testdb") and before verifying mysqlTableColumnsDao.getTableColumns was called to ensure route-level success is validated.
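Several comments in this review propose neutralizing embedded backticks before interpolating MySQL identifiers. A minimal sketch of such a helper follows; the `escapeIdentifier` name is illustrative, not the repository's actual API.

```typescript
// Make a MySQL identifier safe for interpolation: double any embedded
// backticks, then wrap the whole name in backticks.
function escapeIdentifier(name: string): string {
  return "`" + name.replace(/`/g, "``") + "`";
}

// A benign name is simply quoted: `user_profiles`
console.log(escapeIdentifier("user_profiles"));
// An embedded backtick can no longer terminate the quoted identifier,
// so the rest of the string stays inside the identifier.
console.log(escapeIdentifier("t`; DROP TABLE x; --"));
```

This only protects identifier positions (table and column names); values should still go through `?` placeholders.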
🧹 Nitpick comments (10)
packages/core/src/components/add-table/add-record/add-record-field.tsx (1)
189-210: Consider extracting the action button pattern into a reusable component.

The tooltip-wrapped action button appears four times with nearly identical structure (lines 189-210, 244-265, 301-327, 360-381). Extracting this into a helper component would reduce duplication and simplify maintenance.

♻️ Example extraction

```tsx
// Helper component for input with action button
interface ActionButtonProps {
  ariaLabel: string;
  tooltip: string;
  onClick: () => void;
  icon: React.ReactNode;
}

const ActionButton = ({ ariaLabel, tooltip, onClick, icon }: ActionButtonProps) => (
  <TooltipProvider delayDuration={0}>
    <Tooltip>
      <TooltipTrigger asChild>
        <button
          aria-label={ariaLabel}
          className="inline-flex w-9 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50"
          type="button"
          onClick={onClick}
        >
          {icon}
        </button>
      </TooltipTrigger>
      <TooltipContent className="px-2 py-1 text-xs">{tooltip}</TooltipContent>
    </Tooltip>
  </TooltipProvider>
);
```

Usage:

```tsx
<ActionButton
  ariaLabel="Set to now"
  tooltip="Set to now"
  onClick={() => field.onChange(new Date().toISOString())}
  icon={<Clock aria-hidden="true" className="size-4" size={16} />}
/>
```

Also applies to: 244-265, 301-327, 360-381

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/components/add-table/add-record/add-record-field.tsx` around lines 189 - 210, Extract the repeated tooltip-wrapped action button into a reusable component (e.g., ActionButton) to remove duplication seen in the four places using TooltipProvider/Tooltip/TooltipTrigger/TooltipContent and a button with className and onClick calling field.onChange; implement ActionButton props (ariaLabel, tooltip, onClick, icon, optional className) and replace each inline block (the instances around the Clock button and the three similar blocks) with <ActionButton ariaLabel="Set to now" tooltip="Set to now" onClick={() => field.onChange(...)} icon={<Clock .../>} /> so the behavior and accessibility (aria-label, tooltip) and styling are preserved while centralizing the markup and classes.

packages/server/src/app.types.ts (1)
29-33: Consider deriving `RouteEnv` from a shared `Variables` type to avoid drift.

This is fine functionally, but `dbType` is now duplicated in both `RouteEnv` and `AppType`.

♻️ Optional dedup refactor

```diff
+type RouteVariables = {
+  dbType: DatabaseTypeSchema;
+};
 export type RouteEnv = {
-  Variables: {
-    dbType: DatabaseTypeSchema;
-  };
+  Variables: RouteVariables;
 };
 export type AppType = {
-  Variables: {
-    dbType: DatabaseTypeSchema;
-  };
+  Variables: RouteVariables;
   Bindings: {
     databases: DatabasesRoutes;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/app.types.ts` around lines 29 - 33, RouteEnv duplicates the Variables shape (dbType) that also exists on AppType; extract a single shared Variables type (e.g., export type Variables = { dbType: DatabaseTypeSchema }) and have both RouteEnv and AppType reference that shared Variables type instead of redefining dbType, updating the RouteEnv declaration (RouteEnv = { Variables: Variables }) and any AppType definition to use the same exported Variables symbol so the schema cannot drift.

.husky/pre-commit (1)
1-3: Running the full test suite in pre-commit may slow down developer workflow.

Adding `bun run test` to pre-commit means every commit triggers the entire test suite. If tests require database connections or take significant time, this could frustrate developers. Consider:

- Running tests only in CI (current workflow already covers this)
- Using `--changed` or similar flags to run only affected tests
- Making test execution optional via `--no-verify` for quick commits

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.husky/pre-commit around lines 1 - 3, The pre-commit hook currently runs the full test suite via the command "bun run test", which slows commits; remove or replace that line so pre-commit doesn't execute the entire suite: either delete the "bun run test" entry, replace it with a faster check (e.g., a changed-files-aware test runner flag like "bun run test --changed" or a lint/quick-smoke script), or move full test execution to CI and keep only "bun run check" and "bun run build" in the hook; ensure the modified hook still allows bypassing with --no-verify for quick commits.
packages/server/tests/routes/query.routes.test.ts (1)

627-646: Consider simplifying the mock setup by using the top-level mock reference.

The dynamic import pattern works but is slightly convoluted. Since the module is already mocked at the top level (lines 12-15), you could reference it directly from the mock or use `vi.mocked()` consistently.

♻️ Optional: Simplify mock setup

```diff
 it("should accept mysql database type as valid", async () => {
-  const { executeQuery: mysqlExecuteQuery } = await import(
-    "@/dao/mysql/query.mysql.dao.js"
-  );
-  vi.mocked(mysqlExecuteQuery).mockResolvedValue({
+  const mysqlQueryDao = await import("@/dao/mysql/query.mysql.dao.js");
+  vi.mocked(mysqlQueryDao.executeQuery).mockResolvedValue({
     columns: ["1"],
     rows: [{ "1": 1 }],
     rowCount: 1,
     duration: 1.0,
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/tests/routes/query.routes.test.ts` around lines 627 - 646, Replace the dynamic import and local mocking in the test with the top-level mocked reference: remove the await import(...) and instead get the already-mocked executeQuery from the shared mock (use the top-level mocked module reference) and call vi.mocked(executeQuery).mockResolvedValue(...) so the test uses the same mocked executeQuery as other tests; specifically update the local mysqlExecuteQuery usage to reference the module-level executeQuery mock and apply mockResolvedValue for columns/rows/rowCount/duration.

.github/workflows/check.yml (1)
29-44: Consider renaming the job to reflect its broader scope.

The job is named `test-server` but now runs `bun run test`, which executes all tests (including MySQL paths). The naming is slightly misleading but not blocking.

♻️ Optional: Rename job for clarity

```diff
-test-server:
+test:
   runs-on: ubuntu-latest
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/check.yml around lines 29 - 44, Rename the GitHub Actions job currently named "test-server" to a name that reflects it runs the full test suite (e.g., "test" or "run-tests") so it isn't misleading; update the job key where "test-server" appears and ensure any references to that job key (if any) remain consistent—this job executes "bun run test" so choose a broader identifier like "test" or "ci-tests" and replace "test-server" accordingly.

packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts (1)
37-62: Performance: Consider using batch INSERT for better throughput.

Currently, each record executes a separate `INSERT` statement. For bulk operations, a single multi-row `INSERT INTO ... VALUES (...), (...), (...)` would be significantly more efficient.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts` around lines 37 - 62, The current loop issues a separate INSERT per record; change bulk-insert-records.mysql.dao.ts to build a single multi-row INSERT for batches: for the variables records, columns, booleanColumns, tableName, columnNames, connection and ResultSetHeader, convert boolean string values the same way but flatten values into a single array and build placeholders like "(?, ?, ...), (?, ?, ...)" for N rows, then call connection.execute<ResultSetHeader>(insertSQL, flattenedValues) once per batch; also implement chunking (e.g., max rows per query) to avoid oversized SQL, increment successCount by the number of rows inserted on success, and keep the existing HTTPException error handling.

packages/server/src/dao/mysql/delete-table.mysql.dao.ts (1)
58-61: Table name from database result used in query without sanitization.

While `constraint.referencingTable` comes from `information_schema`, it's still interpolated directly into the SQL. This is generally safe since it's system-controlled data, but for defense in depth, consider using backtick escaping consistently or validating the value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/delete-table.mysql.dao.ts` around lines 58 - 61, The code in the fkConstraints loop interpolates constraint.referencingTable directly into SQL; update the loop in delete-table.mysql.dao.ts (where fkConstraints is iterated) to avoid raw interpolation by either validating the table name against a strict whitelist/regex (e.g. allow only alphanumerics, underscore, dollar, and dot if needed) and then wrapping it in backticks, or use the DB library's identifier-escaping helper (e.g. connection.escapeId / pool.format with ??) to produce a safely escaped identifier; apply the same treatment to any other places that build queries from information_schema values.

packages/server/src/dao/mysql/export-table.mysql.dao.ts (1)
16-20: Consider distinguishing "table not found" from "table is empty".

A 404 error for an empty table may be misleading; the table exists but has no rows. Consider returning an empty result set instead, reserving 404 for when the table genuinely doesn't exist. This aligns with how the PostgreSQL equivalent likely behaves and provides clearer API semantics.

Proposed approach

```diff
-if (!rows || rows.length === 0) {
-  throw new HTTPException(404, {
-    message: `Table "${tableName}" does not exist or has no data`,
-  });
-}
-
-const cols = Object.keys(rows[0]);
+const cols = rows.length > 0 ? Object.keys(rows[0]) : [];
 return { cols, rows: rows as Record<string, CellValue>[] };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/export-table.mysql.dao.ts` around lines 16 - 20, The current block in export-table.mysql.dao.ts throws HTTPException(404) for an empty result set; change this so that when the query returns rows === [] you return an empty result (e.g., empty array or result payload) instead of a 404, and only throw HTTPException(404) for a genuine "table not found" error by detecting MySQL's no-such-table condition (catch the SQL error code like ER_NO_SUCH_TABLE or query information_schema.tables for tableName); update the logic around the rows variable and tableName handling so HTTPException is reserved for the actual-not-found case while empty tables return an empty response.

packages/server/src/dao/mysql/create-table.mysql.dao.ts (1)
112-115: Timezone information is lost when mapping `timestamp with time zone` to `DATETIME`.

PostgreSQL's `timestamptz` stores timezone-aware timestamps, but MySQL's `DATETIME` is timezone-naive. Consider using `TIMESTAMP` for MySQL (which is timezone-aware) or documenting this behavioral difference for users migrating schemas.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/dao/mysql/create-table.mysql.dao.ts` around lines 112 - 115, The current type mapping maps "timestamp with time zone" and timestamptz to "DATETIME" which loses timezone info; update the mapping in the table type map (the object that currently contains the keys timestamp, "timestamp without time zone", "timestamp with time zone", timestamptz) so that "timestamp with time zone" and timestamptz map to MySQL "TIMESTAMP" (leave plain timestamp or "timestamp without time zone" as "DATETIME" if you want timezone-naive behavior) and add a short inline comment documenting the behavioral difference for migrations.

packages/server/src/db-manager.ts (1)
128-128: Avoid allocating an unmanaged fallback PostgreSQL pool.

Line 128 can create a new `Pool` that is not cached and does not register the shared error handler.

♻️ Suggested patch

```diff
-return this.pgPools.get(connectionString) ?? new Pool({ connectionString });
+const pool = this.pgPools.get(connectionString);
+if (!pool) {
+  throw new Error(`Failed to initialize PostgreSQL pool for: ${connectionString}`);
+}
+return pool;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/db-manager.ts` at line 128, The current return expression in the method (which uses this.pgPools.get(connectionString) ?? new Pool({ connectionString })) can allocate an unmanaged Pool; change it so that if a Pool for connectionString doesn't exist you create a new Pool, register the shared error handler (the same handler used for pooled instances), store it in this.pgPools under connectionString, and then return that stored Pool; reference pgPools, connectionString, and Pool so you attach the error listener and persist the instance instead of returning a transient Pool.
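The get-or-create pattern suggested for `db-manager.ts` can be sketched generically. This is an illustrative sketch only; the `PoolRegistry` name and shape are assumptions, not the project's actual code, and a real implementation would attach the shared `pool.on("error", ...)` handler inside the factory.

```typescript
// Generic cache: every key maps to exactly one pool instance, and the
// factory (which should attach error handlers) runs at most once per key.
class PoolRegistry<P> {
  private pools = new Map<string, P>();

  constructor(private create: (key: string) => P) {}

  get(key: string): P {
    let pool = this.pools.get(key);
    if (!pool) {
      pool = this.create(key); // attach shared error handler here
      this.pools.set(key, pool);
    }
    return pool;
  }

  size(): number {
    return this.pools.size;
  }
}
```

With this shape, the fallback branch that allocated a transient `Pool` disappears: callers always go through `get()`, so nothing escapes the cache or misses the error handler.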
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`:
- Around line 282-340: The DatePicker onChange currently uses date.toISOString()
(in the DatePicker instances in add-record-field.tsx) which stores UTC while the
"Set to now" button formats local time; make both sides use the same local
timestamp format for timestamp columns: update the DatePicker onChange handlers
(the ones that call field.onChange(date ? date.toISOString() : "")) to format
the Date to a local string like `${yyyy}-${mm}-${dd}T${hh}:${min}` (matching the
"Set to now" button) so field.onChange receives consistent local
"YYYY-MM-DDTHH:MM" values; ensure you reuse the same padding logic (pad
function) and avoid including timezone/seconds.
In `@packages/server/src/dao/mysql/add-record.mysql.dao.ts`:
- Around line 34-37: Validate the tableName used when building the INSERT query
in add-record.mysql.dao.ts to prevent SQL injection: before interpolating
tableName into the query string (where query is created using `INSERT INTO
\`${tableName}\`` alongside `columnNames` and `placeholders`), ensure tableName
matches a strict allowlist regex (e.g. only letters, numbers and underscores)
and throw or return an error if it fails validation; keep the rest of the query
construction unchanged and do not attempt to parameterize the table name.
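The allowlist validation described above can be sketched as follows. The regex and length limit are assumptions for illustration, not the project's actual policy; the real DAO would call the validator before building the `INSERT` string.

```typescript
// Reject anything that is not a plain identifier before it reaches SQL.
// Limit of 64 chars mirrors MySQL's identifier length cap.
function validateIdentifier(name: string): string {
  if (!/^[A-Za-z0-9_]{1,64}$/.test(name)) {
    throw new Error(`Unsafe SQL identifier: ${name}`);
  }
  return name;
}

// Safe to interpolate after validation; values still use ? placeholders.
const tableName = validateIdentifier("users");
const query = `INSERT INTO \`${tableName}\` (\`name\`) VALUES (?)`;
console.log(query);
```

Note the division of labor: identifiers are validated (they cannot be parameterized), while values remain bound through placeholders.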
In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts`:
- Around line 32-72: The errors array is never populated because the catch in
the bulk-insert loop immediately throws an HTTPException; decide whether bulk
insert should be all-or-nothing or allow partial failures. If all-or-nothing:
remove the errors array and related failureCount/errors return handling and keep
throwing the HTTPException from the catch (affecting the try/catch inside the
loop and the final return that references errors/successCount/failureCount). If
partial failures: change the catch inside the for-loop to push a descriptive
entry into errors (include recordIndex i and error.message from the caught
error), do not rethrow, continue the loop, and ensure you only call
connection.commit() if desired (or call rollback on fatal error), then compute
failureCount from errors.length and return errors in the response instead of
throwing HTTPException.
- Around line 48-51: The INSERT string directly interpolates tableName and
columnNames (built from records[0]) and is vulnerable to SQL injection; add an
escapeIdentifier utility (e.g., escapeIdentifier(name): replace any backtick
with double backtick and wrap with backticks) and use it to sanitize tableName
and each column name before constructing columnNames and insertSQL (leave VALUES
placeholders and parameter binding as-is). Update where columnNames is built and
the insertSQL template so it uses escaped identifiers instead of raw
tableName/column names (symbols to change: insertSQL, tableName, columnNames,
placeholders, and the code that reads records[0]).
In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts`:
- Around line 53-60: The code casts rows[0] to DatabaseSchemaType without
checking that the DATABASE() column isn't null; update the return path in the
function that queries the list of databases (the block using rows and returning
DatabaseSchemaType) to validate that rows[0].db is non-null/defined before
casting—if rows[0].db is null or undefined, throw an HTTPException(500, {
message: "DATABASE() returned null or no database name" }) (or similar) instead
of returning an object that violates DatabaseSchemaType; ensure you reference
the existing rows variable and DatabaseSchemaType guard before the final return.
In `@packages/server/src/dao/mysql/delete-column.mysql.dao.ts`:
- Around line 45-47: The ALTER TABLE call interpolates tableName and columnName
directly into SQL (pool.execute in delete-column.mysql.dao.ts); fix by escaping
any backticks in identifiers before interpolation (e.g., add a small helper
function like escapeMySQLIdentifier that replaces ` with `` and wraps the name
in backticks) and use it when building the query for tableName and columnName in
the DROP COLUMN statement; apply the same helper to other DAOs mentioned
(table-schema.mysql.dao.ts, delete-table.mysql.dao.ts,
create-table.mysql.dao.ts) wherever identifiers are interpolated into SQL.
In `@packages/server/src/dao/mysql/delete-records.mysql.dao.ts`:
- Around line 76-81: The queries build SQL by directly interpolating identifiers
(e.g., constraint.referencingTable, constraint.referencingColumn) into
pool.execute calls (see the SELECT ... IN (${placeholders}) and the other
similar statements), which risks SQL injection; implement and call a shared
identifier-validation helper (or whitelist/strict regex) to validate and
sanitize table and column names before interpolation, replace direct usage of
constraint.referencingTable/constraint.referencingColumn with the validated
identifier values, and keep using parameterized placeholders for
pkValues/placeholders; apply this change consistently for all occurrences in
delete-records.mysql.dao.ts where identifiers are interpolated (the SELECTs and
related DELETEs).
- Around line 164-198: The recursive function deleteRelatedRecursively can loop
on circular FK chains; modify it to accept and propagate a visited set (e.g.,
Set<string> visited) or similar key (table or table+column) and check
visited.has(key) before exploring nestedFks, adding the key to visited when
entering the function; update recursive call sites to pass the same visited set
and ensure deletedTables is still used only for recording deletions, not cycle
detection (use visited for preventing revisits of the same table/column pair).
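The visited-set traversal described above can be sketched as follows. The `FkNode` shape and function name are illustrative stand-ins for the real FK metadata, but the cycle-breaking mechanics are the point.

```typescript
// A node is a table plus the tables that reference it via FKs.
type FkNode = { table: string; children: FkNode[] };

// Post-order walk: children (referencing tables) are emitted before their
// parent, and the visited set guarantees termination on circular FK chains.
function collectDeleteOrder(
  node: FkNode,
  visited: Set<string> = new Set(),
  out: string[] = [],
): string[] {
  if (visited.has(node.table)) return out; // already explored: break the cycle
  visited.add(node.table);
  for (const child of node.children) {
    collectDeleteOrder(child, visited, out);
  }
  out.push(node.table);
  return out;
}
```

The same `visited` set is threaded through every recursive call, which is the detail the review asks for: a fresh set per call would not stop revisits.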
In `@packages/server/src/dao/mysql/table-list.mysql.dao.ts`:
- Around line 21-25: The current code throws an HTTPException(500) when the
query result 'tables' is empty, but an empty DB is valid; instead modify the
logic in table-list.mysql.dao (the block checking if (!tables[0])) to return an
empty array (e.g., return []) rather than throwing HTTPException; remove the
HTTPException import/use for this case so consumers receive [] from the function
that queries tables.
In `@packages/server/src/dao/mysql/table-schema.mysql.dao.ts`:
- Around line 29-31: The query interpolates tableName directly into the SHOW
CREATE TABLE statement (see pool.execute and the `SHOW CREATE TABLE
\`${tableName}\`` usage), creating an SQL injection risk; sanitize the
identifier first by escaping any backticks in tableName (replace each ` with ``
per MySQL identifier escaping) or strip embedded backticks, then use the
sanitized variable in the interpolated query; you can add a small helper (e.g.,
sanitizeTableName) or inline the replacement right before calling pool.execute
to ensure only a safely escaped identifier is used.
In `@packages/server/src/dao/mysql/tables-data.mysql.dao.ts`:
- Around line 136-148: The COUNT and SELECT queries interpolate tableName
directly (used in pool.execute calls for the COUNT query and the data query with
filterWhereClause and combinedWhereClause/effectiveSortClause), creating an SQL
injection risk; fix by validating or escaping the identifier before use — either
validate tableName against a safe identifier whitelist/regex (as other MySQL
DAOs do) or use the driver’s identifier placeholder/escape (e.g., mysql2’s ?? /
escapeId) to inject the table name safely, and replace direct interpolation in
both the COUNT (`SELECT COUNT(*) as total FROM \`${tableName}\``) and data query
(`SELECT * FROM \`${tableName}\``) with the validated/escaped identifier.
In `@packages/server/src/dao/mysql/update-records.mysql.dao.ts`:
- Around line 55-73: Add strict identifier validation before interpolating names
into SQL: implement and call a validator (e.g., validateIdentifier) for
tableName, primaryKey and every u.columnName used when building
setClauses/rowUpdates; the validator should reject backticks, quotes and
SQL-special characters, enforce a safe regex like /^[A-Za-z0-9_-]+$/ and a
reasonable max length, and throw/fail early when invalid. Update the code paths
that reference tableName, primaryKey, and u.columnName (the block that builds
setClauses, values, and the query) to call this validator and abort with a clear
error instead of constructing the query when validation fails. Ensure
booleanColumns usage remains the same after validation.
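A minimal sketch of the validator the prompt asks for, assuming the suggested `/^[A-Za-z0-9_-]+$/` pattern and MySQL's 64-character identifier limit; the function name is hypothetical:

```typescript
// Strict identifier validation: rejects backticks, quotes, and any other
// SQL-special characters by allowing only a conservative character set,
// and enforces a maximum length. Throws early on invalid input.
const IDENTIFIER_RE = /^[A-Za-z0-9_-]+$/;
const MAX_IDENTIFIER_LENGTH = 64; // MySQL's identifier length limit

function validateIdentifier(name: string, kind = "identifier"): void {
  if (
    name.length === 0 ||
    name.length > MAX_IDENTIFIER_LENGTH ||
    !IDENTIFIER_RE.test(name)
  ) {
    throw new Error(`Invalid ${kind}: ${JSON.stringify(name)}`);
  }
}
```

Calling this for `tableName`, `primaryKey`, and each `u.columnName` before building `setClauses` aborts query construction with a clear error instead of emitting unsafe SQL.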
In `@packages/server/src/db-manager.ts`:
- Around line 117-126: The code currently logs and returns raw connectionString
(seen in the pool.on error handler, console.log after
this.pgPools.set(connectionString, pool), and any methods that return pool
identifiers), which can leak credentials; update all uses of connectionString
(including in pool.on error logging, the "Created PostgreSQL connection pool"
console.log, and any return values) to first redact credentials by parsing the
connectionString (remove or replace user:password and any query params
containing secrets) or replace it with a safe pool identifier (e.g., sanitized
DSN or generated ID), and ensure this sanitized value is what gets logged,
stored in this.pgPools keys/metadata exposed to callers, and returned by any
getter methods.
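One way to implement the redaction, relying only on the WHATWG `URL` parser; the function name and fallback string are assumptions for illustration:

```typescript
// Credential redaction for log output: parse the DSN, blank out the
// userinfo, and fall back to a fixed placeholder when the string cannot
// be parsed, so raw secrets never reach the logs.
function redactConnectionString(connectionString: string): string {
  try {
    const url = new URL(connectionString);
    if (url.password) url.password = "redacted";
    if (url.username) url.username = "redacted";
    return url.toString();
  } catch {
    return "<unparseable connection string>";
  }
}
```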
- Around line 217-231: The closeAll() implementation in db-manager.ts currently
uses Promise.all over pgClosePromises and mysqlClosePromises so a single
pool.end() rejection can abort overall shutdown and prevent clearing
this.pgPools and this.mysqlPools; change it to use Promise.allSettled (or
individually catch errors for each async map callback) when awaiting the close
promises, log any failures with the associated connectionString (from the
pgPools/mysqlPools map entries) and ensure that regardless of individual errors
the method proceeds to clear this.pgPools and this.mysqlPools and returns/throws
a summarized error if you need to surface failures.
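The failure-tolerant shutdown can be sketched as follows, assuming pools expose an `end()` method; `Promise.allSettled` never rejects, so every pool gets a close attempt and the map can always be cleared afterwards:

```typescript
// Close every pool even if some end() calls reject; return the map keys
// whose close failed so the caller can log or summarize them.
async function closeAllPools(
  pools: Map<string, { end: () => Promise<void> }>,
): Promise<string[]> {
  const entries = [...pools.entries()];
  const results = await Promise.allSettled(entries.map(([, pool]) => pool.end()));
  const failed = results.flatMap((result, i) =>
    result.status === "rejected" ? [entries[i][0]] : [],
  );
  pools.clear(); // always reached, regardless of individual failures
  return failed;
}
```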
In `@packages/server/src/utils/build-clauses-mysql.ts`:
- Around line 20-21: The code is vulnerable because raw identifier interpolation
like `\`${filter.columnName}\`` allows backtick injection; add a single helper
function (e.g., escapeIdentifier) that takes an identifier string and returns it
with any backticks escaped by doubling (replace ` with ``), then wrap the
escaped result in backticks when building SQL. Replace all direct interpolations
(the local columnName assignment and other places flagged—uses around the
column/table/alias constructions) to use backticks around escapeIdentifier(...)
instead of raw values so identifiers are safely quoted everywhere in
build-clauses-mysql.ts.
- Around line 59-60: The switch handling filter operators in
build-clauses-mysql.ts currently has a default: break; which silently ignores
unknown operators; change that to fail closed by throwing a clear error (e.g.,
throw new Error(`Unsupported operator "${op}" for field "${field}"`)) from the
default branch in the function that builds clauses (the switch that inspects the
operator variable inside buildClauses / buildClausesForMySQL). Ensure the thrown
error includes the operator and field names to aid debugging and adjust any
callers/tests to expect an exception for unsupported operators.
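A fail-closed default branch looks like this; the operator names shown are illustrative, not the actual set handled in `build-clauses-mysql.ts`:

```typescript
// Map a filter operator to its SQL fragment; unknown operators raise a
// descriptive error instead of being silently dropped.
function operatorToSql(op: string, field: string): string {
  switch (op) {
    case "eq":
      return "= ?";
    case "neq":
      return "!= ?";
    case "like":
      return "LIKE ?";
    default:
      throw new Error(`Unsupported operator "${op}" for field "${field}"`);
  }
}
```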
In `@packages/shared/src/constants/column-types-pgsql.ts`:
- Around line 42-43: The exported PostgreSQL column types list in
column-types-pgsql.ts currently includes the invalid standalone entries "array"
and "enum"; remove these two entries from that list (or filter them out before
presenting options) so field.columnType can never be set to "array" or "enum"
because create-table.dao.ts substitutes field.columnType verbatim into SQL (the
`"${field.columnName}" ${field.columnType}` usage), which produces invalid DDL;
alternatively, implement UI/server-side validation in the selector code to
prevent selecting "array" without an element type and to block unnamed enums.
---
Minor comments:
In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`:
- Around line 311-314: The mm, dd, hh, and min assignments redundantly call
.padStart(2, "0") after using the helper pad() which already applies padding;
update the lines that set mm, dd, hh, and min to call pad(...) alone (e.g.,
replace pad(...).padStart(2, "0") with pad(...)) and leave the pad() helper
implementation as-is to avoid double-padding (refer to the variables mm, dd, hh,
min and the pad() helper).
In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts`:
- Around line 91-95: The fallback for MySQL connection port uses
parseDatabaseUrl() (urlDefaults.port) which can return PostgreSQL's 5432; update
the port fallback logic in database-list.mysql.dao.ts (where host/port are
constructed) to prefer info.port, then urlDefaults.port only if it's a
MySQL-appropriate value, and finally use 3306 as the ultimate default; i.e.,
change the port expression around Number(info.port || urlDefaults.port) to
coercively fall back to 3306 when urlDefaults.port is missing or equals 5432 so
MySQL connections never default to Postgres port.
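The fallback order described above can be expressed as a small helper; the function name and parameter shapes are assumptions for illustration:

```typescript
// Prefer the explicit info.port, accept urlDefaults.port only when it is
// not PostgreSQL's default, and otherwise fall back to MySQL's 3306.
const POSTGRES_DEFAULT_PORT = 5432;
const MYSQL_DEFAULT_PORT = 3306;

function resolveMysqlPort(
  infoPort?: number | string,
  urlDefaultPort?: number | string,
): number {
  const explicit = Number(infoPort);
  if (explicit) return explicit;
  const fromUrl = Number(urlDefaultPort);
  if (fromUrl && fromUrl !== POSTGRES_DEFAULT_PORT) return fromUrl;
  return MYSQL_DEFAULT_PORT;
}
```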
In `@packages/server/src/dao/mysql/table-list.mysql.dao.ts`:
- Around line 29-31: The COUNT query interpolates table.tableName directly (in
the pool.execute call that assigns countRows), which risks injection; sanitize
the identifier by escaping any backticks in table.tableName (e.g., replace `
with ``), then wrap the result in backticks and use that safe identifier in the
template literal for the SELECT COUNT(*) query so pool.execute uses the escaped
table name.
In `@packages/server/src/dao/mysql/tables-data.mysql.dao.ts`:
- Line 114: The ternary in the call to buildSortClauseMysql is redundant
(Array.isArray(sort) ? sort : sort) — update the invocation in the
tables-data.mysql.dao.ts code (where sortClause is computed) to pass sort
directly to buildSortClauseMysql (i.e., replace the ternary with just sort) so
the call becomes buildSortClauseMysql(sort, order); ensure you update the
reference that computes sortClause accordingly.
In `@packages/server/src/routes/records.routes.ts`:
- Line 160: Remove the debug console.log call: delete the statement
console.log("result", result) from records.routes.ts (where it appears in the
request handler) so no debug prints reach production; if you need persisted
logging instead, replace it with the project's logger (e.g.,
processLogger.error/info or the existing request logger) but otherwise simply
remove the console.log line.
In `@packages/server/tests/routes/mysql/tables.routes.mysql.test.ts`:
- Around line 910-919: The test "should handle table names with underscores"
stores the HTTP response in the variable res but doesn't assert the HTTP status;
update the test (in
packages/server/tests/routes/mysql/tables.routes.mysql.test.ts) to assert the
response status on res (e.g., expect(res.status).toBe(200) or appropriate code)
after calling app.request("/mysql/tables/user_profiles/columns?db=testdb") and
before verifying mysqlTableColumnsDao.getTableColumns was called to ensure
route-level success is validated.
In `@scripts/init-db-mysql.sh`:
- Line 29: The script currently sets MYSQL_ALLOW_EMPTY_PASSWORD=yes which
creates a passwordless root user and grants remote access; update the container
setup to avoid this by either replacing MYSQL_ALLOW_EMPTY_PASSWORD with a secure
default env var (e.g., MYSQL_ROOT_PASSWORD=<dev-password>) or add a prominent
warning comment above the MYSQL_ALLOW_EMPTY_PASSWORD line indicating this is
insecure for anything but isolated local development and must not be used in
production, and/or ensure DB_HOST is bound to localhost only; reference the
MYSQL_ALLOW_EMPTY_PASSWORD and DB_HOST env settings in your change so reviewers
can find and verify the fix.
---
Nitpick comments:
In @.github/workflows/check.yml:
- Around line 29-44: Rename the GitHub Actions job currently named "test-server"
to a name that reflects it runs the full test suite (e.g., "test" or
"run-tests") so it isn't misleading; update the job key where "test-server"
appears and ensure any references to that job key (if any) remain
consistent—this job executes "bun run test" so choose a broader identifier like
"test" or "ci-tests" and replace "test-server" accordingly.
In @.husky/pre-commit:
- Around line 1-3: The pre-commit hook currently runs the full test suite via
the command "bun run test", which slows commits; remove or replace that line so
pre-commit doesn't execute the entire suite: either delete the "bun run test"
entry, replace it with a faster check (e.g., a changed-files-aware test runner
flag like "bun run test --changed" or a lint/quick-smoke script), or move full
test execution to CI and keep only "bun run check" and "bun run build" in the
hook; ensure the modified hook still allows bypassing with --no-verify for quick
commits.
In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`:
- Around line 189-210: Extract the repeated tooltip-wrapped action button into a
reusable component (e.g., ActionButton) to remove duplication seen in the four
places using TooltipProvider/Tooltip/TooltipTrigger/TooltipContent and a button
with className and onClick calling field.onChange; implement ActionButton props
(ariaLabel, tooltip, onClick, icon, optional className) and replace each inline
block (the instances around the Clock button and the three similar blocks) with
<ActionButton ariaLabel="Set to now" tooltip="Set to now" onClick={() =>
field.onChange(...)} icon={<Clock .../>} /> so the behavior and accessibility
(aria-label, tooltip) and styling are preserved while centralizing the markup
and classes.
In `@packages/server/src/app.types.ts`:
- Around line 29-33: RouteEnv duplicates the Variables shape (dbType) that also
exists on AppType; extract a single shared Variables type (e.g., export type
Variables = { dbType: DatabaseTypeSchema }) and have both RouteEnv and AppType
reference that shared Variables type instead of redefining dbType, updating the
RouteEnv declaration (RouteEnv = { Variables: Variables }) and any AppType
definition to use the same exported Variables symbol so the schema cannot drift.
In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts`:
- Around line 37-62: The current loop issues a separate INSERT per record;
change bulk-insert-records.mysql.dao.ts to build a single multi-row INSERT for
batches: for the variables records, columns, booleanColumns, tableName,
columnNames, connection and ResultSetHeader, convert boolean string values the
same way but flatten values into a single array and build placeholders like "(?,
?, ...), (?, ?, ...)" for N rows, then call
connection.execute<ResultSetHeader>(insertSQL, flattenedValues) once per batch;
also implement chunking (e.g., max rows per query) to avoid oversized SQL,
increment successCount by the number of rows inserted on success, and keep the
existing HTTPException error handling.
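The placeholder/value flattening at the heart of that change can be sketched as follows (chunking and the actual `connection.execute` call omitted; the helper name is hypothetical):

```typescript
// Build a single multi-row INSERT payload: one "(?, ?, ...)" group per
// row, and all row values flattened into one array in column order.
function buildMultiRowInsert(
  rows: Array<Record<string, unknown>>,
  columns: string[],
): { placeholders: string; values: unknown[] } {
  const group = `(${columns.map(() => "?").join(", ")})`;
  const placeholders = rows.map(() => group).join(", ");
  const values = rows.flatMap((row) => columns.map((col) => row[col]));
  return { placeholders, values };
}

// Sketch of use per batch:
// const sql = `INSERT INTO ... (${columnNames}) VALUES ${placeholders}`;
// await connection.execute(sql, values);
```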
In `@packages/server/src/dao/mysql/create-table.mysql.dao.ts`:
- Around line 112-115: The current type mapping maps "timestamp with time zone"
and timestamptz to "DATETIME" which loses timezone info; update the mapping in
the table type map (the object that currently contains the keys timestamp,
"timestamp without time zone", "timestamp with time zone", timestamptz) so that
"timestamp with time zone" and timestamptz map to MySQL "TIMESTAMP" (leave plain
timestamp or "timestamp without time zone" as "DATETIME" if you want
timezone-naive behavior) and add a short inline comment documenting the
behavioral difference for migrations.
In `@packages/server/src/dao/mysql/delete-table.mysql.dao.ts`:
- Around line 58-61: The code in the fkConstraints loop interpolates
constraint.referencingTable directly into SQL; update the loop in
delete-table.mysql.dao.ts (where fkConstraints is iterated) to avoid raw
interpolation by either validating the table name against a strict
whitelist/regex (e.g. allow only alphanumerics, underscore, dollar, and dot if
needed) and then wrapping it in backticks, or use the DB library's
identifier-escaping helper (e.g. connection.escapeId / pool.format with ??) to
produce a safely escaped identifier; apply the same treatment to any other
places that build queries from information_schema values.
In `@packages/server/src/dao/mysql/export-table.mysql.dao.ts`:
- Around line 16-20: The current block in export-table.mysql.dao.ts throws
HTTPException(404) for an empty result set; change this so that when the query
returns rows === [] you return an empty result (e.g., empty array or result
payload) instead of a 404, and only throw HTTPException(404) for a genuine
“table not found” error by detecting MySQL’s no-such-table condition (catch the
SQL error code like ER_NO_SUCH_TABLE or query information_schema.tables for
tableName); update the logic around the rows variable and tableName handling so
HTTPException is reserved for the actual-not-found case while empty tables
return an empty response.
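Distinguishing the two cases can be done by inspecting the driver error; this assumes mysql2-style errors that carry a string `code` property:

```typescript
// Classify a caught error as MySQL's "no such table" condition, so a
// 404 is only raised for a genuinely missing table while empty tables
// return an empty result.
function isNoSuchTableError(error: unknown): boolean {
  return (
    typeof error === "object" &&
    error !== null &&
    (error as { code?: string }).code === "ER_NO_SUCH_TABLE"
  );
}
```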
In `@packages/server/src/db-manager.ts`:
- Line 128: The current return expression in the method (which uses
this.pgPools.get(connectionString) ?? new Pool({ connectionString })) can
allocate an unmanaged Pool; change it so that if a Pool for connectionString
doesn't exist you create a new Pool, register the shared error handler (the same
handler used for pooled instances), store it in this.pgPools under
connectionString, and then return that stored Pool; reference pgPools,
connectionString, and Pool so you attach the error listener and persist the
instance instead of returning a transient Pool.
In `@packages/server/tests/routes/query.routes.test.ts`:
- Around line 627-646: Replace the dynamic import and local mocking in the test
with the top-level mocked reference: remove the await import(...) and instead
get the already-mocked executeQuery from the shared mock (use the top-level
mocked module reference) and call vi.mocked(executeQuery).mockResolvedValue(...)
so the test uses the same mocked executeQuery as other tests; specifically
update the local mysqlExecuteQuery usage to reference the module-level
executeQuery mock and apply mockResolvedValue for
columns/rows/rowCount/duration.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (46)
- .github/workflows/check.yml
- .husky/pre-commit
- package.json
- packages/core/src/components/add-table/add-record/add-record-field.tsx
- packages/server/package.json
- packages/server/src/app.types.ts
- packages/server/src/cmd/get-db-url.ts
- packages/server/src/dao/mysql/add-record.mysql.dao.ts
- packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts
- packages/server/src/dao/mysql/create-table.mysql.dao.ts
- packages/server/src/dao/mysql/database-list.mysql.dao.ts
- packages/server/src/dao/mysql/delete-column.mysql.dao.ts
- packages/server/src/dao/mysql/delete-records.mysql.dao.ts
- packages/server/src/dao/mysql/delete-table.mysql.dao.ts
- packages/server/src/dao/mysql/export-table.mysql.dao.ts
- packages/server/src/dao/mysql/query.mysql.dao.ts
- packages/server/src/dao/mysql/table-columns.mysql.dao.ts
- packages/server/src/dao/mysql/table-list.mysql.dao.ts
- packages/server/src/dao/mysql/table-schema.mysql.dao.ts
- packages/server/src/dao/mysql/tables-data.mysql.dao.ts
- packages/server/src/dao/mysql/update-records.mysql.dao.ts
- packages/server/src/db-manager.ts
- packages/server/src/index.ts
- packages/server/src/middlewares/error-handler.ts
- packages/server/src/routes/databases.routes.ts
- packages/server/src/routes/query.routes.ts
- packages/server/src/routes/records.routes.ts
- packages/server/src/routes/tables.routes.ts
- packages/server/src/utils/build-clauses-mysql.ts
- packages/server/tests/routes/databases.routes.test.ts
- packages/server/tests/routes/mysql/databases.routes.mysql.test.ts
- packages/server/tests/routes/mysql/query.routes.mysql.test.ts
- packages/server/tests/routes/mysql/records.routes.mysql.test.ts
- packages/server/tests/routes/mysql/tables.routes.mysql.test.ts
- packages/server/tests/routes/query.routes.test.ts
- packages/server/tests/routes/records.routes.test.ts
- packages/server/tests/routes/tables.routes.test.ts
- packages/server/tests/utils/create-server.test.ts
- packages/shared/src/constants/column-types-mysql.ts
- packages/shared/src/constants/column-types-pgsql.ts
- packages/shared/src/constants/index.ts
- packages/shared/src/types/column-info.types.ts
- packages/shared/src/types/column.type.ts
- packages/shared/src/types/database.types.ts
- scripts/init-db-mysql.sh
- scripts/init-db-pgsql.sh
💤 Files with no reviewable changes (1)
- packages/server/src/index.ts
```tsx
if (dataTypeLabel === "timestamp") {
  const hasTimestampDefault =
    columnDefault?.toLowerCase().includes("current_timestamp") ||
    columnDefault?.toLowerCase().includes("now()") ||
    columnDefault?.toLowerCase().includes("localtimestamp");

  if (hasTimestampDefault) {
    return (
      <div className="flex">
        <DatePicker
          value={
            field.value ? new Date(field.value) : field.value === "" ? null : undefined
          }
          onChange={(date) => field.onChange(date ? date.toISOString() : "")}
          showTime={true}
          isFormatted={false}
          placeholder={columnDefault ?? "Select a date and time"}
          className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
        />
        <TooltipProvider delayDuration={0}>
          <Tooltip>
            <TooltipTrigger asChild>
              <button
                aria-label="Set to now"
                className="inline-flex w-7 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50"
                type="button"
                onClick={() => {
                  const now = new Date();
                  const yyyy = now.getFullYear();
                  const mm = pad(now.getMonth() + 1).padStart(2, "0");
                  const dd = pad(now.getDate()).padStart(2, "0");
                  const hh = pad(now.getHours()).padStart(2, "0");
                  const min = pad(now.getMinutes()).padStart(2, "0");
                  field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
                }}
              >
                <Clock
                  aria-hidden="true"
                  className="size-4"
                  size={16}
                />
              </button>
            </TooltipTrigger>
            <TooltipContent className="px-2 py-1 text-xs">Set to now</TooltipContent>
          </Tooltip>
        </TooltipProvider>
      </div>
    );
  }

  return (
    <div className="flex">
      <DatePicker
        value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
        onChange={(date) => field.onChange(date ? date.toISOString() : "")}
        showTime={true}
        isFormatted={false}
        placeholder={columnDefault ?? "Select a date and time"}
        className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
      />
      <TooltipProvider delayDuration={0}>
        <Tooltip>
          <TooltipTrigger asChild>
            <button
              aria-label="Set to now"
              className="inline-flex w-7 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50"
              type="button"
              onClick={() => {
                const now = new Date();
                const yyyy = now.getFullYear();
                const mm = pad(now.getMonth() + 1).padStart(2, "0");
                const dd = pad(now.getDate()).padStart(2, "0");
                const hh = pad(now.getHours()).padStart(2, "0");
                const min = pad(now.getMinutes()).padStart(2, "0");
                field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
              }}
            >
              <Clock
                aria-hidden="true"
                className="size-4"
                size={16}
              />
            </button>
          </TooltipTrigger>
          <TooltipContent className="px-2 py-1 text-xs">Set to now</TooltipContent>
        </Tooltip>
      </TooltipProvider>
    </div>
  );
```
Inconsistent time formatting between DatePicker and "Set to now" button.
The DatePicker's onChange (lines 295, 335) uses date.toISOString() which returns UTC time (e.g., "2024-01-15T10:30:00.000Z"), but the "Set to now" button (lines 309-315) formats local time without a timezone indicator (e.g., "2024-01-15T10:30").
For a timestamp (without timezone) column, this inconsistency means:
- Selecting via DatePicker → UTC time stored
- Clicking "Set to now" → Local time stored
This can cause time values to differ by the user's timezone offset.
🐛 Proposed fix to use consistent local time formatting

```diff
 if (dataTypeLabel === "timestamp") {
+  const formatLocalDateTime = (date: Date): string => {
+    const yyyy = date.getFullYear();
+    const mm = pad(date.getMonth() + 1);
+    const dd = pad(date.getDate());
+    const hh = pad(date.getHours());
+    const min = pad(date.getMinutes());
+    const ss = pad(date.getSeconds());
+    return `${yyyy}-${mm}-${dd}T${hh}:${min}:${ss}`;
+  };
+
   const hasTimestampDefault =
     columnDefault?.toLowerCase().includes("current_timestamp") ||
     columnDefault?.toLowerCase().includes("now()") ||
     columnDefault?.toLowerCase().includes("localtimestamp");

   if (hasTimestampDefault) {
     return (
       <div className="flex">
         <DatePicker
           value={
             field.value ? new Date(field.value) : field.value === "" ? null : undefined
           }
-          onChange={(date) => field.onChange(date ? date.toISOString() : "")}
+          onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
           showTime={true}
           isFormatted={false}
           placeholder={columnDefault ?? "Select a date and time"}
           className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
         />
         ...
           onClick={() => {
-            const now = new Date();
-            const yyyy = now.getFullYear();
-            const mm = pad(now.getMonth() + 1).padStart(2, "0");
-            const dd = pad(now.getDate()).padStart(2, "0");
-            const hh = pad(now.getHours()).padStart(2, "0");
-            const min = pad(now.getMinutes()).padStart(2, "0");
-            field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
+            field.onChange(formatLocalDateTime(new Date()));
           }}
         ...
       </div>
     );
   }

   return (
     <DatePicker
       value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
-      onChange={(date) => field.onChange(date ? date.toISOString() : "")}
+      onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
       showTime={true}
       isFormatted={false}
       placeholder={columnDefault ?? "Select a date and time"}
     />
   );
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
if (dataTypeLabel === "timestamp") {
  const formatLocalDateTime = (date: Date): string => {
    const yyyy = date.getFullYear();
    const mm = pad(date.getMonth() + 1);
    const dd = pad(date.getDate());
    const hh = pad(date.getHours());
    const min = pad(date.getMinutes());
    const ss = pad(date.getSeconds());
    return `${yyyy}-${mm}-${dd}T${hh}:${min}:${ss}`;
  };

  const hasTimestampDefault =
    columnDefault?.toLowerCase().includes("current_timestamp") ||
    columnDefault?.toLowerCase().includes("now()") ||
    columnDefault?.toLowerCase().includes("localtimestamp");

  if (hasTimestampDefault) {
    return (
      <div className="flex">
        <DatePicker
          value={
            field.value ? new Date(field.value) : field.value === "" ? null : undefined
          }
          onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
          showTime={true}
          isFormatted={false}
          placeholder={columnDefault ?? "Select a date and time"}
          className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
        />
        <TooltipProvider delayDuration={0}>
          <Tooltip>
            <TooltipTrigger asChild>
              <button
                aria-label="Set to now"
                className="inline-flex w-7 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50"
                type="button"
                onClick={() => {
                  field.onChange(formatLocalDateTime(new Date()));
                }}
              >
                <Clock
                  aria-hidden="true"
                  className="size-4"
                  size={16}
                />
              </button>
            </TooltipTrigger>
            <TooltipContent className="px-2 py-1 text-xs">Set to now</TooltipContent>
          </Tooltip>
        </TooltipProvider>
      </div>
    );
  }

  return (
    <DatePicker
      value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
      onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
      showTime={true}
      isFormatted={false}
      placeholder={columnDefault ?? "Select a date and time"}
    />
  );
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`
around lines 282 - 340, The DatePicker onChange currently uses
date.toISOString() (in the DatePicker instances in add-record-field.tsx) which
stores UTC while the "Set to now" button formats local time; make both sides use
the same local timestamp format for timestamp columns: update the DatePicker
onChange handlers (the ones that call field.onChange(date ? date.toISOString() :
"")) to format the Date to a local string like
`${yyyy}-${mm}-${dd}T${hh}:${min}` (matching the "Set to now" button) so
field.onChange receives consistent local "YYYY-MM-DDTHH:MM" values; ensure you
reuse the same padding logic (pad function) and avoid including
timezone/seconds.
```ts
const query = `
  INSERT INTO \`${tableName}\` (${columnNames})
  VALUES (${placeholders})
`;
```
SQL injection risk: tableName is directly interpolated.
The tableName parameter is directly interpolated into the SQL query without validation or parameterization. While MySQL doesn't support parameterized table names, you should validate that tableName contains only allowed characters (alphanumeric, underscores) to prevent injection attacks.
🛡️ Proposed fix to add table name validation

```diff
+const SAFE_IDENTIFIER_REGEX = /^[a-zA-Z_][a-zA-Z0-9_]*$/;
+
 export async function addRecord({
   db,
   params,
 }: {
   db: DatabaseSchemaType["db"];
   params: AddRecordSchemaType;
 }): Promise<{ insertedCount: number }> {
   const { tableName, data } = params;
+
+  if (!SAFE_IDENTIFIER_REGEX.test(tableName)) {
+    throw new HTTPException(400, { message: "Invalid table name" });
+  }
+
   const pool = getMysqlPool(db);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mysql/add-record.mysql.dao.ts` around lines 34 - 37,
Validate the tableName used when building the INSERT query in
add-record.mysql.dao.ts to prevent SQL injection: before interpolating tableName
into the query string (where query is created using `INSERT INTO
\`${tableName}\`` alongside `columnNames` and `placeholders`), ensure tableName
matches a strict allowlist regex (e.g. only letters, numbers and underscores)
and throw or return an error if it fails validation; keep the rest of the query
construction unchanged and do not attempt to parameterize the table name.
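The allowlist check the prompt asks for can be verified in isolation; this sketch uses a plain `Error` where the DAO would throw `HTTPException`:

```typescript
// Allowlist: identifier must start with a letter/underscore and contain only
// letters, digits, and underscores.
const SAFE_IDENTIFIER_REGEX = /^[a-zA-Z_][a-zA-Z0-9_]*$/;

// Throws before the identifier can reach string-built SQL.
function assertSafeIdentifier(name: string): string {
  if (!SAFE_IDENTIFIER_REGEX.test(name)) {
    throw new Error(`Invalid table name: ${name}`);
  }
  return name;
}
```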
let successCount = 0;
const errors: Array<{ recordIndex: number; error: string }> = [];

await connection.beginTransaction();

for (let i = 0; i < records.length; i++) {
  const record = records[i];
  const values = columns.map((col) => {
    const value = record[col];
    if (booleanColumns.has(col) && typeof value === "string") {
      return value === "true" ? 1 : 0;
    }
    return value;
  });
  const placeholders = columns.map(() => "?").join(", ");

  const insertSQL = `
    INSERT INTO \`${tableName}\` (${columnNames})
    VALUES (${placeholders})
  `;

  try {
    // biome-ignore lint/suspicious/noExplicitAny: mysql2 execute doesn't accept unknown[]
    await connection.execute<ResultSetHeader>(insertSQL, values as any);
    successCount++;
  } catch (error) {
    throw new HTTPException(500, {
      message: `Failed: ${error instanceof Error ? error.message : String(error)}`,
    });
  }
}

await connection.commit();

const failureCount = errors.length;
return {
  success: failureCount === 0,
  message: `Bulk insert completed: ${successCount} records inserted${failureCount > 0 ? `, ${failureCount} failed` : ""}`,
  successCount,
  failureCount,
  errors: errors.length > 0 ? errors : undefined,
🛠️ Refactor suggestion | 🟠 Major
Dead code: errors array is never populated.
The inner catch block (lines 57-61) throws an HTTPException immediately instead of collecting errors. This means errors is always empty, failureCount is always 0, and the error aggregation logic never executes. If partial failures should be supported, collect errors and continue; otherwise, remove the unused errors array.
♻️ Option A: Remove dead code if all-or-nothing is intended
let successCount = 0;
- const errors: Array<{ recordIndex: number; error: string }> = [];
await connection.beginTransaction();
// ... loop ...
await connection.commit();
- const failureCount = errors.length;
return {
- success: failureCount === 0,
- message: `Bulk insert completed: ${successCount} records inserted${failureCount > 0 ? `, ${failureCount} failed` : ""}`,
+ success: true,
+ message: `Bulk insert completed: ${successCount} records inserted`,
successCount,
- failureCount,
- errors: errors.length > 0 ? errors : undefined,
+ failureCount: 0,
};
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let successCount = 0;

await connection.beginTransaction();

for (let i = 0; i < records.length; i++) {
  const record = records[i];
  const values = columns.map((col) => {
    const value = record[col];
    if (booleanColumns.has(col) && typeof value === "string") {
      return value === "true" ? 1 : 0;
    }
    return value;
  });
  const placeholders = columns.map(() => "?").join(", ");

  const insertSQL = `
    INSERT INTO \`${tableName}\` (${columnNames})
    VALUES (${placeholders})
  `;

  try {
    // biome-ignore lint/suspicious/noExplicitAny: mysql2 execute doesn't accept unknown[]
    await connection.execute<ResultSetHeader>(insertSQL, values as any);
    successCount++;
  } catch (error) {
    throw new HTTPException(500, {
      message: `Failed: ${error instanceof Error ? error.message : String(error)}`,
    });
  }
}

await connection.commit();

return {
  success: true,
  message: `Bulk insert completed: ${successCount} records inserted`,
  successCount,
  failureCount: 0,
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts` around lines
32 - 72, The errors array is never populated because the catch in the
bulk-insert loop immediately throws an HTTPException; decide whether bulk insert
should be all-or-nothing or allow partial failures. If all-or-nothing: remove
the errors array and related failureCount/errors return handling and keep
throwing the HTTPException from the catch (affecting the try/catch inside the
loop and the final return that references errors/successCount/failureCount). If
partial failures: change the catch inside the for-loop to push a descriptive
entry into errors (include recordIndex i and error.message from the caught
error), do not rethrow, continue the loop, and ensure you only call
connection.commit() if desired (or call rollback on fatal error), then compute
failureCount from errors.length and return errors in the response instead of
throwing HTTPException.
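If the partial-failure route is taken, the collection loop the prompt describes looks roughly like this; `insertOne` stands in for the `connection.execute` call and the surrounding transaction handling, so the names are illustrative:

```typescript
// Sketch of the partial-failure variant: collect per-record errors instead of
// aborting the whole batch on the first failure.
type InsertError = { recordIndex: number; error: string };

async function insertAllowingPartialFailures(
  records: Array<Record<string, unknown>>,
  insertOne: (record: Record<string, unknown>) => Promise<void>,
): Promise<{ successCount: number; errors: InsertError[] }> {
  let successCount = 0;
  const errors: InsertError[] = [];
  for (let i = 0; i < records.length; i++) {
    try {
      await insertOne(records[i]);
      successCount++;
    } catch (error) {
      // Record the failure and keep going rather than rethrowing.
      errors.push({
        recordIndex: i,
        error: error instanceof Error ? error.message : String(error),
      });
    }
  }
  return { successCount, errors };
}
```

With this shape, `failureCount` is simply `errors.length` and the aggregation logic in the return value becomes reachable.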
const insertSQL = `
  INSERT INTO \`${tableName}\` (${columnNames})
  VALUES (${placeholders})
`;
SQL injection risk: tableName is directly interpolated into INSERT statement.
Column names from records[0] and tableName are interpolated without escaping. Malicious column names or table names could inject SQL.
🛡️ Proposed fix: escape backticks in identifiers
+ const escapeIdentifier = (id: string) => id.replace(/`/g, "``");
+ const escapedTableName = escapeIdentifier(tableName);
+ const columnNames = columns.map((col) => `\`${escapeIdentifier(col)}\``).join(", ");
- const columnNames = columns.map((col) => `\`${col}\``).join(", ");
// ... later in the loop ...
const insertSQL = `
- INSERT INTO \`${tableName}\` (${columnNames})
+ INSERT INTO \`${escapedTableName}\` (${columnNames})
VALUES (${placeholders})
    `;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts` around lines
48 - 51, The INSERT string directly interpolates tableName and columnNames
(built from records[0]) and is vulnerable to SQL injection; add an
escapeIdentifier utility (e.g., escapeIdentifier(name): replace any backtick
with double backtick and wrap with backticks) and use it to sanitize tableName
and each column name before constructing columnNames and insertSQL (leave VALUES
placeholders and parameter binding as-is). Update where columnNames is built and
the insertSQL template so it uses escaped identifiers instead of raw
tableName/column names (symbols to change: insertSQL, tableName, columnNames,
placeholders, and the code that reads records[0]).
if (!rows[0]) {
  throw new HTTPException(500, {
    message: "No current database returned from server",
  });
}

return (rows as Array<{ db: string }>)[0] as DatabaseSchemaType;
}
Guard DATABASE() null before returning DatabaseSchemaType.
At Line 59, unchecked casting can return db: null, violating the declared schema contract.
🐛 Proposed fix
export async function getCurrentDatabase(): Promise<DatabaseSchemaType> {
const pool = getMysqlPool();
const [rows] = await pool.execute<RowDataPacket[]>("SELECT DATABASE() AS db");
if (!rows[0]) {
throw new HTTPException(500, {
message: "No current database returned from server",
});
}
-
- return (rows as Array<{ db: string }>)[0] as DatabaseSchemaType;
+ const db = (rows as Array<{ db: string | null }>)[0]?.db;
+ if (!db) {
+ throw new HTTPException(500, {
+ message: "Current database is null or empty",
+ });
+ }
+ return { db };
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts` around lines 53 -
60, The code casts rows[0] to DatabaseSchemaType without checking that the
DATABASE() column isn't null; update the return path in the function that
queries the list of databases (the block using rows and returning
DatabaseSchemaType) to validate that rows[0].db is non-null/defined before
casting—if rows[0].db is null or undefined, throw an HTTPException(500, {
message: "DATABASE() returned null or no database name" }) (or similar) instead
of returning an object that violates DatabaseSchemaType; ensure you reference
the existing rows variable and DatabaseSchemaType guard before the final return.
pool.on("error", (err) => {
  console.error(
-   `Unexpected error on database pool for "${connectionString}":`,
+   `Unexpected error on PostgreSQL pool for "${connectionString}":`,
    err.message,
  );
});

- this.pools.set(connectionString, pool);
- console.log(`Created connection pool for: ${connectionString}`);
+ this.pgPools.set(connectionString, pool);
+ console.log(`Created PostgreSQL connection pool for: ${connectionString}`);
}
Redact connection strings before logging or returning pool identifiers.
Line 125, Line 160, Line 181, Line 193, Line 220, and Line 226 log full connection strings, and Line 238 returns them. If credentials are embedded, this leaks secrets.
🔐 Suggested patch
class DatabaseManager {
+ private redactConnectionString(connectionString: string): string {
+ try {
+ const url = new URL(connectionString);
+ if (url.password) url.password = "***";
+ return url.toString();
+ } catch {
+ return "<invalid-connection-string>";
+ }
+ }
+
...
pool.on("error", (err) => {
console.error(
- `Unexpected error on PostgreSQL pool for "${connectionString}":`,
+ `Unexpected error on PostgreSQL pool for "${this.redactConnectionString(connectionString)}":`,
err.message,
);
});
this.pgPools.set(connectionString, pool);
- console.log(`Created PostgreSQL connection pool for: ${connectionString}`);
+ console.log(`Created PostgreSQL connection pool for: ${this.redactConnectionString(connectionString)}`);
}
...
- console.log(`Created MySQL connection pool for: ${connectionString}`);
+ console.log(`Created MySQL connection pool for: ${this.redactConnectionString(connectionString)}`);
}
...
- console.log(`Closed PostgreSQL connection pool for: ${connectionString}`);
+ console.log(`Closed PostgreSQL connection pool for: ${this.redactConnectionString(connectionString)}`);
}
...
- console.log(`Closed MySQL connection pool for: ${connectionString}`);
+ console.log(`Closed MySQL connection pool for: ${this.redactConnectionString(connectionString)}`);
}
...
- console.log(`Closed PostgreSQL pool for: ${connectionString}`);
+ console.log(`Closed PostgreSQL pool for: ${this.redactConnectionString(connectionString)}`);
},
);
const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
- console.log(`Closed MySQL pool for: ${connectionString}`);
+ console.log(`Closed MySQL pool for: ${this.redactConnectionString(connectionString)}`);
},
);
...
getActivePools(): string[] {
- return [...Array.from(this.pgPools.keys()), ...Array.from(this.mysqlPools.keys())];
+ return [...Array.from(this.pgPools.keys()), ...Array.from(this.mysqlPools.keys())].map(
+ (connectionString) => this.redactConnectionString(connectionString),
+ );
}
}
Also applies to: 159-161, 181-182, 193-194, 217-227, 237-239
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/db-manager.ts` around lines 117 - 126, The code currently
logs and returns raw connectionString (seen in the pool.on error handler,
console.log after this.pgPools.set(connectionString, pool), and any methods that
return pool identifiers), which can leak credentials; update all uses of
connectionString (including in pool.on error logging, the "Created PostgreSQL
connection pool" console.log, and any return values) to first redact credentials
by parsing the connectionString (remove or replace user:password and any query
params containing secrets) or replace it with a safe pool identifier (e.g.,
sanitized DSN or generated ID), and ensure this sanitized value is what gets
logged, stored in this.pgPools keys/metadata exposed to callers, and returned by
any getter methods.
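The redaction helper suggested in the patch can be exercised on its own; this sketch assumes connection strings are URL-shaped (`scheme://user:password@host/db`), which both `postgres://` and `mysql://` DSNs are:

```typescript
// Masks the password component of a URL-shaped connection string; anything
// unparseable is replaced wholesale so nothing sensitive can leak.
function redactConnectionString(connectionString: string): string {
  try {
    const url = new URL(connectionString);
    if (url.password) url.password = "***";
    return url.toString();
  } catch {
    return "<invalid-connection-string>";
  }
}
```

Falling back to a fixed placeholder on parse failure is deliberate: logging the raw string when parsing fails would reintroduce the leak.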
const pgClosePromises = Array.from(this.pgPools.entries()).map(
  async ([connectionString, pool]) => {
    await pool.end();
-   console.log(`Closed connection pool for: ${connectionString}`);
+   console.log(`Closed PostgreSQL pool for: ${connectionString}`);
  },
);
- await Promise.all(closePromises);
- this.pools.clear();
+ const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
+   async ([connectionString, pool]) => {
+     await pool.end();
+     console.log(`Closed MySQL pool for: ${connectionString}`);
+   },
+ );
+ await Promise.all([...pgClosePromises, ...mysqlClosePromises]);
+ this.pgPools.clear();
+ this.mysqlPools.clear();
Make closeAll() resilient to partial pool shutdown failures.
Line 229 uses Promise.all, so one pool.end() failure can prevent complete cleanup and skip map clearing.
🧯 Suggested patch
async closeAll(): Promise<void> {
const pgClosePromises = Array.from(this.pgPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
console.log(`Closed PostgreSQL pool for: ${connectionString}`);
},
);
const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
console.log(`Closed MySQL pool for: ${connectionString}`);
},
);
- await Promise.all([...pgClosePromises, ...mysqlClosePromises]);
- this.pgPools.clear();
- this.mysqlPools.clear();
+ const results = await Promise.allSettled([...pgClosePromises, ...mysqlClosePromises]);
+ this.pgPools.clear();
+ this.mysqlPools.clear();
+ const failures = results.filter((r) => r.status === "rejected");
+ if (failures.length > 0) {
+ throw new Error(`Failed to close ${failures.length} database pool(s).`);
+ }
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const pgClosePromises = Array.from(this.pgPools.entries()).map(
  async ([connectionString, pool]) => {
    await pool.end();
    console.log(`Closed PostgreSQL pool for: ${connectionString}`);
  },
);
const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
  async ([connectionString, pool]) => {
    await pool.end();
    console.log(`Closed MySQL pool for: ${connectionString}`);
  },
);
const results = await Promise.allSettled([...pgClosePromises, ...mysqlClosePromises]);
this.pgPools.clear();
this.mysqlPools.clear();
const failures = results.filter((r) => r.status === "rejected");
if (failures.length > 0) {
  throw new Error(`Failed to close ${failures.length} database pool(s).`);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/db-manager.ts` around lines 217 - 231, The closeAll()
implementation in db-manager.ts currently uses Promise.all over pgClosePromises
and mysqlClosePromises so a single pool.end() rejection can abort overall
shutdown and prevent clearing this.pgPools and this.mysqlPools; change it to use
Promise.allSettled (or individually catch errors for each async map callback)
when awaiting the close promises, log any failures with the associated
connectionString (from the pgPools/mysqlPools map entries) and ensure that
regardless of individual errors the method proceeds to clear this.pgPools and
this.mysqlPools and returns/throws a summarized error if you need to surface
failures.
const columnName = `\`${filter.columnName}\``;
Unsanitized identifier interpolation enables SQL injection.
Backtick-wrapping alone is insufficient. A crafted columnName containing backticks can break out of identifier context in all affected spots.
🔒 Proposed fix (escape identifiers centrally)
import type { CursorData, FilterType, SortDirection, SortType } from "shared/types";
+function escapeMysqlIdentifier(identifier: string): string {
+ return `\`${identifier.replaceAll("`", "``")}\``;
+}
+
/**
* MySQL-specific WHERE clause builder.
@@
- const columnName = `\`${filter.columnName}\``;
+ const columnName = escapeMysqlIdentifier(filter.columnName);
@@
- const sortParts = sorts.map(
- (sort) => `\`${sort.columnName}\` ${sort.direction.toUpperCase()}`,
- );
+ const sortParts = sorts.map(
+ (sort) => `${escapeMysqlIdentifier(sort.columnName)} ${sort.direction.toUpperCase()}`,
+ );
@@
- return `ORDER BY \`${sorts}\` ${order?.toUpperCase() || "ASC"}`;
+ return `ORDER BY ${escapeMysqlIdentifier(sorts)} ${order?.toUpperCase() || "ASC"}`;
@@
- const columnList = sortColumns.map((col) => `\`${col}\``).join(", ");
+ const columnList = sortColumns.map((col) => escapeMysqlIdentifier(col)).join(", ");
Also applies to: 84-85, 90-90, 114-114
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/utils/build-clauses-mysql.ts` around lines 20 - 21, The
code is vulnerable because raw identifier interpolation like
`\`${filter.columnName}\`` allows backtick injection; add a single helper
function (e.g., escapeIdentifier) that takes an identifier string and returns it
with any backticks escaped by doubling (replace ` with ``), then wrap the
escaped result in backticks when building SQL. Replace all direct interpolations
(the local columnName assignment and other places flagged—uses around the
column/table/alias constructions) to use backticks around escapeIdentifier(...)
instead of raw values so identifiers are safely quoted everywhere in
build-clauses-mysql.ts.
default:
  break;
Unsupported operators should fail closed, not silently skip filters.
At Line 59-60, dropping unknown operators can unexpectedly broaden result sets.
🛡️ Proposed fix
case "not like":
case "not ilike":
conditions.push(`${columnName} NOT LIKE ?`);
values.push(filter.value);
break;
default:
- break;
+ throw new Error(`Unsupported filter operator: ${String(filter.operator)}`);
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
default:
  throw new Error(`Unsupported filter operator: ${String(filter.operator)}`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/utils/build-clauses-mysql.ts` around lines 59 - 60, The
switch handling filter operators in build-clauses-mysql.ts currently has a
default: break; which silently ignores unknown operators; change that to fail
closed by throwing a clear error (e.g., throw new Error(`Unsupported operator
"${op}" for field "${field}"`)) from the default branch in the function that
builds clauses (the switch that inspects the operator variable inside
buildClauses / buildClausesForMySQL). Ensure the thrown error includes the
operator and field names to aid debugging and adjust any callers/tests to expect
an exception for unsupported operators.
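A fail-closed `default` branch also pairs well with a TypeScript exhaustiveness check, so new operators cannot be forgotten silently. A minimal sketch, with a deliberately tiny operator union standing in for the project's full set:

```typescript
// Illustrative operator union; the real FilterType covers many more cases.
type Operator = "eq" | "like";

function buildCondition(column: string, op: Operator): string {
  switch (op) {
    case "eq":
      return `${column} = ?`;
    case "like":
      return `${column} LIKE ?`;
    default: {
      // Assigning to `never` makes tsc flag any operator added to the union
      // but not handled above; the throw fails closed at runtime.
      const unhandled: never = op;
      throw new Error(`Unsupported filter operator: ${String(unhandled)}`);
    }
  }
}
```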
{ value: "array", label: "array" },
{ value: "enum", label: "enum" },
🧩 Analysis chain
🌐 Web query:
PostgreSQL CREATE TABLE column type declarations "array" "enum" valid DDL syntax
💡 Result:
Array column types in CREATE TABLE (valid PostgreSQL DDL)
PostgreSQL does not use a standalone type name like array; you declare an array by decorating an element type:
CREATE TABLE t (
ints_1d integer[],
texts_2d text[][],
mood_list mood[] -- arrays also work for user-defined types (e.g., enums)
);

Array type declaration forms PostgreSQL accepts include:
- `element_type[]` (one-dimensional)
- `element_type[][]` (multi-dimensional)
- Optionally with sizes (documentational; not enforced): `integer[3][3]` [1]
- SQL-standard alternative (1-D only): `integer ARRAY` or `integer ARRAY[4]` [1]
Enum column types in CREATE TABLE (valid PostgreSQL DDL)
Enums must be defined as a type first, then used as a column type:
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE TABLE person (
name text,
current_mood mood
);

This is the canonical PostgreSQL enum DDL pattern (CREATE TYPE ... AS ENUM (...) then use the type name in CREATE TABLE) [2].
You can also make arrays of enums with mood[] (same array syntax as above) [1][2].
Sources: PostgreSQL docs on arrays [1] and enumerated types [2].
[1] https://www.postgresql.org/docs/current/arrays.html
[2] https://www.postgresql.org/docs/11/datatype-enum.html
🏁 Script executed:
# Search for how these column type constants are used in the codebase
rg -t ts "column-types-pgsql" --max-count=20
Repository: husamql3/db-studio
Length of output: 141

🏁 Script executed:
# Find the file and explore its context
find . -name "column-types-pgsql.ts" -type f
Repository: husamql3/db-studio
Length of output: 115

🏁 Script executed:
# Read the column-types-pgsql.ts file to see context
cat -n ./packages/shared/src/constants/column-types-pgsql.ts | head -60
Repository: husamql3/db-studio
Length of output: 1878

🏁 Script executed:
# Search for usages of these constants, particularly in SQL builders or DDL generation
rg -t ts "array\"|\"enum\"" packages/shared/src/constants/column-types-pgsql.ts -A 2 -B 2
Repository: husamql3/db-studio
Length of output: 199

🏁 Script executed:
# Search for where these column type constants are imported and used
rg -t ts "column-types-pgsql|columnTypes" --max-count=30
Repository: husamql3/db-studio
Length of output: 141

🏁 Script executed:
# Search for imports and uses of PGSQL_COLUMN_TYPES across the codebase
rg -t ts "PGSQL_COLUMN_TYPES" --max-count=50
Repository: husamql3/db-studio
Length of output: 277

🏁 Script executed:
# Search for SQL generation or DDL builders that might use column type values
rg -t ts "CREATE TABLE|INSERT|ALTER TABLE" -l | head -20
Repository: husamql3/db-studio
Length of output: 944

🏁 Script executed:
# Look for files that might handle column type selection/processing
fd -t f -e ts "column|type|builder|query|sql" packages/ | grep -v node_modules | head -20
Repository: husamql3/db-studio
Length of output: 1075

🏁 Script executed:
# Check the create-table DAO for PostgreSQL to see how column types are used in SQL generation
cat -n packages/server/src/dao/create-table.dao.ts | head -80
Repository: husamql3/db-studio
Length of output: 2530

🏁 Script executed:
# Also check if there's a PostgreSQL-specific create-table DAO
fd -t f "create-table.*pgsql" packages/server/src/dao/
Repository: husamql3/db-studio
Length of output: 44

🏁 Script executed:
# Search for how the column type value is actually used in SQL generation
rg -t ts "\.value" packages/server/src/dao/create-table.dao.ts -B 3 -A 3
Repository: husamql3/db-studio
Length of output: 44

🏁 Script executed:
# Search for where columnType gets set and if there's validation for "array" and "enum"
rg -t ts "columnType|isArray" packages/ --max-count=50 | head -60
Repository: husamql3/db-studio
Length of output: 6253

🏁 Script executed:
# Check if there's UI logic that handles "array" and "enum" specially
rg -t tsx "array|enum" packages/core/src/ -l | grep -E "form|field|column|type" | head -10
Repository: husamql3/db-studio
Length of output: 89

🏁 Script executed:
# Look at the type definition for FieldDataType to understand the structure
rg -t ts "type FieldDataType|interface FieldDataType" -A 15
Repository: husamql3/db-studio
Length of output: 2880

🏁 Script executed:
# Check if there's validation for "array" or "enum" column types
rg -t ts "array|enum" packages/server/src/dao/create-table.dao.ts -B 5 -A 5
Repository: husamql3/db-studio
Length of output: 335

🏁 Script executed:
# Look at how columnType is validated or processed before SQL generation
rg -t ts "mapColumnTypeToPostgres|validateColumnType|columnType" packages/server/src/dao/ -l
Repository: husamql3/db-studio
Length of output: 218

🏁 Script executed:
# Check if there's any preprocessing or validation in the create-table endpoint
rg -t ts "createTable" packages/server/src/routes/ -A 10 | head -60
Repository: husamql3/db-studio
Length of output: 3669
Remove "array" and "enum" from PostgreSQL column types—they produce invalid DDL.
At lines 42-43, array and enum are invalid standalone PostgreSQL column types in CREATE TABLE statements:
- PostgreSQL requires array syntax as `element_type[]` (e.g., `integer[]`, `text[]`), not bare `array`
- Enum columns require a pre-created named enum type (e.g., `CREATE TYPE mood AS ENUM (...)` first), which this UI doesn't support
Since field.columnType is substituted directly into SQL (line 21 of create-table.dao.ts: "${field.columnName}" ${field.columnType}), selecting these values produces invalid SQL. The array case has a workaround (isArray flag appends []), but enum has no recovery path. Remove both entries or implement proper validation to prevent their selection.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared/src/constants/column-types-pgsql.ts` around lines 42 - 43,
The exported PostgreSQL column types list in column-types-pgsql.ts currently
includes the invalid standalone entries "array" and "enum"; remove these two
entries from that list (or filter them out before presenting options) so
field.columnType can never be set to "array" or "enum" because
create-table.dao.ts substitutes field.columnType verbatim into SQL (the
`"${field.columnName}" ${field.columnType}` usage), which produces invalid DDL;
alternatively, implement UI/server-side validation in the selector code to
prevent selecting "array" without an element type and to block unnamed enums.
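If validation is preferred over removing the entries, a guard in the DDL mapper makes the constraint concrete. A sketch under assumed names (`FieldSketch` and `columnDdl` are illustrative, not the DAO's actual API):

```typescript
// Builds one column's DDL fragment, rejecting the two values that can never
// form a valid standalone PostgreSQL type.
type FieldSketch = { columnName: string; columnType: string; isArray?: boolean };

function columnDdl(field: FieldSketch): string {
  if (field.columnType === "array" || field.columnType === "enum") {
    // Arrays need an element type (e.g. text[]); enums need a pre-created named type.
    throw new Error(`"${field.columnType}" is not a standalone PostgreSQL type`);
  }
  const sqlType = field.isArray ? `${field.columnType}[]` : field.columnType;
  return `"${field.columnName}" ${sqlType}`;
}
```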