support mysql database#125

Merged
husamql3 merged 6 commits into stage from
feat/123/support-mysql-database
Mar 1, 2026
Conversation

@husamql3
Owner

@husamql3 husamql3 commented Feb 28, 2026

Summary by CodeRabbit

  • New Features

    • Added MySQL database support alongside PostgreSQL.
    • Enhanced date and timestamp fields with "Set to now" controls.
    • Added UUID field generation with auto-fill capability.
  • Bug Fixes

    • Improved database connection error detection and messaging.
  • Chores

    • Updated database initialization to support both PostgreSQL and MySQL setup.
    • Removed automatic browser launch on server startup.
    • Enhanced test coverage for multi-database support.

🎯 Auto PR: stage → main
@coderabbitai

coderabbitai bot commented Feb 28, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough


This pull request introduces MySQL database support alongside existing PostgreSQL functionality. Changes include dialect-specific DAO implementations for all data operations, runtime route dispatch based on database type, pool management updates to handle both databases, enhanced date/time/UUID field handling in the UI, and supporting workflow and initialization script modifications.
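The per-request dispatch described above can be sketched roughly as follows; the names (TableListDao, selectTableListDao, and the two DAO objects) are illustrative, not the actual identifiers in this PR:

```typescript
// Illustrative sketch of dbType-based DAO dispatch: routes extract the
// database type from request context and pick the matching dialect DAO.
type DbType = "pg" | "mysql";

interface TableListDao {
  listTables(db: string): string;
}

const pgTableList: TableListDao = {
  // PostgreSQL introspection via pg_tables ($n placeholders elsewhere)
  listTables: (db) => `SELECT tablename FROM pg_tables -- db=${db}`,
};

const mysqlTableList: TableListDao = {
  // MySQL introspection via information_schema ("?" placeholders elsewhere)
  listTables: (db) =>
    `SELECT table_name FROM information_schema.tables WHERE table_schema = ? -- db=${db}`,
};

// Route handlers call this with the dbType extracted from context variables.
function selectTableListDao(dbType: DbType): TableListDao {
  return dbType === "mysql" ? mysqlTableList : pgTableList;
}
```

The same selection pattern repeats per route group (databases, tables, records, query), which is why the PR adds a typed RouteEnv for accessing dbType.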

Changes

Cohort / File(s) Summary
MySQL Data Access Layer
packages/server/src/dao/mysql/add-record.mysql.dao.ts, bulk-insert-records.mysql.dao.ts, create-table.mysql.dao.ts, database-list.mysql.dao.ts, delete-column.mysql.dao.ts, delete-records.mysql.dao.ts, delete-table.mysql.dao.ts, export-table.mysql.dao.ts, query.mysql.dao.ts, table-columns.mysql.dao.ts, table-list.mysql.dao.ts, table-schema.mysql.dao.ts, tables-data.mysql.dao.ts, update-records.mysql.dao.ts
New MySQL-specific DAO implementations for CRUD operations, table/column management, database introspection, and querying. Each provides database-native SQL generation and error handling.
Database Configuration & Pool Management
packages/server/src/db-manager.ts, packages/shared/src/types/database.types.ts
Extended db-manager to support separate MySQL and PostgreSQL connection pools with type-aware pool selection. Updated DATABASE_TYPES to include "mysql" alongside "pg".
Route Dispatch & Environment Typing
packages/server/src/routes/databases.routes.ts, query.routes.ts, records.routes.ts, tables.routes.ts, packages/server/src/app.types.ts
Updated all major routes to dispatch to MySQL or PostgreSQL DAOs based on dbType extracted from request context. Added RouteEnv type for typed route access to database type.
Date/Time/UUID Field Enhancements
packages/core/src/components/add-table/add-record/add-record-field.tsx
Enhanced date picker rendering with "Set to now" controls for fields with current date/time defaults. Added UUID generation button and improved timestamp handling for date-like field types.
Type Definitions & Constants
packages/shared/src/constants/column-types-mysql.ts, column-types-pgsql.ts, index.ts, packages/shared/src/types/column-info.types.ts, column.type.ts
Added MySQL column type mappings, PostgreSQL column type constants, and standardized data type labels. Implemented mapMysqlToDataType and standardizeMysqlDataTypeLabel for MySQL type normalization.
SQL Clause Builders
packages/server/src/utils/build-clauses-mysql.ts
New MySQL-specific SQL clause builders for WHERE, ORDER BY, and cursor-based pagination using MySQL parameterization syntax (?) and backtick identifiers.
Comprehensive Test Suites
packages/server/tests/routes/mysql/databases.routes.mysql.test.ts, query.routes.mysql.test.ts, records.routes.mysql.test.ts, tables.routes.mysql.test.ts, packages/server/tests/routes/query.routes.test.ts, records.routes.test.ts, tables.routes.test.ts, utils/create-server.test.ts
Added extensive MySQL-specific test coverage for all routes and updated existing test mocks to support MySQL dispatch and validation.
Workflow & Build Configuration
.github/workflows/check.yml, .husky/pre-commit, package.json, packages/server/package.json, packages/server/src/index.ts
Updated test workflow to run generic tests, added test execution to pre-commit hook, separated database initialization scripts for PostgreSQL and MySQL, replaced "open" dependency with "mysql2", and removed auto-open browser behavior.
Database Initialization Scripts
scripts/init-db-mysql.sh
New bash script for Docker-based MySQL database provisioning with schema creation, seed data insertion, and .env synchronization.
Input Validation
packages/server/src/cmd/get-db-url.ts
Updated prompt validation callbacks to handle optional string inputs with safe chaining for empty/whitespace checks.
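The MySQL clause builders mentioned above ("?" parameterization, backtick identifiers) can be sketched like this; buildWhereClauseMysql and escapeIdentifier here are a minimal illustration under those assumptions, not the PR's actual code:

```typescript
// Minimal sketch of a MySQL WHERE-clause builder: positional "?"
// placeholders for values, backtick-quoted identifiers for columns.
type Filter = { column: string; value: string | number };

function escapeIdentifier(name: string): string {
  // Escape embedded backticks by doubling them, then quote the identifier.
  return "`" + name.replace(/`/g, "``") + "`";
}

function buildWhereClauseMysql(filters: Filter[]): {
  clause: string;
  params: (string | number)[];
} {
  if (filters.length === 0) return { clause: "", params: [] };
  const parts = filters.map((f) => `${escapeIdentifier(f.column)} = ?`);
  return {
    clause: `WHERE ${parts.join(" AND ")}`,
    params: filters.map((f) => f.value),
  };
}
```

Values travel separately as params for the driver to bind; only identifiers, which MySQL cannot parameterize, are escaped inline.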

Sequence Diagram

sequenceDiagram
    actor Client
    participant Route as Route Handler
    participant Context as RouteEnv Context
    participant Dispatch as DAO Dispatcher
    participant MySQL as MySQL DAO
    participant Postgres as PostgreSQL DAO
    participant DB as Database

    Client->>Route: Request (GET/POST/PATCH/DELETE)
    activate Route
    Route->>Context: Extract dbType from Variables
    activate Context
    Context-->>Route: dbType = "mysql" | "pg"
    deactivate Context
    
    alt dbType is "mysql"
        Route->>Dispatch: Select MySQL DAO
        activate Dispatch
        Dispatch->>MySQL: Call MySQL implementation
        activate MySQL
        MySQL->>DB: Execute MySQL Query
        activate DB
        DB-->>MySQL: Result Set
        deactivate DB
        MySQL-->>Dispatch: Response
        deactivate MySQL
        deactivate Dispatch
    else dbType is "pg"
        Route->>Dispatch: Select PostgreSQL DAO
        activate Dispatch
        Dispatch->>Postgres: Call PostgreSQL implementation
        activate Postgres
        Postgres->>DB: Execute PostgreSQL Query
        activate DB
        DB-->>Postgres: Result Set
        deactivate DB
        Postgres-->>Dispatch: Response
        deactivate Postgres
        deactivate Dispatch
    end
    
    Route-->>Client: HTTP Response
    deactivate Route

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • PR #52: Extends date/time picker controls in add-record-field.tsx with "Set to now" behavior and ISO/local datetime conversion helpers.
  • PR #56: Adds time controls and "Set to now" UI for timestamptz/timestamp field handling in the same add-record-field component.
  • PR #85: Implements delete-column functionality with server DAO and client hook integration, overlapping with MySQL delete-column DAO added here.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 48.28%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'support mysql database' directly and clearly describes the main objective of the pull request: adding MySQL database support across multiple components and routes.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@husamql3
Owner Author

@CodeRabbit review

@coderabbitai

coderabbitai bot commented Feb 28, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🟡 Minor comments (7)
packages/core/src/components/add-table/add-record/add-record-field.tsx-311-314 (1)

311-314: ⚠️ Potential issue | 🟡 Minor

Redundant .padStart(2, "0") calls.

The pad() function at line 26 already applies padStart(2, "0"), making the chained .padStart(2, "0") calls here unnecessary.

🧹 Proposed fix
 onClick={() => {
   const now = new Date();
   const yyyy = now.getFullYear();
-  const mm = pad(now.getMonth() + 1).padStart(2, "0");
-  const dd = pad(now.getDate()).padStart(2, "0");
-  const hh = pad(now.getHours()).padStart(2, "0");
-  const min = pad(now.getMinutes()).padStart(2, "0");
+  const mm = pad(now.getMonth() + 1);
+  const dd = pad(now.getDate());
+  const hh = pad(now.getHours());
+  const min = pad(now.getMinutes());
   field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
 }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`
around lines 311 - 314, The mm, dd, hh, and min assignments redundantly call
.padStart(2, "0") after using the helper pad() which already applies padding;
update the lines that set mm, dd, hh, and min to call pad(...) alone (e.g.,
replace pad(...).padStart(2, "0") with pad(...)) and leave the pad() helper
implementation as-is to avoid double-padding (refer to the variables mm, dd, hh,
min and the pad() helper).
packages/server/src/dao/mysql/database-list.mysql.dao.ts-91-95 (1)

91-95: ⚠️ Potential issue | 🟡 Minor

MySQL connection info fallback can report PostgreSQL's default port.
At line 95, the parseDatabaseUrl() fallback may return 5432 (per packages/server/src/utils/parse-database-url.ts), which is the wrong default port for MySQL.

🔧 Proposed fix
 	const urlDefaults = parseDatabaseUrl();
+	const fallbackMysqlPort = 3306;

 	return {
 		host: String(info.host || urlDefaults.host),
-		port: Number(info.port || urlDefaults.port),
+		port: Number(info.port ?? fallbackMysqlPort),
 		user: String(info.user),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts` around lines 91 -
95, The fallback for MySQL connection port uses parseDatabaseUrl()
(urlDefaults.port) which can return PostgreSQL's 5432; update the port fallback
logic in database-list.mysql.dao.ts (where host/port are constructed) to prefer
info.port, then urlDefaults.port only if it's a MySQL-appropriate value, and
finally use 3306 as the ultimate default; i.e., change the port expression
around Number(info.port || urlDefaults.port) to coercively fall back to 3306
when urlDefaults.port is missing or equals 5432 so MySQL connections never
default to Postgres port.
packages/server/src/dao/mysql/table-list.mysql.dao.ts-29-31 (1)

29-31: ⚠️ Potential issue | 🟡 Minor

SQL injection risk: tableName is directly interpolated into COUNT query.

Table names come from information_schema queries in this case (so less exploitable), but the interpolation pattern is still unsafe and inconsistent with best practices. Escape backticks in the identifier.

🛡️ Proposed fix: escape backticks in identifier
 		(tables as Array<{ tableName: string }>).map(async (table) => {
+			const escapedTableName = table.tableName.replace(/`/g, "``");
 			const [countRows] = await pool.execute<RowDataPacket[]>(
-				`SELECT COUNT(*) as count FROM \`${table.tableName}\``,
+				`SELECT COUNT(*) as count FROM \`${escapedTableName}\``,
 			);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/table-list.mysql.dao.ts` around lines 29 - 31,
The COUNT query interpolates table.tableName directly (in the pool.execute call
that assigns countRows), which risks injection; sanitize the identifier by
escaping any backticks in table.tableName (e.g., replace ` with ``), then wrap
the result in backticks and use that safe identifier in the template literal for
the SELECT COUNT(*) query so pool.execute uses the escaped table name.
scripts/init-db-mysql.sh-29-29 (1)

29-29: ⚠️ Potential issue | 🟡 Minor

Security concern: Passwordless root with remote access.

The script creates a MySQL container with no root password (MYSQL_ALLOW_EMPTY_PASSWORD=yes) and grants root access from any host. While acceptable for local development, this poses a risk if the container is accidentally exposed.

Consider either:

  1. Setting a default development password
  2. Adding a prominent warning comment
  3. Binding only to localhost (already done via DB_HOST default)
💡 Suggested improvement: Add warning comment
+# WARNING: This configuration is for LOCAL DEVELOPMENT ONLY.
+# Do not use in production - root has no password and remote access is enabled.
 echo "[init-db-mysql] granting remote access to root..."
 docker exec "${DB_CONTAINER}" mysql -uroot -e "
   CREATE USER IF NOT EXISTS 'root'@'%';

Also applies to: 39-44

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/init-db-mysql.sh` at line 29, The script currently sets
MYSQL_ALLOW_EMPTY_PASSWORD=yes which creates a passwordless root user and grants
remote access; update the container setup to avoid this by either replacing
MYSQL_ALLOW_EMPTY_PASSWORD with a secure default env var (e.g.,
MYSQL_ROOT_PASSWORD=<dev-password>) or add a prominent warning comment above the
MYSQL_ALLOW_EMPTY_PASSWORD line indicating this is insecure for anything but
isolated local development and must not be used in production, and/or ensure
DB_HOST is bound to localhost only; reference the MYSQL_ALLOW_EMPTY_PASSWORD and
DB_HOST env settings in your change so reviewers can find and verify the fix.
packages/server/src/dao/mysql/tables-data.mysql.dao.ts-114-114 (1)

114-114: ⚠️ Potential issue | 🟡 Minor

Redundant ternary expression - both branches are identical.

The expression Array.isArray(sort) ? sort : sort evaluates to sort in both cases. This appears to be a copy-paste error.

🔧 Proposed fix
-	const sortClause = buildSortClauseMysql(Array.isArray(sort) ? sort : sort, order);
+	const sortClause = buildSortClauseMysql(sort, order);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/tables-data.mysql.dao.ts` at line 114, The
ternary in the call to buildSortClauseMysql is redundant (Array.isArray(sort) ?
sort : sort) — update the invocation in the tables-data.mysql.dao.ts code (where
sortClause is computed) to pass sort directly to buildSortClauseMysql (i.e.,
replace the ternary with just sort) so the call becomes
buildSortClauseMysql(sort, order); ensure you update the reference that computes
sortClause accordingly.
packages/server/src/routes/records.routes.ts-160-160 (1)

160-160: ⚠️ Potential issue | 🟡 Minor

Remove debug console.log statement.

This debug statement should be removed before merging to production.

🧹 Proposed fix
-			console.log("result", result);
 			return c.json({ data: result }, 200);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/routes/records.routes.ts` at line 160, Remove the debug
console.log call: delete the statement console.log("result", result) from
records.routes.ts (where it appears in the request handler) so no debug prints
reach production; if you need persisted logging instead, replace it with the
project's logger (e.g., processLogger.error/info or the existing request logger)
but otherwise simply remove the console.log line.
packages/server/tests/routes/mysql/tables.routes.mysql.test.ts-910-919 (1)

910-919: ⚠️ Potential issue | 🟡 Minor

Add a response status assertion in the underscore table-name edge case.

Line 913 stores res but the test never asserts the HTTP outcome, so route-level regressions can slip through.

✅ Suggested patch
 		it("should handle table names with underscores", async () => {
 			vi.mocked(mysqlTableColumnsDao.getTableColumns).mockResolvedValue([]);

 			const res = await app.request("/mysql/tables/user_profiles/columns?db=testdb");

+			expect(res.status).toBe(200);
 			expect(mysqlTableColumnsDao.getTableColumns).toHaveBeenCalledWith({
 				tableName: "user_profiles",
 				db: "testdb",
 			});
 		});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/tests/routes/mysql/tables.routes.mysql.test.ts` around lines
910 - 919, The test "should handle table names with underscores" stores the HTTP
response in the variable res but doesn't assert the HTTP status; update the test
(in packages/server/tests/routes/mysql/tables.routes.mysql.test.ts) to assert
the response status on res (e.g., expect(res.status).toBe(200) or appropriate
code) after calling app.request("/mysql/tables/user_profiles/columns?db=testdb")
and before verifying mysqlTableColumnsDao.getTableColumns was called to ensure
route-level success is validated.
🧹 Nitpick comments (10)
packages/core/src/components/add-table/add-record/add-record-field.tsx (1)

189-210: Consider extracting the action button pattern into a reusable component.

The tooltip-wrapped action button appears four times with nearly identical structure (lines 189-210, 244-265, 301-327, 360-381). Extracting this into a helper component would reduce duplication and simplify maintenance.

♻️ Example extraction
// Helper component for input with action button
interface ActionButtonProps {
  ariaLabel: string;
  tooltip: string;
  onClick: () => void;
  icon: React.ReactNode;
}

const ActionButton = ({ ariaLabel, tooltip, onClick, icon }: ActionButtonProps) => (
  <TooltipProvider delayDuration={0}>
    <Tooltip>
      <TooltipTrigger asChild>
        <button
          aria-label={ariaLabel}
          className="inline-flex w-9 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50"
          type="button"
          onClick={onClick}
        >
          {icon}
        </button>
      </TooltipTrigger>
      <TooltipContent className="px-2 py-1 text-xs">{tooltip}</TooltipContent>
    </Tooltip>
  </TooltipProvider>
);

Usage:

<ActionButton
  ariaLabel="Set to now"
  tooltip="Set to now"
  onClick={() => field.onChange(new Date().toISOString())}
  icon={<Clock aria-hidden="true" className="size-4" size={16} />}
/>

Also applies to: 244-265, 301-327, 360-381

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`
around lines 189 - 210, Extract the repeated tooltip-wrapped action button into
a reusable component (e.g., ActionButton) to remove duplication seen in the four
places using TooltipProvider/Tooltip/TooltipTrigger/TooltipContent and a button
with className and onClick calling field.onChange; implement ActionButton props
(ariaLabel, tooltip, onClick, icon, optional className) and replace each inline
block (the instances around the Clock button and the three similar blocks) with
<ActionButton ariaLabel="Set to now" tooltip="Set to now" onClick={() =>
field.onChange(...)} icon={<Clock .../>} /> so the behavior and accessibility
(aria-label, tooltip) and styling are preserved while centralizing the markup
and classes.
packages/server/src/app.types.ts (1)

29-33: Consider deriving RouteEnv from a shared Variables type to avoid drift.
This is fine functionally, but dbType is now duplicated in both RouteEnv and AppType.

♻️ Optional dedup refactor
+type RouteVariables = {
+	dbType: DatabaseTypeSchema;
+};

 export type RouteEnv = {
-	Variables: {
-		dbType: DatabaseTypeSchema;
-	};
+	Variables: RouteVariables;
 };

 export type AppType = {
-	Variables: {
-		dbType: DatabaseTypeSchema;
-	};
+	Variables: RouteVariables;
 	Bindings: {
 		databases: DatabasesRoutes;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/app.types.ts` around lines 29 - 33, RouteEnv duplicates
the Variables shape (dbType) that also exists on AppType; extract a single
shared Variables type (e.g., export type Variables = { dbType:
DatabaseTypeSchema }) and have both RouteEnv and AppType reference that shared
Variables type instead of redefining dbType, updating the RouteEnv declaration
(RouteEnv = { Variables: Variables }) and any AppType definition to use the same
exported Variables symbol so the schema cannot drift.
.husky/pre-commit (1)

1-3: Running full test suite in pre-commit may slow down developer workflow.

Adding bun run test to pre-commit means every commit triggers the entire test suite. If tests require database connections or take significant time, this could frustrate developers. Consider:

  1. Running tests only in CI (current workflow already covers this)
  2. Using --changed or similar flags to run only affected tests
  3. Making test execution optional via --no-verify for quick commits
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.husky/pre-commit around lines 1 - 3, The pre-commit hook currently runs the
full test suite via the command "bun run test", which slows commits; remove or
replace that line so pre-commit doesn't execute the entire suite: either delete
the "bun run test" entry, replace it with a faster check (e.g., a
changed-files-aware test runner flag like "bun run test --changed" or a
lint/quick-smoke script), or move full test execution to CI and keep only "bun
run check" and "bun run build" in the hook; ensure the modified hook still
allows bypassing with --no-verify for quick commits.
packages/server/tests/routes/query.routes.test.ts (1)

627-646: Consider simplifying the mock setup by using the top-level mock reference.

The dynamic import pattern works but is slightly convoluted. Since the module is already mocked at the top level (lines 12-15), you could reference it directly from the mock or use vi.mocked() consistently.

♻️ Optional: Simplify mock setup
 		it("should accept mysql database type as valid", async () => {
-			const { executeQuery: mysqlExecuteQuery } = await import(
-				"@/dao/mysql/query.mysql.dao.js"
-			);
-			vi.mocked(mysqlExecuteQuery).mockResolvedValue({
+			const mysqlQueryDao = await import("@/dao/mysql/query.mysql.dao.js");
+			vi.mocked(mysqlQueryDao.executeQuery).mockResolvedValue({
 				columns: ["1"],
 				rows: [{ "1": 1 }],
 				rowCount: 1,
 				duration: 1.0,
 			});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/tests/routes/query.routes.test.ts` around lines 627 - 646,
Replace the dynamic import and local mocking in the test with the top-level
mocked reference: remove the await import(...) and instead get the
already-mocked executeQuery from the shared mock (use the top-level mocked
module reference) and call vi.mocked(executeQuery).mockResolvedValue(...) so the
test uses the same mocked executeQuery as other tests; specifically update the
local mysqlExecuteQuery usage to reference the module-level executeQuery mock
and apply mockResolvedValue for columns/rows/rowCount/duration.
.github/workflows/check.yml (1)

29-44: Consider renaming the job to reflect its broader scope.

The job is named test-server but now runs bun run test which executes all tests (including MySQL paths). The naming is slightly misleading but not blocking.

♻️ Optional: Rename job for clarity
-  test-server:
+  test:
     runs-on: ubuntu-latest
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/check.yml around lines 29 - 44, Rename the GitHub Actions
job currently named "test-server" to a name that reflects it runs the full test
suite (e.g., "test" or "run-tests") so it isn't misleading; update the job key
where "test-server" appears and ensure any references to that job key (if any)
remain consistent—this job executes "bun run test" so choose a broader
identifier like "test" or "ci-tests" and replace "test-server" accordingly.
packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts (1)

37-62: Performance: Consider using batch INSERT for better throughput.

Currently, each record executes a separate INSERT statement. For bulk operations, a single multi-row INSERT INTO ... VALUES (...), (...), (...) would be significantly more efficient.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts` around lines
37 - 62, The current loop issues a separate INSERT per record; change
bulk-insert-records.mysql.dao.ts to build a single multi-row INSERT for batches:
for the variables records, columns, booleanColumns, tableName, columnNames,
connection and ResultSetHeader, convert boolean string values the same way but
flatten values into a single array and build placeholders like "(?, ?, ...), (?,
?, ...)" for N rows, then call connection.execute<ResultSetHeader>(insertSQL,
flattenedValues) once per batch; also implement chunking (e.g., max rows per
query) to avoid oversized SQL, increment successCount by the number of rows
inserted on success, and keep the existing HTTPException error handling.
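A minimal sketch of the multi-row INSERT suggested above, with chunking to keep statements bounded; buildBatchInsertSql and chunk are illustrative names, and connection handling is omitted:

```typescript
// Build one multi-row INSERT statement: "(?, ?), (?, ?), ..." placeholders,
// backtick-escaped identifiers. Values are flattened and bound by the driver.
function buildBatchInsertSql(
  tableName: string,
  columns: string[],
  rowCount: number,
): string {
  const cols = columns.map((c) => `\`${c.replace(/`/g, "``")}\``).join(", ");
  const rowPlaceholder = `(${columns.map(() => "?").join(", ")})`;
  const placeholders = Array.from({ length: rowCount }, () => rowPlaceholder).join(", ");
  return `INSERT INTO \`${tableName.replace(/`/g, "``")}\` (${cols}) VALUES ${placeholders}`;
}

// Split records into fixed-size chunks so no single statement grows unbounded.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}
```

Each chunk then becomes one connection.execute call with the flattened values array, and successCount increments by the chunk's row count.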
packages/server/src/dao/mysql/delete-table.mysql.dao.ts (1)

58-61: Table name from database result used in query without sanitization.

While constraint.referencingTable comes from information_schema, it's still interpolated directly into the SQL. This is generally safe since it's system-controlled data, but for defense in depth, consider using backtick escaping consistently or validating the value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/delete-table.mysql.dao.ts` around lines 58 -
61, The code in the fkConstraints loop interpolates constraint.referencingTable
directly into SQL; update the loop in delete-table.mysql.dao.ts (where
fkConstraints is iterated) to avoid raw interpolation by either validating the
table name against a strict whitelist/regex (e.g. allow only alphanumerics,
underscore, dollar, and dot if needed) and then wrapping it in backticks, or use
the DB library's identifier-escaping helper (e.g. connection.escapeId /
pool.format with ??) to produce a safely escaped identifier; apply the same
treatment to any other places that build queries from information_schema values.
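The allowlist approach suggested here can be sketched as below (mysql2's connection.escapeId is the library alternative); assertSafeIdentifier and its regex are illustrative choices, not the PR's code:

```typescript
// Validate an identifier sourced from information_schema before
// interpolating it into SQL. Rejects anything outside MySQL's unquoted
// identifier character set (ASCII letters, digits, underscore, dollar).
function assertSafeIdentifier(name: string): string {
  if (!/^[A-Za-z0-9_$]+$/.test(name)) {
    throw new Error(`Unsafe SQL identifier: ${name}`);
  }
  return name;
}

// Wrap the validated identifier in backticks for use in a statement.
function quoteIdentifier(name: string): string {
  return "`" + assertSafeIdentifier(name) + "`";
}
```

Validation plus quoting gives defense in depth even when the value is system-controlled, and fails loudly instead of emitting a malformed statement.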
packages/server/src/dao/mysql/export-table.mysql.dao.ts (1)

16-20: Consider distinguishing "table not found" from "table is empty".

A 404 error for an empty table may be misleading: the table exists but has no rows. Consider returning an empty result set instead, reserving 404 for when the table genuinely doesn't exist. This aligns with how the PostgreSQL equivalent likely behaves and provides clearer API semantics.

Proposed approach
-	if (!rows || rows.length === 0) {
-		throw new HTTPException(404, {
-			message: `Table "${tableName}" does not exist or has no data`,
-		});
-	}
-
-	const cols = Object.keys(rows[0]);
+	const cols = rows.length > 0 ? Object.keys(rows[0]) : [];
 
 	return { cols, rows: rows as Record<string, CellValue>[] };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/export-table.mysql.dao.ts` around lines 16 -
20, The current block in export-table.mysql.dao.ts throws HTTPException(404) for
an empty result set; change this so that when the query returns rows === [] you
return an empty result (e.g., empty array or result payload) instead of a 404,
and only throw HTTPException(404) for a genuine “table not found” error by
detecting MySQL’s no-such-table condition (catch the SQL error code like
ER_NO_SUCH_TABLE or query information_schema.tables for tableName); update the
logic around the rows variable and tableName handling so HTTPException is
reserved for the actual-not-found case while empty tables return an empty
response.
packages/server/src/dao/mysql/create-table.mysql.dao.ts (1)

112-115: Timezone information is lost when mapping timestamp with time zone to DATETIME.

PostgreSQL's timestamptz stores timezone-aware timestamps, but MySQL's DATETIME is timezone-naive. Consider using TIMESTAMP for MySQL (which is timezone-aware) or documenting this behavioral difference for users migrating schemas.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/create-table.mysql.dao.ts` around lines 112 -
115, The current type mapping maps "timestamp with time zone" and timestamptz to
"DATETIME" which loses timezone info; update the mapping in the table type map
(the object that currently contains the keys timestamp, "timestamp without time
zone", "timestamp with time zone", timestamptz) so that "timestamp with time
zone" and timestamptz map to MySQL "TIMESTAMP" (leave plain timestamp or
"timestamp without time zone" as "DATETIME" if you want timezone-naive behavior)
and add a short inline comment documenting the behavioral difference for
migrations.
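The suggested mapping could look like the sketch below; the object name is illustrative. Note the trade-off worth documenting inline: MySQL TIMESTAMP preserves a UTC instant but is limited to the 1970-2038 range, while DATETIME has a wider range but is timezone-naive.

```typescript
// Illustrative Postgres-to-MySQL type mapping for timestamp columns.
// TIMESTAMP stores a UTC instant (converted via the session time zone);
// DATETIME silently drops timezone information.
const pgToMysqlTimestampType: Record<string, string> = {
  timestamp: "DATETIME",
  "timestamp without time zone": "DATETIME",
  "timestamp with time zone": "TIMESTAMP",
  timestamptz: "TIMESTAMP",
};
```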
packages/server/src/db-manager.ts (1)

128-128: Avoid allocating an unmanaged fallback PostgreSQL pool.

Line 128 can create a new Pool that is not cached and does not register the shared error handler.

♻️ Suggested patch
-		return this.pgPools.get(connectionString) ?? new Pool({ connectionString });
+		const pool = this.pgPools.get(connectionString);
+		if (!pool) {
+			throw new Error(`Failed to initialize PostgreSQL pool for: ${connectionString}`);
+		}
+		return pool;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/db-manager.ts` at line 128, The current return expression
in the method (which uses this.pgPools.get(connectionString) ?? new Pool({
connectionString })) can allocate an unmanaged Pool; change it so that if a Pool
for connectionString doesn't exist you create a new Pool, register the shared
error handler (the same handler used for pooled instances), store it in
this.pgPools under connectionString, and then return that stored Pool; reference
pgPools, connectionString, and Pool so you attach the error listener and persist
the instance instead of returning a transient Pool.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`:
- Around line 282-340: The DatePicker onChange currently uses date.toISOString()
(in the DatePicker instances in add-record-field.tsx) which stores UTC while the
"Set to now" button formats local time; make both sides use the same local
timestamp format for timestamp columns: update the DatePicker onChange handlers
(the ones that call field.onChange(date ? date.toISOString() : "")) to format
the Date to a local string like `${yyyy}-${mm}-${dd}T${hh}:${min}` (matching the
"Set to now" button) so field.onChange receives consistent local
"YYYY-MM-DDTHH:MM" values; ensure you reuse the same padding logic (pad
function) and avoid including timezone/seconds.

In `@packages/server/src/dao/mysql/add-record.mysql.dao.ts`:
- Around line 34-37: Validate the tableName used when building the INSERT query
in add-record.mysql.dao.ts to prevent SQL injection: before interpolating
tableName into the query string (where query is created using `INSERT INTO
\`${tableName}\`` alongside `columnNames` and `placeholders`), ensure tableName
matches a strict allowlist regex (e.g. only letters, numbers and underscores)
and throw or return an error if it fails validation; keep the rest of the query
construction unchanged and do not attempt to parameterize the table name.
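A minimal TypeScript sketch of the allowlist approach described above; the helper name `validateIdentifier` is illustrative, not the DAO's actual API:

```typescript
// Hypothetical helper: reject any table/column name that is not a plain
// identifier before it is interpolated into a MySQL query string.
// MySQL identifiers are capped at 64 characters.
const IDENTIFIER_RE = /^[A-Za-z0-9_]+$/;

export function validateIdentifier(name: string): string {
  if (!IDENTIFIER_RE.test(name) || name.length > 64) {
    throw new Error(`Invalid SQL identifier: ${name}`);
  }
  return name;
}

// Usage sketch:
// const query = `INSERT INTO \`${validateIdentifier(tableName)}\` (...) VALUES (...)`;
```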

In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts`:
- Around line 32-72: The errors array is never populated because the catch in
the bulk-insert loop immediately throws an HTTPException; decide whether bulk
insert should be all-or-nothing or allow partial failures. If all-or-nothing:
remove the errors array and related failureCount/errors return handling and keep
throwing the HTTPException from the catch (affecting the try/catch inside the
loop and the final return that references errors/successCount/failureCount). If
partial failures: change the catch inside the for-loop to push a descriptive
entry into errors (include recordIndex i and error.message from the caught
error), do not rethrow, continue the loop, and ensure you only call
connection.commit() if desired (or call rollback on fatal error), then compute
failureCount from errors.length and return errors in the response instead of
throwing HTTPException.
- Around line 48-51: The INSERT string directly interpolates tableName and
columnNames (built from records[0]) and is vulnerable to SQL injection; add an
escapeIdentifier utility (e.g., escapeIdentifier(name): replace any backtick
with double backtick and wrap with backticks) and use it to sanitize tableName
and each column name before constructing columnNames and insertSQL (leave VALUES
placeholders and parameter binding as-is). Update where columnNames is built and
the insertSQL template so it uses escaped identifiers instead of raw
tableName/column names (symbols to change: insertSQL, tableName, columnNames,
placeholders, and the code that reads records[0]).
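A sketch of the escaping utility the comment describes (double embedded backticks, then wrap in backticks); the function name is the one suggested in the review, not code that exists in the PR:

```typescript
// Escape a MySQL identifier: double any embedded backticks and wrap the
// result in backticks, per MySQL's identifier-quoting rules.
export function escapeIdentifier(name: string): string {
  return "`" + name.replace(/`/g, "``") + "`";
}

// Usage sketch for the bulk-insert column list:
// const columnNames = Object.keys(records[0]).map(escapeIdentifier).join(", ");
```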

In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts`:
- Around line 53-60: The code casts rows[0] to DatabaseSchemaType without
checking that the DATABASE() column isn't null; update the return path in the
function that queries the list of databases (the block using rows and returning
DatabaseSchemaType) to validate that rows[0].db is non-null/defined before
casting—if rows[0].db is null or undefined, throw an HTTPException(500, {
message: "DATABASE() returned null or no database name" }) (or similar) instead
of returning an object that violates DatabaseSchemaType; ensure you reference
the existing rows variable and DatabaseSchemaType guard before the final return.

In `@packages/server/src/dao/mysql/delete-column.mysql.dao.ts`:
- Around line 45-47: The ALTER TABLE call interpolates tableName and columnName
directly into SQL (pool.execute in delete-column.mysql.dao.ts); fix by escaping
any backticks in identifiers before interpolation (e.g., add a small helper
function like escapeMySQLIdentifier that replaces ` with `` and wraps the name
in backticks) and use it when building the query for tableName and columnName in
the DROP COLUMN statement; apply the same helper to other DAOs mentioned
(table-schema.mysql.dao.ts, delete-table.mysql.dao.ts,
create-table.mysql.dao.ts) wherever identifiers are interpolated into SQL.

In `@packages/server/src/dao/mysql/delete-records.mysql.dao.ts`:
- Around line 76-81: The queries build SQL by directly interpolating identifiers
(e.g., constraint.referencingTable, constraint.referencingColumn) into
pool.execute calls (see the SELECT ... IN (${placeholders}) and the other
similar statements), which risks SQL injection; implement and call a shared
identifier-validation helper (or whitelist/strict regex) to validate and
sanitize table and column names before interpolation, replace direct usage of
constraint.referencingTable/constraint.referencingColumn with the validated
identifier values, and keep using parameterized placeholders for
pkValues/placeholders; apply this change consistently for all occurrences in
delete-records.mysql.dao.ts where identifiers are interpolated (the SELECTs and
related DELETEs).
- Around line 164-198: The recursive function deleteRelatedRecursively can loop
on circular FK chains; modify it to accept and propagate a visited set (e.g.,
Set<string> visited) or similar key (table or table+column) and check
visited.has(key) before exploring nestedFks, adding the key to visited when
entering the function; update recursive call sites to pass the same visited set
and ensure deletedTables is still used only for recording deletions, not cycle
detection (use visited for preventing revisits of the same table/column pair).
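The visited-set pattern can be sketched as below; the graph shape and function name are illustrative stand-ins for the DAO's FK traversal, not its actual signature:

```typescript
// Cycle-safe recursion over a foreign-key graph: `visited` prevents
// revisiting a table on circular FK chains, independent of any record
// of what was actually deleted.
type FkGraph = Record<string, string[]>; // table -> referencing tables

export function collectRelatedTables(
  graph: FkGraph,
  table: string,
  visited: Set<string> = new Set(),
): string[] {
  visited.add(table);
  const related: string[] = [];
  for (const child of graph[table] ?? []) {
    if (visited.has(child)) continue; // FK cycle: already explored, skip
    related.push(child, ...collectRelatedTables(graph, child, visited));
  }
  return related;
}
```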

In `@packages/server/src/dao/mysql/table-list.mysql.dao.ts`:
- Around line 21-25: The current code throws an HTTPException(500) when the
query result 'tables' is empty, but an empty DB is valid; instead modify the
logic in table-list.mysql.dao (the block checking if (!tables[0])) to return an
empty array (e.g., return []) rather than throwing HTTPException; remove the
HTTPException import/use for this case so consumers receive [] from the function
that queries tables.

In `@packages/server/src/dao/mysql/table-schema.mysql.dao.ts`:
- Around line 29-31: The query interpolates tableName directly into the SHOW
CREATE TABLE statement (see pool.execute and the `SHOW CREATE TABLE
\`${tableName}\`` usage), creating an SQL injection risk; sanitize the
identifier first by escaping any backticks in tableName (replace each ` with ``
per MySQL identifier escaping) or strip embedded backticks, then use the
sanitized variable in the interpolated query; you can add a small helper (e.g.,
sanitizeTableName) or inline the replacement right before calling pool.execute
to ensure only a safely escaped identifier is used.

In `@packages/server/src/dao/mysql/tables-data.mysql.dao.ts`:
- Around line 136-148: The COUNT and SELECT queries interpolate tableName
directly (used in pool.execute calls for the COUNT query and the data query with
filterWhereClause and combinedWhereClause/effectiveSortClause), creating an SQL
injection risk; fix by validating or escaping the identifier before use — either
validate tableName against a safe identifier whitelist/regex (as other MySQL
DAOs do) or use the driver’s identifier placeholder/escape (e.g., mysql2’s ?? /
escapeId) to inject the table name safely, and replace direct interpolation in
both the COUNT (`SELECT COUNT(*) as total FROM \`${tableName}\``) and data query
(`SELECT * FROM \`${tableName}\``) with the validated/escaped identifier.

In `@packages/server/src/dao/mysql/update-records.mysql.dao.ts`:
- Around line 55-73: Add strict identifier validation before interpolating names
into SQL: implement and call a validator (e.g., validateIdentifier) for
tableName, primaryKey and every u.columnName used when building
setClauses/rowUpdates; the validator should reject backticks, quotes and
SQL-special characters, enforce a safe regex like /^[A-Za-z0-9_-]+$/ and a
reasonable max length, and throw/fail early when invalid. Update the code paths
that reference tableName, primaryKey, and u.columnName (the block that builds
setClauses, values, and the query) to call this validator and abort with a clear
error instead of constructing the query when validation fails. Ensure
booleanColumns usage remains the same after validation.

In `@packages/server/src/db-manager.ts`:
- Around line 117-126: The code currently logs and returns raw connectionString
(seen in the pool.on error handler, console.log after
this.pgPools.set(connectionString, pool), and any methods that return pool
identifiers), which can leak credentials; update all uses of connectionString
(including in pool.on error logging, the "Created PostgreSQL connection pool"
console.log, and any return values) to first redact credentials by parsing the
connectionString (remove or replace user:password and any query params
containing secrets) or replace it with a safe pool identifier (e.g., sanitized
DSN or generated ID), and ensure this sanitized value is what gets logged,
stored in this.pgPools keys/metadata exposed to callers, and returned by any
getter methods.
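A sketch of one redaction approach, using the WHATWG `URL` parser available in Node and Bun (the helper name is illustrative):

```typescript
// Redact credentials from a connection string before it is logged or
// returned to callers; falls back to a placeholder if parsing fails,
// so the raw string is never echoed back.
export function redactConnectionString(connectionString: string): string {
  try {
    const url = new URL(connectionString);
    if (url.username) url.username = "***";
    if (url.password) url.password = "***";
    return url.toString();
  } catch {
    return "<unparseable connection string>";
  }
}
```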
- Around line 217-231: The closeAll() implementation in db-manager.ts currently
uses Promise.all over pgClosePromises and mysqlClosePromises so a single
pool.end() rejection can abort overall shutdown and prevent clearing
this.pgPools and this.mysqlPools; change it to use Promise.allSettled (or
individually catch errors for each async map callback) when awaiting the close
promises, log any failures with the associated connectionString (from the
pgPools/mysqlPools map entries) and ensure that regardless of individual errors
the method proceeds to clear this.pgPools and this.mysqlPools and returns/throws
a summarized error if you need to surface failures.
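A sketch of that shutdown shape with `Promise.allSettled`; the `Closable` type and return value are simplifications of the manager's actual pool types:

```typescript
// Every pool gets a close attempt, failures are collected instead of
// aborting shutdown, and the map is always cleared afterwards.
type Closable = { end: () => Promise<void> };

export async function closeAll(pools: Map<string, Closable>): Promise<string[]> {
  const entries = [...pools.entries()];
  const results = await Promise.allSettled(entries.map(([, pool]) => pool.end()));
  const failures = results.flatMap((result, i) =>
    result.status === "rejected" ? [entries[i][0]] : [],
  );
  pools.clear(); // cleared even when some pool.end() rejected
  return failures; // caller decides whether to log or rethrow these
}
```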

In `@packages/server/src/utils/build-clauses-mysql.ts`:
- Around line 20-21: The code is vulnerable because raw identifier interpolation
like `\`${filter.columnName}\`` allows backtick injection; add a single helper
function (e.g., escapeIdentifier) that takes an identifier string and returns it
with any backticks escaped by doubling (replace ` with ``), then wrap the
escaped result in backticks when building SQL. Replace all direct interpolations
(the local columnName assignment and other places flagged—uses around the
column/table/alias constructions) to use backticks around escapeIdentifier(...)
instead of raw values so identifiers are safely quoted everywhere in
build-clauses-mysql.ts.
- Around line 59-60: The switch handling filter operators in
build-clauses-mysql.ts currently has a default: break; which silently ignores
unknown operators; change that to fail closed by throwing a clear error (e.g.,
throw new Error(`Unsupported operator "${op}" for field "${field}"`)) from the
default branch in the function that builds clauses (the switch that inspects the
operator variable inside buildClauses / buildClausesForMySQL). Ensure the thrown
error includes the operator and field names to aid debugging and adjust any
callers/tests to expect an exception for unsupported operators.
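A minimal fail-closed sketch; the operator names are illustrative, not the module's actual operator set:

```typescript
// Fail-closed operator dispatch: an unknown operator throws instead of
// being silently dropped from the WHERE clause.
export function operatorToSql(op: string, column: string): string {
  switch (op) {
    case "eq":
      return `${column} = ?`;
    case "neq":
      return `${column} <> ?`;
    case "like":
      return `${column} LIKE ?`;
    default:
      throw new Error(`Unsupported operator "${op}" for field "${column}"`);
  }
}
```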

In `@packages/shared/src/constants/column-types-pgsql.ts`:
- Around line 42-43: The exported PostgreSQL column types list in
column-types-pgsql.ts currently includes the invalid standalone entries "array"
and "enum"; remove these two entries from that list (or filter them out before
presenting options) so field.columnType can never be set to "array" or "enum"
because create-table.dao.ts substitutes field.columnType verbatim into SQL (the
`"${field.columnName}" ${field.columnType}` usage), which produces invalid DDL;
alternatively, implement UI/server-side validation in the selector code to
prevent selecting "array" without an element type and to block unnamed enums.

---

Minor comments:
In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`:
- Around line 311-314: The mm, dd, hh, and min assignments redundantly call
.padStart(2, "0") after using the helper pad() which already applies padding;
update the lines that set mm, dd, hh, and min to call pad(...) alone (e.g.,
replace pad(...).padStart(2, "0") with pad(...)) and leave the pad() helper
implementation as-is to avoid double-padding (refer to the variables mm, dd, hh,
min and the pad() helper).

In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts`:
- Around line 91-95: The fallback for MySQL connection port uses
parseDatabaseUrl() (urlDefaults.port) which can return PostgreSQL's 5432; update
the port fallback logic in database-list.mysql.dao.ts (where host/port are
constructed) to prefer info.port, then urlDefaults.port only if it's a
MySQL-appropriate value, and finally use 3306 as the ultimate default; i.e.,
change the port expression around Number(info.port || urlDefaults.port) to
coercively fall back to 3306 when urlDefaults.port is missing or equals 5432 so
MySQL connections never default to Postgres port.
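The fallback chain can be sketched as follows (function name and parameter shapes are illustrative):

```typescript
// Prefer the explicit port, accept a URL-derived default only if it is
// not Postgres's 5432, and fall back to MySQL's 3306 otherwise.
export function resolveMysqlPort(
  infoPort?: string | number,
  urlDefaultPort?: number,
): number {
  const explicit = Number(infoPort);
  if (explicit) return explicit;
  if (urlDefaultPort && urlDefaultPort !== 5432) return urlDefaultPort;
  return 3306;
}
```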

In `@packages/server/src/dao/mysql/table-list.mysql.dao.ts`:
- Around line 29-31: The COUNT query interpolates table.tableName directly (in
the pool.execute call that assigns countRows), which risks injection; sanitize
the identifier by escaping any backticks in table.tableName (e.g., replace `
with ``), then wrap the result in backticks and use that safe identifier in the
template literal for the SELECT COUNT(*) query so pool.execute uses the escaped
table name.

In `@packages/server/src/dao/mysql/tables-data.mysql.dao.ts`:
- Line 114: The ternary in the call to buildSortClauseMysql is redundant
(Array.isArray(sort) ? sort : sort) — update the invocation in the
tables-data.mysql.dao.ts code (where sortClause is computed) to pass sort
directly to buildSortClauseMysql (i.e., replace the ternary with just sort) so
the call becomes buildSortClauseMysql(sort, order); ensure you update the
reference that computes sortClause accordingly.

In `@packages/server/src/routes/records.routes.ts`:
- Line 160: Remove the debug console.log call: delete the statement
console.log("result", result) from records.routes.ts (where it appears in the
request handler) so no debug prints reach production; if you need persisted
logging instead, replace it with the project's logger (e.g.,
processLogger.error/info or the existing request logger) but otherwise simply
remove the console.log line.

In `@packages/server/tests/routes/mysql/tables.routes.mysql.test.ts`:
- Around line 910-919: The test "should handle table names with underscores"
stores the HTTP response in the variable res but doesn't assert the HTTP status;
update the test (in
packages/server/tests/routes/mysql/tables.routes.mysql.test.ts) to assert the
response status on res (e.g., expect(res.status).toBe(200) or appropriate code)
after calling app.request("/mysql/tables/user_profiles/columns?db=testdb") and
before verifying mysqlTableColumnsDao.getTableColumns was called to ensure
route-level success is validated.

In `@scripts/init-db-mysql.sh`:
- Line 29: The script currently sets MYSQL_ALLOW_EMPTY_PASSWORD=yes which
creates a passwordless root user and grants remote access; update the container
setup to avoid this by either replacing MYSQL_ALLOW_EMPTY_PASSWORD with a secure
default env var (e.g., MYSQL_ROOT_PASSWORD=<dev-password>) or add a prominent
warning comment above the MYSQL_ALLOW_EMPTY_PASSWORD line indicating this is
insecure for anything but isolated local development and must not be used in
production, and/or ensure DB_HOST is bound to localhost only; reference the
MYSQL_ALLOW_EMPTY_PASSWORD and DB_HOST env settings in your change so reviewers
can find and verify the fix.

---

Nitpick comments:
In @.github/workflows/check.yml:
- Around line 29-44: Rename the GitHub Actions job currently named "test-server"
to a name that reflects it runs the full test suite (e.g., "test" or
"run-tests") so it isn't misleading; update the job key where "test-server"
appears and ensure any references to that job key (if any) remain
consistent—this job executes "bun run test" so choose a broader identifier like
"test" or "ci-tests" and replace "test-server" accordingly.

In @.husky/pre-commit:
- Around line 1-3: The pre-commit hook currently runs the full test suite via
the command "bun run test", which slows commits; remove or replace that line so
pre-commit doesn't execute the entire suite: either delete the "bun run test"
entry, replace it with a faster check (e.g., a changed-files-aware test runner
flag like "bun run test --changed" or a lint/quick-smoke script), or move full
test execution to CI and keep only "bun run check" and "bun run build" in the
hook; ensure the modified hook still allows bypassing with --no-verify for quick
commits.

In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`:
- Around line 189-210: Extract the repeated tooltip-wrapped action button into a
reusable component (e.g., ActionButton) to remove duplication seen in the four
places using TooltipProvider/Tooltip/TooltipTrigger/TooltipContent and a button
with className and onClick calling field.onChange; implement ActionButton props
(ariaLabel, tooltip, onClick, icon, optional className) and replace each inline
block (the instances around the Clock button and the three similar blocks) with
<ActionButton ariaLabel="Set to now" tooltip="Set to now" onClick={() =>
field.onChange(...)} icon={<Clock .../>} /> so the behavior and accessibility
(aria-label, tooltip) and styling are preserved while centralizing the markup
and classes.

In `@packages/server/src/app.types.ts`:
- Around line 29-33: RouteEnv duplicates the Variables shape (dbType) that also
exists on AppType; extract a single shared Variables type (e.g., export type
Variables = { dbType: DatabaseTypeSchema }) and have both RouteEnv and AppType
reference that shared Variables type instead of redefining dbType, updating the
RouteEnv declaration (RouteEnv = { Variables: Variables }) and any AppType
definition to use the same exported Variables symbol so the schema cannot drift.

In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts`:
- Around line 37-62: The current loop issues a separate INSERT per record;
change bulk-insert-records.mysql.dao.ts to build a single multi-row INSERT for
batches: for the variables records, columns, booleanColumns, tableName,
columnNames, connection and ResultSetHeader, convert boolean string values the
same way but flatten values into a single array and build placeholders like "(?,
?, ...), (?, ?, ...)" for N rows, then call
connection.execute<ResultSetHeader>(insertSQL, flattenedValues) once per batch;
also implement chunking (e.g., max rows per query) to avoid oversized SQL,
increment successCount by the number of rows inserted on success, and keep the
existing HTTPException error handling.
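The placeholder/flattening step can be sketched as a pure helper (names mirror the DAO's variables but the function itself is illustrative); each returned batch would be passed to one `connection.execute` call:

```typescript
// Build chunked multi-row INSERT statements: one "(?, ?, ...)" group per
// row, all values flattened into a single parameter array per batch.
export function buildBatchInserts(
  tableName: string,
  columns: string[],
  rows: unknown[][],
  maxRowsPerQuery = 500,
): { sql: string; values: unknown[] }[] {
  const rowPlaceholder = "(" + columns.map(() => "?").join(", ") + ")";
  const columnList = columns.map((c) => "`" + c + "`").join(", ");
  const batches: { sql: string; values: unknown[] }[] = [];
  for (let i = 0; i < rows.length; i += maxRowsPerQuery) {
    const chunk = rows.slice(i, i + maxRowsPerQuery);
    batches.push({
      sql:
        "INSERT INTO `" + tableName + "` (" + columnList + ") VALUES " +
        chunk.map(() => rowPlaceholder).join(", "),
      values: chunk.flat(), // matches the placeholder order row by row
    });
  }
  return batches;
}
```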

In `@packages/server/src/dao/mysql/create-table.mysql.dao.ts`:
- Around line 112-115: The current type mapping maps "timestamp with time zone"
and timestamptz to "DATETIME" which loses timezone info; update the mapping in
the table type map (the object that currently contains the keys timestamp,
"timestamp without time zone", "timestamp with time zone", timestamptz) so that
"timestamp with time zone" and timestamptz map to MySQL "TIMESTAMP" (leave plain
timestamp or "timestamp without time zone" as "DATETIME" if you want
timezone-naive behavior) and add a short inline comment documenting the
behavioral difference for migrations.

In `@packages/server/src/dao/mysql/delete-table.mysql.dao.ts`:
- Around line 58-61: The code in the fkConstraints loop interpolates
constraint.referencingTable directly into SQL; update the loop in
delete-table.mysql.dao.ts (where fkConstraints is iterated) to avoid raw
interpolation by either validating the table name against a strict
whitelist/regex (e.g. allow only alphanumerics, underscore, dollar, and dot if
needed) and then wrapping it in backticks, or use the DB library's
identifier-escaping helper (e.g. connection.escapeId / pool.format with ??) to
produce a safely escaped identifier; apply the same treatment to any other
places that build queries from information_schema values.

In `@packages/server/src/dao/mysql/export-table.mysql.dao.ts`:
- Around line 16-20: The current block in export-table.mysql.dao.ts throws
HTTPException(404) for an empty result set; change this so that when the query
returns rows === [] you return an empty result (e.g., empty array or result
payload) instead of a 404, and only throw HTTPException(404) for a genuine
“table not found” error by detecting MySQL’s no-such-table condition (catch the
SQL error code like ER_NO_SUCH_TABLE or query information_schema.tables for
tableName); update the logic around the rows variable and tableName handling so
HTTPException is reserved for the actual-not-found case while empty tables
return an empty response.

In `@packages/server/src/db-manager.ts`:
- Line 128: The current return expression in the method (which uses
this.pgPools.get(connectionString) ?? new Pool({ connectionString })) can
allocate an unmanaged Pool; change it so that if a Pool for connectionString
doesn't exist you create a new Pool, register the shared error handler (the same
handler used for pooled instances), store it in this.pgPools under
connectionString, and then return that stored Pool; reference pgPools,
connectionString, and Pool so you attach the error listener and persist the
instance instead of returning a transient Pool.

In `@packages/server/tests/routes/query.routes.test.ts`:
- Around line 627-646: Replace the dynamic import and local mocking in the test
with the top-level mocked reference: remove the await import(...) and instead
get the already-mocked executeQuery from the shared mock (use the top-level
mocked module reference) and call vi.mocked(executeQuery).mockResolvedValue(...)
so the test uses the same mocked executeQuery as other tests; specifically
update the local mysqlExecuteQuery usage to reference the module-level
executeQuery mock and apply mockResolvedValue for
columns/rows/rowCount/duration.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0831490 and 1a73d60.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (46)
  • .github/workflows/check.yml
  • .husky/pre-commit
  • package.json
  • packages/core/src/components/add-table/add-record/add-record-field.tsx
  • packages/server/package.json
  • packages/server/src/app.types.ts
  • packages/server/src/cmd/get-db-url.ts
  • packages/server/src/dao/mysql/add-record.mysql.dao.ts
  • packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts
  • packages/server/src/dao/mysql/create-table.mysql.dao.ts
  • packages/server/src/dao/mysql/database-list.mysql.dao.ts
  • packages/server/src/dao/mysql/delete-column.mysql.dao.ts
  • packages/server/src/dao/mysql/delete-records.mysql.dao.ts
  • packages/server/src/dao/mysql/delete-table.mysql.dao.ts
  • packages/server/src/dao/mysql/export-table.mysql.dao.ts
  • packages/server/src/dao/mysql/query.mysql.dao.ts
  • packages/server/src/dao/mysql/table-columns.mysql.dao.ts
  • packages/server/src/dao/mysql/table-list.mysql.dao.ts
  • packages/server/src/dao/mysql/table-schema.mysql.dao.ts
  • packages/server/src/dao/mysql/tables-data.mysql.dao.ts
  • packages/server/src/dao/mysql/update-records.mysql.dao.ts
  • packages/server/src/db-manager.ts
  • packages/server/src/index.ts
  • packages/server/src/middlewares/error-handler.ts
  • packages/server/src/routes/databases.routes.ts
  • packages/server/src/routes/query.routes.ts
  • packages/server/src/routes/records.routes.ts
  • packages/server/src/routes/tables.routes.ts
  • packages/server/src/utils/build-clauses-mysql.ts
  • packages/server/tests/routes/databases.routes.test.ts
  • packages/server/tests/routes/mysql/databases.routes.mysql.test.ts
  • packages/server/tests/routes/mysql/query.routes.mysql.test.ts
  • packages/server/tests/routes/mysql/records.routes.mysql.test.ts
  • packages/server/tests/routes/mysql/tables.routes.mysql.test.ts
  • packages/server/tests/routes/query.routes.test.ts
  • packages/server/tests/routes/records.routes.test.ts
  • packages/server/tests/routes/tables.routes.test.ts
  • packages/server/tests/utils/create-server.test.ts
  • packages/shared/src/constants/column-types-mysql.ts
  • packages/shared/src/constants/column-types-pgsql.ts
  • packages/shared/src/constants/index.ts
  • packages/shared/src/types/column-info.types.ts
  • packages/shared/src/types/column.type.ts
  • packages/shared/src/types/database.types.ts
  • scripts/init-db-mysql.sh
  • scripts/init-db-pgsql.sh
💤 Files with no reviewable changes (1)
  • packages/server/src/index.ts

Comment on lines 282 to 340
if (dataTypeLabel === "timestamp") {
const hasTimestampDefault =
columnDefault?.toLowerCase().includes("current_timestamp") ||
columnDefault?.toLowerCase().includes("now()") ||
columnDefault?.toLowerCase().includes("localtimestamp");

if (hasTimestampDefault) {
return (
<div className="flex">
<DatePicker
value={
field.value ? new Date(field.value) : field.value === "" ? null : undefined
}
onChange={(date) => field.onChange(date ? date.toISOString() : "")}
showTime={true}
isFormatted={false}
placeholder={columnDefault ?? "Select a date and time"}
className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
/>
<TooltipProvider delayDuration={0}>
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label="Set to now"
className="inline-flex w-7 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50"
type="button"
onClick={() => {
const now = new Date();
const yyyy = now.getFullYear();
const mm = pad(now.getMonth() + 1).padStart(2, "0");
const dd = pad(now.getDate()).padStart(2, "0");
const hh = pad(now.getHours()).padStart(2, "0");
const min = pad(now.getMinutes()).padStart(2, "0");
field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
}}
>
<Clock
aria-hidden="true"
className="size-4"
size={16}
/>
</button>
</TooltipTrigger>
<TooltipContent className="px-2 py-1 text-xs">Set to now</TooltipContent>
</Tooltip>
</TooltipProvider>
</div>
);
}

return (
<div className="flex">
<DatePicker
value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
onChange={(date) => field.onChange(date ? date.toISOString() : "")}
showTime={true}
isFormatted={false}
placeholder={columnDefault ?? "Select a date and time"}
className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
/>
<TooltipProvider delayDuration={0}>
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label="Set to now"
className="inline-flex w-7 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50"
type="button"
onClick={() => {
const now = new Date();
const yyyy = now.getFullYear();
const mm = pad(now.getMonth() + 1).padStart(2, "0");
const dd = pad(now.getDate()).padStart(2, "0");
const hh = pad(now.getHours()).padStart(2, "0");
const min = pad(now.getMinutes()).padStart(2, "0");
field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
}}
>
<Clock
aria-hidden="true"
className="size-4"
size={16}
/>
</button>
</TooltipTrigger>
<TooltipContent className="px-2 py-1 text-xs">Set to now</TooltipContent>
</Tooltip>
</TooltipProvider>
</div>
<DatePicker
value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
onChange={(date) => field.onChange(date ? date.toISOString() : "")}
showTime={true}
isFormatted={false}
placeholder={columnDefault ?? "Select a date and time"}
/>
);
⚠️ Potential issue | 🟠 Major

Inconsistent time formatting between DatePicker and "Set to now" button.

The DatePicker's onChange (lines 295, 335) uses date.toISOString() which returns UTC time (e.g., "2024-01-15T10:30:00.000Z"), but the "Set to now" button (lines 309-315) formats local time without a timezone indicator (e.g., "2024-01-15T10:30").

For a timestamp (without timezone) column, this inconsistency means:

  • Selecting via DatePicker → UTC time stored
  • Clicking "Set to now" → Local time stored

This can cause time values to differ by the user's timezone offset.

🐛 Proposed fix to use consistent local time formatting
 if (dataTypeLabel === "timestamp") {
+    const formatLocalDateTime = (date: Date): string => {
+      const yyyy = date.getFullYear();
+      const mm = pad(date.getMonth() + 1);
+      const dd = pad(date.getDate());
+      const hh = pad(date.getHours());
+      const min = pad(date.getMinutes());
+      const ss = pad(date.getSeconds());
+      return `${yyyy}-${mm}-${dd}T${hh}:${min}:${ss}`;
+    };
+
     const hasTimestampDefault =
       columnDefault?.toLowerCase().includes("current_timestamp") ||
       columnDefault?.toLowerCase().includes("now()") ||
       columnDefault?.toLowerCase().includes("localtimestamp");

     if (hasTimestampDefault) {
       return (
         <div className="flex">
           <DatePicker
             value={
               field.value ? new Date(field.value) : field.value === "" ? null : undefined
             }
-            onChange={(date) => field.onChange(date ? date.toISOString() : "")}
+            onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
             showTime={true}
             isFormatted={false}
             placeholder={columnDefault ?? "Select a date and time"}
             className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
           />
           ...
                 onClick={() => {
-                  const now = new Date();
-                  const yyyy = now.getFullYear();
-                  const mm = pad(now.getMonth() + 1).padStart(2, "0");
-                  const dd = pad(now.getDate()).padStart(2, "0");
-                  const hh = pad(now.getHours()).padStart(2, "0");
-                  const min = pad(now.getMinutes()).padStart(2, "0");
-                  field.onChange(`${yyyy}-${mm}-${dd}T${hh}:${min}`);
+                  field.onChange(formatLocalDateTime(new Date()));
                 }}
           ...
         </div>
       );
     }

     return (
       <DatePicker
         value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
-        onChange={(date) => field.onChange(date ? date.toISOString() : "")}
+        onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
         showTime={true}
         isFormatted={false}
         placeholder={columnDefault ?? "Select a date and time"}
       />
     );
   }
Suggested change
if (dataTypeLabel === "timestamp") {
const formatLocalDateTime = (date: Date): string => {
const yyyy = date.getFullYear();
const mm = pad(date.getMonth() + 1);
const dd = pad(date.getDate());
const hh = pad(date.getHours());
const min = pad(date.getMinutes());
const ss = pad(date.getSeconds());
return `${yyyy}-${mm}-${dd}T${hh}:${min}:${ss}`;
};
const hasTimestampDefault =
columnDefault?.toLowerCase().includes("current_timestamp") ||
columnDefault?.toLowerCase().includes("now()") ||
columnDefault?.toLowerCase().includes("localtimestamp");
if (hasTimestampDefault) {
return (
<div className="flex">
<DatePicker
value={
field.value ? new Date(field.value) : field.value === "" ? null : undefined
}
onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
showTime={true}
isFormatted={false}
placeholder={columnDefault ?? "Select a date and time"}
className="-me-px flex-1 rounded-e-none shadow-none focus-visible:z-10"
/>
<TooltipProvider delayDuration={0}>
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label="Set to now"
className="inline-flex w-7 items-center justify-center rounded-e-md border border-input bg-background text-muted-foreground/80 text-sm outline-none transition-[color,box-shadow] hover:text-accent-foreground focus:z-10 focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50"
type="button"
onClick={() => {
field.onChange(formatLocalDateTime(new Date()));
}}
>
<Clock
aria-hidden="true"
className="size-4"
size={16}
/>
</button>
</TooltipTrigger>
<TooltipContent className="px-2 py-1 text-xs">Set to now</TooltipContent>
</Tooltip>
</TooltipProvider>
</div>
);
}
return (
<DatePicker
value={field.value ? new Date(field.value) : field.value === "" ? null : undefined}
onChange={(date) => field.onChange(date ? formatLocalDateTime(date) : "")}
showTime={true}
isFormatted={false}
placeholder={columnDefault ?? "Select a date and time"}
/>
);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/components/add-table/add-record/add-record-field.tsx`
around lines 282 - 340, The DatePicker onChange currently uses
date.toISOString() (in the DatePicker instances in add-record-field.tsx) which
stores UTC while the "Set to now" button formats local time; make both sides use
the same local timestamp format for timestamp columns: update the DatePicker
onChange handlers (the ones that call field.onChange(date ? date.toISOString() :
"")) to format the Date to a local string like
`${yyyy}-${mm}-${dd}T${hh}:${min}` (matching the "Set to now" button) so
field.onChange receives consistent local "YYYY-MM-DDTHH:MM" values; ensure you
reuse the same padding logic (pad function) and avoid including
timezone/seconds.
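The mismatch is easy to reproduce in isolation. A minimal sketch, assuming the `pad` helper and the `formatLocalDateTime` name from the suggestion above: formatting from local wall-clock components keeps the stored value on the user's calendar day, whereas `toISOString()` can shift it across an hour or day boundary in UTC.

```typescript
// Sketch of the local-time formatter proposed above (pad and
// formatLocalDateTime are assumed names, not the project's actual exports).
const pad = (n: number): string => String(n).padStart(2, "0");

function formatLocalDateTime(date: Date): string {
	const yyyy = date.getFullYear();
	const mm = pad(date.getMonth() + 1);
	const dd = pad(date.getDate());
	const hh = pad(date.getHours());
	const min = pad(date.getMinutes());
	const ss = pad(date.getSeconds());
	return `${yyyy}-${mm}-${dd}T${hh}:${min}:${ss}`;
}

// Built from local components, so the output is the same in every timezone.
const d = new Date(2026, 2, 1, 9, 5, 0); // March 1, 2026, 09:05:00 local
console.log(formatLocalDateTime(d)); // "2026-03-01T09:05:00"
```

Because both the picker's `onChange` and the "Set to now" button go through the same formatter, the field value stays in one consistent local format.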

Comment on lines +34 to +37
const query = `
INSERT INTO \`${tableName}\` (${columnNames})
VALUES (${placeholders})
`;

⚠️ Potential issue | 🟠 Major

SQL injection risk: tableName is directly interpolated.

The tableName parameter is directly interpolated into the SQL query without validation or parameterization. While MySQL doesn't support parameterized table names, you should validate that tableName contains only allowed characters (alphanumeric, underscores) to prevent injection attacks.

🛡️ Proposed fix to add table name validation
+const SAFE_IDENTIFIER_REGEX = /^[a-zA-Z_][a-zA-Z0-9_]*$/;
+
 export async function addRecord({
 	db,
 	params,
 }: {
 	db: DatabaseSchemaType["db"];
 	params: AddRecordSchemaType;
 }): Promise<{ insertedCount: number }> {
 	const { tableName, data } = params;
+
+	if (!SAFE_IDENTIFIER_REGEX.test(tableName)) {
+		throw new HTTPException(400, { message: "Invalid table name" });
+	}
+
 	const pool = getMysqlPool(db);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/add-record.mysql.dao.ts` around lines 34 - 37,
Validate the tableName used when building the INSERT query in
add-record.mysql.dao.ts to prevent SQL injection: before interpolating tableName
into the query string (where query is created using `INSERT INTO
\`${tableName}\`` alongside `columnNames` and `placeholders`), ensure tableName
matches a strict allowlist regex (e.g. only letters, numbers and underscores)
and throw or return an error if it fails validation; keep the rest of the query
construction unchanged and do not attempt to parameterize the table name.

Comment on lines +32 to +72
let successCount = 0;
const errors: Array<{ recordIndex: number; error: string }> = [];

await connection.beginTransaction();

for (let i = 0; i < records.length; i++) {
const record = records[i];
const values = columns.map((col) => {
const value = record[col];
if (booleanColumns.has(col) && typeof value === "string") {
return value === "true" ? 1 : 0;
}
return value;
});
const placeholders = columns.map(() => "?").join(", ");

const insertSQL = `
INSERT INTO \`${tableName}\` (${columnNames})
VALUES (${placeholders})
`;

try {
// biome-ignore lint/suspicious/noExplicitAny: mysql2 execute doesn't accept unknown[]
await connection.execute<ResultSetHeader>(insertSQL, values as any);
successCount++;
} catch (error) {
throw new HTTPException(500, {
message: `Failed: ${error instanceof Error ? error.message : String(error)}`,
});
}
}

await connection.commit();

const failureCount = errors.length;
return {
success: failureCount === 0,
message: `Bulk insert completed: ${successCount} records inserted${failureCount > 0 ? `, ${failureCount} failed` : ""}`,
successCount,
failureCount,
errors: errors.length > 0 ? errors : undefined,

🛠️ Refactor suggestion | 🟠 Major

Dead code: errors array is never populated.

The inner catch block (lines 57-61) throws an HTTPException immediately instead of collecting errors. This means errors is always empty, failureCount is always 0, and the error aggregation logic never executes. If partial failures should be supported, collect errors and continue; otherwise, remove the unused errors array.

♻️ Option A: Remove dead code if all-or-nothing is intended
 		let successCount = 0;
-		const errors: Array<{ recordIndex: number; error: string }> = [];

 		await connection.beginTransaction();

 		// ... loop ...
 
 		await connection.commit();

-		const failureCount = errors.length;
 		return {
-			success: failureCount === 0,
-			message: `Bulk insert completed: ${successCount} records inserted${failureCount > 0 ? `, ${failureCount} failed` : ""}`,
+			success: true,
+			message: `Bulk insert completed: ${successCount} records inserted`,
 			successCount,
-			failureCount,
-			errors: errors.length > 0 ? errors : undefined,
+			failureCount: 0,
 		};
Suggested change
let successCount = 0;
await connection.beginTransaction();
for (let i = 0; i < records.length; i++) {
const record = records[i];
const values = columns.map((col) => {
const value = record[col];
if (booleanColumns.has(col) && typeof value === "string") {
return value === "true" ? 1 : 0;
}
return value;
});
const placeholders = columns.map(() => "?").join(", ");
const insertSQL = `
INSERT INTO \`${tableName}\` (${columnNames})
VALUES (${placeholders})
`;
try {
// biome-ignore lint/suspicious/noExplicitAny: mysql2 execute doesn't accept unknown[]
await connection.execute<ResultSetHeader>(insertSQL, values as any);
successCount++;
} catch (error) {
throw new HTTPException(500, {
message: `Failed: ${error instanceof Error ? error.message : String(error)}`,
});
}
}
await connection.commit();
return {
success: true,
message: `Bulk insert completed: ${successCount} records inserted`,
successCount,
failureCount: 0,
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts` around lines
32 - 72, The errors array is never populated because the catch in the
bulk-insert loop immediately throws an HTTPException; decide whether bulk insert
should be all-or-nothing or allow partial failures. If all-or-nothing: remove
the errors array and related failureCount/errors return handling and keep
throwing the HTTPException from the catch (affecting the try/catch inside the
loop and the final return that references errors/successCount/failureCount). If
partial failures: change the catch inside the for-loop to push a descriptive
entry into errors (include recordIndex i and error.message from the caught
error), do not rethrow, continue the loop, and ensure you only call
connection.commit() if desired (or call rollback on fatal error), then compute
failureCount from errors.length and return errors in the response instead of
throwing HTTPException.
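If the partial-failure route is chosen, the loop shape looks roughly like this (`insertOne` stands in for `connection.execute` with the prepared SQL and values; the names are illustrative, not the DAO's actual API):

```typescript
// Sketch of "collect errors and continue" semantics for a bulk insert.
type InsertError = { recordIndex: number; error: string };

async function bulkInsert(
	records: unknown[],
	insertOne: (record: unknown) => Promise<void>,
): Promise<{ successCount: number; errors: InsertError[] }> {
	let successCount = 0;
	const errors: InsertError[] = [];
	for (let i = 0; i < records.length; i++) {
		try {
			await insertOne(records[i]);
			successCount++;
		} catch (error) {
			// Record the failure and keep going rather than rethrowing.
			errors.push({
				recordIndex: i,
				error: error instanceof Error ? error.message : String(error),
			});
		}
	}
	return { successCount, errors };
}
```

With this shape, `failureCount` is `errors.length` and the aggregated message in the return value is actually reachable; whether to still commit the transaction on partial failure remains a product decision.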

Comment on lines +48 to +51
const insertSQL = `
INSERT INTO \`${tableName}\` (${columnNames})
VALUES (${placeholders})
`;

⚠️ Potential issue | 🟠 Major

SQL injection risk: tableName is directly interpolated into INSERT statement.

Column names from records[0] and tableName are interpolated without escaping. Malicious column names or table names could inject SQL.

🛡️ Proposed fix: escape backticks in identifiers
+	const escapeIdentifier = (id: string) => id.replace(/`/g, "``");
+	const escapedTableName = escapeIdentifier(tableName);
+	const columnNames = columns.map((col) => `\`${escapeIdentifier(col)}\``).join(", ");
-	const columnNames = columns.map((col) => `\`${col}\``).join(", ");

 	// ... later in the loop ...
 	const insertSQL = `
-		INSERT INTO \`${tableName}\` (${columnNames})
+		INSERT INTO \`${escapedTableName}\` (${columnNames})
 		VALUES (${placeholders})
 	`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/bulk-insert-records.mysql.dao.ts` around lines
48 - 51, The INSERT string directly interpolates tableName and columnNames
(built from records[0]) and is vulnerable to SQL injection; add an
escapeIdentifier utility (e.g., escapeIdentifier(name): replace any backtick
with double backtick and wrap with backticks) and use it to sanitize tableName
and each column name before constructing columnNames and insertSQL (leave VALUES
placeholders and parameter binding as-is). Update where columnNames is built and
the insertSQL template so it uses escaped identifiers instead of raw
tableName/column names (symbols to change: insertSQL, tableName, columnNames,
placeholders, and the code that reads records[0]).
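The escaping rule itself is small: MySQL quotes identifiers with backticks, and a literal backtick inside an identifier is written as two backticks. A sketch of the helper suggested above:

```typescript
// Double any embedded backticks, then wrap the identifier in backticks.
function escapeMysqlIdentifier(identifier: string): string {
	return `\`${identifier.replace(/`/g, "``")}\``;
}

console.log(escapeMysqlIdentifier("users")); // -> `users`
console.log(escapeMysqlIdentifier("a`b"));   // -> `a``b`
```

Applied to both the table name and each column name before the `INSERT` string is built, a crafted identifier can no longer terminate the backtick quoting.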

Comment on lines +53 to +60
if (!rows[0]) {
throw new HTTPException(500, {
message: "No current database returned from server",
});
}

return (rows as Array<{ db: string }>)[0] as DatabaseSchemaType;
}

⚠️ Potential issue | 🟠 Major

Guard DATABASE() null before returning DatabaseSchemaType.
At Line 59, unchecked casting can return db: null, violating the declared schema contract.

🐛 Proposed fix
 export async function getCurrentDatabase(): Promise<DatabaseSchemaType> {
 	const pool = getMysqlPool();
 	const [rows] = await pool.execute<RowDataPacket[]>("SELECT DATABASE() AS db");

 	if (!rows[0]) {
 		throw new HTTPException(500, {
 			message: "No current database returned from server",
 		});
 	}
-
-	return (rows as Array<{ db: string }>)[0] as DatabaseSchemaType;
+	const db = (rows as Array<{ db: string | null }>)[0]?.db;
+	if (!db) {
+		throw new HTTPException(500, {
+			message: "Current database is null or empty",
+		});
+	}
+	return { db };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/dao/mysql/database-list.mysql.dao.ts` around lines 53 -
60, The code casts rows[0] to DatabaseSchemaType without checking that the
DATABASE() column isn't null; update the return path in the function that
queries the list of databases (the block using rows and returning
DatabaseSchemaType) to validate that rows[0].db is non-null/defined before
casting—if rows[0].db is null or undefined, throw an HTTPException(500, {
message: "DATABASE() returned null or no database name" }) (or similar) instead
of returning an object that violates DatabaseSchemaType; ensure you reference
the existing rows variable and DatabaseSchemaType guard before the final return.
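A minimal sketch of the guard, assuming `SELECT DATABASE() AS db` yields rows shaped like `{ db: string | null }` (the HTTPException is replaced with a plain Error to keep the example self-contained):

```typescript
// DATABASE() returns NULL when no schema is selected, so the row must be
// checked before it is narrowed to { db: string }.
function toDatabaseSchema(row: { db: string | null } | undefined): { db: string } {
	if (!row || !row.db) {
		throw new Error("No current database selected");
	}
	return { db: row.db };
}
```

This keeps the declared `DatabaseSchemaType` contract honest: callers either get a non-null database name or a clear 500-style error, never `db: null`.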

Comment on lines 117 to 126
pool.on("error", (err) => {
console.error(
`Unexpected error on database pool for "${connectionString}":`,
`Unexpected error on PostgreSQL pool for "${connectionString}":`,
err.message,
);
});

this.pools.set(connectionString, pool);
console.log(`Created connection pool for: ${connectionString}`);
this.pgPools.set(connectionString, pool);
console.log(`Created PostgreSQL connection pool for: ${connectionString}`);
}

⚠️ Potential issue | 🟠 Major

Redact connection strings before logging or returning pool identifiers.

Line 125, Line 160, Line 181, Line 193, Line 220, and Line 226 log full connection strings, and Line 238 returns them. If credentials are embedded, this leaks secrets.

🔐 Suggested patch
 class DatabaseManager {
+	private redactConnectionString(connectionString: string): string {
+		try {
+			const url = new URL(connectionString);
+			if (url.password) url.password = "***";
+			return url.toString();
+		} catch {
+			return "<invalid-connection-string>";
+		}
+	}
+
 	...
 			pool.on("error", (err) => {
 				console.error(
-					`Unexpected error on PostgreSQL pool for "${connectionString}":`,
+					`Unexpected error on PostgreSQL pool for "${this.redactConnectionString(connectionString)}":`,
 					err.message,
 				);
 			});

 			this.pgPools.set(connectionString, pool);
-			console.log(`Created PostgreSQL connection pool for: ${connectionString}`);
+			console.log(`Created PostgreSQL connection pool for: ${this.redactConnectionString(connectionString)}`);
 		}
 	...
-			console.log(`Created MySQL connection pool for: ${connectionString}`);
+			console.log(`Created MySQL connection pool for: ${this.redactConnectionString(connectionString)}`);
 		}
 	...
-			console.log(`Closed PostgreSQL connection pool for: ${connectionString}`);
+			console.log(`Closed PostgreSQL connection pool for: ${this.redactConnectionString(connectionString)}`);
 		}
 	...
-			console.log(`Closed MySQL connection pool for: ${connectionString}`);
+			console.log(`Closed MySQL connection pool for: ${this.redactConnectionString(connectionString)}`);
 		}
 	...
-				console.log(`Closed PostgreSQL pool for: ${connectionString}`);
+				console.log(`Closed PostgreSQL pool for: ${this.redactConnectionString(connectionString)}`);
 			},
 		);
 		const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
 			async ([connectionString, pool]) => {
 				await pool.end();
-				console.log(`Closed MySQL pool for: ${connectionString}`);
+				console.log(`Closed MySQL pool for: ${this.redactConnectionString(connectionString)}`);
 			},
 		);
 	...
 	getActivePools(): string[] {
-		return [...Array.from(this.pgPools.keys()), ...Array.from(this.mysqlPools.keys())];
+		return [...Array.from(this.pgPools.keys()), ...Array.from(this.mysqlPools.keys())].map(
+			(connectionString) => this.redactConnectionString(connectionString),
+		);
 	}
 }

Also applies to: 159-161, 181-182, 193-194, 217-227, 237-239

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/db-manager.ts` around lines 117 - 126, The code currently
logs and returns raw connectionString (seen in the pool.on error handler,
console.log after this.pgPools.set(connectionString, pool), and any methods that
return pool identifiers), which can leak credentials; update all uses of
connectionString (including in pool.on error logging, the "Created PostgreSQL
connection pool" console.log, and any return values) to first redact credentials
by parsing the connectionString (remove or replace user:password and any query
params containing secrets) or replace it with a safe pool identifier (e.g.,
sanitized DSN or generated ID), and ensure this sanitized value is what gets
logged, stored in this.pgPools keys/metadata exposed to callers, and returned by
any getter methods.
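A sketch of the redaction helper from the patch above, using the WHATWG `URL` parser available in Node; connection strings that do not parse as URLs fall back to a placeholder rather than being logged raw:

```typescript
// Mask the password component of a connection string before it is logged
// or returned to callers.
function redactConnectionString(connectionString: string): string {
	try {
		const url = new URL(connectionString);
		if (url.password) url.password = "***";
		return url.toString();
	} catch {
		// Not a parseable URL: never echo it back verbatim.
		return "<invalid-connection-string>";
	}
}

console.log(redactConnectionString("mysql://root:secret@localhost:3306/app"));
```

Routing every `console.log`, `console.error`, and `getActivePools()` value through this helper keeps embedded credentials out of logs and API responses.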

Comment on lines +217 to +231
const pgClosePromises = Array.from(this.pgPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
console.log(`Closed connection pool for: ${connectionString}`);
console.log(`Closed PostgreSQL pool for: ${connectionString}`);
},
);
await Promise.all(closePromises);
this.pools.clear();
const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
console.log(`Closed MySQL pool for: ${connectionString}`);
},
);
await Promise.all([...pgClosePromises, ...mysqlClosePromises]);
this.pgPools.clear();
this.mysqlPools.clear();

⚠️ Potential issue | 🟠 Major

Make closeAll() resilient to partial pool shutdown failures.

Line 229 uses Promise.all, so one pool.end() failure can prevent complete cleanup and skip map clearing.

🧯 Suggested patch
 	async closeAll(): Promise<void> {
 		const pgClosePromises = Array.from(this.pgPools.entries()).map(
 			async ([connectionString, pool]) => {
 				await pool.end();
 				console.log(`Closed PostgreSQL pool for: ${connectionString}`);
 			},
 		);
 		const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
 			async ([connectionString, pool]) => {
 				await pool.end();
 				console.log(`Closed MySQL pool for: ${connectionString}`);
 			},
 		);
-		await Promise.all([...pgClosePromises, ...mysqlClosePromises]);
-		this.pgPools.clear();
-		this.mysqlPools.clear();
+		const results = await Promise.allSettled([...pgClosePromises, ...mysqlClosePromises]);
+		this.pgPools.clear();
+		this.mysqlPools.clear();
+		const failures = results.filter((r) => r.status === "rejected");
+		if (failures.length > 0) {
+			throw new Error(`Failed to close ${failures.length} database pool(s).`);
+		}
 	}
Suggested change
const pgClosePromises = Array.from(this.pgPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
console.log(`Closed PostgreSQL pool for: ${connectionString}`);
},
);
const mysqlClosePromises = Array.from(this.mysqlPools.entries()).map(
async ([connectionString, pool]) => {
await pool.end();
console.log(`Closed MySQL pool for: ${connectionString}`);
},
);
const results = await Promise.allSettled([...pgClosePromises, ...mysqlClosePromises]);
this.pgPools.clear();
this.mysqlPools.clear();
const failures = results.filter((r) => r.status === "rejected");
if (failures.length > 0) {
throw new Error(`Failed to close ${failures.length} database pool(s).`);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/db-manager.ts` around lines 217 - 231, The closeAll()
implementation in db-manager.ts currently uses Promise.all over pgClosePromises
and mysqlClosePromises so a single pool.end() rejection can abort overall
shutdown and prevent clearing this.pgPools and this.mysqlPools; change it to use
Promise.allSettled (or individually catch errors for each async map callback)
when awaiting the close promises, log any failures with the associated
connectionString (from the pgPools/mysqlPools map entries) and ensure that
regardless of individual errors the method proceeds to clear this.pgPools and
this.mysqlPools and returns/throws a summarized error if you need to surface
failures.
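The `allSettled` shape can be reduced to a few lines (`closeAllPools` and its close-callback signature are illustrative, not the `DatabaseManager` API):

```typescript
// Attempt to close every pool; count failures instead of aborting on the
// first rejection, so cleanup always runs to completion.
async function closeAllPools(pools: Array<() => Promise<void>>): Promise<number> {
	const results = await Promise.allSettled(pools.map((close) => close()));
	return results.filter((r) => r.status === "rejected").length;
}
```

Because `Promise.allSettled` never rejects, the maps can be cleared unconditionally afterwards, and a summarized error can be thrown only when the failure count is non-zero.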

Comment on lines +20 to +21
const columnName = `\`${filter.columnName}\``;


⚠️ Potential issue | 🔴 Critical

Unsanitized identifier interpolation enables SQL injection.
Backtick-wrapping alone is insufficient. A crafted columnName containing backticks can break out of identifier context in all affected spots.

🔒 Proposed fix (escape identifiers centrally)
 import type { CursorData, FilterType, SortDirection, SortType } from "shared/types";
 
+function escapeMysqlIdentifier(identifier: string): string {
+	return `\`${identifier.replaceAll("`", "``")}\``;
+}
+
 /**
  * MySQL-specific WHERE clause builder.
@@
-		const columnName = `\`${filter.columnName}\``;
+		const columnName = escapeMysqlIdentifier(filter.columnName);
@@
-		const sortParts = sorts.map(
-			(sort) => `\`${sort.columnName}\` ${sort.direction.toUpperCase()}`,
-		);
+		const sortParts = sorts.map(
+			(sort) => `${escapeMysqlIdentifier(sort.columnName)} ${sort.direction.toUpperCase()}`,
+		);
@@
-		return `ORDER BY \`${sorts}\` ${order?.toUpperCase() || "ASC"}`;
+		return `ORDER BY ${escapeMysqlIdentifier(sorts)} ${order?.toUpperCase() || "ASC"}`;
@@
-		const columnList = sortColumns.map((col) => `\`${col}\``).join(", ");
+		const columnList = sortColumns.map((col) => escapeMysqlIdentifier(col)).join(", ");

Also applies to: 84-85, 90-90, 114-114

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/utils/build-clauses-mysql.ts` around lines 20 - 21, The
code is vulnerable because raw identifier interpolation like
`\`${filter.columnName}\`` allows backtick injection; add a single helper
function (e.g., escapeIdentifier) that takes an identifier string and returns it
with any backticks escaped by doubling (replace ` with ``), then wrap the
escaped result in backticks when building SQL. Replace all direct interpolations
(the local columnName assignment and other places flagged—uses around the
column/table/alias constructions) to use backticks around escapeIdentifier(...)
instead of raw values so identifiers are safely quoted everywhere in
build-clauses-mysql.ts.

Comment on lines +59 to +60
default:
break;

⚠️ Potential issue | 🟠 Major

Unsupported operators should fail closed, not silently skip filters.
At lines 59-60, silently dropping unknown operators can unexpectedly broaden result sets.

🛡️ Proposed fix
 			case "not like":
 			case "not ilike":
 				conditions.push(`${columnName} NOT LIKE ?`);
 				values.push(filter.value);
 				break;
 			default:
-				break;
+				throw new Error(`Unsupported filter operator: ${String(filter.operator)}`);
 		}
 	}
Suggested change
default:
throw new Error(`Unsupported filter operator: ${String(filter.operator)}`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/server/src/utils/build-clauses-mysql.ts` around lines 59 - 60, The
switch handling filter operators in build-clauses-mysql.ts currently has a
default: break; which silently ignores unknown operators; change that to fail
closed by throwing a clear error (e.g., throw new Error(`Unsupported operator
"${op}" for field "${field}"`)) from the default branch in the function that
builds clauses (the switch that inspects the operator variable inside
buildClauses / buildClausesForMySQL). Ensure the thrown error includes the
operator and field names to aid debugging and adjust any callers/tests to expect
an exception for unsupported operators.
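A fail-closed switch looks like this in miniature (the operator names are illustrative, not the project's actual `FilterType` values):

```typescript
// Map a filter operator to its SQL form; anything unrecognized throws
// instead of being silently skipped, which would widen the result set.
function operatorToSql(operator: string): string {
	switch (operator) {
		case "eq": return "=";
		case "neq": return "<>";
		case "like": return "LIKE";
		case "not like": return "NOT LIKE";
		default:
			throw new Error(`Unsupported filter operator: ${operator}`);
	}
}
```

An error surfaced to the caller is debuggable; a filter that quietly disappears returns unfiltered rows and looks like correct behavior.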

Comment on lines +42 to +43
{ value: "array", label: "array" },
{ value: "enum", label: "enum" },

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

PostgreSQL CREATE TABLE column type declarations "array" "enum" valid DDL syntax

💡 Result:

Array column types in CREATE TABLE (valid PostgreSQL DDL)

PostgreSQL does not use a standalone type name like array; you declare an array by decorating an element type:

CREATE TABLE t (
  ints_1d     integer[],
  texts_2d    text[][],
  mood_list   mood[]      -- arrays also work for user-defined types (e.g., enums)
);

Array type declaration forms PostgreSQL accepts include:

  • element_type[] (one-dimensional)
  • element_type[][] (multi-dimensional)
  • Optionally with sizes (documentational; not enforced): integer[3][3] [1]
  • SQL-standard alternative (1-D only): integer ARRAY or integer ARRAY[4] [1]

Enum column types in CREATE TABLE (valid PostgreSQL DDL)

Enums must be defined as a type first, then used as a column type:

CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

CREATE TABLE person (
  name         text,
  current_mood mood
);

This is the canonical PostgreSQL enum DDL pattern (CREATE TYPE ... AS ENUM (...) then use the type name in CREATE TABLE) [2].

You can also make arrays of enums with mood[] (same array syntax as above) [1][2].

Sources: PostgreSQL docs on arrays [1] and enumerated types [2].

[1] https://www.postgresql.org/docs/current/arrays.html
[2] https://www.postgresql.org/docs/11/datatype-enum.html


🏁 Script executed:

# Search for how these column type constants are used in the codebase
rg -t ts "column-types-pgsql" --max-count=20

Repository: husamql3/db-studio

Length of output: 141


🏁 Script executed:

# Find the file and explore its context
find . -name "column-types-pgsql.ts" -type f



🏁 Script executed:

# Read the column-types-pgsql.ts file to see context
cat -n ./packages/shared/src/constants/column-types-pgsql.ts | head -60



🏁 Script executed:

# Search for usages of these constants, particularly in SQL builders or DDL generation
rg -t ts "array\"|\"enum\"" packages/shared/src/constants/column-types-pgsql.ts -A 2 -B 2



🏁 Script executed:

# Search for where these column type constants are imported and used
rg -t ts "column-types-pgsql|columnTypes" --max-count=30



🏁 Script executed:

# Search for imports and uses of PGSQL_COLUMN_TYPES across the codebase
rg -t ts "PGSQL_COLUMN_TYPES" --max-count=50



🏁 Script executed:

# Search for SQL generation or DDL builders that might use column type values
rg -t ts "CREATE TABLE|INSERT|ALTER TABLE" -l | head -20



🏁 Script executed:

# Look for files that might handle column type selection/processing
fd -t f -e ts "column|type|builder|query|sql" packages/ | grep -v node_modules | head -20



🏁 Script executed:

# Check the create-table DAO for PostgreSQL to see how column types are used in SQL generation
cat -n packages/server/src/dao/create-table.dao.ts | head -80

Repository: husamql3/db-studio

Length of output: 2530


🏁 Script executed:

# Also check if there's a PostgreSQL-specific create-table DAO
fd -t f "create-table.*pgsql" packages/server/src/dao/

Repository: husamql3/db-studio

Length of output: 44


🏁 Script executed:

# Search for how the column type value is actually used in SQL generation
rg -t ts "\.value" packages/server/src/dao/create-table.dao.ts -B 3 -A 3

Repository: husamql3/db-studio

Length of output: 44


🏁 Script executed:

# Search for where columnType gets set and if there's validation for "array" and "enum"
rg -t ts "columnType|isArray" packages/ --max-count=50 | head -60

Repository: husamql3/db-studio

Length of output: 6253


🏁 Script executed:

# Check if there's UI logic that handles "array" and "enum" specially
rg -t tsx "array|enum" packages/core/src/ -l | grep -E "form|field|column|type" | head -10

Repository: husamql3/db-studio

Length of output: 89


🏁 Script executed:

# Look at the type definition for FieldDataType to understand the structure
rg -t ts "type FieldDataType|interface FieldDataType" -A 15

Repository: husamql3/db-studio

Length of output: 2880


🏁 Script executed:

# Check if there's validation for "array" or "enum" column types
rg -t ts "array|enum" packages/server/src/dao/create-table.dao.ts -B 5 -A 5

Repository: husamql3/db-studio

Length of output: 335


🏁 Script executed:

# Look at how columnType is validated or processed before SQL generation
rg -t ts "mapColumnTypeToPostgres|validateColumnType|columnType" packages/server/src/dao/ -l

Repository: husamql3/db-studio

Length of output: 218


🏁 Script executed:

# Check if there's any preprocessing or validation in the create-table endpoint
rg -t ts "createTable" packages/server/src/routes/ -A 10 | head -60

Repository: husamql3/db-studio

Length of output: 3669


Remove "array" and "enum" from PostgreSQL column types—they produce invalid DDL.

At lines 42-43, array and enum are invalid standalone PostgreSQL column types in CREATE TABLE statements:

  • PostgreSQL requires array syntax as element_type[] (e.g., integer[], text[]), not bare array
  • Enum columns require a pre-created named enum type (e.g., CREATE TYPE mood AS ENUM (...) first), which this UI doesn't support

Since field.columnType is substituted directly into SQL (line 21 of create-table.dao.ts: "${field.columnName}" ${field.columnType}), selecting these values produces invalid SQL. The array case has a workaround (isArray flag appends []), but enum has no recovery path. Remove both entries or implement proper validation to prevent their selection.
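One way the suggested server-side guard could look, as a hedged sketch (the `Field` shape and `columnDDL` helper are illustrative, not the repository's actual `create-table.dao.ts` code): reject the invalid standalone types before interpolation, and expand the existing `isArray` flag into `element_type[]` syntax.

```typescript
// Hypothetical validation sketch: refuse column types that are not valid
// standalone PostgreSQL DDL, and expand the isArray flag to element_type[].
type Field = { columnName: string; columnType: string; isArray?: boolean };

const INVALID_STANDALONE = new Set(["array", "enum"]);

function columnDDL(field: Field): string {
  if (INVALID_STANDALONE.has(field.columnType)) {
    // A bare "array"/"enum" would be substituted verbatim into CREATE TABLE
    // and produce invalid SQL, so fail fast instead of emitting broken DDL.
    throw new Error(`"${field.columnType}" is not a standalone column type`);
  }
  const type = field.isArray ? `${field.columnType}[]` : field.columnType;
  return `"${field.columnName}" ${type}`;
}

console.log(columnDDL({ columnName: "tags", columnType: "text", isArray: true }));
// → "tags" text[]
```

A stricter variant could allow-list against the exported `PGSQL_COLUMN_TYPES` constants instead of deny-listing the two bad entries.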

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/shared/src/constants/column-types-pgsql.ts` around lines 42-43:
the exported PostgreSQL column types list in column-types-pgsql.ts currently
includes the invalid standalone entries "array" and "enum"; remove these two
entries from that list (or filter them out before presenting options) so
field.columnType can never be set to "array" or "enum" because
create-table.dao.ts substitutes field.columnType verbatim into SQL (the
`"${field.columnName}" ${field.columnType}` usage), which produces invalid DDL;
alternatively, implement UI/server-side validation in the selector code to
prevent selecting "array" without an element type and to block unnamed enums.

@husamql3 husamql3 merged commit 2eefe20 into stage Mar 1, 2026
1 check passed
@husamql3 husamql3 deleted the feat/123/support-mysql-database branch March 1, 2026 22:26
@coderabbitai coderabbitai bot mentioned this pull request Mar 3, 2026