
feat: add MongoDB support with CRUD operations #108

Open
Youssef-joe wants to merge 12 commits into stage from mongodb-support

Conversation

Collaborator

@Youssef-joe Youssef-joe commented Feb 9, 2026

(#107)

  • Implemented MongoDB query execution with executeMongoQuery.
  • Added functions for managing records in MongoDB (addMongoRecord, updateMongoRecords, deleteMongoRecords, etc.).
  • Created schema retrieval for MongoDB databases with getMongoDatabaseSchema.
  • Developed table management functions for MongoDB, including getMongoTablesList, createMongoCollection, and deleteMongoColumn.
  • Enhanced error handling to include MongoDB-specific errors.
  • Updated routes to support MongoDB operations alongside existing PostgreSQL functionality.
  • Modified database URL parsing to accommodate MongoDB connection strings.
  • Expanded system prompt generation to include MongoDB query guidelines.
  • Updated shared types to include MongoDB as a valid database type.
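
For context, the review below indicates the server parses the editor's query text as a JSON payload with `collection` and `operation` fields. A rough sketch of what such a payload could look like — every field name here is an assumption drawn from the review notes, not the PR's actual API:

```typescript
// Hypothetical payload shape for the MongoDB query path; field names
// (collection, operation, filter, sort, limit) are assumptions based on
// the review discussion of query.dao.ts, not the PR's real types.
interface MongoQueryPayload {
  collection: string;
  operation: "find" | "aggregate" | "insertOne" | "updateMany" | "deleteMany";
  filter?: Record<string, unknown>;
  sort?: Record<string, 1 | -1>;
  limit?: number;
}

// What the code editor might submit for a simple find:
const example: MongoQueryPayload = {
  collection: "users",
  operation: "find",
  filter: { status: "active" },
  sort: { createdAt: -1 },
  limit: 50,
};

// The editor sends this as a JSON string; the server parses it back.
const wire = JSON.stringify(example);
const parsed = JSON.parse(wire) as MongoQueryPayload;
```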

Summary by CodeRabbit

Release Notes

  • New Features

    • Added full MongoDB support: connect to MongoDB databases, execute queries, create collections, and manage documents
    • Code editor now intelligently adapts syntax mode based on database type (JSON for MongoDB, SQL for PostgreSQL)
    • Automatic database type detection from connection URLs
  • Improvements

    • Enhanced error handling for database connection failures

@Youssef-joe Youssef-joe requested a review from husamql3 as a code owner February 9, 2026 19:14
@Youssef-joe Youssef-joe changed the title feat: add MongoDB support with CRUD operations and schema management … feat: add MongoDB support with CRUD operations Feb 9, 2026
Collaborator Author

Mongo DAO Integration Flow-2026-02-09-191649

Owner

husamql3 commented Feb 9, 2026

@CodeRabbit review


coderabbitai bot commented Feb 9, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


coderabbitai bot commented Feb 9, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

This PR introduces MongoDB database support to the application, adding development orchestration scripts, a MongoDB client manager, data access objects for MongoDB operations (queries, records, tables, schema), frontend language/type awareness, database store synchronization, and conditional routing in server endpoints between PostgreSQL and MongoDB implementations.

Changes

Cohort / File(s): Summary

  • Development Infrastructure (dev.sh, kill-dev.sh): New Bash scripts for orchestrating local development. dev.sh starts multiple services (server, core, proxy, www) in parallel with cleanup on exit; kill-dev.sh terminates tracked processes and cleans up listening ports.
  • Frontend - Editor Language Support (packages/core/src/components/runnr-tab/cdoe-editor.tsx, packages/core/src/components/runnr-tab/runner-tab.tsx): CodeEditor now accepts a language prop ("pgsql" | "json") to support multiple query languages; RunnerTab passes language based on dbType (JSON for MongoDB, pgsql for others) and selects appropriate placeholder queries.
  • Frontend - Database Type Awareness (packages/core/src/components/sidebar/sidebar-search-tables.tsx, packages/core/src/hooks/use-databases-list.ts, packages/core/src/stores/database.store.ts, packages/core/src/utils/constants/placeholders.ts): Store now tracks dbType; hooks synchronize database selection between store and API; sidebar conditionally hides "Add Table" for MongoDB; new MongoDB placeholder query constant added.
  • Server - MongoDB Infrastructure (packages/server/src/mongo-manager.ts, packages/server/package.json): New MongoDB client lifecycle manager with a singleton pattern; handles connection, database selection, and ObjectId validation; mongodb ^6.19.0 dependency added.
  • Server - MongoDB Data Access Objects (packages/server/src/dao/mongo/database-list.dao.ts, packages/server/src/dao/mongo/query.dao.ts, packages/server/src/dao/mongo/records.dao.ts, packages/server/src/dao/mongo/schema.dao.ts, packages/server/src/dao/mongo/tables.dao.ts): Five new MongoDB DAOs implementing database/collection listing with metrics, query execution with filtering/sorting/pagination, CRUD record operations, schema inference with type detection, and table metadata with cursor-based pagination and sample data.
  • Server - Database Type Routing (packages/server/src/routes/databases.routes.ts, packages/server/src/routes/query.routes.ts, packages/server/src/routes/records.routes.ts, packages/server/src/routes/tables.routes.ts, packages/server/src/dao/table-details-schema.ts): Routes now check dbType and conditionally invoke MongoDB or PostgreSQL DAOs; database schema retrieval routes to the MongoDB schema builder for MongoDB databases; response shapes remain consistent across both paths.
  • Server - Configuration & Utilities (packages/server/src/db-manager.ts, packages/server/src/middlewares/error-handler.ts, packages/server/src/utils/parse-database-url.ts, packages/server/src/utils/system-prompt-generator.ts): Database type detection now recognizes mongodb:// URLs; the error handler treats MongoDB network errors as connection failures; default port calculation supports MongoDB (27017); the system prompt generator includes MongoDB-specific guidance and query examples.
  • Shared Types (packages/shared/src/types/database.types.ts): DATABASE_TYPES constant extended from ["pg"] to ["pg", "mongodb"].

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

  • 🎯 Auto PR: stage → main #56: Implements parallel multi-database feature expansion (database selection, listing/current/connection endpoints, database query parameter threading) and modifies the same frontend store/hooks and server routes/DAOs infrastructure.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 21.74%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The pull request title 'feat: add MongoDB support with CRUD operations' directly and clearly describes the main objective of the changeset: adding MongoDB support with comprehensive CRUD functionality throughout the codebase.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/server/src/db-manager.ts (2)

50-58: ⚠️ Potential issue | 🟡 Minor

Port defaults to 5432 (PostgreSQL) for MongoDB connections.

Line 54 falls back to port 5432 when the URL has no explicit port. For mongodb+srv:// URLs, there's typically no port in the URL (DNS SRV handles it), so this will incorrectly store 5432. While this config may not be used for MongoDB connections directly, it's misleading and could cause bugs if referenced.

Suggested fix
-			port: Number.parseInt(url.port, 10) || 5432,
+			port: Number.parseInt(url.port, 10) || (this.detectDbType(url) === "mongodb" ? 27017 : 5432),

127-133: ⚠️ Potential issue | 🟠 Major

Connection strings (potentially containing credentials) are logged to console.

Lines 127 and 133 log full connection strings which may contain passwords. Consider redacting credentials before logging.
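
One way to address this, sketched here as a driver-free helper (the name `redactDbUrl` and its placement are assumptions, not code from this PR):

```typescript
// Redact the password portion of a connection string before logging.
// Works for postgres://, mongodb://, and mongodb+srv:// URLs; falls back
// to a regex replacement for strings the URL parser rejects (e.g.
// multi-host mongodb:// strings).
function redactDbUrl(raw: string): string {
  try {
    const url = new URL(raw);
    if (url.password) url.password = "****";
    return url.toString();
  } catch {
    // Not URL-parseable: blank out everything between "://" and "@".
    return raw.replace(/:\/\/[^@/]+@/, "://****@");
  }
}

console.log(redactDbUrl("mongodb://admin:s3cret@localhost:27017/app"));
// → mongodb://admin:****@localhost:27017/app
```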

🤖 Fix all issues with AI agents
In `@kill-dev.sh`:
- Around line 32-34: Remove the stray leading spaces on the top-level tokens so
indentation is consistent: locate the lines containing the literal token done
and the line echo "[kill-dev] done" and left-align them (no leading space) to
match the rest of the script's top-level statements.

In `@packages/core/src/components/runnr-tab/cdoe-editor.tsx`:
- Around line 42-230: The effect registers providers with monaco but never
disposes them, causing duplicates; capture the IDisposables returned by
monaco.languages.registerDocumentFormattingEditProvider and
monaco.languages.registerCompletionItemProvider (e.g., formattingDisposable and
completionDisposable) when you register the providers around
provideDocumentFormattingEdits and provideCompletionItems, then return a cleanup
function from the effect that calls dispose() on each disposable (guarding for
existence) so providers are removed on effect teardown or when language changes.

In `@packages/core/src/hooks/use-databases-list.ts`:
- Line 55: The hook useCurrentDatabase destructures setDbType from
useDatabaseStore but never calls it, causing the store dbType to stay out of
sync with the API client; update useCurrentDatabase so that whenever
setApiDbType(...) is invoked (the same place where the API client's base URL is
changed), also call setDbType(...) with the same db type value; ensure you
reference/modify the block that currently calls setApiDbType and add a call to
setDbType there so components reading useDatabaseStore().dbType (e.g.,
SidebarSearchTables) receive the updated type.

In `@packages/server/package.json`:
- Line 60: Update the package metadata to reflect MongoDB support: change the
"description" field to mention MongoDB alongside PostgreSQL, add "mongodb" and
"mongo" to the "keywords" array, and either bump the "mongodb" dependency from
"mongodb": "^6.19.0" to the current stable "^7.1.0" if your code is compatible
or leave the version and add a TODO comment in package metadata noting the
intentional pin for compatibility; target the "description", "keywords", and the
"mongodb" dependency entries to make these changes.

In `@packages/server/src/dao/mongo/database-list.dao.ts`:
- Around line 54-62: The returned object's max_connections is using
serverStatus.connections.available (remaining slots) instead of the real max;
update the code that builds the return object (the block returning
host/port/user/database/version/active_connections/max_connections) to compute
max_connections as (serverStatus.connections?.current ?? 0) +
(serverStatus.connections?.available ?? 0) so the true maximum is shown; keep
active_connections as serverStatus.connections?.current ?? 0 and preserve use of
urlDefaults and getMongoDbName().

In `@packages/server/src/dao/mongo/query.dao.ts`:
- Around line 117-131: The current handlers for updateOne/updateMany set
rowCount = result.modifiedCount which hides cases where documents matched but
weren't modified; change the assignment to use result.matchedCount (e.g.,
rowCount = result.matchedCount ?? 0) and also surface the modified count by
adding a separate modifiedCount field or appending it to message so callers can
distinguish "matched but not modified" from "no match"; update the code paths
around updateOne/updateMany, rowCount, result.modifiedCount, result.matchedCount
and the response construction to include both matchedCount and modifiedCount.
- Around line 83-93: The call to buildMongoSortForQuery using an empty string as
the first arg is unclear and results in a default { _id: 1 }; change the call in
the "find" case to pass undefined (or null) for the field parameter so intent is
explicit: replace buildMongoSortForQuery("", undefined) with
buildMongoSortForQuery(undefined, undefined) (or
buildMongoSortForQuery(undefined)) and ensure the local variable sort (used in
cursor.sort(sort)) remains the same; reference payload.operation === "find", the
sort variable, and buildMongoSortForQuery when making the change.
- Around line 28-45: normalizeIdFilter currently only converts _id when it's a
plain string or inside a $in array; update it to recursively walk the filter
object (handle logical operators and nested objects) and coerce any string
values intended as ObjectId using toMongoId. Specifically, enhance
normalizeIdFilter to: 1) convert string _id values for operators $ne, $nin, $not
(and $in) to ObjectId; 2) map arrays under $in/$nin to toMongoId for string
elements; and 3) descend into nested logical operators ($and, $or, $nor, $not)
and apply the same conversions to any _id occurrences. Keep using the existing
toMongoId helper and ensure the function returns a new normalized filter object
without mutating the original.
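
The recursive walk described above might look like the following sketch, with the ObjectId coercion injected as a callback so the example runs without the mongodb driver (in the real DAO, `coerce` would be the existing `toMongoId` helper):

```typescript
type Coerce = (v: unknown) => unknown;

// Recursively normalize _id values in a Mongo filter: descend into
// logical operators, coerce string _id values (including inside
// $in/$nin/$ne/$eq/$not), and return a new object without mutating input.
function normalizeIdFilter(
  filter: Record<string, unknown>,
  coerce: Coerce,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(filter)) {
    if (["$and", "$or", "$nor"].includes(key) && Array.isArray(value)) {
      out[key] = value.map((sub) =>
        normalizeIdFilter(sub as Record<string, unknown>, coerce),
      );
    } else if (key === "_id") {
      out[key] = normalizeIdValue(value, coerce);
    } else {
      out[key] = value;
    }
  }
  return out;
}

function normalizeIdValue(value: unknown, coerce: Coerce): unknown {
  if (typeof value === "string") return coerce(value);
  if (value && typeof value === "object" && !Array.isArray(value)) {
    const out: Record<string, unknown> = {};
    for (const [op, v] of Object.entries(value as Record<string, unknown>)) {
      if (["$in", "$nin"].includes(op) && Array.isArray(v)) {
        out[op] = v.map((x) => (typeof x === "string" ? coerce(x) : x));
      } else if (["$ne", "$eq"].includes(op) && typeof v === "string") {
        out[op] = coerce(v);
      } else if (op === "$not") {
        out[op] = normalizeIdValue(v, coerce);
      } else {
        out[op] = v;
      }
    }
    return out;
  }
  return value;
}
```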

In `@packages/server/src/dao/mongo/schema.dao.ts`:
- Around line 17-27: The getMongoDatabaseSchema function accepts maxTables but
never uses it; modify getMongoDatabaseSchema to limit the collections processed
by applying the maxTables value (e.g., if options.maxTables is set, take only
the first maxTables entries from the collections list or call
listCollections().limit(maxTables) before toArray()) so you don't fetch/process
all collections in parallel; update the code path that iterates over the
collections (the collections variable and any downstream mapping that loads
sample data when includeSampleData is used) to use the truncated collection list
so maxTables correctly bounds work and memory usage.

In `@packages/server/src/dao/mongo/tables.dao.ts`:
- Around line 86-94: The parseValue function returns the original untrimmed raw
string on the fallback path which preserves whitespace inconsistently; update
parseValue (the function using trimmed, Number(), isValidObjectId and
coerceObjectId) to return trimmed instead of raw for the final fallback so plain
string filter values don't retain leading/trailing whitespace.
- Around line 64-81: The switch in mapDataTypeLabel has inconsistent indentation
for the "json" and "date" cases; update the indentation of those case lines and
their returns to match the other cases (same indentation level as "number",
"boolean", "array", "enum") so the entire switch in function mapDataTypeLabel
(returning ColumnInfoSchemaType["dataTypeLabel"]) is uniformly indented and
formatted.
- Around line 215-228: The code only marks columns nullable when a value is
explicitly null/undefined; update the logic after the documents loop to mark
columns as nullable if they are absent in any document: iterate over documents
and maintain a count or a Set of seen keys, then for each key in columnMap
(refer to columnMap, documents, and ensureColumn) set nullable=true when its
seen-count < documents.length (i.e., missing in some docs); this ensures
sparse/missing fields are flagged nullable in the generated metadata.
- Around line 83-148: Summary: Regex injection risk in buildMongoFilters where
parseValue(filter.value) is used directly for $regex in the "like"/"ilike"/"not
like"/"not ilike" branches. Fix: add an escape helper (e.g., escapeRegex) and
use it before creating any $regex condition inside buildMongoFilters; ensure the
value used for regex is coerced to a string (check typeof value === "string" or
use String(value) fallback) and escape regex metacharacters, then use the
escaped string as the $regex pattern and keep $options ("i" for ilike/not ilike,
"" for like/not like); update the cases for "like","not like","ilike","not
ilike" in buildMongoFilters and ensure parseValue remains unchanged for other
operators.
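
A minimal escape helper for the fix described above (the name `escapeRegex` is the review's suggestion; the exact call sites are inside buildMongoFilters):

```typescript
// Escape regex metacharacters so user-supplied LIKE/ILIKE values are
// matched literally when embedded in a $regex condition.
function escapeRegex(value: unknown): string {
  const s = typeof value === "string" ? value : String(value);
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// An "ilike" filter would then build its condition as:
const condition = { $regex: escapeRegex("50% off (sale)"), $options: "i" };
```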

In `@packages/server/src/mongo-manager.ts`:
- Around line 34-46: The getMongoClient function currently assigns the
module-level client before awaiting connect, so a failed client.connect() leaves
a broken client cached; change the logic to only set the module-level client
after a successful connection (e.g., create a local/temp MongoClient, await
temp.connect(), then assign temp to the module-level client variable), or catch
errors from client.connect() and ensure the module-level client remains
undefined/closed on failure; update references to client, getMongoClient, and
baseConfig accordingly.
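
The "connect before caching" fix can be sketched generically; here `Client` stands in for mongodb's MongoClient so the pattern is visible without the driver:

```typescript
// Lazy singleton that only caches a client after connect() succeeds.
// A failed connect leaves the cache empty, so the next call retries
// instead of reusing a broken client.
interface Client {
  connect(): Promise<void>;
}

let cached: Client | undefined;

async function getClient(make: () => Client): Promise<Client> {
  if (cached) return cached;
  const candidate = make(); // local/temp client, not yet cached
  await candidate.connect(); // may throw; cache stays undefined
  cached = candidate; // only a successfully connected client is cached
  return cached;
}
```

Note that concurrent first calls can still race and create two clients; caching the connect promise instead of the client instance closes that gap.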
🧹 Nitpick comments (23)
kill-dev.sh (1)

24-31: sed captures only the last pid= token per line.

If ss ever outputs multiple pid= entries on a single line (e.g., multiple sockets sharing a port), only the last PID is extracted due to greedy .*. Using grep -oP would be more robust.

Proposed fix
-  pids=$(ss -lptn "sport = :${port}" 2>/dev/null | sed -n 's/.*pid=\([0-9]\+\).*/\1/p')
+  pids=$(ss -lptn "sport = :${port}" 2>/dev/null | grep -oP 'pid=\K[0-9]+')
dev.sh (1)

14-29: Backgrounded subshell PIDs may not propagate signals to child processes.

$! captures the PID of the (cd … && cmd) & subshell, not the inner cmd. Killing the subshell with SIGTERM doesn't guarantee the child (e.g., npm run dev) is also terminated, potentially leaving orphaned processes.

This is partially mitigated by the port-based cleanup in kill-dev.sh, but you could make the cleanup more reliable by using exec in the subshell so the inner command replaces the subshell process:

Proposed fix
 run() {
   local name="$1"
   shift
   echo "[dev] starting ${name}..."
-  (cd "$ROOT_DIR" && "$@") &
+  (cd "$ROOT_DIR" && exec "$@") &
   echo $! >> /tmp/db-studio-dev.pids
 }
 
 run_in() {
   local name="$1"
   local dir="$2"
   shift 2
   echo "[dev] starting ${name}..."
-  (cd "$dir" && "$@") &
+  (cd "$dir" && exec "$@") &
   echo $! >> /tmp/db-studio-dev.pids
 }
packages/server/package.json (1)

5-5: Update description and keywords to reflect MongoDB support.

The description says "Modern database client for PostgreSQL…" and the keywords are PostgreSQL/MySQL/SQLite-centric. Now that MongoDB is a supported database, consider updating these to reflect the broader scope — this matters for discoverability (npm search, GitHub).

packages/server/src/middlewares/error-handler.ts (1)

34-35: Prefer checking e.name instead of e.message.includes() for MongoDB errors.

MongoDB driver errors have a name property (e.g., "MongoNetworkError", "MongoServerSelectionError"). Relying on e.message.includes() is fragile — the class name isn't guaranteed to appear in the message text.

Suggested improvement
 		const isConnectionError =
 			e.message.includes("ECONNREFUSED") ||
 			e.message.includes("connection refused") ||
 			e.message.includes("timeout expired") ||
 			e.message.includes("Connection terminated") ||
-			e.message.includes("MongoNetworkError") ||
-			e.message.includes("MongoServerSelectionError") ||
+			e.name === "MongoNetworkError" ||
+			e.name === "MongoServerSelectionError" ||
 			(e instanceof DatabaseError && e.code?.startsWith("08")); // Connection exception class
packages/core/src/hooks/use-databases-list.ts (1)

63-83: Duplicate selectedDatabase initialization — both in queryFn and the useEffect.

Lines 69-71 (inside queryFn) and lines 77-83 (in the useEffect) both set selectedDatabase when it's not already set. The useEffect is redundant since the queryFn already handles this. Having both paths makes the initialization logic harder to follow and could lead to unnecessary re-renders.

Consider removing the useEffect (lines 77-83) since the queryFn already handles it, or consolidate all side effects into the useEffect and keep the queryFn pure.

packages/core/src/components/runnr-tab/cdoe-editor.tsx (1)

324-332: Callback dependencies will cause excessive editor recreation.

onQueryChange, onUnsavedChanges, and onExecuteQuery are in the dependency array. Unless every caller wraps these in useCallback, any parent re-render will produce new function references, destroying and recreating the entire Monaco editor instance. This is a pre-existing issue, but adding language to the deps is fine — the editor should indeed be recreated when the language changes.

Consider using refs for the callback props to avoid tearing down the editor on every parent render, or ensure all callbacks passed to this component are memoized.

packages/server/src/mongo-manager.ts (2)

1-71: No graceful shutdown for the MongoClient.

There's no exported closeMongoClient() function. If the server shuts down (SIGTERM, hot reload in dev), the connection stays open until the process exits. Consider exporting a close helper for clean shutdown hooks.


62-64: Tighten isValidObjectId to accept only canonical 24-character hex ObjectId strings.

ObjectId.isValid() returns true for any 12-character string—including non-hex strings like "surveillance" (which is 12 bytes)—not just valid hex ObjectIds. Since this function is used as a type guard when coercing user-supplied IDs, enforce the canonical format:

return typeof value === "string" && /^[0-9a-fA-F]{24}$/.test(value);

Alternatively, validate and roundtrip: ObjectId.isValid(value) && new ObjectId(value).toHexString() === value.
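
As a sketch, the stricter guard is a one-liner with no driver dependency:

```typescript
// Accept only the canonical 24-character hex form of an ObjectId,
// rejecting the arbitrary 12-character strings that ObjectId.isValid()
// also accepts.
function isValidObjectIdHex(value: unknown): value is string {
  return typeof value === "string" && /^[0-9a-fA-F]{24}$/.test(value);
}
```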

packages/server/src/utils/system-prompt-generator.ts (1)

131-133: Minor: dbTypeLabel mapping is duplicated.

The same schema.dbType === "pg" ? "PostgreSQL" : schema.dbType logic exists at both Line 8 and Line 132. Consider extracting it into a small helper (e.g., getDbTypeLabel(dbType)) to keep them in sync.

packages/server/src/dao/mongo/records.dao.ts (3)

54-64: Update field values in $set are not coerced through toMongoId.

The PK value is coerced to ObjectId on Line 69 when the field is _id, but if a user updates a field that contains an ObjectId reference (e.g., a foreign-key-like field pointing to another collection), the value in updateSet (Line 64) is stored as a plain string. This may be acceptable for now — just flagging in case downstream consumers expect ObjectId types for reference fields.


67-83: Sequential updateOne calls — acceptable for a studio tool, but consider bulkWrite for larger batches.

Each grouped PK triggers a separate updateOne round trip. If this is only used from the UI for editing a handful of rows at a time, it's fine. For future-proofing, collection.bulkWrite() would reduce round trips.


88-103: deleteMongoRecords assumes all primaryKeys entries share the same columnName.

Line 96 reads columnName only from the first entry. If the array ever contains mixed column names, records for non-first columns would be silently skipped. This is likely fine given the UI sends deletes for a single collection's PK at a time, but a defensive check or assertion could prevent subtle bugs if the contract changes.
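
A defensive check along those lines could be as small as the following (the `PrimaryKeyRef` shape is an assumption about the DAO's input, not the PR's actual type):

```typescript
interface PrimaryKeyRef {
  columnName: string;
  value: unknown;
}

// Throws if the batch mixes column names (or is empty), so records can
// never be silently skipped if the caller's contract changes.
function assertSingleColumn(primaryKeys: PrimaryKeyRef[]): string {
  const names = new Set(primaryKeys.map((pk) => pk.columnName));
  if (names.size !== 1) {
    throw new Error(
      `expected one primary-key column, got: ${[...names].join(", ") || "none"}`,
    );
  }
  return primaryKeys[0].columnName;
}
```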

packages/server/src/dao/mongo/database-list.dao.ts (1)

38-40: Nit: getMongoCurrentDatabase is declared async but performs no async work.

getMongoDbName() is synchronous. The async qualifier is unnecessary — though it's harmless since it just wraps the return in a resolved Promise.

packages/server/src/routes/records.routes.ts (1)

124-128: forceDeleteMongoRecords is a passthrough to deleteMongoRecords.

Since MongoDB doesn't enforce foreign-key constraints, the "force" variant is semantically identical to the regular delete. This is fine, but consider adding a brief inline comment (here or in the DAO) to document why the two paths converge for MongoDB — it'll save future readers from wondering if it's an oversight.

packages/server/src/dao/table-details-schema.ts (2)

10-12: Merge the two @/db-manager.js imports into one.

Lines 10 and 11 both import from the same module. Consolidate them.

✏️ Proposed fix
-import { getDbPool } from "@/db-manager.js";
-import { getDbType } from "@/db-manager.js";
+import { getDbPool, getDbType } from "@/db-manager.js";

130-134: Wire maxTables through to getMongoDatabaseSchema to limit collection processing.

The getMongoDatabaseSchema call only forwards includeSampleData, ignoring includeDescriptions and maxTables. Omitting includeDescriptions is intentional since MongoDB collections have no PG-style descriptions, but maxTables could still be useful to limit the number of collections inspected for large databases. The function accepts maxTables in its signature but never uses it—currently all collections are fetched via listCollections().toArray() without filtering.

packages/server/src/routes/tables.routes.ts (1)

68-73: MongoDB collection creation silently discards column definitions.

When dbType === "mongodb", only body.tableName is forwarded; any column definitions the client sent via createTableSchema are ignored. This is correct for schemaless MongoDB, but the client has no feedback that columns were disregarded. Consider returning a note in the response or documenting this behavior in the API to avoid confusion.

packages/server/src/dao/mongo/schema.dao.ts (1)

5-15: convertColumn drops optional Column fields.

The converter only maps name, type, nullable, and isPrimaryKey, silently discarding foreignKey, description, and enumValues even if they were present in the input. This is fine for current MongoDB column inference (which doesn't produce those fields), but consider using a spread or explicit pass-through so future additions aren't silently lost.

packages/server/src/dao/mongo/query.dao.ts (2)

60-67: Parsed JSON payload is not runtime-validated against MongoQueryPayload.

JSON.parse(query) as MongoQueryPayload is a type-only assertion — it doesn't verify the shape at runtime. While lines 69-73 check for collection and operation, other fields (filter, pipeline, document, update) are trusted blindly. Malformed values (e.g., filter being a string instead of an object) will surface as cryptic MongoDB driver errors rather than clear 400 responses.

Consider adding a Zod schema for MongoQueryPayload to validate the parsed JSON, consistent with how the rest of the codebase validates request inputs.
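
To illustrate the idea without adding a dependency, a hand-rolled validator covering the fields the review names might look like this (Zod would express the same checks declaratively; all shapes here are assumptions):

```typescript
const OPERATIONS = ["find", "aggregate", "insertOne", "updateOne", "updateMany", "deleteOne", "deleteMany"] as const;

// Return a list of validation errors for a parsed payload; an empty list
// means the payload passed. The route could turn a non-empty list into a
// clear 400 instead of surfacing a cryptic driver error.
function validateMongoPayload(raw: unknown): string[] {
  const errors: string[] = [];
  if (typeof raw !== "object" || raw === null || Array.isArray(raw)) {
    return ["payload must be a JSON object"];
  }
  const p = raw as Record<string, unknown>;
  if (typeof p.collection !== "string" || p.collection.length === 0) {
    errors.push("collection must be a non-empty string");
  }
  if (!(OPERATIONS as readonly string[]).includes(String(p.operation))) {
    errors.push(`operation must be one of: ${OPERATIONS.join(", ")}`);
  }
  if (p.filter !== undefined && (typeof p.filter !== "object" || p.filter === null || Array.isArray(p.filter))) {
    errors.push("filter must be an object");
  }
  if (p.pipeline !== undefined && !Array.isArray(p.pipeline)) {
    errors.push("pipeline must be an array");
  }
  return errors;
}
```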


95-99: Aggregate pipeline is passed directly without sanitization.

$out, $merge, and $unionWith stages could write to or read from arbitrary collections. This is consistent with the raw-SQL design (PostgreSQL path also allows unrestricted DDL/DML), but worth noting for anyone auditing security — this endpoint effectively grants full database access.

packages/server/src/dao/mongo/tables.dao.ts (3)

177-184: Sequential estimatedDocumentCount calls for each collection.

Each collection's count is awaited serially. For databases with many collections, this can be slow. Consider parallelizing with Promise.all:

Proposed refactor
-	const results: TableInfoSchemaType[] = [];
-	for (const collection of collections) {
-		const name = collection.name;
-		const rowCount = await mongoDb.collection(name).estimatedDocumentCount();
-		results.push({ tableName: name, rowCount });
-	}
-
-	return results;
+	return Promise.all(
+		collections.map(async (col) => ({
+			tableName: col.name,
+			rowCount: await mongoDb.collection(col.name).estimatedDocumentCount(),
+		})),
+	);

346-346: Hard-coded 10,000 document limit for export with no streaming.

All 10k documents are loaded into memory at once, normalized, then returned as a single response. For collections with large documents, this could cause significant memory pressure. Consider making the limit configurable or adding a streaming/chunked export path.


307-320: Minor TOCTOU between existence check and creation.

Between listCollections (line 315) and createCollection (line 319), a concurrent request could create the same collection, causing createCollection to throw with a different error than the friendly HTTPException. This is low-risk in practice, but wrapping createCollection in a try/catch that catches the duplicate-collection error would make this fully robust.

@husamql3 husamql3 changed the base branch from main to stage February 15, 2026 09:43