
Conversation


@czirker czirker commented Jan 31, 2024

Note

Updates connector packages to AWS SDK v3, bumps shared deps (leo-connector-common v5, leo-sdk v7), and aligns peer deps; includes version bumps (e.g., mongo 4.0.1) and regenerated Postgres lockfile.

  • Dependencies:
    • Migrate from aws-sdk v2 to AWS SDK v3 packages (e.g., @aws-sdk/client-lambda) across connectors (see the sketch after this list).
    • Bump leo-connector-common to ^5.x and add/update leo-sdk to ^7.1.x/peer >=7.1.x.
    • Update MySQL and Oracle connectors to ^5.0.0-awsv3 of leo-connector-common.
    • Update templates to require leo-sdk >=7.1.0-awsv3.
  • Versioning/Lockfiles:
    • mongo/package.json: version to 4.0.1 and dependency/peer updates.
    • postgres/package-lock.json: updated to 5.0.0-awsv3 with AWS SDK v3 dependency tree.
    • Other package.json tweaks to align versions and ranges.
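
As a rough illustration of the aws-sdk v2 to AWS SDK v3 change listed above, a minimal sketch of the Lambda client pattern (the region, function name, and payload are made up for the example):

// v2 style being removed: const lambda = new AWS.Lambda(); lambda.invoke(params, callback);
const { LambdaClient, InvokeCommand } = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({ region: "us-east-1" }); // illustrative region

async function invokeExample() {
	// v3 clients send Command objects and return promises instead of taking callbacks.
	const result = await lambda.send(new InvokeCommand({
		FunctionName: "my-function", // hypothetical function name
		Payload: Buffer.from(JSON.stringify({ hello: "world" })),
	}));
	// v3 returns the response payload as a Uint8Array rather than a string.
	return JSON.parse(Buffer.from(result.Payload).toString());
}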

Written by Cursor Bugbot for commit 10b4209. This will update automatically on new commits.

czirker and others added 30 commits January 31, 2024 13:52
* Remove old S3 files after the events are flushed to the queue
* Allow a transform to return nothing (skip the write), a single event, or multiple events.
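
A hedged sketch of what such a transform can look like (the function and field names are illustrative, not the connector's actual API):

function exampleTransform(event) {
	if (!event.payload) {
		// Return nothing: the event is skipped and nothing is written.
		return;
	}
	if (Array.isArray(event.payload.items)) {
		// Return multiple events: each item is fanned out as its own event.
		return event.payload.items.map(item => Object.assign({}, event, { payload: item }));
	}
	// Return a single event to write it through unchanged.
	return event;
}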

ch-snyk-sa commented May 7, 2025

Snyk checks have failed. 5 issues have been found so far.

Status   Scanner                 Critical   High   Medium   Low   Total (5)
         Licenses                0          0      0        0     0 issues
         Code Security           0          0      0        0     0 issues
         Open Source Security    0          0      5        0     5 issues


mariagr-chub and others added 15 commits May 12, 2025 14:46
chore: upgrade js-beautify version in leo-connector-entity-table to fix a vulnerability
For some reason, the OpenSearch project client does NOT add the correct `Accept-Encoding` gzip headers by default. You have to tell it to do that by setting `suggestCompression` to `true`. Without this, large ES response bodies will take 5x - 6x longer to download and can really mess up response times and concurrency because of the added time to download the full response. (See the sketch after this commit list.)
- Use a proxy to preserve OpenSearch client methods
It will only accept `scroll_id` now.
- added the ability to filter the stream before loading to DynamoDB (for fanout)
- Added support for FastJSON parsing (pass along the whole event if it was parsed initially by FastJSON).
…d-fast-json-support

ES-2352 - improvements to entity table connector
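
On the gzip note above, a minimal sketch of enabling compression on the OpenSearch JavaScript client (the endpoint is made up; `suggestCompression` is the option the commit message refers to):

const { Client } = require("@opensearch-project/opensearch");

const client = new Client({
	node: "https://search-example.us-east-1.es.amazonaws.com", // hypothetical endpoint
	// Ask the server for gzip-compressed responses; without this, large response
	// bodies are downloaded uncompressed and take several times longer.
	suggestCompression: true,
});
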
	});
} else {
-	done(err);
+	done(err, meta);

Bug: Inconsistent Upload Payload Breaks Backward Compatibility

The S3 upload logic in stream and streamParallel now returns an inconsistent payload. The new Promise-based Upload.done() includes file only on success and uploadError only on failure, unlike the previous s3.upload callback that always provided both. This breaks backward compatibility for downstream consumers.

Additional Locations (1)

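One hedged way to restore the old contract, sketched with @aws-sdk/lib-storage's Upload (the wrapper name and payload shape are illustrative, taken from the description above rather than the actual fix):

const { S3Client } = require("@aws-sdk/client-s3");
const { Upload } = require("@aws-sdk/lib-storage");

async function uploadWithLegacyPayload(params) {
	const upload = new Upload({ client: new S3Client({}), params });
	try {
		const file = await upload.done();
		return { file, uploadError: null }; // success still carries both keys
	} catch (uploadError) {
		return { file: null, uploadError }; // failure still carries both keys
	}
}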

- updated `leo-logger` and `leo-sdk` in connectors
if (needsComparison) {
	if (JSON.stringify(data.payload.old || {}) === JSON.stringify(data.payload.new || {})) {
		return resolve(null);
	}

Bug: Data Consistency Varies by Storage Location

The needsComparison flag only triggers when data is fetched from S3, but the comparison skips events when old and new are identical. This means non-S3 records with identical old/new values still emit events, while S3-backed identical records are filtered out, creating inconsistent behavior based on storage location rather than data content.

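One possible direction, sketched with the names from the snippet above (not necessarily the intended fix): run the old/new comparison for every record, not only for those hydrated from S3.

// Compare regardless of whether the payload was stored in S3 or inline.
const identical = JSON.stringify(data.payload.old || {}) === JSON.stringify(data.payload.new || {});
if (identical) {
	return resolve(null); // skip unchanged records no matter where they were stored
}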


if (s3Updates.length > 0) {
	logger.info(`finished writing ${s3Updates.length} records to DynamoDB`);
}

Bug: Log Message Falsely Claims Completion

The log message says "finished writing records to DynamoDB" but appears before the batchWrite call executes. The message should say "finished writing records to S3" or be moved after the DynamoDB write completes to accurately reflect what operation finished.


	oldS3Files.push(image._s3);
} catch (e) {
	return reject(e);
}

Bug: S3 Deletes Referenced Files, Causing Data Loss

When an old S3 file is fetched and added to oldS3Files at line 341, but the comparison at lines 375-378 determines old and new are identical and returns null, the S3 file still gets deleted at the end. This causes data loss because the file is deleted even though it's still referenced in DynamoDB (since the event was filtered and no update occurred).

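A hedged sketch of one way to guard against this, reusing names from the snippets above (the maintainers' fix may differ): drop the old S3 key from the delete list whenever the event is filtered out instead of written.

if (JSON.stringify(data.payload.old || {}) === JSON.stringify(data.payload.new || {})) {
	// The event is filtered, so DynamoDB still references the old file;
	// take it back off the delete list rather than removing it from S3.
	const index = oldS3Files.indexOf(image._s3);
	if (index !== -1) {
		oldS3Files.splice(index, 1);
	}
	return resolve(null);
}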

