Logic to write results data to S3 when size > 90% of 400KB #27
Issue #26: DynamoDB writes are failing for large plans
Description of changes:
drs-plan-automation/cfn/lambda/drs-plan-automation/src/drs_automation_plan.py:
This change checks the size of the item to be written and writes it to S3 instead of DynamoDB when the size exceeds 90% of DynamoDB's 400 KB item size limit.
The item written to DynamoDB is reduced to the partition key and sort key fields, plus S3Bucket and S3Key attributes pointing to where the full item is stored.
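A minimal sketch of the approach, assuming hypothetical attribute, environment variable, and helper names (only S3Bucket and S3Key come from the actual change):

```python
import json
import os

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# 90% of DynamoDB's 400 KB item size limit
MAX_ITEM_SIZE = int(400 * 1024 * 0.9)

def write_result_item(table_name, item):
    """Write a results item to DynamoDB, offloading the body to S3 when it is too large."""
    serialized = json.dumps(item)
    # Approximate the DynamoDB item size with the serialized JSON length
    if len(serialized.encode("utf-8")) > MAX_ITEM_SIZE:
        bucket = os.environ["RESULTS_BUCKET"]  # assumed env var name
        key = f"results/{item['AppId_PlanId']}/{item['ExecutionId']}.json"  # assumed key layout
        s3.put_object(Bucket=bucket, Key=key, Body=serialized)
        # Keep only the table keys plus a pointer to the full record in S3
        item = {
            "AppId_PlanId": item["AppId_PlanId"],  # partition key (assumed name)
            "ExecutionId": item["ExecutionId"],    # sort key (assumed name)
            "S3Bucket": bucket,
            "S3Key": key,
        }
    dynamodb.Table(table_name).put_item(Item=item)
```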
I tested this condition by extracting the data in our environment from the CloudWatch logs, which included a dump of the DynamoDB item to be written. I converted this back to JSON, wrote it to a file, and added that file to the Lambda function. I then commented out the majority of the Lambda handler code and replaced it with a short test snippet (sketched below).
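Roughly, a temporary handler along these lines (the file name, table name, and helper are assumptions, not the actual snippet):

```python
import json

from drs_automation_plan import write_result_item  # hypothetical helper from the sketch above

def lambda_handler(event, context):
    # Load the oversized item captured from the CloudWatch log dump
    with open("large_item.json") as f:  # assumed name of the file added to the function package
        item = json.load(f)
    write_result_item("drs-plan-automation-results", item)  # assumed table name
    return {"statusCode": 200}
```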
I confirmed that the full item was written to S3 and that the item written to the DynamoDB table used the updated format.
I performed these steps in a UAT environment, using an existing appId, planId, and executionId in the JSON payload.
drs-plan-automation/cfn/lambda/drs-plan-automation/template.yaml:
Updated the template for the Lambda function to include a new environment variable for the S3 bucket; the bucket already created as part of this solution is reused for this purpose. I also updated the function's permission set to include S3 permissions.
drs-plan-automation/cfn/lambda/drs-plan-automation-api/src/app.js:
The API is updated to process records that are stored in S3 when applicable.
I updated the two functions that retrieve results to check each record for the S3Bucket and S3Key fields that are now inserted. If these fields exist, the function reads the object from S3, parses the JSON, and merges the contents into the returned items.
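The handler itself is Node.js; for consistency with the sketches above, here is a Python rendering of the equivalent fallback logic (names are assumptions):

```python
import json

import boto3

s3 = boto3.client("s3")

def expand_result_item(item):
    """Return the full record, fetching it from S3 when the item only holds a pointer."""
    if "S3Bucket" in item and "S3Key" in item:
        obj = s3.get_object(Bucket=item["S3Bucket"], Key=item["S3Key"])
        return json.loads(obj["Body"].read())
    return item
```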
I confirmed that the Disaster Recovery Accelerator page loads the records that are stored in S3.
drs-plan-automation/cfn/lambda/drs-plan-automation-api/template.yaml:
Updated the template for the Lambda function so that its permission set includes S3 permissions.
I had originally added an environment variable for the S3 bucket to this template as well, but commented it out since the bucket name is already stored in the DynamoDB item.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.