Releases: nasa/cumulus-orca
v10.1.5
Release v10.1.5
Important information
This release is only compatible with Cumulus v20.3.0 and up.
- Full Change Comparison: v10.1.4...v10.1.5
Changed
- ORCA-1036 - Fixed a deprecated API Gateway resource path in Terraform that was improperly constructing the ORCA API URL.
v10.1.4
Release v10.1.4
Important information
This release is only compatible with Cumulus v20.3.0 and up.
- Full Change Comparison: v10.1.3...v10.1.4
Migration Notes
In the v20.3.0 release, Cumulus upgraded its Terraform version to 1.12.2. ORCA also needs to upgrade Terraform and the AWS provider to stay compatible. To upgrade, follow the instructions provided by Cumulus here
Changed
- ORCA-1027 - Updated Terraform version to 1.12.2 and AWS provider version to >=5.10 to stay compatible with Cumulus release v20.3.0.
Security
- ORCA-1010 - Fixed Snyk high-severity vulnerabilities found on the ORCA website.
v10.1.3
Release v10.1.3
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.1.2...v10.1.3
Migration Notes
AWS X-Ray functionality has been added for the copy_to_orca lambda. For users wanting to utilize X-Ray, set the `lambda_xray` variable in the tfvars file to `Active`.
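Assuming `lambda_xray` is a string variable set in the deployment's tfvars file as described above (the value `Active` comes from these notes; the exact file name is your deployment's own), enabling X-Ray would look like:

```hcl
# terraform.tfvars (sketch)
lambda_xray = "Active"  # enables AWS X-Ray tracing for the copy_to_orca lambda
```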
Added
- ORCA-885 - Added X-Ray for the `copy_to_orca` lambda and a variable in `variables.tf` to enable/disable it, as well as updated IAM permissions for its use.
Changed
- ORCA-985 - Fixed a deprecated argument warning in api-gateway/main.tf by removing `stage_name` from the `aws_api_gateway_deployment` resource and deploying an `aws_api_gateway_stage` resource instead.
- ORCA-992 - Updated moto to version 5.1.2 and updated unit tests.
- ORCA-979 - Updated the psycopg2 library to 2.9.10.
Fixed
- ORCA-990 - Updated docusaurus to v3.7.0.
v10.1.2
Release v10.1.2
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.1.1...v10.1.2
Migration Notes
Using an RDS dedicated instance instead of a cluster
If you are using an RDS dedicated instance (which is rare) instead of a v2 cluster, set `deploy_rds_dedicated_instance_role_association = true` and `deploy_rds_cluster_role_association = false` in your `orca.tf` file while deploying.
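A minimal sketch of the relevant portion of `orca.tf` for this rare dedicated-instance case (the module source and all other variables are your existing values, shown here only as placeholders):

```hcl
module "orca" {
  source = "..."  # your existing ORCA module source, unchanged

  # RDS dedicated instance (not an Aurora v2 cluster):
  deploy_rds_dedicated_instance_role_association = true
  deploy_rds_cluster_role_association            = false

  # ...all other existing ORCA variables remain as-is...
}
```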
If you are getting an `Error: operation error EC2: AuthorizeSecurityGroupEgress` during deployment, there are two troubleshooting options: import the existing security group rule, or delete it.
- Import the RDS security group rule into your stack (`sg-xxx` is the RDS security group ID); note that the ORCA Terraform state path in DAACs' stacks may differ from the following:
  `terraform import module.orca.module.orca.module.orca_lambdas.module.lambda_security_group.aws_security_group_rule.rds_allow_s3_import sg-xxx_egress_tcp_443_443_0.0.0.0/0`
- Delete the existing security group rule and let your stack add it back.
Server Access Logging (OPTIONAL)
Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits.
- To Enable
- Through the AWS Console navigate to S3 and select the bucket you want to enable server access logging on.
- Select Properties
- Scroll down to Server access logging and select edit
- Select Enable
- Select a destination bucket that the logs will be sent to, e.g. `s3://PREFIX-internal/PREFIX-internal-logs/`
- Select the preferred log object key format and select Save changes
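If the bucket is managed in Terraform rather than through the console, the same setting can be expressed with the `aws_s3_bucket_logging` resource (a sketch; the bucket names are the hypothetical PREFIX examples above, and this assumes AWS provider v4+):

```hcl
resource "aws_s3_bucket_logging" "example" {
  bucket        = "PREFIX-some-bucket"      # bucket to enable server access logging on (hypothetical)
  target_bucket = "PREFIX-internal"         # destination bucket for the log objects
  target_prefix = "PREFIX-internal-logs/"   # key prefix under which logs are written
}
```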
Added
- ORCA-981 - Added an optional ORCA variable `deploy_rds_dedicated_instance_role_association` that should be set to `true` if users have an RDS dedicated instance instead of a cluster. This addition was made because PODAAC uses an RDS dedicated instance.
Changed
- ORCA-940 - Updated bandit package to latest version 1.8.2.
- ORCA-934 - Updated the db_comparison instance in `modules/db_compare_instance/main.tf` to use the Amazon Linux 2023 AMI.
- ORCA-402 - Met with ORCA users to discuss open questions on ORCA delete functionality and updated the research webpage.
- ORCA-966 - Updated `tasks/db_deploy/db_deploy.py` and `tasks/db_deploy/migrations` with `.begin()` for autocommits, and updated unit tests.
- ORCA-980 - Modified `tasks/copy_to_archive/sqs_library.py` to generate a unique MessageGroupId so the `post_to_catalog` lambda can process SQS messages from the metadata SQS queue faster, preventing buildup in the queue.
- ORCA-650 - Modified `tasks/get_current_archive_list/get_current_archive_list.py` and `tasks/post_to_catalog/post_to_catalog.py` and relevant unit tests to use the SQLAlchemy `fetchone()` method to eliminate looping.
Removed
- ORCA-969 - Removed encryption from all ORCA SNS topics since it was causing issues in sending alert notifications.
Security
- ORCA-976 - Added gitpython to `requirements-dev.txt` files, pinned to version 3.1.41, to resolve Snyk vulnerabilities.
v10.1.1-beta
Added instance role.
v10.1.1
Release v10.1.1
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.1.0...v10.1.1
Migration Notes
- Users should update their `orca.tf`, `variables.tf`, and `terraform.tfvars` files with the new variables. The following optional variables have been added:
  - max_pool_connections
  - max_concurrency
  - lambda_log_retention_in_days
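A sketch of how the three new optional variables might be set in `terraform.tfvars` (the values shown are illustrative, not recommendations):

```hcl
max_pool_connections         = 10  # parallelism of the S3 copy connection pool
max_concurrency              = 10  # concurrency of the S3 copy operation
lambda_log_retention_in_days = 30  # omit (or set to 0) to keep logs forever, the default
```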
Delete Log Groups
ORCA has added the capability to set log retention on ORCA Lambdas e.g. 30 days, 60 days, 90 days, etc.
- Deployment Steps
  1. Run the script located at bin/delete_log_groups.py
     - The existing log groups must be deleted before a `terraform apply` is run, because they were created by AWS by default and their retention cannot be modified via Terraform.
  2. Set the `lambda_log_retention_in_days` variable to the number of days you would like the logs to be retained, e.g. `lambda_log_retention_in_days = 30`
     - To have the logs never expire, the variable does not need to be set, since never expiring is the default. If you would still like to set it explicitly, use a value of 0, e.g. `lambda_log_retention_in_days = 0`
  3. Once these steps are completed, a `terraform apply` can be executed.
Added
- ORCA-904 - Added an integration test that verifies recovered objects are in the destination bucket.
- ORCA-907 - Added integration test for internal reconciliation at `integration_test/workflow_tests/test_packages/reconciliation` and updated documentation with new variables.
- LPCUMULUS-1474 - Added log groups with configurable retention periods in `modules/lambdas/main.tf`, with a variable to set the retention in days. Also added a script to delete the log groups AWS creates by default, since those cannot be modified by Terraform.
- ORCA-957 - Added an outbound HTTPS security group rule in `modules/security_groups/main.tf` so the Internal Reconciliation Workflow can perform the S3 import successfully.
Changed
- ORCA-918 - Updated `copy_to_archive` and `copy_from_archive` lambdas to include two new optional ORCA variables, `max_pool_connections` and `max_concurrency`, that can be used to change the parallelism of the S3 copy operation.
- ORCA-958 - Upgraded flake8, isort, and black packages to the latest versions in ORCA code.
- ORCA-947 - Updated the `request_from_archive` lambda to include an optional ORCA variable `max_pool_connections` that can be used to change the parallelism of the S3 copy operation.
Fixed
- ORCA-939 - Fixed Snyk vulnerabilities flagged as high severity and upgraded docusaurus to v3.6.0.
v10.1.0
Release v10.1.0
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.0.1...v10.1.0
Added
- ORCA-905 - Added an integration test for recovery of a large file.
- ORCA-567 - Pinned build scripts to a specific version of pip to avoid future errors/issues caused by using the latest version of pip.
- ORCA-933 - Added a dead-letter queue for the metadata SQS queue in `modules/sqs/main.tf`.
Changed
- ORCA-900 - Updated aws_lambda_powertools to the latest version to resolve errors users were experiencing in the older version. Updated boto3 as it is a dependency of aws_lambda_powertools.
- ORCA-927 - Updated the archive architecture to include the metadata dead-letter queue in `website/static/img/ORCA-Architecture-Archive-Container-Component-Updated.svg`.
- ORCA-937 - Updated the get_current_archive_list Lambda to use the gql_tasks_role in `modules/lambdas/main.tf` to resolve database errors when attempting the S3 import. Updated gql_tasks_role with needed permissions in `modules/graphql_0/main.tf`, and updated Secrets Manager permissions in `modules/secretsmanager/main.tf` to allow the role to get the DB secret.
- ORCA-942 - Fixed npm tarball error found during ORCA website deployment.
- ORCA-850 - Updated copy_to_archive documentation containing the additional s3 destination property functionality.
- ORCA-774 - Updated Lambdas and GraphQL to Python 3.10.
- ORCA-896 - Updated Bamboo files to use the `latest` tag on the `cumulus_orca` Docker image to resolve Bamboo jobs using old images.
- 530 - Added explicit `s3:GetObjectTagging` and `s3:PutObjectTagging` actions to the IAM `restore_object_role_policy`.
Fixed
- ORCA-822 - Fixed nodejs installation error in bamboo CI/CD ORCA distribution docker image.
- ORCA-810 - Fixed db_deploy unit test error in bamboo due to wheel installation during python 3.10 upgrade.
- ORCA-861 - Updated docusaurus to fix Snyk vulnerabilities.
- ORCA-862 - Updated docusaurus to v3.4.0.
- ORCA-890 - Fixed Snyk vulnerabilities flagged as high severity and upgraded docusaurus to v3.5.2.
- ORCA-902 - Upgraded bandit to version 1.7.9 to fix Snyk vulnerabilities.
Removed
- ORCA-933 - Removed S3 credential references that were causing errors in `tasks/get_current_archive_list/get_current_archive_list.py` and `tasks/get_current_archive_list/test/unit_tests/test_get_current_archive_list.py`.
v10.0.1
Release v10.0.1
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.0.0...v10.0.1
Added
- ORCA-920 - Fixed ORCA deployment failure for Cumulus when sharing an RDS cluster, caused by multiple IAM role association attempts. Added a new boolean variable `deploy_rds_cluster_role_association`, which can be used to deploy multiple ORCA/Cumulus stacks sharing the same RDS cluster in the same account by overriding it to `false` for the second user.
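For the second stack sharing the cluster, the override is a single variable in its `orca.tf` module block (a sketch; only this variable is shown):

```hcl
# Second ORCA/Cumulus stack only; the first stack keeps the default of true.
deploy_rds_cluster_role_association = false
```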
v10.0.0
Release v10.0.0
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v9.0.5...v10.0.0
Migration Notes
Remove the `s3_access_key` and `s3_secret_key` variables from your `orca.tf` file.
Post V2 Upgrade Comparison
Once the Aurora V1 database has been migrated/upgraded to Aurora V2, you can verify the data integrity of the ORCA database by deploying the EC2 comparison instance, which can be found at `modules/db_compare_instance/main.tf`.
- Deployment Steps
  1. Fill in the variables in `modules/db_compare_instance/scripts/db_config.sh`:
     - archive_bucket - ORCA archive bucket name. IMPORTANT: use underscores in place of dashes, e.g. zrtest_orca_archive
     - v1_endpoint - Endpoint of the V1 cluster, e.g. orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
     - v1_database - Database of the V1 cluster, e.g. orca_db
     - v1_user - Username of the V1 cluster, e.g. orcaV1_user
     - v1_password - Password for the V1 user, e.g. OrcaDBPass_4
     - v2_endpoint - Endpoint of the V2 cluster, e.g. orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
     - v2_database - Database of the V2 cluster, e.g. orca_db2
     - v2_user - Username of the V2 cluster, e.g. orcaV2_user
     - v2_password - Password for the V2 user, e.g. OrcaDB2Pass_9
  2. cd to `modules/db_compare_instance`
  3. Run `terraform init`
  4. Run `terraform apply`
  5. Once the instance is deployed, add an inbound rule to both the V1 and V2 database security groups with the private IP of the EC2 instance.
     - The private IP of the instance can be found via the console, or with the AWS CLI by running: `aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=instance-id,Values=<INSTANCE_ID>" --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text`
     - This needs to be performed on BOTH the V1 and V2 security groups. The inbound rule can be added via the AWS console, or with the AWS CLI by running: `aws ec2 authorize-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32`
  6. Connect to the EC2 instance via the AWS console, or with the AWS CLI: `aws ssm start-session --target <INSTANCE_ID>`
  7. Once connected, run `cd /home`
  8. From the /home directory, run `sh db_compare.sh`
  9. When the script completes it will output two tables:
     - v1_cluster - The row count of each table in the ORCA database on the V1 cluster.
     - v2_cluster - The row count of each table in the ORCA database on the V2 cluster.
  10. Verify that the output of the V2 database matches that of the V1 database to ensure no data was lost during the migration.
  11. Once verified, the EC2 instance can be destroyed by running `terraform destroy` (verify you are in the modules/db_compare_instance directory).
  12. Remove the inbound rules that were added in step 5, either in the AWS console or with the AWS CLI by running (on BOTH the V1 and V2 security groups): `aws ec2 revoke-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32`
  13. Delete the V1 database.
      - Remove the snapshot identifier from the Terraform (if applicable).
      - In the AWS console, navigate to RDS -> Snapshots and delete the snapshot the V2 database was restored from.
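For step 1, the filled-in `db_config.sh` is simply a set of shell variable assignments; a sketch using the hypothetical example values from the list above (your real endpoints, names, and passwords will differ):

```sh
# modules/db_compare_instance/scripts/db_config.sh (sketch; all values hypothetical)
archive_bucket="zrtest_orca_archive"  # underscores, not dashes
v1_endpoint="orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com"
v1_database="orca_db"
v1_user="orcaV1_user"
v1_password="OrcaDBPass_4"
v2_endpoint="orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com"
v2_database="orca_db2"
v2_user="orcaV2_user"
v2_password="OrcaDB2Pass_9"
```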
Added
- ORCA-845 - Created IAM role for RDS S3 import needed for Aurora v2 upgrade.
- ORCA-792 - Added DB comparison script at `modules/db_compare_instance/scripts/db_compare.sh` for the temporary EC2 instance to compare databases post-migration.
- ORCA-868 - Added EC2 instance for DB comparison after migration under `modules/db_compare_instance/main.tf`.
Changed
- ORCA-832 - Modified psycopg2 installation to allow for SSL connections to the database.
- ORCA-795 - Modified Graphql task policy to allow for S3 imports.
- ORCA-797 - Removed S3 credential variables from the `deployment-with-cumulus.md` and `s3-credentials.md` documentation since they are no longer used with the Aurora v2 DB.
- ORCA-873 - Modified build task script to copy schemas into a schema folder to resolve errors.
- ORCA-872 - Updated graphql version, modified policy in `modules/iam/main.tf` to resolve errors, and added DB role attachment to `modules/graphql_0/main.tf`.
- 530 - Added explicit `s3:GetObjectTagging` and `s3:PutObjectTagging` actions to the IAM `restore_object_role_policy`.
Removed
- ORCA-793 - Removed `s3_access_key` and `s3_secret_key` variables from Terraform.
- ORCA-795 - Removed `s3_access_key` and `s3_secret_key` variables from Graphql code and from the get_current_archive_list task.
- ORCA-798 - Removed `s3_access_key` and `s3_secret_key` variables from integration tests.
- ORCA-783 - Removed `tasks/copy_to_archive_adapter` and `tasks/orca_recovery_adapter` as they are handled by Cumulus.
Fixed
- ORCA-835 - Fixed ORCA documentation bamboo CI/CD pipeline showing node package import errors.
- ORCA-864 - Updated ORCA archive bucket policy and IAM role to fix access denied error during backup/recovery process.
Security
- ORCA-851 - Updated bandit libraries to fix Snyk vulnerabilities.
v10.0.0-beta
Release v10.0.0-beta
Migration Notes
Remove the `s3_access_key` and `s3_secret_key` variables from your `orca.tf` file.
Post V2 Upgrade Comparison
Once the Aurora V1 database has been migrated/upgraded to Aurora V2, you can verify the data integrity of the ORCA database by deploying the EC2 comparison instance, which can be found at `modules/db_compare_instance/main.tf`.
- Deployment Steps
  1. Fill in the variables in `modules/db_compare_instance/scripts/db_config.sh`:
     - archive_bucket - ORCA archive bucket name. IMPORTANT: use underscores in place of dashes, e.g. zrtest_orca_archive
     - v1_endpoint - Endpoint of the V1 cluster, e.g. orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
     - v1_database - Database of the V1 cluster, e.g. orca_db
     - v1_user - Username of the V1 cluster, e.g. orcaV1_user
     - v1_password - Password for the V1 user, e.g. OrcaDBPass_4
     - v2_endpoint - Endpoint of the V2 cluster, e.g. orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
     - v2_database - Database of the V2 cluster, e.g. orca_db2
     - v2_user - Username of the V2 cluster, e.g. orcaV2_user
     - v2_password - Password for the V2 user, e.g. OrcaDB2Pass_9
  2. cd to `modules/db_compare_instance`
  3. Run `terraform init`
  4. Run `terraform apply`
  5. Once the instance is deployed, add an inbound rule to both the V1 and V2 database security groups with the private IP of the EC2 instance.
     - The private IP of the instance can be found via the console, or with the AWS CLI by running: `aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=instance-id,Values=<INSTANCE_ID>" --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text`
     - This needs to be performed on BOTH the V1 and V2 security groups. The inbound rule can be added via the AWS console, or with the AWS CLI by running: `aws ec2 authorize-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32`
  6. Connect to the EC2 instance via the AWS console, or with the AWS CLI: `aws ssm start-session --target <INSTANCE_ID>`
  7. Once connected, run `cd /home`
  8. From the /home directory, run `sh db_compare.sh`
  9. When the script completes it will output two tables:
     - v1_cluster - The row count of each table in the ORCA database on the V1 cluster.
     - v2_cluster - The row count of each table in the ORCA database on the V2 cluster.
  10. Verify that the output of the V2 database matches that of the V1 database to ensure no data was lost during the migration.
  11. Once verified, the EC2 instance can be destroyed by running `terraform destroy` (verify you are in the modules/db_compare_instance directory).
  12. Remove the inbound rules that were added in step 5, either in the AWS console or with the AWS CLI by running (on BOTH the V1 and V2 security groups): `aws ec2 revoke-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32`
  13. Delete the V1 database.
      - Remove the snapshot identifier from the Terraform (if applicable).
      - In the AWS console, navigate to RDS -> Snapshots and delete the snapshot the V2 database was restored from.
Added
- ORCA-845 - Created IAM role for RDS S3 import needed for Aurora v2 upgrade.
- ORCA-792 - Added DB comparison script at `modules/db_compare_instance/scripts/db_compare.sh` for the temporary EC2 instance to compare databases post-migration.
- ORCA-868 - Added EC2 instance for DB comparison after migration under `modules/db_compare_instance/main.tf`.
Changed
- ORCA-832 - Modified psycopg2 installation to allow for SSL connections to the database.
- ORCA-795 - Modified Graphql task policy to allow for S3 imports.
- ORCA-797 - Removed S3 credential variables from the `deployment-with-cumulus.md` and `s3-credentials.md` documentation since they are no longer used with the Aurora v2 DB.
- ORCA-873 - Modified build task script to copy schemas into a schema folder to resolve errors.
- ORCA-872 - Updated graphql version, modified policy in `modules/iam/main.tf` to resolve errors, and added DB role attachment to `modules/graphql_0/main.tf`.
Removed
- ORCA-793 - Removed `s3_access_key` and `s3_secret_key` variables from Terraform.
- ORCA-795 - Removed `s3_access_key` and `s3_secret_key` variables from Graphql code and from the get_current_archive_list task.
- ORCA-798 - Removed `s3_access_key` and `s3_secret_key` variables from integration tests.
- ORCA-783 - Removed `tasks/copy_to_archive_adapter` and `tasks/orca_recovery_adapter` as they are handled by Cumulus.
Fixed
- ORCA-835 - Fixed ORCA documentation bamboo CI/CD pipeline showing node package import errors.
- ORCA-864 - Updated ORCA archive bucket policy and IAM role to fix access denied error during backup/recovery process.
Security
- ORCA-851 - Updated bandit libraries to fix Snyk vulnerabilities.