diff --git a/assets/packer/perforce/p4-code-review/README.md b/assets/packer/perforce/p4-code-review/README.md new file mode 100644 index 00000000..e24311ef --- /dev/null +++ b/assets/packer/perforce/p4-code-review/README.md @@ -0,0 +1,365 @@ +# P4 Code Review Packer Template + +This Packer template creates an Amazon Machine Image (AMI) for P4 Code Review (Helix Swarm) on Ubuntu 24.04 LTS. The AMI includes all necessary software pre-installed, with runtime configuration handled automatically during instance launch. + +## Table of Contents + +- [Prerequisites](#prerequisites) +- [Quick Start](#quick-start) +- [What Gets Installed](#what-gets-installed) +- [Building the AMI](#building-the-ami) +- [Finding Your AMI](#finding-your-ami) +- [Next Steps](#next-steps) +- [Troubleshooting](#troubleshooting) + +## Prerequisites + +Before building the AMI, ensure you have: + +1. **AWS CLI** configured with valid credentials: + + ```bash + aws configure + # Verify access + aws sts get-caller-identity + ``` + +2. **Packer** installed (version >= 1.8.0): + + ```bash + packer version + ``` + + If not installed, download it from [packer.io](https://www.packer.io/) + +3. **VPC Access**: + - Default VPC in your region (the default behavior) + - OR a custom VPC with a public subnet (configured by passing the VPC ID through the `vpc_id` variable) + +4. **IAM Permissions**: Your AWS credentials need permissions to: + - Launch EC2 instances + - Create AMIs + - Create/delete security groups + - Create/delete key pairs + +## Quick Start + +From the **repository root**, run: + +```bash +# 1. Navigate to Packer template directory +cd assets/packer/perforce/p4-code-review + +# 2. Initialize Packer (downloads required plugins) +packer init p4_code_review_x86.pkr.hcl + +# 3. Validate the template +packer validate p4_code_review_x86.pkr.hcl + +# 4. 
Build the AMI (takes ~10-15 minutes) +packer build p4_code_review_x86.pkr.hcl +``` + +At the end of the build, Packer will output the AMI ID: + +```text +==> amazon-ebs.ubuntu2404: AMI: ami-0abc123def456789 +``` + +**Save this AMI ID** - you'll need it for Terraform deployment. + +## What Gets Installed + +The AMI includes a complete P4 Code Review installation: + +### Software Components + +1. **Perforce Repository**: Official Perforce package repository (Ubuntu noble/24.04, matching the repository configured by `swarm_setup.sh`) +2. **PHP 8.x**: PHP runtime with all required extensions: + - Core: curl, mbstring, xml, intl, ldap, bcmath + - Database: mysql + - Graphics: gd + - Archive: zip + - PECL: igbinary, msgpack, redis +3. **Helix Swarm**: Native DEB installation via `helix-swarm` package +4. **Apache2**: Web server with mod_php and required modules (rewrite, proxy, proxy_fcgi) +5. **PHP-FPM**: FastCGI Process Manager for PHP +6. **helix-swarm-optional** (optional, installed by default): LibreOffice for document preview (.docx, .xlsx, .pptx) and ImageMagick for image preview (.png, .jpg, .tiff, etc.) (~500MB) +7. **AWS CLI v2**: Required for Secrets Manager access and EBS volume operations at runtime +8. 
**Configuration Script**: `/home/ubuntu/swarm_scripts/swarm_instance_init.sh` for runtime setup (see Runtime Configuration Details below) + +### System Configuration + +- **AppArmor**: Ubuntu's security module (less restrictive by default for `/opt`) +- **Services**: Apache2 and PHP-FPM enabled for automatic startup +- **User**: `swarm` system user created with proper permissions +- **Directories**: `/opt/perforce/swarm` prepared with correct ownership + +### What's NOT Configured Yet + +The following are configured at **deployment** when you launch an instance: + +- P4 Server connection details +- P4 user credentials (fetched from AWS Secrets Manager) +- Redis cache connection +- External hostname/URL +- SSO settings +- EBS volume mounting for persistent data +- Queue worker configuration (cron job and endpoint) +- File permissions for worker processes +- P4 Server extension installation (Swarm triggers) + +### Runtime Configuration Details + +When an EC2 instance launches, the user-data script performs the following steps: + +1. **EBS Volume Attachment**: Finds and attaches the persistent data volume by tags +2. **Filesystem Setup**: Creates ext4 filesystem (first launch) or mounts existing one +3. **Swarm Configuration**: Executes `/home/ubuntu/swarm_scripts/swarm_instance_init.sh` which: + - Retrieves P4 credentials from AWS Secrets Manager + - Runs Perforce's official `configure-swarm.sh` to: + - Connect to P4 Server and validate credentials + - Install Swarm extension on P4 Server (enables event triggers) + - Create initial configuration file + - Set up Apache VirtualHost + - Create cron job for queue workers + - Configures file permissions for queue worker functionality + - Updates configuration with Redis connection details + - Configures queue workers to use localhost endpoint + - Starts Apache and PHP-FPM services + +**Queue Workers**: P4 Code Review requires background workers to process events, send notifications, and index files. 
These are spawned by a cron job (created by `configure-swarm.sh`) that runs every minute. The runtime configuration ensures workers have proper permissions and connect to the correct endpoint. + +## Building the AMI + +### Option 1: Using Default VPC (Recommended) + +If your AWS region has a default VPC: + +```bash +cd assets/packer/perforce/p4-code-review +packer init p4_code_review_x86.pkr.hcl +packer build p4_code_review_x86.pkr.hcl +``` + +### Option 2: Using Custom VPC + +If you don't have a default VPC, specify your own: + +```bash +packer build \ + -var="region=us-west-2" \ + -var="vpc_id=vpc-xxxxx" \ + -var="subnet_id=subnet-xxxxx" \ + -var="associate_public_ip_address=true" \ + -var="ssh_interface=public_ip" \ + p4_code_review_x86.pkr.hcl +``` + +**Requirements for custom VPC**: + +- Subnet must be in a **public** subnet (has route to Internet Gateway) +- `associate_public_ip_address=true` if subnet doesn't auto-assign public IPs +- Security group allows outbound internet access (for package downloads) + +### Option 3: Using Variables File + +Create a `my-vars.pkrvars.hcl`: + +```hcl +region = "us-west-2" +vpc_id = "vpc-xxxxx" +subnet_id = "subnet-xxxxx" +associate_public_ip_address = true +ssh_interface = "public_ip" +``` + +Then build: + +```bash +packer build -var-file="my-vars.pkrvars.hcl" p4_code_review_x86.pkr.hcl +``` + +### Build Output + +Successful build output looks like: + +```text +==> amazon-ebs.ubuntu2404: Stopping the source instance... +==> amazon-ebs.ubuntu2404: Waiting for the instance to stop... +==> amazon-ebs.ubuntu2404: Creating AMI p4_code_review_ubuntu-20231209123456 from instance i-xxxxx +==> amazon-ebs.ubuntu2404: AMI: ami-0abc123def456789 +==> amazon-ebs.ubuntu2404: Waiting for AMI to become ready... +==> amazon-ebs.ubuntu2404: Terminating the source AWS instance... +Build 'amazon-ebs.ubuntu2404' finished after 12 minutes 34 seconds. + +==> Wait completed after 12 minutes 34 seconds + +==> Builds finished. 
The artifacts of successful builds are: +--> amazon-ebs.ubuntu2404: AMIs were created: +us-west-2: ami-0abc123def456789 +``` + +**Copy the AMI ID** (e.g., `ami-0abc123def456789`) - you'll need this for Terraform. + +## Finding Your AMI + +### List All P4 Code Review AMIs + +```bash +aws ec2 describe-images \ + --owners self \ + --filters "Name=name,Values=p4_code_review_ubuntu-*" \ + --query 'Images[*].[ImageId,Name,CreationDate]' \ + --output table +``` + +Output: + +```text ++-----------------------------------------------------------------------+ +| DescribeImages | ++----------------------+---------------------------------------+--------+ +| ami-0abc123def456 | p4_code_review_ubuntu-20231209 | 2023...| +| ami-0def456abc789 | p4_code_review_ubuntu-20231208 | 2023...| ++----------------------+---------------------------------------+--------+ +``` + +### Get the Latest AMI + +```bash +aws ec2 describe-images \ + --owners self \ + --filters "Name=name,Values=p4_code_review_ubuntu-*" \ + --query 'Images | sort_by(@, &CreationDate) | [-1].[ImageId,Name,CreationDate]' \ + --output table +``` + +### Get Details About a Specific AMI + +```bash +aws ec2 describe-images --image-ids ami-0abc123def456789 +``` + +## Next Steps + +Now that you have an AMI, proceed to deploy P4 Code Review infrastructure: + +1. **Read the [P4 Code Review Module Documentation](../../../../modules/perforce/modules/p4-code-review/README.md)** + +2. 
**Follow the deployment guide** in the module README, which covers: + - Creating AWS Secrets Manager secrets for P4 credentials + - Writing Terraform configuration + - Deploying the infrastructure + - Accessing the P4 Code Review web console + +## Troubleshooting + +### "No default VPC available" + +**Error**: Packer fails with "No default VPC for this user" + +**Solution**: Use Option 2 or 3 above to specify your VPC and subnet: + +```bash +packer build \ + -var="vpc_id=vpc-xxxxx" \ + -var="subnet_id=subnet-xxxxx" \ + p4_code_review_x86.pkr.hcl +``` + +### "Unable to connect to instance" + +**Error**: Packer times out connecting to the instance + +**Possible causes**: + +1. Subnet is not public (no route to Internet Gateway) +2. Security group blocks SSH (port 22) +3. No public IP assigned to instance + +**Solution**: Verify your subnet has: + +```bash +# Check if subnet has route to IGW +aws ec2 describe-route-tables \ + --filters "Name=association.subnet-id,Values=subnet-xxxxx" \ + --query 'RouteTables[*].Routes[?GatewayId!=`local`]' +``` + +### "Package installation failed" + +**Error**: APT/DEB errors during build + +**Possible causes**: + +1. No internet access from instance +2. Perforce repository temporarily unavailable +3. 
Package version conflicts + +**Solution**: + +- Check build instance has outbound internet access +- Try rebuilding (temporary outages resolve themselves) +- Review `/var/log/swarm_setup.log` on build instance + +### "AMI already exists with that name" + +**Error**: "AMI name 'p4_code_review_ubuntu-TIMESTAMP' already exists" + +**This shouldn't happen** (timestamp should be unique), but if it does: + +```bash +# List your AMIs +aws ec2 describe-images --owners self \ + --filters "Name=name,Values=p4_code_review_ubuntu-*" + +# Deregister old AMI if no longer needed +aws ec2 deregister-image --image-id ami-xxxxx +``` + +### Build is slow + +**Normal build time**: 10-15 minutes + +**If taking longer**: + +- Package downloads can be slow depending on region +- Perforce repository might be experiencing high load +- This is normal - be patient + +### Need to debug the build? + +**Enable debug mode to step through each provisioner**: + +```bash +packer build -debug p4_code_review_x86.pkr.hcl +``` + +This will pause before each provisioner step, allowing you to: + +- SSH into the build instance +- Inspect the current state +- Verify installation progress +- Press Enter to continue to the next step + +**Enable detailed logging**: + +```bash +PACKER_LOG=1 packer build p4_code_review_x86.pkr.hcl +``` + +## Additional Resources + +- [Packer Documentation](https://www.packer.io/docs) +- [Perforce Helix Swarm Admin Guide](https://www.perforce.com/manuals/swarm/Content/Swarm/Home-swarm.html) +- [Ubuntu 24.04 LTS Documentation](https://ubuntu.com/server/docs) + +## Questions or Issues? + +If you encounter problems: + +1. Check the troubleshooting section above +2. Review Packer logs with `PACKER_LOG=1` +3. Use `packer build -debug` to step through the build process +4. 
Verify AWS credentials and permissions diff --git a/assets/packer/perforce/p4-code-review/example.pkrvars.hcl b/assets/packer/perforce/p4-code-review/example.pkrvars.hcl new file mode 100644 index 00000000..d1f165ff --- /dev/null +++ b/assets/packer/perforce/p4-code-review/example.pkrvars.hcl @@ -0,0 +1,17 @@ +# Region where the Packer builder instance will run +region = "us-east-1" + +# VPC for the Packer builder instance (leave commented out to use default VPC) +vpc_id = "vpc-xxxxx" + +# Public subnet for the Packer builder instance (must have internet access for package downloads) +subnet_id = "subnet-xxxxx" + +# Optional: Associate public IP to builder instance (required if subnet doesn't auto-assign public IPs) +# associate_public_ip_address = true + +# Optional: SSH interface for Packer to connect (use "public_ip" for public subnets) +# ssh_interface = "public_ip" + +# Optional: Install helix-swarm-optional package (LibreOffice, ImageMagick for previews, adds ~500MB to AMI) +# install_swarm_optional = true diff --git a/assets/packer/perforce/p4-code-review/p4_code_review_x86.pkr.hcl b/assets/packer/perforce/p4-code-review/p4_code_review_x86.pkr.hcl new file mode 100644 index 00000000..76bd0bee --- /dev/null +++ b/assets/packer/perforce/p4-code-review/p4_code_review_x86.pkr.hcl @@ -0,0 +1,113 @@ +packer { + required_plugins { + amazon = { + version = ">= 0.0.2" + source = "github.com/hashicorp/amazon" + } + } +} + +locals { + timestamp = regex_replace(timestamp(), "[- TZ:]", "") + ami_prefix = "p4_code_review_ubuntu" +} + +data "amazon-ami" "ubuntu" { + filters = { + # Pin to Ubuntu 24.04 LTS (noble) - helix-swarm-optional requires ImageMagick 6 + # which is not available in Ubuntu 25.x+ + name = "ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*" + architecture = "x86_64" + root-device-type = "ebs" + virtualization-type = "hvm" + } + most_recent = true + owners = ["099720109477"] # Canonical + region = var.region +} + +variable "region" { + type = 
string + default = null +} + +variable "vpc_id" { + type = string + default = null +} + +variable "subnet_id" { + type = string + default = null +} + +variable "associate_public_ip_address" { + type = bool + default = true +} + +variable "ssh_interface" { + type = string + default = "public_ip" +} + +variable "install_swarm_optional" { + type = bool + default = true + description = "Install helix-swarm-optional package (includes LibreOffice for document previews and ImageMagick for image previews). Adds ~500MB to AMI size." +} + +source "amazon-ebs" "ubuntu" { + region = var.region + ami_name = "${local.ami_prefix}-${local.timestamp}" + instance_type = "t3.medium" + + vpc_id = var.vpc_id + subnet_id = var.subnet_id + + associate_public_ip_address = var.associate_public_ip_address + ssh_interface = var.ssh_interface + + source_ami = data.amazon-ami.ubuntu.id + + ssh_username = "ubuntu" +} + +build { + name = "P4_CODE_REVIEW_AWS" + sources = [ + "source.amazon-ebs.ubuntu" + ] + + provisioner "shell" { + inline = [ + "cloud-init status --wait", + "sudo apt-get update", + "sudo apt-get install -y git unzip curl" + ] + } + + provisioner "shell" { + script = "${path.root}/swarm_setup.sh" + execute_command = "sudo sh {{.Path}}" + environment_vars = [ + "INSTALL_SWARM_OPTIONAL=${var.install_swarm_optional}" + ] + } + + provisioner "file" { + source = "${path.root}/swarm_instance_init.sh" + destination = "/tmp/swarm_instance_init.sh" + } + + provisioner "shell" { + inline = ["mkdir -p /home/ubuntu/swarm_scripts", + "sudo mv /tmp/swarm_instance_init.sh /home/ubuntu/swarm_scripts" + ] + } + + provisioner "shell" { + inline = ["sudo chmod +x /home/ubuntu/swarm_scripts/swarm_instance_init.sh"] + } + +} diff --git a/assets/packer/perforce/p4-code-review/swarm_instance_init.sh b/assets/packer/perforce/p4-code-review/swarm_instance_init.sh new file mode 100644 index 00000000..64b611e8 --- /dev/null +++ b/assets/packer/perforce/p4-code-review/swarm_instance_init.sh @@ -0,0 +1,327 
@@ +#!/bin/bash + +# P4 Code Review Runtime Configuration Script +# Configures P4 Code Review with P4 Server connection details, Redis cache, and other runtime settings +# This script is called by user-data at instance launch time + +LOG_FILE="/var/log/swarm_instance_init.log" + +log_message() { + echo "$(date) - $1" | tee -a $LOG_FILE +} + +ROOT_UID=0 +if [ "$UID" -ne "$ROOT_UID" ]; then + echo "Must be root to run this script." + exit 1 +fi + +log_message "=========================================" +log_message "Starting P4 Code Review runtime configuration" +log_message "=========================================" + +# Parse command line arguments +P4D_PORT="" +P4CHARSET="none" +SWARM_HOST="" +SWARM_REDIS="" +SWARM_REDIS_PORT="6379" +SWARM_FORCE_EXT="y" +CUSTOM_CONFIG_FILE="" + +# Secret ARN for fetching super user password from AWS Secrets Manager +# The super user is used for both Swarm runtime operations and admin tasks +P4D_SUPER_PASSWD_SECRET_ARN="" + +while [[ $# -gt 0 ]]; do + case $1 in + --p4d-port) + P4D_PORT="$2" + shift 2 + ;; + --p4charset) + P4CHARSET="$2" + shift 2 + ;; + --swarm-host) + SWARM_HOST="$2" + shift 2 + ;; + --swarm-redis) + SWARM_REDIS="$2" + shift 2 + ;; + --swarm-redis-port) + SWARM_REDIS_PORT="$2" + shift 2 + ;; + --swarm-force-ext) + SWARM_FORCE_EXT="$2" + shift 2 + ;; + --custom-config-file) + CUSTOM_CONFIG_FILE="$2" + shift 2 + ;; + --p4d-super-passwd-secret-arn) + P4D_SUPER_PASSWD_SECRET_ARN="$2" + shift 2 + ;; + *) + log_message "Unknown parameter: $1" + shift + ;; + esac +done + +log_message "Configuration parameters:" +log_message "P4D_PORT: $P4D_PORT" +log_message "P4CHARSET: $P4CHARSET" +log_message "SWARM_HOST: $SWARM_HOST" +log_message "SWARM_REDIS: $SWARM_REDIS" +log_message "SWARM_REDIS_PORT: $SWARM_REDIS_PORT" +log_message "SWARM_FORCE_EXT: $SWARM_FORCE_EXT" +log_message "CUSTOM_CONFIG_FILE: $CUSTOM_CONFIG_FILE" + +# Extract hostname from full URL for configure-swarm.sh +# configure-swarm.sh expects just the hostname 
(it constructs URLs internally) +# SWARM_HOST may contain https://hostname or just hostname +SWARM_HOSTNAME="${SWARM_HOST#https://}" +SWARM_HOSTNAME="${SWARM_HOSTNAME#http://}" +log_message "SWARM_HOSTNAME (for configure-swarm.sh): $SWARM_HOSTNAME" + +# The super user is used for both Swarm runtime operations (-u) and admin tasks (-U) +# This simplifies credential management and works with any authentication configuration +P4D_SUPER="super" + +# Retrieve super user password from AWS Secrets Manager +log_message "Fetching super user password from AWS Secrets Manager" +P4D_SUPER_PASSWD=$(aws secretsmanager get-secret-value --secret-id "$P4D_SUPER_PASSWD_SECRET_ARN" --query SecretString --output text) + +if [ -z "$P4D_SUPER_PASSWD" ]; then + log_message "ERROR: Failed to fetch super user password from AWS Secrets Manager" + exit 1 +fi + +log_message "Successfully fetched credentials" + +# P4 Code Review data directory - stores application data and configuration +SWARM_DATA_PATH="/opt/perforce/swarm/data" +SWARM_CONFIG="${SWARM_DATA_PATH}/config.php" + +# Ensure data directory exists with proper ownership +# Note: configure-swarm.sh will change these, we'll fix them again afterwards +mkdir -p "$SWARM_DATA_PATH" +chown -R swarm:www-data "$SWARM_DATA_PATH" +chmod 775 "$SWARM_DATA_PATH" + +# Run the official P4 Code Review configuration script +# This handles initial setup and P4 Server extension installation +# Using super user for both -u (Swarm user) and -U (admin user) ensures compatibility +# with all authentication configurations (SSO, standard password, etc.) +log_message "Running configure-swarm.sh with super user credentials" + +/opt/perforce/swarm/sbin/configure-swarm.sh \ + -n \ + -p "$P4D_PORT" \ + -u "$P4D_SUPER" \ + -w "$P4D_SUPER_PASSWD" \ + -H "$SWARM_HOSTNAME" \ + -e localhost \ + -X \ + -U "$P4D_SUPER" \ + -W "$P4D_SUPER_PASSWD" || { + log_message "ERROR: configure-swarm.sh failed with exit code $?" 
+ log_message "This likely indicates a P4 Server connectivity or permissions issue" + exit 1 + } + +# Note: Swarm extension configuration is handled by configure-swarm.sh above +# The extension is configured with: +# - Swarm-URL: https:// (passed via -H parameter) +# - Swarm-Secure: true (default, enables SSL certificate validation) + +# Configure initial permissions for Swarm data directory +# Note: Queue-specific permissions are set after Apache starts (see below) +log_message "Configuring initial permissions for Swarm data directory" +chown -R swarm:www-data "$SWARM_DATA_PATH" +chmod 775 "$SWARM_DATA_PATH" + +# Ensure p4trust file is readable by Apache worker processes +chmod 644 "$SWARM_DATA_PATH/p4trust" 2>/dev/null || true + +# Swarm application log must be a regular file with group write permissions +if [ -e "$SWARM_DATA_PATH/log" ] && [ ! -f "$SWARM_DATA_PATH/log" ]; then + log_message "Correcting log path to be a regular file" + rm -rf "$SWARM_DATA_PATH/log" +fi +if [ ! -f "$SWARM_DATA_PATH/log" ]; then + touch "$SWARM_DATA_PATH/log" + chown swarm:www-data "$SWARM_DATA_PATH/log" + chmod 664 "$SWARM_DATA_PATH/log" +fi + +# Add swarm-cron user to www-data group for queue worker file access +log_message "Adding swarm-cron user to www-data group" +usermod -aG www-data swarm-cron + +# Update configuration file with runtime settings +log_message "Updating P4 Code Review configuration" + +if [ -f "$SWARM_CONFIG" ]; then + # Backup existing configuration + cp "$SWARM_CONFIG" "${SWARM_CONFIG}.backup.$(date +%s)" + + log_message "Adding Redis configuration to config.php" + + # Use PHP to properly modify the configuration file + php -r " + \$config = include '$SWARM_CONFIG'; + + // Configure Redis connection for session storage and caching + if (!isset(\$config['redis'])) { + \$config['redis'] = array(); + } + \$config['redis']['options'] = array( + 'server' => array( + 'host' => '$SWARM_REDIS', + 'port' => $SWARM_REDIS_PORT, + ), + ); + + // Set external URL for
generating links in notifications and emails + if (!isset(\$config['environment'])) { + \$config['environment'] = array(); + } + \$config['environment']['hostname'] = '$SWARM_HOST'; + + // Write back the configuration + file_put_contents('$SWARM_CONFIG', '<?php' . PHP_EOL . 'return ' . var_export(\$config, true) . ';' . PHP_EOL); + " + + # If a full custom config.php was supplied, it replaces the generated configuration + if [ -n "$CUSTOM_CONFIG_FILE" ] && [ -s "$CUSTOM_CONFIG_FILE" ]; then + log_message "Applying custom configuration file: $CUSTOM_CONFIG_FILE" + cp "$CUSTOM_CONFIG_FILE" "$SWARM_CONFIG" 2>/dev/null || true + else + log_message "No custom configuration file provided or file is empty" + fi + + chown swarm:www-data "$SWARM_CONFIG" + chmod 664 "$SWARM_CONFIG" + + log_message "Configuration file updated successfully" +else + log_message "ERROR: Config file not found at $SWARM_CONFIG after running configure-swarm.sh" + exit 1 +fi + +# Disable default Apache site so Swarm becomes the default (important for health checks) +log_message "Disabling default Apache site" +a2dissite 000-default || log_message "Default site already disabled" + +# Start Apache web server +log_message "Starting Apache service" +systemctl enable apache2 +systemctl restart apache2 +systemctl status apache2 --no-pager + +# Start PHP-FPM for PHP request handling +if systemctl list-unit-files | grep -q php-fpm; then + log_message "Starting PHP-FPM service" + systemctl enable php-fpm + systemctl start php-fpm + systemctl status php-fpm --no-pager +fi + +# Configure permissions for queue workers and caching +# This must run AFTER Apache starts because Swarm may create directories with restrictive permissions +log_message "Configuring permissions for queue worker functionality" + +# Create queue directories if they don't exist +mkdir -p "$SWARM_DATA_PATH/queue/workers" +mkdir -p "$SWARM_DATA_PATH/queue/tokens" +mkdir -p "$SWARM_DATA_PATH/cache" + +# Set ownership and permissions for queue-related directories +# Workers run as swarm-cron (in www-data group) and need write access +chown -R www-data:www-data "$SWARM_DATA_PATH/queue" +chmod 770 "$SWARM_DATA_PATH/queue" +chmod 770 "$SWARM_DATA_PATH/queue/workers" +chmod 770 "$SWARM_DATA_PATH/queue/tokens" +chown -R www-data:www-data "$SWARM_DATA_PATH/cache" +chmod 775 "$SWARM_DATA_PATH/cache" + 
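+# ---------------------------------------------------------------------------
+# Optional sanity check (illustrative addition, not part of the original
+# setup flow): queue workers run as swarm-cron and rely on the group-writable
+# modes set above, so probing writability here surfaces permission problems
+# in the log instead of as silent worker failures later.
+if ! sudo -u swarm-cron test -w "$SWARM_DATA_PATH/queue/workers" 2>/dev/null; then
+    log_message "WARNING: swarm-cron cannot write to $SWARM_DATA_PATH/queue/workers"
+fi
+# ---------------------------------------------------------------------------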
+# Configure P4 Code Review background workers for async tasks +# Workers are spawned by cron job created by configure-swarm.sh at /etc/cron.d/helix-swarm +# Update the default worker configuration to use localhost for optimal performance +log_message "Configuring P4 Code Review queue workers" + +SWARM_CRON_CONFIG="/opt/perforce/etc/swarm-cron-hosts.conf" +log_message "Updating worker configuration at $SWARM_CRON_CONFIG" + +# Workers should connect to localhost to avoid routing through load balancer +echo "http://localhost" > "$SWARM_CRON_CONFIG" +chown swarm-cron:swarm-cron "$SWARM_CRON_CONFIG" +chmod 644 "$SWARM_CRON_CONFIG" + +log_message "Queue workers configured to use localhost endpoint" + +# Ensure worker token is properly initialized +# The token file may exist but be empty after configure-swarm.sh runs +log_message "Initializing queue worker authentication token" + +TOKEN_DIR="${SWARM_DATA_PATH}/queue/tokens" + +# Find existing token file +TOKEN_FILE=$(find "$TOKEN_DIR" -type f 2>/dev/null | head -1) + +if [ -n "$TOKEN_FILE" ] && [ -f "$TOKEN_FILE" ]; then + # Check if token file is empty + if [ ! 
-s "$TOKEN_FILE" ]; then + log_message "Token file exists but is empty, generating new token" + TOKEN_CONTENT=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || echo "$(date +%s)-$(hostname)") + echo "$TOKEN_CONTENT" > "$TOKEN_FILE" + chown www-data:www-data "$TOKEN_FILE" + chmod 644 "$TOKEN_FILE" + log_message "Worker token initialized: $(basename "$TOKEN_FILE")" + else + log_message "Worker token already exists and is valid" + fi +else + log_message "No token file found, creating new one" + TOKEN_NAME=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || echo "swarm-token-$(date +%s)") + TOKEN_FILE="$TOKEN_DIR/$TOKEN_NAME" + TOKEN_CONTENT=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || echo "$(date +%s)-$(hostname)") + echo "$TOKEN_CONTENT" > "$TOKEN_FILE" + chown www-data:www-data "$TOKEN_FILE" + chmod 644 "$TOKEN_FILE" + log_message "Worker token created: $TOKEN_NAME" +fi + +log_message "=========================================" +log_message "P4 Code Review configuration completed" +log_message "P4 Code Review should be accessible at: $SWARM_HOST" +log_message "Data path: $SWARM_DATA_PATH" +log_message "=========================================" diff --git a/assets/packer/perforce/p4-code-review/swarm_setup.sh b/assets/packer/perforce/p4-code-review/swarm_setup.sh new file mode 100644 index 00000000..d4fcfaa4 --- /dev/null +++ b/assets/packer/perforce/p4-code-review/swarm_setup.sh @@ -0,0 +1,150 @@ +#!/bin/bash + +# Log file location +LOG_FILE="/var/log/swarm_setup.log" + +# Function to log messages +log_message() { + echo "$(date) - $1" | tee -a $LOG_FILE +} + +# Constants +ROOT_UID=0 + +# Check if script is run as root +if [ "$UID" -ne "$ROOT_UID" ]; then + echo "Must be root to run this script." + log_message "Script not run as root." 
+ exit 1 +fi + +log_message "Starting P4 Code Review (Swarm) installation" + +# Wait for dpkg lock to be released (unattended-upgrades may be running) +wait_for_apt() { + local max_wait=300 + local wait_time=0 + while fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1 || fuser /var/lib/apt/lists/lock >/dev/null 2>&1; do + if [ $wait_time -ge $max_wait ]; then + log_message "ERROR: Timed out waiting for apt lock after ${max_wait}s" + exit 1 + fi + log_message "Waiting for apt lock to be released..." + sleep 5 + wait_time=$((wait_time + 5)) + done +} + +log_message "Waiting for any background package operations to complete" +wait_for_apt + +# Update package lists +log_message "Updating package lists" +apt-get update + +# Install required dependencies +log_message "Installing required dependencies" +apt-get install -y software-properties-common gnupg2 wget apt-transport-https ca-certificates unzip curl + +# Install AWS CLI v2 +log_message "Installing AWS CLI v2" +( + cd /tmp || exit 1 + curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" + unzip -q awscliv2.zip + ./aws/install + rm -rf aws awscliv2.zip +) + +# Add Perforce repository +log_message "Adding Perforce repository" +wget -qO - https://package.perforce.com/perforce.pubkey | gpg --dearmor | tee /usr/share/keyrings/perforce-archive-keyring.gpg > /dev/null +echo "deb [signed-by=/usr/share/keyrings/perforce-archive-keyring.gpg] http://package.perforce.com/apt/ubuntu noble release" | tee /etc/apt/sources.list.d/perforce.list + +# Update package lists with new repository +log_message "Updating package lists with Perforce repository" +apt-get update + +# Check if PHP 8.x is available natively +log_message "Checking for PHP 8.x availability" +if apt-cache show php8.3 &>/dev/null || apt-cache show php8.1 &>/dev/null; then + log_message "PHP 8.x available natively, using system packages" +else + log_message "PHP 8.x not available natively, adding ondrej/php PPA" + add-apt-repository -y 
ppa:ondrej/php + apt-get update +fi + +# Determine which PHP 8.x version to install +if apt-cache show php8.3 &>/dev/null; then + PHP_VERSION="8.3" +elif apt-cache show php8.1 &>/dev/null; then + PHP_VERSION="8.1" +else + log_message "ERROR: No PHP 8.x version available" + exit 1 +fi + +log_message "Installing Apache2 and PHP ${PHP_VERSION} with required extensions" +apt-get install -y apache2 \ + php${PHP_VERSION} php${PHP_VERSION}-fpm php${PHP_VERSION}-cli php${PHP_VERSION}-common \ + php${PHP_VERSION}-curl php${PHP_VERSION}-gd php${PHP_VERSION}-intl php${PHP_VERSION}-ldap php${PHP_VERSION}-mbstring \ + php${PHP_VERSION}-mysql php${PHP_VERSION}-xml php${PHP_VERSION}-zip php${PHP_VERSION}-bcmath \ + libapache2-mod-php${PHP_VERSION} + +# Install PHP PECL extensions +log_message "Installing PHP PECL extensions" +apt-get install -y php${PHP_VERSION}-igbinary php${PHP_VERSION}-msgpack php${PHP_VERSION}-redis + +# Install Helix Swarm +log_message "Installing Helix Swarm" +apt-get install -y helix-swarm + +# Install helix-swarm-optional package (LibreOffice, ImageMagick) +if [ "${INSTALL_SWARM_OPTIONAL:-true}" = "true" ]; then + log_message "Installing helix-swarm-optional package" + apt-get install -y helix-swarm-optional || log_message "helix-swarm-optional package not available, skipping" +else + log_message "Skipping helix-swarm-optional installation" +fi + +# Enable required Apache modules +log_message "Enabling required Apache modules" +a2enmod rewrite +a2enmod proxy +a2enmod proxy_fcgi +a2enmod setenvif + +# Enable PHP-FPM configuration for Apache (use the version detected above) +log_message "Configuring PHP-FPM for Apache" +a2enconf php${PHP_VERSION}-fpm + +# Enable and configure Apache +log_message "Enabling Apache service" +systemctl enable apache2 + +# Enable and configure PHP-FPM +log_message "Enabling PHP-FPM service" +systemctl enable php${PHP_VERSION}-fpm + +# Create swarm user if it doesn't exist (package may have already created it) +if !
id -u swarm > /dev/null 2>&1; then + log_message "Creating swarm user" + useradd -r -s /bin/bash swarm +fi + +# Set proper ownership on Swarm directories +log_message "Setting ownership on Swarm directories" +chown -R swarm:swarm /opt/perforce/swarm || log_message "Swarm directory ownership already set" + +# Configure AppArmor for Swarm (Ubuntu uses AppArmor instead of SELinux) +if command -v aa-status > /dev/null 2>&1; then + log_message "AppArmor is active" + # AppArmor is less restrictive by default for /opt + # Additional configuration can be added here if needed +else + log_message "AppArmor not found, skipping AppArmor configuration" +fi + +log_message "P4 Code Review (Swarm) installation completed successfully" +log_message "Configuration will be done at runtime via swarm_instance_init.sh" diff --git a/assets/packer/perforce/p4-server/README.md b/assets/packer/perforce/p4-server/README.md index 7b9d82d3..8da59a60 100644 --- a/assets/packer/perforce/p4-server/README.md +++ b/assets/packer/perforce/p4-server/README.md @@ -40,17 +40,49 @@ An instance that is provisioned with this AMI will not automatically deploy a P4 ``` bash #!/bin/bash /home/ec2-user/cloud-game-development-toolkit/p4_configure.sh \ - \ - \ - \ - \ - \ - \ - \ - + --p4d_type p4d_master \ + --hx_depots /dev/sdf \ + --hx_metadata /dev/sdg \ + --hx_logs /dev/sdh \ + --super_password \ + --admin_username \ + --admin_password \ + --fqdn perforce.example.com \ + --auth https://auth.perforce.example.com ``` -As you can see, there are quite a few parameters that need to be passed to the `p4_configure.sh` script. We recommend using the [Perforce module](../../../../modules/perforce/README.md) for this reason. 
+### Script Options + +| Option | Description | +|--------|-------------| +| `--p4d_type` | P4 Server type: `p4d_master`, `p4d_replica`, or `p4d_edge` | +| `--hx_depots` | Path/device for P4 Server depots volume | +| `--hx_metadata` | Path/device for P4 Server metadata volume | +| `--hx_logs` | Path/device for P4 Server logs volume | +| `--super_password` | AWS Secrets Manager secret ID for service account (super) password | +| `--admin_username` | AWS Secrets Manager secret ID for admin account username | +| `--admin_password` | AWS Secrets Manager secret ID for admin account password | +| `--fqdn` | Fully Qualified Domain Name for the P4 Server | +| `--auth` | P4Auth URL (optional) | +| `--case_sensitive` | Case sensitivity: `0` (insensitive) or `1` (sensitive, default) | +| `--unicode` | Enable Unicode mode: `true` or `false` | +| `--selinux` | Update SELinux labels: `true` or `false` | +| `--plaintext` | Disable SSL: `true` or `false` | +| `--fsxn_password` | AWS Secrets Manager secret ID for FSxN password | +| `--fsxn_svm_name` | FSxN Storage Virtual Machine name | +| `--fsxn_management_ip` | FSxN management IP address | + +### User Configuration + +The script creates two Perforce users: + +1. **Service Account (`super`)**: Always created with username "super". Used internally by P4 Code Review (Helix Swarm) and other tooling. Password provided via `--super_password`. + +2. **Admin Account**: Created with the username provided via `--admin_username`. This is the account for human administrators. Password provided via `--admin_password`. + +Both users have full super privileges and are added to the `unlimited_timeout` group. + +We recommend using the [Perforce module](../../../../modules/perforce/README.md) to manage these configurations through Terraform. 
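+The secret IDs passed to `p4_configure.sh` must already exist in AWS Secrets Manager. A minimal sketch of creating them with the AWS CLI (the secret names below are illustrative, not required by the script):
+
+```bash
+# Create the plaintext secrets that p4_configure.sh resolves at startup.
+# Pass the resulting names (or full ARNs) via --super_password,
+# --admin_username, and --admin_password.
+aws secretsmanager create-secret --name p4-super-password --secret-string 'CHANGE_ME'
+aws secretsmanager create-secret --name p4-admin-username --secret-string 'p4admin'
+aws secretsmanager create-secret --name p4-admin-password --secret-string 'CHANGE_ME'
+```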
## Important Notes diff --git a/assets/packer/perforce/p4-server/p4_configure.sh b/assets/packer/perforce/p4-server/p4_configure.sh index 21ae006c..22d7ac4b 100644 --- a/assets/packer/perforce/p4-server/p4_configure.sh +++ b/assets/packer/perforce/p4-server/p4_configure.sh @@ -238,7 +238,7 @@ prepare_iscsi_volume() { sleep $interval elapsed=$((elapsed + interval)) done - if [ ! -e $VOLUME]; then + if [ ! -e $VOLUME ]; then log_message "The device $VOLUME does not exist. Exiting." exit 1 fi @@ -315,10 +315,11 @@ log_message "Starting the p4 configure script." print_help() { echo "Usage: $0 [OPTIONS]" echo "Options:" - echo " --p4d_type Specify the type of P4 Server (p4d_master, p4d_replica, p4d_edge)" - echo " --username AWS Secrets Manager secret ID for the P4 Server admin username" - echo " --password AWS Secrets Manager secret ID for the P4 Server admin password" - echo " --auth P4Auth URL" + echo " --p4d_type Specify the type of P4 Server (p4d_master, p4d_replica, p4d_edge)" + echo " --super_password AWS Secrets Manager secret ID for the service account (super) password" + echo " --admin_username AWS Secrets Manager secret ID for the admin account username" + echo " --admin_password AWS Secrets Manager secret ID for the admin account password" + echo " --auth P4Auth URL" echo " --fqdn Fully Qualified Domain Name for the P4 Server" echo " --hx_logs Path for P4 Server logs" echo " --hx_metadata Path for P4 Server metadata" @@ -334,7 +335,7 @@ print_help() { } # Parse command-line options -OPTS=$(getopt -o '' --long p4d_type:,username:,password:,auth:,fqdn:,hx_logs:,hx_metadata:,hx_depots:,case_sensitive:,unicode:,selinux:,plaintext:,fsxn_password:,fsxn_svm_name:,fsxn_management_ip:,help -n 'parse-options' -- "$@") +OPTS=$(getopt -o '' --long p4d_type:,super_password:,admin_username:,admin_password:,auth:,fqdn:,hx_logs:,hx_metadata:,hx_depots:,case_sensitive:,unicode:,selinux:,plaintext:,fsxn_password:,fsxn_svm_name:,fsxn_management_ip:,help -n 'parse-options' -- 
"$@") if [ $? != 0 ]; then log_message "Failed to parse options" @@ -360,12 +361,16 @@ while true; do ;; esac ;; - --username) - P4D_ADMIN_USERNAME_SECRET_ID="$2" + --super_password) + SUPER_PASSWORD_SECRET_ID="$2" shift 2 ;; - --password) - P4D_ADMIN_PASS_SECRET_ID="$2" + --admin_username) + ADMIN_USERNAME_SECRET_ID="$2" + shift 2 + ;; + --admin_password) + ADMIN_PASSWORD_SECRET_ID="$2" shift 2 ;; --auth) @@ -462,12 +467,19 @@ if [[ "$P4D_TYPE" != "p4d_master" && "$P4D_TYPE" != "p4d_replica" && "$P4D_TYPE" exit 1 fi -# Fetch credentials for admin user from secrets manager -P4D_ADMIN_USERNAME=$(resolve_aws_secret $P4D_ADMIN_USERNAME_SECRET_ID) -P4D_ADMIN_PASS=$(resolve_aws_secret $P4D_ADMIN_PASS_SECRET_ID) +# Fetch credentials from secrets manager +# Service account (super) - used for internal tooling and Swarm extension +SUPER_PASSWORD=$(resolve_aws_secret $SUPER_PASSWORD_SECRET_ID) +# Admin account - for human administrators +ADMIN_USERNAME=$(resolve_aws_secret $ADMIN_USERNAME_SECRET_ID) +ADMIN_PASSWORD=$(resolve_aws_secret $ADMIN_PASSWORD_SECRET_ID) +# FSxN credentials (if applicable) FSXN_PASSWORD=$(resolve_aws_secret $FSXN_PASS) ONTAP_USER="fsxadmin" +log_message "Service account: super" +log_message "Admin account: $ADMIN_USERNAME" + # Function to perform operations perform_operations() { log_message "Performing operations for mounting and syncing directories." @@ -606,15 +618,15 @@ if [ ! -f "$SDP_Setup_Script_Config" ]; then exit 1 fi -# Update Perforce super user password in configuration -sed -i "s/^P4ADMINPASS=.*/P4ADMINPASS=$P4D_ADMIN_PASS/" "$SDP_Setup_Script_Config" +# Update Perforce service account (super) password in configuration +sed -i "s/^P4ADMINPASS=.*/P4ADMINPASS=$SUPER_PASSWORD/" "$SDP_Setup_Script_Config" log_message "Updated P4ADMINPASS in $SDP_Setup_Script_Config." 
-# Update Perforce super user password in configuration -sed -i "s/^ADMINUSER=.*/ADMINUSER=$P4D_ADMIN_USERNAME/" "$SDP_Setup_Script_Config" +# Update Perforce admin user to "super" for initial setup +sed -i "s/^ADMINUSER=.*/ADMINUSER=super/" "$SDP_Setup_Script_Config" -log_message "Updated ADMINUSER in $SDP_Setup_Script_Config." +log_message "Updated ADMINUSER to 'super' in $SDP_Setup_Script_Config." # Check if p4d_master server and update sitetags @@ -691,7 +703,7 @@ else P4PORT=ssl:1666 fi -P4USER=$P4D_ADMIN_USERNAME +P4USER=super #probably need to copy p4 binary to the /usr/bin or add to the path variable to avoid running with a full path adding: #permissions for lal users: @@ -716,14 +728,14 @@ fi if [ -f "$SDP_Live_Checkpoint" ]; then chmod +x "$SDP_Live_Checkpoint" - sudo -u "$P4USER" "$SDP_Live_Checkpoint" 1 + sudo -u perforce "$SDP_Live_Checkpoint" 1 else echo "Setup script (SDP_Live_Checkpoint) not found." fi if [ -f "$SDP_Offline_Recreate" ]; then chmod +x "$SDP_Offline_Recreate" - sudo -u "$P4USER" "$SDP_Offline_Recreate" 1 + sudo -u perforce "$SDP_Offline_Recreate" 1 else echo "Setup script (SDP_Offline_Recreate) not found." 
fi @@ -731,11 +743,68 @@ fi # initialize crontab for user perforce # fixing broken crontab on SDP, cron runs on minute schedule */60 is incorrect sed -i 's#\*/60#0#g' /p4/p4.crontab.1 -sudo -u "$P4USER" crontab /p4/p4.crontab.1 +sudo -u perforce crontab /p4/p4.crontab.1 # verify sdp installation should warn about missing license only: /hxdepots/p4/common/bin/verify_sdp.sh 1 +# Establish SSL trust for perforce user before running p4 commands +sudo -u perforce p4 -p "$P4PORT" trust -y + +# Login as super user for admin operations +echo "$SUPER_PASSWORD" | sudo -u perforce p4 -p "$P4PORT" -u super login + +# Ensure super user is a standard user type (not service account) +# This allows super to pass p4 protects validation required by tools like Swarm +log_message "Ensuring super user is standard type" +sudo -u perforce p4 -p "$P4PORT" -u super user -o super | \ + sed 's/^Type:.*/Type: standard/' > /tmp/super_user.txt +cat /tmp/super_user.txt | sudo -u perforce p4 -p "$P4PORT" -u super user -i -f +rm -f /tmp/super_user.txt + +# Create admin user for human administrators +log_message "Creating admin user: $ADMIN_USERNAME" + +# Create user spec +cat > /tmp/admin_user.txt < /dev/null +echo " super user $ADMIN_USERNAME * //..." >> /tmp/protect.txt +cat /tmp/protect.txt | sudo -u perforce p4 -p "$P4PORT" -u super protect -i + +# Clean up +rm -f /tmp/admin_user.txt /tmp/protect.txt + +log_message "Admin user $ADMIN_USERNAME created successfully" + +# Create a group with unlimited ticket timeout for service integrations (e.g., Swarm) +# This prevents ticket expiration issues for automated systems +log_message "Creating unlimited_timeout group for service integrations" + +cat > /tmp/unlimited_timeout_group.txt < [existing\_ecs\_cluster\_name](#input\_existing\_ecs\_cluster\_name) | The name of an existing ECS cluster to use for the Perforce server. If omitted a new cluster will be created. 
| `string` | `null` | no | | [existing\_security\_groups](#input\_existing\_security\_groups) | A list of existing security group IDs to attach to the shared network load balancer. | `list(string)` | `[]` | no | | [p4\_auth\_config](#input\_p4\_auth\_config) | # General
name: "The string including in the naming of resources related to P4Auth. Default is 'p4-auth'."

project\_prefix : "The project prefix for the P4Auth service. Default is 'cgd'."

environment : "The environment where the P4Auth service will be deployed. Default is 'dev'."

enable\_web\_based\_administration: "Whether to enable web based administration. Default is 'true'."

debug : "Whether to enable debug mode for the P4Auth service. Default is 'false'."

fully\_qualified\_domain\_name : "The FQDN for the P4Auth Service. This is used for the P4Auth's Perforce configuration."


# Compute
cluster\_name : "The name of the ECS cluster where the P4Auth service will be deployed. Cluster is not created if this variable is null."

container\_name : "The name of the P4Auth service container. Default is 'p4-auth-container'."

container\_port : "The port on which the P4Auth service will be listening. Default is '3000'."

container\_cpu : "The number of CPU units to reserve for the P4Auth service container. Default is '1024'."

container\_memory : "The amount of memory (in MiB) to reserve for the P4Auth service container. Default is '4096'."

p4d\_port : "The full URL you will use to access the P4 Depot in clients such as P4V and P4Admin. Note, this typically starts with 'ssl:' and ends with the default port of ':1666'."


# Storage & Logging
cloudwatch\_log\_retention\_in\_days : "The number of days to retain the P4Auth service logs in CloudWatch. Default is 365 days."


# Networking
create\_default\_sgs : "Whether to create default security groups for the P4Auth service."

internal : "Set this flag to true if you do not want the P4Auth service to have a public IP."

create\_default\_role : "Whether to create the P4Auth default IAM Role. Default is set to true."

custom\_role : "ARN of a custom IAM Role you wish to use with P4Auth."

admin\_username\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4Auth Administrator username."

admin\_password\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4Auth Administrator password."


# - SCIM -
p4d\_super\_user\_arn : "If you would like to use SCIM to provision users and groups, you need to set this variable to the ARN of an AWS Secrets Manager secret containing the super user username for p4d."

p4d\_super\_user\_password\_arn : "If you would like to use SCIM to provision users and groups, you need to set this variable to the ARN of an AWS Secrets Manager secret containing the super user password for p4d."

scim\_bearer\_token\_arn : "If you would like to use SCIM to provision users and groups, you need to set this variable to the ARN of an AWS Secrets Manager secret containing the bearer token."

extra\_env : "Extra configuration environment variables to set on the p4 auth svc container." |
object({
# - General -
name = optional(string, "p4-auth")
project_prefix = optional(string, "cgd")
environment = optional(string, "dev")
enable_web_based_administration = optional(bool, true)
debug = optional(bool, false)
fully_qualified_domain_name = string

# - Compute -
container_name = optional(string, "p4-auth-container")
container_port = optional(number, 3000)
container_cpu = optional(number, 1024)
container_memory = optional(number, 4096)
p4d_port = optional(string, null)

# - Storage & Logging -
cloudwatch_log_retention_in_days = optional(number, 365)

# - Networking & Security -
service_subnets = optional(list(string), null)
create_default_sgs = optional(bool, true)
existing_security_groups = optional(list(string), [])
internal = optional(bool, false)

certificate_arn = optional(string, null)
create_default_role = optional(bool, true)
custom_role = optional(string, null)
admin_username_secret_arn = optional(string, null)
admin_password_secret_arn = optional(string, null)

# SCIM
p4d_super_user_arn = optional(string, null)
p4d_super_user_password_arn = optional(string, null)
scim_bearer_token_arn = optional(string, null)
extra_env = optional(map(string), null)
})
| `null` | no | -| [p4\_code\_review\_config](#input\_p4\_code\_review\_config) | # General
name: "The string including in the naming of resources related to P4 Code Review. Default is 'p4-code-review'."

project\_prefix : "The project prefix for the P4 Code Review service. Default is 'cgd'."

environment : "The environment where the P4 Code Review service will be deployed. Default is 'dev'."

debug : "Whether to enable debug mode for the P4 Code Review service. Default is 'false'."

fully\_qualified\_domain\_name : "The FQDN for the P4 Code Review Service. This is used for the P4 Code Review's Perforce configuration."


# Compute
container\_name : "The name of the P4 Code Review service container. Default is 'p4-code-review-container'."

container\_port : "The port on which the P4 Code Review service will be listening. Default is '3000'."

container\_cpu : "The number of CPU units to reserve for the P4 Code Review service container. Default is '1024'."

container\_memory : "The number of CPU units to reserve for the P4 Code Review service container. Default is '4096'."

pd4\_port : "The full URL you will use to access the P4 Depot in clients such P4V and P4Admin. Note, this typically starts with 'ssl:' and ends with the default port of ':1666'."

p4charset : "The P4CHARSET environment variable to set in the P4 Code Review container."

existing\_redis\_connection : "The existing Redis connection for the P4 Code Review service."


# Storage & Logging
cloudwatch\_log\_retention\_in\_days : "The number of days to retain the P4 Code Review service logs in CloudWatch. Default is 365 days."


# Networking & Security
create\_default\_sgs : "Whether to create default security groups for the P4 Code Review service."

internal : "Set this flag to true if you do not want the P4 Code Review service to have a public IP."

create\_default\_role : "Whether to create the P4 Code Review default IAM Role. Default is set to true."

custom\_role : "ARN of a custom IAM Role you wish to use with P4 Code Review."

super\_user\_password\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review Administrator username."

super\_user\_username\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review Administrator password."

p4d\_p4\_code\_review\_user\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review user's username."

p4d\_p4\_code\_review\_password\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review user's password."

p4d\_p4\_code\_review\_user\_password\_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review user's password."

enable\_sso : "Whether to enable SSO for the P4 Code Review service. Default is set to false."

config\_php\_source : "Used as the ValueFrom for P4CR's config.php. Contents should be base64 encoded, and will be combined with the generated config.php via array\_replace\_recursive."


# Caching
elasticache\_node\_count : "The number of Elasticache nodes to create for the P4 Code Review service. Default is '1'."

elasticache\_node\_type : "The type of Elasticache node to create for the P4 Code Review service. Default is 'cache.t4g.micro'." |
object({
# General
name = optional(string, "p4-code-review")
project_prefix = optional(string, "cgd")
environment = optional(string, "dev")
debug = optional(bool, false)
fully_qualified_domain_name = string

# Compute
container_name = optional(string, "p4-code-review-container")
container_port = optional(number, 80)
container_cpu = optional(number, 1024)
container_memory = optional(number, 4096)
p4d_port = optional(string, null)
p4charset = optional(string, null)
existing_redis_connection = optional(object({
host = string
port = number
}), null)

# Storage & Logging
cloudwatch_log_retention_in_days = optional(number, 365)

# Networking & Security
create_default_sgs = optional(bool, true)
existing_security_groups = optional(list(string), [])
internal = optional(bool, false)
service_subnets = optional(list(string), null)

create_default_role = optional(bool, true)
custom_role = optional(string, null)

super_user_password_secret_arn = optional(string, null)
super_user_username_secret_arn = optional(string, null)
p4_code_review_user_password_secret_arn = optional(string, null)
p4_code_review_user_username_secret_arn = optional(string, null)
enable_sso = optional(string, false)
config_php_source = optional(string, null)

# Caching
elasticache_node_count = optional(number, 1)
elasticache_node_type = optional(string, "cache.t4g.micro")
})
| `null` | no | -| [p4\_server\_config](#input\_p4\_server\_config) | # - General -
name: "The string including in the naming of resources related to P4 Server. Default is 'p4-server'"

project\_prefix: "The project prefix for this workload. This is appended to the beginning of most resource names."

environment: "The current environment (e.g. dev, prod, etc.)"

auth\_service\_url: "The URL for the P4Auth Service."

fully\_qualified\_domain\_name = "The FQDN for the P4 Server. This is used for the P4 Server's Perforce configuration."


# - Compute -
lookup\_existing\_ami : "Whether to lookup the existing Perforce P4 Server AMI."

ami\_prefix: "The AMI prefix to use for the AMI that will be created for P4 Server."

instance\_type: "The instance type for Perforce P4 Server. Defaults to c6g.large."

instance\_architecture: "The architecture of the P4 Server instance. Allowed values are 'arm64' or 'x86\_64'."

IMPORTANT: "Ensure the instance family of the instance type you select supports the instance\_architecture you select. For example, 'c6in' instance family only works for 'x86\_64' architecture, not 'arm64'. For a full list of this mapping, see the AWS Docs for EC2 Naming Conventions: https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-type-names.html"

p4\_server\_type: "The Perforce P4 Server server type. Valid values are 'p4d\_commit' or 'p4d\_replica'."

unicode: "Whether to enable Unicode configuration for P4 Server the -xi flag for p4d. Set to true to enable Unicode support."

selinux: "Whether to apply SELinux label updates for P4 Server. Don't enable this if SELinux is disabled on your target operating system."

case\_sensitive: "Whether or not the server should be case insensitive (Server will run '-C1' mode), or if the server will run with case sensitivity default of the underlying platform. False enables '-C1' mode. Default is set to true."

plaintext: "Whether to enable plaintext authentication for P4 Server. This is not recommended for production environments unless you are using a load balancer for TLS termination. Default is set to false."


# - Storage -
storage\_type: "The type of backing store. Valid values are either 'EBS' or 'FSxN'"

depot\_volume\_size: "The size of the depot volume in GiB. Defaults to 128 GiB."

metadata\_volume\_size: "The size of the metadata volume in GiB. Defaults to 32 GiB."

logs\_volume\_size: "The size of the logs volume in GiB. Defaults to 32 GiB."


# - Networking & Security -
instance\_subnet\_id: "The subnet where the P4 Server instance will be deployed."

instance\_private\_ip: "The private IP address to assign to the P4 Server."

create\_default\_sg : "Whether to create a default security group for the P4 Server instance."

existing\_security\_groups: "A list of existing security group IDs to attach to the P4 Server load balancer."

internal: "Set this flag to true if you do not want the P4 Server instance to have a public IP."

super\_user\_password\_secret\_arn: "If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's username here. Otherwise, the default of 'perforce' will be used."

super\_user\_username\_secret\_arn: "If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's password here."

create\_default\_role: "Optional creation of P4 Server default IAM Role with SSM managed instance core policy attached. Default is set to true."

custom\_role: "ARN of a custom IAM Role you wish to use with P4 Server." |
object({
# General
name = optional(string, "p4-server")
project_prefix = optional(string, "cgd")
environment = optional(string, "dev")
auth_service_url = optional(string, null)
fully_qualified_domain_name = string

# Compute
lookup_existing_ami = optional(bool, true)
ami_prefix = optional(string, "p4_al2023")

instance_type = optional(string, "c6i.large")
instance_architecture = optional(string, "x86_64")
p4_server_type = optional(string, null)

unicode = optional(bool, false)
selinux = optional(bool, false)
case_sensitive = optional(bool, true)
plaintext = optional(bool, false)

# Storage
storage_type = optional(string, "EBS")
depot_volume_size = optional(number, 128)
metadata_volume_size = optional(number, 32)
logs_volume_size = optional(number, 32)

# Networking & Security
instance_subnet_id = optional(string, null)
instance_private_ip = optional(string, null)
create_default_sg = optional(bool, true)
existing_security_groups = optional(list(string), [])
internal = optional(bool, false)

super_user_password_secret_arn = optional(string, null)
super_user_username_secret_arn = optional(string, null)

create_default_role = optional(bool, true)
custom_role = optional(string, null)

# FSxN
fsxn_password = optional(string, null)
fsxn_filesystem_security_group_id = optional(string, null)
protocol = optional(string, null)
fsxn_region = optional(string, null)
fsxn_management_ip = optional(string, null)
fsxn_svm_name = optional(string, null)
amazon_fsxn_svm_id = optional(string, null)
fsxn_aws_profile = optional(string, null)
})
| `null` | no | +| [p4\_code\_review\_config](#input\_p4\_code\_review\_config) | # General
name: "The string including in the naming of resources related to P4 Code Review. Default is 'p4-code-review'."

project\_prefix : "The project prefix for the P4 Code Review service. Default is 'cgd'."

environment : "The environment where the P4 Code Review service will be deployed. Default is 'dev'."

fully\_qualified\_domain\_name : "The FQDN for the P4 Code Review Service. This is used for the P4 Code Review's Perforce configuration."


# Compute
application\_port : "The port on which the P4 Code Review service will be listening. Default is '80'."

instance\_type : "EC2 instance type for running P4 Code Review. Default is 'm5.large'."

ami\_id : "Optional AMI ID for P4 Code Review. If not provided, will use the latest Packer-built AMI."

p4d\_port : "The full URL you will use to access the P4 Depot in clients such as P4V and P4Admin. Note, this typically starts with 'ssl:' and ends with the default port of ':1666'."

p4charset : "The P4CHARSET environment variable to set for the P4 Code Review instance."

existing\_redis\_connection : "The existing Redis connection for the P4 Code Review service."


# Storage & Logging
cloudwatch\_log\_retention\_in\_days : "The number of days to retain the P4 Code Review service logs in CloudWatch. Default is 365 days."

ebs\_volume\_size : "Size in GiB for the EBS volume that stores P4 Code Review data. Default is '20'."

ebs\_volume\_type : "EBS volume type for P4 Code Review data storage. Default is 'gp3'."

ebs\_volume\_encrypted : "Enable encryption for the EBS volume storing P4 Code Review data. Default is 'true'."

ebs\_availability\_zone : "Availability zone for the EBS volume. Must match the EC2 instance AZ."


# Networking & Security
create\_default\_sgs : "Whether to create default security groups for the P4 Code Review service."

internal : "Set this flag to true if you do not want the P4 Code Review service to have a public IP."

instance\_subnet\_id : "The subnet ID where the EC2 instance will be launched. Should be a private subnet for security."

super\_user\_password\_secret\_arn : "Optionally provide the ARN of an AWS Secret for the P4 Server super user password. The super user is used for both Swarm runtime operations and administrative tasks."

custom\_config : "JSON string with additional Swarm configuration to merge with the generated config.php. Use this for SSO/SAML setup, notifications, Jira integration, etc."


# Caching
elasticache\_node\_count : "The number of Elasticache nodes to create for the P4 Code Review service. Default is '1'."

elasticache\_node\_type : "The type of Elasticache node to create for the P4 Code Review service. Default is 'cache.t4g.micro'." |
object({
# General
name = optional(string, "p4-code-review")
project_prefix = optional(string, "cgd")
environment = optional(string, "dev")
fully_qualified_domain_name = string

# Compute
application_port = optional(number, 80)
instance_type = optional(string, "m5.large")
ami_id = optional(string, null)
p4d_port = optional(string, null)
p4charset = optional(string, null)
existing_redis_connection = optional(object({
host = string
port = number
}), null)

# Storage & Logging
cloudwatch_log_retention_in_days = optional(number, 365)
ebs_volume_size = optional(number, 20)
ebs_volume_type = optional(string, "gp3")
ebs_volume_encrypted = optional(bool, true)
ebs_availability_zone = optional(string, null)

# Networking & Security
create_default_sgs = optional(bool, true)
existing_security_groups = optional(list(string), [])
internal = optional(bool, false)
service_subnets = optional(list(string), null)
instance_subnet_id = string

super_user_password_secret_arn = optional(string, null)
custom_config = optional(string, null)

# Caching
elasticache_node_count = optional(number, 1)
elasticache_node_type = optional(string, "cache.t4g.micro")
})
| `null` | no | +| [p4\_server\_config](#input\_p4\_server\_config) | # - General -
name: "The string including in the naming of resources related to P4 Server. Default is 'p4-server'"

project\_prefix: "The project prefix for this workload. This is appended to the beginning of most resource names."

environment: "The current environment (e.g. dev, prod, etc.)"

auth\_service\_url: "The URL for the P4Auth Service."

fully\_qualified\_domain\_name: "The FQDN for the P4 Server. This is used for the P4 Server's Perforce configuration."


# - Compute -
lookup\_existing\_ami : "Whether to lookup the existing Perforce P4 Server AMI."

ami\_prefix: "The AMI prefix to use for the AMI that will be created for P4 Server."

instance\_type: "The instance type for Perforce P4 Server. Defaults to c6i.large."

instance\_architecture: "The architecture of the P4 Server instance. Allowed values are 'arm64' or 'x86\_64'."

IMPORTANT: "Ensure the instance family of the instance type you select supports the instance\_architecture you select. For example, 'c6in' instance family only works for 'x86\_64' architecture, not 'arm64'. For a full list of this mapping, see the AWS Docs for EC2 Naming Conventions: https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-type-names.html"

p4\_server\_type: "The Perforce P4 Server type. Valid values are 'p4d\_commit' or 'p4d\_replica'."

unicode: "Whether to enable Unicode configuration for P4 Server (the '-xi' flag for p4d). Set to true to enable Unicode support."

selinux: "Whether to apply SELinux label updates for P4 Server. Don't enable this if SELinux is disabled on your target operating system."

case\_sensitive: "Whether the server should run with the case-sensitivity default of the underlying platform (true) or in case-insensitive '-C1' mode (false). Default is set to true."

plaintext: "Whether to enable plaintext authentication for P4 Server. This is not recommended for production environments unless you are using a load balancer for TLS termination. Default is set to false."


# - Storage -
storage\_type: "The type of backing store. Valid values are either 'EBS' or 'FSxN'"

depot\_volume\_size: "The size of the depot volume in GiB. Defaults to 128 GiB."

metadata\_volume\_size: "The size of the metadata volume in GiB. Defaults to 32 GiB."

logs\_volume\_size: "The size of the logs volume in GiB. Defaults to 32 GiB."


# - Networking & Security -
instance\_subnet\_id: "The subnet where the P4 Server instance will be deployed."

instance\_private\_ip: "The private IP address to assign to the P4 Server."

create\_default\_sg : "Whether to create a default security group for the P4 Server instance."

existing\_security\_groups: "A list of existing security group IDs to attach to the P4 Server load balancer."

internal: "Set this flag to true if you do not want the P4 Server instance to have a public IP."

admin\_username: "Username for the Perforce admin account. The 'super' service account is always created automatically for internal tooling. Default is 'perforce'."

admin\_password\_secret\_arn: "Optional ARN of existing Secrets Manager secret for admin password. If not provided, a password will be auto-generated."

create\_default\_role: "Optional creation of P4 Server default IAM Role with SSM managed instance core policy attached. Default is set to true."

custom\_role: "ARN of a custom IAM Role you wish to use with P4 Server." |
object({
# General
name = optional(string, "p4-server")
project_prefix = optional(string, "cgd")
environment = optional(string, "dev")
auth_service_url = optional(string, null)
fully_qualified_domain_name = string

# Compute
lookup_existing_ami = optional(bool, true)
ami_prefix = optional(string, "p4_al2023")

instance_type = optional(string, "c6i.large")
instance_architecture = optional(string, "x86_64")
p4_server_type = optional(string, null)

unicode = optional(bool, false)
selinux = optional(bool, false)
case_sensitive = optional(bool, true)
plaintext = optional(bool, false)

# Storage
storage_type = optional(string, "EBS")
depot_volume_size = optional(number, 128)
metadata_volume_size = optional(number, 32)
logs_volume_size = optional(number, 32)

# Networking & Security
instance_subnet_id = optional(string, null)
instance_private_ip = optional(string, null)
create_default_sg = optional(bool, true)
existing_security_groups = optional(list(string), [])
internal = optional(bool, false)

admin_username = optional(string, "perforce")
admin_password_secret_arn = optional(string, null)

create_default_role = optional(bool, true)
custom_role = optional(string, null)

# FSxN
fsxn_password = optional(string, null)
fsxn_filesystem_security_group_id = optional(string, null)
protocol = optional(string, null)
fsxn_region = optional(string, null)
fsxn_management_ip = optional(string, null)
fsxn_svm_name = optional(string, null)
amazon_fsxn_svm_id = optional(string, null)
fsxn_aws_profile = optional(string, null)
})
| `null` | no | | [project\_prefix](#input\_project\_prefix) | The project prefix for this workload. This is appended to the beginning of most resource names. | `string` | `"cgd"` | no | | [route53\_private\_hosted\_zone\_name](#input\_route53\_private\_hosted\_zone\_name) | The name of the private Route53 Hosted Zone for the Perforce resources. | `string` | `null` | no | | [s3\_enable\_force\_destroy](#input\_s3\_enable\_force\_destroy) | Enables force destroy for the S3 bucket for both the shared NLB and shared ALB access log storage. Defaults to true. | `bool` | `true` | no | @@ -247,19 +255,17 @@ packer build perforce_x86.pkr.hcl | [p4\_code\_review\_alb\_dns\_name](#output\_p4\_code\_review\_alb\_dns\_name) | The DNS name of the P4 Code Review ALB. | | [p4\_code\_review\_alb\_security\_group\_id](#output\_p4\_code\_review\_alb\_security\_group\_id) | Security group associated with the P4 Code Review load balancer. | | [p4\_code\_review\_alb\_zone\_id](#output\_p4\_code\_review\_alb\_zone\_id) | The hosted zone ID of the P4 Code Review ALB. | -| [p4\_code\_review\_default\_role\_id](#output\_p4\_code\_review\_default\_role\_id) | The default role for the P4 Code Review service task | -| [p4\_code\_review\_execution\_role\_id](#output\_p4\_code\_review\_execution\_role\_id) | The default role for the P4 Code Review service task | -| [p4\_code\_review\_perforce\_cluster\_name](#output\_p4\_code\_review\_perforce\_cluster\_name) | Name of the ECS cluster hosting P4 Code Review. | -| [p4\_code\_review\_service\_security\_group\_id](#output\_p4\_code\_review\_service\_security\_group\_id) | Security group associated with the ECS service running P4 Code Review. | +| [p4\_code\_review\_service\_security\_group\_id](#output\_p4\_code\_review\_service\_security\_group\_id) | Security group associated with P4 Code Review application. 
| | [p4\_code\_review\_target\_group\_arn](#output\_p4\_code\_review\_target\_group\_arn) | The service target group for the P4 Code Review. | +| [p4\_server\_admin\_password\_secret\_arn](#output\_p4\_server\_admin\_password\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding the admin account password. | +| [p4\_server\_admin\_username\_secret\_arn](#output\_p4\_server\_admin\_username\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding the admin account username. | | [p4\_server\_eip\_id](#output\_p4\_server\_eip\_id) | The ID of the Elastic IP associated with your P4 Server instance. | | [p4\_server\_eip\_public\_ip](#output\_p4\_server\_eip\_public\_ip) | The public IP of your P4 Server instance. | | [p4\_server\_instance\_id](#output\_p4\_server\_instance\_id) | Instance ID for the P4 Server instance | | [p4\_server\_lambda\_link\_name](#output\_p4\_server\_lambda\_link\_name) | The name of the Lambda link for the P4 Server instance to use with FSxN. | | [p4\_server\_private\_ip](#output\_p4\_server\_private\_ip) | Private IP for the P4 Server instance | | [p4\_server\_security\_group\_id](#output\_p4\_server\_security\_group\_id) | The default security group of your P4 Server instance. | -| [p4\_server\_super\_user\_password\_secret\_arn](#output\_p4\_server\_super\_user\_password\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding your P4 Server super user's username. | -| [p4\_server\_super\_user\_username\_secret\_arn](#output\_p4\_server\_super\_user\_username\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding your P4 Server super user's password. | +| [p4\_server\_super\_password\_secret\_arn](#output\_p4\_server\_super\_password\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding the service account (super) password. | | [shared\_application\_load\_balancer\_arn](#output\_shared\_application\_load\_balancer\_arn) | The ARN of the shared application load balancer. 
| | [shared\_network\_load\_balancer\_arn](#output\_shared\_network\_load\_balancer\_arn) | The ARN of the shared network load balancer. | diff --git a/modules/perforce/assets/media/diagrams/p4-code-review-architecture.png b/modules/perforce/assets/media/diagrams/p4-code-review-architecture.png index 329d09ed..546e43f8 100644 Binary files a/modules/perforce/assets/media/diagrams/p4-code-review-architecture.png and b/modules/perforce/assets/media/diagrams/p4-code-review-architecture.png differ diff --git a/modules/perforce/examples/create-resources-complete/main.tf b/modules/perforce/examples/create-resources-complete/main.tf index 2b64079e..d9839325 100644 --- a/modules/perforce/examples/create-resources-complete/main.tf +++ b/modules/perforce/examples/create-resources-complete/main.tf @@ -1,6 +1,9 @@ module "terraform-aws-perforce" { source = "../../" + # Ensure module is destroyed before IGW to prevent "mapped public address" errors + depends_on = [aws_internet_gateway.igw] + # - Shared - project_prefix = local.project_prefix vpc_id = aws_vpc.perforce_vpc.id @@ -51,13 +54,15 @@ module "terraform-aws-perforce" { name = "p4-code-review" fully_qualified_domain_name = local.p4_code_review_fully_qualified_domain_name existing_security_groups = [aws_security_group.allow_my_ip.id] - debug = true # optional to use for debugging. Default is false if omitted - deregistration_delay = 0 service_subnets = aws_subnet.private_subnets[*].id - # Allow ECS tasks to be immediately deregistered from target group. 
Helps to prevent race conditions during `terraform destroy` + instance_subnet_id = aws_subnet.private_subnets[0].id - # Configuration - enable_sso = true + # SSO Configuration - uses HAS for authentication + custom_config = jsonencode({ + p4 = { + sso = "optional" # "optional" allows both SSO and password login, "enabled" forces SSO + } + }) } } diff --git a/modules/perforce/main.tf b/modules/perforce/main.tf index f499c696..946f270a 100644 --- a/modules/perforce/main.tf +++ b/modules/perforce/main.tf @@ -16,8 +16,8 @@ module "p4_server" { var.p4_auth_config != null ? ( var.create_route53_private_hosted_zone ? - "auth.${aws_route53_zone.perforce_private_hosted_zone[0].name}" : - module.p4_auth[0].alb_dns_name + "https://auth.${aws_route53_zone.perforce_private_hosted_zone[0].name}" : + "https://${module.p4_auth[0].alb_dns_name}" ) : null ) @@ -49,16 +49,16 @@ module "p4_server" { fsxn_management_ip = var.p4_server_config.fsxn_management_ip # Networking & Security - vpc_id = var.vpc_id - instance_subnet_id = var.p4_server_config.instance_subnet_id - instance_private_ip = var.p4_server_config.instance_private_ip - create_default_sg = var.p4_server_config.create_default_sg - existing_security_groups = var.p4_server_config.existing_security_groups - internal = var.p4_server_config.internal - super_user_password_secret_arn = var.p4_server_config.super_user_password_secret_arn - super_user_username_secret_arn = var.p4_server_config.super_user_username_secret_arn - create_default_role = var.p4_server_config.create_default_role - custom_role = var.p4_server_config.custom_role + vpc_id = var.vpc_id + instance_subnet_id = var.p4_server_config.instance_subnet_id + instance_private_ip = var.p4_server_config.instance_private_ip + create_default_sg = var.p4_server_config.create_default_sg + existing_security_groups = var.p4_server_config.existing_security_groups + internal = var.p4_server_config.internal + admin_username = var.p4_server_config.admin_username + 
admin_password_secret_arn = var.p4_server_config.admin_password_secret_arn + create_default_role = var.p4_server_config.create_default_role + custom_role = var.p4_server_config.custom_role } @@ -125,18 +125,12 @@ module "p4_code_review" { # General name = var.p4_code_review_config.name project_prefix = var.p4_code_review_config.project_prefix - debug = var.p4_code_review_config.debug fully_qualified_domain_name = var.p4_code_review_config.fully_qualified_domain_name - cluster_name = ( - var.existing_ecs_cluster_name != null ? - var.existing_ecs_cluster_name : - aws_ecs_cluster.perforce_web_services_cluster[0].name - ) - container_name = var.p4_code_review_config.container_name - container_port = var.p4_code_review_config.container_port - container_cpu = var.p4_code_review_config.container_cpu - container_memory = var.p4_code_review_config.container_memory + # Compute + application_port = var.p4_code_review_config.application_port + instance_type = var.p4_code_review_config.instance_type + ami_id = var.p4_code_review_config.ami_id p4d_port = var.p4_code_review_config.p4d_port != null ? var.p4_code_review_config.p4d_port : local.p4_port p4charset = var.p4_code_review_config.p4charset != null ? var.p4_code_review_config.p4charset : ( var.p4_server_config != null ? 
( @@ -146,28 +140,30 @@ module "p4_code_review" { existing_redis_connection = var.p4_code_review_config.existing_redis_connection # Storage & Logging + ebs_volume_size = var.p4_code_review_config.ebs_volume_size + ebs_volume_type = var.p4_code_review_config.ebs_volume_type + ebs_volume_encrypted = var.p4_code_review_config.ebs_volume_encrypted + ebs_availability_zone = var.p4_code_review_config.ebs_availability_zone enable_alb_access_logs = false cloudwatch_log_retention_in_days = var.p4_code_review_config.cloudwatch_log_retention_in_days # Networking & Security - vpc_id = var.vpc_id - subnets = var.p4_code_review_config.service_subnets + vpc_id = var.vpc_id + subnets = var.p4_code_review_config.service_subnets + instance_subnet_id = var.p4_code_review_config.instance_subnet_id create_application_load_balancer = false internal = var.p4_code_review_config.internal - create_default_role = var.p4_code_review_config.create_default_role - custom_role = var.p4_code_review_config.custom_role + # ElastiCache Redis + elasticache_node_count = var.p4_code_review_config.elasticache_node_count + elasticache_node_type = var.p4_code_review_config.elasticache_node_type - super_user_password_secret_arn = module.p4_server[0].super_user_password_secret_arn - super_user_username_secret_arn = module.p4_server[0].super_user_username_secret_arn - p4_code_review_user_password_secret_arn = module.p4_server[0].super_user_password_secret_arn - p4_code_review_user_username_secret_arn = module.p4_server[0].super_user_username_secret_arn + super_user_password_secret_arn = module.p4_server[0].super_password_secret_arn - enable_sso = var.p4_code_review_config.enable_sso - config_php_source = var.p4_code_review_config.config_php_source + custom_config = var.p4_code_review_config.custom_config - depends_on = [aws_ecs_cluster.perforce_web_services_cluster[0]] + depends_on = [module.p4_server] } ################################################# diff --git 
a/modules/perforce/modules/p4-code-review/README.md b/modules/perforce/modules/p4-code-review/README.md index 3d11d59b..6e9a6fc0 100644 --- a/modules/perforce/modules/p4-code-review/README.md +++ b/modules/perforce/modules/p4-code-review/README.md @@ -1,44 +1,43 @@ # P4 Code Review Submodule -[P4 Code Review](https://www.perforce.com/products/helix-swarm) is a free code review tool for projects hosted in [P4 Server](https://www.perforce.com/products/helix-core/aws). This module deploys P4 Code Review as a service on AWS Elastic Container Service using the [publicly available image from Dockerhub](https://hub.docker.com/r/perforce/helix-swarm). +[P4 Code Review](https://www.perforce.com/products/helix-swarm) is a free code review tool for projects hosted in [P4 Server](https://www.perforce.com/products/helix-core/aws). This module deploys P4 Code Review on an EC2 Auto Scaling Group using a custom AMI built with [Packer](../../../../assets/packer/perforce/p4-code-review/README.md). P4 Code Review also relies on a Redis cache. The module provisions a single node AWS Elasticache Redis OSS cluster and configures connectivity for the P4 Code Review service. This module deploys the following resources: -- An Elastic Container Service (ECS) cluster backed by AWS Fargate. This can also be created externally and passed in via the `cluster_name` variable. -- An ECS service running the latest P4 Code Review container ([perforce/helix-swarm](https://hub.docker.com/r/perforce/helix-swarm)) available. +- An EC2 Auto Scaling Group running the P4 Code Review AMI (built using the [Packer template](../../../../assets/packer/perforce/p4-code-review/README.md)). +- A persistent EBS volume for P4 Code Review data that survives instance replacement. - An Application Load Balancer for TLS termination of the P4 Code Review service. - A single node [AWS Elasticache Redis OSS](https://aws.amazon.com/elasticache/redis/) cluster. 
-- Supporting resources such as Cloudwatch log groups, IAM roles, and security groups. +- Supporting resources such as CloudWatch log groups, IAM roles, and security groups. ## Architecture -![P4 Code Review Submodule Architecture](../../assets/media/diagrams/p4-code-review-architecture.png) +![P4 Code Review Architecture](../../assets/media/diagrams/p4-code-review-architecture.png) ## Prerequisites -P4 Code Review needs to be able to connect to a P4 Server. P4 Code Review leverages the same authentication mechanism as P4 Server, and needs to install required plugins on the upstream P4 Server instance during setup. This happens automatically, but P4 Code Review requires an administrative user's credentials to be able to initially connect. These credentials are provided to the module through variables specifying AWS Secrets Manager secrets, and then pulled into the P4 Code Review container during startup. See the `p4d_super_user_arn`, `p4d_super_user_password_arn`, `p4d_swarm_user_arn`, and `p4d_swarm_password_arn` variables below for more details. +P4 Code Review needs to be able to connect to a P4 Server. P4 Code Review leverages the same authentication mechanism as P4 Server, and needs to install required plugins on the upstream P4 Server instance during setup. This happens automatically using the P4 Server's `super` user credentials, which are provided to the module through the `super_user_password_secret_arn` variable and pulled into the P4 Code Review instance during startup. -The [P4 Server submodule](../p4-server/README.md) creates an administrative user on initial deployment, and stores the credentials in AWS Secrets manager. The ARN of the credentials secret is then made available as a Terraform output from the module, and can be referenced elsewhere. The is done by default by the parent Perforce module. +The [P4 Server submodule](../p4-server/README.md) creates the `super` user on initial deployment and stores the password in AWS Secrets Manager. 
The ARN of the secret is then made available as a Terraform output from the module and can be referenced elsewhere. This is done by default by the parent Perforce module. -Should you need to manually create the administrative user secret the following AWS CLI command may prove useful: +Should you need to manually create the super user password secret, the following AWS CLI command may prove useful: ```bash aws secretsmanager create-secret \ - --name P4CodeReviewSuperUser \ - --description "P4 Code Review Super User" \ - --secret-string "{\"username\":\"swarm\",\"password\":\"EXAMPLE-PASSWORD\"}" + --name P4SuperUserPassword \ + --description "P4 Server Super User Password" \ + --secret-string "EXAMPLE-PASSWORD" ``` -You can then provide these credentials as variables when you define the P4 Code Review module in your Terraform configurations (the parent Perforce module does this for you): +You can then provide this credential as a variable when you define the P4 Code Review module in your Terraform configurations (the parent Perforce module does this for you): ```hcl module "p4_code_review" { source = "modules/perforce/modules/p4-code-review" ... - p4d_super_user_arn = "arn:aws:secretsmanager:::secret:P4CodeReviewSuperUser-a1b2c3:username::" - p4d_super_user_password_arn = "arn:aws:secretsmanager:::secret:P4CodeReviewSuperUser-a1b2c3:password::" + super_user_password_secret_arn = "arn:aws:secretsmanager:::secret:P4SuperUserPassword-a1b2c3" } ``` @@ -50,6 +49,123 @@ If you're running into issues with P4 Code Review, here are some common log file - `/opt/perforce/swarm/data/configure-swarm.log`: errors coming from p4cr configuration - `/opt/perforce/swarm/data/log`: errors from the p4cr runtime +## Custom Configuration + +The `custom_config` variable allows you to pass additional configuration to P4 Code Review as a JSON string. This configuration is merged with the generated `config.php` using PHP's `array_replace_recursive` function at instance startup. 
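To illustrate the merge semantics, here is a minimal Python sketch that mimics PHP's `array_replace_recursive` on dict-shaped configuration. The `generated` values below are hypothetical placeholders, not the actual generated `config.php`: keys you supply replace the generated ones, nested maps merge recursively, and everything you do not mention is preserved.

```python
def array_replace_recursive(base, overrides):
    """Mimic PHP's array_replace_recursive for dict-shaped config:
    overriding keys win, nested dicts merge, untouched keys are kept."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = array_replace_recursive(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical stand-in for the generated config.php contents
generated = {
    "p4": {"port": "ssl:perforce:1666", "sso": "disabled"},
    "redis": {"options": {"password": None}},
}
# Equivalent of the custom_config JSON supplied through Terraform
custom = {"p4": {"sso": "optional"}}

merged = array_replace_recursive(generated, custom)
# p4.sso is replaced, while p4.port and the redis section survive untouched
```

This is why the examples that follow only specify the keys they change rather than restating the full configuration.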
+ +This can be used to configure: + +- SSO/SAML authentication +- Email notifications +- Jira integration +- Project settings +- And any other [Swarm configuration option](https://www.perforce.com/manuals/swarm/Content/Swarm/admin.configuration.html) + +### Example: SSO/SAML with Auth0 + +SSO/SAML configuration requires two parts: + +1. **`p4.sso`** - Enables the SSO login option. Values: + - `"disabled"` - No SSO, only password login (default) + - `"optional"` - Both SSO and password login available + - `"enabled"` - SSO only, no password login + +2. **`saml`** - The SAML technical configuration (IdP/SP settings, certificates) + +```hcl +module "p4_code_review" { + source = "modules/perforce/modules/p4-code-review" + # ... other required variables ... + + custom_config = jsonencode({ + # Enable SSO login option + p4 = { + sso = "optional" + } + # SAML configuration + saml = { + header = "Log in with SSO" + sp = { + entityId = "https://swarm.example.com" + assertionConsumerService = { + url = "https://swarm.example.com/saml/acs" + } + singleLogoutService = { + url = "https://swarm.example.com/saml/sls" + } + NameIDFormat = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" + } + idp = { + entityId = "urn:your-auth0-domain" + singleSignOnService = { + url = "https://your-auth0-domain/samlp/YOUR_CLIENT_ID" + } + singleLogoutService = { + url = "https://your-auth0-domain/samlp/YOUR_CLIENT_ID/logout" + } + x509cert = "YOUR_IDP_CERTIFICATE_HERE" + } + } + }) +} +``` + +### Example: Email Notifications + +```hcl +module "p4_code_review" { + source = "modules/perforce/modules/p4-code-review" + # ... other required variables ... + + custom_config = jsonencode({ + mail = { + transport = { + host = "smtp.example.com" + port = 587 + security = "tls" + } + sender = "swarm@example.com" + } + }) +} +``` + +### Example: Jira Integration + +```hcl +module "p4_code_review" { + source = "modules/perforce/modules/p4-code-review" + # ... other required variables ... 
+ + custom_config = jsonencode({ + jira = { + host = "https://your-company.atlassian.net" + user = "jira-user@example.com" + password = "your-api-token" + job_field = "customfield_10001" + } + }) +} +``` + +### Combining Multiple Configurations + +You can combine multiple configuration sections in a single `custom_config`: + +```hcl +custom_config = jsonencode({ + saml = { + # SSO configuration... + } + mail = { + # Email configuration... + } + jira = { + # Jira configuration... + } +}) +``` + ## Requirements @@ -75,21 +191,20 @@ No modules. | Name | Type | |------|------| -| [aws_cloudwatch_log_group.log_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) | resource | +| [aws_autoscaling_group.swarm_asg](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group) | resource | +| [aws_cloudwatch_log_group.application_log_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) | resource | | [aws_cloudwatch_log_group.redis_service_log_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) | resource | -| [aws_ecs_cluster.cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_cluster) | resource | -| [aws_ecs_cluster_capacity_providers.cluster_fargate_providers](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_cluster_capacity_providers) | resource | -| [aws_ecs_service.service](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service) | resource | -| [aws_ecs_task_definition.task_definition](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition) | resource | +| [aws_ebs_volume.swarm_data](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_volume) | resource | | 
[aws_elasticache_cluster.cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster) | resource | | [aws_elasticache_subnet_group.subnet_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_subnet_group) | resource | -| [aws_iam_policy.default_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource | +| [aws_iam_instance_profile.ec2_instance_profile](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile) | resource | +| [aws_iam_policy.ebs_attachment_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource | | [aws_iam_policy.secrets_manager_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource | -| [aws_iam_role.default_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource | -| [aws_iam_role.task_execution_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource | -| [aws_iam_role_policy_attachment.default_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource | -| [aws_iam_role_policy_attachment.p4_auth_task_execution_role_ecs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource | -| [aws_iam_role_policy_attachment.p4_auth_task_execution_role_secrets_manager](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource | +| [aws_iam_role.ec2_instance_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource | +| 
[aws_iam_role_policy_attachment.ec2_instance_role_ebs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource | +| [aws_iam_role_policy_attachment.ec2_instance_role_secrets_manager](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource | +| [aws_iam_role_policy_attachment.ec2_instance_role_ssm](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource | +| [aws_launch_template.swarm_instance](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template) | resource | | [aws_lb.alb](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb) | resource | | [aws_lb_listener.alb_https_listener](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_listener) | resource | | [aws_lb_target_group.alb_target_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group) | resource | @@ -98,62 +213,64 @@ No modules. 
| [aws_s3_bucket_policy.alb_access_logs_bucket_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_policy) | resource | | [aws_s3_bucket_public_access_block.access_logs_bucket_public_block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_public_access_block) | resource | | [aws_security_group.alb](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource | -| [aws_security_group.ecs_service](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource | +| [aws_security_group.application](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource | +| [aws_security_group.ec2_instance](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource | | [aws_security_group.elasticache](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource | -| [aws_vpc_security_group_egress_rule.alb_outbound_to_ecs_service](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | -| [aws_vpc_security_group_egress_rule.ecs_service_outbound_to_internet_ipv4](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | -| [aws_vpc_security_group_egress_rule.ecs_service_outbound_to_internet_ipv6](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | -| [aws_vpc_security_group_ingress_rule.ecs_service_inbound_alb](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource | -| 
[aws_vpc_security_group_ingress_rule.elasticache_inbound_from_ecs_service](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource | +| [aws_vpc_security_group_egress_rule.alb_outbound_to_application](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | +| [aws_vpc_security_group_egress_rule.application_outbound_to_internet_ipv4](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | +| [aws_vpc_security_group_egress_rule.application_outbound_to_internet_ipv6](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | +| [aws_vpc_security_group_egress_rule.ec2_instance_outbound_to_internet_ipv4](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | +| [aws_vpc_security_group_egress_rule.ec2_instance_outbound_to_internet_ipv6](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | +| [aws_vpc_security_group_ingress_rule.alb_inbound_from_application](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource | +| [aws_vpc_security_group_ingress_rule.application_inbound_alb](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource | +| [aws_vpc_security_group_ingress_rule.elasticache_inbound_from_application](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource | | [random_string.alb_access_logs_bucket_suffix](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) | resource | | 
[random_string.p4_code_review](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) | resource | -| [aws_ecs_cluster.cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecs_cluster) | data source | +| [aws_ami.p4_code_review](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) | data source | +| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source | | [aws_elb_service_account.main](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/elb_service_account) | data source | | [aws_iam_policy_document.access_logs_bucket_alb_write](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | -| [aws_iam_policy_document.default_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | -| [aws_iam_policy_document.ecs_tasks_trust_relationship](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | +| [aws_iam_policy_document.ebs_attachment_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | +| [aws_iam_policy_document.ec2_instance_trust_relationship](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | | [aws_iam_policy_document.secrets_manager_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source | +| [aws_subnet.instance_subnet](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnet) | data source | ## Inputs | Name | Description 
| Type | Default | Required | |------|-------------|------|---------|:--------:| -| [p4\_code\_review\_user\_password\_secret\_arn](#input\_p4\_code\_review\_user\_password\_secret\_arn) | Optionally provide the ARN of an AWS Secret for the p4d P4 Code Review password. | `string` | n/a | yes | -| [p4\_code\_review\_user\_username\_secret\_arn](#input\_p4\_code\_review\_user\_username\_secret\_arn) | Optionally provide the ARN of an AWS Secret for the p4d P4 Code Review username. | `string` | n/a | yes | -| [subnets](#input\_subnets) | A list of subnets to deploy the P4 Code Review ECS Service into. Private subnets are recommended. | `list(string)` | n/a | yes | -| [super\_user\_password\_secret\_arn](#input\_super\_user\_password\_secret\_arn) | Optionally provide the ARN of an AWS Secret for the p4d super user password. | `string` | n/a | yes | -| [super\_user\_username\_secret\_arn](#input\_super\_user\_username\_secret\_arn) | Optionally provide the ARN of an AWS Secret for the p4d super user username. | `string` | n/a | yes | +| [instance\_subnet\_id](#input\_instance\_subnet\_id) | The subnet ID where the EC2 instance will be launched. Should be a private subnet for security. | `string` | n/a | yes | +| [subnets](#input\_subnets) | A list of subnets for ElastiCache Redis deployment. Private subnets are recommended. | `list(string)` | n/a | yes | +| [super\_user\_password\_secret\_arn](#input\_super\_user\_password\_secret\_arn) | ARN of the AWS Secrets Manager secret containing the P4 super user password. The super user is used for both Swarm runtime operations and administrative tasks. | `string` | n/a | yes | | [vpc\_id](#input\_vpc\_id) | The ID of the existing VPC you would like to deploy P4 Code Review into. | `string` | n/a | yes | | [alb\_access\_logs\_bucket](#input\_alb\_access\_logs\_bucket) | ID of the S3 bucket for P4 Code Review ALB access log storage. If access logging is enabled and this is null the module creates a bucket. 
| `string` | `null` | no | | [alb\_access\_logs\_prefix](#input\_alb\_access\_logs\_prefix) | Log prefix for P4 Code Review ALB access logs. If null the project prefix and module name are used. | `string` | `null` | no | | [alb\_subnets](#input\_alb\_subnets) | A list of subnets to deploy the load balancer into. Public subnets are recommended. | `list(string)` | `[]` | no | +| [ami\_id](#input\_ami\_id) | Optional AMI ID for P4 Code Review. If not provided, the module will use the latest Packer-built AMI with name pattern 'p4\_code\_review\_ubuntu-*'. | `string` | `null` | no | | [application\_load\_balancer\_name](#input\_application\_load\_balancer\_name) | The name of the P4 Code Review ALB. Defaults to the project prefix and module name. | `string` | `null` | no | +| [application\_port](#input\_application\_port) | The port that P4 Code Review listens on. Used for ALB target group configuration. | `number` | `80` | no | | [certificate\_arn](#input\_certificate\_arn) | The TLS certificate ARN for the P4 Code Review service load balancer. | `string` | `null` | no | | [cloudwatch\_log\_retention\_in\_days](#input\_cloudwatch\_log\_retention\_in\_days) | The log retention in days of the cloudwatch log group for P4 Code Review. | `string` | `365` | no |
| `number` | `2048` | no | -| [container\_name](#input\_container\_name) | The name of the P4 Code Review container. | `string` | `"p4-code-review-container"` | no | -| [container\_port](#input\_container\_port) | The container port that P4 Code Review runs on. | `number` | `80` | no | | [create\_application\_load\_balancer](#input\_create\_application\_load\_balancer) | This flag controls the creation of an application load balancer as part of the module. | `bool` | `true` | no | -| [create\_default\_role](#input\_create\_default\_role) | Optional creation of P4 Code Review Default IAM Role. Default is set to true. | `bool` | `true` | no | -| [custom\_role](#input\_custom\_role) | ARN of the custom IAM Role you wish to use with P4 Code Review. | `string` | `null` | no | -| [debug](#input\_debug) | Debug flag to enable execute command on service for container access. | `bool` | `false` | no | +| [custom\_config](#input\_custom\_config) | JSON string with additional Swarm configuration to merge with the generated config.php. Use this for SSO/SAML setup, notifications, Jira integration, etc. See README for examples. | `string` | `null` | no | | [deregistration\_delay](#input\_deregistration\_delay) | The amount of time to wait for in-flight requests to complete while deregistering a target. The range is 0-3600 seconds. | `number` | `30` | no | +| [ebs\_availability\_zone](#input\_ebs\_availability\_zone) | Availability zone for the EBS volume. Must match the EC2 instance AZ. If not provided, will use the AZ of the instance\_subnet\_id. | `string` | `null` | no | +| [ebs\_volume\_encrypted](#input\_ebs\_volume\_encrypted) | Enable encryption for the EBS volume storing P4 Code Review data. | `bool` | `true` | no | +| [ebs\_volume\_size](#input\_ebs\_volume\_size) | Size in GB for the EBS volume that stores P4 Code Review data (/opt/perforce/swarm/data). This volume persists across instance replacement. 
| `number` | `20` | no | +| [ebs\_volume\_type](#input\_ebs\_volume\_type) | EBS volume type for P4 Code Review data storage. | `string` | `"gp3"` | no | | [elasticache\_node\_count](#input\_elasticache\_node\_count) | Number of cache nodes to provision in the Elasticache cluster. | `number` | `1` | no | | [elasticache\_node\_type](#input\_elasticache\_node\_type) | The type of nodes provisioned in the Elasticache cluster. | `string` | `"cache.t4g.micro"` | no | | [enable\_alb\_access\_logs](#input\_enable\_alb\_access\_logs) | Enables access logging for the P4 Code Review ALB. Defaults to false. | `bool` | `false` | no | | [enable\_alb\_deletion\_protection](#input\_enable\_alb\_deletion\_protection) | Enables deletion protection for the P4 Code Review ALB. Defaults to true. | `bool` | `false` | no | -| [enable\_sso](#input\_enable\_sso) | Set this to true if using SSO for P4 Code Review authentication. | `bool` | `false` | no | | [existing\_redis\_connection](#input\_existing\_redis\_connection) | The connection specifications to use for an existing Redis deployment. |
object({
host = string
port = number
})
| `null` | no | | [existing\_security\_groups](#input\_existing\_security\_groups) | A list of existing security group IDs to attach to the P4 Code Review load balancer. | `list(string)` | `[]` | no | | [fully\_qualified\_domain\_name](#input\_fully\_qualified\_domain\_name) | The fully qualified domain name that P4 Code Review should use for internal URLs. | `string` | `null` | no | +| [instance\_type](#input\_instance\_type) | EC2 instance type for running P4 Code Review. Swarm requires persistent storage and runs natively on EC2. | `string` | `"m5.large"` | no | | [internal](#input\_internal) | Set this flag to true if you do not want the P4 Code Review service load balancer to have a public IP. | `bool` | `false` | no | | [name](#input\_name) | The name attached to P4 Code Review module resources. | `string` | `"p4-code-review"` | no | -| [p4charset](#input\_p4charset) | The P4CHARSET environment variable to set in the P4 Code Review container. | `string` | `"none"` | no | -| [p4d\_port](#input\_p4d\_port) | The P4D\_PORT environment variable where P4 Code Review should look for P4 Code Review. Defaults to 'ssl:perforce:1666' | `string` | `"ssl:perforce:1666"` | no | +| [p4charset](#input\_p4charset) | The P4CHARSET environment variable to set for the P4 Code Review instance. | `string` | `"none"` | no | +| [p4d\_port](#input\_p4d\_port) | The P4D\_PORT environment variable where P4 Code Review should look for P4 Server. Defaults to 'ssl:perforce:1666' | `string` | `"ssl:perforce:1666"` | no | | [project\_prefix](#input\_project\_prefix) | The project prefix for this workload. This is appended to the beginning of most resource names. | `string` | `"cgd"` | no | | [s3\_enable\_force\_destroy](#input\_s3\_enable\_force\_destroy) | Enables force destroy for the S3 bucket for P4 Code Review access log storage. Defaults to true. | `bool` | `true` | no | | [tags](#input\_tags) | Tags to apply to resources. | `map(any)` |
{
"IaC": "Terraform",
"ModuleBy": "CGD-Toolkit",
"ModuleName": "p4-code-review",
"ModuleSource": "https://github.com/aws-games/cloud-game-development-toolkit/tree/main/modules/perforce",
"RootModuleName": "terraform-aws-perforce"
}
| no | @@ -165,10 +282,11 @@ No modules. | [alb\_dns\_name](#output\_alb\_dns\_name) | The DNS name of the P4 Code Review ALB | | [alb\_security\_group\_id](#output\_alb\_security\_group\_id) | Security group associated with the P4 Code Review load balancer | | [alb\_zone\_id](#output\_alb\_zone\_id) | The hosted zone ID of the P4 Code Review ALB | -| [cluster\_name](#output\_cluster\_name) | Name of the ECS cluster hosting P4 Code Review | -| [default\_role\_id](#output\_default\_role\_id) | The default role for the service task | -| [execution\_role\_id](#output\_execution\_role\_id) | The default role for the service task | -| [service\_security\_group\_id](#output\_service\_security\_group\_id) | Security group associated with the ECS service running P4 Code Review | -| [target\_group\_arn](#output\_target\_group\_arn) | The service target group for P4 Code Review | +| [application\_security\_group\_id](#output\_application\_security\_group\_id) | Security group associated with the P4 Code Review application | +| [autoscaling\_group\_name](#output\_autoscaling\_group\_name) | The name of the Auto Scaling Group for P4 Code Review | +| [ebs\_volume\_id](#output\_ebs\_volume\_id) | The ID of the EBS volume storing P4 Code Review persistent data | +| [instance\_profile\_arn](#output\_instance\_profile\_arn) | The ARN of the IAM instance profile for P4 Code Review EC2 instances | +| [launch\_template\_id](#output\_launch\_template\_id) | The ID of the launch template for P4 Code Review instances | +| [target\_group\_arn](#output\_target\_group\_arn) | The target group ARN for P4 Code Review | diff --git a/modules/perforce/modules/p4-code-review/alb.tf b/modules/perforce/modules/p4-code-review/alb.tf index ca2ff00d..9b2ac8c3 100644 --- a/modules/perforce/modules/p4-code-review/alb.tf +++ b/modules/perforce/modules/p4-code-review/alb.tf @@ -42,13 +42,14 @@ resource "aws_lb" "alb" { resource "aws_lb_target_group" "alb_target_group" { #checkov:skip=CKV_AWS_378: Using 
ALB for TLS termination name = "${local.name_prefix}-tg" - port = var.container_port + port = local.application_port protocol = "HTTP" - target_type = "ip" + target_type = "instance" vpc_id = var.vpc_id - deregistration_delay = var.deregistration_delay # Fix LB listener from failing to be deleted because targets are still registered. + deregistration_delay = var.deregistration_delay + health_check { - path = "/login" # must match path in the health check in the ECS service that references this target group + path = "/login" protocol = "HTTP" matcher = "200" port = "traffic-port" @@ -63,14 +64,13 @@ resource "aws_lb_target_group" "alb_target_group" { Name = "${local.name_prefix}-tg" } ) - } ########################################## # Application Load Balancer | Listeners ########################################## -# HTTPS listener for p4_auth ALB +# HTTPS listener for P4 Code Review ALB resource "aws_lb_listener" "alb_https_listener" { count = var.create_application_load_balancer ? 1 : 0 load_balancer_arn = aws_lb.alb[0].arn @@ -89,8 +89,6 @@ resource "aws_lb_listener" "alb_https_listener" { Name = "${local.name_prefix}-tg-listener" } ) - - depends_on = [aws_ecs_service.service] } diff --git a/modules/perforce/modules/p4-code-review/data.tf b/modules/perforce/modules/p4-code-review/data.tf index 7ee9065f..96b80b6f 100644 --- a/modules/perforce/modules/p4-code-review/data.tf +++ b/modules/perforce/modules/p4-code-review/data.tf @@ -1,7 +1,32 @@ data "aws_region" "current" {} -# If cluster name is provided use a data source to access existing resource -data "aws_ecs_cluster" "cluster" { - count = var.cluster_name != null ? 1 : 0 - cluster_name = var.cluster_name +data "aws_caller_identity" "current" {} + +# Get the latest P4 Code Review AMI built by Packer +# Only used if ami_id variable is not provided +data "aws_ami" "p4_code_review" { + count = var.ami_id != null ? 
0 : 1 + most_recent = true + owners = ["self"] + + filter { + name = "name" + values = ["p4_code_review_ubuntu-*"] + } + + filter { + name = "state" + values = ["available"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } +} + +# Look up subnet details to determine availability zone for EBS volume +# EBS volumes must be in the same AZ as the EC2 instance +data "aws_subnet" "instance_subnet" { + id = var.instance_subnet_id } diff --git a/modules/perforce/modules/p4-code-review/ec2.tf b/modules/perforce/modules/p4-code-review/ec2.tf new file mode 100644 index 00000000..d7396249 --- /dev/null +++ b/modules/perforce/modules/p4-code-review/ec2.tf @@ -0,0 +1,159 @@ +########################################## +# EBS Volume for Persistent Storage +########################################## +# This volume stores /opt/perforce/swarm/data including the queue directory +# It persists across instance restarts and replacement +# Tagged so it can be automatically reattached to a new instance if the current one fails + +resource "aws_ebs_volume" "swarm_data" { + #checkov:skip=CKV_AWS_189:Customer-managed KMS key is optional; default AWS encryption enabled + #checkov:skip=CKV_AWS_3:Encryption is enabled via var.ebs_volume_encrypted (defaults to true) + availability_zone = local.ebs_availability_zone + size = var.ebs_volume_size + type = var.ebs_volume_type + encrypted = var.ebs_volume_encrypted + + tags = merge(var.tags, + { + Name = "${local.name_prefix}-data-volume" + SwarmDataVolume = "true" # Used by user data script to find this volume + ModuleIdentifier = local.module_identifier + Purpose = "perforce-swarm-persistent-storage" + ManagedBy = "terraform" + AutoAttachToSwarmInstance = "true" + } + ) + + lifecycle { + prevent_destroy = false # Set to true in production to prevent accidental deletion + } +} + + +########################################## +# Launch Template +########################################## +# Defines the EC2 instance configuration +# 
Includes user data script that automatically attaches and mounts the EBS volume + +resource "aws_launch_template" "swarm_instance" { + name_prefix = "${local.name_prefix}-" + image_id = local.selected_ami_id + instance_type = var.instance_type + + iam_instance_profile { + arn = aws_iam_instance_profile.ec2_instance_profile.arn + } + + vpc_security_group_ids = [ + aws_security_group.ec2_instance.id, + aws_security_group.application.id + ] + + # User data script handles EBS volume attachment, mounting, and Swarm configuration + user_data = base64encode(templatefile("${path.module}/user-data.sh.tpl", { + region = data.aws_region.current.name + device_name = local.ebs_device_name + mount_path = local.host_data_path + module_identifier = local.module_identifier + p4d_port = var.p4d_port + p4charset = var.p4charset + swarm_host = "https://${var.fully_qualified_domain_name}" + swarm_redis = var.existing_redis_connection != null ? var.existing_redis_connection.host : aws_elasticache_cluster.cluster[0].cache_nodes[0].address + swarm_redis_port = var.existing_redis_connection != null ? 
tostring(var.existing_redis_connection.port) : tostring(aws_elasticache_cluster.cluster[0].cache_nodes[0].port) + swarm_force_ext = "y" + super_user_password_secret_arn = var.super_user_password_secret_arn + custom_config = var.custom_config + })) + + metadata_options { + http_endpoint = "enabled" + http_tokens = "required" # Enforce IMDSv2 + http_put_response_hop_limit = 1 + } + + monitoring { + enabled = true + } + + tag_specifications { + resource_type = "instance" + tags = merge(var.tags, + { + Name = "${local.name_prefix}-instance" + SwarmInstance = "true" + ManagedBy = "terraform" + } + ) + } + + tag_specifications { + resource_type = "volume" + tags = merge(var.tags, + { + Name = "${local.name_prefix}-root-volume" + ManagedBy = "terraform" + } + ) + } + + tags = merge(var.tags, + { + Name = "${local.name_prefix}-launch-template" + } + ) +} + + +########################################## +# Auto Scaling Group +########################################## +# Single-instance ASG provides automatic instance replacement if it fails +# Min=1, Max=1 ensures only one instance runs at a time (Swarm doesn't scale horizontally) + +resource "aws_autoscaling_group" "swarm_asg" { + name_prefix = "${local.name_prefix}-asg-" + min_size = 1 + max_size = 1 + desired_capacity = 1 + vpc_zone_identifier = [var.instance_subnet_id] + + target_group_arns = [aws_lb_target_group.alb_target_group.arn] + + launch_template { + id = aws_launch_template.swarm_instance.id + version = "$Latest" + } + + health_check_type = "ELB" + health_check_grace_period = 600 # 10 minutes for instance to boot, attach volume, and configure Swarm + + # Ensure instance is in the same AZ as the EBS volume + # availability_zones is set implicitly by vpc_zone_identifier + + tag { + key = "Name" + value = "${local.name_prefix}-instance" + propagate_at_launch = true + } + + tag { + key = "ManagedBy" + value = "terraform-asg" + propagate_at_launch = true + } + + tag { + key = "SwarmInstance" + value = "true" + 
propagate_at_launch = true + } + + lifecycle { + create_before_destroy = true + } + + depends_on = [ + aws_ebs_volume.swarm_data + ] +} diff --git a/modules/perforce/modules/p4-code-review/elasticache.tf b/modules/perforce/modules/p4-code-review/elasticache.tf index d4320dad..3208ae97 100644 --- a/modules/perforce/modules/p4-code-review/elasticache.tf +++ b/modules/perforce/modules/p4-code-review/elasticache.tf @@ -7,6 +7,7 @@ resource "aws_elasticache_subnet_group" "subnet_group" { # Single Node Elasticache Cluster for P4 Code Review resource "aws_elasticache_cluster" "cluster" { + #checkov:skip=CKV_AWS_134:Automatic backups optional; Swarm cache is ephemeral and can be rebuilt count = var.existing_redis_connection != null ? 0 : 1 cluster_id = "${local.name_prefix}-elasticache-redis-cluster" engine = "redis" diff --git a/modules/perforce/modules/p4-code-review/iam.tf b/modules/perforce/modules/p4-code-review/iam.tf index 4feb2dda..ab2de314 100644 --- a/modules/perforce/modules/p4-code-review/iam.tf +++ b/modules/perforce/modules/p4-code-review/iam.tf @@ -10,151 +10,129 @@ resource "random_string" "p4_code_review" { ########################################## -# Trust Relationships +# Policies ########################################## -# ECS - Tasks -data "aws_iam_policy_document" "ecs_tasks_trust_relationship" { +# Secrets Manager Policy Document for EC2 instances +data "aws_iam_policy_document" "secrets_manager_policy" { statement { - effect = "Allow" - actions = ["sts:AssumeRole"] - principals { - type = "Service" - identifiers = ["ecs-tasks.amazonaws.com"] - } + effect = "Allow" + actions = [ + "secretsmanager:GetSecretValue", + "secretsmanager:DescribeSecret" + ] + resources = [ + var.super_user_password_secret_arn, + ] } } +# Secrets Manager Policy +resource "aws_iam_policy" "secrets_manager_policy" { + name = "${local.name_prefix}-secrets-manager-policy" + description = "Policy granting permissions for ${local.name_prefix} EC2 instance to access Secrets 
Manager." + policy = data.aws_iam_policy_document.secrets_manager_policy.json + + tags = merge(var.tags, + { + Name = "${local.name_prefix}-secrets-manager-policy" + } + ) +} + ########################################## -# Policies +# EC2 Instance Role ########################################## -# Default Policy Document -data "aws_iam_policy_document" "default_policy" { - count = var.create_default_role ? 1 : 0 - # ECS +# EC2 - Instance Trust Relationship +data "aws_iam_policy_document" "ec2_instance_trust_relationship" { statement { - sid = "ECSExec" - effect = "Allow" - actions = [ - "ssmmessages:OpenDataChannel", - "ssmmessages:OpenControlChannel", - "ssmmessages:CreateDataChannel", - "ssmmessages:CreateControlChannel" - ] - resources = [ - "*" - ] + effect = "Allow" + actions = ["sts:AssumeRole"] + principals { + type = "Service" + identifiers = ["ec2.amazonaws.com"] + } } +} + +# EBS Volume Attachment Policy +data "aws_iam_policy_document" "ebs_attachment_policy" { + # Describe operations require wildcard - AWS doesn't support resource-level permissions for these statement { + sid = "EBSDescribeOperations" effect = "Allow" actions = [ - "secretsmanager:ListSecrets", - "secretsmanager:ListSecretVersionIds", - "secretsmanager:GetRandomPassword", - "secretsmanager:GetSecretValue", - "secretsmanager:DescribeSecret", - "secretsmanager:BatchGetSecretValue" - ] - resources = [ - var.super_user_username_secret_arn, - var.super_user_password_secret_arn, - var.p4_code_review_user_username_secret_arn, - var.p4_code_review_user_password_secret_arn, + "ec2:DescribeVolumes", + "ec2:DescribeInstances" ] + resources = ["*"] } -} -# Secrets Manager Policy Document -data "aws_iam_policy_document" "secrets_manager_policy" { - # ssm + # Attach/detach operations scoped to the specific Swarm data volume statement { + sid = "EBSVolumeAttachDetach" effect = "Allow" actions = [ - "secretsmanager:ListSecrets", - "secretsmanager:ListSecretVersionIds", - 
"secretsmanager:GetRandomPassword", - "secretsmanager:GetSecretValue", - "secretsmanager:DescribeSecret", - "secretsmanager:BatchGetSecretValue" + "ec2:AttachVolume", + "ec2:DetachVolume" ] resources = [ - var.super_user_username_secret_arn, - var.super_user_password_secret_arn, - var.p4_code_review_user_username_secret_arn, - var.p4_code_review_user_password_secret_arn, + aws_ebs_volume.swarm_data.arn, + "arn:aws:ec2:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:instance/*" ] } } -# Default Policy -resource "aws_iam_policy" "default_policy" { - count = var.create_default_role ? 1 : 0 - - name = "${local.name_prefix}-default-policy" - description = "Policy granting permissions for ${local.name_prefix}." - policy = data.aws_iam_policy_document.default_policy[0].json +resource "aws_iam_policy" "ebs_attachment_policy" { + name = "${local.name_prefix}-ebs-attachment-policy" + description = "Policy granting permissions for EC2 instance to attach EBS volumes." + policy = data.aws_iam_policy_document.ebs_attachment_policy.json tags = merge(var.tags, { - Name = "${local.name_prefix}-default-policy" + Name = "${local.name_prefix}-ebs-attachment-policy" } ) } -# Secrets Manager Policy -resource "aws_iam_policy" "secrets_manager_policy" { - name = "${local.name_prefix}-secrets-manager-policy" - description = "Policy granting permissions for ${local.name_prefix} task execution role to access Secrets Manager." 
- policy = data.aws_iam_policy_document.secrets_manager_policy.json +# EC2 Instance Role +resource "aws_iam_role" "ec2_instance_role" { + name = "${local.name_prefix}-ec2-instance-role" + assume_role_policy = data.aws_iam_policy_document.ec2_instance_trust_relationship.json tags = merge(var.tags, { - Name = "${local.name_prefix}-secrets-manager-policy" + Name = "${local.name_prefix}-ec2-instance-role" } ) } +# Attach SSM Managed Instance Core (for SSM Session Manager access) +resource "aws_iam_role_policy_attachment" "ec2_instance_role_ssm" { + role = aws_iam_role.ec2_instance_role.name + policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" +} -########################################## -# Roles -########################################## -resource "aws_iam_role" "default_role" { - # Default Role - count = var.create_default_role ? 1 : 0 - name = "${local.name_prefix}-default-role" - assume_role_policy = data.aws_iam_policy_document.ecs_tasks_trust_relationship.json - - tags = merge(var.tags, - { - Name = "${local.name_prefix}-default-role" - } - ) +# Attach EBS Attachment Policy (for attaching persistent data volume) +resource "aws_iam_role_policy_attachment" "ec2_instance_role_ebs" { + role = aws_iam_role.ec2_instance_role.name + policy_arn = aws_iam_policy.ebs_attachment_policy.arn } -resource "aws_iam_role_policy_attachment" "default_role" { - count = var.create_default_role ? 
1 : 0 - role = aws_iam_role.default_role[0].name - policy_arn = aws_iam_policy.default_policy[0].arn +# Attach Secrets Manager Policy (for retrieving P4 credentials) +resource "aws_iam_role_policy_attachment" "ec2_instance_role_secrets_manager" { + role = aws_iam_role.ec2_instance_role.name + policy_arn = aws_iam_policy.secrets_manager_policy.arn } -# Task Execution Role -resource "aws_iam_role" "task_execution_role" { - name = "${local.name_prefix}-task-execution-role" - assume_role_policy = data.aws_iam_policy_document.ecs_tasks_trust_relationship.json +# Instance Profile +resource "aws_iam_instance_profile" "ec2_instance_profile" { + name = "${local.name_prefix}-ec2-instance-profile" + role = aws_iam_role.ec2_instance_role.name tags = merge(var.tags, { - Name = "${local.name_prefix}-task-execution-role" + Name = "${local.name_prefix}-ec2-instance-profile" } ) } - -resource "aws_iam_role_policy_attachment" "p4_auth_task_execution_role_ecs" { - role = aws_iam_role.task_execution_role.name - policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy" -} - -resource "aws_iam_role_policy_attachment" "p4_auth_task_execution_role_secrets_manager" { - role = aws_iam_role.task_execution_role.name - policy_arn = aws_iam_policy.secrets_manager_policy.arn -} diff --git a/modules/perforce/modules/p4-code-review/locals.tf b/modules/perforce/modules/p4-code-review/locals.tf index 52e24253..9cdd8547 100644 --- a/modules/perforce/modules/p4-code-review/locals.tf +++ b/modules/perforce/modules/p4-code-review/locals.tf @@ -1,11 +1,21 @@ locals { - image = "perforce/helix-swarm" # cannot change this until the Perforce Helix Swarm Image is updated to use the new naming for P4 Code Review - name_prefix = "${var.project_prefix}-${var.name}" - data_volume_name = "helix-swarm-data" # cannot change this until the Perforce Helix Swarm Image is updated to use the new naming for P4 Code Review - data_path = "/opt/perforce/swarm/data" # cannot change this until the 
Perforce Helix Swarm Image is updated to use the new naming for P4 Code Review + # AMI selection: use provided ami_id or auto-detect latest Packer-built AMI + selected_ami_id = var.ami_id != null ? var.ami_id : data.aws_ami.p4_code_review[0].id + # Module identifier for resource tagging + module_identifier = "${var.project_prefix}-${var.name}" + name_prefix = "${var.project_prefix}-${var.name}" + + # Application configuration + application_port = var.application_port + + # ElastiCache Redis configuration elasticache_redis_port = 6379 elasticache_redis_engine_version = "7.0" elasticache_redis_parameter_group_name = "default.redis7" + # EC2 and EBS configuration + ebs_availability_zone = var.ebs_availability_zone != null ? var.ebs_availability_zone : data.aws_subnet.instance_subnet.availability_zone + host_data_path = "/opt/perforce/swarm/data" + ebs_device_name = "/dev/xvdf" } diff --git a/modules/perforce/modules/p4-code-review/main.tf b/modules/perforce/modules/p4-code-review/main.tf index 1df9b3e9..42c8b109 100644 --- a/modules/perforce/modules/p4-code-review/main.tf +++ b/modules/perforce/modules/p4-code-review/main.tf @@ -1,279 +1,13 @@ ########################################## -# ECS | Cluster +# CloudWatch | Application Logging ########################################## -# If cluster name is not provided create a new cluster -resource "aws_ecs_cluster" "cluster" { - count = var.cluster_name != null ? 0 : 1 - name = "${local.name_prefix}-cluster" - - setting { - name = "containerInsights" - value = "enabled" - } - - tags = merge(var.tags, - { - Name = "${local.name_prefix}-cluster" - } - ) -} - - -########################################## -# ECS Cluster | Capacity Providers -########################################## -# If cluster name is not provided create a new cluster capacity providers -resource "aws_ecs_cluster_capacity_providers" "cluster_fargate_providers" { - count = var.cluster_name != null ? 
0 : 1 - cluster_name = aws_ecs_cluster.cluster[0].name - - capacity_providers = ["FARGATE"] - - default_capacity_provider_strategy { - base = 1 - weight = 100 - capacity_provider = "FARGATE" - } -} - - -########################################## -# ECS | Task Definition -########################################## -resource "aws_ecs_task_definition" "task_definition" { - family = "${local.name_prefix}-task-definition" - requires_compatibilities = ["FARGATE"] - network_mode = "awsvpc" - cpu = var.container_cpu - memory = var.container_memory - - #checkov:skip=CKV_AWS_97: Task definition secrets are managed via AWS Secrets Manager - - volume { - name = local.data_volume_name - } - - container_definitions = jsonencode( - [ - { - name = var.container_name, - image = local.image, - cpu = var.container_cpu, - memory = var.container_memory, - essential = true, - portMappings = [ - { - containerPort = var.container_port, - hostPort = var.container_port - protocol = "tcp" - } - ] - healthCheck = { - # command = ["CMD-SHELL", "pwd || exit 1"] - command = ["CMD-SHELL", "curl -f http://localhost:${var.container_port}/login || exit 1"] - startPeriod = 30 - } - logConfiguration = { - logDriver = "awslogs" - options = { - awslogs-group = aws_cloudwatch_log_group.log_group.name - awslogs-region = data.aws_region.current.name - awslogs-stream-prefix = "${local.name_prefix}-service" - } - } - secrets = [ - { - name = "P4D_SUPER", - valueFrom = var.super_user_username_secret_arn - }, - { - name = "P4D_SUPER_PASSWD", - valueFrom = var.super_user_password_secret_arn - }, - { - name = "SWARM_USER" # cannot change this until the Perforce Helix Swarm Image is updated to use the new naming for P4 Code Review - valueFrom = var.p4_code_review_user_username_secret_arn - }, - { - name = "SWARM_PASSWD" # cannot change this until the Perforce Helix Swarm Image is updated to use the new naming for P4 Code Review - valueFrom = var.p4_code_review_user_password_secret_arn - } - ] - environment = [ - 
{ - name = "P4CHARSET" - value = var.p4charset - }, - { - name = "P4D_PORT", - value = var.p4d_port - }, - { - name = "SWARM_HOST" - value = var.fully_qualified_domain_name - }, - { - name = "SWARM_REDIS" # cannot update naming until the Perforce container image is updated - value = var.existing_redis_connection != null ? var.existing_redis_connection.host : aws_elasticache_cluster.cluster[0].cache_nodes[0].address - }, - { - name = "SWARM_REDIS_PORT" # cannot update naming until the Perforce container image is updated - value = var.existing_redis_connection != null ? tostring(var.existing_redis_connection.port) : tostring(aws_elasticache_cluster.cluster[0].cache_nodes[0].port) - }, - { - name = "SWARM_FORCE_EXT" - value = "y" - } - ], - readonlyRootFilesystem = false - #checkov:skip=CKV_AWS_81: Read-only root filesystem disabled for application requirements - mountPoints = [ - { - sourceVolume = local.data_volume_name - containerPath = local.data_path - readOnly = false - } - ] - }, - { - name = "${var.container_name}-config" - image = "public.ecr.aws/debian/debian:13-slim" - essential = false - // Only run this command if enable_sso is set - command = [ - "bash", - "-ce", - <<-EOF - cd ${local.data_path} - - # Prepare the config files - mv config.php config.gen.php - echo $CONFIG_PHP | base64 --decode | tee config.php - - %{if var.config_php_source != null} - echo $CONFIG_USER_PHP | base64 --decode | tee config.user.php - %{endif} - - # Clear the cache to force a re-configure - rm -rf cache - EOF - ] - - secrets = [ - { - name = "CONFIG_USER_PHP" - valueFrom = var.config_php_source - }, - ] - environment = [ - { - name = "CONFIG_PHP" - value = base64encode(templatefile("${path.module}/assets/config.php.tftpl", { - enable_sso = var.enable_sso, - })) - } - ] - readonly_root_filesystem = false - #checkov:skip=CKV_AWS_81: Read-only root filesystem disabled for configuration container requirements - - logConfiguration = { - logDriver = "awslogs" - options = { - 
awslogs-group = aws_cloudwatch_log_group.log_group.name - awslogs-region = data.aws_region.current.name - awslogs-stream-prefix = "${local.name_prefix}-service-config" - } - } - mountPoints = [ - { - sourceVolume = local.data_volume_name - containerPath = local.data_path - } - ] - dependsOn = [ - { - containerName = var.container_name - condition = "HEALTHY" - } - ] - } - ] - ) - - task_role_arn = var.custom_role != null ? var.custom_role : aws_iam_role.default_role[0].arn - execution_role_arn = aws_iam_role.task_execution_role.arn - - runtime_platform { - operating_system_family = "LINUX" - cpu_architecture = "X86_64" - } - - tags = merge(var.tags, - { - Name = "${local.name_prefix}-task-definition" - } - ) -} - - -########################################## -# ECS | Service -########################################## -resource "aws_ecs_service" "service" { - name = "${local.name_prefix}-service" - - cluster = var.cluster_name != null ? data.aws_ecs_cluster.cluster[0].arn : aws_ecs_cluster.cluster[0].arn - task_definition = aws_ecs_task_definition.task_definition.arn - launch_type = "FARGATE" - desired_count = "1" # P4 Code Review does not support horizontal scaling, so desired container count is fixed at 1 - # Allow ECS to delete a service even if deregistration is taking time. This is to prevent the ALB listener in the parent module from failing to be deleted in the event that all registered targets (ECS services) haven't been destroyed yet. 
- force_new_deployment = var.debug - enable_execute_command = var.debug - - # wait_for_steady_state = true - - load_balancer { - target_group_arn = aws_lb_target_group.alb_target_group.arn - container_name = var.container_name - container_port = var.container_port - } - - network_configuration { - subnets = var.subnets - security_groups = [aws_security_group.ecs_service.id] - } - - tags = merge(var.tags, - { - Name = "${local.name_prefix}-service" - } - ) - - # lifecycle { - # create_before_destroy = true - # ignore_changes = [desired_count] # Let Application Auto Scaling manage this - # } - - timeouts { - create = "20m" - } - - - - depends_on = [aws_elasticache_cluster.cluster, aws_lb_target_group.alb_target_group] -} - - -########################################## -# CloudWatch | Redis Logging -########################################## -resource "aws_cloudwatch_log_group" "log_group" { +resource "aws_cloudwatch_log_group" "application_log_group" { #checkov:skip=CKV_AWS_158: KMS Encryption disabled by default - name = "${local.name_prefix}-log-group" + name = "${local.name_prefix}-application-log-group" retention_in_days = var.cloudwatch_log_retention_in_days tags = merge(var.tags, { - Name = "${local.name_prefix}-log-group" + Name = "${local.name_prefix}-application-log-group" } ) } diff --git a/modules/perforce/modules/p4-code-review/outputs.tf b/modules/perforce/modules/p4-code-review/outputs.tf index 54a621ab..98280fe1 100644 --- a/modules/perforce/modules/p4-code-review/outputs.tf +++ b/modules/perforce/modules/p4-code-review/outputs.tf @@ -1,6 +1,6 @@ -output "service_security_group_id" { - value = aws_security_group.ecs_service.id - description = "Security group associated with the ECS service running P4 Code Review" +output "application_security_group_id" { + value = aws_security_group.application.id + description = "Security group associated with the P4 Code Review application" } output "alb_security_group_id" { @@ -8,11 +8,6 @@ output 
"alb_security_group_id" { description = "Security group associated with the P4 Code Review load balancer" } -output "cluster_name" { - value = var.cluster_name != null ? var.cluster_name : aws_ecs_cluster.cluster[0].name - description = "Name of the ECS cluster hosting P4 Code Review" -} - output "alb_dns_name" { value = var.create_application_load_balancer ? aws_lb.alb[0].dns_name : null description = "The DNS name of the P4 Code Review ALB" @@ -25,15 +20,25 @@ output "alb_zone_id" { output "target_group_arn" { value = aws_lb_target_group.alb_target_group.arn - description = "The service target group for P4 Code Review" + description = "The target group ARN for P4 Code Review" +} + +output "instance_profile_arn" { + value = aws_iam_instance_profile.ec2_instance_profile.arn + description = "The ARN of the IAM instance profile for P4 Code Review EC2 instances" +} + +output "launch_template_id" { + value = aws_launch_template.swarm_instance.id + description = "The ID of the launch template for P4 Code Review instances" } -output "default_role_id" { - value = var.create_default_role ? 
aws_iam_role.default_role[0].id : null - description = "The default role for the service task" +output "autoscaling_group_name" { + value = aws_autoscaling_group.swarm_asg.name + description = "The name of the Auto Scaling Group for P4 Code Review" } -output "execution_role_id" { - value = aws_iam_role.task_execution_role.id - description = "The default role for the service task" +output "ebs_volume_id" { + value = aws_ebs_volume.swarm_data.id + description = "The ID of the EBS volume storing P4 Code Review persistent data" } diff --git a/modules/perforce/modules/p4-code-review/sg.tf b/modules/perforce/modules/p4-code-review/sg.tf index 9f637d7b..bf97aaf7 100644 --- a/modules/perforce/modules/p4-code-review/sg.tf +++ b/modules/perforce/modules/p4-code-review/sg.tf @@ -15,55 +15,67 @@ resource "aws_security_group" "alb" { ) } -# Outbound access from ALB to Containers -resource "aws_vpc_security_group_egress_rule" "alb_outbound_to_ecs_service" { +# Inbound HTTPS access to ALB from Application +# Required for Swarm instance to validate itself via external URL when P4 server extension connects back +resource "aws_vpc_security_group_ingress_rule" "alb_inbound_from_application" { count = var.create_application_load_balancer ? 1 : 0 security_group_id = aws_security_group.alb[0].id - description = "Allow outbound traffic from ALB to ${local.name_prefix} ECS service" - referenced_security_group_id = aws_security_group.ecs_service.id - from_port = var.container_port - to_port = var.container_port + description = "Allow HTTPS from ${local.name_prefix} application for self-validation" + referenced_security_group_id = aws_security_group.application.id + from_port = 443 + to_port = 443 + ip_protocol = "tcp" +} + +# Outbound access from ALB to Application +resource "aws_vpc_security_group_egress_rule" "alb_outbound_to_application" { + count = var.create_application_load_balancer ? 
1 : 0 + security_group_id = aws_security_group.alb[0].id + description = "Allow outbound traffic from ALB to ${local.name_prefix} application" + referenced_security_group_id = aws_security_group.application.id + from_port = local.application_port + to_port = local.application_port ip_protocol = "tcp" } ######################################## -# ECS Service Security Group +# Application Security Group ######################################## -# Service Security Group (attached to containers) -resource "aws_security_group" "ecs_service" { - name = "${local.name_prefix}-service" +# Application Security Group (attached to EC2 instances) +resource "aws_security_group" "application" { + name = "${local.name_prefix}-application" vpc_id = var.vpc_id - description = "${local.name_prefix} service Security Group" + description = "${local.name_prefix} application Security Group" tags = merge(var.tags, { - Name = "${local.name_prefix}-service" + Name = "${local.name_prefix}-application" } ) } -# Inbound access to Containers from ALB -resource "aws_vpc_security_group_ingress_rule" "ecs_service_inbound_alb" { +# Inbound access to Application from ALB +resource "aws_vpc_security_group_ingress_rule" "application_inbound_alb" { count = var.create_application_load_balancer ? 
1 : 0 - security_group_id = aws_security_group.ecs_service.id - description = "Allow inbound traffic from ${local.name_prefix} ALB to ${local.name_prefix} service" + security_group_id = aws_security_group.application.id + description = "Allow inbound traffic from ${local.name_prefix} ALB to ${local.name_prefix} application" referenced_security_group_id = aws_security_group.alb[0].id - from_port = var.container_port - to_port = var.container_port + from_port = local.application_port + to_port = local.application_port ip_protocol = "tcp" } -# Outbound access from Containers to Internet (IPV4) -resource "aws_vpc_security_group_egress_rule" "ecs_service_outbound_to_internet_ipv4" { - security_group_id = aws_security_group.ecs_service.id - description = "Allow outbound traffic from ${local.name_prefix} service to internet (ipv4)" +# Outbound access from Application to Internet (IPV4) +resource "aws_vpc_security_group_egress_rule" "application_outbound_to_internet_ipv4" { + security_group_id = aws_security_group.application.id + description = "Allow outbound traffic from ${local.name_prefix} application to internet (ipv4)" cidr_ipv4 = "0.0.0.0/0" ip_protocol = "-1" # semantically equivalent to all ports } -# Outbound access from Containers to Internet (IPV6) -resource "aws_vpc_security_group_egress_rule" "ecs_service_outbound_to_internet_ipv6" { - security_group_id = aws_security_group.ecs_service.id - description = "Allow outbound traffic from ${local.name_prefix} service to internet (ipv6)" +# Outbound access from Application to Internet (IPV6) +resource "aws_vpc_security_group_egress_rule" "application_outbound_to_internet_ipv6" { + security_group_id = aws_security_group.application.id + description = "Allow outbound traffic from ${local.name_prefix} application to internet (ipv6)" cidr_ipv6 = "::/0" ip_protocol = "-1" # semantically equivalent to all ports } @@ -80,12 +92,44 @@ resource "aws_security_group" "elasticache" { description = "${local.name_prefix} 
Elasticache Redis Security Group" tags = var.tags } -resource "aws_vpc_security_group_ingress_rule" "elasticache_inbound_from_ecs_service" { +resource "aws_vpc_security_group_ingress_rule" "elasticache_inbound_from_application" { count = var.existing_redis_connection != null ? 0 : 1 security_group_id = aws_security_group.elasticache[0].id description = "Allow inbound traffic from P4 Code Review to Redis" - referenced_security_group_id = aws_security_group.ecs_service.id + referenced_security_group_id = aws_security_group.application.id from_port = local.elasticache_redis_port to_port = local.elasticache_redis_port ip_protocol = "tcp" } + + +######################################## +# EC2 Instance Security Group +######################################## +resource "aws_security_group" "ec2_instance" { + #checkov:skip=CKV2_AWS_5:Security group is attached to EC2 instances in Auto Scaling Group + name = "${local.name_prefix}-ec2-instance" + vpc_id = var.vpc_id + description = "${local.name_prefix} EC2 Instance Security Group" + tags = merge(var.tags, + { + Name = "${local.name_prefix}-ec2-instance" + } + ) +} + +# Outbound access to Internet (IPV4) - Required for AWS API calls and package downloads +resource "aws_vpc_security_group_egress_rule" "ec2_instance_outbound_to_internet_ipv4" { + security_group_id = aws_security_group.ec2_instance.id + description = "Allow outbound traffic from ${local.name_prefix} EC2 instance to internet (ipv4)" + cidr_ipv4 = "0.0.0.0/0" + ip_protocol = "-1" +} + +# Outbound access to Internet (IPV6) - Required for AWS API calls and package downloads +resource "aws_vpc_security_group_egress_rule" "ec2_instance_outbound_to_internet_ipv6" { + security_group_id = aws_security_group.ec2_instance.id + description = "Allow outbound traffic from ${local.name_prefix} EC2 instance to internet (ipv6)" + cidr_ipv6 = "::/0" + ip_protocol = "-1" +} diff --git a/modules/perforce/modules/p4-code-review/user-data.sh.tpl 
b/modules/perforce/modules/p4-code-review/user-data.sh.tpl new file mode 100644 index 00000000..3f36fd16 --- /dev/null +++ b/modules/perforce/modules/p4-code-review/user-data.sh.tpl @@ -0,0 +1,273 @@ +#!/bin/bash +# User data script for P4 Code Review native EC2 instance +# Handles EBS volume attachment/mounting and Swarm configuration + +set -e +set -o pipefail + +# Configuration variables (injected by Terraform) +REGION="${region}" +DEVICE_NAME="${device_name}" +MOUNT_PATH="${mount_path}" +VOLUME_TAG_KEY="SwarmDataVolume" +VOLUME_TAG_VALUE="true" +MODULE_TAG_VALUE="${module_identifier}" + +# P4 Code Review configuration parameters +P4D_PORT="${p4d_port}" +P4CHARSET="${p4charset}" +SWARM_HOST="${swarm_host}" +SWARM_REDIS="${swarm_redis}" +SWARM_REDIS_PORT="${swarm_redis_port}" +SWARM_FORCE_EXT="${swarm_force_ext}" + +# Secret ARN for AWS Secrets Manager +# The super user is used for both Swarm runtime and admin operations +P4D_SUPER_PASSWD_SECRET_ARN="${super_user_password_secret_arn}" + +# Logging function +log() { + echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a /var/log/swarm-startup.log +} + +log "=========================================" +log "Starting P4 Code Review native EC2 setup" +log "=========================================" + +# 1. Get instance metadata +log "Fetching instance metadata..." +IMDS_TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" 2>/dev/null) +INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $IMDS_TOKEN" -s http://169.254.169.254/latest/meta-data/instance-id) +INSTANCE_AZ=$(curl -H "X-aws-ec2-metadata-token: $IMDS_TOKEN" -s http://169.254.169.254/latest/meta-data/placement/availability-zone) + +log "Instance ID: $INSTANCE_ID" +log "Instance AZ: $INSTANCE_AZ" + +# 2. 
Find the EBS volume by tags +log "Searching for EBS volume with tags: $VOLUME_TAG_KEY=$VOLUME_TAG_VALUE, ModuleIdentifier=$MODULE_TAG_VALUE" + +VOLUME_ID=$(aws ec2 describe-volumes \ + --region "$REGION" \ + --filters \ + "Name=tag:$VOLUME_TAG_KEY,Values=$VOLUME_TAG_VALUE" \ + "Name=tag:ModuleIdentifier,Values=$MODULE_TAG_VALUE" \ + "Name=availability-zone,Values=$INSTANCE_AZ" \ + --query 'Volumes[0].VolumeId' \ + --output text) + +if [ "$VOLUME_ID" == "None" ] || [ -z "$VOLUME_ID" ]; then + log "ERROR: Could not find EBS volume with required tags in AZ $INSTANCE_AZ" + exit 1 +fi + +log "Found EBS volume: $VOLUME_ID" + +# 3. Check current volume attachment status +VOLUME_INFO=$(aws ec2 describe-volumes \ + --region "$REGION" \ + --volume-ids "$VOLUME_ID" \ + --query 'Volumes[0].{State:State,AttachedInstance:Attachments[0].InstanceId,AttachState:Attachments[0].State}' \ + --output json) + +VOLUME_STATE=$(echo "$VOLUME_INFO" | jq -r '.State') +ATTACHED_INSTANCE=$(echo "$VOLUME_INFO" | jq -r '.AttachedInstance // "none"') +ATTACH_STATE=$(echo "$VOLUME_INFO" | jq -r '.AttachState // "none"') + +log "Volume state: $VOLUME_STATE, Attached to: $ATTACHED_INSTANCE, Attach state: $ATTACH_STATE" + +if [ "$ATTACHED_INSTANCE" == "$INSTANCE_ID" ]; then + log "Volume $VOLUME_ID is already attached to this instance" +elif [ "$ATTACHED_INSTANCE" != "none" ] && [ "$ATTACHED_INSTANCE" != "null" ]; then + log "Volume is attached to different instance $ATTACHED_INSTANCE - checking instance state" + + # Wait for attached instance to terminate (handles ASG replacement race condition) + # ASG launches new instance before terminating old one, so we may need to wait + MAX_WAIT=300 + WAIT_TIME=0 + while [ $WAIT_TIME -lt $MAX_WAIT ]; do + INSTANCE_STATE=$(aws ec2 describe-instances \ + --region "$REGION" \ + --instance-ids "$ATTACHED_INSTANCE" \ + --query 'Reservations[0].Instances[0].State.Name' \ + --output text 2>/dev/null || echo "unknown") + + log "Previous instance $ATTACHED_INSTANCE 
state: $INSTANCE_STATE"
+
+    if [ "$INSTANCE_STATE" = "terminated" ] || [ "$INSTANCE_STATE" = "unknown" ]; then
+      log "Previous instance is terminated/unknown, safe to force detach"
+      break
+    elif [ "$INSTANCE_STATE" = "shutting-down" ] || [ "$INSTANCE_STATE" = "stopping" ]; then
+      log "Previous instance is $INSTANCE_STATE, waiting for termination..."
+      sleep 10
+      WAIT_TIME=$((WAIT_TIME + 10))
+    elif [ "$INSTANCE_STATE" = "running" ]; then
+      # Instance still running - could be ASG replacement in progress
+      # Wait a bit to see if it starts terminating
+      log "Previous instance still running, waiting to see if ASG terminates it..."
+      sleep 10
+      WAIT_TIME=$((WAIT_TIME + 10))
+    else
+      log "ERROR: Unexpected instance state: $INSTANCE_STATE"
+      log "Cannot safely detach volume - manual intervention required"
+      exit 1
+    fi
+  done
+
+  if [ $WAIT_TIME -ge $MAX_WAIT ]; then
+    log "ERROR: Timed out waiting for previous instance to terminate after $${MAX_WAIT}s"
+    log "Cannot safely detach volume - manual intervention required"
+    exit 1
+  fi
+
+  aws ec2 detach-volume \
+    --region "$REGION" \
+    --volume-id "$VOLUME_ID" \
+    --force 2>&1 | tee -a /var/log/swarm-startup.log || log "Warning: Force detach may have failed"
+
+  # Wait for detachment with timeout
+  log "Waiting up to 2 minutes for volume to become available..."
+  for i in {1..24}; do
+    CURRENT_STATE=$(aws ec2 describe-volumes --region "$REGION" --volume-ids "$VOLUME_ID" --query 'Volumes[0].State' --output text)
+    if [ "$CURRENT_STATE" == "available" ]; then
+      log "Volume is now available"
+      break
+    fi
+    log "Volume state: $CURRENT_STATE (attempt $i/24)"
+    sleep 5
+  done
+
+  # Do not attempt to attach a volume that is still in use
+  if [ "$CURRENT_STATE" != "available" ]; then
+    log "ERROR: Volume $VOLUME_ID did not become available after force detach"
+    log "Cannot safely attach volume - manual intervention required"
+    exit 1
+  fi
+
+  log "Attaching volume $VOLUME_ID to instance $INSTANCE_ID at $DEVICE_NAME"
+  aws ec2 attach-volume \
+    --region "$REGION" \
+    --volume-id "$VOLUME_ID" \
+    --instance-id "$INSTANCE_ID" \
+    --device "$DEVICE_NAME"
+
+  log "Waiting for volume attachment..."
+ aws ec2 wait volume-in-use \ + --region "$REGION" \ + --volume-ids "$VOLUME_ID" + + log "Volume attached successfully" +else + log "Attaching volume $VOLUME_ID to instance $INSTANCE_ID at $DEVICE_NAME" + + aws ec2 attach-volume \ + --region "$REGION" \ + --volume-id "$VOLUME_ID" \ + --instance-id "$INSTANCE_ID" \ + --device "$DEVICE_NAME" + + log "Waiting for volume attachment..." + aws ec2 wait volume-in-use \ + --region "$REGION" \ + --volume-ids "$VOLUME_ID" + + log "Volume attached successfully" +fi + +# 4. Find the actual device name (NVMe instances use different naming) +log "Looking for attached device..." +ACTUAL_DEVICE="" +for i in {1..30}; do + # Try the original device name first + if [ -e "$DEVICE_NAME" ]; then + ACTUAL_DEVICE="$DEVICE_NAME" + log "Found device at $ACTUAL_DEVICE" + break + fi + + # Look for NVMe device by volume ID symlink + NVME_LINK="/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_$${VOLUME_ID//-/}" + if [ -L "$NVME_LINK" ]; then + ACTUAL_DEVICE=$(readlink -f "$NVME_LINK") + log "Found NVMe device via symlink: $ACTUAL_DEVICE" + break + fi + + log "Attempt $i/30: Device not yet available, waiting..." + sleep 2 +done + +if [ -z "$ACTUAL_DEVICE" ]; then + log "ERROR: Could not find attached device after 60 seconds" + log "Expected: $DEVICE_NAME or /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_$${VOLUME_ID//-/}" + exit 1 +fi + +DEVICE_NAME="$ACTUAL_DEVICE" +log "Using device: $DEVICE_NAME" + +# 5. Check if the device has a filesystem, if not create one +log "Checking filesystem on $DEVICE_NAME..." +if ! blkid "$DEVICE_NAME" > /dev/null 2>&1; then + log "No filesystem detected on $DEVICE_NAME, creating ext4 filesystem..." + mkfs -t ext4 "$DEVICE_NAME" + log "Filesystem created successfully" +else + log "Existing filesystem detected on $DEVICE_NAME" +fi + +# 6. Create mount point if it doesn't exist +if [ ! -d "$MOUNT_PATH" ]; then + log "Creating mount point: $MOUNT_PATH" + mkdir -p "$MOUNT_PATH" +fi + +# 7. 
Mount the volume
+log "Mounting $DEVICE_NAME to $MOUNT_PATH..."
+mount "$DEVICE_NAME" "$MOUNT_PATH"
+log "Volume mounted successfully"
+
+# 8. Set proper permissions for Swarm
+log "Setting permissions on $MOUNT_PATH..."
+chmod 755 "$MOUNT_PATH"
+chown -R swarm:swarm "$MOUNT_PATH"
+
+# 9. Add entry to /etc/fstab for automatic mounting on reboot
+# Mount by filesystem UUID - NVMe device names (e.g. /dev/nvme1n1) are not
+# guaranteed to be stable across reboots
+VOLUME_UUID=$(blkid -s UUID -o value "$DEVICE_NAME")
+if ! grep -q "$VOLUME_UUID" /etc/fstab; then
+  log "Adding entry to /etc/fstab for persistent mounting..."
+  echo "UUID=$VOLUME_UUID $MOUNT_PATH ext4 defaults,nofail 0 2" >> /etc/fstab
+  log "fstab entry added"
+else
+  log "fstab entry already exists"
+fi
+
+# 10. Verify mount
+if mountpoint -q "$MOUNT_PATH"; then
+  log "SUCCESS: $MOUNT_PATH is mounted"
+  df -h "$MOUNT_PATH"
+else
+  log "ERROR: $MOUNT_PATH is not mounted"
+  exit 1
+fi
+
+# 11. Configure Swarm using the script from the AMI
+log "Configuring P4 Code Review with runtime parameters..."
+
+# Write custom config JSON to file if provided (for swarm_instance_init.sh to merge)
+CUSTOM_CONFIG_FILE="/tmp/swarm_custom_config.json"
+%{ if custom_config != null && custom_config != "" ~}
+cat > "$CUSTOM_CONFIG_FILE" << 'CUSTOM_CONFIG_EOF'
+${custom_config}
+CUSTOM_CONFIG_EOF
+log "Custom config written to $CUSTOM_CONFIG_FILE"
+%{ else ~}
+log "No custom config provided"
+%{ endif ~}
+
+/home/ubuntu/swarm_scripts/swarm_instance_init.sh \
+  --p4d-port "$P4D_PORT" \
+  --p4charset "$P4CHARSET" \
+  --swarm-host "$SWARM_HOST" \
+  --swarm-redis "$SWARM_REDIS" \
+  --swarm-redis-port "$SWARM_REDIS_PORT" \
+  --swarm-force-ext "$SWARM_FORCE_EXT" \
+  --p4d-super-passwd-secret-arn "$P4D_SUPER_PASSWD_SECRET_ARN" \
+  --custom-config-file "$CUSTOM_CONFIG_FILE"
+
+log "========================================="
+log "P4 Code Review native EC2 setup completed successfully"
+log "P4 Code Review should be accessible at: https://$SWARM_HOST"
+log "Data path: $MOUNT_PATH"
+log "========================================="
diff --git a/modules/perforce/modules/p4-code-review/variables.tf 
b/modules/perforce/modules/p4-code-review/variables.tf index 4c60f26a..78243de0 100644 --- a/modules/perforce/modules/p4-code-review/variables.tf +++ b/modules/perforce/modules/p4-code-review/variables.tf @@ -25,58 +25,25 @@ variable "fully_qualified_domain_name" { default = null } -variable "debug" { - type = bool - default = false - description = "Debug flag to enable execute command on service for container access." -} - - ######################################## # Compute ######################################## -variable "cluster_name" { - type = string - description = "The name of the cluster to deploy the P4 Code Review service into. Defaults to null and a cluster will be created." - default = null -} - -variable "container_name" { - type = string - description = "The name of the P4 Code Review container." - default = "p4-code-review-container" - nullable = false -} - -variable "container_port" { +variable "application_port" { type = number - description = "The container port that P4 Code Review runs on." + description = "The port that P4 Code Review listens on. Used for ALB target group configuration." default = 80 nullable = false } -variable "container_cpu" { - type = number - description = "The CPU allotment for the P4 Code Review container." - default = 1024 - nullable = false -} - -variable "container_memory" { - type = number - description = "The memory allotment for the P4 Code Review container." - default = 2048 -} - variable "p4d_port" { type = string - description = "The P4D_PORT environment variable where P4 Code Review should look for P4 Code Review. Defaults to 'ssl:perforce:1666'" + description = "The P4D_PORT environment variable where P4 Code Review should look for P4 Server. Defaults to 'ssl:perforce:1666'" default = "ssl:perforce:1666" } variable "p4charset" { type = string - description = "The P4CHARSET environment variable to set in the P4 Code Review container." 
+ description = "The P4CHARSET environment variable to set for the P4 Code Review instance." default = "none" } @@ -143,7 +110,7 @@ variable "alb_subnets" { variable "subnets" { type = list(string) - description = "A list of subnets to deploy the P4 Code Review ECS Service into. Private subnets are recommended." + description = "A list of subnets for ElastiCache Redis deployment. Private subnets are recommended." } variable "create_application_load_balancer" { @@ -196,50 +163,17 @@ variable "certificate_arn" { } } -variable "create_default_role" { - type = bool - description = "Optional creation of P4 Code Review Default IAM Role. Default is set to true." - default = true -} - -variable "custom_role" { - type = string - description = "ARN of the custom IAM Role you wish to use with P4 Code Review." - default = null -} - -variable "super_user_username_secret_arn" { - type = string - description = "Optionally provide the ARN of an AWS Secret for the p4d super user username." -} - variable "super_user_password_secret_arn" { type = string - description = "Optionally provide the ARN of an AWS Secret for the p4d super user password." -} - -variable "p4_code_review_user_username_secret_arn" { - type = string - description = "Optionally provide the ARN of an AWS Secret for the p4d P4 Code Review username." + description = "ARN of the AWS Secrets Manager secret containing the P4 super user password. The super user is used for both Swarm runtime operations and administrative tasks." } -variable "p4_code_review_user_password_secret_arn" { +variable "custom_config" { type = string - description = "Optionally provide the ARN of an AWS Secret for the p4d P4 Code Review password." -} - -variable "config_php_source" { - type = string - description = "Used as the ValueFrom for P4CR's config.php. Contents should be base64 encoded, and will be combined with the generated config.php via array_replace_recursive." 
+ description = "JSON string with additional Swarm configuration to merge with the generated config.php. Use this for SSO/SAML setup, notifications, Jira integration, etc. See README for examples." default = null } -variable "enable_sso" { - type = bool - default = false - description = "Set this to true if using SSO for P4 Code Review authentication." -} - ###################### # Caching ###################### @@ -260,6 +194,50 @@ variable "elasticache_node_type" { default = "cache.t4g.micro" } +######################################## +# EC2 Instance Configuration +######################################## +variable "ami_id" { + type = string + description = "Optional AMI ID for P4 Code Review. If not provided, will use the latest Packer-built AMI with name pattern 'p4_code_review_ubuntu-*'." + default = null +} + +variable "instance_type" { + type = string + description = "EC2 instance type for running P4 Code Review. Swarm requires persistent storage and runs natively on EC2." + default = "m5.large" +} + +variable "instance_subnet_id" { + type = string + description = "The subnet ID where the EC2 instance will be launched. Should be a private subnet for security." +} + +variable "ebs_volume_size" { + type = number + description = "Size in GB for the EBS volume that stores P4 Code Review data (/opt/perforce/swarm/data). This volume persists across instance replacement." + default = 20 +} + +variable "ebs_volume_type" { + type = string + description = "EBS volume type for P4 Code Review data storage." + default = "gp3" +} + +variable "ebs_volume_encrypted" { + type = bool + description = "Enable encryption for the EBS volume storing P4 Code Review data." + default = true +} + +variable "ebs_availability_zone" { + type = string + description = "Availability zone for the EBS volume. Must match the EC2 instance AZ. If not provided, will use the AZ of the instance_subnet_id." 
+ default = null +} + variable "tags" { type = map(any) diff --git a/modules/perforce/modules/p4-server/README.md b/modules/perforce/modules/p4-server/README.md index 8f7b7cb3..d9e3f6e7 100644 --- a/modules/perforce/modules/p4-server/README.md +++ b/modules/perforce/modules/p4-server/README.md @@ -12,29 +12,47 @@ This module provisions P4 Server on an EC2 Instance with three dedicated EBS vol This module deploys P4 Server on AWS using an Amazon Machine Image (AMI) that is included in the Cloud Game Development Toolkit. You **must** provision this AMI using [Hashicorp Packer](https://www.packer.io/) prior to deploying this module. To get started consult [the documentation for the P4 Server AMI](../../../../assets/packer/perforce/p4-server/README.md). -### Optional +### User Management -You can optionally define the Helix Core super user's credentials prior to deployment. To do so, create a secret for the Helix Core super user's username and password: +This module creates two users with super privileges: -```bash -aws secretsmanager create-secret \ - --name HelixCoreSuperUser \ - --description "Helix Core Super User" \ - --secret-string "{\"username\":\"admin\",\"password\":\"EXAMPLE-PASSWORD\"}" +1. **Service Account (`super`)**: An internal service account used by P4 Code Review (Helix Swarm) and other Perforce tooling. This user is always created automatically with a randomly generated password stored in AWS Secrets Manager. The service account uses password-based authentication (non-SSO). + +2. **Admin Account**: A human administrator account for managing the Perforce server. The username defaults to `perforce` but can be customized via the `admin_username` variable. The password is auto-generated and stored in AWS Secrets Manager, or you can provide your own secret ARN. + +#### Configuring the Admin Account + +By default, an admin user named `perforce` is created: + +```hcl +module "p4_server" { + source = "modules/perforce/modules/p4-server" + ... 
+ # Uses default admin_username = "perforce" + # Password auto-generated and stored in Secrets Manager +} ``` -You can then provide the relevant ARN as variables when you define the Helix Core module in your Terraform configurations: +To customize the admin username: ```hcl -module "perforce_helix_core" { - source = "modules/perforce/helix-core" +module "p4_server" { + source = "modules/perforce/modules/p4-server" ... - helix_core_super_user_username_arn = "arn:aws:secretsmanager:us-west-2:123456789012:secret:HelixCoreSuperUser-a1b2c3:username::" - helix_core_super_user_password_arn = "arn:aws:secretsmanager:us-west-2:123456789012:secret:HelixCoreSuperUser-a1b2c3:password::" + admin_username = "myadmin" } ``` -If you do not provide these the module will create a random Super User and create the secret for you. The ARN of this secret is then available as an output to be referenced elsewhere. +To use an existing password secret: + +```hcl +module "p4_server" { + source = "modules/perforce/modules/p4-server" + ... + admin_username = "myadmin" + admin_password_secret_arn = "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyAdminPassword-a1b2c3" +} +``` @@ -93,8 +111,9 @@ No modules. 
| [aws_vpc_security_group_egress_rule.link_outbound_fsxn](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | | [aws_vpc_security_group_egress_rule.server_internet](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_egress_rule) | resource | | [aws_vpc_security_group_ingress_rule.fsxn_inbound_link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource | -| [awscc_secretsmanager_secret.super_user_password](https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/secretsmanager_secret) | resource | -| [awscc_secretsmanager_secret.super_user_username](https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/secretsmanager_secret) | resource | +| [awscc_secretsmanager_secret.admin_password](https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/secretsmanager_secret) | resource | +| [awscc_secretsmanager_secret.admin_username](https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/secretsmanager_secret) | resource | +| [awscc_secretsmanager_secret.super_password](https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/secretsmanager_secret) | resource | | [netapp-ontap_lun.depots_volume_lun](https://registry.terraform.io/providers/NetApp/netapp-ontap/latest/docs/resources/lun) | resource | | [netapp-ontap_lun.logs_volume_lun](https://registry.terraform.io/providers/NetApp/netapp-ontap/latest/docs/resources/lun) | resource | | [netapp-ontap_lun.metadata_volume_lun](https://registry.terraform.io/providers/NetApp/netapp-ontap/latest/docs/resources/lun) | resource | @@ -116,6 +135,8 @@ No modules. | [p4\_server\_type](#input\_p4\_server\_type) | The Perforce P4 Server type. 
| `string` | n/a | yes | | [storage\_type](#input\_storage\_type) | The type of backing store [EBS, FSxN] | `string` | n/a | yes | | [vpc\_id](#input\_vpc\_id) | The VPC where P4 Server should be deployed | `string` | n/a | yes | +| [admin\_password\_secret\_arn](#input\_admin\_password\_secret\_arn) | Optional ARN of existing Secrets Manager secret for admin password. If not provided, a password will be auto-generated. | `string` | `null` | no | +| [admin\_username](#input\_admin\_username) | Username for the Perforce admin account (human user). The 'super' service account is always created automatically for internal tooling. | `string` | `"perforce"` | no | | [amazon\_fsxn\_filesystem\_id](#input\_amazon\_fsxn\_filesystem\_id) | The ID of the existing FSx ONTAP file system to use if storage type is FSxN. | `string` | `null` | no | | [amazon\_fsxn\_svm\_id](#input\_amazon\_fsxn\_svm\_id) | The ID of the Storage Virtual Machine (SVM) for the FSx ONTAP filesystem. | `string` | `null` | no | | [auth\_service\_url](#input\_auth\_service\_url) | The URL for the P4Auth Service. | `string` | `null` | no | @@ -143,22 +164,21 @@ No modules. | [project\_prefix](#input\_project\_prefix) | The project prefix for this workload. This is appended to the beginning of most resource names. | `string` | `"cgd"` | no | | [protocol](#input\_protocol) | Specify the protocol (NFS or ISCSI) | `string` | `null` | no | | [selinux](#input\_selinux) | Whether to apply SELinux label updates for P4 Server. Don't enable this if SELinux is disabled on your target operating system. | `bool` | `false` | no | -| [super\_user\_password\_secret\_arn](#input\_super\_user\_password\_secret\_arn) | If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's password here. 
| `string` | `null` | no | -| [super\_user\_username\_secret\_arn](#input\_super\_user\_username\_secret\_arn) | If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's username here. Otherwise, the default of 'perforce' will be used. | `string` | `null` | no | -| [tags](#input\_tags) | Tags to apply to resources. | `map(any)` |
{
"IaC": "Terraform",
"ModuleBy": "CGD-Toolkit",
"ModuleName": "p4-server",
"ModuleSource": "https://github.com/aws-games/cloud-game-development-toolkit/tree/main/modules/perforce",
"RootModuleName": "terraform-aws-perforce"
}
| no | +| [tags](#input\_tags) | Tags to apply to resources. | `map(any)` |
{
"IaC": "Terraform",
"ModuleBy": "CGD-Toolkit",
"ModuleName": "p4-server",
"ModuleSource": "https://github.com/aws-games/cloud-game-development-toolkit/tree/main/modules/perforce",
"RootModuleName": "terraform-aws-perforce"
}
| no |
 | [unicode](#input\_unicode) | Whether to enable Unicode configuration for P4 Server the -xi flag for p4d. Set to true to enable Unicode support. | `bool` | `false` | no |

 ## Outputs

 | Name | Description |
 |------|-------------|
+| [admin\_password\_secret\_arn](#output\_admin\_password\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding the admin account password. |
+| [admin\_username\_secret\_arn](#output\_admin\_username\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding the admin account username. |
 | [eip\_id](#output\_eip\_id) | The ID of the Elastic IP associated with your P4 Server instance. |
 | [eip\_public\_ip](#output\_eip\_public\_ip) | The public IP of your P4 Server instance. |
 | [instance\_id](#output\_instance\_id) | Instance ID for the P4 Server instance |
 | [lambda\_link\_name](#output\_lambda\_link\_name) | Lambda function name for the FSxN Link |
 | [private\_ip](#output\_private\_ip) | Private IP for the P4 Server instance |
 | [security\_group\_id](#output\_security\_group\_id) | The default security group of your P4 Server instance. |
-| [super\_user\_password\_secret\_arn](#output\_super\_user\_password\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding your P4 Server super user's password. |
-| [super\_user\_username\_secret\_arn](#output\_super\_user\_username\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding your P4 Server super user's username. |
+| [super\_password\_secret\_arn](#output\_super\_password\_secret\_arn) | The ARN of the AWS Secrets Manager secret holding the service account (super) password. |
diff --git a/modules/perforce/modules/p4-server/iam.tf b/modules/perforce/modules/p4-server/iam.tf
index eed1212e..b4ea93a0 100644
--- a/modules/perforce/modules/p4-server/iam.tf
+++ b/modules/perforce/modules/p4-server/iam.tf
@@ -29,8 +29,9 @@ data "aws_iam_policy_document" "default_policy" {
       "secretsmanager:BatchGetSecretValue"
     ]
     resources = compact([
-      var.super_user_password_secret_arn == null ? awscc_secretsmanager_secret.super_user_username[0].secret_id : var.super_user_password_secret_arn,
-      var.super_user_username_secret_arn == null ? awscc_secretsmanager_secret.super_user_password[0].secret_id : var.super_user_username_secret_arn,
+      local.super_password_secret,
+      local.admin_username_secret,
+      local.admin_password_secret,
       var.storage_type == "FSxN" && var.protocol == "ISCSI" ? var.fsxn_password : null
     ])
   }
diff --git a/modules/perforce/modules/p4-server/main.tf b/modules/perforce/modules/p4-server/main.tf
index 2affa958..48ab3e60 100644
--- a/modules/perforce/modules/p4-server/main.tf
+++ b/modules/perforce/modules/p4-server/main.tf
@@ -1,10 +1,9 @@
 ##########################################
-# Perforce P4 Server Super User
+# Service Account (super)
 ##########################################
-resource "awscc_secretsmanager_secret" "super_user_password" {
-  count       = var.super_user_password_secret_arn == null ? 1 : 0
-  name        = "${local.name_prefix}-SuperUserPassword"
-  description = "The password for the created P4 Server super user."
+resource "awscc_secretsmanager_secret" "super_password" {
+  name        = "${local.name_prefix}-ServiceAccountPassword"
+  description = "Internal service account password for Perforce tooling (Swarm, etc.)."
   generate_secret_string = {
     exclude_numbers     = false
     exclude_punctuation = true
@@ -12,11 +11,24 @@ resource "awscc_secretsmanager_secret" "super_user_password" {
   }
 }

-resource "awscc_secretsmanager_secret" "super_user_username" {
-  count         = var.super_user_username_secret_arn == null ? 1 : 0
-  name          = "${local.name_prefix}-SuperUserUsername"
-  description   = "The username for the created P4 Server super user."
-  secret_string = "perforce"
+##########################################
+# Admin Account
+##########################################
+resource "awscc_secretsmanager_secret" "admin_username" {
+  name          = "${local.name_prefix}-AdminUsername"
+  description   = "Username for the Perforce admin account."
+  secret_string = var.admin_username
+}
+
+resource "awscc_secretsmanager_secret" "admin_password" {
+  count       = var.admin_password_secret_arn == null ? 1 : 0
+  name        = "${local.name_prefix}-AdminPassword"
+  description = "Password for the Perforce admin account."
+  generate_secret_string = {
+    exclude_numbers     = false
+    exclude_punctuation = true
+    include_space       = false
+  }
 }
@@ -74,8 +86,9 @@ locals {
 }

 locals {
-  username_secret = var.super_user_username_secret_arn == null ? awscc_secretsmanager_secret.super_user_username[0].secret_id : var.super_user_username_secret_arn
-  password_secret = var.super_user_password_secret_arn == null ? awscc_secretsmanager_secret.super_user_password[0].secret_id : var.super_user_password_secret_arn
+  super_password_secret = awscc_secretsmanager_secret.super_password.secret_id
+  admin_username_secret = awscc_secretsmanager_secret.admin_username.secret_id
+  admin_password_secret = var.admin_password_secret_arn == null ? awscc_secretsmanager_secret.admin_password[0].secret_id : var.admin_password_secret_arn
 }

 resource "aws_instance" "server_instance" {
   ami = data.aws_ami.existing_server_ami.id
@@ -88,22 +101,23 @@ resource "aws_instance" "server_instance" {
   iam_instance_profile = aws_iam_instance_profile.instance_profile.id

   user_data = templatefile("${path.module}/templates/user_data.tftpl", {
-    depot_volume_name    = local.depot_volume_name
-    metadata_volume_name = local.metadata_volume_name
-    logs_volume_name     = local.logs_volume_name
-    p4_server_type       = var.p4_server_type
-    username_secret      = local.username_secret
-    password_secret      = local.password_secret
-    fqdn                 = var.fully_qualified_domain_name != null ? var.fully_qualified_domain_name : ""
-    auth_url             = var.auth_service_url != null ? var.auth_service_url : ""
-    is_fsxn              = local.is_fsxn
-    fsxn_password        = var.fsxn_password
-    fsxn_svm_name        = var.fsxn_svm_name
-    fsxn_management_ip   = var.fsxn_management_ip
-    case_sensitive       = var.case_sensitive ? 1 : 0
-    unicode              = var.unicode ? "true" : "false"
-    selinux              = var.selinux ? "true" : "false"
-    plaintext            = var.plaintext ? "true" : "false"
+    depot_volume_name     = local.depot_volume_name
+    metadata_volume_name  = local.metadata_volume_name
+    logs_volume_name      = local.logs_volume_name
+    p4_server_type        = var.p4_server_type
+    super_password_secret = local.super_password_secret
+    admin_username_secret = local.admin_username_secret
+    admin_password_secret = local.admin_password_secret
+    fqdn                  = var.fully_qualified_domain_name != null ? var.fully_qualified_domain_name : ""
+    auth_url              = var.auth_service_url != null ? var.auth_service_url : ""
+    is_fsxn               = local.is_fsxn
+    fsxn_password         = var.fsxn_password
+    fsxn_svm_name         = var.fsxn_svm_name
+    fsxn_management_ip    = var.fsxn_management_ip
+    case_sensitive        = var.case_sensitive ? 1 : 0
+    unicode               = var.unicode ? "true" : "false"
+    selinux               = var.selinux ? "true" : "false"
+    plaintext             = var.plaintext ? "true" : "false"
   })

   vpc_security_group_ids = (var.create_default_sg ?
@@ -128,6 +142,12 @@ resource "aws_instance" "server_instance" {
     Name = "${local.name_prefix}-${var.p4_server_type}-${local.p4_server_az}"
   })

+  # Force destroy-before-create to ensure EBS volumes are detached
+  # before being re-attached to a new instance (e.g., during AMI updates)
+  lifecycle {
+    create_before_destroy = false
+  }
+
   depends_on = [
     netapp-ontap_san_lun-map.depots_lun_map,
     netapp-ontap_san_lun-map.logs_lun_map,
diff --git a/modules/perforce/modules/p4-server/outputs.tf b/modules/perforce/modules/p4-server/outputs.tf
index 38b4dece..d038cc3a 100644
--- a/modules/perforce/modules/p4-server/outputs.tf
+++ b/modules/perforce/modules/p4-server/outputs.tf
@@ -13,18 +13,21 @@ output "security_group_id" {
   description = "The default security group of your P4 Server instance."
 }

-output "super_user_password_secret_arn" {
-  value = (var.super_user_password_secret_arn == null ?
-    awscc_secretsmanager_secret.super_user_password[0].secret_id :
-    var.super_user_password_secret_arn)
-  description = "The ARN of the AWS Secrets Manager secret holding your P4 Server super user's password."
-}
-
-output "super_user_username_secret_arn" {
-  value = (var.super_user_username_secret_arn == null ?
-    awscc_secretsmanager_secret.super_user_username[0].secret_id :
-    var.super_user_username_secret_arn)
-  description = "The ARN of the AWS Secrets Manager secret holding your P4 Server super user's username."
+output "super_password_secret_arn" {
+  value       = awscc_secretsmanager_secret.super_password.secret_id
+  description = "The ARN of the AWS Secrets Manager secret holding the service account (super) password."
+}
+
+output "admin_username_secret_arn" {
+  value       = awscc_secretsmanager_secret.admin_username.secret_id
+  description = "The ARN of the AWS Secrets Manager secret holding the admin account username."
+}
+
+output "admin_password_secret_arn" {
+  value = (var.admin_password_secret_arn == null ?
+    awscc_secretsmanager_secret.admin_password[0].secret_id :
+    var.admin_password_secret_arn)
+  description = "The ARN of the AWS Secrets Manager secret holding the admin account password."
 }

 output "instance_id" {
diff --git a/modules/perforce/modules/p4-server/templates/user_data.tftpl b/modules/perforce/modules/p4-server/templates/user_data.tftpl
index d438a591..66e8459b 100644
--- a/modules/perforce/modules/p4-server/templates/user_data.tftpl
+++ b/modules/perforce/modules/p4-server/templates/user_data.tftpl
@@ -7,8 +7,9 @@ LOGS_VOLUME_NAME=${logs_volume_name}
   --hx_metadata $METADATA_VOLUME_NAME \
   --hx_depots $DEPOT_VOLUME_NAME \
   --p4d_type ${p4_server_type} \
-  --username ${username_secret} \
-  --password ${password_secret} \
+  --super_password ${super_password_secret} \
+  --admin_username ${admin_username_secret} \
+  --admin_password ${admin_password_secret} \
 %{ if fqdn != "" ~}
   --fqdn ${fqdn} \
 %{ endif ~}
diff --git a/modules/perforce/modules/p4-server/variables.tf b/modules/perforce/modules/p4-server/variables.tf
index 86bbbce1..cd20891f 100644
--- a/modules/perforce/modules/p4-server/variables.tf
+++ b/modules/perforce/modules/p4-server/variables.tf
@@ -238,15 +238,15 @@ variable "internal" {
   default = false
 }

-variable "super_user_password_secret_arn" {
+variable "admin_username" {
   type        = string
-  description = "If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's password here."
-  default     = null
+  description = "Username for the Perforce admin account (human user). The 'super' service account is always created automatically for internal tooling."
+  default     = "perforce"
 }

-variable "super_user_username_secret_arn" {
+variable "admin_password_secret_arn" {
   type        = string
-  description = "If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's username here. Otherwise, the default of 'perforce' will be used."
+  description = "Optional ARN of existing Secrets Manager secret for admin password. If not provided, a password will be auto-generated."
   default     = null
 }
diff --git a/modules/perforce/outputs.tf b/modules/perforce/outputs.tf
index 40df0279..e986a422 100644
--- a/modules/perforce/outputs.tf
+++ b/modules/perforce/outputs.tf
@@ -24,14 +24,19 @@ output "p4_server_security_group_id" {
   description = "The default security group of your P4 Server instance."
 }

-output "p4_server_super_user_password_secret_arn" {
-  value       = var.p4_server_config != null ? module.p4_server[0].super_user_password_secret_arn : null
-  description = "The ARN of the AWS Secrets Manager secret holding your P4 Server super user's username."
+output "p4_server_super_password_secret_arn" {
+  value       = var.p4_server_config != null ? module.p4_server[0].super_password_secret_arn : null
+  description = "The ARN of the AWS Secrets Manager secret holding the service account (super) password."
 }

-output "p4_server_super_user_username_secret_arn" {
-  value       = var.p4_server_config != null ? module.p4_server[0].super_user_username_secret_arn : null
-  description = "The ARN of the AWS Secrets Manager secret holding your P4 Server super user's password."
+output "p4_server_admin_username_secret_arn" {
+  value       = var.p4_server_config != null ? module.p4_server[0].admin_username_secret_arn : null
+  description = "The ARN of the AWS Secrets Manager secret holding the admin account username."
+}
+
+output "p4_server_admin_password_secret_arn" {
+  value       = var.p4_server_config != null ? module.p4_server[0].admin_password_secret_arn : null
+  description = "The ARN of the AWS Secrets Manager secret holding the admin account password."
 }

 output "p4_server_instance_id" {
@@ -78,8 +83,8 @@ output "p4_auth_target_group_arn" {

 # P4 Code Review
 output "p4_code_review_service_security_group_id" {
-  value       = var.p4_code_review_config != null ? module.p4_code_review[0].service_security_group_id : null
-  description = "Security group associated with the ECS service running P4 Code Review."
+  value       = var.p4_code_review_config != null ? module.p4_code_review[0].application_security_group_id : null
+  description = "Security group associated with P4 Code Review application."
 }

 output "p4_code_review_alb_security_group_id" {
@@ -87,11 +92,6 @@ output "p4_code_review_alb_security_group_id" {
   description = "Security group associated with the P4 Code Review load balancer."
 }

-output "p4_code_review_perforce_cluster_name" {
-  value       = var.p4_code_review_config != null ? module.p4_code_review[0].cluster_name : null
-  description = "Name of the ECS cluster hosting P4 Code Review."
-}
-
 output "p4_code_review_alb_dns_name" {
   value       = var.p4_code_review_config != null ? module.p4_code_review[0].alb_dns_name : null
   description = "The DNS name of the P4 Code Review ALB."
@@ -107,15 +107,6 @@ output "p4_code_review_target_group_arn" {
   description = "The service target group for the P4 Code Review."
 }

-output "p4_code_review_default_role_id" {
-  value       = var.p4_code_review_config != null ? module.p4_code_review[0].default_role_id : null
-  description = "The default role for the P4 Code Review service task"
-}
-
-output "p4_code_review_execution_role_id" {
-  value       = var.p4_code_review_config != null ? module.p4_code_review[0].execution_role_id : null
-  description = "The default role for the P4 Code Review service task"
-}

 output "p4_server_lambda_link_name" {
   value = (var.p4_server_config.storage_type == "FSxN" && var.p4_server_config.protocol == "ISCSI" ?
diff --git a/modules/perforce/sg.tf b/modules/perforce/sg.tf
index edf76b9f..1a5c5455 100644
--- a/modules/perforce/sg.tf
+++ b/modules/perforce/sg.tf
@@ -85,12 +85,12 @@ resource "aws_vpc_security_group_ingress_rule" "perforce_web_services_inbound_fr
   )
 }

-# Perforce Web Services ALB <-- P4 Server
-# Allows Perforce Web Services ALB to receive inbound traffic from P4 Server (needed for authentication using P4Auth extension)
+# Perforce Web Services ALB <-- P4 Server (HTTPS)
+# Allows Perforce Web Services ALB to receive inbound traffic from P4 Server (needed for P4Auth extension and Swarm triggers)
 resource "aws_vpc_security_group_ingress_rule" "perforce_web_services_inbound_from_p4_server" {
   count             = (var.create_shared_application_load_balancer && var.create_default_sgs && var.p4_server_config != null ? 1 : 0)
   security_group_id = aws_security_group.perforce_web_services_alb[0].id
-  description       = "Allows Perforce Web Services ALB to receive inbound traffic from P4 Server. This is used for authentication using the P4Auth extension."
+  description       = "Allows Perforce Web Services ALB to receive inbound traffic from P4 Server. This is used for P4Auth extension authentication and Swarm trigger validation."
   ip_protocol       = "TCP"
   from_port         = 443
   to_port           = 443
@@ -128,7 +128,7 @@ resource "aws_vpc_security_group_egress_rule" "perforce_alb_outbound_to_p4_code_
   from_port                    = 80
   to_port                      = 80
   ip_protocol                  = "TCP"
-  referenced_security_group_id = module.p4_code_review[0].service_security_group_id
+  referenced_security_group_id = module.p4_code_review[0].application_security_group_id
 }

 #######################################################################################
@@ -143,7 +143,19 @@ resource "aws_vpc_security_group_ingress_rule" "p4_server_inbound_from_p4_code_r
   ip_protocol                  = "TCP"
   from_port                    = 1666
   to_port                      = 1666
-  referenced_security_group_id = module.p4_code_review[0].service_security_group_id
+  referenced_security_group_id = module.p4_code_review[0].application_security_group_id
+}
+
+# P4 Server --> Perforce Web Services ALB (HTTPS)
+# Allows P4 Server to send HTTPS traffic to Perforce Web Services ALB for Swarm trigger validation
+resource "aws_vpc_security_group_egress_rule" "p4_server_outbound_to_perforce_web_services_alb_https" {
+  count                        = var.p4_code_review_config != null && var.p4_server_config != null && var.create_default_sgs && var.create_shared_application_load_balancer ? 1 : 0
+  security_group_id            = module.p4_server[0].security_group_id
+  description                  = "Allows P4 Server to send HTTPS traffic to Perforce Web Services ALB for Swarm trigger validation."
+  from_port                    = 443
+  to_port                      = 443
+  ip_protocol                  = "TCP"
+  referenced_security_group_id = aws_security_group.perforce_web_services_alb[0].id
 }
@@ -170,7 +182,7 @@ resource "aws_vpc_security_group_ingress_rule" "p4_auth_inbound_from_perforce_we
 # Allows P4 Code Review to receive inbound traffic from Perforce Web Services ALB
 resource "aws_vpc_security_group_ingress_rule" "p4_code_review_inbound_from_perforce_web_services_alb" {
   count             = var.p4_code_review_config != null && var.create_default_sgs && var.create_shared_application_load_balancer ? 1 : 0
-  security_group_id = module.p4_code_review[0].service_security_group_id
+  security_group_id = module.p4_code_review[0].application_security_group_id
   description       = "Allows P4 Code Review to receive inbound traffic from Perforce Web Services ALB."
   ip_protocol       = "TCP"
   from_port         = 80
@@ -183,7 +195,7 @@ resource "aws_vpc_security_group_ingress_rule" "p4_code_review_inbound_from_perf
 # Allows P4 Code Review to send outbound traffic to P4 Server.
 resource "aws_vpc_security_group_egress_rule" "p4_code_review_outbound_to_p4_server" {
   count             = var.p4_code_review_config != null && var.p4_server_config != null && var.create_default_sgs ? 1 : 0
-  security_group_id = module.p4_code_review[0].service_security_group_id
+  security_group_id = module.p4_code_review[0].application_security_group_id
   description       = "Allows P4 Code Review to send outbound traffic to P4 Server."
   from_port         = 1666
   to_port           = 1666
diff --git a/modules/perforce/variables.tf b/modules/perforce/variables.tf
index d2e2809f..1ba7cba5 100644
--- a/modules/perforce/variables.tf
+++ b/modules/perforce/variables.tf
@@ -214,8 +214,8 @@ variable "p4_server_config" {
     existing_security_groups = optional(list(string), [])
     internal                 = optional(bool, false)

-    super_user_password_secret_arn = optional(string, null)
-    super_user_username_secret_arn = optional(string, null)
+    admin_username            = optional(string, "perforce")
+    admin_password_secret_arn = optional(string, null)

     create_default_role = optional(bool, true)
     custom_role         = optional(string, null)
@@ -286,9 +286,9 @@ variable "p4_server_config" {
     internal: "Set this flag to true if you do not want the P4 Server instance to have a public IP."

-    super_user_password_secret_arn: "If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's username here. Otherwise, the default of 'perforce' will be used."
+    admin_username: "Username for the Perforce admin account. The 'super' service account is always created automatically for internal tooling. Default is 'perforce'."

-    super_user_username_secret_arn: "If you would like to manage your own super user credentials through AWS Secrets Manager provide the ARN for the super user's password here."
+    admin_password_secret_arn: "Optional ARN of existing Secrets Manager secret for admin password. If not provided, a password will be auto-generated."

     create_default_role: "Optional creation of P4 Server default IAM Role with SSM managed instance core policy attached. Default is set to true."
@@ -440,14 +440,12 @@ variable "p4_code_review_config" {
     name                        = optional(string, "p4-code-review")
     project_prefix              = optional(string, "cgd")
     environment                 = optional(string, "dev")
-    debug                       = optional(bool, false)
     fully_qualified_domain_name = string

     # Compute
-    container_name   = optional(string, "p4-code-review-container")
-    container_port   = optional(number, 80)
-    container_cpu    = optional(number, 1024)
-    container_memory = optional(number, 4096)
+    application_port = optional(number, 80)
+    instance_type    = optional(string, "m5.large")
+    ami_id           = optional(string, null)
     p4d_port         = optional(string, null)
     p4charset        = optional(string, null)
     existing_redis_connection = optional(object({
@@ -457,22 +455,20 @@ variable "p4_code_review_config" {
     # Storage & Logging
     cloudwatch_log_retention_in_days = optional(number, 365)
+    ebs_volume_size                  = optional(number, 20)
+    ebs_volume_type                  = optional(string, "gp3")
+    ebs_volume_encrypted             = optional(bool, true)
+    ebs_availability_zone            = optional(string, null)

     # Networking & Security
     create_default_sgs       = optional(bool, true)
     existing_security_groups = optional(list(string), [])
     internal                 = optional(bool, false)
     service_subnets          = optional(list(string), null)
+    instance_subnet_id       = string

-    create_default_role = optional(bool, true)
-    custom_role         = optional(string, null)
-
-    super_user_password_secret_arn          = optional(string, null)
-    super_user_username_secret_arn          = optional(string, null)
-    p4_code_review_user_password_secret_arn = optional(string, null)
-    p4_code_review_user_username_secret_arn = optional(string, null)
-    enable_sso                              = optional(string, true)
-    config_php_source                       = optional(string, null)
+    super_user_password_secret_arn = optional(string, null)
+    custom_config                  = optional(string, null)

     # Caching
     elasticache_node_count = optional(number, 1)
@@ -489,23 +485,19 @@ variable "p4_code_review_config" {
     environment : "The environment where the P4 Code Review service will be deployed. Default is 'dev'."

-    debug : "Whether to enable debug mode for the P4 Code Review service. Default is 'false'."
-
     fully_qualified_domain_name : "The FQDN for the P4 Code Review Service. This is used for the P4 Code Review's Perforce configuration."

     # Compute
-    container_name : "The name of the P4 Code Review service container. Default is 'p4-code-review-container'."
+    application_port : "The port on which the P4 Code Review service will be listening. Default is '80'."

-    container_port : "The port on which the P4 Code Review service will be listening. Default is '3000'."
+    instance_type : "EC2 instance type for running P4 Code Review. Default is 'm5.large'."

-    container_cpu : "The number of CPU units to reserve for the P4 Code Review service container. Default is '1024'."
+    ami_id : "Optional AMI ID for P4 Code Review. If not provided, will use the latest Packer-built AMI."

-    container_memory : "The number of CPU units to reserve for the P4 Code Review service container. Default is '4096'."
+    p4d_port : "The full URL you will use to access the P4 Depot in clients such P4V and P4Admin. Note, this typically starts with 'ssl:' and ends with the default port of ':1666'."

-    pd4_port : "The full URL you will use to access the P4 Depot in clients such P4V and P4Admin. Note, this typically starts with 'ssl:' and ends with the default port of ':1666'."
-
-    p4charset : "The P4CHARSET environment variable to set in the P4 Code Review container."
+    p4charset : "The P4CHARSET environment variable to set for the P4 Code Review instance."

     existing_redis_connection : "The existing Redis connection for the P4 Code Review service."
@@ -513,29 +505,25 @@ variable "p4_code_review_config" {
     # Storage & Logging
     cloudwatch_log_retention_in_days : "The number of days to retain the P4 Code Review service logs in CloudWatch. Default is 365 days."

+    ebs_volume_size : "Size in GB for the EBS volume that stores P4 Code Review data. Default is '20'."
-    # Networking & Security
-    create_default_sgs : "Whether to create default security groups for the P4 Code Review service."
-
-    internal : "Set this flag to true if you do not want the P4 Code Review service to have a public IP."
+    ebs_volume_type : "EBS volume type for P4 Code Review data storage. Default is 'gp3'."

-    create_default_role : "Whether to create the P4 Code Review default IAM Role. Default is set to true."
+    ebs_volume_encrypted : "Enable encryption for the EBS volume storing P4 Code Review data. Default is 'true'."

-    custom_role : "ARN of a custom IAM Role you wish to use with P4 Code Review."
+    ebs_availability_zone : "Availability zone for the EBS volume. Must match the EC2 instance AZ."

-    super_user_password_secret_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review Administrator username."
-    super_user_username_secret_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review Administrator password."
-
-    p4d_p4_code_review_user_secret_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review user's username."
+    # Networking & Security
+    create_default_sgs : "Whether to create default security groups for the P4 Code Review service."

-    p4d_p4_code_review_password_secret_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review user's password."
+    internal : "Set this flag to true if you do not want the P4 Code Review service to have a public IP."

-    p4d_p4_code_review_user_password_arn : "Optionally provide the ARN of an AWS Secret for the P4 Code Review user's password."
+    instance_subnet_id : "The subnet ID where the EC2 instance will be launched. Should be a private subnet for security."

-    enable_sso : "Whether to enable SSO for the P4 Code Review service. Default is set to false."
+    super_user_password_secret_arn : "Optionally provide the ARN of an AWS Secret for the P4 Server super user password. The super user is used for both Swarm runtime operations and administrative tasks."

-    config_php_source : "Used as the ValueFrom for P4CR's config.php. Contents should be base64 encoded, and will be combined with the generated config.php via array_replace_recursive."
+    custom_config : "JSON string with additional Swarm configuration to merge with the generated config.php. Use this for SSO/SAML setup, notifications, Jira integration, etc."

     # Caching