diff --git a/docs/provision-signing-quickref.md b/docs/provision-signing-quickref.md new file mode 100644 index 0000000..5994228 --- /dev/null +++ b/docs/provision-signing-quickref.md @@ -0,0 +1,171 @@ +# Quick Reference: Binary Signing in Provision Scripts + +## Setup (in avocado.yaml) + +```yaml +signing_keys: + my-key: my-key-id + +runtime: + my-runtime: + signing: + key: my-key + checksum_algorithm: sha256 +``` + +## Usage (in provision script) + +### Simple Usage +```bash +avocado-sign-request /opt/_avocado/x86_64/runtimes/my-runtime/binary.bin +``` + +### With Error Handling +```bash +if avocado-sign-request /path/to/binary; then + echo "Signed successfully" +else + echo "Signing failed" + exit 1 +fi +``` + +### Check Availability +```bash +if [ -n "$AVOCADO_SIGNING_ENABLED" ]; then + avocado-sign-request /path/to/binary +fi +``` + +## Exit Codes + +| Code | Meaning | +|------|---------| +| 0 | Success | +| 1 | Signing failed | +| 2 | Signing unavailable | +| 3 | File not found | + +## Environment Variables + +### General Variables +- `$AVOCADO_RUNTIME_BUILD_DIR` - Full path to runtime build directory (e.g., `/opt/_avocado/x86_64/runtimes/`) +- `$AVOCADO_EXT_LIST` - Space-separated list of required extensions +- `$AVOCADO_PROVISION_OUT` - Output directory (if `--out` specified). File ownership automatically fixed to calling user. +- `$AVOCADO_PROVISION_STATE` - Path to state file for persisting state between provision runs (if `--provision-profile` specified). See State File section below. +- `$AVOCADO_STONE_INCLUDE_PATHS` - Stone include paths (if configured) +- `$AVOCADO_STONE_MANIFEST` - Stone manifest path (if configured) + +### Signing Variables +- `$AVOCADO_SIGNING_ENABLED` - Set to "1" when available +- `$AVOCADO_SIGNING_KEY_NAME` - Key name being used +- `$AVOCADO_SIGNING_CHECKSUM` - Algorithm (sha256/blake3) +- `$AVOCADO_SIGNING_SOCKET` - Socket path + +## Output + +Creates `{binary}.sig` file next to the binary: + +``` +/opt/_avocado/x86_64/runtimes/my-runtime/ +├── binary.bin +└── binary.bin.sig ← Created by signing +``` + +## Path Requirements + +Binary must be in one of these locations: +- `/opt/_avocado/{target}/runtimes/{runtime}/...` +- `/opt/_avocado/{target}/output/runtimes/{runtime}/...` + +❌ Won't work: `/tmp/binary`, `/opt/src/binary` +✅ Will work: +- `/opt/_avocado/x86_64/runtimes/my-runtime/binary` +- `/opt/_avocado/x86_64/output/runtimes/my-runtime/binary` + +## Complete Example + +```bash +#!/bin/bash +# avocado-provision-x86_64 + +set -e + +# Use the runtime build directory variable +RUNTIME_DIR="$AVOCADO_RUNTIME_BUILD_DIR" + +# Build binary +make firmware.bin + +# Copy to runtime directory +cp firmware.bin "$RUNTIME_DIR/" + +# Sign it +if command -v avocado-sign-request &> /dev/null; then + if avocado-sign-request "$RUNTIME_DIR/firmware.bin"; then + echo "✓ Signed firmware.bin" + else + echo "✗ Failed to sign firmware.bin" + exit 1 + fi +fi + +# Continue provisioning... +``` + +## Troubleshooting + +**Socket not available?** +- Check signing is configured in avocado.yaml +- Check signing key exists: `avocado signing-keys list` + +**Path validation error?** +- Ensure binary is in `/opt/_avocado/{target}/runtimes/{runtime}/` +- No `..` in path + +**Timeout?** +- Check binary size (signing takes longer for large files) +- Default timeout is 30 seconds + +## State File + +When using a provision profile (`--provision-profile`), you can persist state between provision runs using a JSON state file. 
+ +### Configuration + +```yaml +provision: + production: + state_file: my-state.json # Optional, defaults to provision-{profile}.json + container_args: + - --privileged +``` + +### Usage + +```bash +#!/bin/bash +# In your provision script + +if [ -f "$AVOCADO_PROVISION_STATE" ]; then + echo "Previous state exists, reading..." + DEVICE_ID=$(jq -r '.device_id' "$AVOCADO_PROVISION_STATE") +else + echo "First run, creating state..." + DEVICE_ID=$(uuidgen) +fi + +# Save state for next run +jq -n --arg id "$DEVICE_ID" '{"device_id": $id}' > "$AVOCADO_PROVISION_STATE" +``` + +### How It Works + +1. Before provisioning: If `/` exists, it's copied into the container +2. During provisioning: Script can read/modify `$AVOCADO_PROVISION_STATE` +3. After provisioning: If the file exists in the container, it's copied back to `/` with correct ownership + +## More Information + +See [`docs/provision-signing.md`](provision-signing.md) for complete documentation. + diff --git a/docs/provision-signing.md b/docs/provision-signing.md new file mode 100644 index 0000000..8fe1f8a --- /dev/null +++ b/docs/provision-signing.md @@ -0,0 +1,390 @@ +# Binary Signing During Provisioning + +This document describes how target-specific provision scripts can request binary signing from the host CLI during `avocado provision` execution. + +## Overview + +The Avocado CLI provides a mechanism for provision scripts running inside containers to request binary signing from the host without breaking script execution flow. This is accomplished using Unix domain sockets for bidirectional communication. + +## Architecture + +When you run `avocado provision` for a runtime that has signing configured: + +1. The host CLI starts a signing service listening on a Unix socket +2. The socket and a helper script are mounted into the container +3. Provision scripts can call `avocado-sign-request` to request binary signing +4. The host signs the binary using the configured key +5. The signature is written back to the volume +6. The script continues execution + +## Configuration + +To enable signing during provisioning, configure a signing key for your runtime in `avocado.yaml`: + +```yaml +signing_keys: + my-key: my-key-id + +runtime: + my-runtime: + signing: + key: my-key + checksum_algorithm: sha256 # or blake3 +``` + +## Usage in Provision Scripts + +### Basic Example + +```bash +#!/bin/bash +# avocado-provision-x86_64 script + +set -e + +# Generate a custom binary +echo "Building custom bootloader..." +make -C /opt/src/bootloader custom-bootloader.bin + +# Copy to runtime directory +cp /opt/src/bootloader/custom-bootloader.bin \ + /opt/_avocado/x86_64/runtimes/my-runtime/custom-bootloader.bin + +# Request signing from host +if command -v avocado-sign-request &> /dev/null; then + echo "Requesting signature from host..." + if avocado-sign-request /opt/_avocado/x86_64/runtimes/my-runtime/custom-bootloader.bin; then + echo "Binary signed successfully" + else + echo "Error: Failed to sign binary" + exit 1 + fi +else + echo "Warning: Signing not available" +fi + +# Continue with provisioning... 
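+
+# Optional sanity check (illustrative only): per the docs, a successful sign
+# request writes a detached .sig file next to the binary, so you can confirm
+# it exists before relying on it later in provisioning.
+SIG_FILE=/opt/_avocado/x86_64/runtimes/my-runtime/custom-bootloader.bin.sig
+if [ ! -f "$SIG_FILE" ]; then
+    echo "Warning: expected signature file $SIG_FILE was not created"
+fi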
+``` + +### Checking Signing Availability + +```bash +# Check if signing is enabled +if [ -n "$AVOCADO_SIGNING_ENABLED" ]; then + echo "Signing is available" + echo "Using key: $AVOCADO_SIGNING_KEY_NAME" + echo "Algorithm: $AVOCADO_SIGNING_CHECKSUM" +fi +``` + +### Error Handling + +The `avocado-sign-request` helper script returns different exit codes: + +- `0`: Success - binary was signed +- `1`: Signing failed - there was an error during signing +- `2`: Signing unavailable - socket not available +- `3`: File not found - binary doesn't exist + +Example error handling: + +```bash +if ! avocado-sign-request /path/to/binary; then + EXIT_CODE=$? + case $EXIT_CODE in + 1) + echo "Error: Signing failed" + exit 1 + ;; + 2) + echo "Warning: Signing not available, continuing anyway" + ;; + 3) + echo "Error: Binary not found" + exit 1 + ;; + esac +fi +``` + +## Environment Variables + +### General Environment Variables + +The following environment variables are available in provision and build scripts: + +- `AVOCADO_RUNTIME_BUILD_DIR`: Full path to the runtime build directory (e.g., `/opt/_avocado/x86_64/runtimes/`) +- `AVOCADO_EXT_LIST`: Space-separated list of extensions required by the runtime (if any) +- `AVOCADO_PROVISION_OUT`: Output directory path in the container (if `--out` flag is specified). Files written here will have their ownership automatically fixed to match the calling user. +- `AVOCADO_PROVISION_STATE`: Path to a state file for persisting data between provision runs (if `--provision-profile` is specified). See [State File Management](#state-file-management) section below. +- `AVOCADO_STONE_INCLUDE_PATHS`: Stone include paths (if configured for the runtime) +- `AVOCADO_STONE_MANIFEST`: Stone manifest path (if configured for the runtime) + +### Signing-Related Environment Variables + +The following environment variables are available in the container when signing is enabled: + +- `AVOCADO_SIGNING_ENABLED`: Set to `1` when signing is available +- `AVOCADO_SIGNING_SOCKET`: Path to the signing socket (`/run/avocado/sign.sock`) +- `AVOCADO_SIGNING_KEY_NAME`: Name of the signing key being used +- `AVOCADO_SIGNING_CHECKSUM`: Checksum algorithm (`sha256` or `blake3`) + +## Signature Files + +When a binary is successfully signed, a signature file is created with the `.sig` extension: + +``` +/opt/_avocado/x86_64/runtimes/my-runtime/ +├── custom-bootloader.bin +└── custom-bootloader.bin.sig +``` + +The signature file contains JSON with the following structure: + +```json +{ + "version": "1", + "checksum_algorithm": "sha256", + "checksum": "a1b2c3...", + "signature": "d4e5f6...", + "key_name": "my-key", + "keyid": "my-key-id" +} +``` + +## Communication Protocol + +The signing protocol uses line-delimited JSON over Unix domain sockets. + +### Request Format + +```json +{ + "type": "sign_request", + "binary_path": "/opt/_avocado/x86_64/runtimes/my-runtime/custom-binary", + "checksum_algorithm": "sha256" +} +``` + +### Response Format + +Success: +```json +{ + "type": "sign_response", + "success": true, + "signature_path": "/opt/_avocado/x86_64/runtimes/my-runtime/custom-binary.sig", + "signature_content": "{ ... signature JSON ... 
}", + "error": null +} +``` + +Error: +```json +{ + "type": "sign_response", + "success": false, + "signature_path": null, + "signature_content": null, + "error": "Error message here" +} +``` + +## Security + +- **Path Validation**: Only binaries within the runtime's volume path can be signed +- **Socket Permissions**: Socket file has 0600 permissions (owner only) +- **Read-only Keys**: Signing keys are never exposed to the container +- **No Direct Access**: All signing operations happen on the host + +## Limitations + +- Binary must exist in one of the runtime's directory structures: + - `/opt/_avocado/{target}/runtimes/{runtime}/...` + - `/opt/_avocado/{target}/output/runtimes/{runtime}/...` +- Path traversal (`..`) is not allowed +- Socket operations have a 30-second timeout +- Only one signing operation can be processed at a time per container + +## Troubleshooting + +### "Error: Signing socket not available" + +The signing service is not running. This can happen if: +- No signing key is configured for the runtime +- The socket mount failed +- The signing service failed to start + +Check the host CLI output for errors during provision startup. + +### "Warning: avocado-sign-request not available" + +The helper script was not mounted properly. This should not happen in normal operation. If you see this: +- Ensure you're using the latest version of avocado-cli +- Check that the signing service started successfully (you should see a message about "Starting signing service") +- Try running with `--verbose` flag to see detailed mount information + +### "Error: Binary not found" + +The binary path doesn't exist. Make sure: +- The binary was created successfully +- The path is absolute +- The path points to the correct location in the volume + +### "Error signing binary: Binary path is not within expected runtime directory" + +The binary path must be within one of the allowed runtime directories: +- `/opt/_avocado/{target}/runtimes/{runtime}/...` +- `/opt/_avocado/{target}/output/runtimes/{runtime}/...` + +You cannot sign binaries outside these directories for security reasons. + +**Valid examples:** +- `/opt/_avocado/x86_64/runtimes/my-runtime/firmware.bin` +- `/opt/_avocado/x86_64/output/runtimes/my-runtime/_build/firmware.bin` + +**Invalid examples:** +- `/tmp/firmware.bin` (not in runtime directory) +- `/opt/src/firmware.bin` (source directory, not volume) +- `/opt/_avocado/x86_64/runtimes/other-runtime/binary` (wrong runtime) + +## Example: Complete Provisioning Workflow + +```bash +#!/bin/bash +# avocado-provision-x86_64 script for custom hardware + +set -e + +RUNTIME_DIR="/opt/_avocado/x86_64/runtimes/my-hardware" + +echo "Building firmware for my-hardware..." +cd /opt/src/firmware +make clean +make ARCH=x86_64 + +echo "Copying firmware to runtime directory..." +cp build/firmware.bin "$RUNTIME_DIR/firmware.bin" +cp build/bootloader.bin "$RUNTIME_DIR/bootloader.bin" + +echo "Signing firmware components..." +for binary in firmware.bin bootloader.bin; do + if avocado-sign-request "$RUNTIME_DIR/$binary"; then + echo "✓ Signed $binary" + else + echo "✗ Failed to sign $binary" + exit 1 + fi +done + +echo "Creating provisioning manifest..." +cat > "$RUNTIME_DIR/manifest.json" </`), it is copied into the container at `/opt/_avocado/{target}/output/runtimes/{runtime}/provision-state.json` + +2. **During provisioning**: Your provision script can read and modify the file via the `AVOCADO_PROVISION_STATE` environment variable + +3. 
**After provisioning**: If the state file exists in the container (even if empty), it is copied back to `/` with the correct ownership (matching the calling user, not root) + +### Usage Example + +```bash +#!/bin/bash +# avocado-provision-x86_64 script with state management + +set -e + +# Check if we have previous state +if [ -f "$AVOCADO_PROVISION_STATE" ]; then + echo "Reading previous provision state..." + PROVISION_COUNT=$(jq -r '.provision_count // 0' "$AVOCADO_PROVISION_STATE") + DEVICE_UUID=$(jq -r '.device_uuid // empty' "$AVOCADO_PROVISION_STATE") + + if [ -z "$DEVICE_UUID" ]; then + DEVICE_UUID=$(uuidgen) + fi +else + echo "First provision run, initializing state..." + PROVISION_COUNT=0 + DEVICE_UUID=$(uuidgen) +fi + +# Increment provision count +PROVISION_COUNT=$((PROVISION_COUNT + 1)) + +echo "Device UUID: $DEVICE_UUID" +echo "Provision count: $PROVISION_COUNT" + +# ... do provisioning work ... + +# Save state for next run +jq -n \ + --arg uuid "$DEVICE_UUID" \ + --argjson count "$PROVISION_COUNT" \ + --arg timestamp "$(date -Iseconds)" \ + '{ + device_uuid: $uuid, + provision_count: $count, + last_provision: $timestamp + }' > "$AVOCADO_PROVISION_STATE" + +echo "State saved successfully" +``` + +### Important Notes + +- The state file is only available when using `--provision-profile` +- The file is stored in your source directory and should be added to `.gitignore` if you don't want to version control it +- File ownership is automatically fixed after provisioning to match the calling user (not root) +- If the state file doesn't exist after provisioning and didn't exist before, no file is created + +## See Also + +- [Signing Keys Documentation](signing-keys.md) - Managing signing keys +- [Runtime Configuration](../README.md#runtime-configuration) - Configuring runtimes +- [Extension Signing](extension-signing.md) - Signing extension images + diff --git a/docs/signing-keys.md b/docs/signing-keys.md index 0d12bdc..76b5dcf 100644 --- a/docs/signing-keys.md +++ b/docs/signing-keys.md @@ -176,12 +176,14 @@ Output: Registered signing keys: my-production-key - Key ID: sha256-7ca821b2d4ac87b3 + Key ID: abc123def456abc123def456abc123def456abc123def456abc123def456abc1 Algorithm: ed25519 Type: file Created: 2025-12-17 15:10:22 UTC ``` +**Note:** Key IDs are the full SHA-256 hash of the public key, base16/hex encoded (64 characters). When you create a key without specifying a `--name`, the key ID is used as the default name. + ### Removing Keys **Remove key reference (hardware key remains intact):** @@ -189,8 +191,8 @@ Registered signing keys: # Remove by name avocado signing-keys remove my-production-key -# Remove by key ID -avocado signing-keys remove sha256-069beb292983492c +# Remove by key ID (full 64-character hex hash) +avocado signing-keys remove abc123def456abc123def456abc123def456abc123def456abc123def456abc1 ``` **Permanently delete hardware key from device (requires confirmation):** @@ -214,13 +216,21 @@ This action cannot be undone. Continue? [y/N]: ### Mapping Keys in avocado.yaml -The `signing_keys` section creates a local mapping between friendly names and key IDs: +The `signing_keys` section creates a local mapping between friendly names and key references. 
+Key references can be: +- **Key IDs**: The full 64-character hex-encoded SHA-256 hash of the public key +- **Global registry names**: The name used when creating the key with `avocado signing-keys create --name ` ```yaml +# Using key IDs directly (64-character hex hash) +signing_keys: + - production-key: abc123def456abc123def456abc123def456abc123def456abc123def456abc1 + - staging-key: 789012fedcba789012fedcba789012fedcba789012fedcba789012fedcba7890 + +# Or using global registry names (will be resolved to key IDs) signing_keys: - - production-key: sha256-abc123def456 - - staging-key: sha256-789012fedcba - - backup-key: sha256-111222333444 + - production-key: my-production-signing-key # name from global registry + - staging-key: my-staging-key # resolved to key ID at runtime ``` ### Referencing Keys in Runtimes @@ -261,10 +271,10 @@ default_target: qemux86-64 sdk: image: ghcr.io/avocado-framework/avocado-sdk:latest -# Map friendly names to key IDs from global registry +# Map friendly names to key IDs (64-char hex hashes) or global registry names signing_keys: - - production-key: sha256-abc123def456 - - staging-key: sha256-789012fedcba + - production-key: abc123def456abc123def456abc123def456abc123def456abc123def456abc1 + - staging-key: my-staging-key # global registry name, resolved at runtime runtime: production: @@ -293,15 +303,17 @@ The global registry is stored in `keys.json`: { "keys": { "my-production-key": { - "keyid": "sha256-abc123def456", + "keyid": "abc123def456abc123def456abc123def456abc123def456abc123def456abc1", "algorithm": "ed25519", "created_at": "2025-12-17T10:30:00Z", - "uri": "file:///home/user/.config/avocado/signing-keys/sha256-abc123" + "uri": "file:///home/user/.config/avocado/signing-keys/abc123def456abc123def456abc123def456abc123def456abc123def456abc1" } } } ``` +**Note:** The `keyid` is the full SHA-256 hash of the public key, base16/hex encoded (64 characters). If no name is provided when creating a key, the key ID is used as the registry name. + ## API Usage For programmatic access, the following methods are available: @@ -427,7 +439,7 @@ Signature files are JSON format containing: "checksum": "abc123...", "signature": "def456...", "key_name": "production-key", - "keyid": "sha256-abc123def456" + "keyid": "abc123def456abc123def456abc123def456abc123def456abc123def456abc1" } ``` diff --git a/examples/signing-keys-example.yaml b/examples/signing-keys-example.yaml index e54c346..64243fa 100644 --- a/examples/signing-keys-example.yaml +++ b/examples/signing-keys-example.yaml @@ -1,11 +1,15 @@ # Example: Using signing keys with runtime configurations # # This example demonstrates how to: -# 1. Define a local mapping of signing keys (name -> key ID) +# 1. Define a local mapping of signing keys (name -> key ID or global name) # 2. Reference those keys in runtime configurations # # The signing_keys section acts as a bridge between friendly names # and the actual key IDs from the global signing keys registry. +# +# Key IDs are the full SHA-256 hash of the public key, base16/hex encoded (64 characters). +# When you create a key with `avocado signing-keys create`, the key ID is also used +# as the default name if you don't provide a --name argument. 
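+#
+# To register a key and look up its ID, use the signing-keys commands
+# (illustrative session with a placeholder name; see docs/signing-keys.md):
+#
+#   avocado signing-keys create --name my-production-signing-key
+#   avocado signing-keys list    # prints each key's 64-character hex Key ID
+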
default_target: qemux86-64 @@ -13,12 +17,20 @@ sdk: image: docker.io/avocadolinux/avocado-sdk:apollo-edge # Define signing keys with friendly names -# The key IDs (right side) should match keys in the global registry -# managed by `avocado signing-keys` commands +# The values (right side) can be either: +# - A key ID: full 64-character hex-encoded SHA-256 hash of the public key +# - A global registry name: the name used when creating the key with `avocado signing-keys create --name ` +# +# Example with key IDs: signing_keys: - - production-key: sha256-abc123def456 - - staging-key: sha256-789012fedcba - - backup-key: sha256-111222333444 + - production-key: abc123def456abc123def456abc123def456abc123def456abc123def456abc1 + - staging-key: 789012fedcba789012fedcba789012fedcba789012fedcba789012fedcba7890 + - backup-key: 111222333444111222333444111222333444111222333444111222333444dead + +# You can also reference keys by their global registry name: +# signing_keys: +# - production-key: my-production-signing-key # name from global registry +# - staging-key: my-staging-key # will be resolved to key ID runtime: # Production runtime uses the production signing key with blake3 diff --git a/src/commands/build.rs b/src/commands/build.rs index 833e7b8..be2869f 100644 --- a/src/commands/build.rs +++ b/src/commands/build.rs @@ -65,27 +65,30 @@ impl BuildCommand { /// Execute the build command pub async fn execute(&self) -> Result<()> { - // Load the configuration and parse raw TOML - let config = Config::load(&self.config_path) + // Early target validation - load basic config first + let basic_config = Config::load(&self.config_path) .with_context(|| format!("Failed to load config from {}", self.config_path))?; - let content = std::fs::read_to_string(&self.config_path)?; - let parsed: serde_yaml::Value = serde_yaml::from_str(&content)?; - - // Early target validation and logging - fail fast if target is unsupported let target = - crate::utils::target::validate_and_log_target(self.target.as_deref(), &config)?; + crate::utils::target::validate_and_log_target(self.target.as_deref(), &basic_config)?; + + // Load the composed configuration (merges external configs, applies interpolation) + let composed = Config::load_composed(&self.config_path, self.target.as_deref()) + .with_context(|| format!("Failed to load composed config from {}", self.config_path))?; + + let config = &composed.config; + let parsed = &composed.merged_value; // If a specific extension is requested, build only that extension if let Some(ref ext_name) = self.extension { return self - .build_single_extension(&config, &parsed, ext_name, &target) + .build_single_extension(config, parsed, ext_name, &target) .await; } // If a specific runtime is requested, build only that runtime and its dependencies if let Some(ref runtime_name) = self.runtime { return self - .build_single_runtime(&config, &parsed, runtime_name, &target) + .build_single_runtime(config, parsed, runtime_name, &target) .await; } @@ -95,7 +98,7 @@ impl BuildCommand { ); // Determine which runtimes to build based on target - let runtimes_to_build = self.get_runtimes_to_build(&config, &parsed, &target)?; + let runtimes_to_build = self.get_runtimes_to_build(config, parsed, &target)?; if runtimes_to_build.is_empty() { print_info("No runtimes found to build.", OutputLevel::Normal); @@ -105,7 +108,7 @@ impl BuildCommand { // Step 1: Analyze dependencies print_info("Step 1/4: Analyzing dependencies", OutputLevel::Normal); let required_extensions = - self.find_required_extensions(&config, 
&parsed, &runtimes_to_build, &target)?; + self.find_required_extensions(config, parsed, &runtimes_to_build, &target)?; // Note: SDK compile sections are now compiled on-demand when extensions are built // This prevents duplicate compilation when sdk.compile sections are also extension dependencies @@ -147,17 +150,17 @@ impl BuildCommand { } // Build external extension using its own config - self.build_external_extension(&config, &self.config_path, name, ext_config_path, &target).await.with_context(|| { + self.build_external_extension(config, &self.config_path, name, ext_config_path, &target).await.with_context(|| { format!("Failed to build external extension '{name}' from config '{ext_config_path}'") })?; // Create images for external extension - self.create_external_extension_images(&config, &self.config_path, name, ext_config_path, &target).await.with_context(|| { + self.create_external_extension_images(config, &self.config_path, name, ext_config_path, &target).await.with_context(|| { format!("Failed to create images for external extension '{name}' from config '{ext_config_path}'") })?; // Copy external extension images to output directory so runtime build can find them - self.copy_external_extension_images(&config, name, &target) + self.copy_external_extension_images(config, name, &target) .await .with_context(|| { format!("Failed to copy images for external extension '{name}'") diff --git a/src/commands/ext/build.rs b/src/commands/ext/build.rs index 8785cff..f849ae0 100644 --- a/src/commands/ext/build.rs +++ b/src/commands/ext/build.rs @@ -99,7 +99,8 @@ impl ExtBuildCommand { })?; // Handle compile dependencies with install scripts before building the extension - self.handle_compile_dependencies(&config, &ext_config, &target) + // Pass the ext_config_path so SDK compile sections are loaded from the correct config + self.handle_compile_dependencies(&config, &ext_config, &target, &ext_config_path) .await?; // Get extension types from the types array (defaults to ["sysext", "confext"]) @@ -1232,11 +1233,15 @@ echo "Set proper permissions on authentication files""#, script_lines.join("") } /// Handle compile dependencies with install scripts + /// + /// `sdk_config_path` is the path to the config file that contains the sdk.compile sections. + /// For external extensions, this should be the external config path, not the main config. async fn handle_compile_dependencies( &self, config: &Config, ext_config: &serde_yaml::Value, target: &str, + sdk_config_path: &str, ) -> Result<()> { // Get dependencies from extension configuration let dependencies = ext_config.get("dependencies").and_then(|v| v.as_mapping()); @@ -1300,8 +1305,15 @@ echo "Set proper permissions on authentication files""#, ); // First, run the SDK compile for the specified section + // Use sdk_config_path which points to the config where sdk.compile sections are defined + if self.verbose { + print_info( + &format!("Using config path for SDK compile: {}", sdk_config_path), + OutputLevel::Normal, + ); + } let compile_command = SdkCompileCommand::new( - self.config_path.clone(), + sdk_config_path.to_string(), self.verbose, vec![compile_section.clone()], Some(target.to_string()), @@ -1311,7 +1323,8 @@ echo "Set proper permissions on authentication files""#, compile_command.execute().await.with_context(|| { format!( - "Failed to compile SDK section '{compile_section}' for dependency '{dep_name}'" + "Failed to compile SDK section '{compile_section}' for dependency '{dep_name}'. 
Config path: {}", + sdk_config_path ) })?; diff --git a/src/commands/ext/install.rs b/src/commands/ext/install.rs index 1d83ece..aa8e3ac 100644 --- a/src/commands/ext/install.rs +++ b/src/commands/ext/install.rs @@ -1,4 +1,4 @@ -use anyhow::Result; +use anyhow::{Context, Result}; use crate::utils::config::{Config, ExtensionLocation}; use crate::utils::container::{RunConfig, SdkContainer}; @@ -37,10 +37,12 @@ impl ExtInstallCommand { } pub async fn execute(&self) -> Result<()> { - // Load the configuration and parse raw TOML - let config = Config::load(&self.config_path)?; - let content = std::fs::read_to_string(&self.config_path)?; - let parsed: serde_yaml::Value = serde_yaml::from_str(&content)?; + // Load the composed configuration (merges external configs, applies interpolation) + let composed = Config::load_composed(&self.config_path, self.target.as_deref()) + .with_context(|| format!("Failed to load composed config from {}", self.config_path))?; + + let config = &composed.config; + let parsed = &composed.merged_value; // Merge container args from config and CLI (similar to SDK commands) let merged_container_args = config.merge_sdk_container_args(self.container_args.as_ref()); @@ -48,10 +50,12 @@ impl ExtInstallCommand { // Get repo_url and repo_release from config let repo_url = config.get_sdk_repo_url(); let repo_release = config.get_sdk_repo_release(); - let target = resolve_target_required(self.target.as_deref(), &config)?; + let target = resolve_target_required(self.target.as_deref(), config)?; - // Determine which extensions to install - let extensions_to_install = if let Some(extension_name) = &self.extension { + // Determine which extensions to install (with their locations) + let extensions_to_install: Vec<(String, ExtensionLocation)> = if let Some(extension_name) = + &self.extension + { // Single extension specified - use comprehensive lookup match config.find_extension_in_dependency_tree( &self.config_path, @@ -71,13 +75,15 @@ impl ExtInstallCommand { } ExtensionLocation::External { name, config_path } => { print_info( - &format!("Found external extension '{name}' in config '{config_path}'"), - OutputLevel::Normal, - ); + &format!( + "Found external extension '{name}' in config '{config_path}'" + ), + OutputLevel::Normal, + ); } } } - vec![extension_name.clone()] + vec![(extension_name.clone(), location)] } None => { print_error( @@ -93,7 +99,17 @@ impl ExtInstallCommand { Some(ext_section) => match ext_section.as_mapping() { Some(table) => table .keys() - .filter_map(|k| k.as_str().map(|s| s.to_string())) + .filter_map(|k| { + k.as_str().map(|s| { + ( + s.to_string(), + ExtensionLocation::Local { + name: s.to_string(), + config_path: self.config_path.clone(), + }, + ) + }) + }) .collect(), None => vec![], }, @@ -109,11 +125,15 @@ impl ExtInstallCommand { return Ok(()); } + let ext_names: Vec<&str> = extensions_to_install + .iter() + .map(|(n, _)| n.as_str()) + .collect(); print_info( &format!( "Installing {} extension(s): {}.", extensions_to_install.len(), - extensions_to_install.join(", ") + ext_names.join(", ") ), OutputLevel::Normal, ); @@ -137,14 +157,14 @@ impl ExtInstallCommand { .and_then(|runtime_config| runtime_config.get("target")) .and_then(|target| target.as_str()) .map(|s| s.to_string()); - let target = resolve_target_required(self.target.as_deref(), &config)?; + let target = resolve_target_required(self.target.as_deref(), config)?; // Use the container helper to run the setup commands let container_helper = SdkContainer::new(); let total = 
extensions_to_install.len(); // Install each extension - for (index, ext_name) in extensions_to_install.iter().enumerate() { + for (index, (ext_name, ext_location)) in extensions_to_install.iter().enumerate() { if self.verbose { print_debug( &format!("Installing ({}/{}) {}.", index + 1, total, ext_name), @@ -152,10 +172,26 @@ impl ExtInstallCommand { ); } + // Get the config path where this extension is actually defined + let ext_config_path = match ext_location { + ExtensionLocation::Local { config_path, .. } => config_path.clone(), + ExtensionLocation::External { config_path, .. } => { + // Resolve relative path against main config directory + let main_config_dir = std::path::Path::new(&self.config_path) + .parent() + .unwrap_or(std::path::Path::new(".")); + main_config_dir + .join(config_path) + .to_string_lossy() + .to_string() + } + }; + if !self .install_single_extension( - &parsed, + config, ext_name, + &ext_config_path, &container_helper, container_image, &target, @@ -183,8 +219,9 @@ impl ExtInstallCommand { #[allow(clippy::too_many_arguments)] async fn install_single_extension( &self, - config: &serde_yaml::Value, + config: &Config, extension: &str, + ext_config_path: &str, container_helper: &SdkContainer, container_image: &str, target: &str, @@ -246,13 +283,12 @@ impl ExtInstallCommand { } } + // Get merged extension configuration from the correct config file + // This properly handles both local and external extensions + let ext_config = config.get_merged_ext_config(extension, target, ext_config_path)?; + // Install dependencies if they exist - // Check if extension exists in local config (versioned extensions may not be local) - let dependencies = config - .get("ext") - .and_then(|ext| ext.as_mapping()) - .and_then(|ext_table| ext_table.get(extension)) - .and_then(|extension_config| extension_config.get("dependencies")); + let dependencies = ext_config.as_ref().and_then(|ec| ec.get("dependencies")); if let Some(serde_yaml::Value::Mapping(deps_map)) = dependencies { // Build list of packages to install and handle extension dependencies @@ -266,61 +302,71 @@ impl ExtInstallCommand { None => continue, // Skip if package name is not a string }; - // Handle extension dependencies - if let serde_yaml::Value::Mapping(spec_map) = version_spec { - // Skip compile dependencies (identified by dict value with 'compile' key) - if spec_map.contains_key(serde_yaml::Value::String("compile".to_string())) { - continue; + // Handle different dependency types based on value format + match version_spec { + // Simple string version: "package: version" or "package: '*'" + // These are always package repository dependencies + serde_yaml::Value::String(version) => { + if version == "*" { + packages.push(package_name.to_string()); + } else { + packages.push(format!("{package_name}-{version}")); + } } - - // Check for extension dependency - if let Some(ext_name) = spec_map.get("ext").and_then(|v| v.as_str()) { - // Check if this is a versioned extension (has vsn field) - if let Some(version) = spec_map.get("vsn").and_then(|v| v.as_str()) { - extension_dependencies - .push((ext_name.to_string(), Some(version.to_string()))); + // Object/mapping value: need to check what type of dependency + serde_yaml::Value::Mapping(spec_map) => { + // Skip compile dependencies - these are SDK-compiled, not from repo + // Format: { compile: "section-name", install: "script.sh" } + if spec_map.get("compile").is_some() { if self.verbose { - print_info( - &format!("Found versioned extension dependency: {ext_name} version 
{version}"), + print_debug( + &format!("Skipping compile dependency '{package_name}' (SDK-compiled, not from repo)"), OutputLevel::Normal, ); } + continue; } - // Check if this is an external extension (has config field) - else if let Some(config_path) = - spec_map.get("config").and_then(|v| v.as_str()) - { - extension_dependencies.push((ext_name.to_string(), None)); - if self.verbose { - print_info( - &format!("Found external extension dependency: {ext_name} from config {config_path}"), - OutputLevel::Normal, - ); + + // Check for extension dependency + // Format: { ext: "extension-name" } or { ext: "name", config: "path" } or { ext: "name", vsn: "version" } + if let Some(ext_name) = spec_map.get("ext").and_then(|v| v.as_str()) { + // Check if this is a versioned extension (has vsn field) + if let Some(version) = spec_map.get("vsn").and_then(|v| v.as_str()) { + extension_dependencies + .push((ext_name.to_string(), Some(version.to_string()))); + if self.verbose { + print_info( + &format!("Found versioned extension dependency: {ext_name} version {version}"), + OutputLevel::Normal, + ); + } } - } else { - // Local extension - extension_dependencies.push((ext_name.to_string(), None)); - if self.verbose { - print_info( - &format!("Found local extension dependency: {ext_name}"), - OutputLevel::Normal, - ); + // Check if this is an external extension (has config field) + else if let Some(config_path) = + spec_map.get("config").and_then(|v| v.as_str()) + { + extension_dependencies.push((ext_name.to_string(), None)); + if self.verbose { + print_info( + &format!("Found external extension dependency: {ext_name} from config {config_path}"), + OutputLevel::Normal, + ); + } + } else { + // Local extension + extension_dependencies.push((ext_name.to_string(), None)); + if self.verbose { + print_info( + &format!("Found local extension dependency: {ext_name}"), + OutputLevel::Normal, + ); + } } + continue; // Skip adding to packages list } - continue; // Skip adding to packages list - } - } - // Handle regular package dependencies - match version_spec { - serde_yaml::Value::String(version) => { - if version == "*" { - packages.push(package_name.to_string()); - } else { - packages.push(format!("{package_name}-{version}")); - } - } - serde_yaml::Value::Mapping(spec_map) => { + // Check for explicit version in object format + // Format: { version: "1.0.0" } if let Some(serde_yaml::Value::String(version)) = spec_map.get("version") { if version == "*" { packages.push(package_name.to_string()); @@ -328,6 +374,8 @@ impl ExtInstallCommand { packages.push(format!("{package_name}-{version}")); } } + // If it's a mapping without compile, ext, or version keys, skip it + // (unknown format) } _ => {} } diff --git a/src/commands/install.rs b/src/commands/install.rs index fb89e85..98ce241 100644 --- a/src/commands/install.rs +++ b/src/commands/install.rs @@ -6,7 +6,7 @@ use crate::commands::{ ext::ExtInstallCommand, runtime::RuntimeInstallCommand, sdk::SdkInstallCommand, }; use crate::utils::{ - config::Config, + config::{ComposedConfig, Config}, container::SdkContainer, output::{print_info, print_success, OutputLevel}, target::validate_and_log_target, @@ -65,16 +65,17 @@ impl InstallCommand { /// Execute the install command pub async fn execute(&self) -> Result<()> { - // Load the configuration to check what components exist - let config = Config::load(&self.config_path) + // Early target validation - load basic config first to validate target + let basic_config = Config::load(&self.config_path) .with_context(|| 
format!("Failed to load config from {}", self.config_path))?; + let _target = validate_and_log_target(self.target.as_deref(), &basic_config)?; - // Early target validation and logging - fail fast if target is unsupported - let _target = validate_and_log_target(self.target.as_deref(), &config)?; + // Load the composed configuration (merges external configs, applies interpolation) + let composed = Config::load_composed(&self.config_path, self.target.as_deref()) + .with_context(|| format!("Failed to load composed config from {}", self.config_path))?; - // Parse the configuration file for runtime/extension analysis - let content = std::fs::read_to_string(&self.config_path)?; - let parsed: serde_yaml::Value = serde_yaml::from_str(&content)?; + let config = &composed.config; + let parsed = &composed.merged_value; print_info( "Starting comprehensive install process...", @@ -103,8 +104,7 @@ impl InstallCommand { ); // Determine which extensions to install based on runtime dependencies and target - let extensions_to_install = - self.find_required_extensions(&config, &self.config_path, &_target)?; + let extensions_to_install = self.find_required_extensions(&composed, &_target)?; if !extensions_to_install.is_empty() { for extension_dep in &extensions_to_install { @@ -144,7 +144,7 @@ impl InstallCommand { } // Install external extension to ${AVOCADO_PREFIX}/extensions/ - self.install_external_extension(&config, &self.config_path, name, ext_config_path, &_target).await.with_context(|| { + self.install_external_extension(config, &self.config_path, name, ext_config_path, &_target).await.with_context(|| { format!("Failed to install external extension '{name}' from config '{ext_config_path}'") })?; } @@ -159,7 +159,7 @@ impl InstallCommand { } // Install versioned extension to its own sysroot - self.install_versioned_extension(&config, name, version, &_target).await.with_context(|| { + self.install_versioned_extension(config, name, version, &_target).await.with_context(|| { format!("Failed to install versioned extension '{name}' version '{version}'") })?; } @@ -170,7 +170,7 @@ impl InstallCommand { } // 3. 
Install runtime dependencies (filtered by target) - let target_runtimes = self.find_target_relevant_runtimes(&config, &parsed, &_target)?; + let target_runtimes = self.find_target_relevant_runtimes(config, parsed, &_target)?; if target_runtimes.is_empty() { print_info( @@ -226,8 +226,7 @@ impl InstallCommand { /// Find all extensions required by the runtime/target, or all extensions if no runtime/target specified fn find_required_extensions( &self, - config: &Config, - config_path: &str, + composed: &ComposedConfig, target: &str, ) -> Result> { use std::collections::HashSet; @@ -235,12 +234,12 @@ impl InstallCommand { let mut required_extensions = HashSet::new(); let mut visited = HashSet::new(); // For cycle detection - // Read and parse the configuration file - let content = std::fs::read_to_string(config_path)?; - let parsed: serde_yaml::Value = serde_yaml::from_str(&content)?; + let config = &composed.config; + let parsed = &composed.merged_value; + let config_path = &composed.config_path; // First, find which runtimes are relevant for this target - let target_runtimes = self.find_target_relevant_runtimes(config, &parsed, target)?; + let target_runtimes = self.find_target_relevant_runtimes(config, parsed, target)?; if target_runtimes.is_empty() { if self.verbose { @@ -615,12 +614,21 @@ impl InstallCommand { ) })?; - // Process the extension's dependencies (packages, not extension dependencies) + // First, install SDK dependencies from the external extension's config + self.install_external_extension_sdk_deps( + config, + base_config_path, + external_config_path, + target, + ) + .await?; + + // Process the extension's dependencies (packages, not extension or compile dependencies) if let Some(serde_yaml::Value::Mapping(deps_map)) = extension_config.get("dependencies") { if !deps_map.is_empty() { let mut packages = Vec::new(); - // Process package dependencies (not extension dependencies) + // Process package dependencies (not extension or compile dependencies) for (package_name_val, version_spec) in deps_map { // Convert package name from Value to String let package_name = match package_name_val.as_str() { @@ -628,37 +636,46 @@ impl InstallCommand { None => continue, // Skip if package name is not a string }; - // Skip extension dependencies (they have "ext" field) - these are handled separately + // Skip non-package dependencies (extension or compile dependencies) if let serde_yaml::Value::Mapping(spec_map) = version_spec { - if spec_map.contains_key(serde_yaml::Value::String("ext".to_string())) { - continue; // Skip extension dependencies - they're handled by the recursive logic + // Skip extension dependencies (they have "ext" field) - handled by recursive logic + if spec_map.get("ext").is_some() { + continue; + } + // Skip compile dependencies (they have "compile" field) - SDK-compiled, not from repo + if spec_map.get("compile").is_some() { + if self.verbose { + print_info( + &format!("Skipping compile dependency '{package_name}' (SDK-compiled, not from repo)"), + OutputLevel::Normal, + ); + } + continue; } } - // Process package dependencies only - let package_name_and_version = if version_spec.as_str().is_some() { - let version = version_spec.as_str().unwrap(); - if version == "*" { - package_name.to_string() - } else { - format!("{package_name}-{version}") - } - } else if let serde_yaml::Value::Mapping(spec_map) = version_spec { - if let Some(version) = spec_map.get("version") { - let version = version.as_str().unwrap_or("*"); + // Process package dependencies only (simple string 
versions or version objects) + match version_spec { + serde_yaml::Value::String(version) => { if version == "*" { - package_name.to_string() + packages.push(package_name.to_string()); } else { - format!("{package_name}-{version}") + packages.push(format!("{package_name}-{version}")); } - } else { - package_name.to_string() } - } else { - package_name.to_string() - }; - - packages.push(package_name_and_version); + serde_yaml::Value::Mapping(spec_map) => { + // Only process if it has a "version" key (already checked it doesn't have ext/compile) + if let Some(version) = spec_map.get("version").and_then(|v| v.as_str()) + { + if version == "*" { + packages.push(package_name.to_string()); + } else { + packages.push(format!("{package_name}-{version}")); + } + } + } + _ => {} + } } if !packages.is_empty() { @@ -891,6 +908,189 @@ $DNF_SDK_HOST \ Ok(()) } + + /// Install SDK dependencies from an external extension's config + async fn install_external_extension_sdk_deps( + &self, + config: &Config, + base_config_path: &str, + external_config_path: &str, + target: &str, + ) -> Result<()> { + // Resolve the external config path + let resolved_external_config_path = + config.resolve_path_relative_to_src_dir(base_config_path, external_config_path); + + // Load the external config + let external_config_content = std::fs::read_to_string(&resolved_external_config_path) + .with_context(|| { + format!( + "Failed to read external config file: {}", + resolved_external_config_path.display() + ) + })?; + let mut external_config: serde_yaml::Value = serde_yaml::from_str(&external_config_content) + .with_context(|| { + format!( + "Failed to parse external config file: {}", + resolved_external_config_path.display() + ) + })?; + + // Apply interpolation to the external config + // This resolves templates like {{ config.distro.version }} + crate::utils::interpolation::interpolate_config(&mut external_config, Some(target)) + .with_context(|| { + format!( + "Failed to interpolate external config file: {}", + resolved_external_config_path.display() + ) + })?; + + // Check if the external config has SDK dependencies + let sdk_deps = external_config + .get("sdk") + .and_then(|sdk| sdk.get("dependencies")) + .and_then(|deps| deps.as_mapping()); + + let Some(sdk_deps_map) = sdk_deps else { + if self.verbose { + print_info( + &format!( + "No SDK dependencies found in external config '{external_config_path}'" + ), + OutputLevel::Normal, + ); + } + return Ok(()); + }; + + // Build list of SDK packages to install + let mut sdk_packages = Vec::new(); + for (pkg_name_val, version_spec) in sdk_deps_map { + let pkg_name = match pkg_name_val.as_str() { + Some(name) => name, + None => continue, + }; + + match version_spec { + serde_yaml::Value::String(version) => { + if version == "*" { + sdk_packages.push(pkg_name.to_string()); + } else { + sdk_packages.push(format!("{pkg_name}-{version}")); + } + } + serde_yaml::Value::Mapping(spec_map) => { + if let Some(version) = spec_map.get("version").and_then(|v| v.as_str()) { + if version == "*" { + sdk_packages.push(pkg_name.to_string()); + } else { + sdk_packages.push(format!("{pkg_name}-{version}")); + } + } else { + sdk_packages.push(pkg_name.to_string()); + } + } + _ => { + sdk_packages.push(pkg_name.to_string()); + } + } + } + + if sdk_packages.is_empty() { + return Ok(()); + } + + if self.verbose { + print_info( + &format!( + "Installing {} SDK dependencies from external config '{external_config_path}': {}", + sdk_packages.len(), + sdk_packages.join(", ") + ), + OutputLevel::Normal, + 
); + } + + // Get container configuration + let container_image = config.get_sdk_image().ok_or_else(|| { + anyhow::anyhow!("No container image specified in config under 'sdk.image'") + })?; + let merged_container_args = config.merge_sdk_container_args(self.container_args.as_ref()); + let repo_url = config.get_sdk_repo_url(); + let repo_release = config.get_sdk_repo_release(); + + let container_helper = + SdkContainer::from_config(&self.config_path, config)?.verbose(self.verbose); + + // Build DNF install command for SDK dependencies + // Use the same pattern as sdk/install.rs + let yes = if self.force { "-y" } else { "" }; + let dnf_args_str = if let Some(args) = &self.dnf_args { + format!(" {} ", args.join(" ")) + } else { + String::new() + }; + + let install_command = format!( + r#" +RPM_ETCCONFIGDIR=$AVOCADO_SDK_PREFIX \ +RPM_CONFIGDIR=$AVOCADO_SDK_PREFIX/usr/lib/rpm \ +$DNF_SDK_HOST \ + $DNF_SDK_HOST_OPTS \ + $DNF_SDK_REPO_CONF \ + --disablerepo=${{AVOCADO_TARGET}}-target-ext \ + {} \ + install \ + {} \ + {} +"#, + dnf_args_str, + yes, + sdk_packages.join(" ") + ); + + if self.verbose { + print_info( + &format!("Running SDK install command: {install_command}"), + OutputLevel::Normal, + ); + } + + let run_config = crate::utils::container::RunConfig { + container_image: container_image.clone(), + target: target.to_string(), + command: install_command, + verbose: self.verbose, + source_environment: true, + interactive: !self.force, + repo_url, + repo_release, + container_args: merged_container_args, + dnf_args: self.dnf_args.clone(), + disable_weak_dependencies: config.get_sdk_disable_weak_dependencies(), + ..Default::default() + }; + + let success = container_helper.run_in_container(run_config).await?; + + if !success { + return Err(anyhow::anyhow!( + "Failed to install SDK dependencies from external config '{external_config_path}'" + )); + } + + print_info( + &format!( + "Installed {} SDK dependencies from external config '{external_config_path}'.", + sdk_packages.len() + ), + OutputLevel::Normal, + ); + + Ok(()) + } } #[cfg(test)] diff --git a/src/commands/provision.rs b/src/commands/provision.rs index 7b7b31e..de49526 100644 --- a/src/commands/provision.rs +++ b/src/commands/provision.rs @@ -51,7 +51,14 @@ impl ProvisionCommand { self.config.container_args.as_ref(), ); - let runtime_provision_cmd = RuntimeProvisionCommand::new( + // Get state file path from provision profile if available + let state_file = self + .config + .provision_profile + .as_ref() + .map(|profile| config.get_provision_state_file(profile)); + + let mut runtime_provision_cmd = RuntimeProvisionCommand::new( crate::commands::runtime::provision::RuntimeProvisionConfig { runtime_name: self.config.runtime.clone(), config_path: self.config.config_path.clone(), @@ -63,6 +70,7 @@ impl ProvisionCommand { out: self.config.out.clone(), container_args: merged_container_args, dnf_args: self.config.dnf_args.clone(), + state_file, }, ); diff --git a/src/commands/runtime/build.rs b/src/commands/runtime/build.rs index c63f956..b36256a 100644 --- a/src/commands/runtime/build.rs +++ b/src/commands/runtime/build.rs @@ -93,6 +93,12 @@ impl RuntimeBuildCommand { // Get stone include paths if configured let mut env_vars = std::collections::HashMap::new(); + + // Set AVOCADO_VERBOSE=1 when verbose mode is enabled + if self.verbose { + env_vars.insert("AVOCADO_VERBOSE".to_string(), "1".to_string()); + } + if let Some(stone_paths) = config.get_stone_include_paths_for_runtime( &self.runtime_name, &target_arch, @@ -110,6 +116,20 @@ impl 
RuntimeBuildCommand { env_vars.insert("AVOCADO_STONE_MANIFEST".to_string(), stone_manifest); } + // Set AVOCADO_RUNTIME_BUILD_DIR + env_vars.insert( + "AVOCADO_RUNTIME_BUILD_DIR".to_string(), + format!( + "/opt/_avocado/{}/runtimes/{}", + target_arch, self.runtime_name + ), + ); + + // Set AVOCADO_DISTRO_VERSION if configured + if let Some(distro_version) = config.get_distro_version() { + env_vars.insert("AVOCADO_DISTRO_VERSION".to_string(), distro_version.clone()); + } + let env_vars = if env_vars.is_empty() { None } else { @@ -228,9 +248,7 @@ RUNTIME_EXT=$RUNTIME_EXT_DIR/{ext_name}-{ext_version}.raw RUNTIMES_EXT=$VAR_DIR/lib/avocado/extensions/{ext_name}-{ext_version}.raw if [ -f "$RUNTIME_EXT" ]; then - if ! cmp -s "$RUNTIME_EXT" "$RUNTIMES_EXT" 2>/dev/null; then - ln -f $RUNTIME_EXT $RUNTIMES_EXT - fi + ln -f $RUNTIME_EXT $RUNTIMES_EXT else echo "Missing image for extension {ext_name}-{ext_version}." fi"# @@ -265,9 +283,7 @@ RUNTIME_EXT=$(ls $RUNTIME_EXT_DIR/{ext_name}-*.raw 2>/dev/null | head -n 1) if [ -n "$RUNTIME_EXT" ]; then EXT_FILENAME=$(basename "$RUNTIME_EXT") RUNTIMES_EXT=$VAR_DIR/lib/avocado/extensions/$EXT_FILENAME - if ! cmp -s "$RUNTIME_EXT" "$RUNTIMES_EXT" 2>/dev/null; then - ln -f "$RUNTIME_EXT" "$RUNTIMES_EXT" - fi + ln -f "$RUNTIME_EXT" "$RUNTIMES_EXT" else echo "Missing image for external extension {ext_name}." fi"# @@ -318,6 +334,11 @@ mkdir -p $OUTPUT_DIR RUNTIME_EXT_DIR="$AVOCADO_PREFIX/runtimes/$RUNTIME_NAME/extensions" mkdir -p "$RUNTIME_EXT_DIR" +# Clean up stale extensions to ensure fresh copies +echo "Cleaning up stale extensions..." +rm -f "$RUNTIME_EXT_DIR"/*.raw 2>/dev/null || true +rm -f "$VAR_DIR/lib/avocado/extensions"/*.raw 2>/dev/null || true + # Copy required extension images from global output/extensions to runtime-specific location echo "Copying required extension images to runtime-specific directory..." {} diff --git a/src/commands/runtime/provision.rs b/src/commands/runtime/provision.rs index 2c515d6..152674b 100644 --- a/src/commands/runtime/provision.rs +++ b/src/commands/runtime/provision.rs @@ -2,10 +2,13 @@ use crate::utils::{ config::load_config, container::{RunConfig, SdkContainer}, output::{print_info, print_success, OutputLevel}, + signing_service::{generate_helper_script, SigningService, SigningServiceConfig}, target::resolve_target_required, + volume::VolumeManager, }; use anyhow::{Context, Result}; use std::collections::HashMap; +use std::path::PathBuf; pub struct RuntimeProvisionConfig { pub runtime_name: String, @@ -18,18 +21,25 @@ pub struct RuntimeProvisionConfig { pub out: Option, pub container_args: Option>, pub dnf_args: Option>, + /// Path to state file relative to src_dir for persisting state between provision runs. + /// Resolved from provision profile config or defaults to `provision-{profile}.state`. 
+ pub state_file: Option, } pub struct RuntimeProvisionCommand { config: RuntimeProvisionConfig, + signing_service: Option, } impl RuntimeProvisionCommand { pub fn new(config: RuntimeProvisionConfig) -> Self { - Self { config } + Self { + config, + signing_service: None, + } } - pub async fn execute(&self) -> Result<()> { + pub async fn execute(&mut self) -> Result<()> { // Load configuration let config = load_config(&self.config.config_path)?; let content = std::fs::read_to_string(&self.config.config_path)?; @@ -92,6 +102,29 @@ impl RuntimeProvisionCommand { ); } + // Set AVOCADO_VERBOSE=1 when verbose mode is enabled + if self.config.verbose { + env_vars.insert("AVOCADO_VERBOSE".to_string(), "1".to_string()); + } + + // Set standard avocado environment variables for provision scripts + // AVOCADO_TARGET - Used for all bundle.manifest.[].target values + env_vars.insert("AVOCADO_TARGET".to_string(), target_arch.clone()); + + // AVOCADO_RUNTIME_NAME - Runtime name (e.g., "dev") + env_vars.insert( + "AVOCADO_RUNTIME_NAME".to_string(), + self.config.runtime_name.clone(), + ); + + // AVOCADO_RUNTIME_VERSION - Runtime version from distro.version (e.g., "0.1.0") + if let Some(distro_version) = config.get_distro_version() { + env_vars.insert( + "AVOCADO_RUNTIME_VERSION".to_string(), + distro_version.clone(), + ); + } + // Set AVOCADO_PROVISION_OUT if --out is specified if let Some(out_path) = &self.config.out { // Construct the absolute path from the container's perspective @@ -118,12 +151,64 @@ impl RuntimeProvisionCommand { env_vars.insert("AVOCADO_STONE_MANIFEST".to_string(), stone_manifest); } + // Set AVOCADO_RUNTIME_BUILD_DIR + env_vars.insert( + "AVOCADO_RUNTIME_BUILD_DIR".to_string(), + format!( + "/opt/_avocado/{}/runtimes/{}", + target_arch, self.config.runtime_name + ), + ); + + // Set AVOCADO_DISTRO_VERSION if configured + if let Some(distro_version) = config.get_distro_version() { + env_vars.insert("AVOCADO_DISTRO_VERSION".to_string(), distro_version.clone()); + } + + // Determine state file path and container location if a provision profile is set + let state_file_info = if let Some(profile) = &self.config.provision_profile { + let state_file_path = self + .config + .state_file + .clone() + .unwrap_or_else(|| config.get_provision_state_file(profile)); + let container_state_path = format!( + "/opt/_avocado/{}/output/runtimes/{}/provision-state.state", + target_arch, self.config.runtime_name + ); + env_vars.insert( + "AVOCADO_PROVISION_STATE".to_string(), + container_state_path.clone(), + ); + Some((state_file_path, container_state_path)) + } else { + None + }; + let env_vars = if env_vars.is_empty() { None } else { Some(env_vars) }; + // Copy state file to container volume if it exists + let src_dir = std::env::current_dir()?; + let state_file_existed = + if let Some((ref state_file_path, ref container_state_path)) = state_file_info { + self.copy_state_to_container( + &src_dir, + state_file_path, + container_state_path, + &target_arch, + ) + .await? 
+ } else { + false + }; + + // Check if runtime has signing configured + let signing_config = self.setup_signing_service(&config).await?; + // Initialize SDK container helper let container_helper = SdkContainer::new(); @@ -134,7 +219,7 @@ impl RuntimeProvisionCommand { print_info("Executing provision script.", OutputLevel::Normal); } - let run_config = RunConfig { + let mut run_config = RunConfig { container_image: container_image.to_string(), target: target_arch.clone(), command: provision_script, @@ -149,15 +234,46 @@ impl RuntimeProvisionCommand { dnf_args: self.config.dnf_args.clone(), ..Default::default() }; + + // Add signing configuration to run_config if available + if let Some((socket_path, helper_script_path, key_name, checksum_algo)) = &signing_config { + run_config.signing_socket_path = Some(socket_path.clone()); + run_config.signing_helper_script_path = Some(helper_script_path.clone()); + run_config.signing_key_name = Some(key_name.clone()); + run_config.signing_checksum_algorithm = Some(checksum_algo.clone()); + } + let provision_result = container_helper .run_in_container(run_config) .await .context("Failed to provision runtime")?; + // Shutdown signing service if it was started + if signing_config.is_some() { + self.cleanup_signing_service().await?; + } + if !provision_result { return Err(anyhow::anyhow!("Failed to provision runtime")); } + // Fix file ownership if --out was specified + if let Some(out_path) = &self.config.out { + self.fix_output_permissions(out_path).await?; + } + + // Copy state file back from container if it exists + if let Some((ref state_file_path, ref container_state_path)) = state_file_info { + self.copy_state_from_container( + &src_dir, + state_file_path, + container_state_path, + &target_arch, + state_file_existed, + ) + .await?; + } + print_success( &format!( "Successfully provisioned runtime '{}'", @@ -168,6 +284,228 @@ impl RuntimeProvisionCommand { Ok(()) } + /// Setup signing service if signing is configured for the runtime + /// + /// Returns Some((socket_path, helper_script_path, key_name, checksum_algorithm)) if signing is enabled + async fn setup_signing_service( + &mut self, + config: &crate::utils::config::Config, + ) -> Result> { + // Check if runtime has signing configuration + let signing_key_name = match config.get_runtime_signing_key(&self.config.runtime_name) { + Some(keyid) => { + // Get the key name from signing_keys mapping + let signing_keys = config.get_signing_keys(); + signing_keys + .and_then(|keys| { + keys.iter() + .find(|(_, v)| *v == &keyid) + .map(|(k, _)| k.clone()) + }) + .context("Signing key ID not found in signing_keys mapping")? + } + None => { + // No signing configured for this runtime + if self.config.verbose { + print_info( + "No signing key configured for runtime. 
Signing service will not be started.", + OutputLevel::Verbose, + ); + } + return Ok(None); + } + }; + + let keyid = config + .get_runtime_signing_key(&self.config.runtime_name) + .context("Failed to get signing key ID")?; + + // Get checksum algorithm (defaults to sha256) + let checksum_str = config + .runtime + .as_ref() + .and_then(|r| r.get(&self.config.runtime_name)) + .and_then(|rc| rc.signing.as_ref()) + .map(|s| s.checksum_algorithm.as_str()) + .unwrap_or("sha256"); + + // Create temporary directory for socket and helper script + let temp_dir = tempfile::tempdir().context("Failed to create temp directory")?; + let socket_path = temp_dir.path().join("sign.sock"); + let helper_script_path = temp_dir.path().join("avocado-sign-request"); + + // Write helper script + let helper_script = generate_helper_script(); + std::fs::write(&helper_script_path, helper_script) + .context("Failed to write helper script")?; + + // Make helper script executable + #[cfg(unix)] + { + use std::os::unix::fs::PermissionsExt; + let perms = std::fs::Permissions::from_mode(0o755); + std::fs::set_permissions(&helper_script_path, perms) + .context("Failed to set helper script permissions")?; + } + + if self.config.verbose { + print_info( + &format!( + "Starting signing service with key '{}' using {} checksums", + signing_key_name, checksum_str + ), + OutputLevel::Verbose, + ); + } + + // Start signing service + // Note: Hash computation happens in the container, so we don't need volume access + let service_config = SigningServiceConfig { + socket_path: socket_path.clone(), + runtime_name: self.config.runtime_name.clone(), + key_name: signing_key_name.clone(), + keyid, + verbose: self.config.verbose, + }; + + let service = SigningService::start(service_config, temp_dir).await?; + + // Store the service handle for cleanup + self.signing_service = Some(service); + + Ok(Some(( + socket_path, + helper_script_path, + signing_key_name, + checksum_str.to_string(), + ))) + } + + /// Cleanup signing service resources + async fn cleanup_signing_service(&mut self) -> Result<()> { + if let Some(service) = self.signing_service.take() { + service.shutdown().await?; + } + Ok(()) + } + + /// Fix file ownership of output directory to match calling user + async fn fix_output_permissions(&self, out_path: &str) -> Result<()> { + // Get the absolute path to the output directory + let src_dir = std::env::current_dir()?; + let out_dir = src_dir.join(out_path); + + // Only proceed if the directory exists + if !out_dir.exists() { + if self.config.verbose { + print_info( + &format!("Output directory does not exist yet: {}", out_dir.display()), + OutputLevel::Verbose, + ); + } + return Ok(()); + } + + // Get current user's UID and GID + #[cfg(unix)] + { + // Get the UID and GID of the calling user + let uid = unsafe { libc::getuid() }; + let gid = unsafe { libc::getgid() }; + + if self.config.verbose { + print_info( + &format!( + "Fixing ownership of {} to {}:{}", + out_dir.display(), + uid, + gid + ), + OutputLevel::Verbose, + ); + } + + // Load configuration to get container image + let config = load_config(&self.config.config_path)?; + let container_image = config + .get_sdk_image() + .context("No SDK container image specified in configuration")?; + + // Build the chown command to run inside the container + let container_out_path = format!("/opt/src/{}", out_path); + let chown_script = format!("chown -R {}:{} '{}'", uid, gid, container_out_path); + + // Run chown inside a container with the same volume mounts + let container_tool = 
"docker"; + let volume_manager = + VolumeManager::new(container_tool.to_string(), self.config.verbose); + let volume_state = volume_manager.get_or_create_volume(&src_dir).await?; + + let mut chown_cmd = vec![ + container_tool.to_string(), + "run".to_string(), + "--rm".to_string(), + ]; + + // Mount the source directory + chown_cmd.push("-v".to_string()); + chown_cmd.push(format!("{}:/opt/src:rw", src_dir.display())); + + // Mount the volume + chown_cmd.push("-v".to_string()); + chown_cmd.push(format!("{}:/opt/_avocado:rw", volume_state.volume_name)); + + // Add the container image + chown_cmd.push(container_image.to_string()); + + // Add the command + chown_cmd.push("bash".to_string()); + chown_cmd.push("-c".to_string()); + chown_cmd.push(chown_script); + + if self.config.verbose { + print_info( + &format!("Running: {}", chown_cmd.join(" ")), + OutputLevel::Verbose, + ); + } + + let mut cmd = tokio::process::Command::new(&chown_cmd[0]); + cmd.args(&chown_cmd[1..]); + cmd.stdout(std::process::Stdio::null()) + .stderr(std::process::Stdio::null()); + + let status = cmd + .status() + .await + .context("Failed to execute chown command")?; + + if !status.success() { + print_info( + "Warning: Failed to fix ownership of output directory. Files may be owned by root.", + OutputLevel::Normal, + ); + } else if self.config.verbose { + print_info( + "Successfully fixed output directory ownership", + OutputLevel::Verbose, + ); + } + } + + #[cfg(not(unix))] + { + if self.config.verbose { + print_info( + "Skipping ownership fix on non-Unix platform", + OutputLevel::Verbose, + ); + } + } + + Ok(()) + } + fn create_provision_script(&self, target_arch: &str) -> Result { let script = format!( r#" @@ -180,6 +518,276 @@ avocado-provision-{} {} Ok(script) } + /// Copy state file from src_dir to container volume before provisioning. + /// Returns true if the state file existed and was copied, false otherwise. 
+ async fn copy_state_to_container( + &self, + src_dir: &std::path::Path, + state_file_path: &str, + container_state_path: &str, + _target_arch: &str, + ) -> Result { + let host_state_file = src_dir.join(state_file_path); + + // Check if the state file exists on the host + if !host_state_file.exists() { + if self.config.verbose { + print_info( + &format!( + "No existing state file at {}, starting fresh", + host_state_file.display() + ), + OutputLevel::Verbose, + ); + } + return Ok(false); + } + + if self.config.verbose { + print_info( + &format!( + "Copying state file from {} to container at {}", + host_state_file.display(), + container_state_path + ), + OutputLevel::Verbose, + ); + } + + // Load configuration to get container image + let config = load_config(&self.config.config_path)?; + let container_image = config + .get_sdk_image() + .context("No SDK container image specified in configuration")?; + + let container_tool = "docker"; + let volume_manager = VolumeManager::new(container_tool.to_string(), self.config.verbose); + let volume_state = volume_manager.get_or_create_volume(src_dir).await?; + + // Ensure parent directory exists and copy file to container + let copy_script = format!( + "mkdir -p \"$(dirname '{}')\" && cp '/opt/src/{}' '{}'", + container_state_path, state_file_path, container_state_path + ); + + let mut copy_cmd = vec![ + container_tool.to_string(), + "run".to_string(), + "--rm".to_string(), + ]; + + // Mount the source directory + copy_cmd.push("-v".to_string()); + copy_cmd.push(format!("{}:/opt/src:ro", src_dir.display())); + + // Mount the volume + copy_cmd.push("-v".to_string()); + copy_cmd.push(format!("{}:/opt/_avocado:rw", volume_state.volume_name)); + + // Add the container image + copy_cmd.push(container_image.to_string()); + + // Add the command + copy_cmd.push("bash".to_string()); + copy_cmd.push("-c".to_string()); + copy_cmd.push(copy_script); + + if self.config.verbose { + print_info( + &format!("Running: {}", copy_cmd.join(" ")), + OutputLevel::Verbose, + ); + } + + let mut cmd = tokio::process::Command::new(©_cmd[0]); + cmd.args(©_cmd[1..]); + cmd.stdout(std::process::Stdio::null()) + .stderr(std::process::Stdio::null()); + + let status = cmd + .status() + .await + .context("Failed to copy state file to container")?; + + if !status.success() { + print_info( + "Warning: Failed to copy state file to container", + OutputLevel::Normal, + ); + return Ok(false); + } + + if self.config.verbose { + print_info( + "Successfully copied state file to container", + OutputLevel::Verbose, + ); + } + + Ok(true) + } + + /// Copy state file from container volume back to src_dir after provisioning. + /// Only copies if the file exists in the container. If the file is empty and + /// the original didn't exist, no file is copied. 
+ async fn copy_state_from_container( + &self, + src_dir: &std::path::Path, + state_file_path: &str, + container_state_path: &str, + _target_arch: &str, + _original_existed: bool, + ) -> Result<()> { + if self.config.verbose { + print_info( + &format!( + "Checking for state file at {} in container", + container_state_path + ), + OutputLevel::Verbose, + ); + } + + // Load configuration to get container image + let config = load_config(&self.config.config_path)?; + let container_image = config + .get_sdk_image() + .context("No SDK container image specified in configuration")?; + + let container_tool = "docker"; + let volume_manager = VolumeManager::new(container_tool.to_string(), self.config.verbose); + let volume_state = volume_manager.get_or_create_volume(src_dir).await?; + + // Check if the state file exists in the container + let check_script = format!("test -f '{}'", container_state_path); + + let mut check_cmd = vec![ + container_tool.to_string(), + "run".to_string(), + "--rm".to_string(), + ]; + + check_cmd.push("-v".to_string()); + check_cmd.push(format!("{}:/opt/_avocado:ro", volume_state.volume_name)); + + check_cmd.push(container_image.to_string()); + check_cmd.push("bash".to_string()); + check_cmd.push("-c".to_string()); + check_cmd.push(check_script); + + let mut cmd = tokio::process::Command::new(&check_cmd[0]); + cmd.args(&check_cmd[1..]); + cmd.stdout(std::process::Stdio::null()) + .stderr(std::process::Stdio::null()); + + let status = cmd + .status() + .await + .context("Failed to check state file existence")?; + + if !status.success() { + // State file doesn't exist in container + if self.config.verbose { + print_info( + "No state file found in container, nothing to copy back", + OutputLevel::Verbose, + ); + } + return Ok(()); + } + + // State file exists - copy it back to host + let host_state_file = src_dir.join(state_file_path); + + if self.config.verbose { + print_info( + &format!( + "Copying state file from container to {}", + host_state_file.display() + ), + OutputLevel::Verbose, + ); + } + + // Get current user's UID and GID for proper ownership + #[cfg(unix)] + let (uid, gid) = { + let uid = unsafe { libc::getuid() }; + let gid = unsafe { libc::getgid() }; + (uid, gid) + }; + + #[cfg(not(unix))] + let (uid, gid) = (0u32, 0u32); + + // Ensure parent directory exists on host and copy file with correct ownership + if let Some(parent) = host_state_file.parent() { + std::fs::create_dir_all(parent)?; + } + + // Copy from container to host src_dir with proper ownership + let copy_script = format!( + "cp '{}' '/opt/src/{}' && chown {}:{} '/opt/src/{}'", + container_state_path, state_file_path, uid, gid, state_file_path + ); + + let mut copy_cmd = vec![ + container_tool.to_string(), + "run".to_string(), + "--rm".to_string(), + ]; + + // Mount the source directory (read-write to copy file back) + copy_cmd.push("-v".to_string()); + copy_cmd.push(format!("{}:/opt/src:rw", src_dir.display())); + + // Mount the volume + copy_cmd.push("-v".to_string()); + copy_cmd.push(format!("{}:/opt/_avocado:ro", volume_state.volume_name)); + + // Add the container image + copy_cmd.push(container_image.to_string()); + + // Add the command + copy_cmd.push("bash".to_string()); + copy_cmd.push("-c".to_string()); + copy_cmd.push(copy_script); + + if self.config.verbose { + print_info( + &format!("Running: {}", copy_cmd.join(" ")), + OutputLevel::Verbose, + ); + } + + let mut cmd = tokio::process::Command::new(©_cmd[0]); + cmd.args(©_cmd[1..]); + cmd.stdout(std::process::Stdio::null()) + 
.stderr(std::process::Stdio::null()); + + let status = cmd + .status() + .await + .context("Failed to copy state file from container")?; + + if !status.success() { + print_info( + "Warning: Failed to copy state file from container", + OutputLevel::Normal, + ); + } else if self.config.verbose { + print_info( + &format!( + "Successfully copied state file to {}", + host_state_file.display() + ), + OutputLevel::Verbose, + ); + } + + Ok(()) + } + async fn collect_runtime_extensions( &self, parsed: &serde_yaml::Value, @@ -365,6 +973,7 @@ mod tests { out: None, container_args: None, dnf_args: None, + state_file: None, }; let cmd = RuntimeProvisionCommand::new(config); @@ -390,6 +999,7 @@ mod tests { out: None, container_args: None, dnf_args: None, + state_file: None, }; let cmd = RuntimeProvisionCommand::new(config); @@ -437,6 +1047,7 @@ runtime: out: None, container_args: None, dnf_args: None, + state_file: None, }; let command = RuntimeProvisionCommand::new(provision_config); @@ -478,6 +1089,7 @@ runtime: out: None, container_args: container_args.clone(), dnf_args: dnf_args.clone(), + state_file: None, }; let cmd = RuntimeProvisionCommand::new(config); @@ -512,6 +1124,7 @@ runtime: out: None, container_args: None, dnf_args: None, + state_file: None, }; let cmd = RuntimeProvisionCommand::new(config); diff --git a/src/commands/sdk/compile.rs b/src/commands/sdk/compile.rs index cce828e..d24e81e 100644 --- a/src/commands/sdk/compile.rs +++ b/src/commands/sdk/compile.rs @@ -55,9 +55,40 @@ impl SdkCompileCommand { /// Execute the sdk compile command pub async fn execute(&self) -> Result<()> { // Load the configuration + if self.verbose { + print_info( + &format!("Loading SDK compile config from: {}", self.config_path), + OutputLevel::Normal, + ); + } let config = Config::load(&self.config_path) .with_context(|| format!("Failed to load config from {}", self.config_path))?; + // Debug: Check if sdk.compile was parsed + if self.verbose { + if let Some(sdk) = &config.sdk { + if let Some(compile) = &sdk.compile { + print_info( + &format!("Found {} SDK compile section(s) in config", compile.len()), + OutputLevel::Normal, + ); + for (name, cfg) in compile { + print_info( + &format!(" - Section '{}': compile script = {:?}", name, cfg.compile), + OutputLevel::Normal, + ); + } + } else { + print_info( + "No sdk.compile section found in config", + OutputLevel::Normal, + ); + } + } else { + print_info("No sdk section found in config", OutputLevel::Normal); + } + } + // Merge container args from config with CLI args let merged_container_args = config.merge_sdk_container_args(self.container_args.as_ref()); @@ -65,6 +96,14 @@ impl SdkCompileCommand { let compile_sections = self.get_compile_sections_from_config(&config); if compile_sections.is_empty() { + // If specific sections were requested but none found, this is an error + if !self.sections.is_empty() { + return Err(anyhow::anyhow!( + "Requested compile sections {:?} not found in config '{}'", + self.sections, + self.config_path + )); + } print_success("No compile sections configured.", OutputLevel::Normal); return Ok(()); } diff --git a/src/commands/sdk/install.rs b/src/commands/sdk/install.rs index 209cbb3..2938cdb 100644 --- a/src/commands/sdk/install.rs +++ b/src/commands/sdk/install.rs @@ -48,19 +48,23 @@ impl SdkInstallCommand { /// Execute the sdk install command pub async fn execute(&self) -> Result<()> { - // Load the configuration - let config = Config::load(&self.config_path) + // Early target validation - load basic config first + let basic_config = 
Config::load(&self.config_path) .with_context(|| format!("Failed to load config from {}", self.config_path))?; + let target = validate_and_log_target(self.target.as_deref(), &basic_config)?; - // Early target validation and logging - fail fast if target is unsupported - let target = validate_and_log_target(self.target.as_deref(), &config)?; + // Load the composed configuration (merges external configs, applies interpolation) + let composed = Config::load_composed(&self.config_path, self.target.as_deref()) + .with_context(|| format!("Failed to load composed config from {}", self.config_path))?; + + let config = &composed.config; // Merge container args from config with CLI args let merged_container_args = config.merge_sdk_container_args(self.container_args.as_ref()); - // Read the config file content for extension parsing - let config_content = std::fs::read_to_string(&self.config_path) - .with_context(|| format!("Failed to read config file {}", self.config_path))?; + // Serialize the merged config back to string for extension parsing methods + let config_content = serde_yaml::to_string(&composed.merged_value) + .with_context(|| "Failed to serialize composed config")?; // Get the SDK image from configuration let container_image = config.get_sdk_image().ok_or_else(|| { @@ -69,13 +73,12 @@ impl SdkInstallCommand { print_info("Installing SDK dependencies.", OutputLevel::Normal); - // Get SDK dependencies with target interpolation - // This re-parses the config to interpolate {{ avocado.target }} templates + // Get SDK dependencies from the composed config (already has external deps merged) let sdk_dependencies = config .get_sdk_dependencies_for_target(&self.config_path, &target) .with_context(|| "Failed to get SDK dependencies with target interpolation")?; - // Get extension SDK dependencies (including nested ones with target-specific dependencies) + // Get extension SDK dependencies (from the composed, interpolated config) let extension_sdk_dependencies = config .get_extension_sdk_dependencies_with_config_path_and_target( &config_content, @@ -93,7 +96,7 @@ impl SdkInstallCommand { // Use the container helper to run the installation let container_helper = - SdkContainer::from_config(&self.config_path, &config)?.verbose(self.verbose); + SdkContainer::from_config(&self.config_path, config)?.verbose(self.verbose); // Install SDK dependencies (into SDK) let mut sdk_packages = Vec::new(); diff --git a/src/commands/signing_keys/create.rs b/src/commands/signing_keys/create.rs index 6fd51cf..cc09ce1 100644 --- a/src/commands/signing_keys/create.rs +++ b/src/commands/signing_keys/create.rs @@ -177,7 +177,7 @@ fn generate_keyid_from_uri(uri: &str) -> String { let mut hasher = Sha256::new(); hasher.update(uri.as_bytes()); let hash = hasher.finalize(); - format!("sha256-{}", hex_encode(&hash[..8])) + hex_encode(&hash) } fn hex_encode(bytes: &[u8]) -> String { @@ -192,8 +192,10 @@ mod tests { fn test_generate_keyid_from_uri() { let uri = "pkcs11:token=YubiKey;object=signing-key"; let keyid = generate_keyid_from_uri(uri); - assert!(keyid.starts_with("sha256-")); - assert_eq!(keyid.len(), 7 + 16); // "sha256-" + 16 hex chars + // Key ID is the full SHA-256 hash, base16 encoded (64 hex chars) + assert_eq!(keyid.len(), 64); + // Verify it's valid hex + assert!(keyid.chars().all(|c| c.is_ascii_hexdigit())); } #[test] diff --git a/src/main.rs b/src/main.rs index 452d69b..d6b7f7a 100644 --- a/src/main.rs +++ b/src/main.rs @@ -935,7 +935,7 @@ async fn main() -> Result<()> { container_args, dnf_args, } => { - let 
provision_cmd = RuntimeProvisionCommand::new( + let mut provision_cmd = RuntimeProvisionCommand::new( crate::commands::runtime::provision::RuntimeProvisionConfig { runtime_name: runtime, config_path: config, @@ -947,6 +947,7 @@ async fn main() -> Result<()> { out, container_args, dnf_args, + state_file: None, // Resolved from config during execution }, ); provision_cmd.execute().await?; diff --git a/src/utils/config.rs b/src/utils/config.rs index 3b91aaf..120a8b4 100644 --- a/src/utils/config.rs +++ b/src/utils/config.rs @@ -112,6 +112,25 @@ pub enum ExtensionLocation { External { name: String, config_path: String }, } +/// A composed configuration that merges the main config with external extension configs. +/// +/// This struct provides a unified view where: +/// - `distro`, `default_target`, `supported_targets` come from the main config only +/// - `ext` sections are merged from both main and external configs +/// - `sdk.dependencies` and `sdk.compile` are merged from both main and external configs +/// +/// Interpolation is applied after merging, so external configs can reference +/// `{{ config.distro.version }}` and resolve to the main config's values. +#[derive(Debug, Clone)] +pub struct ComposedConfig { + /// The base Config (deserialized from the merged YAML) + pub config: Config, + /// The merged YAML value (with external configs merged in, after interpolation) + pub merged_value: serde_yaml::Value, + /// The path to the main config file + pub config_path: String, +} + /// Configuration error type #[derive(Debug, thiserror::Error)] pub enum ConfigError { @@ -174,6 +193,10 @@ pub struct CompileConfig { pub struct ProvisionProfileConfig { #[serde(default, deserialize_with = "container_args_deserializer::deserialize")] pub container_args: Option>, + /// Path to state file relative to src_dir for persisting state between provision runs. + /// Defaults to `provision-{profile}.state` when not specified. + /// The state file is copied into the container before provisioning and copied back after. + pub state_file: Option, } /// Distribution configuration @@ -331,6 +354,336 @@ impl Config { Ok(parsed) } + /// Load a composed configuration that merges the main config with external extension configs. + /// + /// This method: + /// 1. Loads the main config (raw, without interpolation) + /// 2. Discovers all external config references in runtime and ext dependencies + /// 3. Loads each external config (raw) + /// 4. Merges external `ext.*`, `sdk.dependencies`, and `sdk.compile` sections + /// 5. Applies interpolation to the composed model + /// + /// The `distro`, `default_target`, and `supported_targets` sections come from the main config only, + /// allowing external configs to reference `{{ config.distro.version }}` and resolve to main config values. 
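+    ///
+    /// Illustrative usage (the config path and target below are examples):
+    ///
+    /// ```ignore
+    /// let composed = Config::load_composed("avocado.yaml", Some("qemux86-64"))?;
+    /// let version = composed.config.get_distro_version();
+    /// ```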
+ pub fn load_composed>( + config_path: P, + target: Option<&str>, + ) -> Result { + let path = config_path.as_ref(); + let config_path_str = path.to_string_lossy().to_string(); + + // Load main config content (raw, no interpolation yet) + let content = fs::read_to_string(path) + .with_context(|| format!("Failed to read config file: {}", path.display()))?; + let mut main_config = Self::parse_config_value(&config_path_str, &content)?; + + // Discover all external config references + let external_refs = Self::discover_external_config_refs(&main_config); + + // Load and merge each external config + for (ext_name, external_config_path) in &external_refs { + // Resolve the external config path relative to the main config's directory + let main_config_dir = path.parent().unwrap_or(Path::new(".")); + let resolved_path = main_config_dir.join(external_config_path); + + if !resolved_path.exists() { + // Skip non-existent external configs with a warning (they may be optional) + continue; + } + + // Load external config (raw) + let external_content = fs::read_to_string(&resolved_path).with_context(|| { + format!( + "Failed to read external config: {}", + resolved_path.display() + ) + })?; + let external_config = Self::parse_config_value( + resolved_path.to_str().unwrap_or(external_config_path), + &external_content, + )?; + + // Merge external config into main config + Self::merge_external_config(&mut main_config, &external_config, ext_name); + } + + // Apply interpolation to the composed model + crate::utils::interpolation::interpolate_config(&mut main_config, target) + .with_context(|| "Failed to interpolate composed configuration")?; + + // Deserialize the merged config into the Config struct + let config: Config = serde_yaml::from_value(main_config.clone()) + .with_context(|| "Failed to deserialize composed configuration")?; + + Ok(ComposedConfig { + config, + merged_value: main_config, + config_path: config_path_str, + }) + } + + /// Discover all external config references in runtime and ext dependencies. + /// + /// Scans these locations: + /// - `runtime..dependencies..config` + /// - `runtime...dependencies..config` + /// - `ext..dependencies..config` + /// + /// Returns a list of (extension_name, config_path) tuples. 
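+    ///
+    /// For example, a dependency declared as
+    /// `peridio: { ext: avocado-ext-peridio, config: avocado-ext-peridio/avocado.yml }`
+    /// yields the tuple `("avocado-ext-peridio", "avocado-ext-peridio/avocado.yml")`.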
+ fn discover_external_config_refs(config: &serde_yaml::Value) -> Vec<(String, String)> { + let mut refs = Vec::new(); + let mut visited = std::collections::HashSet::new(); + + // Scan runtime dependencies + if let Some(runtime_section) = config.get("runtime").and_then(|r| r.as_mapping()) { + for (_runtime_name, runtime_config) in runtime_section { + Self::collect_external_refs_from_dependencies( + runtime_config, + &mut refs, + &mut visited, + ); + + // Also check target-specific sections within runtime + if let Some(runtime_table) = runtime_config.as_mapping() { + for (key, value) in runtime_table { + // Skip known non-target keys + if let Some(key_str) = key.as_str() { + if ![ + "dependencies", + "target", + "stone_include_paths", + "stone_manifest", + "signing", + ] + .contains(&key_str) + { + // This might be a target-specific section + Self::collect_external_refs_from_dependencies( + value, + &mut refs, + &mut visited, + ); + } + } + } + } + } + } + + // Scan ext dependencies + if let Some(ext_section) = config.get("ext").and_then(|e| e.as_mapping()) { + for (_ext_name, ext_config) in ext_section { + Self::collect_external_refs_from_dependencies(ext_config, &mut refs, &mut visited); + + // Also check target-specific sections within ext + if let Some(ext_table) = ext_config.as_mapping() { + for (key, value) in ext_table { + // Skip known non-target keys + if let Some(key_str) = key.as_str() { + if ![ + "version", + "release", + "summary", + "description", + "license", + "url", + "vendor", + "types", + "packages", + "dependencies", + "sdk", + "enable_services", + "on_merge", + "sysusers", + "kernel_modules", + "reload_service_manager", + "ld_so_conf_d", + "confext", + "sysext", + "overlay", + ] + .contains(&key_str) + { + // This might be a target-specific section + Self::collect_external_refs_from_dependencies( + value, + &mut refs, + &mut visited, + ); + } + } + } + } + } + } + + refs + } + + /// Collect external config references from a dependencies section. + fn collect_external_refs_from_dependencies( + section: &serde_yaml::Value, + refs: &mut Vec<(String, String)>, + visited: &mut std::collections::HashSet, + ) { + let dependencies = section.get("dependencies").and_then(|d| d.as_mapping()); + + if let Some(deps_map) = dependencies { + for (_dep_name, dep_spec) in deps_map { + if let Some(spec_map) = dep_spec.as_mapping() { + // Check for external extension reference + if let (Some(ext_name), Some(config_path)) = ( + spec_map.get("ext").and_then(|v| v.as_str()), + spec_map.get("config").and_then(|v| v.as_str()), + ) { + let key = format!("{}:{}", ext_name, config_path); + if !visited.contains(&key) { + visited.insert(key); + refs.push((ext_name.to_string(), config_path.to_string())); + } + } + } + } + } + } + + /// Merge an external config into the main config. 
+ /// + /// Merges: + /// - `ext.*` sections (external extensions added to main ext section) + /// - `sdk.dependencies` (merged, main takes precedence on conflicts) + /// - `sdk.compile` (merged, main takes precedence on conflicts) + /// + /// Does NOT merge: + /// - `distro` (main config only) + /// - `default_target` (main config only) + /// - `supported_targets` (main config only) + fn merge_external_config( + main_config: &mut serde_yaml::Value, + external_config: &serde_yaml::Value, + _ext_name: &str, + ) { + // Merge ext sections + if let Some(external_ext) = external_config.get("ext").and_then(|e| e.as_mapping()) { + let main_ext = main_config + .as_mapping_mut() + .and_then(|m| { + if !m.contains_key(serde_yaml::Value::String("ext".to_string())) { + m.insert( + serde_yaml::Value::String("ext".to_string()), + serde_yaml::Value::Mapping(serde_yaml::Mapping::new()), + ); + } + m.get_mut(serde_yaml::Value::String("ext".to_string())) + }) + .and_then(|e| e.as_mapping_mut()); + + if let Some(main_ext_map) = main_ext { + for (ext_key, ext_value) in external_ext { + // Only add if not already present in main config + if !main_ext_map.contains_key(ext_key) { + main_ext_map.insert(ext_key.clone(), ext_value.clone()); + } + } + } + } + + // Merge sdk.dependencies + if let Some(external_sdk_deps) = external_config + .get("sdk") + .and_then(|s| s.get("dependencies")) + .and_then(|d| d.as_mapping()) + { + Self::ensure_sdk_dependencies_section(main_config); + + if let Some(main_sdk_deps) = main_config + .get_mut("sdk") + .and_then(|s| s.get_mut("dependencies")) + .and_then(|d| d.as_mapping_mut()) + { + for (dep_key, dep_value) in external_sdk_deps { + // Only add if not already present (main takes precedence) + if !main_sdk_deps.contains_key(dep_key) { + main_sdk_deps.insert(dep_key.clone(), dep_value.clone()); + } + } + } + } + + // Merge sdk.compile + if let Some(external_sdk_compile) = external_config + .get("sdk") + .and_then(|s| s.get("compile")) + .and_then(|c| c.as_mapping()) + { + Self::ensure_sdk_compile_section(main_config); + + if let Some(main_sdk_compile) = main_config + .get_mut("sdk") + .and_then(|s| s.get_mut("compile")) + .and_then(|c| c.as_mapping_mut()) + { + for (compile_key, compile_value) in external_sdk_compile { + // Only add if not already present (main takes precedence) + if !main_sdk_compile.contains_key(compile_key) { + main_sdk_compile.insert(compile_key.clone(), compile_value.clone()); + } + } + } + } + } + + /// Ensure the sdk.dependencies section exists in the config. + fn ensure_sdk_dependencies_section(config: &mut serde_yaml::Value) { + if let Some(main_map) = config.as_mapping_mut() { + // Ensure sdk section exists + if !main_map.contains_key(serde_yaml::Value::String("sdk".to_string())) { + main_map.insert( + serde_yaml::Value::String("sdk".to_string()), + serde_yaml::Value::Mapping(serde_yaml::Mapping::new()), + ); + } + + // Ensure sdk.dependencies section exists + if let Some(sdk) = main_map.get_mut(serde_yaml::Value::String("sdk".to_string())) { + if let Some(sdk_map) = sdk.as_mapping_mut() { + if !sdk_map.contains_key(serde_yaml::Value::String("dependencies".to_string())) + { + sdk_map.insert( + serde_yaml::Value::String("dependencies".to_string()), + serde_yaml::Value::Mapping(serde_yaml::Mapping::new()), + ); + } + } + } + } + } + + /// Ensure the sdk.compile section exists in the config. 
+ fn ensure_sdk_compile_section(config: &mut serde_yaml::Value) { + if let Some(main_map) = config.as_mapping_mut() { + // Ensure sdk section exists + if !main_map.contains_key(serde_yaml::Value::String("sdk".to_string())) { + main_map.insert( + serde_yaml::Value::String("sdk".to_string()), + serde_yaml::Value::Mapping(serde_yaml::Mapping::new()), + ); + } + + // Ensure sdk.compile section exists + if let Some(sdk) = main_map.get_mut(serde_yaml::Value::String("sdk".to_string())) { + if let Some(sdk_map) = sdk.as_mapping_mut() { + if !sdk_map.contains_key(serde_yaml::Value::String("compile".to_string())) { + sdk_map.insert( + serde_yaml::Value::String("compile".to_string()), + serde_yaml::Value::Mapping(serde_yaml::Mapping::new()), + ); + } + } + } + } + } + /// Helper function to get a nested section from YAML using dot notation #[allow(dead_code)] // Helper for merging system fn get_nested_section<'a>( @@ -734,6 +1087,11 @@ impl Config { self.sdk.as_ref()?.repo_release.as_ref().cloned() } + /// Get the distro version from configuration + pub fn get_distro_version(&self) -> Option<&String> { + self.distro.as_ref()?.version.as_ref() + } + /// Get the SDK container args from configuration pub fn get_sdk_container_args(&self) -> Option<&Vec> { self.sdk.as_ref()?.container_args.as_ref() @@ -748,13 +1106,17 @@ impl Config { .unwrap_or(false) // Default to false (enable weak dependencies) } - /// Get signing keys mapping (name -> keyid) + /// Get signing keys mapping (name -> keyid or global name) #[allow(dead_code)] // Public API for future use pub fn get_signing_keys(&self) -> Option<&HashMap> { self.signing_keys.as_ref() } - /// Get signing key ID by name + /// Get signing key ID by local config name. + /// + /// Returns the raw value from the signing_keys mapping. The value can be either: + /// - A key ID (64-char hex hash of the public key) + /// - A global registry key name (which should be resolved via `resolve_signing_key_reference`) #[allow(dead_code)] // Public API for future use pub fn get_signing_key_id(&self, name: &str) -> Option<&String> { self.signing_keys.as_ref()?.get(name) @@ -769,12 +1131,62 @@ impl Config { .unwrap_or_default() } + /// Resolve a signing key reference to an actual key ID. + /// + /// The reference can be: + /// - A key ID directly (64-char hex hash of the public key) + /// - A global registry key name (resolved to its key ID) + /// + /// Returns (key_name, key_id) where key_name is the name in the global registry. + #[allow(dead_code)] // Public API for future use + pub fn resolve_signing_key_reference(reference: &str) -> Option<(String, String)> { + use crate::utils::signing_keys::KeysRegistry; + + let registry = KeysRegistry::load().ok()?; + + // First, try to find by global registry name + if let Some(entry) = registry.get_key(reference) { + return Some((reference.to_string(), entry.keyid.clone())); + } + + // If not found by name, check if it's a valid key ID that exists in the registry + for (name, entry) in ®istry.keys { + if entry.keyid == reference { + return Some((name.clone(), entry.keyid.clone())); + } + } + + None + } + /// Get signing key for a specific runtime + /// + /// The signing key reference in the config can be either: + /// - A key ID (64-char hex hash) + /// - A global registry key name + /// + /// Returns the resolved key ID. 
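+    ///
+    /// For example, with `signing_keys: { my-key: <keyid-or-registry-name> }` and
+    /// `runtime.dev.signing.key: my-key`, this returns the mapped value, resolved
+    /// through the global key registry to a key ID when possible.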
#[allow(dead_code)] // Public API for future use pub fn get_runtime_signing_key(&self, runtime_name: &str) -> Option { let runtime_config = self.runtime.as_ref()?.get(runtime_name)?; let signing_key_name = &runtime_config.signing.as_ref()?.key; - self.get_signing_key_id(signing_key_name).cloned() + + // First, check the local signing_keys mapping + if let Some(key_ref) = self.get_signing_key_id(signing_key_name) { + // The value can be a key ID or a global name, resolve it + if let Some((_, keyid)) = Self::resolve_signing_key_reference(key_ref) { + return Some(keyid); + } + // If resolution fails, return the value as-is (might be a key ID not yet in registry) + return Some(key_ref.clone()); + } + + // If not in local mapping, try resolving signing_key_name directly as a global reference + if let Some((_, keyid)) = Self::resolve_signing_key_reference(signing_key_name) { + return Some(keyid); + } + + None } /// Get provision profile configuration @@ -789,6 +1201,15 @@ impl Config { .as_ref() } + /// Get the state file path for a provision profile. + /// Returns the configured state_file path, or the default `provision-{profile}.state` if not set. + /// The path is relative to src_dir. + pub fn get_provision_state_file(&self, profile_name: &str) -> String { + self.get_provision_profile(profile_name) + .and_then(|p| p.state_file.clone()) + .unwrap_or_else(|| format!("provision-{}.state", profile_name)) + } + /// Get the resolved source directory path /// If src_dir is configured, it resolves relative paths relative to the config file /// If not configured, returns None (use default behavior) @@ -2736,6 +3157,51 @@ image = "docker.io/avocadolinux/sdk:apollo-edge" assert!(merged.is_none()); } + #[test] + fn test_provision_state_file_default() { + // Test that state_file defaults to provision-{profile}.state when not configured + let config_content = r#" +provision: + usb: + container_args: + - --privileged +"#; + + let config = Config::load_from_yaml_str(config_content).unwrap(); + + // Should use default pattern when state_file is not configured + let state_file = config.get_provision_state_file("usb"); + assert_eq!(state_file, "provision-usb.state"); + + // Should also use default for non-existent profiles + let state_file = config.get_provision_state_file("nonexistent"); + assert_eq!(state_file, "provision-nonexistent.state"); + } + + #[test] + fn test_provision_state_file_custom() { + // Test that custom state_file is used when configured + let config_content = r#" +provision: + production: + container_args: + - --privileged + state_file: custom-state.json + development: + state_file: dev/state.json +"#; + + let config = Config::load_from_yaml_str(config_content).unwrap(); + + // Should use custom state_file when configured + let state_file = config.get_provision_state_file("production"); + assert_eq!(state_file, "custom-state.json"); + + // Should work with nested paths + let state_file = config.get_provision_state_file("development"); + assert_eq!(state_file, "dev/state.json"); + } + #[test] fn test_merged_sdk_config() { // Create a temporary config file for testing merging @@ -4599,15 +5065,20 @@ sdk: #[test] fn test_signing_keys_parsing() { - let config_content = r#" + // Key IDs are now full 64-char hex-encoded SHA-256 hashes + let production_keyid = "abc123def456abc123def456abc123def456abc123def456abc123def456abc1"; + let backup_keyid = "789012fedcba789012fedcba789012fedcba789012fedcba789012fedcba7890"; + + let config_content = format!( + r#" default_target: qemux86-64 sdk: image: 
ghcr.io/avocado-framework/avocado-sdk:latest signing_keys: - - my-production-key: sha256-abc123def456 - - backup-key: sha256-789012fedcba + - my-production-key: {production_keyid} + - backup-key: {backup_keyid} runtime: dev: @@ -4622,9 +5093,10 @@ runtime: signing: key: my-production-key # checksum_algorithm defaults to sha256 -"#; +"# + ); - let config = Config::load_from_yaml_str(config_content).unwrap(); + let config = Config::load_from_yaml_str(&config_content).unwrap(); // Test that signing_keys is parsed correctly let signing_keys = config.get_signing_keys(); @@ -4633,11 +5105,11 @@ runtime: assert_eq!(signing_keys.len(), 2); assert_eq!( signing_keys.get("my-production-key"), - Some(&"sha256-abc123def456".to_string()) + Some(&production_keyid.to_string()) ); assert_eq!( signing_keys.get("backup-key"), - Some(&"sha256-789012fedcba".to_string()) + Some(&backup_keyid.to_string()) ); // Test get_signing_key_names helper @@ -4649,17 +5121,18 @@ runtime: // Test get_signing_key_id helper assert_eq!( config.get_signing_key_id("my-production-key"), - Some(&"sha256-abc123def456".to_string()) + Some(&production_keyid.to_string()) ); assert_eq!( config.get_signing_key_id("backup-key"), - Some(&"sha256-789012fedcba".to_string()) + Some(&backup_keyid.to_string()) ); assert_eq!(config.get_signing_key_id("nonexistent"), None); - // Test runtime signing key reference + // Test runtime signing key reference - returns the keyid from the mapping + // (without global registry, resolve_signing_key_reference returns None so we get the raw value) let runtime_key = config.get_runtime_signing_key("dev"); - assert_eq!(runtime_key, Some("sha256-abc123def456".to_string())); + assert_eq!(runtime_key, Some(production_keyid.to_string())); // Test runtime signing config let runtime = config.runtime.as_ref().unwrap().get("dev").unwrap(); @@ -4701,4 +5174,240 @@ sdk: let key_names = config.get_signing_key_names(); assert!(key_names.is_empty()); } + + #[test] + fn test_discover_external_config_refs_from_runtime() { + let config_content = r#" +runtime: + prod: + target: qemux86-64 + dependencies: + peridio: + ext: avocado-ext-peridio + config: avocado-ext-peridio/avocado.yml + local-ext: + ext: local-extension +"#; + + let parsed: serde_yaml::Value = serde_yaml::from_str(config_content).unwrap(); + let refs = Config::discover_external_config_refs(&parsed); + + assert_eq!(refs.len(), 1); + assert_eq!(refs[0].0, "avocado-ext-peridio"); + assert_eq!(refs[0].1, "avocado-ext-peridio/avocado.yml"); + } + + #[test] + fn test_discover_external_config_refs_from_ext() { + let config_content = r#" +ext: + main-ext: + types: + - sysext + dependencies: + external-dep: + ext: external-extension + config: external/config.yaml +"#; + + let parsed: serde_yaml::Value = serde_yaml::from_str(config_content).unwrap(); + let refs = Config::discover_external_config_refs(&parsed); + + assert_eq!(refs.len(), 1); + assert_eq!(refs[0].0, "external-extension"); + assert_eq!(refs[0].1, "external/config.yaml"); + } + + #[test] + fn test_merge_external_config_ext_section() { + let main_config_content = r#" +distro: + version: "1.0.0" +ext: + local-ext: + types: + - sysext +"#; + let external_config_content = r#" +ext: + external-ext: + types: + - sysext + version: "{{ config.distro.version }}" +"#; + + let mut main_config: serde_yaml::Value = serde_yaml::from_str(main_config_content).unwrap(); + let external_config: serde_yaml::Value = + serde_yaml::from_str(external_config_content).unwrap(); + + Config::merge_external_config(&mut main_config, 
&external_config, "external-ext"); + + // Check that both extensions are present + let ext_section = main_config.get("ext").unwrap().as_mapping().unwrap(); + assert!(ext_section.contains_key(serde_yaml::Value::String("local-ext".to_string()))); + assert!(ext_section.contains_key(serde_yaml::Value::String("external-ext".to_string()))); + } + + #[test] + fn test_merge_external_config_sdk_dependencies() { + let main_config_content = r#" +sdk: + image: test-image + dependencies: + main-package: "*" +"#; + let external_config_content = r#" +sdk: + dependencies: + external-package: "1.0.0" + main-package: "2.0.0" # Should not override main config +"#; + + let mut main_config: serde_yaml::Value = serde_yaml::from_str(main_config_content).unwrap(); + let external_config: serde_yaml::Value = + serde_yaml::from_str(external_config_content).unwrap(); + + Config::merge_external_config(&mut main_config, &external_config, "test-ext"); + + let sdk_deps = main_config + .get("sdk") + .unwrap() + .get("dependencies") + .unwrap() + .as_mapping() + .unwrap(); + + // External package should be added + assert!(sdk_deps.contains_key(serde_yaml::Value::String("external-package".to_string()))); + assert_eq!( + sdk_deps + .get(serde_yaml::Value::String("external-package".to_string())) + .unwrap() + .as_str(), + Some("1.0.0") + ); + + // Main package should NOT be overridden + assert_eq!( + sdk_deps + .get(serde_yaml::Value::String("main-package".to_string())) + .unwrap() + .as_str(), + Some("*") + ); + } + + #[test] + fn test_merge_does_not_override_distro() { + let main_config_content = r#" +distro: + version: "1.0.0" + channel: "stable" +"#; + let external_config_content = r#" +distro: + version: "2.0.0" + channel: "edge" +"#; + + let mut main_config: serde_yaml::Value = serde_yaml::from_str(main_config_content).unwrap(); + let external_config: serde_yaml::Value = + serde_yaml::from_str(external_config_content).unwrap(); + + Config::merge_external_config(&mut main_config, &external_config, "test-ext"); + + // Distro should remain unchanged from main config + let distro = main_config.get("distro").unwrap(); + assert_eq!(distro.get("version").unwrap().as_str(), Some("1.0.0")); + assert_eq!(distro.get("channel").unwrap().as_str(), Some("stable")); + } + + #[test] + fn test_load_composed_with_interpolation() { + use tempfile::TempDir; + + // Create a temp directory for our test configs + let temp_dir = TempDir::new().unwrap(); + + // Create main config + let main_config_content = r#" +distro: + version: "1.0.0" + channel: apollo-edge +default_target: qemux86-64 +sdk: + image: "docker.io/test:{{ config.distro.channel }}" + dependencies: + main-sdk-dep: "*" +runtime: + prod: + target: qemux86-64 + dependencies: + peridio: + ext: test-ext + config: external/avocado.yml +"#; + let main_config_path = temp_dir.path().join("avocado.yaml"); + std::fs::write(&main_config_path, main_config_content).unwrap(); + + // Create external config directory and file + let external_dir = temp_dir.path().join("external"); + std::fs::create_dir_all(&external_dir).unwrap(); + + let external_config_content = r#" +ext: + test-ext: + version: "{{ config.distro.version }}" + types: + - sysext +sdk: + dependencies: + external-sdk-dep: "*" +"#; + let external_config_path = external_dir.join("avocado.yml"); + std::fs::write(&external_config_path, external_config_content).unwrap(); + + // Load composed config + let composed = Config::load_composed(&main_config_path, Some("qemux86-64")).unwrap(); + + // Verify the SDK image was interpolated using 
main config's distro + assert_eq!( + composed + .config + .sdk + .as_ref() + .unwrap() + .image + .as_ref() + .unwrap(), + "docker.io/test:apollo-edge" + ); + + // Verify the external extension was merged + let ext_section = composed + .merged_value + .get("ext") + .unwrap() + .as_mapping() + .unwrap(); + assert!(ext_section.contains_key(serde_yaml::Value::String("test-ext".to_string()))); + + // Verify the external extension's version was interpolated from main config's distro + let test_ext = ext_section + .get(serde_yaml::Value::String("test-ext".to_string())) + .unwrap(); + assert_eq!(test_ext.get("version").unwrap().as_str(), Some("1.0.0")); + + // Verify SDK dependencies were merged + let sdk_deps = composed + .merged_value + .get("sdk") + .unwrap() + .get("dependencies") + .unwrap() + .as_mapping() + .unwrap(); + assert!(sdk_deps.contains_key(serde_yaml::Value::String("main-sdk-dep".to_string()))); + assert!(sdk_deps.contains_key(serde_yaml::Value::String("external-sdk-dep".to_string()))); + } } diff --git a/src/utils/container.rs b/src/utils/container.rs index 153a6f2..09e00a6 100644 --- a/src/utils/container.rs +++ b/src/utils/container.rs @@ -32,6 +32,10 @@ pub struct RunConfig { pub runtime_sysroot: Option, pub no_bootstrap: bool, pub disable_weak_dependencies: bool, + pub signing_socket_path: Option, + pub signing_helper_script_path: Option, + pub signing_key_name: Option, + pub signing_checksum_algorithm: Option, } impl Default for RunConfig { @@ -56,6 +60,10 @@ impl Default for RunConfig { runtime_sysroot: None, no_bootstrap: false, disable_weak_dependencies: false, + signing_socket_path: None, + signing_helper_script_path: None, + signing_key_name: None, + signing_checksum_algorithm: None, } } } @@ -121,7 +129,7 @@ impl SdkContainer { let volume_state = volume_manager.get_or_create_volume(&self.cwd).await?; // Build environment variables - let mut env_vars = config.env_vars.unwrap_or_default(); + let mut env_vars = config.env_vars.clone().unwrap_or_default(); // Set host platform environment variable let host_platform = if cfg!(target_os = "windows") { @@ -172,18 +180,8 @@ impl SdkContainer { let bash_cmd = vec!["bash".to_string(), "-c".to_string(), full_command]; // Build container command with volume state - let container_cmd = self.build_container_command( - &config.container_image, - &bash_cmd, - &config.target, - &env_vars, - config.container_name.as_deref(), - config.detach, - config.rm, - config.interactive, - config.container_args.as_deref(), - &volume_state, - )?; + let container_cmd = + self.build_container_command(&config, &bash_cmd, &env_vars, &volume_state)?; // Execute the command self.execute_container_command( @@ -195,34 +193,27 @@ impl SdkContainer { } /// Build the complete container command - #[allow(clippy::too_many_arguments)] fn build_container_command( &self, - container_image: &str, + config: &RunConfig, command: &[String], - target: &str, env_vars: &HashMap, - container_name: Option<&str>, - detach: bool, - rm: bool, - interactive: bool, - container_args: Option<&[String]>, volume_state: &VolumeState, ) -> Result> { let mut container_cmd = vec![self.container_tool.clone(), "run".to_string()]; // Container options - if rm { + if config.rm { container_cmd.push("--rm".to_string()); } - if let Some(name) = container_name { + if let Some(name) = &config.container_name { container_cmd.push("--name".to_string()); container_cmd.push(name.to_string()); } - if detach { + if config.detach { container_cmd.push("-d".to_string()); } - if interactive { + if 
config.interactive { container_cmd.push("-i".to_string()); container_cmd.push("-t".to_string()); } @@ -234,6 +225,23 @@ impl SdkContainer { container_cmd.push("-v".to_string()); container_cmd.push(format!("{}:/opt/_avocado:rw", volume_state.volume_name)); + // Mount signing socket directory if provided + if let Some(socket_path) = &config.signing_socket_path { + if let Some(socket_dir) = socket_path.parent() { + container_cmd.push("-v".to_string()); + container_cmd.push(format!("{}:/run/avocado:rw", socket_dir.display())); + } + } + + // Mount signing helper script if provided + if let Some(helper_script_path) = &config.signing_helper_script_path { + container_cmd.push("-v".to_string()); + container_cmd.push(format!( + "{}:/usr/local/bin/avocado-sign-request:ro", + helper_script_path.display() + )); + } + // Mount signing keys directory if it exists (read-only for security) let signing_keys_env = if let Ok(signing_keys_dir) = crate::utils::signing_keys::get_signing_keys_dir() { @@ -256,9 +264,29 @@ impl SdkContainer { // Add environment variables container_cmd.push("-e".to_string()); - container_cmd.push(format!("AVOCADO_TARGET={target}")); + container_cmd.push(format!("AVOCADO_TARGET={}", config.target)); + container_cmd.push("-e".to_string()); + container_cmd.push(format!("AVOCADO_SDK_TARGET={}", config.target)); container_cmd.push("-e".to_string()); - container_cmd.push(format!("AVOCADO_SDK_TARGET={target}")); + container_cmd.push("AVOCADO_SRC_DIR=/opt/src".to_string()); + + // Add signing-related environment variables + if config.signing_socket_path.is_some() { + container_cmd.push("-e".to_string()); + container_cmd.push("AVOCADO_SIGNING_SOCKET=/run/avocado/sign.sock".to_string()); + container_cmd.push("-e".to_string()); + container_cmd.push("AVOCADO_SIGNING_ENABLED=1".to_string()); + } + + if let Some(key_name) = &config.signing_key_name { + container_cmd.push("-e".to_string()); + container_cmd.push(format!("AVOCADO_SIGNING_KEY_NAME={}", key_name)); + } + + if let Some(checksum_algo) = &config.signing_checksum_algorithm { + container_cmd.push("-e".to_string()); + container_cmd.push(format!("AVOCADO_SIGNING_CHECKSUM={}", checksum_algo)); + } // Add signing keys directory env var if mounted if let Some(keys_dir) = signing_keys_env { @@ -272,14 +300,14 @@ impl SdkContainer { } // Add additional container arguments if provided - if let Some(args) = container_args { + if let Some(args) = &config.container_args { for arg in args { container_cmd.extend(Self::parse_container_arg(arg)); } } // Add the container image - container_cmd.push(container_image.to_string()); + container_cmd.push(config.container_image.to_string()); // Add the command to execute container_cmd.extend(command.iter().cloned()); @@ -294,7 +322,7 @@ impl SdkContainer { let volume_state = volume_manager.get_or_create_volume(&self.cwd).await?; // Build environment variables - let mut env_vars = config.env_vars.unwrap_or_default(); + let mut env_vars = config.env_vars.clone().unwrap_or_default(); // Set host platform environment variable let host_platform = if cfg!(target_os = "windows") { @@ -345,18 +373,8 @@ impl SdkContainer { let bash_cmd = vec!["bash".to_string(), "-c".to_string(), full_command]; // Build container command with volume state - let container_cmd = self.build_container_command( - &config.container_image, - &bash_cmd, - &config.target, - &env_vars, - config.container_name.as_deref(), - false, // Never detach when capturing output - config.rm, - false, // Never interactive when capturing output - 
config.container_args.as_deref(), - &volume_state, - )?; + let container_cmd = + self.build_container_command(&config, &bash_cmd, &env_vars, &volume_state)?; if config.verbose || self.verbose { print_info( @@ -1001,18 +1019,33 @@ mod tests { let env_vars = HashMap::new(); let volume_state = VolumeState::new(std::env::current_dir().unwrap(), "docker".to_string()); - let result = container.build_container_command( - "test-image", - &command, - "test-target", - &env_vars, - None, - false, - true, - false, - None, - &volume_state, - ); + let config = RunConfig { + container_image: "test-image".to_string(), + target: "test-target".to_string(), + command: "".to_string(), + container_name: None, + detach: false, + rm: true, + env_vars: None, + verbose: false, + source_environment: false, + use_entrypoint: false, + interactive: false, + repo_url: None, + repo_release: None, + container_args: None, + dnf_args: None, + extension_sysroot: None, + runtime_sysroot: None, + no_bootstrap: false, + disable_weak_dependencies: false, + signing_socket_path: None, + signing_helper_script_path: None, + signing_key_name: None, + signing_checksum_algorithm: None, + }; + + let result = container.build_container_command(&config, &command, &env_vars, &volume_state); assert!(result.is_ok()); let cmd = result.unwrap(); @@ -1022,6 +1055,8 @@ mod tests { assert!(cmd.contains(&"test-image".to_string())); assert!(cmd.contains(&"echo".to_string())); assert!(cmd.contains(&"test".to_string())); + // Verify AVOCADO_SRC_DIR is set + assert!(cmd.contains(&"AVOCADO_SRC_DIR=/opt/src".to_string())); } #[test] diff --git a/src/utils/mod.rs b/src/utils/mod.rs index 0b089c5..d4e8366 100644 --- a/src/utils/mod.rs +++ b/src/utils/mod.rs @@ -5,5 +5,6 @@ pub mod interpolation; pub mod output; pub mod pkcs11_devices; pub mod signing_keys; +pub mod signing_service; pub mod target; pub mod volume; diff --git a/src/utils/pkcs11_devices.rs b/src/utils/pkcs11_devices.rs index 4622cb4..7b8b551 100644 --- a/src/utils/pkcs11_devices.rs +++ b/src/utils/pkcs11_devices.rs @@ -579,12 +579,15 @@ fn detect_algorithm_from_attributes(attributes: &[Attribute]) -> Result String { let mut hasher = Sha256::new(); hasher.update(public_key_bytes); let hash = hasher.finalize(); - format!("sha256-{}", hex_encode(&hash[..8])) + hex_encode(&hash) } /// Build a PKCS#11 URI @@ -899,7 +902,9 @@ mod tests { fn test_generate_keyid_from_public_key() { let test_key = b"test public key data"; let keyid = generate_keyid_from_public_key(test_key); - assert!(keyid.starts_with("sha256-")); - assert_eq!(keyid.len(), 7 + 16); // "sha256-" + 16 hex chars + // Key ID is the full SHA-256 hash, base16 encoded (64 hex chars) + assert_eq!(keyid.len(), 64); + // Verify it's valid hex + assert!(keyid.chars().all(|c| c.is_ascii_hexdigit())); } } diff --git a/src/utils/signing_keys.rs b/src/utils/signing_keys.rs index d06f108..88ea08f 100644 --- a/src/utils/signing_keys.rs +++ b/src/utils/signing_keys.rs @@ -130,12 +130,15 @@ pub fn get_key_file_path(keyid: &str) -> Result { Ok(keys_dir.join(keyid)) } -/// Generate a key ID from a public key (SHA-256 hash, first 16 hex chars) +/// Generate a key ID from a public key (full SHA-256 hash, base16/hex encoded) +/// +/// Returns the full 64-character hex-encoded SHA-256 hash of the public key. +/// This key ID is also used as the default friendly name when no name is provided. 
pub fn generate_keyid(public_key: &PublicKey) -> String { let mut hasher = Sha256::new(); hasher.update(public_key.as_ref()); let hash = hasher.finalize(); - format!("sha256-{}", hex::encode(&hash[..8])) + hex::encode(&hash) } /// Generate a new ed25519 keypair @@ -323,8 +326,10 @@ mod tests { fn test_generate_keyid() { let (_, verifying_key) = generate_keypair(); let keyid = generate_keyid(&verifying_key); - assert!(keyid.starts_with("sha256-")); - assert_eq!(keyid.len(), 7 + 16); // "sha256-" + 16 hex chars + // Key ID is the full SHA-256 hash, base16 encoded (64 hex chars) + assert_eq!(keyid.len(), 64); + // Verify it's valid hex + assert!(keyid.chars().all(|c| c.is_ascii_hexdigit())); } #[test] @@ -370,7 +375,8 @@ mod tests { registry.keys.insert( "test-key".to_string(), KeyEntry { - keyid: "sha256-abcd1234abcd1234".to_string(), + keyid: "abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234" + .to_string(), algorithm: "ed25519".to_string(), created_at: Utc::now(), uri: "file:///path/to/key".to_string(), diff --git a/src/utils/signing_service.rs b/src/utils/signing_service.rs new file mode 100644 index 0000000..ce9e67e --- /dev/null +++ b/src/utils/signing_service.rs @@ -0,0 +1,581 @@ +//! Signing service for handling binary signing requests from containers. +//! +//! This module provides a Unix domain socket service that listens for signing +//! requests from containers during provisioning operations. The service allows +//! container scripts to request binary signing without breaking execution flow. + +use anyhow::{Context, Result}; +use serde::{Deserialize, Serialize}; +use std::path::PathBuf; +use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader}; +use tokio::net::{UnixListener, UnixStream}; +use tokio::sync::mpsc; +use tokio::time::{timeout, Duration}; + +use crate::utils::output::{print_error, print_info, OutputLevel}; + +/// Maximum time to wait for a signing operation (30 seconds) +const SIGNING_TIMEOUT: Duration = Duration::from_secs(30); + +/// Request from container to sign a binary +#[derive(Debug, Clone, Serialize, Deserialize)] +pub struct SignRequest { + /// Type identifier for the request + #[serde(rename = "type")] + pub request_type: String, + /// Path to the binary inside the container (for reference in signature file) + pub binary_path: String, + /// Hex-encoded hash computed by the container + pub hash: String, + /// File size in bytes + pub size: u64, + /// Checksum algorithm used (sha256 or blake3) + pub checksum_algorithm: String, +} + +/// Response from host after signing +#[derive(Debug, Clone, Serialize, Deserialize)] +pub struct SignResponse { + /// Type identifier for the response + #[serde(rename = "type")] + pub response_type: String, + /// Whether the signing was successful + pub success: bool, + /// The signature content (JSON format) - container writes this to .sig file + pub signature: Option, + /// Error message if signing failed + pub error: Option, +} + +/// Configuration for the signing service +#[derive(Debug, Clone)] +pub struct SigningServiceConfig { + /// Path to the Unix socket file on the host + pub socket_path: PathBuf, + /// Name of the runtime being provisioned + pub runtime_name: String, + /// Signing key name to use + pub key_name: String, + /// Signing key ID + pub keyid: String, + /// Enable verbose logging + pub verbose: bool, +} + +/// Handle for controlling the signing service +pub struct SigningService { + /// Channel to send shutdown signal + shutdown_tx: mpsc::Sender<()>, + /// Task handle for the service + 
+    task_handle: tokio::task::JoinHandle<Result<()>>,
+    /// Temporary directory for socket and helper script (kept alive until service is dropped)
+    _temp_dir: std::sync::Arc<tempfile::TempDir>,
+}
+
+impl SigningService {
+    /// Start a new signing service
+    pub async fn start(config: SigningServiceConfig, temp_dir: tempfile::TempDir) -> Result<Self> {
+        let (shutdown_tx, mut shutdown_rx) = mpsc::channel::<()>(1);
+
+        // Create the socket
+        let socket_path = config.socket_path.clone();
+
+        // Remove socket file if it exists from a previous run
+        if socket_path.exists() {
+            std::fs::remove_file(&socket_path).with_context(|| {
+                format!(
+                    "Failed to remove existing socket at {}",
+                    socket_path.display()
+                )
+            })?;
+        }
+
+        // Create parent directory if needed
+        if let Some(parent) = socket_path.parent() {
+            std::fs::create_dir_all(parent).with_context(|| {
+                format!("Failed to create socket directory at {}", parent.display())
+            })?;
+        }
+
+        let listener = UnixListener::bind(&socket_path)
+            .with_context(|| format!("Failed to bind Unix socket at {}", socket_path.display()))?;
+
+        // Set socket permissions to 0600 (owner only)
+        #[cfg(unix)]
+        {
+            use std::os::unix::fs::PermissionsExt;
+            let perms = std::fs::Permissions::from_mode(0o600);
+            std::fs::set_permissions(&socket_path, perms)
+                .context("Failed to set socket permissions")?;
+        }
+
+        if config.verbose {
+            print_info(
+                &format!("Signing service listening on {}", socket_path.display()),
+                OutputLevel::Verbose,
+            );
+        }
+
+        // Spawn the service task
+        let config_clone = config.clone();
+        let socket_path_clone = socket_path.clone();
+        let task_handle = tokio::spawn(async move {
+            let result = run_service(listener, config_clone, &mut shutdown_rx).await;
+
+            // Clean up socket file
+            let _ = std::fs::remove_file(&socket_path_clone);
+
+            result
+        });
+
+        Ok(Self {
+            shutdown_tx,
+            task_handle,
+            _temp_dir: std::sync::Arc::new(temp_dir),
+        })
+    }
+
+    /// Shutdown the signing service
+    pub async fn shutdown(self) -> Result<()> {
+        // Send shutdown signal
+        let _ = self.shutdown_tx.send(()).await;
+
+        // Wait for the task to complete
+        self.task_handle
+            .await
+            .context("Failed to join signing service task")?
+            .context("Signing service encountered an error")
+    }
+}
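`SigningService` is a start/shutdown handle around a background tokio task. A hedged usage sketch, not taken from the diff, showing how provisioning code might drive it; the runtime name, key name, and paths are illustrative values only:

```rust
// Hypothetical caller-side wiring around a provision run.
use anyhow::Result;
use crate::utils::signing_service::{SigningService, SigningServiceConfig};

async fn provision_with_signing() -> Result<()> {
    // Temp dir holds the socket (and injected helper script) for this run.
    let temp_dir = tempfile::tempdir()?;
    let config = SigningServiceConfig {
        socket_path: temp_dir.path().join("sign.sock"),
        runtime_name: "my-runtime".to_string(),
        key_name: "my-key".to_string(),
        keyid: "<64-char hex key id>".to_string(), // illustrative placeholder
        verbose: true,
    };

    let service = SigningService::start(config, temp_dir).await?;

    // ... run the provision container here; scripts inside it talk to the
    //     socket through the injected avocado-sign-request helper ...

    service.shutdown().await?;
    Ok(())
}
```

Holding the returned handle keeps the `TempDir`, and therefore the socket, alive for the duration of the provision run.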
+
+/// Run the signing service loop
+async fn run_service(
+    listener: UnixListener,
+    config: SigningServiceConfig,
+    shutdown_rx: &mut mpsc::Receiver<()>,
+) -> Result<()> {
+    loop {
+        tokio::select! {
+            // Handle shutdown signal
+            _ = shutdown_rx.recv() => {
+                if config.verbose {
+                    print_info("Signing service shutting down", OutputLevel::Verbose);
+                }
+                break;
+            }
+
+            // Accept new connections
+            result = listener.accept() => {
+                match result {
+                    Ok((stream, _addr)) => {
+                        let config = config.clone();
+                        tokio::spawn(async move {
+                            if let Err(e) = handle_connection(stream, config).await {
+                                print_error(
+                                    &format!("Error handling signing request: {}", e),
+                                    OutputLevel::Normal,
+                                );
+                            }
+                        });
+                    }
+                    Err(e) => {
+                        print_error(
+                            &format!("Failed to accept connection: {}", e),
+                            OutputLevel::Normal,
+                        );
+                    }
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+/// Handle a single connection from a container
+async fn handle_connection(stream: UnixStream, config: SigningServiceConfig) -> Result<()> {
+    if config.verbose {
+        print_info(
+            "Received signing request from container",
+            OutputLevel::Verbose,
+        );
+    }
+
+    let (reader, mut writer) = stream.into_split();
+    let mut reader = BufReader::new(reader);
+    let mut line = String::new();
+
+    // Read the request with timeout
+    let request: SignRequest = match timeout(SIGNING_TIMEOUT, reader.read_line(&mut line)).await {
+        Ok(Ok(_)) => serde_json::from_str(&line).context("Failed to parse signing request JSON")?,
+        Ok(Err(e)) => {
+            return Err(anyhow::anyhow!("Failed to read request: {}", e));
+        }
+        Err(_) => {
+            return Err(anyhow::anyhow!("Timeout reading signing request"));
+        }
+    };
+
+    if config.verbose {
+        print_info(
+            &format!("Processing signing request for: {}", request.binary_path),
+            OutputLevel::Verbose,
+        );
+    }
+
+    // Process the signing request (synchronous - just signs the pre-computed hash)
+    let response = process_signing_request(request, &config);
+
+    // Send response back to container
+    let response_json =
+        serde_json::to_string(&response).context("Failed to serialize signing response")?;
+
+    writer
+        .write_all(response_json.as_bytes())
+        .await
+        .context("Failed to write response")?;
+    writer
+        .write_all(b"\n")
+        .await
+        .context("Failed to write newline")?;
+    writer.flush().await.context("Failed to flush response")?;
+
+    if config.verbose {
+        if response.success {
+            print_info(
+                "Signing request completed successfully",
+                OutputLevel::Verbose,
+            );
+        } else {
+            print_error(
+                &format!(
+                    "Signing request failed: {}",
+                    response.error.unwrap_or_default()
+                ),
+                OutputLevel::Verbose,
+            );
+        }
+    }
+
+    Ok(())
+}
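`handle_connection` speaks a simple newline-delimited JSON protocol: one `SignRequest` line in, one `SignResponse` line out. The actual client is the injected bash helper further down in this file; the Rust client below is only an illustration of the wire format, with an example socket path, binary path, and field values:

```rust
// Illustrative container-side client for the line-delimited JSON protocol.
use anyhow::Result;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::net::UnixStream;

async fn request_signature(hash_hex: &str, size: u64) -> Result<String> {
    let stream = UnixStream::connect("/run/avocado/sign.sock").await?;
    let (reader, mut writer) = stream.into_split();

    // Send exactly one JSON request, terminated by a newline.
    let request = serde_json::json!({
        "type": "sign_request",
        "binary_path": "/opt/_avocado/x86_64/runtimes/my-runtime/firmware.bin",
        "hash": hash_hex,
        "size": size,
        "checksum_algorithm": "sha256",
    });
    writer.write_all(request.to_string().as_bytes()).await?;
    writer.write_all(b"\n").await?;
    writer.flush().await?;

    // Read exactly one JSON response line back.
    let mut line = String::new();
    BufReader::new(reader).read_line(&mut line).await?;
    let response: serde_json::Value = serde_json::from_str(&line)?;
    anyhow::ensure!(
        response["success"] == true,
        "signing failed: {}",
        response["error"]
    );
    Ok(response["signature"].as_str().unwrap_or_default().to_string())
}
```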
+
+/// Process a signing request and generate a response
+///
+/// The container has already computed the hash - we just need to sign it.
+/// This is fast since there's no file I/O involved.
+fn process_signing_request(request: SignRequest, config: &SigningServiceConfig) -> SignResponse {
+    match sign_hash_from_request(&request, config) {
+        Ok(signature_content) => SignResponse {
+            response_type: "sign_response".to_string(),
+            success: true,
+            signature: Some(signature_content),
+            error: None,
+        },
+        Err(e) => SignResponse {
+            response_type: "sign_response".to_string(),
+            success: false,
+            signature: None,
+            error: Some(format!("{:#}", e)),
+        },
+    }
+}
+
+/// Sign a hash provided in the request
+fn sign_hash_from_request(
+    request: &SignRequest,
+    config: &SigningServiceConfig,
+) -> anyhow::Result<String> {
+    use crate::utils::image_signing::{sign_hash_manifest, HashManifest, HashManifestEntry};
+
+    // Create a manifest with the pre-computed hash from the container
+    let manifest = HashManifest {
+        runtime: config.runtime_name.clone(),
+        checksum_algorithm: request.checksum_algorithm.clone(),
+        files: vec![HashManifestEntry {
+            container_path: request.binary_path.clone(),
+            hash: request.hash.clone(),
+            size: request.size,
+        }],
+    };
+
+    // Sign the hash - this is fast, no file I/O needed
+    let signatures = sign_hash_manifest(&manifest, &config.key_name, &config.keyid)
+        .context("Failed to sign hash")?;
+
+    if signatures.is_empty() {
+        anyhow::bail!("No signature generated");
+    }
+
+    Ok(signatures[0].content.clone())
+}
+
+/// Generate the helper script for containers to request signing
+pub fn generate_helper_script() -> String {
+    r#"#!/bin/bash
+# avocado-sign-request - Request binary signing from host CLI
+# This script is injected into containers during provisioning to enable
+# inline binary signing without breaking script execution flow.
+#
+# The script computes the hash locally in the container, sends only the hash
+# to the host for signing, and writes the signature file locally.
+# This avoids expensive file transfers between container and host.
+
+set -e
+
+# Configuration
+MAX_RETRIES=3
+RETRY_DELAY=1
+# Timeout for waiting on response (signing is fast since we only send the hash)
+SOCKET_TIMEOUT=30
+
+# Check if signing socket is available
+if [ ! -S "/run/avocado/sign.sock" ]; then
+    echo "Error: Signing socket not available" >&2
+    exit 2  # Signing unavailable
+fi
+
+# Check arguments
+if [ $# -ne 1 ]; then
+    echo "Usage: avocado-sign-request <binary-path>" >&2
+    exit 1
+fi
+
+BINARY_PATH="$1"
+
-f "$BINARY_PATH" ]; then + echo "Error: Binary not found: $BINARY_PATH" >&2 + exit 3 # File not found +fi + +# Get absolute path +BINARY_PATH=$(realpath "$BINARY_PATH") + +# Determine checksum algorithm from environment or default to sha256 +CHECKSUM_ALGO="${AVOCADO_SIGNING_CHECKSUM:-sha256}" + +# Get file size +FILE_SIZE=$(stat -c%s "$BINARY_PATH" 2>/dev/null || stat -f%z "$BINARY_PATH" 2>/dev/null) +if [ -z "$FILE_SIZE" ]; then + echo "Error: Could not determine file size" >&2 + exit 1 +fi + +# Compute hash locally in the container +echo "Computing $CHECKSUM_ALGO hash of: $BINARY_PATH" >&2 +case "$CHECKSUM_ALGO" in + sha256) + if command -v sha256sum &> /dev/null; then + HASH=$(sha256sum "$BINARY_PATH" | cut -d' ' -f1) + elif command -v shasum &> /dev/null; then + HASH=$(shasum -a 256 "$BINARY_PATH" | cut -d' ' -f1) + else + echo "Error: No sha256 tool available (sha256sum or shasum)" >&2 + exit 2 + fi + ;; + blake3) + if command -v b3sum &> /dev/null; then + HASH=$(b3sum "$BINARY_PATH" | cut -d' ' -f1) + else + echo "Error: b3sum not available for blake3 hashing" >&2 + exit 2 + fi + ;; + *) + echo "Error: Unsupported checksum algorithm: $CHECKSUM_ALGO" >&2 + exit 1 + ;; +esac + +if [ -z "$HASH" ]; then + echo "Error: Failed to compute hash" >&2 + exit 1 +fi + +# Build JSON request with the pre-computed hash +# Using printf to avoid issues with JSON escaping +REQUEST=$(printf '{"type":"sign_request","binary_path":"%s","hash":"%s","size":%s,"checksum_algorithm":"%s"}' \ + "$BINARY_PATH" "$HASH" "$FILE_SIZE" "$CHECKSUM_ALGO") + +# Function to send request and get response +send_signing_request() { + local response="" + + # Send request to signing service via Unix socket + # The -t option for socat sets the timeout for half-close situations + if command -v socat &> /dev/null; then + response=$(echo "$REQUEST" | socat -t${SOCKET_TIMEOUT} -T${SOCKET_TIMEOUT} - UNIX-CONNECT:/run/avocado/sign.sock 2>/dev/null) || true + elif command -v nc &> /dev/null; then + # Try with -q option first (GNU netcat), fall back to -w only + if nc -h 2>&1 | grep -q '\-q'; then + response=$(echo "$REQUEST" | nc -w ${SOCKET_TIMEOUT} -q ${SOCKET_TIMEOUT} -U /run/avocado/sign.sock 2>/dev/null) || true + else + response=$(echo "$REQUEST" | nc -w ${SOCKET_TIMEOUT} -U /run/avocado/sign.sock 2>/dev/null) || true + fi + else + echo "Error: Neither socat nor nc available for socket communication" >&2 + exit 2 + fi + + echo "$response" +} + +# Retry loop with exponential backoff +RESPONSE="" +ATTEMPT=1 +while [ $ATTEMPT -le $MAX_RETRIES ]; do + if [ $ATTEMPT -gt 1 ]; then + echo "Retry attempt $ATTEMPT of $MAX_RETRIES..." 
+        sleep $RETRY_DELAY
+        RETRY_DELAY=$((RETRY_DELAY * 2))
+    fi
+
+    RESPONSE=$(send_signing_request)
+
+    # Check if we got a valid response
+    if [ -n "$RESPONSE" ]; then
+        if echo "$RESPONSE" | grep -q '"success"'; then
+            break
+        fi
+    fi
+
+    ATTEMPT=$((ATTEMPT + 1))
+done
+
+# Check if response is empty after all retries
+if [ -z "$RESPONSE" ]; then
+    echo "Error: No response from signing service after $MAX_RETRIES attempts" >&2
+    exit 1
+fi
+
+# Parse response and check success
+SUCCESS=$(echo "$RESPONSE" | grep -o '"success":[^,}]*' | cut -d: -f2 | tr -d ' ')
+
+if [ "$SUCCESS" = "true" ]; then
+    # Extract signature content from response and write to .sig file
+    # The signature field contains the JSON signature content (escaped in the response)
+    SIG_PATH="${BINARY_PATH}.sig"
+
+    # Extract the signature JSON from the response using the best available tool
+    SIGNATURE=""
+
+    # Try jq first (most reliable for JSON parsing)
+    if command -v jq &> /dev/null; then
+        SIGNATURE=$(echo "$RESPONSE" | jq -r '.signature // empty' 2>/dev/null) || true
+    fi
+
+    # Fall back to python3 if jq didn't work or isn't available
+    if [ -z "$SIGNATURE" ] && command -v python3 &> /dev/null; then
+        SIGNATURE=$(echo "$RESPONSE" | python3 -c "
+import sys, json
+try:
+    data = json.load(sys.stdin)
+    sig = data.get('signature', '')
+    if sig:
+        print(sig, end='')
+except Exception as e:
+    pass
+" 2>/dev/null) || true
+    fi
+
+    # Last resort: try python (python2 on some systems)
+    if [ -z "$SIGNATURE" ] && command -v python &> /dev/null; then
+        SIGNATURE=$(echo "$RESPONSE" | python -c "
+import sys, json
+try:
+    data = json.load(sys.stdin)
+    sig = data.get('signature', '')
+    if sig:
+        sys.stdout.write(sig)
+except:
+    pass
+" 2>/dev/null) || true
+    fi
+
+    if [ -z "$SIGNATURE" ]; then
+        echo "Error: Could not extract signature from response. Need jq or python3." >&2
+        echo "Response was: $RESPONSE" >&2
+        exit 1
+    fi
+
+    # Write the signature file (use printf to avoid adding extra newline)
+    printf '%s\n' "$SIGNATURE" > "$SIG_PATH"
+
+    echo "Successfully signed: $BINARY_PATH" >&2
+    exit 0
+else
+    # Extract error message
+    ERROR=""
+    if command -v jq &> /dev/null; then
+        ERROR=$(echo "$RESPONSE" | jq -r '.error // empty' 2>/dev/null) || true
+    fi
+    if [ -z "$ERROR" ]; then
+        ERROR=$(echo "$RESPONSE" | grep -o '"error":"[^"]*"' | cut -d'"' -f4)
+    fi
+    echo "Error signing binary: $ERROR" >&2
+    exit 1
+fi
+"#
+    .to_string()
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_sign_request_serialization() {
+        let request = SignRequest {
+            request_type: "sign_request".to_string(),
+            binary_path: "/opt/_avocado/x86_64/runtimes/test/binary".to_string(),
+            hash: "abcd1234".to_string(),
+            size: 1024,
+            checksum_algorithm: "sha256".to_string(),
+        };
+
+        let json = serde_json::to_string(&request).unwrap();
+        assert!(json.contains("sign_request"));
+        assert!(json.contains("/opt/_avocado/x86_64/runtimes/test/binary"));
+        assert!(json.contains("abcd1234"));
+        assert!(json.contains("1024"));
+        assert!(json.contains("sha256"));
+    }
+
+    #[test]
+    fn test_sign_response_serialization() {
+        let response = SignResponse {
+            response_type: "sign_response".to_string(),
+            success: true,
+            signature: Some("{\"version\":\"1\"}".to_string()),
+            error: None,
+        };
+
+        let json = serde_json::to_string(&response).unwrap();
+        assert!(json.contains("sign_response"));
+        assert!(json.contains("true"));
+        assert!(json.contains("signature"));
+    }
+
+    #[test]
+    fn test_helper_script_generation() {
+        let script = generate_helper_script();
+        assert!(script.contains("#!/bin/bash"));
+        assert!(script.contains("avocado-sign-request"));
+        assert!(script.contains("/run/avocado/sign.sock"));
+        assert!(script.contains("sign_request"));
+        // Verify retry logic is present
+        assert!(script.contains("MAX_RETRIES"));
+        assert!(script.contains("SOCKET_TIMEOUT"));
+        // Verify proper socat/nc timeout options
+        assert!(script.contains("-t${SOCKET_TIMEOUT}"));
+        // Verify hash computation is done locally
+        assert!(script.contains("sha256sum"));
+        assert!(script.contains("Computing"));
+        // Verify signature file is written locally
+        assert!(script.contains("SIG_PATH"));
+        // Verify jq is used for JSON parsing (most reliable)
+        assert!(script.contains("jq -r"));
+    }
+}
diff --git a/tests/fixtures/configs/with-signing-keys.yaml b/tests/fixtures/configs/with-signing-keys.yaml
index 969dadb..2581319 100644
--- a/tests/fixtures/configs/with-signing-keys.yaml
+++ b/tests/fixtures/configs/with-signing-keys.yaml
@@ -3,9 +3,10 @@ default_target: qemux86-64
 sdk:
   image: ghcr.io/avocado-framework/avocado-sdk:latest

+# Key IDs are full 64-char hex-encoded SHA-256 hashes of the public key
 signing_keys:
-  - my-production-key: sha256-abc123def456
-  - backup-key: sha256-789012fedcba
+  - my-production-key: abc123def456abc123def456abc123def456abc123def456abc123def456abc1
+  - backup-key: 789012fedcba789012fedcba789012fedcba789012fedcba789012fedcba7890

 runtime:
   default:
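The fixture above switches to the new key ID format introduced earlier in the diff: the full SHA-256 digest of the public key, rendered as 64 hex characters with no `sha256-` prefix. An illustrative format check, not a function from the diff, might look like this:

```rust
// Hypothetical helper showing the shape a key ID is now expected to have.
fn looks_like_keyid(keyid: &str) -> bool {
    keyid.len() == 64 && keyid.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert!(looks_like_keyid(
        "abc123def456abc123def456abc123def456abc123def456abc123def456abc1"
    ));
    assert!(!looks_like_keyid("sha256-abc123def456")); // old, now-invalid form
}
```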
diff --git a/tests/signing_integration.rs b/tests/signing_integration.rs
new file mode 100644
index 0000000..b873702
--- /dev/null
+++ b/tests/signing_integration.rs
@@ -0,0 +1,105 @@
+//! Integration tests for signing service and request handling
+
+#[cfg(test)]
+mod tests {
+    use std::path::PathBuf;
+
+    #[test]
+    fn test_signing_request_serialization() {
+        use serde_json;
+
+        let request = serde_json::json!({
+            "type": "sign_request",
+            "binary_path": "/opt/_avocado/x86_64/runtimes/test/binary",
+            "checksum_algorithm": "sha256"
+        });
+
+        let request_str = serde_json::to_string(&request).unwrap();
+        assert!(request_str.contains("sign_request"));
+        assert!(request_str.contains("binary"));
+    }
+
+    #[test]
+    fn test_signing_response_serialization() {
+        use serde_json;
+
+        let response = serde_json::json!({
+            "type": "sign_response",
+            "success": true,
+            "signature_path": "/opt/_avocado/x86_64/runtimes/test/binary.sig",
+            "signature_content": "{}",
+            "error": null
+        });
+
+        let response_str = serde_json::to_string(&response).unwrap();
+        assert!(response_str.contains("sign_response"));
+        assert!(response_str.contains("true"));
+    }
+
+    #[test]
+    fn test_helper_script_contains_required_elements() {
+        use avocado_cli::utils::signing_service::generate_helper_script;
+
+        let script = generate_helper_script();
+
+        // Check for required shebang
+        assert!(script.starts_with("#!/bin/bash"));
+
+        // Check for socket path
+        assert!(script.contains("/run/avocado/sign.sock"));
+
+        // Check for error handling
+        assert!(script.contains("exit 1"));
+        assert!(script.contains("exit 2"));
+        assert!(script.contains("exit 3"));
+
+        // Check for JSON request building
+        assert!(script.contains("sign_request"));
+        assert!(script.contains("binary_path"));
+        assert!(script.contains("checksum_algorithm"));
+    }
+
+    #[test]
+    fn test_run_config_with_signing_defaults() {
+        use avocado_cli::utils::container::RunConfig;
+
+        let config = RunConfig::default();
+
+        assert!(config.signing_socket_path.is_none());
+        assert!(config.signing_helper_script_path.is_none());
+        assert!(config.signing_key_name.is_none());
+        assert!(config.signing_checksum_algorithm.is_none());
+    }
+
+    #[test]
+    fn test_run_config_with_signing_configured() {
+        use avocado_cli::utils::container::RunConfig;
+
+        let config = RunConfig {
+            signing_socket_path: Some(PathBuf::from("/tmp/sign.sock")),
+            signing_helper_script_path: Some(PathBuf::from("/tmp/helper.sh")),
+            signing_key_name: Some("test-key".to_string()),
+            signing_checksum_algorithm: Some("sha256".to_string()),
+            ..Default::default()
+        };
+
+        assert!(config.signing_socket_path.is_some());
+        assert!(config.signing_helper_script_path.is_some());
+        assert_eq!(config.signing_key_name.unwrap(), "test-key");
+        assert_eq!(config.signing_checksum_algorithm.unwrap(), "sha256");
+    }
+}
+
+#[cfg(test)]
+mod path_validation_tests {
+    // Note: These tests are in a separate module because they test internal
+    // functions that aren't publicly exposed. In the actual implementation,
+    // the validation tests are in signing_request_handler.rs
+
+    #[test]
+    fn test_valid_binary_path() {
+        // This would require exposing validate_binary_path or testing through
+        // the public handle_signing_request function
+        // For now, we rely on the unit tests in signing_request_handler.rs
+    }
+}
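The placeholder module above refers to `validate_binary_path` in `signing_request_handler.rs`, which is not part of this diff. Purely as a hypothetical sketch of the kind of rule it might enforce (absolute path, no `..` components, located under the runtime build tree), with names and rules that are assumptions rather than the actual implementation:

```rust
// Hypothetical path check; the real validator is not shown in this diff.
use std::path::{Component, Path};

fn binary_path_is_allowed(path: &str) -> bool {
    let p = Path::new(path);
    // Reject relative paths and any ".." components outright.
    if !p.is_absolute() || p.components().any(|c| matches!(c, Component::ParentDir)) {
        return false;
    }
    // Only accept binaries under the per-target runtime build tree.
    p.starts_with("/opt/_avocado") && p.components().any(|c| c.as_os_str() == "runtimes")
}

fn main() {
    assert!(binary_path_is_allowed(
        "/opt/_avocado/x86_64/runtimes/my-runtime/firmware.bin"
    ));
    assert!(!binary_path_is_allowed("/tmp/firmware.bin"));
}
```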