diff --git a/CLAUDE.md b/CLAUDE.md
index 1eb5222..af7bed8 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -14,6 +14,7 @@ These rules are **mandatory** for every Claude instance working on this repo.
 6. **Atomic commits** — 1 commit = 1 concept. Better 3 small focused commits than 1 giant commit. Each commit should be self-contained and pass CI on its own.
 7. **Commit message format** — `type: clear title` where type is `feat`, `fix`, `chore`, `refactor`, or `docs`. Add bullet points in the body for details when needed.
 8. **Documentation** — After every significant change, update `CLAUDE.md` first (implementation state, module structure, test count), then `README.md` if user-facing features changed.
+9. **GitHub metadata** — When changes affect the project scope, features, or tech stack, update the GitHub repository description and topics to stay in sync. See the "GitHub Metadata" section below for current values and the `gh` commands to update them.
 
 ## Project Overview
 
@@ -29,7 +30,7 @@ These rules are **mandatory** for every Claude instance working on this repo.
 cargo build                   # Debug build
 cargo build --release         # Release build
 cargo run -- <args>           # Run (e.g., cargo run -- install firefox)
-cargo test                    # Run all tests (195 tests: 103 bin + 92 lib)
+cargo test                    # Run all tests (209 tests: 92 bin + 117 lib)
 cargo test <name>             # Run a single test by name
 cargo test -- --nocapture     # Run tests with stdout visible
 cargo clippy                  # Lint
@@ -47,17 +48,31 @@ There are no integration tests — all tests are unit tests inside `#[cfg(test)]
 1. **Detect system** — `SystemProfile::detect()` auto-detects arch, dynamic linker, libc, lib dirs, filesystem layout
 2. **Resolve source** — if `--from` omitted, queries ALL plugins in parallel (including AUR `-bin` variants); user picks from full list via interactive `dialoguer::Select` (`pick_source()` in `cli/install.rs`)
-3. **Resolve deps** — recursive dependency resolution with cycle detection (`cli/deps.rs`)
+3. **Resolve deps** — recursive dependency resolution with cycle detection + **cross-source fallback** (queries all plugins if dep not found in primary source, lets user choose) (`cli/deps.rs`)
 4. **Check conflicts** — 5 types: file ownership, binary name, library soname, declared conflicts, version constraints (`core/conflicts.rs`)
-5. **Download** in parallel (4 threads via `thread::scope`) with progress bars and retry
-6. **Verify** — SHA256 checksum + GPG signature (best-effort)
+5. **Download** in parallel (4 threads via `thread::scope`) with progress bars and retry — **step [1/4] indicator**
+6. **Verify** — SHA256 checksum + GPG signature (best-effort) — **step [2/4]**
 7. **Extract** and analyze ELF binaries with `goblin`
 8. **Arch check** — verify ELF `e_machine` matches host architecture before patching (warning on mismatch)
-9. **Patch** ELF binaries with `elb` (set interpreter, RUNPATH) using detected page size
+9. **Patch** ELF binaries with `elb` (set interpreter, RUNPATH) — **parallel patching** via `thread::scope` for packages with multiple ELFs — **step [3/4]**
 10. **Remap** FHS paths to ZL-managed directories (`core/path/`)
 11. **Install** atomically — `Transaction` tracks all changes; rollback on any failure
 12. **Post-install checks** — warn about missing shared libraries not found in ZL DB or system lib dirs
 13. **Track** in redb database + dependency graph
+14. **Record history** — install event stored in HISTORY table for `zl history`/`zl rollback`
+15. **Summary** — colored output with step [4/4] completion indicator
+
+### New commands (v0.2)
+
+- **`zl run <package>`** — download, extract, patch, execute a package without installing. Temp dir auto-cleaned on exit.
+- **`zl history list`** — show install/remove/upgrade history with timestamps
+- **`zl history rollback [N]`** — undo the last N operations (installs can be rolled back; removes show reinstall hint)
+- **`zl why <package>`** — trace dependency chain explaining why a package is installed
+- **`zl doctor`** — full system diagnostics: DB integrity, broken symlinks, missing libs, orphans, disk usage, system profile
+- **`zl size [package]`** — disk usage per package with file breakdown, dep costs. `--sort` for largest first.
+- **`zl diff <package>`** — show version/dep/size changes before updating
+- **`zl audit [package]`** — check installed packages for known CVEs via OSV.dev API
+- **`zl cache dedup`** — deduplicate identical shared libraries across packages using hardlinks
 
 ### Search flow (`zl search`)
 
@@ -66,6 +81,7 @@ There are no integration tests — all tests are unit tests inside `#[cfg(test)]
 3. **AUR binary discovery** — if the query doesn't end with `-bin`, automatically also fetches `-bin`, `-appimage`, `-prebuilt` variants and tags them `[binary]`
 4. **Sorted output** — results sorted by relevance (default), name, or version via `--sort`
 5. **Filtering** — `--exact` shows only exact name matches; `--from` limits to a single source; `--limit` controls results per source
+6. **Colored output** — exact matches highlighted green+bold, versions in yellow, source headers in cyan
 
 ### Removal flow (`zl remove --cascade`)
 
@@ -73,6 +89,7 @@ There are no integration tests — all tests are unit tests inside `#[cfg(test)]
 2. **Orphan detection** — only removes packages that are: (a) tracked in ZL's DB, (b) marked as implicit (`explicit: false`), (c) not depended on by any remaining package (checked via both dependency table and shared lib needs)
 3. **Shared dep protection** — dependencies used by other packages are listed as "Keeping (needed by X)" and never removed
 4. **Dry-run support** — `--dry-run` with `--cascade` shows the full removal plan without touching anything
+5. **History recording** — removal events stored for `zl history`/`zl rollback`
 
 ### Startup flow (`main.rs`)
 
@@ -91,9 +108,11 @@ There are no integration tests — all tests are unit tests inside `#[cfg(test)]
 - **`SourcePlugin` trait** (`plugin/mod.rs`): Interface every package source implements — `name()`, `search()`, `resolve()`, `download()`, `extract()`, `sync()`. Plugins are compile-time modules with trait objects, not dynamic libraries.
 - **`Transaction`** (`core/transaction.rs`): Atomic install — tracks files/dirs/symlinks/DB entries created during install, rolls back everything on failure.
 - **`DepGraph`** (`core/graph/model.rs`): petgraph-based dependency graph with topological sort, cycle detection, orphan detection.
-- **`ZlDatabase`** (`core/db/ops.rs`): redb-based persistent store. Tables: PACKAGES, FILE_OWNERS, LIB_INDEX, DEPENDENCIES, PINNED, PLUGIN_METADATA.
+- **`ZlDatabase`** (`core/db/ops.rs`): redb-based persistent store. Tables: PACKAGES, FILE_OWNERS, LIB_INDEX, DEPENDENCIES, PINNED, PLUGIN_METADATA, HISTORY.
 - **`PathMapping`** (`core/path/mod.rs`): Dynamic FHS-to-ZL path translation using SystemProfile.
 - **`PackageCandidate` / `ExtractedPackage`** (`plugin/mod.rs`): Common types shared across all plugins for package metadata and extracted content.
+- **`PluginInfo`** (`plugin/mod.rs`): Plugin metadata for the remote plugin registry.
+- **`HistoryEntry`** (`core/db/ops.rs`): Tracks install/remove/upgrade/rollback events with timestamps.
 
 ### Plugin system
 
@@ -104,10 +123,14 @@ All plugins implement `SourcePlugin` and are registered in `main.rs`. To add a n
 Current plugins: `pacman` (Arch repos), `aur` (AUR RPC v5 + makepkg, with `-bin` variant discovery), `apt` (Packages.gz + .deb), `github` (Releases API).
 
+Remote plugin registry: `fetch_remote_registry()` fetches `PluginInfo` from a URL for future plugin marketplace.
+
 ### Command dispatch pattern
 
 Each CLI command lives in `src/cli/<command>.rs` with a `pub fn handle(...)` function. Most `handle` functions receive the parsed args struct plus an `AppContext` reference (defined in `cli/mod.rs`), which bundles shared state: `ZlPaths`, `ZlDatabase`, `PluginRegistry`, `SystemProfile`, and flags (`auto_yes`, `dry_run`, `skip_verify`). Commands are dispatched via a `match` in `main.rs`.
 
+**Full command list**: `install`, `remove`, `search`, `update`, `upgrade`, `list`, `info`, `cache` (list/clean/dedup), `completions`, `pin`, `unpin`, `export`, `import`, `switch`, `self-update`, `env` (shell/list/delete), `run`, `history` (list/rollback), `why`, `doctor`, `size`, `diff`, `audit`.
+
 ### Error handling
 
 - `ZlError` enum in `error.rs` (thiserror, boxed where needed to keep size small) for domain errors with `.suggestion()` hints
@@ -122,6 +145,9 @@ Each CLI command lives in `src/cli/<command>.rs` with a `pub fn handle(...)` fun
 - **Dynamic detection over hardcoded paths**: interpreter from /bin/sh's PT_INTERP, lib dirs from ldconfig + ld.so.conf
 - **RUNPATH over RPATH**: modern standard, respects LD_LIBRARY_PATH
 - **Atomic transactions**: every install is wrapped; failure = full rollback
+- **Colored output**: uses `console` crate throughout (search, list, doctor, diff, audit, size, install steps)
+- **Parallel ELF patching**: packages with >1 ELF are patched concurrently via `thread::scope` with 4-way chunking
+- **Cross-source dep resolution**: when a dependency is not found in the primary source, all other sources are queried and the user chooses
 
 ### ZL directory layout (runtime)
 
@@ -134,7 +160,7 @@ Each CLI command lives in `src/cli/<command>.rs` with a `pub fn handle(...)` fun
   packages/     # Per-package directories (name-version/)
   cache/        # Download cache
   envs/         # Ephemeral/named environment roots
-  zl.redb       # Package database
+  zl.redb       # Package database (includes HISTORY table)
 ```
 
 ### Key crates
 
@@ -150,12 +176,13 @@ Each CLI command lives in `src/cli/<command>.rs` with a `pub fn handle(...)` fun
 | `tar` + `zstd` + `flate2` + `xz2` + `bzip2` + `ar` + `zip` | Archive formats |
 | `sha2` | SHA256 checksums |
 | `indicatif` + `dialoguer` | Progress bars and interactive prompts |
+| `console` | Colored terminal output |
 
 ### Code quality
 
 - **Zero clippy warnings**: `cargo clippy -- -D warnings` passes clean
 - **Zero `cargo fmt` diff**: all code is formatted
-- **195 tests**: comprehensive coverage of core modules (conflicts, ELF, path mapping, DB, graph, transaction, verify, plugins, search scoring, system detection)
+- **209 tests**: comprehensive coverage of core modules (conflicts, ELF, path mapping, DB, graph, transaction, verify, plugins, search scoring, system detection, cache dedup, run, doctor, size, history, why)
 
 ### Naming conventions
 
@@ -163,7 +190,35 @@ Each CLI command lives in `src/cli/<command>.rs` with a `pub fn handle(...)` fun
 - `Conflict` variants avoid repeating the enum name: `Declared` (not `DeclaredConflict`), `Version` (not `VersionConflict`)
 - `Arch::parse()` and `SystemLayout::parse()` instead of `from_str()` (avoids confusion with `std::str::FromStr` trait)
 - `SortOrder` enum (in `cli/mod.rs`) uses `ValueEnum` derive for clap: `Relevance`, `Name`, `Version`
+- `HistoryAction` enum: `Install`, `Remove`, `Upgrade`, `Rollback`
 - Structs with simple `new()` constructors also implement `Default` (via `#[derive(Default)]` or manual impl): `PluginRegistry`, `PacmanPlugin`, `AptPlugin`, `AurPlugin`, `GithubPlugin`, `DepGraph`, `Transaction`
 - The `core/build/` module uses `#![allow(dead_code)]` since it is scaffolding for future source-build support
 - `DepGraph`, `DependencyEdge`, `DepType` have `#[allow(dead_code)]` — they are part of the graph model used for future features
 - `ArchMismatch` error variant has `#[allow(dead_code)]` — available for strict arch enforcement in future
+- `PluginInfo`, `fetch_remote_registry`, `list_info` have `#[allow(dead_code)]` — scaffolding for remote plugin marketplace
+
+## GitHub Metadata
+
+Keep the repository description and topics in sync with the project state. Update them whenever features, scope, or tech stack change significantly.
+
+### Current description
+
+```
+Universal Linux package manager with native binary translation. Install packages from any source (pacman, apt, AUR, GitHub releases) on any Linux system — no containers, no VMs, zero runtime overhead. Written in Rust.
+```
+
+### Current topics
+
+```
+linux, package-manager, rust, elf, binary-translation, cli, apt, pacman, aur, cross-distribution, dependency-management
+```
+
+### Commands to update
+
+```bash
+# Set description
+gh repo edit --description "Universal Linux package manager with native binary translation. Install packages from any source (pacman, apt, AUR, GitHub releases) on any Linux system — no containers, no VMs, zero runtime overhead. Written in Rust."
+
+# Set topics (replaces all topics)
+gh repo edit --add-topic linux --add-topic package-manager --add-topic rust --add-topic elf --add-topic binary-translation --add-topic cli --add-topic apt --add-topic pacman --add-topic aur --add-topic cross-distribution --add-topic dependency-management
+```
diff --git a/Cargo.lock b/Cargo.lock
index 1d86021..25f3869 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2646,6 +2646,7 @@ dependencies = [
  "bzip2",
  "clap",
  "clap_complete",
+ "console",
  "dialoguer",
  "dirs",
  "elb",
diff --git a/Cargo.toml b/Cargo.toml
index 1c0ab29..eef97c6 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -50,6 +50,7 @@ tracing-subscriber = { version = "0.3", features = ["env-filter"] }
 # Progress bars and user interaction
 indicatif = "0.17"
 dialoguer = "0.11"
+console = "0.15"
 
 # File hashing
 sha2 = "0.10"
diff --git a/README.md b/README.md
index 59b5fcc..948178e 100644
--- a/README.md
+++ b/README.md
@@ -305,11 +305,41 @@ zl pin <package>      # Pin a package (prevent updates)
 zl unpin <package>    # Unpin a package (allow updates)
 ```
 
+### Run Without Installing
+
+```bash
+zl run <package>                  # Download, patch, execute, then cleanup
+zl run <package> --from github    # Run from a specific source
+zl run <package> -- --help        # Pass args to the binary
+```
+
+### Diagnostics & Analysis
+
+```bash
+zl doctor             # Full system health check (DB, symlinks, libs, orphans)
+zl why <package>      # Show why a package is installed (dependency chain)
+zl size               # Disk usage per package
+zl size <package>     # Detailed breakdown with file sizes and dep costs
+zl size --sort        # Sort by size (largest first)
+zl diff <package>     # Show what would change if updated
+zl audit              # Check all packages for known CVEs (via OSV.dev)
+zl audit <package>    # Check a specific package
+```
+
+### History & Rollback
+
+```bash
+zl history list          # Show install/remove/upgrade history
+zl history rollback      # Undo the last operation
+zl history rollback 3    # Undo the last 3 operations
+```
+
 ### Cache Management
 
 ```bash
 zl cache list     # Show cached downloads and sizes
 zl cache clean    # Remove all cached files
+zl cache dedup    # Deduplicate shared libraries (hardlinks)
 ```
 
 ### Lockfile Export/Import
 
@@ -530,10 +560,15 @@ ZL can build packages from source when precompiled binaries aren't available. It
 - **5-way conflict detection** — prevents broken installs before they happen
 - **Multi-version packages** — install multiple versions side-by-side, switch between them
 - **Ephemeral environments** — isolated shells where packages disappear on exit
-- **ELF patching with `elb`** — pure-Rust patchelf alternative, sets interpreter and RUNPATH
+- **ELF patching with `elb`** — pure-Rust patchelf alternative, sets interpreter and RUNPATH, parallel patching for multi-ELF packages
 - **RUNPATH over RPATH** — modern standard, respects `LD_LIBRARY_PATH`
 - **`redb` database** — pure-Rust embedded key-value store (ACID, no SQLite/C dependency)
 - **`petgraph` dependency graph** — topological sort, cycle detection, orphan detection
+- **Cross-source dependency resolution** — when a dep is not found in the primary source, queries all other sources and lets the user choose
+- **Colored output** — uses `console` crate for colored output throughout (search, list, doctor, audit, diff, size)
+- **CVE auditing** — checks installed packages against the OSV.dev vulnerability database
+- **History & rollback** — all install/remove events are recorded; undo recent operations
+- **Cache deduplication** — identical shared libraries are hardlinked to save disk space
 
 ## Configuration
 
@@ -562,7 +597,7 @@ repos = ["core", "extra"]
 ```bash
 cargo build        # Build
-cargo test         # Run all tests (195 tests: 103 bin + 92 lib)
+cargo test         # Run all tests (209 tests: 92 bin + 117 lib)
 cargo test <name>  # Run a single test
 cargo clippy       # Lint
 cargo fmt          # Format
diff --git a/src/cli/audit.rs b/src/cli/audit.rs
new file mode 100644
index 0000000..bf9a9e4
--- /dev/null
+++ b/src/cli/audit.rs
@@ -0,0 +1,175 @@
+//! `zl audit` — check installed packages for known vulnerabilities (CVE).
+//!
+//! Uses the OSV.dev API (https://api.osv.dev/v1/query) which is free,
+//! open-source, and covers multiple ecosystems.
+
+use console::style;
+
+use crate::core::db::ops::ZlDatabase;
+use crate::error::{ZlError, ZlResult};
+
+use super::AuditArgs;
+
+const OSV_API: &str = "https://api.osv.dev/v1/query";
+
+#[derive(serde::Serialize)]
+struct OsvQuery {
+    package: OsvPackage,
+    version: String,
+}
+
+#[derive(serde::Serialize)]
+struct OsvPackage {
+    name: String,
+    ecosystem: String,
+}
+
+#[derive(serde::Deserialize)]
+struct OsvResponse {
+    #[serde(default)]
+    vulns: Vec<OsvVuln>,
+}
+
+#[derive(serde::Deserialize)]
+struct OsvVuln {
+    id: String,
+    summary: Option<String>,
+    #[serde(default)]
+    severity: Vec<OsvSeverity>,
+}
+
+#[derive(serde::Deserialize)]
+struct OsvSeverity {
+    #[serde(rename = "type")]
+    severity_type: String,
+    score: String,
+}
+
+pub fn handle(args: AuditArgs, db: &ZlDatabase) -> ZlResult<()> {
+    let packages = if let Some(ref name) = args.package {
+        let pkg = db
+            .get_package_by_name(name)?
+            .ok_or_else(|| ZlError::PackageNotFound { name: name.clone() })?;
+        vec![pkg]
+    } else {
+        db.list_packages()?
+    };
+
+    if packages.is_empty() {
+        println!("No packages to audit.");
+        return Ok(());
+    }
+
+    println!(
+        "{} Auditing {} package(s) against OSV.dev...\n",
+        style("🔍").bold(),
+        packages.len()
+    );
+
+    let client = reqwest::blocking::Client::builder()
+        .user_agent("zero-layer/0.1")
+        .timeout(std::time::Duration::from_secs(15))
+        .build()
+        .unwrap_or_default();
+
+    let mut total_vulns = 0;
+    let mut vulnerable_pkgs = 0;
+
+    for pkg in &packages {
+        let ecosystem = match pkg.id.source.as_str() {
+            "pacman" | "aur" => "Arch Linux",
+            "apt" => "Debian",
+            "github" => "GitHub Actions", // best-effort mapping
+            _ => "Linux",
+        };
+
+        let query = OsvQuery {
+            package: OsvPackage {
+                name: pkg.id.name.clone(),
+                ecosystem: ecosystem.to_string(),
+            },
+            version: pkg.id.version.clone(),
+        };
+
+        match client.post(OSV_API).json(&query).send() {
+            Ok(resp) if resp.status().is_success() => {
+                if let Ok(osv_resp) = resp.json::<OsvResponse>()
+                    && !osv_resp.vulns.is_empty()
+                {
+                    vulnerable_pkgs += 1;
+                    println!(
+                        "  {} {}-{} — {} vulnerability(ies)",
+                        style("!").red().bold(),
+                        pkg.id.name,
+                        pkg.id.version,
+                        osv_resp.vulns.len()
+                    );
+
+                    for vuln in &osv_resp.vulns {
+                        total_vulns += 1;
+                        let severity = vuln
+                            .severity
+                            .first()
+                            .map(|s| format!(" [{}:{}]", s.severity_type, s.score))
+                            .unwrap_or_default();
+                        let summary = vuln.summary.as_deref().unwrap_or("No description");
+                        println!(
+                            "    {}{} — {}",
+                            style(&vuln.id).yellow(),
+                            severity,
+                            summary
+                        );
+                    }
+                    println!();
+                }
+            }
+            Ok(_) => {
+                tracing::debug!(
+                    "OSV API returned non-success for {}-{}",
+                    pkg.id.name,
+                    pkg.id.version
+                );
+            }
+            Err(e) => {
+                tracing::debug!("OSV query failed for {}: {}", pkg.id.name, e);
+            }
+        }
+    }
+
+    // Summary
+    if total_vulns == 0 {
+        println!(
+            "{} No known vulnerabilities found.",
+            style("✓").green().bold()
+        );
+    } else {
+        println!(
+            "{} Found {} vulnerability(ies) in {} package(s).",
+            style("!").red().bold(),
+            total_vulns,
+            vulnerable_pkgs
+        );
+        println!("  hint: update affected packages with `zl update <package>`");
+    }
+
+    Ok(())
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_osv_query_serializes() {
+        let q = OsvQuery {
+            package: OsvPackage {
+                name: "openssl".into(),
+                ecosystem: "Arch Linux".into(),
+            },
+            version: "3.1.0".into(),
+        };
+        let json = serde_json::to_string(&q).unwrap();
+        assert!(json.contains("openssl"));
+        assert!(json.contains("Arch Linux"));
+    }
+}
diff --git a/src/cli/cache.rs b/src/cli/cache.rs
index a3e9881..3f93878 100644
--- a/src/cli/cache.rs
+++ b/src/cli/cache.rs
@@ -7,6 +7,7 @@ pub fn handle(cmd: CacheCommand, paths: &ZlPaths) -> ZlResult<()> {
     match cmd {
         CacheCommand::List => handle_list(paths),
         CacheCommand::Clean => handle_clean(paths),
+        CacheCommand::Dedup => handle_dedup(paths),
     }
 }
 
@@ -97,3 +98,160 @@ fn handle_clean(paths: &ZlPaths) -> ZlResult<()> {
 
     Ok(())
 }
+
+/// Deduplicate identical shared libraries across packages using hardlinks.
+/// Libraries with the same SHA256 hash are hardlinked to save disk space.
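The doc comment above describes the whole dedup pass: hash each shared library, remember the first path seen per hash, and hardlink later duplicates to it. As a condensed, std-only sketch of that idea (hypothetical names; `DefaultHasher` stands in for the real SHA-256 from the `sha2` crate, and a flat directory stands in for the package tree):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::{Path, PathBuf};

/// Hardlink files with identical content under `dir` to the first copy seen.
/// Returns how many duplicates were replaced by hardlinks.
fn dedup_dir(dir: &Path) -> std::io::Result<u64> {
    // content hash -> canonical path (first file seen with that content)
    let mut seen: HashMap<u64, PathBuf> = HashMap::new();
    let mut linked = 0u64;

    let mut entries: Vec<PathBuf> = fs::read_dir(dir)?
        .filter_map(|e| e.ok())
        .map(|e| e.path())
        .filter(|p| p.is_file())
        .collect();
    entries.sort(); // deterministic choice of canonical copy

    for path in entries {
        let data = fs::read(&path)?;
        let mut h = DefaultHasher::new();
        data.hash(&mut h);
        let key = h.finish();

        match seen.get(&key) {
            Some(canonical) => {
                // Same content: drop the duplicate, hardlink it to the canonical copy.
                fs::remove_file(&path)?;
                fs::hard_link(canonical, &path)?;
                linked += 1;
            }
            None => {
                seen.insert(key, path);
            }
        }
    }
    Ok(linked)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("zl-dedup-demo");
    let _ = fs::remove_dir_all(&dir);
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("a.so"), b"same bytes")?;
    fs::write(dir.join("b.so"), b"same bytes")?;
    fs::write(dir.join("c.so"), b"different")?;

    println!("{}", dedup_dir(&dir)?); // prints 1: b.so is now a hardlink to a.so
    Ok(())
}
```

The real implementation that follows additionally filters on `.so` names, skips empty files, compares inodes to avoid re-linking, and restores the original bytes if `hard_link` fails.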
+fn handle_dedup(paths: &ZlPaths) -> ZlResult<()> {
+    use sha2::{Digest, Sha256};
+    use std::collections::HashMap;
+
+    if !paths.packages.is_dir() {
+        println!("No packages installed.");
+        return Ok(());
+    }
+
+    println!("Scanning packages for duplicate libraries...");
+
+    // Map: SHA256 hash -> (canonical_path, size)
+    let mut seen: HashMap<String, (std::path::PathBuf, u64)> = HashMap::new();
+    let mut dedup_count = 0u64;
+    let mut saved_bytes = 0u64;
+
+    for entry in walkdir::WalkDir::new(&paths.packages)
+        .into_iter()
+        .filter_map(|e| e.ok())
+    {
+        if !entry.file_type().is_file() {
+            continue;
+        }
+
+        let path = entry.path();
+        let fname = path
+            .file_name()
+            .and_then(|n| n.to_str())
+            .unwrap_or_default();
+
+        // Only process shared library files
+        if !fname.contains(".so") {
+            continue;
+        }
+
+        let metadata = match std::fs::metadata(path) {
+            Ok(m) => m,
+            Err(_) => continue,
+        };
+
+        // Skip symlinks and already-hardlinked files (nlink > 1 is fine, but check hash)
+        let size = metadata.len();
+        if size == 0 {
+            continue;
+        }
+
+        let data = match std::fs::read(path) {
+            Ok(d) => d,
+            Err(_) => continue,
+        };
+        let hash = format!("{:x}", Sha256::digest(&data));
+
+        if let Some((canonical, _)) = seen.get(&hash) {
+            // Same content — replace with hardlink
+            if canonical == path {
+                continue;
+            }
+
+            // Check if already hardlinked (same inode)
+            #[cfg(unix)]
+            {
+                use std::os::unix::fs::MetadataExt;
+                if let Ok(cm) = std::fs::metadata(canonical)
+                    && cm.ino() == metadata.ino()
+                    && cm.dev() == metadata.dev()
+                {
+                    continue; // already hardlinked
+                }
+            }
+
+            if let Err(e) = std::fs::remove_file(path) {
+                tracing::warn!("Failed to remove {} for dedup: {}", path.display(), e);
+                continue;
+            }
+            if let Err(e) = std::fs::hard_link(canonical, path) {
+                tracing::warn!(
+                    "Failed to hardlink {} -> {}: {}",
+                    path.display(),
+                    canonical.display(),
+                    e
+                );
+                // Restore by writing the data back
+                let _ = std::fs::write(path, &data);
+                continue;
+            }
+
+            dedup_count += 1;
+            saved_bytes += size;
+            tracing::debug!(
+                "Deduped: {} -> {} ({:.1} KB)",
+                path.display(),
+                canonical.display(),
+                size as f64 / 1000.0
+            );
+        } else {
+            seen.insert(hash, (path.to_path_buf(), size));
+        }
+    }
+
+    if dedup_count == 0 {
+        println!("No duplicate libraries found.");
+    } else {
+        println!(
+            "Deduplicated {} file(s), saved {:.1} MB.",
+            dedup_count,
+            saved_bytes as f64 / 1_000_000.0
+        );
+    }
+
+    Ok(())
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_dedup_hardlinks_identical_files() {
+        let tmp = tempfile::tempdir().unwrap();
+        let packages_dir = tmp.path().join("packages");
+        let pkg1 = packages_dir.join("a-1.0");
+        let pkg2 = packages_dir.join("b-1.0");
+        std::fs::create_dir_all(&pkg1).unwrap();
+        std::fs::create_dir_all(&pkg2).unwrap();
+
+        // Create identical .so files
+        let content = b"fake shared library content for testing";
+        std::fs::write(pkg1.join("libfoo.so.1"), content).unwrap();
+        std::fs::write(pkg2.join("libfoo.so.1"), content).unwrap();
+
+        let paths = ZlPaths {
+            root: tmp.path().to_path_buf(),
+            bin: tmp.path().join("bin"),
+            lib: tmp.path().join("lib"),
+            share: tmp.path().join("share"),
+            etc: tmp.path().join("etc"),
+            packages: packages_dir,
+            cache: tmp.path().join("cache"),
+            db_file: tmp.path().join("zl.redb"),
+            envs: tmp.path().join("envs"),
+        };
+
+        handle_dedup(&paths).unwrap();
+
+        // After dedup, both files should have the same inode
+        #[cfg(unix)]
+        {
+            use std::os::unix::fs::MetadataExt;
+            let m1 = std::fs::metadata(pkg1.join("libfoo.so.1")).unwrap();
+            let m2 = std::fs::metadata(pkg2.join("libfoo.so.1")).unwrap();
+            assert_eq!(m1.ino(), m2.ino());
+        }
+    }
+}
diff --git a/src/cli/deps.rs b/src/cli/deps.rs
index 74dd2c9..a799dac 100644
--- a/src/cli/deps.rs
+++ b/src/cli/deps.rs
@@ -1,6 +1,8 @@
 use std::collections::HashSet;
 use std::path::Path;
 
+use console::style;
+
 use crate::core::db::ops::ZlDatabase;
 use crate::error::{ZlError, ZlResult};
 use crate::plugin::{PackageCandidate, PluginRegistry, SourcePlugin};
@@ -51,6 +53,9 @@ fn strip_version_constraint(dep: &str) -> &str {
 
 /// Resolve a package and all its transitive dependencies.
 /// Returns an InstallPlan with packages in dependency-first order.
+///
+/// When a dependency is not found in the primary source, queries all other
+/// registered plugins and lets the user choose where to install it from.
 pub fn resolve_with_deps(
     name: &str,
     version: Option<&str>,
@@ -84,6 +89,7 @@ pub fn resolve_with_deps(
         true,
         plugin,
         db,
+        registry,
         profile,
         &mut resolved,
         &mut resolving_stack,
@@ -105,6 +111,7 @@ fn resolve_recursive(
     explicit: bool,
     plugin: &dyn SourcePlugin,
     db: &ZlDatabase,
+    registry: &PluginRegistry,
     profile: &SystemProfile,
     resolved: &mut HashSet<String>,
     resolving_stack: &mut Vec<String>,
@@ -169,14 +176,15 @@ fn resolve_recursive(
             continue;
         }
 
-        // Try to resolve the dependency via the plugin
+        // Try to resolve the dependency via the primary plugin
         match plugin.resolve(dep_name, None)? {
             Some(dep_candidate) => {
                 resolve_recursive(
                     &dep_candidate,
-                    false, // dependencies are implicit
+                    false,
                     plugin,
                     db,
+                    registry,
                     profile,
                     resolved,
                     resolving_stack,
@@ -186,8 +194,25 @@
                 )?;
             }
             None => {
-                // Dependency not found in this source — track but don't fail
-                if !unresolvable.contains(&dep_str.to_string()) {
+                // Cross-source resolution: try other plugins
+                if let Some(cross_candidate) =
+                    try_cross_source_resolve(dep_name, plugin.name(), registry)?
+                {
+                    let cross_plugin = registry.get(&cross_candidate.source).unwrap_or(plugin);
+                    resolve_recursive(
+                        &cross_candidate,
+                        false,
+                        cross_plugin,
+                        db,
+                        registry,
+                        profile,
+                        resolved,
+                        resolving_stack,
+                        plan,
+                        unresolvable,
+                        already_installed,
+                    )?;
+                } else if !unresolvable.contains(&dep_str.to_string()) {
                     unresolvable.push(dep_str.clone());
                 }
             }
@@ -205,6 +230,75 @@
 
     Ok(())
 }
 
+/// Try to resolve a dependency from other sources when the primary source fails.
+/// Presents the user with options if found in multiple sources.
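The cross-source fallback wired in above reduces to one query loop: ask every registered source except the primary, then auto-pick a single hit or prompt among several. A minimal std-only sketch of the query step (the toy `Source` trait, `Static`, and `cross_source_candidates` are hypothetical stand-ins for `SourcePlugin` and the real resolver; the interactive `dialoguer` prompt is omitted):

```rust
/// Toy stand-in for ZL's `SourcePlugin` trait (assumed shape, not the real API).
trait Source {
    fn name(&self) -> &str;
    /// Version of `dep` if this source can provide it.
    fn resolve(&self, dep: &str) -> Option<String>;
}

/// A fixed in-memory source for the demo.
struct Static {
    name: &'static str,
    packages: &'static [(&'static str, &'static str)],
}

impl Source for Static {
    fn name(&self) -> &str {
        self.name
    }
    fn resolve(&self, dep: &str) -> Option<String> {
        self.packages
            .iter()
            .find(|(n, _)| *n == dep)
            .map(|(_, v)| v.to_string())
    }
}

/// Ask every source except the primary; collect (source, version) hits.
fn cross_source_candidates(
    dep: &str,
    primary: &str,
    sources: &[&dyn Source],
) -> Vec<(String, String)> {
    sources
        .iter()
        .filter(|s| s.name() != primary)
        .filter_map(|s| s.resolve(dep).map(|v| (s.name().to_string(), v)))
        .collect()
}

fn main() {
    let pacman = Static { name: "pacman", packages: &[("zlib", "1.3")] };
    let apt = Static { name: "apt", packages: &[("zlib", "1.2.13"), ("libfoo", "0.1")] };
    let sources: [&dyn Source; 2] = [&pacman, &apt];

    // "libfoo" is missing from the primary source (pacman); only apt has it,
    // so the caller can install from apt without prompting.
    println!("{:?}", cross_source_candidates("libfoo", "pacman", &sources));
}
```

With exactly one candidate the caller proceeds without prompting, which mirrors the single-match branch of `try_cross_source_resolve` that follows; with several it falls back to a user prompt.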
+fn try_cross_source_resolve(
+    dep_name: &str,
+    primary_source: &str,
+    registry: &PluginRegistry,
+) -> ZlResult<Option<PackageCandidate>> {
+    let mut found: Vec<(String, PackageCandidate)> = Vec::new();
+
+    for plugin in registry.all() {
+        if plugin.name() == primary_source {
+            continue;
+        }
+        match plugin.resolve(dep_name, None) {
+            Ok(Some(candidate)) => {
+                found.push((plugin.name().to_string(), candidate));
+            }
+            _ => continue,
+        }
+    }
+
+    match found.len() {
+        0 => Ok(None),
+        1 => {
+            let (source, candidate) = found.into_iter().next().unwrap();
+            eprintln!(
+                "  {} Dependency '{}' not in primary source, found in {}",
+                style("~").yellow(),
+                dep_name,
+                style(&source).cyan()
+            );
+            Ok(Some(candidate))
+        }
+        _ => {
+            // Multiple sources — let user choose
+            eprintln!(
+                "\n  {} Dependency '{}' not in primary source, found in {} other source(s):",
+                style("?").yellow().bold(),
+                style(dep_name).bold(),
+                found.len()
+            );
+
+            let items: Vec<String> = found
+                .iter()
+                .map(|(source, c)| format!("{} {} [{}]", c.name, c.version, source))
+                .collect();
+
+            // Add "skip" option
+            let mut all_items: Vec<&str> = items.iter().map(|s| s.as_str()).collect();
+            all_items.push("Skip (don't install this dependency)");
+
+            let selection =
+                dialoguer::Select::with_theme(&dialoguer::theme::ColorfulTheme::default())
+                    .with_prompt(format!("Install '{}' from", dep_name))
+                    .items(&all_items)
+                    .default(0)
+                    .interact()
+                    .unwrap_or(all_items.len() - 1); // default to skip on error
+
+            if selection >= found.len() {
+                Ok(None) // user chose skip
+            } else {
+                let (_, candidate) = found.into_iter().nth(selection).unwrap();
+                Ok(Some(candidate))
+            }
+        }
+    }
+}
+
 /// Display the install plan to the user
 pub fn display_plan(plan: &InstallPlan) {
     let dep_count = plan.dep_count();
diff --git a/src/cli/diff.rs b/src/cli/diff.rs
new file mode 100644
index 0000000..792216d
--- /dev/null
+++ b/src/cli/diff.rs
@@ -0,0 +1,130 @@
+//! `zl diff <package>` — show what would change if a package is updated.
+ +use console::style; + +use crate::error::{ZlError, ZlResult}; + +use super::{AppContext, DiffArgs}; + +pub fn handle(args: DiffArgs, ctx: &AppContext) -> ZlResult<()> { + let installed = + ctx.db + .get_package_by_name(&args.package)? + .ok_or_else(|| ZlError::PackageNotFound { + name: args.package.clone(), + })?; + + let from = args.from.as_deref().unwrap_or(&installed.id.source); + + let plugin = ctx.registry.get(from).ok_or_else(|| ZlError::Plugin { + plugin: from.to_string(), + message: "Source plugin not found".into(), + })?; + + // Sync and resolve latest + plugin.sync()?; + let latest = plugin + .resolve(&args.package, None)? + .ok_or_else(|| ZlError::PackageNotFound { + name: args.package.clone(), + })?; + + println!( + "{} {} — installed vs latest\n", + style(&args.package).bold(), + style(format!("[{}]", from)).dim() + ); + + // Version diff + if installed.id.version == latest.version { + println!( + " Version: {} ({})", + installed.id.version, + style("up to date").green() + ); + return Ok(()); + } + + println!( + " Version: {} -> {}", + style(&installed.id.version).red(), + style(&latest.version).green() + ); + + // Dependency diff + let installed_deps: std::collections::HashSet = ctx + .db + .get_dependencies(&format!("{}-{}", installed.id.name, installed.id.version)) + .unwrap_or_default() + .into_iter() + .collect(); + + let new_deps: std::collections::HashSet = latest.dependencies.iter().cloned().collect(); + + let added: Vec<&String> = new_deps.difference(&installed_deps).collect(); + let removed: Vec<&String> = installed_deps.difference(&new_deps).collect(); + + if !added.is_empty() { + println!("\n New dependencies:"); + for dep in &added { + println!(" {} {}", style("+").green(), dep); + } + } + + if !removed.is_empty() { + println!("\n Removed dependencies:"); + for dep in &removed { + println!(" {} {}", style("-").red(), dep); + } + } + + if added.is_empty() && removed.is_empty() { + println!(" Dependencies: unchanged"); + } + + // Size 
comparison + let current_size: u64 = installed + .installed_files + .iter() + .filter_map(|f| std::fs::metadata(f).ok()) + .map(|m| m.len()) + .sum(); + + println!( + "\n Current size: {:.1} MB ({} files)", + current_size as f64 / 1_000_000.0, + installed.installed_files.len() + ); + + if latest.installed_size > 0 { + let delta = latest.installed_size as i64 - current_size as i64; + let delta_str = if delta > 0 { + format!("+{:.1} MB", delta as f64 / 1_000_000.0) + } else if delta < 0 { + format!("{:.1} MB", delta as f64 / 1_000_000.0) + } else { + "no change".to_string() + }; + println!( + " New size: {:.1} MB ({})", + latest.installed_size as f64 / 1_000_000.0, + delta_str + ); + } + + println!( + "\n hint: run `zl update {}` to apply this update", + args.package + ); + + Ok(()) +} + +#[cfg(test)] +mod tests { + #[test] + fn test_placeholder() { + // Integration testing requires live plugin; unit tests are in individual modules + assert!(true); + } +} diff --git a/src/cli/doctor.rs b/src/cli/doctor.rs new file mode 100644 index 0000000..c004b09 --- /dev/null +++ b/src/cli/doctor.rs @@ -0,0 +1,264 @@ +//! `zl doctor` — diagnose system and ZL health. +//! +//! Checks: +//! - Database integrity (can read all packages) +//! - Broken symlinks in bin/ and lib/ +//! - Missing shared libraries for installed packages +//! - Orphaned packages (deps no longer needed) +//! - Disk usage summary +//! - System profile consistency + +use console::style; + +use crate::core::db::ops::ZlDatabase; +use crate::error::ZlResult; +use crate::system::SystemProfile; + +use super::AppContext; + +pub fn handle(ctx: &AppContext) -> ZlResult<()> { + println!("{}", style("ZL Doctor — System Diagnostics").bold().cyan()); + println!(); + + let mut issues = 0; + let mut warnings = 0; + + // 1. Database check + print!(" Checking database... 
");
+    match check_database(ctx.db) {
+        Ok(count) => println!("{} ({} packages)", style("OK").green(), count),
+        Err(e) => {
+            println!("{} ({})", style("ERROR").red(), e);
+            issues += 1;
+        }
+    }
+
+    // 2. Broken symlinks in bin/
+    print!(" Checking bin/ symlinks... ");
+    let broken_bins = check_broken_symlinks(&ctx.paths.bin);
+    if broken_bins.is_empty() {
+        println!("{}", style("OK").green());
+    } else {
+        println!("{} ({} broken)", style("WARN").yellow(), broken_bins.len());
+        for path in &broken_bins {
+            println!(" -> {}", path);
+        }
+        warnings += broken_bins.len();
+    }
+
+    // 3. Broken symlinks in lib/
+    print!(" Checking lib/ symlinks... ");
+    let broken_libs = check_broken_symlinks(&ctx.paths.lib);
+    if broken_libs.is_empty() {
+        println!("{}", style("OK").green());
+    } else {
+        println!("{} ({} broken)", style("WARN").yellow(), broken_libs.len());
+        for path in &broken_libs {
+            println!(" -> {}", path);
+        }
+        warnings += broken_libs.len();
+    }
+
+    // 4. Missing shared libraries
+    print!(" Checking shared library deps... ");
+    let missing = check_missing_libs(ctx.db, ctx.profile)?;
+    if missing.is_empty() {
+        println!("{}", style("OK").green());
+    } else {
+        println!(
+            "{} ({} missing across packages)",
+            style("WARN").yellow(),
+            missing.len()
+        );
+        for (pkg, lib) in &missing {
+            println!(" {} needs {}", pkg, lib);
+        }
+        warnings += missing.len();
+    }
+
+    // 5. Orphaned packages
+    print!(" Checking for orphans... ");
+    let orphans = check_orphans(ctx.db)?;
+    if orphans.is_empty() {
+        println!("{}", style("OK").green());
+    } else {
+        println!("{} ({} orphaned)", style("INFO").blue(), orphans.len());
+        for name in &orphans {
+            println!(" - {}", name);
+        }
+        println!(" hint: remove with `zl remove <name> --cascade` or reinstall as explicit");
+    }
+
+    // 6. Disk usage
+    print!(" Computing disk usage... 
");
+    let total_size = compute_total_size(&ctx.paths.packages);
+    let cache_size = compute_total_size(&ctx.paths.cache);
+    println!(
+        "{} packages, {} cache",
+        format_size(total_size),
+        format_size(cache_size)
+    );
+
+    // 7. System profile
+    println!(" System: {} {}", ctx.profile.arch, ctx.profile.libc);
+    println!(
+        " Layout: {} ({} lib dirs)",
+        ctx.profile.layout,
+        ctx.profile.lib_dirs.len()
+    );
+    println!(" Interpreter: {}", ctx.profile.interpreter.display());
+
+    // Summary
+    println!();
+    if issues > 0 {
+        println!(
+            "{} {} issue(s) found. Run with -vv for details.",
+            style("!").red().bold(),
+            issues
+        );
+    } else if warnings > 0 {
+        println!(
+            "{} {} warning(s). Everything functional but some cleanup recommended.",
+            style("~").yellow().bold(),
+            warnings
+        );
+    } else {
+        println!("{} Everything looks healthy!", style("✓").green().bold());
+    }
+
+    Ok(())
+}
+
+fn check_database(db: &ZlDatabase) -> ZlResult<usize> {
+    let packages = db.list_packages()?;
+    Ok(packages.len())
+}
+
+fn check_broken_symlinks(dir: &std::path::Path) -> Vec<String> {
+    let mut broken = Vec::new();
+    if !dir.is_dir() {
+        return broken;
+    }
+    if let Ok(entries) = std::fs::read_dir(dir) {
+        for entry in entries.flatten() {
+            let path = entry.path();
+            if path.symlink_metadata().is_ok() && !path.exists() {
+                broken.push(path.to_string_lossy().into_owned());
+            }
+        }
+    }
+    broken
+}
+
+fn check_missing_libs(db: &ZlDatabase, profile: &SystemProfile) -> ZlResult<Vec<(String, String)>> {
+    let mut missing = Vec::new();
+    let packages = db.list_packages()?;
+
+    for pkg in &packages {
+        for lib in &pkg.needs_libs {
+            if db.lib_provider(lib)?.is_some() {
+                continue;
+            }
+            if profile.system_lib_exists(lib) {
+                continue;
+            }
+            if pkg.provides_libs.contains_key(lib) {
+                continue;
+            }
+            missing.push((pkg.id.name.clone(), lib.clone()));
+        }
+    }
+
+    Ok(missing)
+}
+
+fn check_orphans(db: &ZlDatabase) -> ZlResult<Vec<String>> {
+    let packages = db.list_packages()?;
+    let mut orphans = Vec::new();
+
+    for pkg in &packages {
+        if 
pkg.explicit { + continue; + } + + let has_dependents = packages.iter().any(|other| { + if other.id.name == pkg.id.name { + return false; + } + let key = format!("{}-{}", other.id.name, other.id.version); + db.get_dependencies(&key) + .unwrap_or_default() + .iter() + .any(|dep| { + let dep_name = dep.split(&['>', '<', '=', ':'][..]).next().unwrap_or(dep); + dep_name == pkg.id.name + }) + }); + + if !has_dependents { + orphans.push(format!("{}-{}", pkg.id.name, pkg.id.version)); + } + } + + Ok(orphans) +} + +fn compute_total_size(dir: &std::path::Path) -> u64 { + walkdir::WalkDir::new(dir) + .into_iter() + .filter_map(|e| e.ok()) + .filter(|e| e.file_type().is_file()) + .filter_map(|e| e.metadata().ok()) + .map(|m| m.len()) + .sum() +} + +fn format_size(bytes: u64) -> String { + if bytes >= 1_000_000_000 { + format!("{:.1} GB", bytes as f64 / 1_000_000_000.0) + } else if bytes >= 1_000_000 { + format!("{:.1} MB", bytes as f64 / 1_000_000.0) + } else if bytes >= 1_000 { + format!("{:.1} KB", bytes as f64 / 1_000.0) + } else { + format!("{} B", bytes) + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_format_size() { + assert_eq!(format_size(0), "0 B"); + assert_eq!(format_size(500), "500 B"); + assert_eq!(format_size(1500), "1.5 KB"); + assert_eq!(format_size(1_500_000), "1.5 MB"); + assert_eq!(format_size(1_500_000_000), "1.5 GB"); + } + + #[test] + fn test_check_broken_symlinks_empty() { + let tmp = tempfile::tempdir().unwrap(); + let broken = check_broken_symlinks(tmp.path()); + assert!(broken.is_empty()); + } + + #[test] + fn test_check_broken_symlinks_finds_broken() { + let tmp = tempfile::tempdir().unwrap(); + let link = tmp.path().join("broken_link"); + std::os::unix::fs::symlink("/nonexistent/target", &link).unwrap(); + let broken = check_broken_symlinks(tmp.path()); + assert_eq!(broken.len(), 1); + } + + #[test] + fn test_check_database() { + let db_file = tempfile::NamedTempFile::new().unwrap(); + let db = 
ZlDatabase::open(db_file.path()).unwrap(); + let count = check_database(&db).unwrap(); + assert_eq!(count, 0); + } +} diff --git a/src/cli/history.rs b/src/cli/history.rs new file mode 100644 index 0000000..8d04509 --- /dev/null +++ b/src/cli/history.rs @@ -0,0 +1,167 @@ +//! `zl history` — show install/remove history and rollback changes. + +use crate::core::db::ops::ZlDatabase; +use crate::error::ZlResult; + +use super::{AppContext, HistoryCommand, RollbackArgs}; + +pub fn handle(cmd: HistoryCommand, ctx: &AppContext) -> ZlResult<()> { + match cmd { + HistoryCommand::List => handle_list(ctx.db), + HistoryCommand::Rollback(args) => handle_rollback(args, ctx), + } +} + +fn handle_list(db: &ZlDatabase) -> ZlResult<()> { + let entries = db.list_history(50)?; + + if entries.is_empty() { + println!("No history recorded yet."); + return Ok(()); + } + + println!("{:<22} {:<10} Packages", "Date", "Action"); + println!("{}", "-".repeat(70)); + + for entry in &entries { + let date = format_timestamp(entry.timestamp); + let pkgs = entry.packages.join(", "); + println!("{:<22} {:<10} {}", date, entry.action, pkgs); + } + + Ok(()) +} + +fn handle_rollback(args: RollbackArgs, ctx: &AppContext) -> ZlResult<()> { + use crate::core::db::ops::HistoryAction; + + let entries = ctx.db.list_history(args.count)?; + + if entries.is_empty() { + println!("No history to rollback."); + return Ok(()); + } + + println!( + "Rolling back {} operation(s)...\n", + entries.len().min(args.count) + ); + + for entry in entries.iter().take(args.count) { + match entry.action { + HistoryAction::Install => { + // Undo install = remove the packages + println!(" Undoing install of: {}", entry.packages.join(", ")); + for pkg_name in &entry.packages { + // Parse "name-version" into name + let name = pkg_name + .rfind('-') + .map(|pos| &pkg_name[..pos]) + .unwrap_or(pkg_name); + if let Some(node) = ctx.db.get_package_by_name(name)? 
{ + let pkg_key = format!("{}-{}", node.id.name, node.id.version); + let pkg_dir = ctx.paths.packages.join(&pkg_key); + + // Remove bin symlinks + super::remove::remove_bin_symlinks_public( + &node.installed_files, + &ctx.paths.bin, + )?; + + // Remove lib symlinks + for soname in node.provides_libs.keys() { + let link = ctx.paths.lib.join(soname); + if link.symlink_metadata().is_ok() { + std::fs::remove_file(&link)?; + } + } + + // Remove package dir + if pkg_dir.exists() { + std::fs::remove_dir_all(&pkg_dir)?; + } + + // Remove from DB + ctx.db.remove_files_for_package(&pkg_key)?; + ctx.db.remove_dependencies(&pkg_key)?; + ctx.db.remove_package(&node.id.name, &node.id.version)?; + + println!(" Removed {}", pkg_key); + } + } + } + HistoryAction::Remove => { + // Undo remove = we can't restore deleted packages + println!( + " Cannot undo removal of: {} (packages already deleted)", + entry.packages.join(", ") + ); + println!(" hint: reinstall them with `zl install`"); + } + HistoryAction::Upgrade => { + println!( + " Cannot undo upgrade of: {} (previous version not cached)", + entry.packages.join(", ") + ); + println!(" hint: install the old version explicitly with --version"); + } + HistoryAction::Rollback => { + println!(" Skipping rollback entry (already a rollback)"); + } + } + } + + // Record this rollback in history + let now = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .unwrap_or_default() + .as_secs(); + ctx.db.record_history(&crate::core::db::ops::HistoryEntry { + timestamp: now, + action: HistoryAction::Rollback, + packages: entries + .iter() + .take(args.count) + .flat_map(|e| e.packages.clone()) + .collect(), + })?; + + println!("\nRollback complete."); + Ok(()) +} + +fn format_timestamp(ts: u64) -> String { + if ts == 0 { + return "unknown".to_string(); + } + let secs_per_day = 86400u64; + let days_since_epoch = ts / secs_per_day; + let years = 1970 + days_since_epoch / 365; + let remaining_days = days_since_epoch % 365; + 
let months = remaining_days / 30 + 1; + let days = remaining_days % 30 + 1; + let hour = (ts % secs_per_day) / 3600; + let min = (ts % 3600) / 60; + format!( + "{:04}-{:02}-{:02} {:02}:{:02}", + years, months, days, hour, min + ) +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_format_timestamp() { + let ts = 1708700000; // ~Feb 2024 + let s = format_timestamp(ts); + assert!(s.contains("2024")); + assert!(s.contains(":")); + } + + #[test] + fn test_format_timestamp_zero() { + assert_eq!(format_timestamp(0), "unknown"); + } +} diff --git a/src/cli/install.rs b/src/cli/install.rs index 29c3dde..bf06701 100644 --- a/src/cli/install.rs +++ b/src/cli/install.rs @@ -114,12 +114,20 @@ pub fn handle(args: InstallArgs, ctx: &AppContext) -> ZlResult<()> { // 6. Download all packages with progress bars let total = plan.packages.len(); let candidates: Vec<&PackageCandidate> = plan.packages.iter().map(|e| &e.candidate).collect(); + let steps = 4; - println!("\nDownloading {} package(s)...", total); + println!( + "\n{} Downloading {} package(s)...", + console::style(format!("[1/{}]", steps)).dim(), + total + ); let archives = download_parallel(&candidates, plugin, &ctx.paths.cache)?; // 7. Verify all downloads - println!("Verifying packages..."); + println!( + "{} Verifying packages...", + console::style(format!("[2/{}]", steps)).dim() + ); for (candidate, archive_path) in candidates.iter().zip(archives.iter()) { let result = verify::verify_package( archive_path, @@ -133,6 +141,10 @@ pub fn handle(args: InstallArgs, ctx: &AppContext) -> ZlResult<()> { } // 8. 
Install each package with transaction support + println!( + "{} Installing & patching...", + console::style(format!("[3/{}]", steps)).dim() + ); let mut txn = Transaction::new(); let mut installed_count = 0; @@ -177,16 +189,40 @@ pub fn handle(args: InstallArgs, ctx: &AppContext) -> ZlResult<()> { // Commit the transaction — all installs succeeded txn.commit(); + // Record in history + let now = SystemTime::now() + .duration_since(UNIX_EPOCH) + .unwrap_or_default() + .as_secs(); + let _ = ctx.db.record_history(&crate::core::db::ops::HistoryEntry { + timestamp: now, + action: crate::core::db::ops::HistoryAction::Install, + packages: plan + .packages + .iter() + .map(|e| format!("{}-{}", e.candidate.name, e.candidate.version)) + .collect(), + }); + // 9. Summary + println!( + "{} Done!", + console::style(format!("[{}/{}]", steps, steps)).dim() + ); let dep_count = plan.dep_count(); if dep_count > 0 { println!( - "\nInstalled {} package(s) + {} dependency(ies).", + "\n{} Installed {} package(s) + {} dependency(ies).", + console::style("✓").green().bold(), total - dep_count, dep_count ); } else { - println!("\nInstalled {} package(s).", total); + println!( + "\n{} Installed {} package(s).", + console::style("✓").green().bold(), + total + ); } if !plan.unresolvable.is_empty() { @@ -413,16 +449,47 @@ pub fn install_from_archive( } } - // Patch ELF binaries - for elf_path in &extracted.elf_files { - match analysis::analyze(elf_path) { - Ok(info) => { - if let Err(e) = patcher::patch_for_zl(elf_path, &info, &mapping, profile) { - tracing::warn!("Failed to patch {}: {}", elf_path.display(), e); - } + // Patch ELF binaries in parallel (thread::scope for zero-overhead parallelism) + if extracted.elf_files.len() > 1 { + std::thread::scope(|scope| { + let mapping = &mapping; + let chunk_size = (extracted.elf_files.len() / 4).max(1); + let mut handles = Vec::new(); + + for chunk in extracted.elf_files.chunks(chunk_size) { + handles.push(scope.spawn(move || { + for elf_path in 
chunk {
+                        match analysis::analyze(elf_path) {
+                            Ok(info) => {
+                                if let Err(e) =
+                                    patcher::patch_for_zl(elf_path, &info, mapping, profile)
+                                {
+                                    tracing::warn!("Failed to patch {}: {}", elf_path.display(), e);
+                                }
+                            }
+                            Err(e) => {
+                                tracing::debug!("Skipping ELF {}: {}", elf_path.display(), e);
+                            }
+                        }
+                    }
+                }));
             }
-            Err(e) => {
-                tracing::debug!("Skipping ELF {}: {}", elf_path.display(), e);
+
+            for handle in handles {
+                let _ = handle.join();
+            }
+        });
+    } else {
+        for elf_path in &extracted.elf_files {
+            match analysis::analyze(elf_path) {
+                Ok(info) => {
+                    if let Err(e) = patcher::patch_for_zl(elf_path, &info, &mapping, profile) {
+                        tracing::warn!("Failed to patch {}: {}", elf_path.display(), e);
+                    }
+                }
+                Err(e) => {
+                    tracing::debug!("Skipping ELF {}: {}", elf_path.display(), e);
+                }
             }
         }
     }
@@ -1066,6 +1133,16 @@ fn format_option_label(candidate: &PackageCandidate) -> String {
     )
 }
 
+/// Simplified source picker for `zl run` — same as pick_source but public.
+pub fn pick_source_for_run(
+    package: &str,
+    version: Option<&str>,
+    registry: &PluginRegistry,
+    auto_yes: bool,
+) -> ZlResult<String> {
+    pick_source(package, version, registry, auto_yes)
+}
+
 fn is_executable(path: &Path) -> bool {
     use std::os::unix::fs::PermissionsExt;
     std::fs::metadata(path)
diff --git a/src/cli/list.rs b/src/cli/list.rs
index fe9b3f0..d0e5d12 100644
--- a/src/cli/list.rs
+++ b/src/cli/list.rs
@@ -1,3 +1,5 @@
+use console::style;
+
 use crate::core::db::ops::ZlDatabase;
 use crate::error::ZlResult;
 
@@ -40,33 +42,37 @@ pub fn handle(args: ListArgs, db: &ZlDatabase) -> ZlResult<()> {
         pinned_list.into_iter().map(|(name, _)| name).collect();
 
     println!(
-        "{:<30} {:<20} {:<15} {:>6} Status",
-        "Name", "Version", "Source", "Files"
+        "{:<30} {:<20} {:<15} {:>6} {}",
+        style("Name").bold(),
+        style("Version").bold(),
+        style("Source").bold(),
+        style("Files").bold(),
+        style("Status").bold()
     );
     println!("{}", "-".repeat(85));
 
     for pkg in &filtered {
-        let mut status = Vec::new();
+        let mut 
status_parts = Vec::new(); if pkg.explicit { - status.push("explicit"); + status_parts.push(style("explicit").green().to_string()); } else { - status.push("dep"); + status_parts.push(style("dep").dim().to_string()); } if pinned_names.contains(&pkg.id.name) { - status.push("pinned"); + status_parts.push(style("pinned").yellow().to_string()); } println!( "{:<30} {:<20} {:<15} {:>6} [{}]", - pkg.id.name, - pkg.id.version, - pkg.id.source, + style(&pkg.id.name).white().bold(), + style(&pkg.id.version).yellow(), + style(&pkg.id.source).cyan(), pkg.installed_files.len(), - status.join(", ") + status_parts.join(", ") ); } - println!("\n{} package(s) listed.", filtered.len()); + println!("\n{} package(s) listed.", style(filtered.len()).bold()); Ok(()) } diff --git a/src/cli/mod.rs b/src/cli/mod.rs index 4e9370c..acc52fb 100644 --- a/src/cli/mod.rs +++ b/src/cli/mod.rs @@ -1,17 +1,24 @@ +pub mod audit; pub mod cache; pub mod completions; pub mod deps; +pub mod diff; +pub mod doctor; pub mod env; +pub mod history; pub mod info; pub mod install; pub mod list; pub mod lockfile; pub mod pin; pub mod remove; +pub mod run; pub mod search; pub mod selfupdate; +pub mod size; pub mod update; pub mod upgrade; +pub mod why; use clap::{ArgAction, Args, Parser, Subcommand, ValueEnum}; use clap_complete::Shell; @@ -113,6 +120,21 @@ pub enum Commands { /// Manage ephemeral environments #[command(subcommand)] Env(EnvCommand), + /// Run a package without installing (temporary execution) + Run(RunArgs), + /// Show install/remove history and rollback changes + #[command(subcommand)] + History(HistoryCommand), + /// Show why a package is installed (dependency chain) + Why(WhyArgs), + /// Diagnose system and ZL health + Doctor, + /// Show disk usage per package + Size(SizeArgs), + /// Show what would change if a package is updated + Diff(DiffArgs), + /// Check installed packages for known vulnerabilities (CVE) + Audit(AuditArgs), } #[derive(Args)] @@ -198,6 +220,8 @@ pub enum CacheCommand { 
List,
     /// Remove all cached files
     Clean,
+    /// Deduplicate shared libraries using hardlinks
+    Dedup,
 }
 
 #[derive(Args)]
@@ -259,3 +283,63 @@ pub struct EnvDeleteArgs {
     /// Name of the environment to delete
     pub name: String,
 }
+
+#[derive(Args)]
+pub struct RunArgs {
+    /// Package name to run
+    pub package: String,
+    /// Source to use (e.g., pacman, apt, github)
+    #[arg(long)]
+    pub from: Option<String>,
+    /// Specific version
+    #[arg(long)]
+    pub version: Option<String>,
+    /// Arguments to pass to the binary
+    #[arg(trailing_var_arg = true, allow_hyphen_values = true)]
+    pub args: Vec<String>,
+}
+
+#[derive(Subcommand)]
+pub enum HistoryCommand {
+    /// Show install/remove history
+    List,
+    /// Rollback the last N operations
+    Rollback(RollbackArgs),
+}
+
+#[derive(Args)]
+pub struct RollbackArgs {
+    /// Number of operations to rollback (default: 1)
+    #[arg(default_value = "1")]
+    pub count: usize,
+}
+
+#[derive(Args)]
+pub struct WhyArgs {
+    /// Package name to trace
+    pub package: String,
+}
+
+#[derive(Args)]
+pub struct SizeArgs {
+    /// Show only a specific package (default: all)
+    pub package: Option<String>,
+    /// Sort by size (largest first)
+    #[arg(long)]
+    pub sort: bool,
+}
+
+#[derive(Args)]
+pub struct DiffArgs {
+    /// Package name to diff
+    pub package: String,
+    /// Source to check (default: same as installed)
+    #[arg(long)]
+    pub from: Option<String>,
+}
+
+#[derive(Args)]
+pub struct AuditArgs {
+    /// Check only a specific package (default: all installed)
+    pub package: Option<String>,
+}
diff --git a/src/cli/remove.rs b/src/cli/remove.rs
index 00f7707..4b6ba3f 100644
--- a/src/cli/remove.rs
+++ b/src/cli/remove.rs
@@ -127,6 +127,17 @@ pub fn handle(args: RemoveArgs, ctx: &AppContext) -> ZlResult<()> {
 
     println!("Removed {}-{}.", node.id.name, node.id.version);
 
+    // Record in history
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .unwrap_or_default()
+        .as_secs();
+    let _ = db.record_history(&crate::core::db::ops::HistoryEntry {
+        timestamp: now,
+        action: 
crate::core::db::ops::HistoryAction::Remove, + packages: vec![format!("{}-{}", node.id.name, node.id.version)], + }); + // 7. Cascade: remove orphans if requested if args.cascade { remove_orphans(paths, db, dry_run)?; @@ -238,6 +249,14 @@ fn remove_single( Ok(()) } +/// Public wrapper for use by the history/rollback module +pub fn remove_bin_symlinks_public( + installed_files: &[std::path::PathBuf], + bin_dir: &std::path::Path, +) -> ZlResult<()> { + remove_bin_symlinks(installed_files, bin_dir) +} + /// Remove symlinks from bin/ that point into the package's installed files fn remove_bin_symlinks( installed_files: &[std::path::PathBuf], diff --git a/src/cli/run.rs b/src/cli/run.rs new file mode 100644 index 0000000..3de5912 --- /dev/null +++ b/src/cli/run.rs @@ -0,0 +1,215 @@ +//! `zl run` — run a package without installing it. +//! +//! Downloads to a temp directory, extracts, patches ELF binaries, executes +//! the requested binary, then cleans up automatically. + +use std::path::Path; + +use crate::core::elf::{analysis, patcher}; +use crate::core::path::PathMapping; +use crate::error::{ZlError, ZlResult}; + +use super::{AppContext, RunArgs}; + +pub fn handle(args: RunArgs, ctx: &AppContext) -> ZlResult<()> { + let from: String = match args.from.as_deref() { + Some(f) => f.to_string(), + None => super::install::pick_source_for_run( + &args.package, + args.version.as_deref(), + ctx.registry, + ctx.auto_yes, + )?, + }; + + let plugin = ctx.registry.get(&from).ok_or_else(|| ZlError::Plugin { + plugin: from.clone(), + message: "No matching source plugin found".into(), + })?; + + // Resolve the package + let candidate = plugin + .resolve(&args.package, args.version.as_deref())? 
+ .ok_or_else(|| ZlError::PackageNotFound { + name: args.package.clone(), + })?; + + println!( + "Fetching {}-{} from {} for temporary execution...", + candidate.name, candidate.version, from + ); + + // Download to temp dir + let tmp_dir = tempfile::tempdir()?; + let archive_path = plugin.download(&candidate, tmp_dir.path())?; + + // Extract + let extracted = plugin.extract(&archive_path)?; + + // Patch ELF binaries + let mapping = PathMapping::for_package( + extracted.extract_dir.path(), + &candidate.name, + &candidate.version, + ctx.profile, + ); + + for elf_path in &extracted.elf_files { + if let Ok(info) = analysis::analyze(elf_path) + && let Err(e) = patcher::patch_for_zl(elf_path, &info, &mapping, ctx.profile) + { + tracing::warn!("Failed to patch {}: {}", elf_path.display(), e); + } + } + + // Find the main executable + let binary = find_main_binary(extracted.extract_dir.path(), &args.package)?; + + println!("Running {}...\n", binary.display()); + + // Execute + let status = std::process::Command::new(&binary) + .args(&args.args) + .env( + "LD_LIBRARY_PATH", + build_ld_path(extracted.extract_dir.path(), ctx), + ) + .status() + .map_err(|e| ZlError::Plugin { + plugin: "run".into(), + message: format!("Failed to execute {}: {}", binary.display(), e), + })?; + + // tmp_dir drops automatically, cleaning up everything + + if !status.success() { + std::process::exit(status.code().unwrap_or(1)); + } + + Ok(()) +} + +/// Find the main binary in the extracted package directory. +/// Looks in standard bin subdirectories for a binary matching the package name, +/// or falls back to the first executable found. 
+fn find_main_binary(extract_dir: &Path, package_name: &str) -> ZlResult<std::path::PathBuf> {
+    let bin_subdirs = [
+        "usr/bin",
+        "usr/sbin",
+        "bin",
+        "sbin",
+        "usr/local/bin",
+        "usr/local/sbin",
+    ];
+
+    let mut first_executable = None;
+
+    for subdir in &bin_subdirs {
+        let dir = extract_dir.join(subdir);
+        if !dir.is_dir() {
+            continue;
+        }
+
+        if let Ok(entries) = std::fs::read_dir(&dir) {
+            for entry in entries.flatten() {
+                let path = entry.path();
+                if path.is_file() && is_executable(&path) {
+                    // Exact match on package name
+                    if entry.file_name().to_string_lossy() == package_name {
+                        return Ok(path);
+                    }
+                    if first_executable.is_none() {
+                        first_executable = Some(path);
+                    }
+                }
+            }
+        }
+    }
+
+    first_executable.ok_or_else(|| ZlError::Plugin {
+        plugin: "run".into(),
+        message: format!(
+            "No executable found in {} — the package may not contain binaries",
+            extract_dir.display()
+        ),
+    })
+}
+
+/// Build LD_LIBRARY_PATH from the extracted package + system + ZL lib dirs
+fn build_ld_path(extract_dir: &Path, ctx: &AppContext) -> String {
+    let mut paths = Vec::new();
+
+    // Add lib dirs from the extracted package
+    for subdir in &["usr/lib", "lib", "usr/lib64", "lib64"] {
+        let dir = extract_dir.join(subdir);
+        if dir.is_dir() {
+            paths.push(dir.to_string_lossy().into_owned());
+        }
+    }
+
+    // Add ZL lib dir
+    paths.push(ctx.paths.lib.to_string_lossy().into_owned());
+
+    // Add system lib dirs
+    for dir in &ctx.profile.lib_dirs {
+        paths.push(dir.to_string_lossy().into_owned());
+    }
+
+    paths.join(":")
+}
+
+fn is_executable(path: &Path) -> bool {
+    use std::os::unix::fs::PermissionsExt;
+    std::fs::metadata(path)
+        .map(|m| m.permissions().mode() & 0o111 != 0)
+        .unwrap_or(false)
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_find_main_binary() {
+        let tmp = tempfile::tempdir().unwrap();
+        let bin_dir = tmp.path().join("usr/bin");
+        std::fs::create_dir_all(&bin_dir).unwrap();
+
+        // Create a mock executable
+        let exe = bin_dir.join("testpkg");
+        
std::fs::write(&exe, "#!/bin/sh\necho hi").unwrap(); + #[cfg(unix)] + { + use std::os::unix::fs::PermissionsExt; + std::fs::set_permissions(&exe, std::fs::Permissions::from_mode(0o755)).unwrap(); + } + + let found = find_main_binary(tmp.path(), "testpkg").unwrap(); + assert_eq!(found, exe); + } + + #[test] + fn test_find_main_binary_fallback() { + let tmp = tempfile::tempdir().unwrap(); + let bin_dir = tmp.path().join("usr/bin"); + std::fs::create_dir_all(&bin_dir).unwrap(); + + let exe = bin_dir.join("other-binary"); + std::fs::write(&exe, "#!/bin/sh\necho hi").unwrap(); + #[cfg(unix)] + { + use std::os::unix::fs::PermissionsExt; + std::fs::set_permissions(&exe, std::fs::Permissions::from_mode(0o755)).unwrap(); + } + + let found = find_main_binary(tmp.path(), "nonexistent-name").unwrap(); + assert_eq!(found, exe); + } + + #[test] + fn test_find_main_binary_none() { + let tmp = tempfile::tempdir().unwrap(); + let result = find_main_binary(tmp.path(), "pkg"); + assert!(result.is_err()); + } +} diff --git a/src/cli/search.rs b/src/cli/search.rs index a7fa0d7..fa77fe6 100644 --- a/src/cli/search.rs +++ b/src/cli/search.rs @@ -1,5 +1,7 @@ use std::sync::Mutex; +use console::style; + use crate::error::ZlResult; use crate::plugin::{PackageCandidate, PluginRegistry, SourcePlugin}; @@ -146,8 +148,10 @@ pub fn handle(args: SearchArgs, registry: &PluginRegistry) -> ZlResult<()> { let shown = total_count.min(limit); println!( - "── {} ({} result{}) ──", - plugin.display_name(), + "{} ({} result{})", + style(format!("── {} ──", plugin.display_name())) + .cyan() + .bold(), total_count, if total_count == 1 { "" } else { "s" } ); @@ -156,7 +160,7 @@ pub fn handle(args: SearchArgs, registry: &PluginRegistry) -> ZlResult<()> { let tag_str = if entry.tag.is_empty() { String::new() } else { - format!(" [{}]", entry.tag) + format!(" {}", style(format!("[{}]", entry.tag)).dim()) }; // Truncate description to 55 chars @@ -166,9 +170,23 @@ pub fn handle(args: SearchArgs, registry: 
&PluginRegistry) -> ZlResult<()> { entry.candidate.description.clone() }; + let name_styled = if entry.score == 100 { + style(format!("{:<30}", entry.candidate.name)) + .green() + .bold() + .to_string() + } else { + style(format!("{:<30}", entry.candidate.name)) + .white() + .to_string() + }; + println!( - " {:<30} {:<15} {}{}", - entry.candidate.name, entry.candidate.version, desc, tag_str + " {} {:<15} {}{}", + name_styled, + style(&entry.candidate.version).yellow(), + desc, + tag_str ); } diff --git a/src/cli/size.rs b/src/cli/size.rs new file mode 100644 index 0000000..418b14b --- /dev/null +++ b/src/cli/size.rs @@ -0,0 +1,187 @@ +//! `zl size` — show disk usage per package. + +use console::style; + +use crate::core::db::ops::ZlDatabase; +use crate::error::{ZlError, ZlResult}; + +use super::SizeArgs; + +pub fn handle(args: SizeArgs, db: &ZlDatabase) -> ZlResult<()> { + if let Some(ref name) = args.package { + return show_single(name, db); + } + + let packages = db.list_packages()?; + + if packages.is_empty() { + println!("No packages installed."); + return Ok(()); + } + + let mut entries: Vec<(String, String, u64, usize)> = packages + .iter() + .map(|pkg| { + let size: u64 = pkg + .installed_files + .iter() + .filter_map(|f| std::fs::metadata(f).ok()) + .map(|m| m.len()) + .sum(); + ( + pkg.id.name.clone(), + pkg.id.version.clone(), + size, + pkg.installed_files.len(), + ) + }) + .collect(); + + if args.sort { + entries.sort_by(|a, b| b.2.cmp(&a.2)); + } else { + entries.sort_by(|a, b| a.0.cmp(&b.0)); + } + + println!( + "{:<30} {:<15} {:>10} {:>8}", + style("Package").bold(), + style("Version").bold(), + style("Size").bold(), + style("Files").bold() + ); + println!("{}", "-".repeat(65)); + + let mut total_size = 0u64; + let mut total_files = 0usize; + + for (name, version, size, files) in &entries { + println!( + "{:<30} {:<15} {:>10} {:>8}", + name, + version, + format_size(*size), + files + ); + total_size += size; + total_files += files; + } + + 
println!("{}", "-".repeat(65)); + println!( + "{:<30} {:<15} {:>10} {:>8}", + style("Total").bold(), + format!("{} packages", entries.len()), + style(format_size(total_size)).bold(), + total_files + ); + + Ok(()) +} + +fn show_single(name: &str, db: &ZlDatabase) -> ZlResult<()> { + let pkg = db + .get_package_by_name(name)? + .ok_or_else(|| ZlError::PackageNotFound { + name: name.to_string(), + })?; + + let mut file_sizes: Vec<(String, u64)> = pkg + .installed_files + .iter() + .filter_map(|f| { + std::fs::metadata(f) + .ok() + .map(|m| (f.to_string_lossy().into_owned(), m.len())) + }) + .collect(); + + file_sizes.sort_by(|a, b| b.1.cmp(&a.1)); + + let total: u64 = file_sizes.iter().map(|(_, s)| s).sum(); + + println!( + "{}-{} — {} ({} files)\n", + style(&pkg.id.name).bold(), + pkg.id.version, + format_size(total), + pkg.installed_files.len() + ); + + // Show top 20 largest files + println!(" Largest files:"); + for (path, size) in file_sizes.iter().take(20) { + println!(" {:>10} {}", format_size(*size), path); + } + + if file_sizes.len() > 20 { + println!(" ... 
and {} more files", file_sizes.len() - 20);
+    }
+
+    // Show lib breakdown
+    if !pkg.provides_libs.is_empty() {
+        println!("\n Shared libraries provided:");
+        for (soname, path) in &pkg.provides_libs {
+            let size = std::fs::metadata(path).map(|m| m.len()).unwrap_or(0);
+            println!(
+                " {:>10} {} -> {}",
+                format_size(size),
+                soname,
+                path.display()
+            );
+        }
+    }
+
+    // Dependency cost
+    let deps = db
+        .get_dependencies(&format!("{}-{}", pkg.id.name, pkg.id.version))
+        .unwrap_or_default();
+    if !deps.is_empty() {
+        println!("\n Dependencies ({}):", deps.len());
+        let mut dep_total = 0u64;
+        for dep_name in &deps {
+            if let Some(dep_pkg) = db.get_package_by_name(dep_name).ok().flatten() {
+                let size: u64 = dep_pkg
+                    .installed_files
+                    .iter()
+                    .filter_map(|f| std::fs::metadata(f).ok())
+                    .map(|m| m.len())
+                    .sum();
+                println!(
+                    " {:>10} {}-{}",
+                    format_size(size),
+                    dep_pkg.id.name,
+                    dep_pkg.id.version
+                );
+                dep_total += size;
+            }
+        }
+        println!("\n Total with deps: {}", format_size(total + dep_total));
+    }
+
+    Ok(())
+}
+
+fn format_size(bytes: u64) -> String {
+    if bytes >= 1_000_000_000 {
+        format!("{:.1} GB", bytes as f64 / 1_000_000_000.0)
+    } else if bytes >= 1_000_000 {
+        format!("{:.1} MB", bytes as f64 / 1_000_000.0)
+    } else if bytes >= 1_000 {
+        format!("{:.1} KB", bytes as f64 / 1_000.0)
+    } else {
+        format!("{} B", bytes)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_format_size() {
+        assert_eq!(format_size(0), "0 B");
+        assert_eq!(format_size(1500), "1.5 KB");
+        assert_eq!(format_size(1_500_000), "1.5 MB");
+    }
+}
diff --git a/src/cli/why.rs b/src/cli/why.rs
new file mode 100644
index 0000000..c50de0c
--- /dev/null
+++ b/src/cli/why.rs
@@ -0,0 +1,101 @@
+//! `zl why <package>` — show why a package is installed (dependency chain). 
+
+use crate::core::db::ops::ZlDatabase;
+use crate::error::{ZlError, ZlResult};
+
+use super::WhyArgs;
+
+pub fn handle(args: WhyArgs, db: &ZlDatabase) -> ZlResult<()> {
+    let pkg = db
+        .get_package_by_name(&args.package)?
+        .ok_or_else(|| ZlError::PackageNotFound {
+            name: args.package.clone(),
+        })?;
+
+    if pkg.explicit {
+        println!(
+            "{}-{} was explicitly installed by the user.",
+            pkg.id.name, pkg.id.version
+        );
+        return Ok(());
+    }
+
+    // Find reverse dependency chain
+    println!(
+        "{}-{} is installed as a dependency.\n",
+        pkg.id.name, pkg.id.version
+    );
+
+    let chain = find_dependency_chain(&args.package, db, 0)?;
+    if chain.is_empty() {
+        println!(
+            " No reverse dependency found — this may be an orphan.\n hint: remove it with `zl remove {}`",
+            args.package
+        );
+    }
+
+    Ok(())
+}
+
+/// Recursively trace why a package is installed, printing the chain.
+fn find_dependency_chain(
+    package_name: &str,
+    db: &ZlDatabase,
+    depth: usize,
+) -> ZlResult<Vec<String>> {
+    let mut chain = Vec::new();
+    let indent = " ".repeat(depth);
+
+    let rdeps = db.reverse_dependencies(package_name)?;
+
+    if rdeps.is_empty() {
+        return Ok(chain);
+    }
+
+    for rdep_key in &rdeps {
+        // rdep_key is "name-version"; extract name
+        let rdep_name = rdep_key
+            .rfind('-')
+            .map(|pos| &rdep_key[..pos])
+            .unwrap_or(rdep_key);
+
+        if let Some(rdep_node) = db.get_package_by_name(rdep_name)? 
{ + if rdep_node.explicit { + println!( + "{}-> {}-{} (explicitly installed)", + indent, rdep_node.id.name, rdep_node.id.version + ); + chain.push(rdep_name.to_string()); + } else { + println!( + "{}-> {}-{} (dependency)", + indent, rdep_node.id.name, rdep_node.id.version + ); + chain.push(rdep_name.to_string()); + // Avoid infinite recursion + if depth < 10 { + let sub_chain = find_dependency_chain(rdep_name, db, depth + 1)?; + chain.extend(sub_chain); + } + } + } else { + println!("{}-> {} (not found in DB)", indent, rdep_key); + } + } + + Ok(chain) +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_find_dependency_chain_empty() { + let db_file = tempfile::NamedTempFile::new().unwrap(); + let db = ZlDatabase::open(db_file.path()).unwrap(); + + let chain = find_dependency_chain("nonexistent", &db, 0).unwrap(); + assert!(chain.is_empty()); + } +} diff --git a/src/core/db/ops.rs b/src/core/db/ops.rs index be81146..bced33b 100644 --- a/src/core/db/ops.rs +++ b/src/core/db/ops.rs @@ -2,7 +2,7 @@ use std::path::Path; use redb::{Database, ReadableTable}; -use super::schema::{DEPENDENCIES, FILE_OWNERS, LIB_INDEX, PACKAGES, PINNED, PLUGIN_META}; +use super::schema::{DEPENDENCIES, FILE_OWNERS, HISTORY, LIB_INDEX, PACKAGES, PINNED, PLUGIN_META}; use crate::core::graph::model::PackageNode; use crate::error::{ZlError, ZlResult}; @@ -34,6 +34,8 @@ impl ZlDatabase { .map_err(|e| ZlError::Config(format!("Failed to init PLUGIN_META table: {}", e)))?; txn.open_table(PINNED) .map_err(|e| ZlError::Config(format!("Failed to init PINNED table: {}", e)))?; + txn.open_table(HISTORY) + .map_err(|e| ZlError::Config(format!("Failed to init HISTORY table: {}", e)))?; } txn.commit() .map_err(|e| ZlError::Config(format!("Failed to commit init: {}", e)))?; @@ -518,6 +520,80 @@ impl ZlDatabase { } Ok(pinned) } + + // ── History ── + + /// Record a history entry (install, remove, upgrade, etc.) 
+    pub fn record_history(&self, entry: &HistoryEntry) -> ZlResult<()> {
+        let key = format!("{:020}", entry.timestamp);
+        let value = serde_json::to_vec(entry)?;
+        let txn = self
+            .db
+            .begin_write()
+            .map_err(|e| ZlError::Config(e.to_string()))?;
+        {
+            let mut table = txn
+                .open_table(HISTORY)
+                .map_err(|e| ZlError::Config(e.to_string()))?;
+            table
+                .insert(key.as_str(), value.as_slice())
+                .map_err(|e| ZlError::Config(e.to_string()))?;
+        }
+        txn.commit().map_err(|e| ZlError::Config(e.to_string()))?;
+        Ok(())
+    }
+
+    /// List history entries, newest first
+    pub fn list_history(&self, limit: usize) -> ZlResult<Vec<HistoryEntry>> {
+        let txn = self
+            .db
+            .begin_read()
+            .map_err(|e| ZlError::Config(e.to_string()))?;
+        let table = txn
+            .open_table(HISTORY)
+            .map_err(|e| ZlError::Config(e.to_string()))?;
+        let mut entries = Vec::new();
+
+        let iter = table
+            .iter()
+            .map_err(|e: redb::StorageError| ZlError::Config(e.to_string()))?;
+        for entry in iter {
+            let (_, v) = entry.map_err(|e: redb::StorageError| ZlError::Config(e.to_string()))?;
+            let he: HistoryEntry = serde_json::from_slice(v.value())?;
+            entries.push(he);
+        }
+
+        entries.reverse(); // newest first
+        entries.truncate(limit);
+        Ok(entries)
+    }
+}
+
+/// A single history record
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
+pub struct HistoryEntry {
+    pub timestamp: u64,
+    pub action: HistoryAction,
+    pub packages: Vec<String>,
+}
+
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
+pub enum HistoryAction {
+    Install,
+    Remove,
+    Upgrade,
+    Rollback,
+}
+
+impl std::fmt::Display for HistoryAction {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            HistoryAction::Install => write!(f, "install"),
+            HistoryAction::Remove => write!(f, "remove"),
+            HistoryAction::Upgrade => write!(f, "upgrade"),
+            HistoryAction::Rollback => write!(f, "rollback"),
+        }
+    }
 }
 
 #[cfg(test)]
diff --git a/src/core/db/schema.rs b/src/core/db/schema.rs
index 38b45d4..8eca0b1 100644
--- a/src/core/db/schema.rs
+++ b/src/core/db/schema.rs
@@ -6,3 +6,4 @@ pub const LIB_INDEX: TableDefinition<&str, &str> = TableDefinition::new("lib_ind
 pub const DEPENDENCIES: TableDefinition<&str, &[u8]> = TableDefinition::new("dependencies");
 pub const PLUGIN_META: TableDefinition<&str, &[u8]> = TableDefinition::new("plugin_meta");
 pub const PINNED: TableDefinition<&str, &str> = TableDefinition::new("pinned");
+pub const HISTORY: TableDefinition<&str, &[u8]> = TableDefinition::new("history");
diff --git a/src/main.rs b/src/main.rs
index 9483dc2..884bc20 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -123,6 +123,13 @@ fn run(cli_args: cli::Cli) -> anyhow::Result<()> {
         cli::Commands::Switch(args) => cli::install::handle_switch(args, ctx.paths, ctx.db)?,
         cli::Commands::SelfUpdate => unreachable!("handled above"),
         cli::Commands::Env(cmd) => cli::env::handle(cmd, ctx.paths, &config, ctx.profile)?,
+        cli::Commands::Run(args) => cli::run::handle(args, &ctx)?,
+        cli::Commands::History(cmd) => cli::history::handle(cmd, &ctx)?,
+        cli::Commands::Why(args) => cli::why::handle(args, ctx.db)?,
+        cli::Commands::Doctor => cli::doctor::handle(&ctx)?,
+        cli::Commands::Size(args) => cli::size::handle(args, ctx.db)?,
+        cli::Commands::Diff(args) => cli::diff::handle(args, &ctx)?,
+        cli::Commands::Audit(args) => cli::audit::handle(args, ctx.db)?,
     }
 
     Ok(())
diff --git a/src/plugin/mod.rs b/src/plugin/mod.rs
index e491aee..7ebc669 100644
--- a/src/plugin/mod.rs
+++ b/src/plugin/mod.rs
@@ -76,4 +76,59 @@ impl PluginRegistry {
             None => self.plugins.first().map(|p| p.as_ref()),
         }
     }
+
+    /// List all registered plugin names and their display names
+    #[allow(dead_code)]
+    pub fn list_info(&self) -> Vec<PluginInfo> {
+        self.plugins
+            .iter()
+            .map(|p| PluginInfo {
+                name: p.name().to_string(),
+                display_name: p.display_name().to_string(),
+                builtin: true,
+            })
+            .collect()
+    }
+}
+
+/// Metadata about a plugin (for registry listing)
+#[allow(dead_code)]
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
+pub struct PluginInfo {
+    pub name: String,
+    pub display_name: String,
+    pub builtin: bool,
+}
+
+/// Remote plugin registry: fetch available plugins from a URL.
+/// Returns a list of PluginInfo for plugins available in the registry.
+#[allow(dead_code)]
+pub fn fetch_remote_registry(registry_url: &str) -> ZlResult<Vec<PluginInfo>> {
+    let client = reqwest::blocking::Client::builder()
+        .user_agent("zero-layer/0.1")
+        .timeout(std::time::Duration::from_secs(15))
+        .build()
+        .unwrap_or_default();
+
+    let resp = client
+        .get(registry_url)
+        .send()
+        .map_err(|e| crate::error::ZlError::Plugin {
+            plugin: "registry".into(),
+            message: format!("Failed to fetch registry: {}", e),
+        })?;
+
+    if !resp.status().is_success() {
+        return Err(crate::error::ZlError::Plugin {
+            plugin: "registry".into(),
+            message: format!("Registry returned HTTP {}", resp.status()),
+        });
+    }
+
+    let plugins: Vec<PluginInfo> = resp.json().map_err(|e| crate::error::ZlError::Plugin {
+        plugin: "registry".into(),
+        message: format!("Failed to parse registry response: {}", e),
+    })?;
+
+    Ok(plugins)
 }
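
Aside (not part of the patch): `record_history` keys each entry with `format!("{:020}", entry.timestamp)`. The sketch below shows why the zero-padding matters — string keys iterate in lexicographic order, so padding every timestamp to the 20 decimal digits of `u64::MAX` makes lexicographic order coincide with chronological order, which `list_history` relies on before reversing. `pad_key` is a hypothetical helper, and a `BTreeMap` stands in for the sorted key table.

```rust
use std::collections::BTreeMap;

/// Hypothetical helper mirroring the key scheme in `record_history`:
/// zero-pad the timestamp to 20 digits (u64::MAX has 20 decimal digits).
fn pad_key(ts: u64) -> String {
    format!("{:020}", ts)
}

fn main() {
    // A BTreeMap<String, _> iterates keys lexicographically, like a
    // string-keyed table would.
    let mut table: BTreeMap<String, u64> = BTreeMap::new();
    for ts in [9u64, 100, 25] {
        table.insert(pad_key(ts), ts);
    }

    // With padded keys, lexicographic iteration is chronological.
    let ordered: Vec<u64> = table.values().copied().collect();
    assert_eq!(ordered, vec![9, 25, 100]);

    // Unpadded decimal strings would sort wrongly: "100" < "25" < "9".
    assert!("100" < "25" && "25" < "9");

    println!("ordered: {:?}", ordered);
}
```

A fixed-width key also keeps the table's ordering stable without a custom comparator, which is why the padded-string key is simpler here than storing the raw integer bytes.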