diff --git a/.claude/agents/architect.md b/.claude/agents/architect.md
new file mode 100644
index 0000000..ca6c9ec
--- /dev/null
+++ b/.claude/agents/architect.md
@@ -0,0 +1,48 @@
+---
+name: architect
+description: Software architect for designing feature architectures, reviewing system design, and planning implementation strategies
+subagent_type: general-purpose
+model: opus
+skills: gh-issues
+---
+
+You are a software architect on the opsfile project. This project builds a CLI tool called `ops` (like make/Makefile but for live operations commands), written in Go.
+
+## Responsibilities
+
+- Design feature architectures before implementation begins and iterate on feedback from users and other agents
+  - Author design docs using the template (./docs/templates/feature-documentation.md)
+  - Consider existing functionality (./docs, existing code) and future proposed features (gh-issues) when proposing architectures
+  - Consider common design patterns (and the Go style/paradigm) for designs
+- Keep existing feature docs updated if out-of-cycle edits occur on that feature
+  - When keeping docs up to date, never add unimplemented features, tests, or additional tasks; just capture missing functionality and how it was designed
+- Research and evaluate external dependencies to ensure using them is worth the risk versus using the standard library
+- Review proposed changes for architectural impact and consistency
+- Identify potential design issues, coupling, and complexity risks
+
+## Design Principles
+
+- Read AGENTS.md and CONTRIBUTING.md for project conventions
+- KISS — readability over micro-optimization
+- Prefer the standard library; consider deps only if they meaningfully improve simplicity/security or require 33% less code to be written
+- Favor organizing code around domain-driven design when possible, MVC architecture when it makes sense
+- Favor idiomatic Go project structure and organization of code and files
+- Keep the internal/ package cohesive — avoid deep nesting or unnecessary abstraction +- Consider Opsfile backwards compatibility as a design consideration + + +## Traits + +- Strategic — think about how changes affect the system holistically +- Skeptical of complexity — push back on over-engineering or bugfix implementations that break best design practices. +- Communicative — state the approach clearly before any code is written +- Pragmatic — perfect is the enemy of good, but don't compromise on correctness + + +## Architecture Knowledge + +- Execution flow: main.go -> flag_parser -> opsfile_parser -> command_resolver -> executor +- Key types: OpsFlags, Args, OpsVariables, OpsCommand (all in `internal/`) +- Variable resolution uses a 4-level priority chain: Opsfile env-scoped -> shell env-scoped -> Opsfile unscoped -> shell unscoped +- Version embedding via ldflags at build time + diff --git a/.claude/agents/backend-engineer.md b/.claude/agents/backend-engineer.md new file mode 100644 index 0000000..e6fe2b1 --- /dev/null +++ b/.claude/agents/backend-engineer.md @@ -0,0 +1,48 @@ +--- +name: backend-engineer +description: Backend Go engineer for implementing core CLI logic, parsers, resolvers, and executors in the ops tool +subagent_type: general-purpose +--- + +You are a backend engineer on the opsfile project. This project builds a CLI tool called `ops` (like make/Makefile but for live operations commands), written in Go. 
+
+## Responsibilities
+
+- Implement features and bug fixes in the core Go codebase (`cmd/ops/`, `internal/`)
+- Execute assigned tasks from feature design docs (./docs)
+- Write clean, idiomatic Go following Google Go Style Decisions
+- Ensure all changes include appropriate tests
+- Run `make lint` and `make test` before considering work complete
+- Maintain the execution pipeline: flag parsing -> opsfile parsing -> command resolution -> execution
+
+## Code Standards
+
+- Read AGENTS.md and CONTRIBUTING.md for full project conventions before writing code
+- Check Go version in `go.mod` and use idiomatic features available at that version
+- Only introduce external dependencies if they are specified in the design doc or with user permission
+- KISS — readability over micro-optimization
+- Prefer the standard library; consider deps only if they meaningfully improve simplicity/security or require 33% less code to be written
+- Favor organizing code around domain-driven design when possible, MVC architecture when it makes sense
+- Favor idiomatic Go project structure and organization of code and files
+- Keep the internal/ package cohesive — avoid deep nesting or unnecessary abstraction +- Use early returns, indent error flow not the happy path +- Use `slices.Contains`, `slices.DeleteFunc`, `maps` package over manual loops +- Preallocate slices/maps when size is known: `make([]T, 0, n)` +- Wrap errors with context: `fmt.Errorf("doing X: %w", err)` + +## Traits + +- Pragmatic — prefer the simplest solution that works correctly and follows the design doc +- Disciplined — always test and lint before declaring work done +- Minimal — only change what's needed, avoid scope creep + +## Architecture Awareness + +- `cmd/ops/main.go` — entry point, finds nearest Opsfile, sequences the pipeline +- `internal/flag_parser.go` — parses ops-level flags and args +- `internal/opsfile_parser.go` — reads Opsfile, returns variables and commands +- `internal/command_resolver.go` — selects env block, resolves `$(VAR)` references with 4-level priority +- `internal/executor.go` — runs resolved shell lines with --dry-run/--silent support +- `internal/version.go` — version/commit vars overridden at build time via ldflags +- `docs` - feature requirement and architectural implementation docs + diff --git a/.claude/agents/frontend-engineer.md b/.claude/agents/frontend-engineer.md new file mode 100644 index 0000000..e85aa42 --- /dev/null +++ b/.claude/agents/frontend-engineer.md @@ -0,0 +1,36 @@ +--- +name: frontend-engineer +description: Frontend engineer for the GitHub Pages landing site (HTML/CSS) and CLI user experience +subagent_type: general-purpose +--- + +You are a frontend engineer on the opsfile project. This project builds a CLI tool called `ops`, and has a GitHub Pages static landing site. 
+ +## Responsibilities + +- Maintain and improve the GitHub Pages site in `docs/site/` (HTML + CSS) +- Ensure the site is responsive, accessible, and loads fast (no JS frameworks — static HTML/CSS only) +- Improve CLI output formatting and user-facing messages for clarity +- Update the curl-pipe installer script in `install/` when needed +- Ensure consistent branding and messaging between the site and README + +## Site Architecture + +- `docs/site/` — GitHub Pages static landing page deployed by `.github/workflows/pages.yml` +- Pure HTML + CSS, no JavaScript frameworks +- Must work without JS enabled + +## Standards + +- Read AGENTS.md and CONTRIBUTING.md for project conventions +- Semantic HTML5 elements +- Mobile-first responsive design +- Accessible — proper contrast, alt text, ARIA labels where needed +- Keep CSS minimal and maintainable — no utility framework bloat +- Test across viewport sizes before declaring work done + +## Traits + +- User-focused — think about what the end user sees and experiences +- Minimalist — less is more, avoid visual clutter +- Detail-oriented — spacing, alignment, and typography matter diff --git a/.claude/agents/qa.md b/.claude/agents/qa.md new file mode 100644 index 0000000..9aa13ff --- /dev/null +++ b/.claude/agents/qa.md @@ -0,0 +1,35 @@ +--- +name: qa +description: QA/Testing specialist focused on test coverage, edge cases, and quality assurance for the ops CLI tool +subagent_type: general-purpose +--- + +You are a QA engineer on the opsfile project. This project builds a CLI tool called `ops` (like make/Makefile but for live operations commands). 
+ +## Responsibilities + +- Review code changes for test coverage gaps +- Write and run tests (unit, integration, edge cases) +- Identify potential regressions from code changes +- Validate behavior against requirements in /docs +- Run `make test` and `make lint` to verify changes pass +- Flag untested edge cases, error paths, and boundary conditions +- Ensure tests follow the project's table-driven test style with `[]struct{ ... }` subtests + +## Testing Standards + +- Read AGENTS.md and CONTRIBUTING.md for project conventions before writing tests +- Tests must not pin to values that change between releases (version strings, timestamps) — validate shape/format instead +- Never lower quality or coverage of existing tests to make a broken feature pass +- Prefer table-driven tests for multiple input permutations +- Wrap errors with context: `fmt.Errorf("context: %w", err)` +- Use `errors.Is` / `errors.As` for error checking, not string comparison +- Test files in `internal/` follow the `*_test.go` naming convention +- Tests referencing example files in `examples/` must be updated when new examples are added + +## Traits + +- Skeptical — assume code is broken until proven otherwise +- Thorough — check boundary conditions, empty inputs, nil maps, error paths, and off-by-one scenarios +- Precise — reference specific test file and line numbers when reporting issues +- Constructive — suggest specific fixes, not just "this is broken" diff --git a/.claude/agents/tech-writer.md b/.claude/agents/tech-writer.md new file mode 100644 index 0000000..51caf4a --- /dev/null +++ b/.claude/agents/tech-writer.md @@ -0,0 +1,43 @@ +--- +name: tech-writer +description: Technical writer for documentation, README updates, feature docs, and user-facing content +subagent_type: general-purpose +--- + +You are a technical writer on the opsfile project. This project builds a CLI tool called `ops` (like make/Makefile but for live operations commands). 
+ +## Responsibilities + +- Write and maintain feature documentation in `docs/` +- Keep README.md accurate and up-to-date with new features +- Write clear, user-facing help text and error messages +- Create and maintain test plans in `docs/testplans/` +- Ensure examples in `examples/` have appropriate documentation +- Review CLI output for clarity and consistency + +## Documentation Structure + +- `docs/` — feature requirements and architecture docs +- `docs/testplans/` — test plans for each feature (manual and automated) +- `docs/site/` — GitHub Pages landing site (coordinate with frontend-engineer) +- `examples/` — reference Opsfiles (aws, k8s, azure, gcp, baremetal, local) +- `README.md` — primary user-facing documentation +- `CONTRIBUTING.md` — developer setup and contribution guidelines +- `AGENTS.md` — agent/AI context (update directory structure if new dirs are created) + +## Writing Standards + +- Read AGENTS.md and CONTRIBUTING.md for project conventions +- Write for the target audience: developers who use make/Makefile and want something similar for operations +- Use concrete examples over abstract descriptions +- Keep sentences short and direct — no filler words +- Use consistent terminology: "Opsfile" (capital O), "ops" (lowercase for the CLI command) +- Code examples should be copy-pasteable and actually work +- Follow existing doc formatting patterns in the repo + +## Traits + +- Clear — explain concepts in the simplest terms possible +- Accurate — verify claims against actual code behavior +- Empathetic — anticipate what users will find confusing +- Concise — every word should earn its place diff --git a/.claude/skills/gh-team-build/SKILL.md b/.claude/skills/gh-team-build/SKILL.md new file mode 100644 index 0000000..7255544 --- /dev/null +++ b/.claude/skills/gh-team-build/SKILL.md @@ -0,0 +1,84 @@ +--- +name: gh-team-build +description: List open GitHub issues for the current repo, or load a specific issue into context by number. 
Use with no arguments to list the top 20 open issues. Use with an integer issue number to retrieve full details of that issue for development context.
+allowed-tools: Bash(gh issue list:*), Bash(gh issue view:*), AskUserQuestion, TeamCreate, TaskCreate, TaskUpdate, TaskList, TaskGet, SendMessage, Agent
+disable-model-invocation: true
+user-invocable: true
+---
+
+## Arguments
+
+$ARGUMENTS
+
+## Your task
+
+Check the Arguments section above and do exactly one of the following:
+
+**Case 1 — Arguments is empty:**
+Run `gh issue list --limit 20 --state open` and display the results as a clean numbered list showing each issue's number and title. Do nothing else.
+
+**Case 2 — Arguments is an integer (issue number):**
+Run `gh issue view $ARGUMENTS --comments` to retrieve the full issue details. Display the complete output including title, status, labels, assignees, body, and all comments. Treat this as active context: summarize what the issue is asking for and note any relevant technical details that would inform implementation or investigation work. Then ask the user: **"Would you like to spin up a team to implement this issue?"**
+
+  **Case 2.a — The user says no to creating a team**
+  Stop and await further instruction.
+
+  **Case 2.b — The user says yes to creating a team**
+  First, remind them: **"This skill works best in tmux mode with claude-code team agents enabled"**.
+  Then ask the user: **"How many engineers would you like working on it?"**
+  Finally, follow the SDLC Team Workflow below, using the user's guidance on how many engineer agents to create for implementation steps.
+
+---
+
+## SDLC Team Workflow
+
+When the user approves spinning up a team, follow this software development lifecycle:
+
+### Step 1: Setup
+
+1. Determine a short branch name from the issue (following CONTRIBUTING.md conventions, e.g. `feat/issue-title` or `fix/issue-42`)
+2. Create a feature branch off `main`: `git checkout -b <branch-name> main`
+3. Push the branch so worktrees can use it: `git push -u origin <branch-name>`
+4. Create the agent team: use TeamCreate with a descriptive team name based on the issue
+
+### Step 2: Create initial tasks
+
+Create tasks using TaskCreate for the SDLC phases:
+
+1. **Design task** — Current session assumes the role of `architect` (`subagent_type: "architect"`, `isolation: "worktree"`): Research the codebase and design the architecture for this issue. Author a design doc in `./docs/` using the template `./docs/templates/feature-doc.md`. Reference the issue details. Present the design for approval.
+2. **QA design review task** — assigned to `qa`: Review the completed design doc and provide feedback on potential quality or UX issues with the architecture. Write a test case document in `./docs/testplans` following the template `./docs/templates/test-plan.md`
+3. **Iterate on design doc task** — After the `qa` review is complete, iterate on the design doc with any QA feedback.
+4. **User feedback** — Present the design doc to the user with a summary and ask for feedback/updates before implementation.
+5. **Implementation tasks** — assigned to `backend-1` and `backend-2`: MUST NOT start implementation before user approval. Implement the feature according to the approved design doc. Split work logically (e.g., core logic vs CLI wiring, or by component). Run `make lint` and `make test` before marking complete.
+6. **QA signoff task** — assigned to `qa` after implementation is complete: Review all code changes, write additional tests for edge cases, run the full test suite, and validate behavior against the design doc requirements. Report any issues found.
+
+Set up dependencies: implementation tasks are blocked by the design task. The QA signoff task is blocked by the implementation tasks.
+
+### Step 3: Spawn the team
+
+Spawn agents using the Agent tool. All agents that modify code or docs MUST use `isolation: "worktree"` so they work in separate worktrees on the same feature branch:
+
+1. **qa** — `subagent_type: "qa"`, `isolation: "worktree"`. Instruct to wait for the design and implementation tasks to complete, then review all changes, run the tests in the test plan, and validate. Tell them their task ID.
+
+2. Spin up N additional engineers based on the user's previous "How many engineers?" response. The subagent_type should default to backend-engineer unless otherwise specified.
+
+  **[type]-eng-[1]** — `subagent_type: "backend-engineer"`, `isolation: "worktree"`. Instruct to wait for user approval of the design task. Provide the design doc and test plan, then claim and work on their implementation task. Tell them their task ID and to check TaskList for when the design is approved and unblocked.
+
+  **[type]-eng-[n]** — `subagent_type: "[type]-engineer"`, `isolation: "worktree"`. Same instructions as the first engineer but for the next parallelizable implementation task. Tell them their task ID.
+
+All agents should be spawned with `run_in_background: true`.
+
+### Step 4: Coordinate
+
+- When the initial design task is complete, notify QA to review
+- When QA has completed the design review and the design update is complete, notify the user for feedback and signoff
+- When implementation tasks complete, notify QA to begin
+- When QA completes, report final status to the user and create a pull request on GitHub
+- Shut down all agents when work is complete
+
+### Key rules
+
+- All agents modifying code/docs use `isolation: "worktree"` on the SAME feature branch
+- The architect uses `mode: "plan"` — their design MUST be approved by the user and `qa` before implementation begins
+- If any agent gets stuck or has questions, surface them to the user
+- Frontend engineers should only be assigned work that is website or graphical user interface related.
\ No newline at end of file diff --git a/.gitignore b/.gitignore index 79d3054..79d7ec0 100644 --- a/.gitignore +++ b/.gitignore @@ -31,7 +31,8 @@ go.work.sum # IDE and AI Agent Specific Settings .idea/ .vscode/ -.claude +.claude/worktrees +.claude/settings.local.json # Macos BS **/.DS_Store diff --git a/AGENTS.md b/AGENTS.md index b2fa179..7941a2e 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -25,6 +25,7 @@ ├── install/ Curl-pipe shell installer script for end-users ├── bin/ Compiled binary output (gitignored) ├── docs/ Feature requirements and implementation/architecture documentation + ├────── templates/ Markdown templates for feature docs, etc. ├────── testplans/ Test plans for each feature (manual and automated) ├────── site/ GitHub Pages static landing page (HTML + CSS) — deployed by pages.yml ├── .github/ Github Actions and PR/Issue templates diff --git a/docs/feature-cli-argument-parsing.md b/docs/feature-cli-argument-parsing.md index 5336d43..1a7b34a 100644 --- a/docs/feature-cli-argument-parsing.md +++ b/docs/feature-cli-argument-parsing.md @@ -1,68 +1,175 @@ +# Feature: CLI Argument and Flag Parsing -# CLI Arguments and Flags Parsing -The `ops` cli takes input from the user when invoked through flags (which modify how the tool runs) and arguments (which select what commands to invoke as defined in the `Opsfile`). This doc describes the requirements and implementation overview. +## 1. Problem Statement & High-Level Goals -## CLI Flag Parsing +### Problem +The `ops` CLI needs to accept user input in two forms: flags (which modify tool behavior) and positional arguments (which select the environment, command, and pass-through args as defined in the Opsfile). Without structured parsing, the tool cannot distinguish between its own operational flags and the user's intended command invocation. 
-### Functional Requirements +### Goals +- [x] Parse ops-level flags (`--dry-run`, `--silent`, `--directory`, `--version`, `--list`, `--help`) from the command line +- [x] Extract positional arguments into environment, command, and pass-through command args +- [x] Provide clear error messages for unknown flags or missing required arguments -- The CLI accepts the following flags before positional arguments: - - `-D ` / `--directory ` -- use the Opsfile in the given directory instead of discovering it - - `-d` / `--dry-run` -- print resolved commands without executing them - - `-s` / `--silent` -- suppress output during execution (also suppresses dry-run output) - - `-v` / `--version` -- print the ops version string and exit - - `-h` / `--help` / `-?` -- print usage text and exit -- Unknown flags produce an error -- Flags must appear before positional arguments; once positional arguments begin, remaining tokens are treated as positionals +### Non-Goals +- Sub-command style parsing (e.g. `ops run prod tail-logs`) — the CLI uses a flat `ops [flags] ` structure +- Parsing or validating Opsfile-defined command arguments — positional args after the command name are passed through as-is -### Implementation Overview +--- -Flag parsing is implemented in `internal/flag_parser.go` in the function `ParseOpsFlags()`. +## 2. Functional Requirements -**Data flow:** +### FR-1: Flag Parsing +The CLI accepts the following flags before positional arguments: +- `-D ` / `--directory ` — use the Opsfile in the given directory instead of discovering it +- `-d` / `--dry-run` — print resolved commands without executing them +- `-s` / `--silent` — suppress output during execution (also suppresses dry-run output) +- `-l` / `--list` — list available commands and environments from the Opsfile +- `-v` / `--version` — print the ops version string and exit +- `-h` / `--help` / `-?` — print usage text and exit -1. 
A `flag.FlagSet` named `"ops"` is created with `ContinueOnError` so the caller controls error handling -2. Each flag is registered twice (short and long form) using `fs.String`/`fs.Bool` and their `Var` counterparts sharing the same pointer -3. Before calling `fs.Parse()`, the `-?` token is checked manually via `slices.Contains` because `-?` is not a valid `flag` package flag name -4. `fs.Parse(osArgs)` processes the flags; `flag.ErrHelp` is translated to the package-level sentinel `ErrHelp` -5. The function returns an `OpsFlags` struct and the remaining positional arguments via `fs.Args()` +Unknown flags produce an error. Flag parsing stops at the first non-flag argument (interspersed mode disabled), so all remaining tokens are treated as positionals. -**Key types and symbols:** +### FR-2: Help Flag Handling +All help tokens (`-h`, `--help`, `-?`) are stripped from the argument list before parsing so that flags appearing after the help flag are still parsed (e.g. `--help -D /path` parses `-D` correctly). After parsing, if help was requested, the usage banner is printed and `ErrHelp` is returned. -- `OpsFlags` struct -- holds `Directory string`, `DryRun bool`, `Silent bool`, `Version bool` -- `ErrHelp` sentinel -- returned when help is requested; checked with `errors.Is` in `main()` -- `ParseOpsFlags(osArgs []string) (OpsFlags, []string, error)` -- the public API -- `fs.Usage` closure -- prints the usage banner and flag defaults to stderr - -## CLI (Non-Flag) Argument Parsing - -### Functional Requirements - -- After flags are consumed, the remaining positional arguments follow the structure: ` [command-args...]` +### FR-3: Positional Argument Parsing +After flags are consumed, the remaining positional arguments follow the structure: ` [command-args...]` - The first positional argument is the environment name (e.g. `prod`, `preprod`, `local`) - The second positional argument is the command name (e.g. 
`tail-logs`, `list-instance-ips`) - Any additional positional arguments are passthrough command args, preserved in order - Missing environment produces an error: `"missing environment argument"` - Missing command (only one positional) produces an error: `"missing command argument"` -### Implementation Overview +### Example Usage + +```bash +# Basic invocation +ops prod tail-logs + +# With flags +ops --dry-run prod tail-logs +ops -D /path/to/project prod deploy + +# Help +ops --help +ops -? + +# List commands +ops --list +ops -l + +# With pass-through args +ops prod search-logs "error" "--since=1h" +``` + +--- + +## 3. Non-Functional Requirements + +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Compatibility | Runs on Linux, macOS, and Windows | Cross-platform Go binary | +| NFR-2 | Reliability | Clear error messages for invalid flags and missing arguments | Uses `slog.Error` + non-zero exit | +| NFR-3 | Maintainability | Test coverage for flag and argument parsing edge cases | Table-driven tests in `flag_parser_test.go` | +| NFR-4 | Usability | Short and long flag forms for all flags | e.g. `-d` / `--dry-run` | + +--- + +## 4. Architecture & Implementation Proposal + +### Overview +Flag parsing uses `github.com/spf13/pflag` (POSIX-compatible flag library) with interspersed mode disabled so that flag parsing stops at the first positional argument. Positional argument parsing is a simple slice-index extraction. 
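+The slice-index extraction is small enough to sketch directly. The following is a minimal, standalone version based on the `Args` fields and error messages documented in this section; the `main` wrapper and its inputs are illustrative only, not the actual wiring in `cmd/ops/main.go`:

```go
package main

import (
	"errors"
	"fmt"
)

// Args holds the parsed positionals: ops <env> <command> [command-args...].
type Args struct {
	OpsEnv      string
	OpsCommand  string
	CommandArgs []string
}

// ParseOpsArgs splits the positionals left over after flag parsing into
// environment, command, and pass-through command args.
func ParseOpsArgs(nonFlagArgs []string) (Args, error) {
	if len(nonFlagArgs) < 1 {
		return Args{}, errors.New("missing environment argument")
	}
	if len(nonFlagArgs) < 2 {
		return Args{}, errors.New("missing command argument")
	}
	return Args{
		OpsEnv:      nonFlagArgs[0],
		OpsCommand:  nonFlagArgs[1],
		CommandArgs: nonFlagArgs[2:], // preserved in order, passed through as-is
	}, nil
}

func main() {
	args, err := ParseOpsArgs([]string{"prod", "search-logs", "error", "--since=1h"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(args.OpsEnv, args.OpsCommand, args.CommandArgs)
	// → prod search-logs [error --since=1h]
}
```

+Because the function only sees positionals, it stays independent of the flag library; `--list` and `--version` can short-circuit in `main` before this validation ever runs.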
+ +### Component Design +All parsing logic lives in `internal/flag_parser.go` and exposes two public functions: +- `ParseOpsFlags` — handles flag extraction and returns remaining positionals +- `ParseOpsArgs` — splits positionals into environment, command, and command args + +### Data Flow +``` +os.Args[1:] -> ParseOpsFlags() -> (OpsFlags, positionals, error) + | + positionals -> ParseOpsArgs() -> (Args, error) +``` + +#### Sequence Diagram +``` +User CLI Input + │ + ▼ +ParseOpsFlags(osArgs, usageOutput) + │ + ├─ Strip help tokens (-h, --help, -?) + ├─ pflag.Parse(filtered args) + ├─ If help requested → print usage, return ErrHelp + └─ Return OpsFlags + remaining positionals + │ + ▼ + ParseOpsArgs(positionals) + │ + ├─ Validate len >= 2 + └─ Return Args{OpsEnv, OpsCommand, CommandArgs} +``` + +### Key Design Decisions +- **pflag over stdlib `flag`:** pflag provides POSIX-style `--long` and `-short` flags, `SetInterspersed(false)` to stop parsing at first positional, and `StringP`/`BoolP` helpers for dual short/long registration +- **Help token stripping:** Help flags are manually stripped before `pflag.Parse` so that flags appearing after `--help` are still parsed (e.g. `--help -D /path` correctly populates `Directory`) +- **`-?` manual handling:** `-?` is not a valid pflag name, so it is detected during the stripping phase alongside `-h` and `--help` +- **Separate Parse functions:** Flag parsing and positional parsing are separate functions to keep concerns isolated and allow `--list` and `--version` to short-circuit before positional validation + +### Files to Create / Modify + +| File | Action | Description | +|------|--------|-------------| +| `internal/flag_parser.go` | Exists | Flag and argument parsing logic | +| `internal/flag_parser_test.go` | Exists | Tests for parsing edge cases | +| `cmd/ops/main.go` | Exists | Wires `ParseOpsFlags` and `ParseOpsArgs` into the CLI pipeline | + +--- + +## 5. 
Alternatives Considered + +### Alternative A: Standard Library `flag` Package + +**Description:** Use Go's built-in `flag` package instead of `spf13/pflag`. + +**Pros:** +- Zero external dependencies +- Part of the standard library + +**Cons:** +- No native `--long-flag` support (uses `-flag` for both short and long) +- No `SetInterspersed(false)` — requires manual workarounds to stop flag parsing at positionals +- Requires sharing variable pointers to alias short/long forms + +**Why not chosen:** pflag provides a cleaner API for POSIX-style flags with minimal dependency cost and active community maintenance. The original implementation used stdlib `flag` but was migrated to pflag. + +--- + +## Open Questions +- [x] ~~Should `CommandArgs` be plumbed through to execution?~~ Field exists and is parsed; plumbing is a separate concern handled downstream. -Argument parsing is implemented in `internal/flag_parser.go` in the function `ParseOpsArgs()`. +--- -**Data flow:** +## 6. Task Breakdown -1. `ParseOpsArgs` receives the `[]string` slice returned as the second value from `ParseOpsFlags` -2. It checks `len(nonFlagArgs)` -- error if less than 1 (no env) or less than 2 (no command) -3. Returns an `Args` struct with fields populated from slice positions: - - `OpsEnv = nonFlagArgs[0]` - - `OpsCommand = nonFlagArgs[1]` - - `CommandArgs = nonFlagArgs[2:]` +*This feature is fully implemented. 
Tasks listed retrospectively.* -**Key types and symbols:** +### Phase 1: Foundation +- [x] Define `OpsFlags` and `Args` types +- [x] Implement `ParseOpsFlags` with short/long flag registration +- [x] Implement `ParseOpsArgs` with positional extraction +- [x] Write unit tests for flag and argument parsing -- `Args` struct -- holds `OpsEnv string`, `OpsCommand string`, `CommandArgs []string` -- `ParseOpsArgs(nonFlagArgs []string) (Args, error)` -- the public API -- Note: `CommandArgs` is currently parsed but not yet plumbed through to command execution (the field exists for future use) +### Phase 2: Integration +- [x] Wire `ParseOpsFlags` and `ParseOpsArgs` into `cmd/ops/main.go` +- [x] Handle `ErrHelp` and `--version` short-circuits in main +- [x] Add `--list` flag for command listing +### Phase 3: Polish +- [x] Add `-?` help alias support +- [x] Ensure help flag doesn't prevent other flags from being parsed +- [x] Migrate from stdlib `flag` to `spf13/pflag` for POSIX compatibility +--- diff --git a/docs/feature-command-execution.md b/docs/feature-command-execution.md index 7a67e17..5e60b19 100644 --- a/docs/feature-command-execution.md +++ b/docs/feature-command-execution.md @@ -1,43 +1,40 @@ -# Command Execution +# Feature: Command Execution -When the use passes in arguments to the `ops` cli. It must first map the user's input to the commands as defined in the `Opsfile`. If it -cannot find the command, it will fail. Once, successfully mapped it must execute the resolved commands in the shell for the user, substituting -any variables values defined in the `Opsfile` into the command. -## Command Resolution +## 1. Problem Statement & High-Level Goals -### Functional Requirements +### Problem +After the user specifies an environment and command via the CLI, `ops` must map that input to the correct set of shell lines defined in the Opsfile, substitute any variable references, and execute the resulting commands in the user's shell. 
Without this, the Opsfile is just a static document with no runtime behavior. +### Goals +- [x] Resolve the correct environment-specific command block from the parsed Opsfile, with fallback to `default` +- [x] Substitute `$(VAR_NAME)` references using the four-level variable priority chain +- [x] Execute resolved shell lines sequentially in the user's shell with full terminal interactivity +- [x] Support `--dry-run` to print resolved commands without executing and `--silent` to suppress output + +### Non-Goals +- Parallel command execution — lines are always run sequentially +- Built-in retry or rollback logic — the tool runs commands as-is +- Remote execution — all commands run locally via the user's shell + +--- + +## 2. Functional Requirements + +### FR-1: Command Resolution - Given a command name and environment, the resolver selects the correct set of shell lines to execute - Environment selection follows this priority: 1. Exact match on the requested environment (e.g. `prod`) 2. Fallback to the `default` environment block if the specific one is absent 3. Error if neither the specific environment nor `default` exists - If the command name does not exist in the parsed commands map, an error is returned -- After environment selection, all `$(VAR_NAME)` tokens in the shell lines are substituted with variable values (see Variable Substitution feature) - -### Implementation Overview - -Command resolution is implemented in `internal/command_resolver.go`. - -**Data flow:** -1. `Resolve(commandName, env string, commands map[string]OpsCommand, vars OpsVariables) (ResolvedCommand, error)` is the public entry point -2. It looks up the command by name in the `commands` map -- error if not found -3. Calls `selectLines(cmd OpsCommand, env string) ([]string, error)` which checks `cmd.Environments[env]` first, then `cmd.Environments["default"]` -4. Iterates over the selected shell lines and calls `substituteVars` on each one -5. 
Returns a `ResolvedCommand{Lines: []string}` containing the fully substituted shell lines - -**Key types and symbols:** - -- `ResolvedCommand` struct -- holds `Lines []string`, the final shell lines ready for execution -- `Resolve()` -- orchestrates lookup, environment selection, and variable substitution -- `selectLines()` -- implements the env-then-default fallback logic - -## Shell Execution - -### Functional Requirements +### FR-2: Variable Substitution +- After environment selection, all `$(VAR_NAME)` tokens in the shell lines are substituted using the four-level variable priority chain (see feature-variable-substitution.md for details) +- Non-identifier tokens inside `$(...)` (e.g. shell subcommands like `$(shell ...)`) are passed through unchanged +- Unclosed `$(` without a matching `)` is treated as a literal string +### FR-3: Shell Execution - Resolved shell lines are executed sequentially, one at a time - Each line is run via the user's `$SHELL` environment variable; if unset, `/bin/sh` is used as the fallback - Each command is invoked as ` -c ` @@ -46,20 +43,182 @@ Command resolution is implemented in `internal/command_resolver.go`. - The exit code from a failed command is propagated as the exit code of the `ops` process itself - If no shell lines are provided (empty list), execution is a no-op -### Implementation Overview - -Shell execution is implemented in `internal/executor.go`. 
- -**`Execute(lines []string, shell string) error`:** +### FR-4: Dry-Run and Silent Modes +- When `--dry-run` is set, resolved commands are printed to stdout instead of being executed +- When both `--dry-run` and `--silent` are set, nothing is printed and no commands are executed +- Dry-run and silent flag handling is implemented in `cmd/ops/main.go`, not in the executor itself + +### Example Usage + +Given an Opsfile: +``` +[vars] +CLUSTER=my-cluster + +[command.tail-logs] +prod: kubectl logs -f deployment/app --context=$(CLUSTER) +default: echo "use 'ops prod tail-logs' for production logs" +``` + +```bash +# Executes the prod block with variable substitution +ops prod tail-logs +# → runs: kubectl logs -f deployment/app --context=my-cluster + +# Falls back to default block +ops local tail-logs +# → runs: echo "use 'ops prod tail-logs' for production logs" + +# Dry-run: prints resolved command without executing +ops --dry-run prod tail-logs +# → prints: kubectl logs -f deployment/app --context=my-cluster +``` + +--- + +## 3. Non-Functional Requirements + +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Performance | Minimal overhead — execution latency dominated by the shell commands themselves | No batching or pooling | +| NFR-2 | Compatibility | Uses `$SHELL` with `/bin/sh` fallback; works on Linux, macOS, and Windows | Shell selection in `main.go` | +| NFR-3 | Reliability | Fail-fast on first non-zero exit code with error propagation | Exit code forwarded to OS | +| NFR-4 | Interactivity | Full stdin/stdout/stderr passthrough for interactive commands | e.g. `ssh`, `vim`, `less` | +| NFR-5 | Maintainability | Resolver and executor are separate packages with distinct responsibilities | Testable in isolation | + +--- + +## 4. Architecture & Implementation Proposal + +### Overview +Command execution is split into two stages: resolution (selecting + substituting) and execution (running in shell). 
The resolver lives in `internal/command_resolver.go` and the executor in `internal/executor.go`. Dry-run/silent logic is handled at the call site in `main.go`. + +### Component Design + +**Command Resolver (`internal/command_resolver.go`):** +- `Resolve()` — orchestrates command lookup, environment selection, and variable substitution +- `selectLines()` — implements the env-then-default fallback logic +- `substituteVars()` — replaces `$(VAR_NAME)` tokens with resolved values +- `resolveVar()` — implements the four-level priority chain lookup -1. Iterates over `lines` sequentially -2. For each line, creates `exec.Command(shell, "-c", line)` -3. Wires `cmd.Stdin`, `cmd.Stdout`, `cmd.Stderr` to `os.Stdin`, `os.Stdout`, `os.Stderr` -4. Calls `cmd.Run()` -- blocks until the command completes -5. On error, returns `fmt.Errorf("running %q: %w", line, err)` which wraps the underlying `*exec.ExitError` -6. In `main()`, the error is unwrapped with `errors.As(err, &exitErr)` to extract and propagate the exit code via `os.Exit` +**Executor (`internal/executor.go`):** +- `Execute()` — runs shell lines sequentially via `exec.Command` -**Key symbols:** +### Data Flow +``` +(commandName, env, commands, vars) -> Resolve() -> ResolvedCommand{Lines} + | + main.go: dry-run check + | + Execute(lines, shell) + | + exec.Command(shell, "-c", line) +``` + +#### Sequence Diagram +``` +main.go + │ + ├─ Resolve(commandName, env, commands, vars) + │ │ + │ ├─ Lookup command in commands map + │ ├─ selectLines(cmd, env) + │ │ ├─ Try cmd.Environments[env] + │ │ └─ Fallback to cmd.Environments["default"] + │ └─ substituteVars(line, env, vars) for each line + │ └─ resolveVar(token, env, vars) + │ ├─ 1. vars[env+"_"+VAR] + │ ├─ 2. os.LookupEnv(env+"_"+VAR) + │ ├─ 3. vars[VAR] + │ └─ 4. 
os.LookupEnv(VAR) + │ + ├─ If --dry-run: print lines (unless --silent), return + │ + └─ Execute(lines, shell) + └─ For each line: + ├─ exec.Command(shell, "-c", line) + ├─ Wire stdin/stdout/stderr + └─ cmd.Run() — stop on first error +``` + +### Key Design Decisions +- **Separate resolver and executor:** Keeps variable substitution testable without shell execution, and allows dry-run to operate on resolved output without touching the executor +- **Fail-fast execution:** Stops on first non-zero exit code rather than continuing, matching `set -e` shell behavior and preventing cascading failures +- **Shell selection in main.go:** The executor receives the shell as a parameter rather than looking it up itself, keeping it a pure function of its inputs +- **Pass-through for non-identifier `$(...)` tokens:** Preserves shell subcommand syntax like `$(date +%s)` while substituting Opsfile variables + +### Files to Create / Modify + +| File | Action | Description | +|------|--------|-------------| +| `internal/command_resolver.go` | Exists | Command lookup, environment selection, variable substitution | +| `internal/executor.go` | Exists | Sequential shell line execution | +| `internal/command_resolver_test.go` | Exists | Tests for resolution, env fallback, and substitution | +| `internal/executor_test.go` | Exists | Tests for execution behavior | +| `cmd/ops/main.go` | Exists | Wires resolver and executor; handles dry-run/silent logic | + +--- + +## 5. Alternatives Considered + +### Alternative A: Single Execute Function with Embedded Resolution + +**Description:** Combine resolution and execution into one function that takes raw command name, env, and vars. 
+ +**Pros:** +- Simpler call site — one function call instead of two + +**Cons:** +- Harder to test resolution logic independently +- No clean way to support dry-run without executing +- Violates single-responsibility principle + +**Why not chosen:** Separating resolution from execution enables dry-run mode, isolated testing, and clearer code organization. + +--- + +### Alternative B: Shell Script Generation + +**Description:** Generate a temporary shell script from resolved lines and execute it as a single file. + +**Pros:** +- Single process invocation +- Could support shell features like `set -e` natively + +**Cons:** +- Temp file management and cleanup +- Harder to attribute errors to specific lines +- Security concerns with temp file permissions + +**Why not chosen:** Per-line execution is simpler, provides better error attribution, and avoids temp file management. + +--- + +## Open Questions +- (none currently) + +--- + +## 6. Task Breakdown + +*This feature is fully implemented. Tasks listed retrospectively.* + +### Phase 1: Foundation +- [x] Define `ResolvedCommand` type +- [x] Implement `selectLines` with env-then-default fallback +- [x] Implement `substituteVars` with identifier detection and pass-through for non-identifiers +- [x] Implement `resolveVar` with four-level priority chain +- [x] Write unit tests for command resolution + +### Phase 2: Integration +- [x] Implement `Execute` function with sequential shell line execution +- [x] Wire resolver and executor into `main.go` +- [x] Add shell selection logic (`$SHELL` with `/bin/sh` fallback) +- [x] Add exit code propagation via `errors.As` + `exec.ExitError` + +### Phase 3: Polish +- [x] Add dry-run support (print resolved lines without executing) +- [x] Add silent mode support (suppress dry-run output) +- [x] Write executor tests -- `Execute(lines []string, shell string) error` -- the public API -- Shell selection logic in `cmd/ops/main.go`: `os.Getenv("SHELL")` with `/bin/sh` fallback +--- diff --git 
a/docs/feature-opsfile-discovery.md b/docs/feature-opsfile-discovery.md index 3cf09b9..7adf0b4 100644 --- a/docs/feature-opsfile-discovery.md +++ b/docs/feature-opsfile-discovery.md @@ -1,28 +1,162 @@ -# Opsfile Discovery +# Feature: Opsfile Discovery -## Functional Requirements -- When the user runs `ops`, the tool must locate an `Opsfile` in the current working directory or the nearest parent directory -- The search walks upward from `cwd` toward the filesystem root, checking each directory for a file named `Opsfile` -- Directories named `Opsfile` are skipped -- only regular files match -- If no `Opsfile` is found in any ancestor directory, the tool exits with an error -- The user can bypass discovery entirely by passing `-D <dir>` / `--directory <dir>`, which uses the Opsfile in the specified directory instead +## 1. Problem Statement & High-Level Goals -## Implementation Overview +### Problem +When a user runs `ops` from any directory within a project, the tool needs to locate the project's `Opsfile` — similar to how `git` finds a `.git` directory by walking up the directory tree. Without this, users would need to always run `ops` from the exact directory containing the Opsfile, which is inconvenient for deeply nested project structures. -Discovery is implemented in `cmd/ops/main.go` in the function `getClosestOpsfilePath()`. +### Goals +- [x] Automatically locate the nearest `Opsfile` by walking up the directory tree from `cwd` +- [x] Skip directories named `Opsfile` (only match regular files) +- [x] Provide a `-D` / `--directory` flag to bypass discovery and specify an explicit directory +- [x] Exit with a clear error if no `Opsfile` is found in any ancestor directory -**Data flow:** +### Non-Goals +- Discovery does not search child/sibling directories — only the current directory and its ancestors +- Discovery does not support alternative filenames (e.g., `opsfile`, `.opsfile`) -1. `os.Getwd()` obtains the current working directory -2.
A loop calls `os.Stat(filepath.Join(currPath, "Opsfile"))` at each level -3. If the stat succeeds but the result is a directory (`file.IsDir()`), the entry is skipped and the walk continues upward -4. If the stat fails with `os.IsNotExist`, the path moves to `filepath.Dir(currPath)` (the parent) -5. The loop terminates when the current path equals its own parent (filesystem root) -- at that point an error is returned -6. On success, the directory containing the `Opsfile` is returned to `main()` +--- -**Key files and symbols:** +## 2. Functional Requirements -- `cmd/ops/main.go` -- `getClosestOpsfilePath() (string, error)` -- The constant `opsFileName` (value `"Opsfile"`) defines the target filename -- The `-D` / `--directory` flag in `main()` short-circuits discovery by providing the directory directly +### FR-1: Parent Directory Walk +When the user runs `ops`, the tool starts in `os.Getwd()` and checks for a file named `Opsfile`. If not found, it moves to the parent directory (`filepath.Dir`) and repeats. The walk terminates at the filesystem root. + +### FR-2: Skip Directories Named Opsfile +If an entry named `Opsfile` exists but is a directory (`file.IsDir()`), it is ignored and the walk continues upward. + +### FR-3: Directory Flag Override +The `-D` / `--directory` flag allows the user to specify a directory directly, bypassing the discovery walk entirely. When set, the tool uses the Opsfile in the specified directory. + +### FR-4: Error on Not Found +If the walk reaches the filesystem root without finding an Opsfile, the tool exits with the error message: `"could not find Opsfile in any parent directory"`. 
+ +### Example Usage + +Running `ops` from a subdirectory that doesn't contain an Opsfile: +```bash +$ cd /project/src/deep/nested +$ ops deploy +# discovers /project/Opsfile and runs the deploy command +``` + +Overriding discovery with `-D`: +```bash +$ ops -D /other/project deploy +# uses /other/project/Opsfile directly +``` + +Error when no Opsfile exists: +```bash +$ cd /tmp/empty +$ ops deploy +ERROR finding Opsfile: could not find Opsfile in any parent directory +``` + +--- + +## 3. Non-Functional Requirements + +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Performance | Discovery should complete in negligible time relative to command execution | Directory walks are bounded by filesystem depth | +| NFR-2 | Compatibility | Works on Linux, macOS, and Windows | Uses `filepath` for cross-platform path handling | +| NFR-3 | Reliability | Graceful error for missing Opsfile and stat failures | Wraps errors with `fmt.Errorf` context | +| NFR-4 | Maintainability | Test coverage for found-in-cwd, found-in-parent, not-found, and directory-skip cases | Tests in `cmd/ops/main_test.go` | + +--- + +## 4. Architecture & Implementation Proposal + +### Overview +Discovery is implemented as a single function `getClosestOpsfilePath()` in `cmd/ops/main.go`. A helper function `resolveOpsfileDir()` wraps the logic to prefer the `-D` flag when set. The constant `opsFileName` (`"Opsfile"`) defines the target filename. + +### Component Design +- **`getClosestOpsfilePath()`** — Core discovery logic. Walks the directory tree upward using `os.Stat` and `filepath.Dir`. +- **`resolveOpsfileDir(flagDir)`** — Thin wrapper that returns `flagDir` if non-empty, otherwise delegates to `getClosestOpsfilePath()`. +- **`main()`** — Calls `resolveOpsfileDir` (or `getClosestOpsfilePath` directly when the `-D` flag is not set) and passes the resulting directory to the parser pipeline. 
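+A minimal sketch of the components described above, using the function names, signature, and error message from this document; the exact code in `cmd/ops/main.go` may differ in details:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"os"
+	"path/filepath"
+)
+
+const opsFileName = "Opsfile"
+
+// getClosestOpsfilePath walks upward from the current working directory,
+// returning the first directory that contains a regular file named "Opsfile".
+func getClosestOpsfilePath() (string, error) {
+	curr, err := os.Getwd()
+	if err != nil {
+		return "", err
+	}
+	for {
+		info, err := os.Stat(filepath.Join(curr, opsFileName))
+		if err == nil && !info.IsDir() {
+			return curr, nil // regular file found; directories named Opsfile are skipped
+		}
+		parent := filepath.Dir(curr)
+		if parent == curr {
+			// current path equals its parent: we reached the filesystem root
+			return "", errors.New("could not find Opsfile in any parent directory")
+		}
+		curr = parent
+	}
+}
+
+// resolveOpsfileDir prefers an explicit -D/--directory value over discovery.
+func resolveOpsfileDir(flagDir string) (string, error) {
+	if flagDir != "" {
+		return flagDir, nil
+	}
+	return getClosestOpsfilePath()
+}
+
+func main() {
+	if dir, err := resolveOpsfileDir(""); err != nil {
+		fmt.Fprintln(os.Stderr, "ERROR finding Opsfile:", err)
+	} else {
+		fmt.Println("using Opsfile in", dir)
+	}
+}
+```
+
+The `parent == curr` check is what terminates the walk at the filesystem root on both Unix and Windows, since `filepath.Dir` of a root path returns the root itself.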
+ +### Data Flow +``` +os.Getwd() -> stat(currPath/Opsfile) -> [found file?] -> return directory + | | + v (not found / is dir) | + filepath.Dir(currPath) | + | | + v (at root?) | + return error <-----------------+ +``` + +### Key Design Decisions +- **Walk upward, not downward:** Matches the git/make convention where the tool is usable from any subdirectory within a project. Upward-only search is deterministic and fast. +- **Skip directories named Opsfile:** Prevents false positives if a user happens to have a directory with that name. +- **Single-function implementation:** The discovery logic is simple enough that it doesn't warrant its own package or file. + +### Files to Create / Modify + +| File | Action | Description | +|------|--------|-------------| +| `cmd/ops/main.go` | Existing | Contains `getClosestOpsfilePath()`, `resolveOpsfileDir()`, and the `opsFileName` constant | +| `cmd/ops/main_test.go` | Existing | Tests for found-in-cwd, found-in-parent, not-found, and directory-named-Opsfile-skipped | + +--- + +## 5. Alternatives Considered + +### Alternative A: Search Downward Into Subdirectories + +**Description:** Instead of walking upward, search the current directory and its children for an Opsfile. + +**Pros:** +- Could find Opsfiles in nested project structures + +**Cons:** +- Non-deterministic when multiple Opsfiles exist in different subdirectories +- Slower — recursive directory traversal +- Breaks the established convention used by git, make, and similar tools + +**Why not chosen:** Upward search is the standard pattern for project-root discovery tools and provides deterministic, fast results. + +--- + +### Alternative B: Separate Discovery Package + +**Description:** Extract discovery into its own `internal/discovery` package with a dedicated interface. 
+ +**Pros:** +- More testable in isolation +- Could support future discovery strategies (e.g., config files, environment variables) + +**Cons:** +- Over-engineered for a single, simple function +- Adds package overhead with no current benefit + +**Why not chosen:** KISS — the current single-function implementation is clear and sufficient. + +--- + +## Open Questions +- (none — feature is fully implemented) + +--- + +## 6. Task Breakdown + +> *Retrospective — all tasks completed in initial implementation.* + +### Phase 1: Foundation +- [x] Define `opsFileName` constant +- [x] Implement `getClosestOpsfilePath()` with upward directory walk +- [x] Implement directory-skip logic for entries named `Opsfile` + +### Phase 2: Integration +- [x] Wire `getClosestOpsfilePath()` into `main()` pipeline +- [x] Add `-D` / `--directory` flag to bypass discovery +- [x] Add `resolveOpsfileDir()` helper for flag-or-discovery logic + +### Phase 3: Polish +- [x] Write unit tests (found-in-cwd, found-in-parent, not-found, directory-skip) +- [x] Clear error messaging when Opsfile not found + +--- diff --git a/docs/feature-opsfile-parsing.md b/docs/feature-opsfile-parsing.md index e5f62c1..f45a8bd 100644 --- a/docs/feature-opsfile-parsing.md +++ b/docs/feature-opsfile-parsing.md @@ -1,55 +1,211 @@ -# Opsfile Parsing - -## Functional Requirements - -- The parser reads an Opsfile and produces two outputs: a map of variables and a map of commands -- **Variables** are declared at the top level as `NAME=value` lines - - Variable names are bare identifiers (letters, digits, hyphens, underscores) - - Values may be unquoted, double-quoted, or single-quoted - - Quoted values preserve `#` characters inside; unquoted values treat `#` preceded by whitespace as an inline comment - - Unclosed quotes are a parse error - - A `=` with no name on the left is a parse error -- **Commands** are declared as top-level lines ending with `:` (e.g. 
`tail-logs:`) - - Duplicate command names are a parse error - - Each command contains one or more **environment blocks**, declared as indented lines ending with `:` (e.g. ` default:`, ` prod:`) - - Environment headers must be bare identifiers followed by `:` -- **Shell lines** are indented lines inside an environment block - - **Backslash continuation**: a line ending with `\` is joined with the next line (the `\` is stripped) - - **Indent-based continuation**: a line indented deeper than the previous shell line is joined to it with a space - - A new line at the same indent level as the previous shell line starts a new shell command -- Blank lines and lines starting with `#` (after trimming) are skipped -- An Opsfile containing no variables and no commands is rejected as empty -- Line numbers are included in parse error messages - -## Implementation Overview - -Parsing is implemented in `internal/opsfile_parser.go` using a line-by-line state machine. - -**State machine:** - -The `parser` struct tracks state via a `parseState` enum with three values: - -- `topLevel` -- expecting variables (`NAME=value`) or command headers (`name:`) -- `inCommand` -- inside a command block, expecting environment headers (`env:`) -- `inEnvironment` -- inside an environment block, collecting shell lines - -A non-indented line always resets state to `topLevel`. 
- -**Key functions:** - -- `ParseOpsFile(path string) (OpsVariables, map[string]OpsCommand, error)` -- public entry point; opens the file, creates a `parser`, scans lines, validates, and returns results -- `processLine(raw string)` -- dispatches each raw line to the correct handler based on current state -- `handleTopLevel(line string)` -- routes to `parseVariable` or `startCommand` -- `handleInCommand(line string)` -- detects environment headers via `isEnvHeader` -- `handleInEnvironment(line string, rawIndent int)` -- handles shell lines, backslash continuation, indent continuation, and new environment headers -- `parseVariable(line string)` -- splits on `=`, calls `extractVariableValue` for comment/quote handling -- `extractVariableValue(raw string)` -- strips quotes or inline comments from the value portion -- `flushContinuation()` -- appends any buffered backslash-continuation fragments as a complete shell line -- `joinLastShellLine(suffix string)` -- implements indent-based continuation by appending to the last collected shell line -- `validate()` -- post-parse check that the file is not empty - -**Key types:** - -- `OpsVariables` -- `map[string]string` of variable name to value -- `OpsCommand` -- struct with `Name string` and `Environments map[string][]string` -- `parser` -- internal struct holding `variables`, `commands`, `state`, `currentCommand`, `currentEnv`, `continuationBuf`, `lastShellIndent`, `lineNum` +# Feature: Opsfile Parsing + + +## 1. Problem Statement & High-Level Goals + +### Problem +The `ops` tool needs to read an Opsfile — a structured text file defining variables and commands — and produce typed data structures that downstream stages (command resolution, execution) can consume. The file format must be simple enough for users to author by hand, while supporting features like multi-environment commands, inline comments, and multi-line shell statements. 
+ +### Goals +- [x] Parse an Opsfile into variables (`OpsVariables`) and commands (`map[string]OpsCommand`) +- [x] Support per-environment shell blocks within each command +- [x] Support backslash and indent-based line continuation for multi-line shell commands +- [x] Handle quoted and unquoted variable values with inline comment stripping +- [x] Preserve command and environment declaration order for display purposes +- [x] Produce clear, line-numbered error messages for malformed files +- [x] Capture command descriptions from preceding `#` comment lines + +### Non-Goals +- Variable resolution/substitution (handled by `command_resolver.go`) +- Executing shell lines (handled by `executor.go`) +- Supporting alternative file formats (YAML, TOML, JSON) + +--- + +## 2. Functional Requirements + +### FR-1: Variable Parsing +Variables are declared at the top level as `NAME=value` lines. Variable names are bare identifiers (letters, digits, hyphens, underscores). Values may be unquoted, double-quoted, or single-quoted. Quoted values preserve `#` characters inside; unquoted values treat `#` preceded by whitespace as an inline comment. Unclosed quotes and missing variable names are parse errors. + +### FR-2: Command and Environment Block Parsing +Commands are declared as non-indented lines ending with `:` (e.g., `tail-logs:`). Duplicate command names are a parse error. Each command contains one or more environment blocks declared as indented lines ending with `:` (e.g., ` default:`, ` prod:`). Environment headers must be bare identifiers followed by `:`. + +### FR-3: Shell Line Collection with Continuation +Shell lines are indented lines inside an environment block. 
Two continuation mechanisms are supported: +- **Backslash continuation**: a line ending with `\` is joined with the next line (the `\` is stripped) +- **Indent-based continuation**: a line indented deeper than the previous shell line is joined to it with a space + +A new line at the same indent level as the previous shell line starts a new shell command. + +### FR-4: Comments, Blank Lines, and Command Descriptions +Blank lines and lines starting with `#` (after trimming) are skipped during parsing. The first line of a comment block immediately preceding a command declaration is captured as that command's `Description` field. Blank lines reset the comment capture. + +### FR-5: Validation and Error Reporting +An Opsfile containing no variables and no commands is rejected as empty. All parse error messages include the 1-based line number where the error occurred. + +### FR-6: Ordered Output +The parser tracks and returns command declaration order and environment declaration order as separate `[]string` slices, used by the `--list` flag and help output for display. + +### Example Usage + +Given this Opsfile: +``` +REGION=us-east-1 +ACCOUNT=123456789 # AWS account + +# Show recent application logs +tail-logs: + default: + echo "tailing logs for $REGION" + prod: + aws logs tail /prod/app \ + --follow \ + --region $(REGION) +``` + +The parser produces: +- **Variables**: `{"REGION": "us-east-1", "ACCOUNT": "123456789"}` +- **Commands**: `{"tail-logs": {Name: "tail-logs", Description: "Show recent application logs", Environments: {"default": [...], "prod": [...]}}}` +- **Command order**: `["tail-logs"]` +- **Env order**: `["default", "prod"]` + +--- + +## 3. 
Non-Functional Requirements + +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Performance | Parse time should be negligible relative to command execution | Line-by-line streaming via `bufio.Scanner` | +| NFR-2 | Compatibility | Works on Linux, macOS, and Windows | No OS-specific file handling | +| NFR-3 | Reliability | All parse errors include line numbers and context | Uses `fmt.Errorf("line %d: %w", ...)` | +| NFR-4 | Security | No arbitrary code execution during parsing | Parser only reads and tokenizes text | +| NFR-5 | Maintainability | Comprehensive test suite covering edge cases | 30+ test cases in `opsfile_parser_test.go` | + +--- + +## 4. Architecture & Implementation Proposal + +### Overview +Parsing is implemented in `internal/opsfile_parser.go` using a line-by-line state machine. The `parser` struct holds mutable state as it scans through the file, dispatching each line to a handler based on the current parse state. + +### Component Design + +- **`ParseOpsFile(path)`** — Public entry point. Opens the file, creates a `parser`, scans lines via `bufio.Scanner`, calls `processLine()` for each, flushes any trailing continuation, validates, and returns results. +- **`parser` struct** — Internal state machine holding `variables`, `commands`, `state`, `currentCommand`, `currentEnv`, `continuationBuf`, `seenShellLine`, `lastShellIndent`, `lineNum`, `lastComment`, `order`, and `envOrder`. +- **`processLine(raw)`** — Dispatches each raw line based on `parseState`: `topLevel`, `inCommand`, or `inEnvironment`. Non-indented lines always reset state to `topLevel`. +- **`handleTopLevel(line)`** — Routes to `parseVariable()` (contains `=`) or `startCommand()` (ends with `:`). +- **`handleInCommand(line)`** — Detects environment headers via `isEnvHeader()`. +- **`handleInEnvironment(line, rawIndent)`** — Handles new env headers, backslash continuation, indent continuation, and regular shell lines. 
+- **`parseVariable(line)`** / **`extractVariableValue(raw)`** — Split on `=`, handle quoting and inline comment stripping. +- **`flushContinuation()`** — Appends any buffered backslash-continuation fragments as a complete shell line. +- **`joinLastShellLine(suffix)`** — Implements indent-based continuation by appending to the last collected shell line. +- **`validate()`** — Post-parse check that the file is not empty. +- **Helper functions**: `leadingWhitespace()`, `isEnvHeader()`, `isIdentifier()`, `isIdentChar()`, `indexComment()`. + +### Data Flow +``` +File (path) -> os.Open -> bufio.Scanner -> processLine() per line + | + [topLevel] -> parseVariable() or startCommand() + [inCommand] -> handleInCommand() -> startEnv() + [inEnvironment] -> handleInEnvironment() -> appendShellLine() + | + flushContinuation() -> validate() -> return (OpsVariables, map[string]OpsCommand, order, envOrder) +``` + +#### Sequence Diagram +``` +User File ParseOpsFile parser state machine + | | | + |--- open file --------->| | + | |--- scan lines -------->| + | | |--- topLevel: variable or command? + | | |--- inCommand: env header? + | | |--- inEnvironment: shell line / continuation? + | |<--- flush & validate --| + |<--- return results ----| | +``` + +### Key Design Decisions +- **State machine over recursive descent:** The Opsfile format is simple and line-oriented, making a state machine clearer and simpler than a tree-based parser. +- **Streaming line-by-line:** Uses `bufio.Scanner` rather than reading the entire file into memory, keeping memory usage proportional to line length, not file size. +- **Two continuation mechanisms:** Backslash continuation (`\`) for explicit multi-line and indent-based continuation for natural readability — both common in shell-adjacent tools. 
+- **Comment-based descriptions:** Captures the comment line immediately above a command declaration as metadata, enabling `--list` to show human-readable command summaries without requiring a separate description syntax. +- **Order tracking:** Command and environment declaration order are tracked separately from the map keys, preserving the author's intended display order. + +### Files to Create / Modify + +| File | Action | Description | +|------|--------|-------------| +| `internal/opsfile_parser.go` | Existing | Core parser: `ParseOpsFile()`, state machine, variable/command/env parsing, continuation logic | +| `internal/opsfile_parser_test.go` | Existing | 30+ test cases covering variables, commands, environments, continuations, edge cases, errors, ordering, and descriptions | + +--- + +## 5. Alternatives Considered + +### Alternative A: Regex-Based Parsing + +**Description:** Use regular expressions to match variable declarations, command headers, environment headers, and shell lines. + +**Pros:** +- Concise pattern matching for simple cases +- Familiar to many developers + +**Cons:** +- Difficult to handle stateful constructs like continuation lines and nested blocks +- Harder to produce line-numbered error messages +- Complex regexes become unreadable quickly + +**Why not chosen:** The state machine approach is more maintainable for a format with contextual meaning (indentation, continuation) and produces better error messages. + +--- + +### Alternative B: Use an Existing Config Parser (YAML/TOML Library) + +**Description:** Define the Opsfile format as YAML or TOML and use an existing parsing library. 
+ +**Pros:** +- Mature, well-tested parsing +- Standardized format familiar to users + +**Cons:** +- Adds an external dependency +- Neither YAML nor TOML naturally maps to the Opsfile's command/environment/shell-lines structure +- Less control over error messages and file format evolution +- YAML indentation semantics differ from the Opsfile convention + +**Why not chosen:** A custom format allows the simplest possible syntax for the use case (operations commands) without forcing users to learn YAML/TOML rules. Zero external dependencies is a project goal. + +--- + +## Open Questions +- (none — feature is fully implemented) + +--- + +## 6. Task Breakdown + +> *Retrospective — all tasks completed in initial implementation.* + +### Phase 1: Foundation +- [x] Define `OpsVariables`, `OpsCommand`, `parseState`, and `parser` types +- [x] Implement `ParseOpsFile()` entry point with file open and line scanning +- [x] Implement `processLine()` state dispatch and `handleTopLevel()` +- [x] Implement `parseVariable()` and `extractVariableValue()` with quote/comment handling + +### Phase 2: Command and Environment Parsing +- [x] Implement `startCommand()` with duplicate detection +- [x] Implement `handleInCommand()` and `startEnv()` for environment blocks +- [x] Implement `handleInEnvironment()` with shell line collection +- [x] Implement backslash continuation (`flushContinuation()`) +- [x] Implement indent-based continuation (`joinLastShellLine()`) + +### Phase 3: Polish +- [x] Add `validate()` for empty-file rejection +- [x] Add line-number context to error messages +- [x] Add command description capture from preceding comment lines +- [x] Track and return command order and environment order +- [x] Write comprehensive unit tests (30+ cases) + +--- diff --git a/docs/feature-supported-flags.md b/docs/feature-supported-flags.md index 62052eb..c14da3f 100644 --- a/docs/feature-supported-flags.md +++ b/docs/feature-supported-flags.md @@ -1,243 +1,243 @@ -# Supported CLI Flags +# 
Feature: Supported CLI Flags -## Dry-Run Mode -### Functional Requirements +## 1. Problem Statement & High-Level Goals -- When `--dry-run` or `-d` is passed, `ops` prints the fully resolved shell lines to stdout without executing them -- Each line is printed on its own line via `fmt.Println` -- If `--silent` / `-s` is also set, even the dry-run output is suppressed (nothing is printed and nothing is executed) -- Variable substitution and environment selection still occur in dry-run mode -- the user sees the exact commands that would be executed +### Problem +The `ops` CLI needs to support standard operational flags that modify its behavior — such as previewing commands without execution, suppressing output, printing version info, listing available commands, specifying an alternate Opsfile directory, and displaying usage help. Without these flags, users have no way to inspect, debug, or control `ops` behavior beyond running commands directly. -### Implementation Overview +### Goals +- [x] Provide `--dry-run` / `-d` to preview resolved commands without executing them +- [x] Provide `--silent` / `-s` to suppress command output during execution +- [x] Provide `--version` / `-v` to display the current ops version and build info +- [x] Provide `--help` / `-h` / `-?` to display usage information +- [x] Provide `--directory` / `-D` to specify an alternate Opsfile location +- [x] Provide `--list` / `-l` to list available commands and environments from the Opsfile -Dry-run mode is handled in `cmd/ops/main.go` in the `main()` function. +### Non-Goals +- Flags are not subcommand-specific; they apply globally to `ops` +- No interactive/prompt-based flag input +- No configuration file for default flag values -**Data flow:** +--- -1. After `Resolve()` produces a `ResolvedCommand`, `main()` checks `flags.DryRun` -2. If true and `!flags.Silent`, it loops over `resolved.Lines` and prints each with `fmt.Println` -3. The function returns immediately without calling `Execute()` +## 2. 
Functional Requirements -**Key symbols:** +### FR-1: Dry-Run Mode (`--dry-run` / `-d`) +When `--dry-run` or `-d` is passed, `ops` prints the fully resolved shell lines to stdout without executing them. Each line is printed on its own line via `fmt.Println`. Variable substitution and environment selection still occur — the user sees the exact commands that *would* be executed. If `--silent` is also set, even the dry-run output is suppressed (nothing is printed and nothing is executed). -- `OpsFlags.DryRun` -- the flag value from `ParseOpsFlags()` -- `OpsFlags.Silent` -- checked in combination with `DryRun` to suppress output -- The dry-run branch in `main()` at approximately lines 63-69 +### FR-2: Silent Mode (`--silent` / `-s`) +When `--silent` or `-s` is passed, command execution proceeds normally but stdout/stderr output is not explicitly suppressed at the `ops` level — the executor connects stdin/stdout/stderr to the terminal. Silent mode's primary effect is suppressing dry-run output when combined with `--dry-run`. -## Help / Usage Output +### FR-3: Version Reporting (`--version` / `-v`) +When `-v` or `--version` is passed, `ops` prints its version string and exits with code 0. The output format is: `ops version <version> (commit: <commit>) <GOOS>/<GOARCH>` (e.g. `ops version 0.8.5 (commit: abc1234) darwin/arm64`). The version and commit are set at build time via Go linker flags; defaults are `0.0.0-dev` and `none`. -### Functional Requirements +### FR-4: Help / Usage Output (`--help` / `-h` / `-?`) +When `-h`, `--help`, or `-?` is passed, `ops` prints a usage banner and flag descriptions to stderr, then exits with code 0. All help tokens are stripped from args before parsing so that flags appearing after the help flag (e.g. `--help -D /path`) are still parsed. The usage banner includes the version info, a description of what `ops` does, invocation syntax, an example, and the full list of available flags via `pflag.PrintDefaults()`. 
When an Opsfile is discoverable, help output also appends a listing of available commands and environments (best-effort; Opsfile errors are silently ignored). -- When `-h`, `--help`, or `-?` is passed, `ops` prints a usage banner and flag descriptions to stderr, then exits with code 0 -- The usage banner includes: - - A description of what `ops` does - - The invocation syntax: `ops [flags] <env> <command> [command-args]` - - Examples: `ops preprod open-dashboard`, `ops --dry-run prod tail-logs` - - A list of all available flags with their descriptions -- `-?` is handled specially before `flag.Parse` because it is not a valid Go `flag` package flag name +### FR-5: Directory Override (`--directory` / `-D`) +When `-D <dir>` or `--directory <dir>` is passed, `ops` uses the Opsfile from the specified directory instead of walking parent directories. This interacts with all other flags — `--list` and `--help` will use the specified directory's Opsfile. -### Implementation Overview +### FR-6: List Commands (`--list` / `-l`) +When `--list` or `-l` is passed, `ops` prints a summary of the Opsfile's available environments and commands, then exits with code 0. No command resolution or execution occurs — even if an environment and command are also supplied on the CLI. See the List Commands section below for full details. -Help output is implemented in `internal/flag_parser.go` within `ParseOpsFlags()`. +### Example Usage -**Data flow:** +**Dry-run preview:** +```bash +$ ops --dry-run prod tail-logs +aws logs tail /ecs/prod-app --follow --since 5m +``` -1. Before `fs.Parse()`, `slices.Contains(osArgs, "-?")` checks for the `-?` token; if found, calls `fs.Usage()` and returns `ErrHelp` -2. For `-h` and `--help`, the standard `flag` package triggers `fs.Usage()` and returns `flag.ErrHelp`, which is translated to the `ErrHelp` sentinel -3. `fs.Usage` is a closure that writes the banner text with `fmt.Fprint(fs.Output(), ...)` and then calls `fs.PrintDefaults()` for flag descriptions -4. 
In `main()`, `errors.Is(err, internal.ErrHelp)` catches the sentinel and calls `os.Exit(0)` +**Version output:** +```bash +$ ops --version +ops version 0.8.5 (commit: abc1234) darwin/arm64 +``` -**Key symbols:** +**Help output:** +```bash +$ ops --help +ops version 0.8.5 (commit: abc1234) darwin/arm64 -- `ErrHelp` -- package-level sentinel: `errors.New("help requested")` -- `fs.Usage` closure in `ParseOpsFlags()` -- defines the help text -- The `-?` pre-check using `slices.Contains` +The 'ops' command runs commonly-used live-operation commands... -## Version Reporting +Usage: ops [flags] [command-args] + ex. 'ops preprod open-dashboard' or 'ops --dry-run prod tail-logs' -### Functional Requirements +Flags: + -D, --directory directory use Opsfile in the given directory + -d, --dry-run print commands without executing + -l, --list list available commands and environments + -s, --silent execute without printing output + -v, --version print the ops version and exit +``` -- When `-v` or `--version` is passed, `ops` prints its version string and exits with code 0 -- The output format is: `ops version (/)` (e.g. `ops version 0.0.1 (darwin/arm64)`) -- The version is set at build time via Go linker flags; the default is `0.0.1` +**List output:** +```bash +$ ops --list -### Implementation Overview +Commands Found in [./Opsfile]: -Version reporting is split across two files. +Environments: + default local preprod prod -**`internal/version.go`:** +Commands: + show-profile Using AWS profile + tail-logs Tail CloudWatch logs + list-instance-ips List the private IPs of running instances +``` -- Declares `var Version = "0.0.1"` -- the default version -- Build-time override: `go build -ldflags="-X sean_seannery/opsfile/internal.Version=1.2.3" ./cmd/ops/` +--- -**`cmd/ops/main.go`:** +## 3. 
Non-Functional Requirements -- After parsing flags, checks `flags.Version` -- If true, prints `fmt.Printf("ops version %s (%s/%s)\n", internal.Version, runtime.GOOS, runtime.GOARCH)` and calls `os.Exit(0)` +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Performance | Flag parsing adds negligible overhead (< 1ms) | Uses pflag which is standard in Go CLIs | +| NFR-2 | Compatibility | All flags work on Linux, macOS, and Windows | Uses Go standard library + pflag | +| NFR-3 | Reliability | Unknown flags produce a clear error and exit non-zero | pflag returns an error for unrecognised flags | +| NFR-4 | Maintainability | New flags only require adding to `OpsFlags` struct and registering with pflag | Centralized in `flag_parser.go` | -## List Commands (`--list` / `-l`) — Issue #19 +--- -### Problem +## 4. Architecture & Implementation Proposal -There is no way to discover what commands and environments an Opsfile provides without opening the file and reading it manually. Under time-pressure (e.g. an incident), this adds unnecessary friction. +### Overview +Flag parsing is centralized in `internal/flag_parser.go` using the `pflag` library. The `ParseOpsFlags` function parses all ops-level flags from the raw CLI arguments and returns an `OpsFlags` struct plus remaining positional arguments. `cmd/ops/main.go` inspects the flags struct to determine which early-exit path to take (help, version, list) before proceeding to command resolution and execution. -### Functional Requirements +### Component Design -- When `--list` or `-l` is passed, `ops` prints a summary of the Opsfile's available environments and commands, then exits with code 0 -- **Environment listing:** all unique, explicitly defined environment names across every command are collected into a sorted, deduplicated list and printed under an "Environments" heading. 
The synthetic name `default` is included only if at least one command defines a `default:` block -- **Command listing:** every command is printed in Opsfile-declaration order with its description (if any) and the environments it supports -- **Descriptions from comments:** a `#` comment line immediately preceding a command declaration (e.g. `# Tail CloudWatch logs`) is captured as that command's description. Only the single comment line directly above the command name line is used. If no such comment exists, the description is left blank (no error) -- **No execution:** when `--list` is active, no command resolution or execution occurs — even if an environment and command are supplied on the CLI -- **`--help` integration:** when `-h`, `--help`, or `-?` is passed, after printing the standard flag/usage text, `ops` attempts to locate and parse the Opsfile and appends the same command listing. If the Opsfile cannot be found or parsed, the help output is still shown (silent failure for the listing portion only) -- **`--directory` interaction:** if `-D ` is combined with `--list`, the listing uses the Opsfile from the specified directory +**`internal/flag_parser.go`** — Defines `OpsFlags` struct, `ErrHelp` sentinel, `ParseOpsFlags()`, and `ParseOpsArgs()`. Uses `pflag.NewFlagSet` with `ContinueOnError` error handling and `SetInterspersed(false)` to stop flag parsing at the first positional argument. Help tokens (`-h`, `--help`, `-?`) are stripped before parsing to ensure all other flags are still processed. -### Example Output +**`internal/version.go`** — Declares `Version` and `Commit` variables with defaults (`0.0.0-dev` / `none`), overridden at build time via `-ldflags`. -Given the example Opsfile at `examples/Opsfile`: +**`internal/lister.go`** — `FormatCommandList()` writes a formatted summary of environments and commands to an `io.Writer`, enabling testability without touching stdout. 
+**`cmd/ops/main.go`** — Sequences the flag checks: help → version → parse Opsfile → list → parse args → resolve → dry-run check → execute. + +### Data Flow +``` +os.Args -> ParseOpsFlags() -> OpsFlags + positionals + | + ├─ ErrHelp? -> print usage + best-effort command listing -> exit 0 + ├─ Version? -> print version string -> exit 0 + ├─ List? -> ParseOpsFile -> FormatCommandList -> exit 0 + └─ otherwise -> ParseOpsArgs -> Resolve -> DryRun check -> Execute ``` -$ ops --list -Commands Found in [./relative/path/to/Opsfile]: +#### Sequence Diagram +``` +User -> main.go: os.Args[1:] +main.go -> flag_parser.go: ParseOpsFlags(args, nil) +flag_parser.go -> pflag: fs.Parse(filtered) +flag_parser.go --> main.go: OpsFlags, positionals, err + +alt ErrHelp + main.go -> flag_parser.go: (usage already printed by fs.Usage closure) + main.go -> opsfile_parser.go: ParseOpsFile (best-effort) + main.go -> lister.go: FormatCommandList (best-effort) + main.go -> os: Exit(0) +end + +alt flags.Version + main.go -> os: Printf version, Exit(0) +end + +alt flags.List + main.go -> opsfile_parser.go: ParseOpsFile + main.go -> lister.go: FormatCommandList(os.Stdout, ...) + main.go -> os: Exit(0) +end + +main.go -> flag_parser.go: ParseOpsArgs(positionals) +main.go -> command_resolver.go: Resolve(cmd, env, commands, vars) + +alt flags.DryRun + main.go -> os: Println each resolved line (unless Silent) +else + main.go -> executor.go: Execute(lines, shell) +end +``` -Environments: - default local preprod prod +### Key Design Decisions +- **pflag over stdlib `flag`:** pflag supports POSIX-style short flags (`-d`) and long flags (`--dry-run`) natively, which the standard `flag` package does not. This is the only external dependency in the project. +- **Help token stripping:** Help flags are manually stripped before `fs.Parse()` so that flags appearing after `--help` (e.g. `--help -D /path`) are still parsed. 
This ensures the `--directory` flag is available when rendering the best-effort command listing in help output. +- **`SetInterspersed(false)`:** Stops flag parsing at the first non-flag argument, matching stdlib `flag` behavior. This ensures positional arguments (env, command, command-args) are not misinterpreted as flags. +- **Best-effort help listing:** The `--help` path attempts to find and parse the Opsfile for the command listing but silently ignores errors, so help always works even without an Opsfile. -Commands: - show-profile Using AWS profile - tail-logs Tail CloudWatch logs - list-instance-ips List the private IPs of running instances -``` +### Files to Create / Modify -When no descriptions exist, the right-hand column is simply empty: +| File | Action | Description | +|------|--------|-------------| +| `internal/flag_parser.go` | Exists | Defines `OpsFlags`, `ErrHelp`, `ParseOpsFlags()`, `ParseOpsArgs()` | +| `internal/version.go` | Exists | Declares `Version` and `Commit` build-time variables | +| `internal/lister.go` | Exists | `FormatCommandList()` for `--list` and `--help` command listing | +| `cmd/ops/main.go` | Exists | Wires flag checks into the execution pipeline | +| `internal/flag_parser_test.go` | Exists | Tests for all flag parsing behavior | +| `internal/lister_test.go` | Exists | Tests for list output formatting | -``` -Commands: - show-profile - tail-logs - list-instance-ips -``` +--- -### Acceptance Criteria - -1. `-l` and `--list` both trigger the listing and exit 0 -2. Environments are deduplicated across all commands and listed in the order they appear in Opsfile -3. Commands are listed in the order they appear in the Opsfile (parser insertion order) -4. Description is extracted from the first line of a `# comment` block immediately above the `command-name:` declaration; for multi-line comment blocks, the first line (title/summary) is used as the description -5. Missing description produces a blank — no error -6. 
Passing `--list` alongside a command and environment does not execute anything -7. `--help` appends the command listing after the flag summary; Opsfile errors are silently ignored in this path -8. Unit tests for description extraction in the parser -9. Integration tests for `--list` output format (environments + commands + descriptions) -10. Integration test confirming `--help` shows command listing when an Opsfile is available -11. README.md, this doc, github site and the help/usage banner updated with the new flag -12. Example Opsfiles do not need changes — existing `#` comments already follow the convention - -### Implementation Plan - -The feature touches four areas: the parser (description capture), the flag parser (new flag), a new lister module (formatting), and `main.go` (wiring). - -#### 1. Parser — capture command descriptions (`internal/opsfile_parser.go`) - -- Add a `Description` field to `OpsCommand`: - ```go - type OpsCommand struct { - Name string - Description string // from # comment above declaration - Environments map[string][]string - } - ``` -- Add a `lastComment string` field to the `parser` struct. On each call to `processLine`: - - If the trimmed line starts with `#`, store the text (minus the `# ` prefix) in `lastComment` - - If the trimmed line is blank, clear `lastComment` (a blank line between comment and command breaks the association) - - When `startCommand()` is called, copy `lastComment` into the new command's `Description` and clear `lastComment` -- This is the only parser change. Comments are currently discarded at line 89 of `processLine` — the `#`-prefix check needs to happen before the early return, capturing the text before discarding the line from command parsing - -#### 2. 
Flag parser — add `--list` / `-l` (`internal/flag_parser.go`) - -- Add `List bool` to `OpsFlags` -- Register with pflag: `fs.BoolP("list", "l", false, "list available commands and environments")` -- Wire `*list` into the returned `OpsFlags` -- Update `fs.Usage` closure to include the new flag in the usage banner (pflag handles this automatically via `PrintDefaults`) - -#### 3. New lister module (`internal/lister.go`) - -Create a new file with a single public function: - -```go -// FormatCommandList writes a human-readable summary of environments and -// commands to w. Commands are printed in the order provided by cmds. -func FormatCommandList(w io.Writer, cmds []OpsCommand) -``` +## 5. Alternatives Considered -- **Why a slice, not a map?** Maps don't preserve insertion order. The parser should return an ordered slice (or `main.go` can build one from the map using the order tracked during parsing — see note below). -- **Environment collection:** iterate all commands, collect environment names into a `map[string]struct{}` set, sort the keys, and print them space-separated under an `Environments:` header -- **Command table:** compute the max command-name length for column alignment, then print each command name left-padded with two spaces followed by the description (if any) -- The function writes to an `io.Writer` so tests can capture output without touching stdout +### Alternative A: Standard Library `flag` Package -**Parser ordering note:** the current parser returns `map[string]OpsCommand`. To preserve declaration order for the listing, add an `order []string` slice to the `parser` struct, appending each command name in `startCommand()`. Expose this as a second return value from `ParseOpsFile` or embed order in a wrapper type. The simplest approach: return an additional `[]string` (command names in order) from `ParseOpsFile`: +**Description:** Use Go's built-in `flag` package instead of `pflag`. 
-```go -func ParseOpsFile(path string) (OpsVariables, map[string]OpsCommand, []string, error) -``` +**Pros:** +- Zero external dependencies +- Part of the standard library, always available -All existing callers pass `_` for the new return value, so this is backward-compatible at the call site. +**Cons:** +- No POSIX short flag support (`-d` requires manual aliasing) +- No built-in `-?` support (not a valid flag name) +- More boilerplate for short/long flag pairs -#### 4. Main wiring (`cmd/ops/main.go`) +**Why not chosen:** pflag provides a significantly better user experience with POSIX-style short flags and cleaner help output with minimal dependency cost. It is widely used in the Go ecosystem (used by cobra, kubectl, etc.). -**`--list` path:** +### Alternative B: Cobra Command Framework -``` -flags parsed → find Opsfile → ParseOpsFile → if flags.List → FormatCommandList(os.Stdout, ...) → os.Exit(0) -``` +**Description:** Use the Cobra framework for full CLI command/subcommand support. -Insert this check after `ParseOpsFile` succeeds and before `ParseOpsArgs`. This means `--list` does not require positional args. +**Pros:** +- Rich subcommand support, auto-generated help, shell completion +- Very popular in Go ecosystem +**Cons:** +- Heavyweight dependency for a single-command CLI +- Adds complexity that `ops` doesn't need (no subcommands) +- Would change the invocation syntax -**`--help` path:** -After the existing `errors.Is(err, internal.ErrHelp)` block (which currently calls `os.Exit(0)`), insert a best-effort listing: +**Why not chosen:** `ops` is intentionally a flat CLI (`ops [flags] <env> <command>`) — Cobra's subcommand model doesn't fit. pflag (which Cobra uses internally) provides the needed flag features without the overhead. 
-```go -if errors.Is(err, internal.ErrHelp) { - // Best-effort: try to show available commands - if dir, derr := resolveOpsfileDir(flags.Directory); derr == nil { - if _, cmds, order, perr := internal.ParseOpsFile(filepath.Join(dir, opsFileName)); perr == nil { - fmt.Fprintln(os.Stderr) - internal.FormatCommandList(os.Stderr, /* ordered cmds */) - } - } - os.Exit(0) -} -``` +--- -Note: the help listing writes to stderr (matching the existing help output destination). +## Open Questions +- [x] All current open questions resolved — feature is fully implemented -#### 5. Tests +--- -| Test file | What it covers | -|-----------|---------------| -| `internal/opsfile_parser_test.go` | Description extraction: comment directly above command, blank line gap clears description, no comment yields empty string, multi-line comments only use last line | -| `internal/lister_test.go` | `FormatCommandList` output format: column alignment, sorted environments, empty descriptions, single command, many commands | -| `internal/flag_parser_test.go` | `--list` and `-l` parsed correctly into `OpsFlags.List` | -| `cmd/ops/main_test.go` or integration test | End-to-end: `ops --list` against example Opsfile produces expected output; `ops --help` includes command listing; `ops --list prod tail-logs` does not execute | +## 6. Task Breakdown -#### 6. 
Documentation updates +### Phase 1: Foundation (completed) +- [x] Define `OpsFlags` struct with all flag fields +- [x] Implement `ParseOpsFlags()` with pflag registration +- [x] Implement `ParseOpsArgs()` for positional argument extraction +- [x] Implement help token stripping for `-h`/`--help`/`-?` +- [x] Write unit tests for flag parsing -- `docs/feature-supported-flags.md` — this section (already done) -- `README.md` — add `--list` to the flags table and add a usage example -- Help/usage banner in `flag_parser.go` — pflag auto-includes registered flags in `PrintDefaults()`, so no manual text change needed beyond the registration +### Phase 2: Integration (completed) +- [x] Wire flag checks into `main.go` execution pipeline +- [x] Implement dry-run path in `main.go` +- [x] Implement version reporting with build-time ldflags +- [x] Implement `--list` flag with `FormatCommandList()` +- [x] Implement best-effort command listing in `--help` output +- [x] Write tests for lister formatting -### Files Changed (summary) +### Phase 3: Polish (completed) +- [x] Ensure `--directory` interacts correctly with `--list` and `--help` +- [x] Update help/usage banner text +- [x] Update documentation -| File | Change | -|------|--------| -| `internal/opsfile_parser.go` | Add `Description` field to `OpsCommand`, capture `lastComment` in parser, return command order | -| `internal/flag_parser.go` | Add `List` to `OpsFlags`, register `--list`/`-l` flag | -| `internal/lister.go` | **New file** — `FormatCommandList()` | -| `cmd/ops/main.go` | Wire `--list` exit path, add command listing to `--help` path | -| `internal/opsfile_parser_test.go` | Tests for description capture and ordering | -| `internal/lister_test.go` | **New file** — tests for output formatting | -| `internal/flag_parser_test.go` | Tests for new flag | -| `README.md` | Document `--list` flag | -| `docs/feature-supported-flags.md` | This section | +--- diff --git a/docs/feature-variable-substitution.md 
b/docs/feature-variable-substitution.md index 8056bc4..cd723d5 100644 --- a/docs/feature-variable-substitution.md +++ b/docs/feature-variable-substitution.md @@ -1,98 +1,240 @@ -# Variable Substitution +# Feature: Variable Substitution -## Functional Requirements -- `$(VAR_NAME)` tokens in shell lines are replaced with their values from the Opsfile variables -- Only tokens whose content is a bare identifier (letters, digits, hyphens, underscores) are treated as Opsfile variable references -- Tokens containing spaces or non-identifier characters (e.g. `$(aws ec2 describe-instances)`) are passed through unchanged, preserving shell subcommand syntax -- Variable lookup uses environment-scoped priority: - 1. `env_VAR_NAME` is checked first (e.g. for env `prod` and variable `CLUSTER`, check `prod_CLUSTER`) - 2. `VAR_NAME` is checked as a fallback (unscoped) - 3. If neither is defined, an error is returned -- An unclosed `$(` (no matching `)`) is written through literally as `$(` +## 1. Problem Statement & High-Level Goals -## Implementation Overview +### Problem +Opsfile commands often need to reference configuration values that vary by environment (e.g. cluster names, regions, account IDs). Without variable substitution, users would have to duplicate commands across environments or rely entirely on shell environment variables with inconsistent syntax. Users also need the ability to use shell subcommands like `$(aws sts get-caller-identity)` without `ops` interfering with them. -Variable substitution is implemented in `internal/command_resolver.go` in two functions. +### Goals +- [x] Replace `$(VAR_NAME)` tokens in command lines with values from Opsfile-defined variables +- [x] Support environment-scoped variable resolution (e.g. 
`prod_CLUSTER` takes precedence over `CLUSTER`) +- [x] Fall back to shell environment variables when a variable is not defined in the Opsfile +- [x] Pass through non-identifier `$(...)` tokens unchanged to preserve shell subcommand syntax -**`substituteVars(line, env string, vars OpsVariables) (string, error)`:** +### Non-Goals +- No support for nested variable references (e.g. `$($(VAR))`) +- No expression evaluation inside `$(...)` — only bare identifiers are resolved +- No default/fallback values syntax (e.g. `$(VAR:-default)`) +- No variable escaping mechanism (e.g. `$$(VAR)` to produce a literal `$(VAR)`) -1. Scans `line` for `$(` markers using `strings.Index` -2. For each `$(...)` token, extracts the content between `$(` and `)` -3. Calls `isIdentifier(token)` to decide whether this is a variable reference or a shell subcommand -4. If identifier: calls `resolveVar` to look up the value and writes it to a `strings.Builder` -5. If not identifier: writes `$(`, the token, and `)` back unchanged -6. If no closing `)` is found, writes `$(` literally and continues scanning - -**`resolveVar(varName, env string, vars OpsVariables) (string, error)`:** +--- -1. Checks `vars[env+"_"+varName]` (scoped lookup) -2. Falls back to `vars[varName]` (unscoped lookup) -3. Returns an error if neither key exists +## 2. Functional Requirements -**Key symbols:** +### FR-1: Variable Reference Syntax +`$(VAR_NAME)` tokens in shell lines are replaced with their resolved values. Only tokens whose content is a bare identifier (letters, digits, hyphens, underscores) are treated as variable references. Tokens containing spaces or non-identifier characters (e.g. `$(aws ec2 describe-instances)`) are passed through unchanged, preserving shell subcommand syntax. 
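The identifier rule stated above (letters, digits, hyphens, underscores only) is what separates variable references from shell subcommands. A minimal sketch of that check, written as a standalone illustration rather than the project's actual helper:

```go
package main

import "fmt"

// isIdentChar reports whether c may appear in a variable name:
// letters, digits, hyphens, and underscores.
func isIdentChar(c byte) bool {
	return c == '-' || c == '_' ||
		('a' <= c && c <= 'z') ||
		('A' <= c && c <= 'Z') ||
		('0' <= c && c <= '9')
}

// isIdentifier reports whether a $(...) token is a bare identifier,
// i.e. a variable reference rather than a shell subcommand.
func isIdentifier(s string) bool {
	if s == "" {
		return false
	}
	for i := 0; i < len(s); i++ {
		if !isIdentChar(s[i]) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isIdentifier("AWS_REGION"))                  // true: variable reference
	fmt.Println(isIdentifier("aws sts get-caller-identity")) // false: spaces, so passthrough
}
```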
-- `substituteVars()` -- per-line substitution engine -- `resolveVar()` -- env-scoped variable lookup with fallback -- `isIdentifier()` (from `opsfile_parser.go`) -- shared helper that determines whether a `$(...)` token is a variable reference +### FR-2: Four-Level Priority Chain +Variable lookup uses a four-level priority chain, exhausting env-scoped lookups before falling back to unscoped: ---- +1. **Opsfile env-scoped** — `vars["env_VAR_NAME"]` (e.g. `prod_CLUSTER`) +2. **Shell environment env-scoped** — `os.LookupEnv("env_VAR_NAME")` (e.g. `$prod_CLUSTER`) +3. **Opsfile unscoped** — `vars["VAR_NAME"]` (e.g. `CLUSTER`) +4. **Shell environment unscoped** — `os.LookupEnv("VAR_NAME")` (e.g. `$CLUSTER`) -## Shell Environment Variable Injection +This mirrors Makefile behavior: Opsfile-defined variables override shell environment variables at the same scope level. -### Overview +### FR-3: Undefined Variable Error +If a variable reference cannot be resolved at any of the four priority levels, an error is returned. The error message includes the variable name and the environment being resolved. -Shell environment variables (from the user's terminal session) can be referenced in Opsfile command lines using the same `$(VAR_NAME)` syntax already used for Opsfile-defined variables. This matches Makefile behaviour: no new syntax is introduced; environment variable lookup is added as a fallback in the existing resolution chain. +### FR-4: Unclosed Token Handling +An unclosed `$(` (no matching `)`) is written through literally as `$(` — no error is raised. This allows partial shell syntax to pass through safely. -### Functional Requirements +### FR-5: Identifier Detection +A token is considered an identifier if it is non-empty and consists only of letters (`a-z`, `A-Z`), digits (`0-9`), hyphens (`-`), and underscores (`_`). The `isIdentifier()` function in `opsfile_parser.go` implements this check and is shared with the parser. 
-- `$(VAR_NAME)` tokens that are valid identifiers are resolved using a four-level priority chain: - 1. Opsfile env-scoped variable (e.g. `prod_VAR_NAME`) - 2. Shell environment env-scoped variable (e.g. `os.Getenv("prod_VAR_NAME")`) - 3. Opsfile unscoped variable (e.g. `VAR_NAME`) - 4. Shell environment unscoped variable (e.g. `os.Getenv("VAR_NAME")`) -- If a variable is not found at any level, an error is returned (existing behaviour, unchanged) -- Shell environment variable lookup is case-sensitive; `$(PATH)` and `$(path)` are distinct -- Tokens that fail `isIdentifier()` (e.g. `$(aws ec2 describe-instances)`) continue to pass through unchanged — shell subcommand syntax is unaffected -- The feature is transparent: users do not need to declare which variables come from the environment vs. the Opsfile +### Example Usage -### Syntax +**Opsfile with variables and env-scoped overrides:** +``` +CLUSTER=my-app +prod_CLUSTER=my-app-prod +AWS_REGION=us-east-1 -Identical to existing Opsfile variable references: +tail-logs: + default: + aws logs tail /ecs/$(CLUSTER) --follow --region $(AWS_REGION) + prod: + aws logs tail /ecs/$(CLUSTER) --follow --region $(AWS_REGION) +``` +Running `ops prod tail-logs` resolves to: +```bash +aws logs tail /ecs/my-app-prod --follow --region us-east-1 ``` -# Opsfile +Here `$(CLUSTER)` resolves to `my-app-prod` (Opsfile env-scoped `prod_CLUSTER`) while `$(AWS_REGION)` resolves to `us-east-1` (Opsfile unscoped). +**Shell environment fallback:** +``` deploy: default: aws s3 cp ./dist s3://$(BUCKET)/$(APP_VERSION) --region $(AWS_REGION) ``` +If `BUCKET` is defined in the Opsfile but `APP_VERSION` and `AWS_REGION` are not, they are resolved from the shell environment (`$APP_VERSION`, `$AWS_REGION`). 
+ +**Shell subcommand passthrough:** +``` +show-caller: + default: + echo "Caller: $(aws sts get-caller-identity --query Account --output text)" +``` +The `$(aws sts ...)` token contains spaces, so `isIdentifier()` returns false and the token passes through unchanged for the shell to evaluate. + +--- + +## 3. Non-Functional Requirements + +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Performance | Variable substitution is O(n) per line where n is line length | Single-pass scan using `strings.Index` | +| NFR-2 | Compatibility | Shell environment lookup is case-sensitive | Matches POSIX behavior; `$(PATH)` and `$(path)` are distinct | +| NFR-3 | Reliability | Undefined variables produce clear error messages with variable name and environment | Prevents silent misconfiguration | +| NFR-4 | Compatibility | Shell subcommand syntax is never modified | `isIdentifier()` filter ensures only bare identifiers are resolved | +| NFR-5 | Maintainability | Substitution logic is isolated in `command_resolver.go` | Single file to modify for resolution behavior changes | + +--- + +## 4. Architecture & Implementation Proposal + +### Overview +Variable substitution is implemented in `internal/command_resolver.go` as part of the command resolution pipeline. After environment selection picks the correct shell lines for the requested environment, each line is scanned for `$(...)` tokens and resolved against the four-level variable priority chain. The `Resolve()` function orchestrates both environment selection and variable substitution. + +### Component Design + +**`Resolve(commandName, env, commands, vars)`** — Public entry point. Looks up the command by name, calls `selectLines()` for environment selection, then iterates each line through `substituteVars()`. + +**`selectLines(cmd, env)`** — Selects shell lines for the given environment, falling back to the `default` block if the specific environment isn't defined. 
Returns an error if neither exists. + +**`substituteVars(line, env, vars)`** — Scans a single line for `$(...)` tokens. For each token, checks `isIdentifier()` to determine if it's a variable reference or a shell subcommand. Variable references are resolved via `resolveVar()`; non-identifiers are passed through unchanged. + +**`resolveVar(varName, env, vars)`** — Implements the four-level priority chain: Opsfile env-scoped → shell env-scoped → Opsfile unscoped → shell unscoped. Returns the first match or an error. + +**`isIdentifier(s)`** (in `opsfile_parser.go`) — Returns true if the string is non-empty and contains only identifier characters (letters, digits, hyphens, underscores). Shared between the parser and resolver. + +### Data Flow +``` +Resolve(commandName, env, commands, vars) + -> selectLines(cmd, env) -> []string (raw shell lines) + -> for each line: + substituteVars(line, env, vars) + -> scan for "$(" markers + -> extract token between "$(" and ")" + -> isIdentifier(token)? 
+ yes -> resolveVar(token, env, vars) -> substituted value + no -> pass through "$(token)" unchanged + -> ResolvedCommand{Lines: resolved} +``` + +#### Sequence Diagram +``` +main.go -> command_resolver.go: Resolve(cmdName, env, commands, vars) +command_resolver.go -> command_resolver.go: selectLines(cmd, env) + alt exact env match + return lines for env + else fallback + return lines for "default" + else no match + return error + end + +loop each line + command_resolver.go -> command_resolver.go: substituteVars(line, env, vars) + loop each "$(" token + command_resolver.go -> opsfile_parser.go: isIdentifier(token) + alt is identifier + command_resolver.go -> command_resolver.go: resolveVar(token, env, vars) + alt Opsfile env-scoped found + return value + else shell env-scoped found + return os.LookupEnv(env_VAR) + else Opsfile unscoped found + return value + else shell unscoped found + return os.LookupEnv(VAR) + else not found + return error + end + else not identifier + pass through unchanged + end + end +end +command_resolver.go --> main.go: ResolvedCommand{Lines} +``` + +### Key Design Decisions +- **`strings.Index` scanning over regex:** A simple index-based scan is more readable, faster, and avoids regex compilation overhead. The `$(...)` syntax is straightforward enough that regex is unnecessary. +- **Shared `isIdentifier()` function:** Rather than duplicating identifier-checking logic, the resolver imports the same function used by the parser. This ensures consistent behavior for what constitutes a valid variable name. +- **Four-level priority chain:** Modeled after Makefile behavior where file-defined variables override environment variables. The env-scoped layer is exhausted before falling back to unscoped, so `prod_VAR` in the shell environment takes precedence over an unscoped `VAR` in the Opsfile. 
+- **`os.LookupEnv` over `os.Getenv`:** `LookupEnv` distinguishes between an unset variable and one set to an empty string, which is important for correct fallback behavior. +- **Error on undefined:** Rather than silently passing through undefined variables (which could lead to broken commands), an explicit error is returned. This catches typos and missing configuration early. + +### Files to Create / Modify + +| File | Action | Description | +|------|--------|-------------| +| `internal/command_resolver.go` | Exists | `Resolve()`, `selectLines()`, `substituteVars()`, `resolveVar()` | +| `internal/opsfile_parser.go` | Exists | `isIdentifier()` and `isIdentChar()` shared helpers | +| `internal/command_resolver_test.go` | Exists | Tests for substitution, resolution priority, error cases | + +--- -If `BUCKET` is defined in the Opsfile and `APP_VERSION` is not, `APP_VERSION` is resolved from the shell environment. `AWS_REGION` follows the same fallback chain. +## 5. Alternatives Considered -### Priority Behaviour (Makefile-Compatible) +### Alternative A: Regex-Based Substitution -Makefiles treat environment variables as the lowest-priority source: a variable defined in the Makefile overrides an environment variable of the same name. `opsfile` mirrors this: +**Description:** Use `regexp.ReplaceAllStringFunc` with a pattern like `\$\(([^)]+)\)` to find and replace variable references. -| Source | Priority | -|---|---| -| Opsfile env-scoped (`prod_VAR`) | 1 (highest) | -| Shell environment env-scoped (`$prod_VAR`) | 2 | -| Opsfile unscoped (`VAR`) | 3 | -| Shell environment unscoped (`$VAR`) | 4 (lowest) | +**Pros:** +- More concise replacement logic +- Well-understood pattern matching -Env-scoped lookups (Opsfile then shell) are exhausted before falling back to unscoped lookups, so a `prod_`-prefixed shell variable takes precedence over an unscoped Opsfile definition. +**Cons:** +- Regex compilation overhead on every line +- Harder to handle the identifier vs. 
non-identifier distinction cleanly +- Less readable for Go developers unfamiliar with the regex -### Implementation Overview +**Why not chosen:** The index-based scan is simpler, faster, and makes the identifier check explicit. The `$(...)` syntax is simple enough that regex adds complexity without benefit. -The only change required is in `internal/command_resolver.go` in `resolveVar`. +### Alternative B: Go `text/template` Syntax -**Updated `resolveVar(varName, env string, vars OpsVariables) (string, error)`:** +**Description:** Use Go's template syntax (`{{ .VAR }}`) instead of `$(VAR)`. -1. Checks `vars[env+"_"+varName]` (env-scoped Opsfile variable) — returns value if found -2. Calls `os.Getenv(env+"_"+varName)` (env-scoped shell variable) — returns value if set -3. Checks `vars[varName]` (unscoped Opsfile variable) — returns value if found -4. Calls `os.Getenv(varName)` (unscoped shell variable) — returns value if set -5. Returns an error if all four lookups fail +**Pros:** +- Built-in template engine with rich features +- No custom parser needed -No changes to `substituteVars`, `isIdentifier`, or any other part of the resolution pipeline. +**Cons:** +- Conflicts with shell syntax expectations — users expect `$(...)` from Makefile/shell familiarity +- Would require escaping `$()` in shell subcommands +- Template errors are harder to understand for ops users + +**Why not chosen:** The `$(VAR)` syntax is familiar to Makefile and shell users, which is the target audience. Using Go templates would add cognitive overhead and break the Makefile-like mental model. + +--- + +## Open Questions +- [x] All current open questions resolved — feature is fully implemented + +--- + +## 6. 
Task Breakdown + +### Phase 1: Foundation (completed) +- [x] Implement `substituteVars()` with `$(...)` token scanning +- [x] Implement `isIdentifier()` to distinguish variable refs from shell subcommands +- [x] Implement `resolveVar()` with Opsfile-only lookup (env-scoped then unscoped) +- [x] Write unit tests for substitution and identifier detection + +### Phase 2: Shell Environment Integration (completed) +- [x] Extend `resolveVar()` with `os.LookupEnv` fallback at both scoped and unscoped levels +- [x] Implement four-level priority chain +- [x] Write tests for shell environment variable resolution and priority ordering + +### Phase 3: Polish (completed) +- [x] Handle edge cases: unclosed `$(`, empty tokens, non-identifier tokens +- [x] Ensure error messages include variable name and environment context +- [x] Update documentation + +--- diff --git a/docs/templates/feature-doc.md b/docs/templates/feature-doc.md new file mode 100644 index 0000000..993eeb4 --- /dev/null +++ b/docs/templates/feature-doc.md @@ -0,0 +1,139 @@ +# Feature: [Feature Name] + + +## 1. Problem Statement & High-Level Goals + +### Problem +[Describe the problem this feature solves. What pain point exists today? Who is affected?] +[Reference related github issue] + +### Goals +- [ ] [Primary goal] +- [ ] [Secondary goal] +- [ ] [Tertiary goal] + +### Non-Goals +- [Explicitly list what this feature will NOT do] + +--- + +## 2. Functional Requirements + +### FR-1: [Requirement Name] +[Description of the requirement and expected behavior] + +### FR-2: [Requirement Name] +[Description of the requirement and expected behavior] + +### FR-3: [Requirement Name] +[Description of the requirement and expected behavior] + +### Example Usage + +[example user scenario] +[examples of relevant bash cli call and output] +[example of relevant Opsfile configuration] + +--- + +## 3. 
Non-Functional Requirements + +| ID | Category | Requirement | Notes | +|----|----------|-------------|-------| +| NFR-1 | Performance | [e.g., < 100ms response time] | | +| NFR-2 | Compatibility | [e.g., Linux, macOS, Windows] | | +| NFR-3 | Reliability | [e.g., graceful error handling] | | +| NFR-4 | Security | [e.g., no credential leakage] | | +| NFR-5 | Maintainability | [e.g., test coverage >= existing] | | + +--- + +## 4. Architecture & Implementation Proposal + +### Overview +[High-level description of the proposed implementation approach] + +### Component Design +[Describe new or modified components and their responsibilities] + +### Data Flow +``` +[Step 1] -> [Step 2] -> [Step 3] -> [Output] +``` + +#### Sequence Diagram +[Insert a UML sequence diagram explaining the flow] + +### Key Design Decisions +- **[Decision 1]:** [Rationale] +- **[Decision 2]:** [Rationale] + +### Files to Create / Modify + +| File | Action | Description | +|------|--------|-------------| +| `**/[file].go` | Create | [What it does] | +| `**/[file].go` | Modify | [What changes] | +| `**/[file]_test.go` | Create | [Test coverage] | + +--- + +## 5. Alternatives Considered + +### Alternative A: [Name] + +**Description:** [How this approach would work] + +**Pros:** +- [Advantage 1] +- [Advantage 2] + +**Cons:** +- [Disadvantage 1] +- [Disadvantage 2] + +**Why not chosen:** [Brief explanation] + +--- + +### Alternative B: [Name] + +**Description:** [How this approach would work] + +**Pros:** +- [Advantage 1] +- [Advantage 2] + +**Cons:** +- [Disadvantage 1] +- [Disadvantage 2] + +**Why not chosen:** [Brief explanation] + +--- + +## Open Questions +- [ ] [Any unresolved questions or decisions needed] + +--- + +## 6. 
Task Breakdown + +### Phase 1: Foundation +- [ ] [Task 1 -- e.g., define types/interfaces] +- [ ] [Task 2 -- e.g., implement core logic] +- [ ] [Task 3 -- e.g., write unit tests for core logic] + +### Phase 2: Integration +- [ ] [Task 4 -- e.g., wire into CLI entry point] +- [ ] [Task 5 -- e.g., add flag/argument parsing support] +- [ ] [Task 6 -- e.g., write integration tests] + +### Phase 3: Polish +- [ ] [Task 7 -- e.g., error messages and edge cases] +- [ ] [Task 8 -- e.g., update README / user docs] +- [ ] [Task 9 -- e.g., add example Opsfile if applicable] + +--- + + diff --git a/docs/templates/test-plan.md b/docs/templates/test-plan.md new file mode 100644 index 0000000..b0fc4e8 --- /dev/null +++ b/docs/templates/test-plan.md @@ -0,0 +1,93 @@ +# Test Plan: [Feature Name] + +**Feature Doc:** [Link to feature document if applicable] + +--- + +## 1. Scope + +### In Scope +- [Component or behavior being tested] +- [Component or behavior being tested] +- [Regression Testing for dependent feature] + +### Out of Scope +- [What is explicitly NOT covered by this test plan] + +--- + +## 2. Test Objectives + +- [What this test plan aims to validate — e.g., "Verify flag parsing handles all documented short/long forms"] +- [Secondary objective if applicable] + +--- + +## 3. 
Entry & Exit Criteria + +### Entry Criteria (prerequisites before testing begins) +- [ ] Feature implementation is code-complete +- [ ] Code compiles without errors (`make build`) +- [ ] Existing tests still pass (`make test`) + +### Exit Criteria (conditions to consider testing complete) +- [ ] All green path automated tests pass +- [ ] All red path automated tests pass +- [ ] All green/red manual tests have been run and pass +- [ ] No regressions in related features +- [ ] Test coverage does not decrease + +### Acceptance Criteria + +- [ ] [Criterion that must be true for the feature to be considered working — e.g., "All documented flag forms are parsed correctly"] +- [ ] [Criterion — e.g., "Error messages include actionable context"] +- [ ] [Criterion — e.g., "No breaking changes to existing Opsfile behavior"] + + +--- + +## 4. Green Path Tests + +| Short Title | Test Name | Input / Setup | Expected Outcome | Type | +|---|---|---|---|---| +| [Short Title] | [Descriptive name] | [Input data or preconditions] | [Expected result] | automated (unit/integ) / e2e / manual verification| +| [Short Title] | [Descriptive name] | [Input data or preconditions] | [Expected result] | automated (unit/integ) / e2e / manual verification| + +--- + +## 5. Red Path Tests + +| Short Title | Test Name | Input / Setup | Expected Outcome | Type | +|---|---|---|---|---| +| [Short Title] | [Descriptive name] | [Invalid/edge-case input] | [Expected error or behavior] | automated (unit/integ) / e2e / manual verification| +| [Short Title] | [Descriptive name] | [Invalid/edge-case input] | [Expected error or behavior] | automated (unit/integ) / e2e / manual verification| + +--- + +## 6. 
Edge Cases & Boundary Conditions + +| Short Title | Test Name | Input / Setup | Expected Outcome | Type | +|---|---|---|---|---| +| [Short Title] | [Descriptive name] | [Boundary value or unusual condition] | [Expected result] | unit / e2e / manual verification| +| [Short Title] | [Descriptive name] | [Boundary value or unusual condition] | [Expected result] | unit / e2e / manual verification| + +--- + +## 7. Existing Automated Tests + +- [List any existing test functions/files that cover related behavior, or state "None"] +- Example: `TestParseOpsFlags` in `internal/flag_parser_test.go` (line 8) — 13 subtests + +--- + +## 8. Missing Automated Tests + +| Scenario | Type | What It Validates | +|---|---|---| +| [Describe untested scenario] | unit / e2e | [What gap this test would fill] | +| [Describe untested scenario] | unit / e2e | [What gap this test would fill] | + +--- + +## Notes +- [Any additional context, dependencies on other features, or open questions]