diff --git a/.gitignore b/.gitignore
index 5f12e8c..e83ac44 100644
--- a/.gitignore
+++ b/.gitignore
@@ -75,5 +75,8 @@ local.properties
 # Agent
 .agent
 
+# OpenSpec
+.gemini/
+
 # VSCode
 .vscode
diff --git a/.specify/memory/constitution.md b/.specify/memory/constitution.md
deleted file mode 100644
index ae6bdbc..0000000
--- a/.specify/memory/constitution.md
+++ /dev/null
@@ -1,54 +0,0 @@
-# WorldTides Constitution
-
-## Core Principles
-
-### I. Kotlin-First, Java-Compatible
-This library is developed in Kotlin to leverage modern language features, safety, and conciseness. However, **Java interoperability is a first-class citizen**.
-- All public APIs must be consumable from Java without awkward syntax (e.g., use `@JvmOverloads`, `@JvmStatic` where appropriate).
-- Asynchronous operations must provide both suspend functions (for Kotlin) and callback interfaces (for Java).
-- Avoid exposing Kotlin-specific types (like `inline` classes) in the public API if they degrade the Java experience.
-
-### II. Strict API Fidelity
-The library serves as a strongly-typed proxy for the [WorldTides API](https://www.worldtides.info/apidocs).
-- **Naming**: Library models and methods should mirror the API's terminology unless it conflicts with standard Java/Kotlin naming conventions.
-- **Completeness**: Aim to support all parameters and response fields available in the API version we target.
-- **Updates**: Changes in the WorldTides API (v2, v3, etc.) should be reflected in the library with appropriate versioning.
-
-### III. Type Safety & Null Safety
-Users of this library should never have to parse raw JSON.
-- All API responses must be mapped to strongly-typed data classes.
-- Nullability must be strictly modeled: if a field can be missing in the API response, it must be nullable in the model.
-- `Result` or similar wrappers should be used to handle success/failure states explicitly.
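The Type Safety & Null Safety principle above can be sketched in a few lines of Kotlin. This is a minimal illustration only: `Extreme`, `TideResult`, and `parseExtreme` are hypothetical names invented for the sketch, not the library's actual API.

```kotlin
// Hypothetical sketch of Principle III: a strongly-typed, null-safe model
// with an explicit success/failure wrapper. Names are illustrative.
data class Extreme(
    val date: String,          // always present in the API response
    val height: Double,        // always present
    val type: String? = null   // may be absent, so it is modeled as nullable
)

// Explicit result wrapper so callers handle failure without try/catch.
sealed class TideResult<out T> {
    data class Success<T>(val value: T) : TideResult<T>()
    data class Failure(val error: String) : TideResult<Nothing>()
}

// Mapping a loosely-typed payload into the model; missing required
// fields become an explicit Failure instead of a thrown exception.
fun parseExtreme(fields: Map<String, Any>): TideResult<Extreme> {
    val date = fields["date"] as? String
        ?: return TideResult.Failure("missing required field: date")
    val height = fields["height"] as? Double
        ?: return TideResult.Failure("missing required field: height")
    return TideResult.Success(Extreme(date, height, fields["type"] as? String))
}
```

Callers then pattern-match on `TideResult` and never touch raw JSON, which is the behavior the principle mandates.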
-
-## Architecture Standards
-
-### Networking Layer
-- **Retrofit & Moshi**: The library uses Retrofit for networking and Moshi for JSON parsing.
-- **Encapsulation**: The internal networking stack (OkHttp client) is encapsulated to provide a simple "one-line" initialization experience.
-- *Future Consideration*: Allow dependency injection for `OkHttpClient` to enable advanced configuration (caching, interceptors) by the consumer.
-
-### Data Models
-- **Immutability**: All data models (`TideExtremes`, `Extreme`, etc.) are immutable `data class` structures.
-- **Serialization**: Use annotations (e.g., `@Json`) to map API JSON keys to clean Kotlin property names.
-
-## Development Workflow
-
-### Versioning
-- **Semantic Versioning**: Follow [SemVer](https://semver.org/) (MAJOR.MINOR.PATCH).
-  - MAJOR: Breaking API changes.
-  - MINOR: New features (e.g., new API endpoints supported) in a backward-compatible manner.
-  - PATCH: Bug fixes.
-
-### Documentation
-- **KDoc & Javadoc**: All public classes and methods must be documented.
-- **Samples**: Code samples in README must include both Kotlin and Java examples side-by-side.
-
-## Governance
-
-### QA & Testing
-- **Mandatory Testing**: 100% of the codebase must be covered by tests.
-- **Unit Tests**: Required for all logic, data transformations, and utility functions.
-- **Integration Tests**: Verification of API mapping and inter-component communication is critical.
-- **Public API Review**: Any change to the public API requires a review focusing on "How does this look in Java?" and "How does this look in Kotlin?".
- -**Version**: 1.0.0 | **Ratified**: 2026-01-09 diff --git a/.specify/scripts/bash/check-prerequisites.sh b/.specify/scripts/bash/check-prerequisites.sh deleted file mode 100755 index 98e387c..0000000 --- a/.specify/scripts/bash/check-prerequisites.sh +++ /dev/null @@ -1,166 +0,0 @@ -#!/usr/bin/env bash - -# Consolidated prerequisite checking script -# -# This script provides unified prerequisite checking for Spec-Driven Development workflow. -# It replaces the functionality previously spread across multiple scripts. -# -# Usage: ./check-prerequisites.sh [OPTIONS] -# -# OPTIONS: -# --json Output in JSON format -# --require-tasks Require tasks.md to exist (for implementation phase) -# --include-tasks Include tasks.md in AVAILABLE_DOCS list -# --paths-only Only output path variables (no validation) -# --help, -h Show help message -# -# OUTPUTS: -# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]} -# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md -# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc. - -set -e - -# Parse command line arguments -JSON_MODE=false -REQUIRE_TASKS=false -INCLUDE_TASKS=false -PATHS_ONLY=false - -for arg in "$@"; do - case "$arg" in - --json) - JSON_MODE=true - ;; - --require-tasks) - REQUIRE_TASKS=true - ;; - --include-tasks) - INCLUDE_TASKS=true - ;; - --paths-only) - PATHS_ONLY=true - ;; - --help|-h) - cat << 'EOF' -Usage: check-prerequisites.sh [OPTIONS] - -Consolidated prerequisite checking for Spec-Driven Development workflow. 
- -OPTIONS: - --json Output in JSON format - --require-tasks Require tasks.md to exist (for implementation phase) - --include-tasks Include tasks.md in AVAILABLE_DOCS list - --paths-only Only output path variables (no prerequisite validation) - --help, -h Show this help message - -EXAMPLES: - # Check task prerequisites (plan.md required) - ./check-prerequisites.sh --json - - # Check implementation prerequisites (plan.md + tasks.md required) - ./check-prerequisites.sh --json --require-tasks --include-tasks - - # Get feature paths only (no validation) - ./check-prerequisites.sh --paths-only - -EOF - exit 0 - ;; - *) - echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2 - exit 1 - ;; - esac -done - -# Source common functions -SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -source "$SCRIPT_DIR/common.sh" - -# Get feature paths and validate branch -eval $(get_feature_paths) -check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1 - -# If paths-only mode, output paths and exit (support JSON + paths-only combined) -if $PATHS_ONLY; then - if $JSON_MODE; then - # Minimal JSON paths payload (no validation performed) - printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \ - "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS" - else - echo "REPO_ROOT: $REPO_ROOT" - echo "BRANCH: $CURRENT_BRANCH" - echo "FEATURE_DIR: $FEATURE_DIR" - echo "FEATURE_SPEC: $FEATURE_SPEC" - echo "IMPL_PLAN: $IMPL_PLAN" - echo "TASKS: $TASKS" - fi - exit 0 -fi - -# Validate required directories and files -if [[ ! -d "$FEATURE_DIR" ]]; then - echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2 - echo "Run /speckit.specify first to create the feature structure." >&2 - exit 1 -fi - -if [[ ! -f "$IMPL_PLAN" ]]; then - echo "ERROR: plan.md not found in $FEATURE_DIR" >&2 - echo "Run /speckit.plan first to create the implementation plan." 
>&2 - exit 1 -fi - -# Check for tasks.md if required -if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then - echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2 - echo "Run /speckit.tasks first to create the task list." >&2 - exit 1 -fi - -# Build list of available documents -docs=() - -# Always check these optional docs -[[ -f "$RESEARCH" ]] && docs+=("research.md") -[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md") - -# Check contracts directory (only if it exists and has files) -if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then - docs+=("contracts/") -fi - -[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md") - -# Include tasks.md if requested and it exists -if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then - docs+=("tasks.md") -fi - -# Output results -if $JSON_MODE; then - # Build JSON array of documents - if [[ ${#docs[@]} -eq 0 ]]; then - json_docs="[]" - else - json_docs=$(printf '"%s",' "${docs[@]}") - json_docs="[${json_docs%,}]" - fi - - printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs" -else - # Text output - echo "FEATURE_DIR:$FEATURE_DIR" - echo "AVAILABLE_DOCS:" - - # Show status of each potential document - check_file "$RESEARCH" "research.md" - check_file "$DATA_MODEL" "data-model.md" - check_dir "$CONTRACTS_DIR" "contracts/" - check_file "$QUICKSTART" "quickstart.md" - - if $INCLUDE_TASKS; then - check_file "$TASKS" "tasks.md" - fi -fi diff --git a/.specify/scripts/bash/common.sh b/.specify/scripts/bash/common.sh deleted file mode 100755 index 2c3165e..0000000 --- a/.specify/scripts/bash/common.sh +++ /dev/null @@ -1,156 +0,0 @@ -#!/usr/bin/env bash -# Common functions and variables for all scripts - -# Get repository root, with fallback for non-git repositories -get_repo_root() { - if git rev-parse --show-toplevel >/dev/null 2>&1; then - git rev-parse --show-toplevel - else - # Fall back to script location for non-git repos - local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && 
pwd)" - (cd "$script_dir/../../.." && pwd) - fi -} - -# Get current branch, with fallback for non-git repositories -get_current_branch() { - # First check if SPECIFY_FEATURE environment variable is set - if [[ -n "${SPECIFY_FEATURE:-}" ]]; then - echo "$SPECIFY_FEATURE" - return - fi - - # Then check git if available - if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then - git rev-parse --abbrev-ref HEAD - return - fi - - # For non-git repos, try to find the latest feature directory - local repo_root=$(get_repo_root) - local specs_dir="$repo_root/specs" - - if [[ -d "$specs_dir" ]]; then - local latest_feature="" - local highest=0 - - for dir in "$specs_dir"/*; do - if [[ -d "$dir" ]]; then - local dirname=$(basename "$dir") - if [[ "$dirname" =~ ^([0-9]{3})- ]]; then - local number=${BASH_REMATCH[1]} - number=$((10#$number)) - if [[ "$number" -gt "$highest" ]]; then - highest=$number - latest_feature=$dirname - fi - fi - fi - done - - if [[ -n "$latest_feature" ]]; then - echo "$latest_feature" - return - fi - fi - - echo "main" # Final fallback -} - -# Check if we have git available -has_git() { - git rev-parse --show-toplevel >/dev/null 2>&1 -} - -check_feature_branch() { - local branch="$1" - local has_git_repo="$2" - - # For non-git repos, we can't enforce branch naming but still provide output - if [[ "$has_git_repo" != "true" ]]; then - echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2 - return 0 - fi - - if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then - echo "ERROR: Not on a feature branch. 
Current branch: $branch" >&2 - echo "Feature branches should be named like: 001-feature-name" >&2 - return 1 - fi - - return 0 -} - -get_feature_dir() { echo "$1/specs/$2"; } - -# Find feature directory by numeric prefix instead of exact branch match -# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature) -find_feature_dir_by_prefix() { - local repo_root="$1" - local branch_name="$2" - local specs_dir="$repo_root/specs" - - # Extract numeric prefix from branch (e.g., "004" from "004-whatever") - if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then - # If branch doesn't have numeric prefix, fall back to exact match - echo "$specs_dir/$branch_name" - return - fi - - local prefix="${BASH_REMATCH[1]}" - - # Search for directories in specs/ that start with this prefix - local matches=() - if [[ -d "$specs_dir" ]]; then - for dir in "$specs_dir"/"$prefix"-*; do - if [[ -d "$dir" ]]; then - matches+=("$(basename "$dir")") - fi - done - fi - - # Handle results - if [[ ${#matches[@]} -eq 0 ]]; then - # No match found - return the branch name path (will fail later with clear error) - echo "$specs_dir/$branch_name" - elif [[ ${#matches[@]} -eq 1 ]]; then - # Exactly one match - perfect! - echo "$specs_dir/${matches[0]}" - else - # Multiple matches - this shouldn't happen with proper naming convention - echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2 - echo "Please ensure only one spec directory exists per numeric prefix." 
>&2 - echo "$specs_dir/$branch_name" # Return something to avoid breaking the script - fi -} - -get_feature_paths() { - local repo_root=$(get_repo_root) - local current_branch=$(get_current_branch) - local has_git_repo="false" - - if has_git; then - has_git_repo="true" - fi - - # Use prefix-based lookup to support multiple branches per spec - local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch") - - cat </dev/null) ]] && echo " ✓ $2" || echo " ✗ $2"; } - diff --git a/.specify/scripts/bash/create-new-feature.sh b/.specify/scripts/bash/create-new-feature.sh deleted file mode 100755 index c40cfd7..0000000 --- a/.specify/scripts/bash/create-new-feature.sh +++ /dev/null @@ -1,297 +0,0 @@ -#!/usr/bin/env bash - -set -e - -JSON_MODE=false -SHORT_NAME="" -BRANCH_NUMBER="" -ARGS=() -i=1 -while [ $i -le $# ]; do - arg="${!i}" - case "$arg" in - --json) - JSON_MODE=true - ;; - --short-name) - if [ $((i + 1)) -gt $# ]; then - echo 'Error: --short-name requires a value' >&2 - exit 1 - fi - i=$((i + 1)) - next_arg="${!i}" - # Check if the next argument is another option (starts with --) - if [[ "$next_arg" == --* ]]; then - echo 'Error: --short-name requires a value' >&2 - exit 1 - fi - SHORT_NAME="$next_arg" - ;; - --number) - if [ $((i + 1)) -gt $# ]; then - echo 'Error: --number requires a value' >&2 - exit 1 - fi - i=$((i + 1)) - next_arg="${!i}" - if [[ "$next_arg" == --* ]]; then - echo 'Error: --number requires a value' >&2 - exit 1 - fi - BRANCH_NUMBER="$next_arg" - ;; - --help|-h) - echo "Usage: $0 [--json] [--short-name ] [--number N] " - echo "" - echo "Options:" - echo " --json Output in JSON format" - echo " --short-name Provide a custom short name (2-4 words) for the branch" - echo " --number N Specify branch number manually (overrides auto-detection)" - echo " --help, -h Show this help message" - echo "" - echo "Examples:" - echo " $0 'Add user authentication system' --short-name 'user-auth'" - echo " $0 'Implement OAuth2 integration for 
API' --number 5" - exit 0 - ;; - *) - ARGS+=("$arg") - ;; - esac - i=$((i + 1)) -done - -FEATURE_DESCRIPTION="${ARGS[*]}" -if [ -z "$FEATURE_DESCRIPTION" ]; then - echo "Usage: $0 [--json] [--short-name ] [--number N] " >&2 - exit 1 -fi - -# Function to find the repository root by searching for existing project markers -find_repo_root() { - local dir="$1" - while [ "$dir" != "/" ]; do - if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then - echo "$dir" - return 0 - fi - dir="$(dirname "$dir")" - done - return 1 -} - -# Function to get highest number from specs directory -get_highest_from_specs() { - local specs_dir="$1" - local highest=0 - - if [ -d "$specs_dir" ]; then - for dir in "$specs_dir"/*; do - [ -d "$dir" ] || continue - dirname=$(basename "$dir") - number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0") - number=$((10#$number)) - if [ "$number" -gt "$highest" ]; then - highest=$number - fi - done - fi - - echo "$highest" -} - -# Function to get highest number from git branches -get_highest_from_branches() { - local highest=0 - - # Get all branches (local and remote) - branches=$(git branch -a 2>/dev/null || echo "") - - if [ -n "$branches" ]; then - while IFS= read -r branch; do - # Clean branch name: remove leading markers and remote prefixes - clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||') - - # Extract feature number if branch matches pattern ###-* - if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then - number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0") - number=$((10#$number)) - if [ "$number" -gt "$highest" ]; then - highest=$number - fi - fi - done <<< "$branches" - fi - - echo "$highest" -} - -# Function to check existing branches (local and remote) and return next available number -check_existing_branches() { - local specs_dir="$1" - - # Fetch all remotes to get latest branch info (suppress errors if no remotes) - git fetch --all --prune 2>/dev/null || true - - # Get highest number from ALL branches 
(not just matching short name) - local highest_branch=$(get_highest_from_branches) - - # Get highest number from ALL specs (not just matching short name) - local highest_spec=$(get_highest_from_specs "$specs_dir") - - # Take the maximum of both - local max_num=$highest_branch - if [ "$highest_spec" -gt "$max_num" ]; then - max_num=$highest_spec - fi - - # Return next number - echo $((max_num + 1)) -} - -# Function to clean and format a branch name -clean_branch_name() { - local name="$1" - echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//' -} - -# Resolve repository root. Prefer git information when available, but fall back -# to searching for repository markers so the workflow still functions in repositories that -# were initialised with --no-git. -SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" - -if git rev-parse --show-toplevel >/dev/null 2>&1; then - REPO_ROOT=$(git rev-parse --show-toplevel) - HAS_GIT=true -else - REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")" - if [ -z "$REPO_ROOT" ]; then - echo "Error: Could not determine repository root. Please run this script from within the repository." 
>&2 - exit 1 - fi - HAS_GIT=false -fi - -cd "$REPO_ROOT" - -SPECS_DIR="$REPO_ROOT/specs" -mkdir -p "$SPECS_DIR" - -# Function to generate branch name with stop word filtering and length filtering -generate_branch_name() { - local description="$1" - - # Common stop words to filter out - local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$" - - # Convert to lowercase and split into words - local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g') - - # Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original) - local meaningful_words=() - for word in $clean_name; do - # Skip empty words - [ -z "$word" ] && continue - - # Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms) - if ! echo "$word" | grep -qiE "$stop_words"; then - if [ ${#word} -ge 3 ]; then - meaningful_words+=("$word") - elif echo "$description" | grep -q "\b${word^^}\b"; then - # Keep short words if they appear as uppercase in original (likely acronyms) - meaningful_words+=("$word") - fi - fi - done - - # If we have meaningful words, use first 3-4 of them - if [ ${#meaningful_words[@]} -gt 0 ]; then - local max_words=3 - if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi - - local result="" - local count=0 - for word in "${meaningful_words[@]}"; do - if [ $count -ge $max_words ]; then break; fi - if [ -n "$result" ]; then result="$result-"; fi - result="$result$word" - count=$((count + 1)) - done - echo "$result" - else - # Fallback to original logic if no meaningful words found - local cleaned=$(clean_branch_name "$description") - echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//' - fi -} - -# Generate branch name -if [ -n "$SHORT_NAME" ]; then - # Use provided short name, 
just clean it up - BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME") -else - # Generate from description with smart filtering - BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION") -fi - -# Determine branch number -if [ -z "$BRANCH_NUMBER" ]; then - if [ "$HAS_GIT" = true ]; then - # Check existing branches on remotes - BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR") - else - # Fall back to local directory check - HIGHEST=$(get_highest_from_specs "$SPECS_DIR") - BRANCH_NUMBER=$((HIGHEST + 1)) - fi -fi - -# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal) -FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))") -BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}" - -# GitHub enforces a 244-byte limit on branch names -# Validate and truncate if necessary -MAX_BRANCH_LENGTH=244 -if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then - # Calculate how much we need to trim from suffix - # Account for: feature number (3) + hyphen (1) = 4 chars - MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4)) - - # Truncate suffix at word boundary if possible - TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH) - # Remove trailing hyphen if truncation created one - TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//') - - ORIGINAL_BRANCH_NAME="$BRANCH_NAME" - BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}" - - >&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit" - >&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)" - >&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)" -fi - -if [ "$HAS_GIT" = true ]; then - git checkout -b "$BRANCH_NAME" -else - >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME" -fi - -FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME" -mkdir -p "$FEATURE_DIR" - -TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md" -SPEC_FILE="$FEATURE_DIR/spec.md" -if [ -f "$TEMPLATE" 
]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi - -# Set the SPECIFY_FEATURE environment variable for the current session -export SPECIFY_FEATURE="$BRANCH_NAME" - -if $JSON_MODE; then - printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM" -else - echo "BRANCH_NAME: $BRANCH_NAME" - echo "SPEC_FILE: $SPEC_FILE" - echo "FEATURE_NUM: $FEATURE_NUM" - echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME" -fi diff --git a/.specify/scripts/bash/setup-plan.sh b/.specify/scripts/bash/setup-plan.sh deleted file mode 100755 index d01c6d6..0000000 --- a/.specify/scripts/bash/setup-plan.sh +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# Parse command line arguments -JSON_MODE=false -ARGS=() - -for arg in "$@"; do - case "$arg" in - --json) - JSON_MODE=true - ;; - --help|-h) - echo "Usage: $0 [--json]" - echo " --json Output results in JSON format" - echo " --help Show this help message" - exit 0 - ;; - *) - ARGS+=("$arg") - ;; - esac -done - -# Get script directory and load common functions -SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -source "$SCRIPT_DIR/common.sh" - -# Get all paths and variables from common functions -eval $(get_feature_paths) - -# Check if we're on a proper feature branch (only for git repos) -check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1 - -# Ensure the feature directory exists -mkdir -p "$FEATURE_DIR" - -# Copy plan template if it exists -TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md" -if [[ -f "$TEMPLATE" ]]; then - cp "$TEMPLATE" "$IMPL_PLAN" - echo "Copied plan template to $IMPL_PLAN" -else - echo "Warning: Plan template not found at $TEMPLATE" - # Create a basic plan file if template doesn't exist - touch "$IMPL_PLAN" -fi - -# Output results -if $JSON_MODE; then - printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \ - "$FEATURE_SPEC" "$IMPL_PLAN" 
"$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT" -else - echo "FEATURE_SPEC: $FEATURE_SPEC" - echo "IMPL_PLAN: $IMPL_PLAN" - echo "SPECS_DIR: $FEATURE_DIR" - echo "BRANCH: $CURRENT_BRANCH" - echo "HAS_GIT: $HAS_GIT" -fi - diff --git a/.specify/scripts/bash/update-agent-context.sh b/.specify/scripts/bash/update-agent-context.sh deleted file mode 100755 index 6d3e0b3..0000000 --- a/.specify/scripts/bash/update-agent-context.sh +++ /dev/null @@ -1,799 +0,0 @@ -#!/usr/bin/env bash - -# Update agent context files with information from plan.md -# -# This script maintains AI agent context files by parsing feature specifications -# and updating agent-specific configuration files with project information. -# -# MAIN FUNCTIONS: -# 1. Environment Validation -# - Verifies git repository structure and branch information -# - Checks for required plan.md files and templates -# - Validates file permissions and accessibility -# -# 2. Plan Data Extraction -# - Parses plan.md files to extract project metadata -# - Identifies language/version, frameworks, databases, and project types -# - Handles missing or incomplete specification data gracefully -# -# 3. Agent File Management -# - Creates new agent context files from templates when needed -# - Updates existing agent files with new project information -# - Preserves manual additions and custom configurations -# - Supports multiple AI agent formats and directory structures -# -# 4. Content Generation -# - Generates language-specific build/test commands -# - Creates appropriate project directory structures -# - Updates technology stacks and recent changes sections -# - Maintains consistent formatting and timestamps -# -# 5. 
Multi-Agent Support -# - Handles agent-specific file paths and naming conventions -# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, or Amazon Q Developer CLI -# - Can update single agents or all existing agent files -# - Creates default Claude file if no agent files exist -# -# Usage: ./update-agent-context.sh [agent_type] -# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|shai|q|bob|qoder -# Leave empty to update all existing agent files - -set -e - -# Enable strict error handling -set -u -set -o pipefail - -#============================================================================== -# Configuration and Global Variables -#============================================================================== - -# Get script directory and load common functions -SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -source "$SCRIPT_DIR/common.sh" - -# Get all paths and variables from common functions -eval $(get_feature_paths) - -NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code -AGENT_TYPE="${1:-}" - -# Agent-specific file paths -CLAUDE_FILE="$REPO_ROOT/CLAUDE.md" -GEMINI_FILE="$REPO_ROOT/GEMINI.md" -COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md" -CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc" -QWEN_FILE="$REPO_ROOT/QWEN.md" -AGENTS_FILE="$REPO_ROOT/AGENTS.md" -WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md" -KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md" -AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md" -ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md" -CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md" -QODER_FILE="$REPO_ROOT/QODER.md" -AMP_FILE="$REPO_ROOT/AGENTS.md" -SHAI_FILE="$REPO_ROOT/SHAI.md" -Q_FILE="$REPO_ROOT/AGENTS.md" -BOB_FILE="$REPO_ROOT/AGENTS.md" - -# Template file 
-TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md" - -# Global variables for parsed plan data -NEW_LANG="" -NEW_FRAMEWORK="" -NEW_DB="" -NEW_PROJECT_TYPE="" - -#============================================================================== -# Utility Functions -#============================================================================== - -log_info() { - echo "INFO: $1" -} - -log_success() { - echo "✓ $1" -} - -log_error() { - echo "ERROR: $1" >&2 -} - -log_warning() { - echo "WARNING: $1" >&2 -} - -# Cleanup function for temporary files -cleanup() { - local exit_code=$? - rm -f /tmp/agent_update_*_$$ - rm -f /tmp/manual_additions_$$ - exit $exit_code -} - -# Set up cleanup trap -trap cleanup EXIT INT TERM - -#============================================================================== -# Validation Functions -#============================================================================== - -validate_environment() { - # Check if we have a current branch/feature (git or non-git) - if [[ -z "$CURRENT_BRANCH" ]]; then - log_error "Unable to determine current feature" - if [[ "$HAS_GIT" == "true" ]]; then - log_info "Make sure you're on a feature branch" - else - log_info "Set SPECIFY_FEATURE environment variable or create a feature first" - fi - exit 1 - fi - - # Check if plan.md exists - if [[ ! -f "$NEW_PLAN" ]]; then - log_error "No plan.md found at $NEW_PLAN" - log_info "Make sure you're working on a feature with a corresponding spec directory" - if [[ "$HAS_GIT" != "true" ]]; then - log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first" - fi - exit 1 - fi - - # Check if template exists (needed for new files) - if [[ ! 
-f "$TEMPLATE_FILE" ]]; then - log_warning "Template file not found at $TEMPLATE_FILE" - log_warning "Creating new agent files will fail" - fi -} - -#============================================================================== -# Plan Parsing Functions -#============================================================================== - -extract_plan_field() { - local field_pattern="$1" - local plan_file="$2" - - grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \ - head -1 | \ - sed "s|^\*\*${field_pattern}\*\*: ||" | \ - sed 's/^[ \t]*//;s/[ \t]*$//' | \ - grep -v "NEEDS CLARIFICATION" | \ - grep -v "^N/A$" || echo "" -} - -parse_plan_data() { - local plan_file="$1" - - if [[ ! -f "$plan_file" ]]; then - log_error "Plan file not found: $plan_file" - return 1 - fi - - if [[ ! -r "$plan_file" ]]; then - log_error "Plan file is not readable: $plan_file" - return 1 - fi - - log_info "Parsing plan data from $plan_file" - - NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file") - NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file") - NEW_DB=$(extract_plan_field "Storage" "$plan_file") - NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file") - - # Log what we found - if [[ -n "$NEW_LANG" ]]; then - log_info "Found language: $NEW_LANG" - else - log_warning "No language information found in plan" - fi - - if [[ -n "$NEW_FRAMEWORK" ]]; then - log_info "Found framework: $NEW_FRAMEWORK" - fi - - if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then - log_info "Found database: $NEW_DB" - fi - - if [[ -n "$NEW_PROJECT_TYPE" ]]; then - log_info "Found project type: $NEW_PROJECT_TYPE" - fi -} - -format_technology_stack() { - local lang="$1" - local framework="$2" - local parts=() - - # Add non-empty parts - [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang") - [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework") - - # Join with proper formatting - 
if [[ ${#parts[@]} -eq 0 ]]; then - echo "" - elif [[ ${#parts[@]} -eq 1 ]]; then - echo "${parts[0]}" - else - # Join multiple parts with " + " - local result="${parts[0]}" - for ((i=1; i<${#parts[@]}; i++)); do - result="$result + ${parts[i]}" - done - echo "$result" - fi -} - -#============================================================================== -# Template and Content Generation Functions -#============================================================================== - -get_project_structure() { - local project_type="$1" - - if [[ "$project_type" == *"web"* ]]; then - echo "backend/\\nfrontend/\\ntests/" - else - echo "src/\\ntests/" - fi -} - -get_commands_for_language() { - local lang="$1" - - case "$lang" in - *"Python"*) - echo "cd src && pytest && ruff check ." - ;; - *"Rust"*) - echo "cargo test && cargo clippy" - ;; - *"JavaScript"*|*"TypeScript"*) - echo "npm test \\&\\& npm run lint" - ;; - *) - echo "# Add commands for $lang" - ;; - esac -} - -get_language_conventions() { - local lang="$1" - echo "$lang: Follow standard conventions" -} - -create_new_agent_file() { - local target_file="$1" - local temp_file="$2" - local project_name="$3" - local current_date="$4" - - if [[ ! -f "$TEMPLATE_FILE" ]]; then - log_error "Template not found at $TEMPLATE_FILE" - return 1 - fi - - if [[ ! -r "$TEMPLATE_FILE" ]]; then - log_error "Template file is not readable: $TEMPLATE_FILE" - return 1 - fi - - log_info "Creating new agent context file from template..." - - if ! 
cp "$TEMPLATE_FILE" "$temp_file"; then - log_error "Failed to copy template file" - return 1 - fi - - # Replace template placeholders - local project_structure - project_structure=$(get_project_structure "$NEW_PROJECT_TYPE") - - local commands - commands=$(get_commands_for_language "$NEW_LANG") - - local language_conventions - language_conventions=$(get_language_conventions "$NEW_LANG") - - # Perform substitutions with error checking using safer approach - # Escape special characters for sed by using a different delimiter or escaping - local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g') - local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g') - local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g') - - # Build technology stack and recent change strings conditionally - local tech_stack - if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then - tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)" - elif [[ -n "$escaped_lang" ]]; then - tech_stack="- $escaped_lang ($escaped_branch)" - elif [[ -n "$escaped_framework" ]]; then - tech_stack="- $escaped_framework ($escaped_branch)" - else - tech_stack="- ($escaped_branch)" - fi - - local recent_change - if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then - recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework" - elif [[ -n "$escaped_lang" ]]; then - recent_change="- $escaped_branch: Added $escaped_lang" - elif [[ -n "$escaped_framework" ]]; then - recent_change="- $escaped_branch: Added $escaped_framework" - else - recent_change="- $escaped_branch: Added" - fi - - local substitutions=( - "s|\[PROJECT NAME\]|$project_name|" - "s|\[DATE\]|$current_date|" - "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|" - "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g" - "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|" - "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN 
USE\]|$language_conventions|" - "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|" - ) - - for substitution in "${substitutions[@]}"; do - if ! sed -i.bak -e "$substitution" "$temp_file"; then - log_error "Failed to perform substitution: $substitution" - rm -f "$temp_file" "$temp_file.bak" - return 1 - fi - done - - # Convert \n sequences to actual newlines - newline=$(printf '\n') - sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file" - - # Clean up backup files - rm -f "$temp_file.bak" "$temp_file.bak2" - - return 0 -} - - - - -update_existing_agent_file() { - local target_file="$1" - local current_date="$2" - - log_info "Updating existing agent context file..." - - # Use a single temporary file for atomic update - local temp_file - temp_file=$(mktemp) || { - log_error "Failed to create temporary file" - return 1 - } - - # Process the file in one pass - local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK") - local new_tech_entries=() - local new_change_entry="" - - # Prepare new technology entries - if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then - new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)") - fi - - if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! 
grep -q "$NEW_DB" "$target_file"; then - new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)") - fi - - # Prepare new change entry - if [[ -n "$tech_stack" ]]; then - new_change_entry="- $CURRENT_BRANCH: Added $tech_stack" - elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then - new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB" - fi - - # Check if sections exist in the file - local has_active_technologies=0 - local has_recent_changes=0 - - if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then - has_active_technologies=1 - fi - - if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then - has_recent_changes=1 - fi - - # Process file line by line - local in_tech_section=false - local in_changes_section=false - local tech_entries_added=false - local changes_entries_added=false - local existing_changes_count=0 - local file_ended=false - - while IFS= read -r line || [[ -n "$line" ]]; do - # Handle Active Technologies section - if [[ "$line" == "## Active Technologies" ]]; then - echo "$line" >> "$temp_file" - in_tech_section=true - continue - elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then - # Add new tech entries before closing the section - if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then - printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" - tech_entries_added=true - fi - echo "$line" >> "$temp_file" - in_tech_section=false - continue - elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then - # Add new tech entries before empty line in tech section - if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then - printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" - tech_entries_added=true - fi - echo "$line" >> "$temp_file" - continue - fi - - # Handle Recent Changes section - if [[ "$line" == "## Recent Changes" ]]; then - echo "$line" >> "$temp_file" - # Add new change entry right after the heading 
- if [[ -n "$new_change_entry" ]]; then - echo "$new_change_entry" >> "$temp_file" - fi - in_changes_section=true - changes_entries_added=true - continue - elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then - echo "$line" >> "$temp_file" - in_changes_section=false - continue - elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then - # Keep only first 2 existing changes - if [[ $existing_changes_count -lt 2 ]]; then - echo "$line" >> "$temp_file" - ((existing_changes_count++)) - fi - continue - fi - - # Update timestamp - if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then - echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file" - else - echo "$line" >> "$temp_file" - fi - done < "$target_file" - - # Post-loop check: if we're still in the Active Technologies section and haven't added new entries - if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then - printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" - tech_entries_added=true - fi - - # If sections don't exist, add them at the end of the file - if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then - echo "" >> "$temp_file" - echo "## Active Technologies" >> "$temp_file" - printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" - tech_entries_added=true - fi - - if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then - echo "" >> "$temp_file" - echo "## Recent Changes" >> "$temp_file" - echo "$new_change_entry" >> "$temp_file" - changes_entries_added=true - fi - - # Move temp file to target atomically - if ! 
mv "$temp_file" "$target_file"; then - log_error "Failed to update target file" - rm -f "$temp_file" - return 1 - fi - - return 0 -} -#============================================================================== -# Main Agent File Update Function -#============================================================================== - -update_agent_file() { - local target_file="$1" - local agent_name="$2" - - if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then - log_error "update_agent_file requires target_file and agent_name parameters" - return 1 - fi - - log_info "Updating $agent_name context file: $target_file" - - local project_name - project_name=$(basename "$REPO_ROOT") - local current_date - current_date=$(date +%Y-%m-%d) - - # Create directory if it doesn't exist - local target_dir - target_dir=$(dirname "$target_file") - if [[ ! -d "$target_dir" ]]; then - if ! mkdir -p "$target_dir"; then - log_error "Failed to create directory: $target_dir" - return 1 - fi - fi - - if [[ ! -f "$target_file" ]]; then - # Create new file from template - local temp_file - temp_file=$(mktemp) || { - log_error "Failed to create temporary file" - return 1 - } - - if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then - if mv "$temp_file" "$target_file"; then - log_success "Created new $agent_name context file" - else - log_error "Failed to move temporary file to $target_file" - rm -f "$temp_file" - return 1 - fi - else - log_error "Failed to create new agent file" - rm -f "$temp_file" - return 1 - fi - else - # Update existing file - if [[ ! -r "$target_file" ]]; then - log_error "Cannot read existing file: $target_file" - return 1 - fi - - if [[ ! 
-w "$target_file" ]]; then - log_error "Cannot write to existing file: $target_file" - return 1 - fi - - if update_existing_agent_file "$target_file" "$current_date"; then - log_success "Updated existing $agent_name context file" - else - log_error "Failed to update existing agent file" - return 1 - fi - fi - - return 0 -} - -#============================================================================== -# Agent Selection and Processing -#============================================================================== - -update_specific_agent() { - local agent_type="$1" - - case "$agent_type" in - claude) - update_agent_file "$CLAUDE_FILE" "Claude Code" - ;; - gemini) - update_agent_file "$GEMINI_FILE" "Gemini CLI" - ;; - copilot) - update_agent_file "$COPILOT_FILE" "GitHub Copilot" - ;; - cursor-agent) - update_agent_file "$CURSOR_FILE" "Cursor IDE" - ;; - qwen) - update_agent_file "$QWEN_FILE" "Qwen Code" - ;; - opencode) - update_agent_file "$AGENTS_FILE" "opencode" - ;; - codex) - update_agent_file "$AGENTS_FILE" "Codex CLI" - ;; - windsurf) - update_agent_file "$WINDSURF_FILE" "Windsurf" - ;; - kilocode) - update_agent_file "$KILOCODE_FILE" "Kilo Code" - ;; - auggie) - update_agent_file "$AUGGIE_FILE" "Auggie CLI" - ;; - roo) - update_agent_file "$ROO_FILE" "Roo Code" - ;; - codebuddy) - update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI" - ;; - qoder) - update_agent_file "$QODER_FILE" "Qoder CLI" - ;; - amp) - update_agent_file "$AMP_FILE" "Amp" - ;; - shai) - update_agent_file "$SHAI_FILE" "SHAI" - ;; - q) - update_agent_file "$Q_FILE" "Amazon Q Developer CLI" - ;; - bob) - update_agent_file "$BOB_FILE" "IBM Bob" - ;; - *) - log_error "Unknown agent type '$agent_type'" - log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder" - exit 1 - ;; - esac -} - -update_all_existing_agents() { - local found_agent=false - - # Check each possible agent file and update if it exists - if [[ -f 
"$CLAUDE_FILE" ]]; then - update_agent_file "$CLAUDE_FILE" "Claude Code" - found_agent=true - fi - - if [[ -f "$GEMINI_FILE" ]]; then - update_agent_file "$GEMINI_FILE" "Gemini CLI" - found_agent=true - fi - - if [[ -f "$COPILOT_FILE" ]]; then - update_agent_file "$COPILOT_FILE" "GitHub Copilot" - found_agent=true - fi - - if [[ -f "$CURSOR_FILE" ]]; then - update_agent_file "$CURSOR_FILE" "Cursor IDE" - found_agent=true - fi - - if [[ -f "$QWEN_FILE" ]]; then - update_agent_file "$QWEN_FILE" "Qwen Code" - found_agent=true - fi - - if [[ -f "$AGENTS_FILE" ]]; then - update_agent_file "$AGENTS_FILE" "Codex/opencode" - found_agent=true - fi - - if [[ -f "$WINDSURF_FILE" ]]; then - update_agent_file "$WINDSURF_FILE" "Windsurf" - found_agent=true - fi - - if [[ -f "$KILOCODE_FILE" ]]; then - update_agent_file "$KILOCODE_FILE" "Kilo Code" - found_agent=true - fi - - if [[ -f "$AUGGIE_FILE" ]]; then - update_agent_file "$AUGGIE_FILE" "Auggie CLI" - found_agent=true - fi - - if [[ -f "$ROO_FILE" ]]; then - update_agent_file "$ROO_FILE" "Roo Code" - found_agent=true - fi - - if [[ -f "$CODEBUDDY_FILE" ]]; then - update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI" - found_agent=true - fi - - if [[ -f "$SHAI_FILE" ]]; then - update_agent_file "$SHAI_FILE" "SHAI" - found_agent=true - fi - - if [[ -f "$QODER_FILE" ]]; then - update_agent_file "$QODER_FILE" "Qoder CLI" - found_agent=true - fi - - if [[ -f "$Q_FILE" ]]; then - update_agent_file "$Q_FILE" "Amazon Q Developer CLI" - found_agent=true - fi - - if [[ -f "$BOB_FILE" ]]; then - update_agent_file "$BOB_FILE" "IBM Bob" - found_agent=true - fi - - # If no agent files exist, create a default Claude file - if [[ "$found_agent" == false ]]; then - log_info "No existing agent files found, creating default Claude file..." 
- update_agent_file "$CLAUDE_FILE" "Claude Code" - fi -} -print_summary() { - echo - log_info "Summary of changes:" - - if [[ -n "$NEW_LANG" ]]; then - echo " - Added language: $NEW_LANG" - fi - - if [[ -n "$NEW_FRAMEWORK" ]]; then - echo " - Added framework: $NEW_FRAMEWORK" - fi - - if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then - echo " - Added database: $NEW_DB" - fi - - echo - - log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]" -} - -#============================================================================== -# Main Execution -#============================================================================== - -main() { - # Validate environment before proceeding - validate_environment - - log_info "=== Updating agent context files for feature $CURRENT_BRANCH ===" - - # Parse the plan file to extract project information - if ! parse_plan_data "$NEW_PLAN"; then - log_error "Failed to parse plan data" - exit 1 - fi - - # Process based on agent type argument - local success=true - - if [[ -z "$AGENT_TYPE" ]]; then - # No specific agent provided - update all existing agent files - log_info "No agent specified, updating all existing agent files..." - if ! update_all_existing_agents; then - success=false - fi - else - # Specific agent provided - update only that agent - log_info "Updating specific agent: $AGENT_TYPE" - if ! 
update_specific_agent "$AGENT_TYPE"; then - success=false - fi - fi - - # Print summary - print_summary - - if [[ "$success" == true ]]; then - log_success "Agent context update completed successfully" - exit 0 - else - log_error "Agent context update completed with errors" - exit 1 - fi -} - -# Execute main function if script is run directly -if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then - main "$@" -fi - diff --git a/.specify/templates/agent-file-template.md b/.specify/templates/agent-file-template.md deleted file mode 100644 index 4cc7fd6..0000000 --- a/.specify/templates/agent-file-template.md +++ /dev/null @@ -1,28 +0,0 @@ -# [PROJECT NAME] Development Guidelines - -Auto-generated from all feature plans. Last updated: [DATE] - -## Active Technologies - -[EXTRACTED FROM ALL PLAN.MD FILES] - -## Project Structure - -```text -[ACTUAL STRUCTURE FROM PLANS] -``` - -## Commands - -[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES] - -## Code Style - -[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE] - -## Recent Changes - -[LAST 3 FEATURES AND WHAT THEY ADDED] - - - diff --git a/.specify/templates/checklist-template.md b/.specify/templates/checklist-template.md deleted file mode 100644 index 806657d..0000000 --- a/.specify/templates/checklist-template.md +++ /dev/null @@ -1,40 +0,0 @@ -# [CHECKLIST TYPE] Checklist: [FEATURE NAME] - -**Purpose**: [Brief description of what this checklist covers] -**Created**: [DATE] -**Feature**: [Link to spec.md or relevant documentation] - -**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements. 
- - - -## [Category 1] - -- [ ] CHK001 First checklist item with clear action -- [ ] CHK002 Second checklist item -- [ ] CHK003 Third checklist item - -## [Category 2] - -- [ ] CHK004 Another category item -- [ ] CHK005 Item with specific criteria -- [ ] CHK006 Final item in this category - -## Notes - -- Check items off as completed: `[x]` -- Add comments or findings inline -- Link to relevant resources or documentation -- Items are numbered sequentially for easy reference diff --git a/.specify/templates/plan-template.md b/.specify/templates/plan-template.md deleted file mode 100644 index 6a8bfc6..0000000 --- a/.specify/templates/plan-template.md +++ /dev/null @@ -1,104 +0,0 @@ -# Implementation Plan: [FEATURE] - -**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link] -**Input**: Feature specification from `/specs/[###-feature-name]/spec.md` - -**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow. - -## Summary - -[Extract from feature spec: primary requirement + technical approach from research] - -## Technical Context - - - -**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION] -**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION] -**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A] -**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION] -**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION] -**Project Type**: [single/web/mobile - determines source structure] -**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION] -**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION] -**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION] - -## Constitution Check - -*GATE: Must pass before Phase 0 research. 
Re-check after Phase 1 design.* - -[Gates determined based on constitution file] - -## Project Structure - -### Documentation (this feature) - -```text -specs/[###-feature]/ -├── plan.md # This file (/speckit.plan command output) -├── research.md # Phase 0 output (/speckit.plan command) -├── data-model.md # Phase 1 output (/speckit.plan command) -├── quickstart.md # Phase 1 output (/speckit.plan command) -├── contracts/ # Phase 1 output (/speckit.plan command) -└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan) -``` - -### Source Code (repository root) - - -```text -# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT) -src/ -├── models/ -├── services/ -├── cli/ -└── lib/ - -tests/ -├── contract/ -├── integration/ -└── unit/ - -# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected) -backend/ -├── src/ -│ ├── models/ -│ ├── services/ -│ └── api/ -└── tests/ - -frontend/ -├── src/ -│ ├── components/ -│ ├── pages/ -│ └── services/ -└── tests/ - -# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected) -api/ -└── [same as backend above] - -ios/ or android/ -└── [platform-specific structure: feature modules, UI flows, platform tests] -``` - -**Structure Decision**: [Document the selected structure and reference the real -directories captured above] - -## Complexity Tracking - -> **Fill ONLY if Constitution Check has violations that must be justified** - -| Violation | Why Needed | Simpler Alternative Rejected Because | -|-----------|------------|-------------------------------------| -| [e.g., 4th project] | [current need] | [why 3 projects insufficient] | -| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] | diff --git a/.specify/templates/spec-template.md b/.specify/templates/spec-template.md deleted file mode 100644 index c67d914..0000000 --- a/.specify/templates/spec-template.md +++ /dev/null @@ -1,115 +0,0 @@ -# Feature Specification: [FEATURE 
NAME] - -**Feature Branch**: `[###-feature-name]` -**Created**: [DATE] -**Status**: Draft -**Input**: User description: "$ARGUMENTS" - -## User Scenarios & Testing *(mandatory)* - - - -### User Story 1 - [Brief Title] (Priority: P1) - -[Describe this user journey in plain language] - -**Why this priority**: [Explain the value and why it has this priority level] - -**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"] - -**Acceptance Scenarios**: - -1. **Given** [initial state], **When** [action], **Then** [expected outcome] -2. **Given** [initial state], **When** [action], **Then** [expected outcome] - ---- - -### User Story 2 - [Brief Title] (Priority: P2) - -[Describe this user journey in plain language] - -**Why this priority**: [Explain the value and why it has this priority level] - -**Independent Test**: [Describe how this can be tested independently] - -**Acceptance Scenarios**: - -1. **Given** [initial state], **When** [action], **Then** [expected outcome] - ---- - -### User Story 3 - [Brief Title] (Priority: P3) - -[Describe this user journey in plain language] - -**Why this priority**: [Explain the value and why it has this priority level] - -**Independent Test**: [Describe how this can be tested independently] - -**Acceptance Scenarios**: - -1. **Given** [initial state], **When** [action], **Then** [expected outcome] - ---- - -[Add more user stories as needed, each with an assigned priority] - -### Edge Cases - - - -- What happens when [boundary condition]? -- How does system handle [error scenario]? 
- -## Requirements *(mandatory)* - - - -### Functional Requirements - -- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"] -- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"] -- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"] -- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"] -- **FR-005**: System MUST [behavior, e.g., "log all security events"] - -*Example of marking unclear requirements:* - -- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?] -- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified] - -### Key Entities *(include if feature involves data)* - -- **[Entity 1]**: [What it represents, key attributes without implementation] -- **[Entity 2]**: [What it represents, relationships to other entities] - -## Success Criteria *(mandatory)* - - - -### Measurable Outcomes - -- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"] -- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"] -- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"] -- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"] diff --git a/.specify/templates/tasks-template.md b/.specify/templates/tasks-template.md deleted file mode 100644 index 60f9be4..0000000 --- a/.specify/templates/tasks-template.md +++ /dev/null @@ -1,251 +0,0 @@ ---- - -description: "Task list template for feature implementation" ---- - -# Tasks: [FEATURE NAME] - -**Input**: Design documents from `/specs/[###-feature-name]/` -**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/ - -**Tests**: The examples below include test tasks. 
Tests are OPTIONAL - only include them if explicitly requested in the feature specification. - -**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story. - -## Format: `[ID] [P?] [Story] Description` - -- **[P]**: Can run in parallel (different files, no dependencies) -- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3) -- Include exact file paths in descriptions - -## Path Conventions - -- **Single project**: `src/`, `tests/` at repository root -- **Web app**: `backend/src/`, `frontend/src/` -- **Mobile**: `api/src/`, `ios/src/` or `android/src/` -- Paths shown below assume single project - adjust based on plan.md structure - - - -## Phase 1: Setup (Shared Infrastructure) - -**Purpose**: Project initialization and basic structure - -- [ ] T001 Create project structure per implementation plan -- [ ] T002 Initialize [language] project with [framework] dependencies -- [ ] T003 [P] Configure linting and formatting tools - ---- - -## Phase 2: Foundational (Blocking Prerequisites) - -**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented - -**⚠️ CRITICAL**: No user story work can begin until this phase is complete - -Examples of foundational tasks (adjust based on your project): - -- [ ] T004 Setup database schema and migrations framework -- [ ] T005 [P] Implement authentication/authorization framework -- [ ] T006 [P] Setup API routing and middleware structure -- [ ] T007 Create base models/entities that all stories depend on -- [ ] T008 Configure error handling and logging infrastructure -- [ ] T009 Setup environment configuration management - -**Checkpoint**: Foundation ready - user story implementation can now begin in parallel - ---- - -## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP - -**Goal**: [Brief description of what this story delivers] - -**Independent Test**: [How to verify this story works on its own] - -### Tests for User Story 1 
(OPTIONAL - only if tests requested) ⚠️ - -> **NOTE: Write these tests FIRST, ensure they FAIL before implementation** - -- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py -- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py - -### Implementation for User Story 1 - -- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py -- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py -- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013) -- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py -- [ ] T016 [US1] Add validation and error handling -- [ ] T017 [US1] Add logging for user story 1 operations - -**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently - ---- - -## Phase 4: User Story 2 - [Title] (Priority: P2) - -**Goal**: [Brief description of what this story delivers] - -**Independent Test**: [How to verify this story works on its own] - -### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️ - -- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py -- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py - -### Implementation for User Story 2 - -- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py -- [ ] T021 [US2] Implement [Service] in src/services/[service].py -- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py -- [ ] T023 [US2] Integrate with User Story 1 components (if needed) - -**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently - ---- - -## Phase 5: User Story 3 - [Title] (Priority: P3) - -**Goal**: [Brief description of what this story delivers] - -**Independent Test**: [How to verify this story works on its own] - -### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️ - -- [ ] T024 [P] [US3] 
Contract test for [endpoint] in tests/contract/test_[name].py -- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py - -### Implementation for User Story 3 - -- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py -- [ ] T027 [US3] Implement [Service] in src/services/[service].py -- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py - -**Checkpoint**: All user stories should now be independently functional - ---- - -[Add more user story phases as needed, following the same pattern] - ---- - -## Phase N: Polish & Cross-Cutting Concerns - -**Purpose**: Improvements that affect multiple user stories - -- [ ] TXXX [P] Documentation updates in docs/ -- [ ] TXXX Code cleanup and refactoring -- [ ] TXXX Performance optimization across all stories -- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/ -- [ ] TXXX Security hardening -- [ ] TXXX Run quickstart.md validation - ---- - -## Dependencies & Execution Order - -### Phase Dependencies - -- **Setup (Phase 1)**: No dependencies - can start immediately -- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories -- **User Stories (Phase 3+)**: All depend on Foundational phase completion - - User stories can then proceed in parallel (if staffed) - - Or sequentially in priority order (P1 → P2 → P3) -- **Polish (Final Phase)**: Depends on all desired user stories being complete - -### User Story Dependencies - -- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories -- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable -- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable - -### Within Each User Story - -- Tests (if included) MUST be written and FAIL before implementation -- Models before services -- Services before 
endpoints -- Core implementation before integration -- Story complete before moving to next priority - -### Parallel Opportunities - -- All Setup tasks marked [P] can run in parallel -- All Foundational tasks marked [P] can run in parallel (within Phase 2) -- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows) -- All tests for a user story marked [P] can run in parallel -- Models within a story marked [P] can run in parallel -- Different user stories can be worked on in parallel by different team members - ---- - -## Parallel Example: User Story 1 - -```bash -# Launch all tests for User Story 1 together (if tests requested): -Task: "Contract test for [endpoint] in tests/contract/test_[name].py" -Task: "Integration test for [user journey] in tests/integration/test_[name].py" - -# Launch all models for User Story 1 together: -Task: "Create [Entity1] model in src/models/[entity1].py" -Task: "Create [Entity2] model in src/models/[entity2].py" -``` - ---- - -## Implementation Strategy - -### MVP First (User Story 1 Only) - -1. Complete Phase 1: Setup -2. Complete Phase 2: Foundational (CRITICAL - blocks all stories) -3. Complete Phase 3: User Story 1 -4. **STOP and VALIDATE**: Test User Story 1 independently -5. Deploy/demo if ready - -### Incremental Delivery - -1. Complete Setup + Foundational → Foundation ready -2. Add User Story 1 → Test independently → Deploy/Demo (MVP!) -3. Add User Story 2 → Test independently → Deploy/Demo -4. Add User Story 3 → Test independently → Deploy/Demo -5. Each story adds value without breaking previous stories - -### Parallel Team Strategy - -With multiple developers: - -1. Team completes Setup + Foundational together -2. Once Foundational is done: - - Developer A: User Story 1 - - Developer B: User Story 2 - - Developer C: User Story 3 -3. 
Stories complete and integrate independently - ---- - -## Notes - -- [P] tasks = different files, no dependencies -- [Story] label maps task to specific user story for traceability -- Each user story should be independently completable and testable -- Verify tests fail before implementing -- Commit after each task or logical group -- Stop at any checkpoint to validate story independently -- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence diff --git a/openspec/config.yaml b/openspec/config.yaml new file mode 100644 index 0000000..34fe797 --- /dev/null +++ b/openspec/config.yaml @@ -0,0 +1,22 @@ +schema: spec-driven +context: | + Stack: Kotlin 1.9+ (Java 8 compat), Gradle, Retrofit, Moshi. + Core: Strong-typed WorldTides proxy, immutable models, strict null-safety (Result). + Constraints: First-class Java interop (Callbacks + Suspend, @JvmStatic). 100% test coverage. + Conventions: SemVer, Conventional Commits, KDoc/Javadoc required. No raw JSON exposure. + +rules: + specs/**/*.md: + - Use Given/When/Then for behavioral requirements. + - Keep scenarios atomic and testable. + proposal: + - Keep proposals under 500 words + - Always include a "Non-goals" section + tasks: + - Break tasks into chunks of max 2 hours + README.md: + - Include side-by-side Kotlin and Java examples for all samples. + CHANGELOG.md: + - Log updates for every change in the library. + - Use semantic versioning. + - Only date the changes once a release is cut and lib version is bumped. 
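The new `openspec/config.yaml` above pins the library's core contract: explicit success/failure via a `Result`-style wrapper, consumable cleanly from both Kotlin and Java. A minimal Java sketch of that shape, purely illustrative — `Result`, `fold`, and the sample values are assumptions here, not the library's actual API:

```java
import java.util.function.Function;

public final class ResultSketch {
    // Minimal Result wrapper: holds either a value or an error message,
    // so callers branch explicitly instead of parsing raw JSON or
    // catching exceptions.
    static final class Result<T> {
        private final T value;      // non-null on success
        private final String error; // non-null on failure

        private Result(T value, String error) {
            this.value = value;
            this.error = error;
        }

        static <T> Result<T> success(T value) { return new Result<>(value, null); }
        static <T> Result<T> failure(String error) { return new Result<>(null, error); }

        boolean isSuccess() { return error == null; }

        // Force the caller to handle both branches.
        <R> R fold(Function<T, R> onSuccess, Function<String, R> onFailure) {
            return isSuccess() ? onSuccess.apply(value) : onFailure.apply(error);
        }
    }

    public static void main(String[] args) {
        Result<Double> ok = Result.success(1.42);
        Result<Double> bad = Result.failure("invalid API key");

        System.out.println(ok.fold(h -> "height=" + h, e -> "error: " + e));  // height=1.42
        System.out.println(bad.fold(h -> "height=" + h, e -> "error: " + e)); // error: invalid API key
    }
}
```

Because `fold` requires both branches, a consumer cannot accidentally ignore the failure case, which is the point of the "strict null-safety (Result)" constraint in the config's context block.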
diff --git a/openspec/specs/sample-client.md b/openspec/specs/sample-client.md new file mode 100644 index 0000000..8fae118 --- /dev/null +++ b/openspec/specs/sample-client.md @@ -0,0 +1,36 @@ +# Feature Specification: E2E Sample Client + +**Feature Branch**: `002-sample-client` +**Created**: 2026-01-16 +**Status**: Implemented + +## User Scenarios + +### User Story 1 - Verify All Library API Methods Work (Priority: P1) + +A CI pipeline operator wants to verify that the WorldTides library correctly integrates with the live World Tides API. They manually trigger an E2E workflow that runs the sample client against all supported API methods using a real API key. + +**Acceptance Scenarios**: + +1. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTideExtremes()`, **Then** the response contains a non-empty list of tide extremes with valid dates, heights, and tide types (High/Low). +2. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTideHeights()`, **Then** the response contains a non-empty list of tide heights with valid dates and height values. +3. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTides()` with HEIGHTS only, **Then** the response contains tide heights and extremes is null. +4. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTides()` with EXTREMES only, **Then** the response contains tide extremes and heights is null. +5. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTides()` with both HEIGHTS and EXTREMES, **Then** the response contains both heights and extremes data. + +### User Story 2 - Error Handling Verification (Priority: P2) + +A developer wants to verify that the library properly returns errors when given invalid input or credentials. The sample client should demonstrate proper error handling behavior. + +**Acceptance Scenarios**: + +1. 
**Given** an invalid API key, **When** any API method is called, **Then** the callback receives an error result (not a success with empty data). + +### User Story 3 - CI Integration (Priority: P3) + +A release engineer wants a GitHub Actions workflow that can be manually triggered to run E2E tests. The workflow should only execute on demand to control API usage costs. + +**Acceptance Scenarios**: + +1. **Given** the `e2e.yml` workflow exists, **When** a maintainer clicks "Run workflow" from GitHub Actions, **Then** the workflow runs the sample client tests with the configured API key secret. +2. **Given** the E2E workflow, **When** a push is made to any branch, **Then** the E2E workflow does NOT automatically trigger. diff --git a/openspec/specs/support-heights-api.md b/openspec/specs/support-heights-api.md new file mode 100644 index 0000000..6cd6f4d --- /dev/null +++ b/openspec/specs/support-heights-api.md @@ -0,0 +1,37 @@ +# Feature Specification: Support Heights API + +**Feature Branch**: `001-support-heights-api` +**Created**: 2026-01-09 +**Status**: Implemented + +## User Scenarios + +### User Story 1 - Retrieve Tide Heights (Priority: P1) + +As a developer using the library, I want to fetch predicted tide heights for a specific location and date range so that I can use this data to display tide curves/charts in my application. + +**Acceptance Scenarios**: + +1. **Given** the library is initialized with a valid API key, **When** requesting tide heights for a valid location and time range, **Then** the result contains a list of tide height records. +2. **Given** an invalid API key, **When** requesting tide heights, **Then** an error indicating authentication failure is returned. +3. **Given** a location with no available data, **When** requesting tide heights, **Then** the library returns an empty result or appropriate specific error, not a generic crash. 
+ +### User Story 2 - Retrieve Tides with Flexible Data Types (Priority: P2) + +As a developer, I want to fetch tide data by specifying which data types to include (e.g., heights, extremes) in a single API call so that I can reduce network overhead and get exactly the data I need. + +**Acceptance Scenarios**: + +1. **Given** the library is initialized with a valid API key, **When** requesting `getTides([HEIGHTS, EXTREMES])`, **Then** the result contains both heights and extremes data. +2. **Given** one data type is unavailable for a location, **When** requesting multiple data types, **Then** the available data is returned and the unavailable part is empty or null. +3. **Given** `getTides([HEIGHTS])` is called, **When** the request completes, **Then** only heights data is populated in the result. + +### User Story 3 - Generic Callback Interface (Priority: P1) + +As a library maintainer, I want the callback interface to be generic (`TidesCallback<T>`) so that it can be reused for different response types without code duplication. + +**Acceptance Scenarios**: + +1. **Given** a generic callback, **When** used with a Heights request, **Then** it correctly receives `TideHeights`. +2. **Given** a generic callback, **When** used with an Extremes request, **Then** it correctly receives `TideExtremes`. +3. **Given** a generic callback, **When** used with a `getTides` request, **Then** it correctly receives `Tides`.
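User Story 3's generic callback can be sketched as follows. This is an illustrative Java sketch, not the library source; `FakeHeights` and `deliver` are hypothetical stand-ins (the real interface takes the library's `Error` type and lives in `TidesCallback.kt`):

```java
// Sketch of the generic callback contract from User Story 3: a single
// parameterized interface serves TideHeights, TideExtremes, and Tides.
interface TidesCallback<T> {
    void result(T data);
    void error(Exception error);
}

// Hypothetical stand-in for a library response model.
final class FakeHeights {
    final int count;
    FakeHeights(int count) { this.count = count; }
}

final class CallbackDemo {
    // Routes either data or an error through the one generic interface.
    static <T> void deliver(T data, Exception error, TidesCallback<T> callback) {
        if (error != null) {
            callback.error(error);
        } else {
            callback.result(data);
        }
    }
}
```

Because the type parameter fixes what `result` receives, each new response type reuses the same interface instead of spawning a `TideHeightsCallback`, `TideExtremesCallback`, and so on.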
diff --git a/specs/001-support-heights-api/checklists/requirements.md b/specs/001-support-heights-api/checklists/requirements.md deleted file mode 100644 index 634e9d9..0000000 --- a/specs/001-support-heights-api/checklists/requirements.md +++ /dev/null @@ -1,34 +0,0 @@ -# Specification Quality Checklist: Support Heights API - -**Purpose**: Validate specification completeness and quality before proceeding to planning -**Created**: 2026-01-09 -**Feature**: [Link to spec.md](../spec.md) - -## Content Quality - -- [x] No implementation details (languages, frameworks, APIs) -- [x] Focused on user value and business needs -- [x] Written for non-technical stakeholders -- [x] All mandatory sections completed - -## Requirement Completeness - -- [x] No [NEEDS CLARIFICATION] markers remain -- [x] Requirements are testable and unambiguous -- [x] Success criteria are measurable -- [x] Success criteria are technology-agnostic (no implementation details) -- [x] All acceptance scenarios are defined -- [x] Edge cases are identified -- [x] Scope is clearly bounded -- [x] Dependencies and assumptions identified - -## Feature Readiness - -- [x] All functional requirements have clear acceptance criteria -- [x] User scenarios cover primary flows -- [x] Feature meets measurable outcomes defined in Success Criteria -- [x] No implementation details leak into specification - -## Notes - -- Spec is ready for Plan/Clarify. diff --git a/specs/001-support-heights-api/contracts/TidesCallback.kt b/specs/001-support-heights-api/contracts/TidesCallback.kt deleted file mode 100644 index d0a2885..0000000 --- a/specs/001-support-heights-api/contracts/TidesCallback.kt +++ /dev/null @@ -1,12 +0,0 @@ -package com.oleksandrkruk.worldtides - -/** - * Generic callback interface for WorldTides API responses. - * Supports TideExtremes, TideHeights, TideCombined, and future types. - * - * @param T The result type. 
- */ -interface TidesCallback<T> { - fun result(data: T) - fun error(error: Error) -} diff --git a/specs/001-support-heights-api/contracts/WorldTides.kt b/specs/001-support-heights-api/contracts/WorldTides.kt deleted file mode 100644 index 52f1ebc..0000000 --- a/specs/001-support-heights-api/contracts/WorldTides.kt +++ /dev/null @@ -1,90 +0,0 @@ -package com.oleksandrkruk.worldtides - -import com.oleksandrkruk.worldtides.heights.models.TideHeights -import com.oleksandrkruk.worldtides.models.Tides -import com.oleksandrkruk.worldtides.models.TideDataType -import java.util.Date - -// This contract defines the changes to the public API -class WorldTides { - - // ... existing properties and constructor ... - - /** - * Returns the predicted tide heights between [date] and number of [days] in future. - * - * @param date starting date - * @param days number of days (duration) - * @param lat latitude - * @param lon longitude - * @param callback result handler (Kotlin lambda) - */ - fun getTideHeights( - date: Date, - days: Int, - lat: String, - lon: String, - callback: (Result<TideHeights>) -> Unit - ) { - // Implementation - } - - /** - * Java-Compatible overload for tide heights. - */ - fun getTideHeights( - date: Date, - days: Int, - lat: String, - lon: String, - callback: TidesCallback<TideHeights> - ) { - // Implementation - } - - /** - * Returns tide data based on the specified [dataTypes]. - * Supports stacking multiple data types in a single API call. - * - * @param date starting date - * @param days number of days (duration) - * @param lat latitude - * @param lon longitude - * @param dataTypes list of data types to request (e.g., HEIGHTS, EXTREMES) - * @param callback result handler (Kotlin lambda) - */ - fun getTides( - date: Date, - days: Int, - lat: String, - lon: String, - dataTypes: List<TideDataType>, - callback: (Result<Tides>) -> Unit - ) { - // Implementation - } - - /** - * Java-Compatible overload for flexible tide data. 
- */ - fun getTides( - date: Date, - days: Int, - lat: String, - lon: String, - dataTypes: List<TideDataType>, - callback: TidesCallback<Tides> - ) { - // Implementation - } -} - -/** - * Enum representing the types of tide data that can be requested. - * Future versions will add STATIONS, DATUMS, etc. - */ -enum class TideDataType { - HEIGHTS, - EXTREMES - // Future: STATIONS, DATUMS -} diff --git a/specs/001-support-heights-api/data-model.md b/specs/001-support-heights-api/data-model.md deleted file mode 100644 index e60adff..0000000 --- a/specs/001-support-heights-api/data-model.md +++ /dev/null @@ -1,98 +0,0 @@ -# Data Model: Heights API - -## Core Entities - -### TideHeights -Wrapper for the list of tide heights. - -| Field | Type | Description | -|-------|------|-------------| -| `heights` | `List<Height>` | Collection of predicted tide heights | - -### Height -Represents a single tide height prediction at a specific time. - -| Field | Type | Description | -|-------|------|-------------| -| `date` | `Date` (parsed from String) | The timestamp of the prediction | -| `height` | `Double` (or `Float`) | The height level (datum relative) | -| `dt` | `Long` | Unix timestamp (epoch) | - -## Internal DTOs (Data Transfer Objects) - -### TideHeightsResponse -Parsed directly from JSON API response. - -```kotlin -data class TideHeightsResponse( - val status: Int, - val error: String?, - val heights: List<HeightResponse> -) -``` - -### HeightResponse -Parsed directly from JSON item. - -```kotlin -data class HeightResponse( - val dt: Long, - val date: String, // format: "yyyy-MM-ddTHH:mm+0000" - val height: Double, - // Note: 'type' is not expected for simple heights, or ignored if present -) -``` - ---- - -## Flexible Tides Data - -### Tides -Represents a response containing any combination of requested tide data types. 
- -| Field | Type | Description | -|-------|------|-------------| -| `heights` | `TideHeights?` | Tide heights (nullable if not requested) | -| `extremes` | `TideExtremes?` | Tide extremes (nullable if not requested) | -| `stations` | `TideStations?` | *(Future)* Station metadata | -| `datums` | `TideDatums?` | *(Future)* Datum information | - -### TidesResponse (DTO) -Parsed from stacked API response (e.g., `?heights&extremes`). - -```kotlin -data class TidesResponse( - val status: Int, - val error: String?, - val heights: List<HeightResponse>?, - val extremes: List<ExtremeResponse>? - // Future: stations, datums -) -``` - -### TideDataType (Enum) -Specifies which data types to request from the API. - -```kotlin -enum class TideDataType { - HEIGHTS, - EXTREMES - // Future: STATIONS, DATUMS -} -``` - ---- - -## Generic Callback - -### TidesCallback -Generic callback interface to support multiple result types. - -```kotlin -interface TidesCallback<T> { - fun result(data: T) - fun error(error: Error) -} -``` - -**Note**: This is a **breaking change** for existing consumers. Migration strategy: Deprecate existing `TidesCallback` and introduce `TidesCallback<T>` as the new standard. diff --git a/specs/001-support-heights-api/plan.md b/specs/001-support-heights-api/plan.md deleted file mode 100644 index 426ff20..0000000 --- a/specs/001-support-heights-api/plan.md +++ /dev/null @@ -1,82 +0,0 @@ -# Implementation Plan: Support Heights API - -**Branch**: `001-support-heights-api` | **Date**: 2026-01-09 | **Spec**: [spec.md](./spec.md) -**Input**: Feature specification from `specs/001-support-heights-api/spec.md` - -## Summary - -Implement `getTideHeights` and `getTides` in the WorldTides library to enable fetching tide height predictions and flexible combined tide data from the WorldTides v2 API. This involves: -1. Refactoring `TidesCallback` to be generic (`TidesCallback<T>`). -2. Creating new data models for Heights (`TideHeights`, `Height`) and flexible Tides (`Tides`, `TideDataType`). -3. 
Extending the existing Retrofit/Repository architecture with new endpoints. - -> [!IMPORTANT] -> **Breaking Change**: Refactoring `TidesCallback` to a generic interface is a breaking change for existing consumers using the Java-style callback. - -## Technical Context - -**Language/Version**: Kotlin 1.9+ (Java 11+ compatible) -**Primary Dependencies**: Retrofit 2, Moshi (existing in project) -**Storage**: N/A (Stateless library) -**Testing**: JUnit 4/5, MockK (assumed based on standard Android/Kotlin libs) -**Target Platform**: Android / JVM -**Project Type**: Library -**Performance Goals**: Standard network latency; efficient parsing of potentially large lists. -**Constraints**: Must maintain Java interoperability and strict API fidelity. -**Scale/Scope**: 2 new API methods (`getTideHeights`, `getTides`), ~6-8 new classes, 1 refactored interface, 1 enum. - -## Constitution Check - -*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.* - -- [x] **Kotlin-First, Java-Compatible**: Plan accounts for Suspend (internal/future) and Callback (current pattern) interfaces. (The current pattern uses Callback for both Java and Kotlin via `Result`.) -- [x] **Strict API Fidelity**: Naming will mirror API (`Heights`, `dt`, `height`). -- [x] **Type Safety**: Using `Result` and strong types. -- [x] **Architecture Standards**: Using Retrofit/Moshi, separating Repository/Service. -- [x] **QA & Testing**: Plan includes unit testing. 
- -## Project Structure - -### Documentation (this feature) - -```text -specs/001-support-heights-api/ -├── plan.md # This file -├── research.md # Phase 0 output -├── data-model.md # Phase 1 output -├── quickstart.md # Phase 1 output -├── contracts/ # Phase 1 output -└── tasks.md # Phase 2 output -``` - -### Source Code - -```text -worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/ -├── WorldTides.kt # Main entry point (Modify: add getTideHeights, getTides) -├── WorldTidesRepository.kt # (MOVE from extremes + Modify: add heights, tides methods) -├── WorldTidesGateway.kt # (MOVE from extremes + Modify: add heights, tides endpoints) -├── TidesCallback.kt # (REFACTOR: Make generic TidesCallback<T>) -├── models/ # [NEW] Shared models -│ ├── Tides.kt # [NEW] Flexible result container -│ ├── TidesResponse.kt # [NEW] API response DTO -│ └── TideDataType.kt # [NEW] Enum for data types -├── extremes/ # Existing feature models -│ ├── data/ -│ │ └── ... # Existing DTOs -│ └── models/ -│ └── ... # Existing models (TideExtremes, Extreme) -├── heights/ # [NEW] -│ ├── data/ -│ │ ├── TideHeightsResponse.kt # [NEW] -│ │ └── HeightResponse.kt # [NEW] -│ └── models/ -│ ├── TideHeights.kt # [NEW] -│ └── Height.kt # [NEW] -``` - -**Structure Decision**: Repository (`WorldTidesRepository`) and Gateway (`WorldTidesGateway`) moved to package root as shared infrastructure. Feature-specific models remain in their respective packages (`extremes/`, `heights/`). Shared models (`Tides`, `TideDataType`) in `models/` package. - -## Complexity Tracking - -N/A - Standard implementation. 
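The `date` strings described in the data model (documented format `yyyy-MM-ddTHH:mm+0000`) can be parsed with a plain `SimpleDateFormat`. A minimal sketch under that assumption — `TideDates` is a hypothetical helper, not a planned class, and the pattern is inferred from the documented example:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

// Sketch: parsing the API's "yyyy-MM-ddTHH:mm+0000" date strings into
// java.util.Date, matching the data model's Date field.
final class TideDates {
    // 'Z' matches RFC 822 offsets such as "+0000".
    private static final String PATTERN = "yyyy-MM-dd'T'HH:mmZ";

    static Date parse(String raw) {
        try {
            // SimpleDateFormat is not thread-safe; a real implementation
            // would create one per call or use a ThreadLocal.
            return new SimpleDateFormat(PATTERN, Locale.US).parse(raw);
        } catch (ParseException e) {
            throw new IllegalArgumentException("Unparseable tide date: " + raw, e);
        }
    }
}
```

The `dt` field already carries the epoch seconds, so a production mapper could equally derive `Date` from `dt` and skip string parsing entirely; the sketch only shows the documented string route.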
diff --git a/specs/001-support-heights-api/quickstart.md b/specs/001-support-heights-api/quickstart.md deleted file mode 100644 index ce7548f..0000000 --- a/specs/001-support-heights-api/quickstart.md +++ /dev/null @@ -1,80 +0,0 @@ -# Quickstart: Tide Data Requests - -## Fetch Tide Heights (Kotlin) - -```kotlin -val worldTides = WorldTides.Builder().build(apiKey) - -worldTides.getTideHeights(Date(), 7, "51.5074", "-0.1278") { result -> - result.onSuccess { tideHeights -> - println("Fetched ${tideHeights.heights.size} height points") - tideHeights.heights.forEach { - println("Height at ${it.date}: ${it.height}") - } - } - result.onFailure { error -> - println("Error: ${error.message}") - } -} -``` - -## Fetch Tide Heights (Java) - -```java -WorldTides wt = new WorldTides.Builder().build(apiKey); - -wt.getTideHeights(new Date(), 7, "51.5074", "-0.1278", new TidesCallback<TideHeights>() { - @Override - public void result(TideHeights tides) { - System.out.println("Fetched " + tides.getHeights().size() + " points"); - } - - @Override - public void error(Error error) { - System.out.println("Error: " + error.getMessage()); - } -}); -``` - ---- - -## Fetch Flexible Tides Data (Kotlin) - -Request multiple data types in a single API call: - -```kotlin -val dataTypes = listOf(TideDataType.HEIGHTS, TideDataType.EXTREMES) - -worldTides.getTides(Date(), 7, "51.5074", "-0.1278", dataTypes) { result -> - result.onSuccess { tides -> - tides.heights?.let { println("Heights: ${it.heights.size} points") } - tides.extremes?.let { println("Extremes: ${it.extremes.size} points") } - } - result.onFailure { error -> - println("Error: ${error.message}") - } -} -``` - -## Fetch Flexible Tides Data (Java) - -```java -List<TideDataType> dataTypes = Arrays.asList(TideDataType.HEIGHTS, TideDataType.EXTREMES); - -wt.getTides(new Date(), 7, "51.5074", "-0.1278", dataTypes, new TidesCallback<Tides>() { - @Override - public void result(Tides tides) { - if (tides.getHeights() != null) { - System.out.println("Heights: " + 
tides.getHeights().getHeights().size()); - } - if (tides.getExtremes() != null) { - System.out.println("Extremes: " + tides.getExtremes().getExtremes().size()); - } - } - - @Override - public void error(Error error) { - System.out.println("Error: " + error.getMessage()); - } -}); -``` diff --git a/specs/001-support-heights-api/research.md b/specs/001-support-heights-api/research.md deleted file mode 100644 index 81079f2..0000000 --- a/specs/001-support-heights-api/research.md +++ /dev/null @@ -1,53 +0,0 @@ -# Research: Support Heights API - -**Status**: Complete -**Date**: 2026-01-09 - -## Decisions - -### 1. API Endpoint -- **Decision**: Use `v2` endpoint with `heights` query parameter. -- **Rationale**: README confirms `Heights` is a `v2` API request. Existing `WorldTidesGateway` uses `@GET("v2?extremes")`. -- **Implementation**: `@GET("v2?heights")` in `WorldTidesGateway`. - -### 2. Data Models -- **Decision**: Create `TideHeights` (wrapper) and `Height` (entity) models, mirroring `TideExtremes` and `Extreme`. -- **Rationale**: Consistency with existing codebase patterns. -- **Structure**: - - `TideHeightsResponse` (Status, Error, HeightsList) - - `HeightResponse` (dt, date, height) - Note: `type` field from `Extreme` is likely not present or not relevant for raw heights, unless it denotes prediction vs observation. We will assume prediction (default) for now. - -### 3. Package Structure -- **Decision**: Move `WorldTidesRepository` and `WorldTidesGateway` to package root. Create new `heights` and `models` package hierarchies. -- **Rationale**: User request. Repository and Gateway are shared infrastructure, not specific to `extremes`. 
-- **Implementation**: - - `com.oleksandrkruk.worldtides.WorldTidesRepository` (moved) - - `com.oleksandrkruk.worldtides.WorldTidesGateway` (moved) - - `com.oleksandrkruk.worldtides.heights.models.*` - - `com.oleksandrkruk.worldtides.heights.data.*` - - `com.oleksandrkruk.worldtides.models.Tides` - - `com.oleksandrkruk.worldtides.models.TideDataType` - -### 4. Flexible getTides Endpoint -- **Decision**: Support stacked API requests via dynamic query parameters based on `TideDataType` list. -- **Rationale**: User request. The WorldTides API allows stacking multiple data types (e.g., `?heights&extremes`). -- **Implementation**: `getTides(dataTypes: List<TideDataType>)` dynamically builds the endpoint query. - -### 5. Generic TidesCallback -- **Decision**: Refactor `TidesCallback` to be generic (`TidesCallback<T>`). -- **Rationale**: User request to avoid proliferating type-specific callback interfaces. -- **Breaking Change**: Yes. Existing consumers using `TidesCallback` will need to update their code. - -## Alternatives Considered - -### Co-location in 'extremes' package -- **Idea**: Reuse existing `extremes` package. -- **Rejected**: User explicitly requested separation into `heights` package and shared `models` package. - -### Suspend Functions -- **Idea**: Use Kotlin Coroutines (`suspend`) for the new method. -- **Rejected**: Constitution requires Java interoperability. Existing pattern uses `Callback`. We will stick to `Callback` (wrapped in `(Result<T>) -> Unit`) to match `getTideExtremes`. - -### Type-Specific Callbacks (e.g., TideHeightsCallback) -- **Idea**: Create a separate callback interface for each data type. -- **Rejected**: User explicitly requested a generic `TidesCallback<T>` to reduce code duplication. 
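Decision 4's dynamic query building reduces to joining the lowercase enum names with `&`. A simplified sketch (the real implementation routes through Retrofit; `stackedQuery` is a hypothetical helper shown only to make the mapping concrete):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Sketch of decision 4: map the requested TideDataType values onto the
// stacked query fragment the WorldTides API accepts (e.g. "heights&extremes").
enum TideDataType { HEIGHTS, EXTREMES }

final class TideQuery {
    static String stackedQuery(List<TideDataType> types) {
        return types.stream()
                .map(type -> type.name().toLowerCase(Locale.ROOT))
                .collect(Collectors.joining("&"));
    }
}
```

Joined onto the base URL this would yield requests like `v2?heights&extremes&lat=...&lon=...`, matching the stacking behavior the decision describes.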
diff --git a/specs/001-support-heights-api/spec.md b/specs/001-support-heights-api/spec.md deleted file mode 100644 index 7a3c02a..0000000 --- a/specs/001-support-heights-api/spec.md +++ /dev/null @@ -1,93 +0,0 @@ -# Feature Specification: Support Heights API - -**Feature Branch**: `001-support-heights-api` -**Created**: 2026-01-09 -**Status**: Draft -**Input**: User description: "based on the supported API calls documented in the readme create a specification to extend the library to support the Hights API request" - -## User Scenarios & Testing - -### User Story 1 - Retrieve Tide Heights (Priority: P1) - -As a developer using the library, I want to fetch predicted tide heights for a specific location and date range so that I can use this data to display tide curves/charts in my application. - -**Why this priority**: "Heights" is a fundamental dataset provided by the API that enables detailed visualization, which is currently missing from the library. - -**Independent Test**: -A developer can write a script or test case that invokes the new "Heights" method with valid credentials and receives a collection of height objects with timestamps and values. - -**Acceptance Scenarios**: - -1. **Given** the library is initialized with a valid API key, **When** requesting tide heights for a valid location and time range, **Then** the result contains a list of tide height records. -2. **Given** an invalid API key, **When** requesting tide heights, **Then** an error indicating authentication failure is returned. -3. **Given** a location with no available data, **When** requesting tide heights, **Then** the library returns an empty result or appropriate specific error, not a generic crash. - ---- - -### User Story 2 - Retrieve Tides with Flexible Data Types (Priority: P2) - -As a developer, I want to fetch tide data by specifying which data types to include (e.g., heights, extremes) in a single API call so that I can reduce network overhead and get exactly the data I need. 
- -**Why this priority**: The WorldTides API supports stacking multiple data types (e.g., `?heights&extremes`). Enabling this in the library provides efficiency and future extensibility (stations, datums, etc.). - -**Independent Test**: -A developer can invoke `getTides` with a list of data types and receive a response containing the requested data. - -**Acceptance Scenarios**: - -1. **Given** the library is initialized with a valid API key, **When** requesting `getTides([HEIGHTS, EXTREMES])`, **Then** the result contains both heights and extremes data. -2. **Given** one data type is unavailable for a location, **When** requesting multiple data types, **Then** the available data is returned and the unavailable part is empty or null. -3. **Given** `getTides([HEIGHTS])` is called, **When** the request completes, **Then** only heights data is populated in the result. - ---- - -### User Story 3 - Generic Callback Interface (Priority: P1) - -As a library maintainer, I want the callback interface to be generic (`TidesCallback<T>`) so that it can be reused for different response types without code duplication. - -**Why this priority**: Enables clean architecture and avoids proliferating type-specific callback interfaces. - -**Independent Test**: -Unit tests verify that `TidesCallback<TideHeights>`, `TidesCallback<TideExtremes>`, and `TidesCallback<Tides>` all compile and function correctly. - -**Acceptance Scenarios**: - -1. **Given** a generic callback, **When** used with a Heights request, **Then** it correctly receives `TideHeights`. -2. **Given** a generic callback, **When** used with an Extremes request, **Then** it correctly receives `TideExtremes`. -3. **Given** a generic callback, **When** used with a `getTides` request, **Then** it correctly receives `Tides`. - -### Edge Cases - -- **Invalid Parameters**: Requesting negative duration or invalid date formats. -- **Network Issues**: Connection timeout or loss during request. -- **API Changes**: Unexpected response format from the server. 
-- **Partial Responses (Combined)**: API returns one data type but not the other. - -## Requirements - -### Functional Requirements - -- **FR-001**: The library MUST provide a method to request "Heights" data from the remote API. -- **FR-002**: The request method MUST accept parameters for Location (Latitude, Longitude), Start Date, and Duration (Days). -- **FR-003**: The library MUST parse the API response into a strongly-typed data structure representing Tide Heights (Time and Height). -- **FR-004**: The library MUST provide error handling mechanisms to report failures (Network, Auth, Validation) to the caller. -- **FR-005**: The feature MUST be fully interoperable with both Kotlin and Java applications. -- **FR-006**: The usage pattern (method signature, callback style) MUST remain consistent with existing library features (e.g. `getTideExtremes`). -- **FR-007**: The library MUST refactor `TidesCallback` to be generic (`TidesCallback<T>`) for type-safe result handling. -- **FR-008**: The library MUST support stacked API requests via a flexible `getTides` method that accepts a list of data types. -- **FR-009**: The library MUST provide a `Tides` data model that can hold any combination of requested data (Heights, Extremes, and future types like Stations, Datums). -- **FR-010**: The library MUST provide a `TideDataType` enum to specify which data types to request. - -### Key Entities - -- **TideHeights/TideHeight**: Data structure representing the height of the tide at a specific point in time. -- **Tides**: Data structure containing optional Heights, Extremes, and future data type lists. -- **TideDataType**: Enum representing the types of tide data that can be requested (HEIGHTS, EXTREMES, future: STATIONS, DATUMS). - -## Success Criteria - -### Measurable Outcomes - -- **SC-001**: Developers can successfully retrieve and parse tide heights for a standard 7-day request. -- **SC-002**: The API surface for "Heights" matches the conventions of "Extremes" (consistency). 
-- **SC-003**: Test coverage extends to both Java and Kotlin consumers for this feature. diff --git a/specs/001-support-heights-api/tasks.md b/specs/001-support-heights-api/tasks.md deleted file mode 100644 index 26fe44e..0000000 --- a/specs/001-support-heights-api/tasks.md +++ /dev/null @@ -1,188 +0,0 @@ -# Tasks: Support Heights API - -**Input**: Design documents from `/specs/001-support-heights-api/` -**Prerequisites**: plan.md, spec.md, research.md, data-model.md, contracts/ - -**Organization**: Tasks are grouped by user story. Each task produces a compilable, testable, committable increment. - -## Format: `[ID] [P?] [Story] Description` - -- **[P]**: Can run in parallel (different files, no dependencies) -- **[Story]**: Which user story this task belongs to (US1, US2, US3) -- Include exact file paths in descriptions - ---- - -## Phase 1: Setup (Shared Infrastructure) - -**Purpose**: Move shared infrastructure to package root level. - -- [x] T001 Move `WorldTidesRepository.kt` from `extremes/` to package root `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesRepository.kt` -- [x] T002 Move `WorldTidesGateway.kt` from `extremes/` to package root `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesGateway.kt` -- [x] T003 Update imports in `WorldTides.kt` to reference new package locations -- [x] T004 Verify project compiles and existing tests pass after refactoring - -**Checkpoint**: Shared infrastructure relocated. Codebase compiles and tests pass. - ---- - -## Phase 2: Foundational (Generic Callback - US3) - -**Purpose**: Refactor `TidesCallback` to a generic interface. This is a **prerequisite** for US1 and US2. - -**Goal**: Enable type-safe callbacks for any response type. - -**Independent Test**: Existing `getTideExtremes` works with `TidesCallback<TideExtremes>`. - -> [!WARNING] -> **Breaking Change**: This refactors the existing `TidesCallback` interface. 
- -- [x] T005 Refactor `TidesCallback.kt` to `TidesCallback<T>` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/TidesCallback.kt` -- [x] T006 Update `WorldTides.getTideExtremes` Java overload to use `TidesCallback<TideExtremes>` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTides.kt` -- [x] T007 Update `WorldTidesRepository.extremes` to work with generic callback in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesRepository.kt` -- [x] T008 Verify project compiles and existing `getTideExtremes` tests pass - -**Checkpoint**: Generic callback ready. Existing functionality unchanged. Codebase compiles and tests pass. - ---- - -## Phase 3: User Story 1 - Retrieve Tide Heights (Priority: P1) 🎯 MVP - -**Goal**: Enable developers to fetch tide heights for a location and date range. - -**Independent Test**: Call `getTideHeights()` and receive a list of height objects with timestamps. - -### Implementation for User Story 1 - -#### Models - -- [x] T009 [P] [US1] Create `Height.kt` data class in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/heights/models/Height.kt` -- [x] T010 [P] [US1] Create `TideHeights.kt` wrapper in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/heights/models/TideHeights.kt` - -#### DTOs - -- [x] T011 [P] [US1] Create `HeightResponse.kt` DTO in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/heights/data/HeightResponse.kt` -- [x] T012 [P] [US1] Create `TideHeightsResponse.kt` DTO in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/heights/data/TideHeightsResponse.kt` - -#### Gateway & Repository - -- [x] T013 [US1] Add `heights` endpoint to `WorldTidesGateway.kt` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesGateway.kt` -- [x] T014 [US1] Add `heights` method to `WorldTidesRepository.kt` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesRepository.kt` - -#### Public API - -- [x] T015 [US1] Add `getTideHeights` method (Kotlin lambda) to 
`WorldTides.kt` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTides.kt` -- [x] T016 [US1] Add `getTideHeights` method (Java callback) to `WorldTides.kt` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTides.kt` - -#### Documentation - -- [x] T017 [US1] Update README.md with Heights API usage examples in `README.md` -- [x] T018 [US1] Verify project compiles and `getTideHeights` can be called - -**Checkpoint**: User Story 1 complete. Developers can fetch tide heights. Codebase compiles and tests pass. - ---- - -## Phase 4: User Story 2 - Retrieve Tides with Flexible Data Types (Priority: P2) - -**Goal**: Enable developers to fetch multiple data types (heights, extremes) in a single API call. - -**Independent Test**: Call `getTides([HEIGHTS, EXTREMES])` and receive a response with both data sets. - -### Implementation for User Story 2 - -#### Shared Models - -- [x] T019 [P] [US2] Create `TideDataType.kt` enum in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/models/TideDataType.kt` -- [x] T020 [P] [US2] Create `Tides.kt` wrapper in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/models/Tides.kt` -- [x] T021 [P] [US2] Create `TidesResponse.kt` DTO in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/models/TidesResponse.kt` - -#### Gateway & Repository - -- [x] T022 [US2] Add dynamic `tides` endpoint to `WorldTidesGateway.kt` that accepts query parameters in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesGateway.kt` -- [x] T023 [US2] Add `tides` method to `WorldTidesRepository.kt` that builds query from `TideDataType` list in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTidesRepository.kt` - -#### Public API - -- [x] T024 [US2] Add `getTides` method (Kotlin lambda) to `WorldTides.kt` in `worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTides.kt` -- [x] T025 [US2] Add `getTides` method (Java callback) to `WorldTides.kt` in 
`worldtides/src/main/kotlin/com/oleksandrkruk/worldtides/WorldTides.kt` - -#### Documentation - -- [x] T026 [US2] Update README.md with `getTides` usage examples in `README.md` -- [x] T027 [US2] Verify project compiles and `getTides` can be called with multiple data types - -**Checkpoint**: User Story 2 complete. Developers can fetch flexible tide data. Codebase compiles and tests pass. - ---- - -## Phase 5: Polish & Cross-Cutting Concerns - -**Purpose**: Final cleanup and documentation. - -- [x] T028 Update README.md supported API table (mark Heights as Yes) in `README.md` -- [x] T029 Add KDoc/Javadoc to all new public methods and classes -- [x] T030 Run full test suite and verify all tests pass -- [x] T031 Verify quickstart.md examples compile and work - ---- - -## Dependencies & Execution Order - -### Phase Dependencies - -- **Phase 1 (Setup)**: No dependencies - start immediately -- **Phase 2 (Foundational/US3)**: Depends on Phase 1 - BLOCKS US1 and US2 -- **Phase 3 (US1)**: Depends on Phase 2 -- **Phase 4 (US2)**: Depends on Phase 2 (can run in parallel with US1 if desired) -- **Phase 5 (Polish)**: Depends on US1 and US2 completion - -### User Story Dependencies - -- **US3 (Generic Callback)**: Foundational - must complete first -- **US1 (Heights)**: Depends on US3 only - can run in parallel with US2 -- **US2 (Flexible Tides)**: Depends on US3 only - can run in parallel with US1 - -### Within Each User Story - -- Models before DTOs (or parallel if independent files) -- DTOs before Gateway/Repository -- Gateway/Repository before Public API -- Public API before Documentation - -### Parallel Opportunities - -```text -# Models (T009, T010) can run in parallel -# DTOs (T011, T012) can run in parallel -# Shared models (T019, T020, T021) can run in parallel -``` - ---- - -## Implementation Strategy - -### MVP First (User Story 1 Only) - -1. Complete Phase 1: Setup (T001-T004) -2. Complete Phase 2: Generic Callback (T005-T008) -3. 
Complete Phase 3: User Story 1 (T009-T018) -4. **STOP and VALIDATE**: Test `getTideHeights` independently -5. Commit and merge if ready - -### Full Feature Delivery - -1. Complete MVP (above) -2. Add Phase 4: User Story 2 (T019-T027) -3. Complete Phase 5: Polish (T028-T031) -4. Final validation and merge - ---- - -## Notes - -- Each task produces a **compilable codebase** -- Commit after each task or logical group -- [P] tasks can run in parallel (different files) -- Verify tests pass after each phase -- Breaking change in Phase 2 requires version bump diff --git a/specs/002-sample-client/checklists/requirements.md b/specs/002-sample-client/checklists/requirements.md deleted file mode 100644 index b33d1a8..0000000 --- a/specs/002-sample-client/checklists/requirements.md +++ /dev/null @@ -1,93 +0,0 @@ -# Requirements Quality Checklist: E2E Sample Client - -**Purpose**: Validate specification completeness, clarity, and quality -**Created**: 2026-01-16 -**Feature**: [spec.md](file:///Users/oleksandrkruk/projects/worldtides/specs/002-sample-client/spec.md) -**Documents Reviewed**: spec.md, plan.md, tasks.md - ---- - -## Requirement Completeness - -- [x] CHK001 - Are all library API methods covered in functional requirements? [Completeness, Spec §FR-001 to FR-005] -- [x] CHK002 - Are environment configuration requirements specified (API key via env var)? [Completeness, Spec §FR-007] -- [x] CHK003 - Are project structure requirements defined (standalone vs submodule)? [Completeness, Spec §FR-008] -- [x] CHK004 - Are CI workflow trigger requirements specified? [Completeness, Spec §FR-009] -- [x] CHK005 - Are build order dependencies documented (mavenLocal publish)? [Completeness, Spec §FR-010] -- [x] CHK006 - Are error handling test requirements defined? [Completeness, Spec §FR-006] - -## Requirement Clarity - -- [x] CHK007 - Is the test location explicitly specified ("Ferraria, Portugal" or similar)? 
[Clarity, Plan §Implementation Files] -- [x] CHK008 - Is the date range for tide requests defined (e.g., number of days)? [Clarity, Gap - could add to spec] -- [x] CHK009 - Are async callback synchronization requirements clear (CountDownLatch mentioned)? [Clarity, Plan §E2ETest.kt] -- [x] CHK010 - Is the "valid API key" vs "invalid API key" distinction clear for test scenarios? [Clarity, Spec §US1, US2] - -## Requirement Consistency - -- [x] CHK011 - Are functional requirements aligned with user story acceptance scenarios? [Consistency, Spec §FR vs §US] -- [x] CHK012 - Do tasks.md phases align with implementation plan structure? [Consistency, tasks.md vs plan.md] -- [x] CHK013 - Are success criteria measurable and aligned with functional requirements? [Consistency, Spec §SC vs §FR] -- [x] CHK014 - Is the branch name consistent across all documents (002-sample-client)? [Consistency, All docs] - -## Acceptance Criteria Quality - -- [x] CHK015 - Are all acceptance scenarios in Given/When/Then format? [Measurability, Spec §User Scenarios] -- [x] CHK016 - Can each acceptance scenario be objectively verified? [Measurability, Spec §US1, US2, US3] -- [x] CHK017 - Are success criteria mapped to specific requirements? [Traceability, Spec §SC to §FR] - -## Scenario Coverage - -- [x] CHK018 - Are happy-path scenarios covered for all 5 API method variations? [Coverage, Spec §US1] -- [x] CHK019 - Are error scenarios covered (invalid API key)? [Coverage, Spec §US2] -- [x] CHK020 - Are CI integration scenarios covered (manual trigger, no auto-trigger)? [Coverage, Spec §US3] - -## Edge Case Coverage - -- [ ] CHK021 - Are requirements defined for missing API key (empty env var) scenario? [Gap, Spec §Edge Cases] -- [ ] CHK022 - Are requirements defined for network failure scenarios? [Gap, Spec §Edge Cases] -- [ ] CHK023 - Are requirements defined for invalid location requests? 
[Gap, Spec §Edge Cases] - -## Dependencies & Assumptions - -- [x] CHK024 - Is the dependency on mavenLocal publish documented? [Dependency, Plan §e2e.yml] -- [x] CHK025 - Is the assumption of valid World Tides API key documented? [Assumption, Spec §US1] -- [x] CHK026 - Are Kotlin/JUnit version dependencies specified? [Dependency, Plan §Technical Context] - -## Traceability - -- [x] CHK027 - Are tasks traced to user stories ([US1], [US2], [US3])? [Traceability, tasks.md] -- [x] CHK028 - Are functional requirements mapped to acceptance scenarios? [Traceability, Spec §FR vs §US] -- [x] CHK029 - Are implementation files mapped to requirements? [Traceability, Plan §Implementation Files] - ---- - -## Summary - -| Quality Dimension | Pass | Fail | Items | -|-------------------|------|------|-------| -| Completeness | 6 | 0 | CHK001-006 | -| Clarity | 4 | 0 | CHK007-010 | -| Consistency | 4 | 0 | CHK011-014 | -| Acceptance Criteria | 3 | 0 | CHK015-017 | -| Scenario Coverage | 3 | 0 | CHK018-020 | -| Edge Case Coverage | 0 | 3 | CHK021-023 | -| Dependencies | 3 | 0 | CHK024-026 | -| Traceability | 3 | 0 | CHK027-029 | -| **Total** | **26** | **3** | **29** | - -## Outstanding Gaps - -The following edge cases are listed as questions in the spec but lack explicit expected behavior requirements: - -1. **CHK021**: Missing API key behavior — should tests skip, fail gracefully, or error? -2. **CHK022**: Network failure behavior — handled by library, sample client just observes? -3. **CHK023**: Invalid location behavior — tests expect API error response? - -**Recommendation**: These gaps are acceptable for an E2E test client since the library handles these scenarios. The sample client's job is to exercise the library, not implement error recovery. - ---- - -## Ready for Implementation - -✅ Specification is sufficiently complete for implementation. 
diff --git a/specs/002-sample-client/plan.md b/specs/002-sample-client/plan.md deleted file mode 100644 index d575785..0000000 --- a/specs/002-sample-client/plan.md +++ /dev/null @@ -1,188 +0,0 @@ -# Implementation Plan: E2E Sample Client - -**Branch**: `002-sample-client` | **Date**: 2026-01-16 | **Spec**: [spec.md](file:///Users/oleksandrkruk/projects/worldtides/specs/002-sample-client/spec.md) -**Input**: Feature specification from `/specs/002-sample-client/spec.md` - -## Summary - -Create a standalone Gradle project (`sample-client/`) that exercises all WorldTides library API methods for end-to-end integration testing. The sample client will be run via manual GitHub Actions workflow dispatch, consuming the library from mavenLocal. Secrets are configured via `WORLD_TIDES_API_KEY` environment variable. - -## Technical Context - -**Language/Version**: Kotlin 1.9.20 (matches worldtides library) -**Primary Dependencies**: worldtides library (via mavenLocal), JUnit 5, AssertJ -**Storage**: N/A -**Testing**: JUnit 5 with JUnit Platform -**Target Platform**: JVM (CI runner - ubuntu-latest) -**Project Type**: Standalone Gradle project (not submodule) -**Performance Goals**: N/A (E2E tests, not performance testing) -**Constraints**: Must consume library via mavenLocal; API key via environment variable -**Scale/Scope**: Single test class with 6-7 test methods - -## Constitution Check - -*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.* - -| Principle | Status | Notes | -|-----------|--------|-------| -| I. Kotlin-First, Java-Compatible | ✅ Pass | Sample client written in Kotlin; tests Kotlin callback API | -| II. Strict API Fidelity | ✅ Pass | Tests validate all current API methods | -| III. 
Type Safety & Null Safety | ✅ Pass | Tests validate response structures | -| Networking Layer | ✅ Pass | Not modifying networking; consuming as-is | -| Data Models | ✅ Pass | Tests validate immutable data class responses | -| SemVer | ✅ Pass | No versioning impact | -| Documentation | ✅ Pass | Sample client serves as usage documentation | -| QA & Testing | ✅ Pass | Adds E2E tests complementing unit tests | - -**Result**: All gates pass. No violations to justify. - -## Project Structure - -### Documentation (this feature) - -```text -specs/002-sample-client/ -├── spec.md # Feature specification -├── plan.md # This file -├── research.md # Phase 0 (N/A - no unknowns) -├── tasks.md # Phase 2 output -└── checklists/ - └── requirements.md # Spec quality checklist -``` - -### Source Code (repository root) - -```text -sample-client/ # STANDALONE project (not in settings.gradle.kts) -├── build.gradle.kts # Declares dependency on worldtides via mavenLocal -├── settings.gradle.kts # Self-contained Gradle settings -└── src/ - ├── main/kotlin/ - │ └── com/oleksandrkruk/worldtides/sample/ - │ └── SampleClient.kt # Optional: runnable demo (main function) - └── test/kotlin/ - └── com/oleksandrkruk/worldtides/sample/ - └── E2ETest.kt # JUnit 5 E2E integration tests -``` - -### CI Workflow - -```text -.github/workflows/ -└── e2e.yml # NEW: Manual workflow_dispatch trigger -``` - -**Structure Decision**: Standalone Gradle project in `sample-client/` directory. Completely separate from the worldtides library build - building worldtides does NOT build sample-client. The sample-client consumes worldtides via mavenLocal after explicit publish. - -## Implementation Files - -### 1. sample-client/settings.gradle.kts - -```kotlin -rootProject.name = "sample-client" -``` - -### 2. 
sample-client/build.gradle.kts - -```kotlin -plugins { - kotlin("jvm") version "1.9.20" - application -} - -repositories { - mavenLocal() // Consume worldtides from local publish - mavenCentral() -} - -dependencies { - implementation("com.oleksandrkruk:worldtides:+") // Latest from mavenLocal - - testImplementation(kotlin("test")) - testImplementation("org.assertj:assertj-core:3.12.2") - testImplementation("org.junit.jupiter:junit-jupiter-api:5.6.2") - testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.6.2") -} - -application { - mainClass.set("com.oleksandrkruk.worldtides.sample.SampleClientKt") -} - -tasks.test { - useJUnitPlatform() - // Pass environment variable to tests - environment("WORLD_TIDES_API_KEY", System.getenv("WORLD_TIDES_API_KEY") ?: "") -} - -kotlin { - jvmToolchain(8) -} -``` - -### 3. sample-client/src/test/kotlin/.../E2ETest.kt - -JUnit 5 tests covering: -- `testGetTideExtremes()` - FR-001 -- `testGetTideHeights()` - FR-002 -- `testGetTidesHeightsOnly()` - FR-003 -- `testGetTidesExtremesOnly()` - FR-004 -- `testGetTidesBothTypes()` - FR-005 -- `testInvalidApiKeyReturnsError()` - FR-006 - -Each test uses `CountDownLatch` for async callback synchronization and validates response structure. - -### 4. .github/workflows/e2e.yml - -```yaml -name: E2E Tests - -on: - workflow_dispatch: # Manual trigger only (FR-009) - -jobs: - e2e: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - uses: actions/setup-java@v4 - with: - distribution: 'jetbrains' - java-version: '21' - - # FR-010: Publish worldtides to mavenLocal first - - name: Publish worldtides to mavenLocal - run: ./gradlew :worldtides:publishToMavenLocal - - # Run sample-client tests with API key - - name: Run E2E Tests - env: - WORLD_TIDES_API_KEY: ${{ secrets.WORLD_TIDES_API_KEY }} - run: | - cd sample-client - ./gradlew test -``` - -## Complexity Tracking - -No constitution violations. Table not needed. 
- -## Phase 0: Research - -**No unknowns detected.** All technical context is resolved: -- Language/framework matches existing library -- mavenLocal consumption is standard Gradle practice -- JUnit 5 test structure mirrors existing tests - -**Output**: research.md not required (no NEEDS CLARIFICATION items). - -## Phase 1: Design & Contracts - -**Data Model**: N/A - Sample client consumes library models as-is, does not define new entities. - -**API Contracts**: N/A - Sample client is a consumer, not a producer of APIs. - -**Output**: data-model.md and contracts/ not required for this feature. - -## Next Steps - -Run `/speckit.tasks` to break this plan into executable tasks. diff --git a/specs/002-sample-client/spec.md b/specs/002-sample-client/spec.md deleted file mode 100644 index 0cfd554..0000000 --- a/specs/002-sample-client/spec.md +++ /dev/null @@ -1,99 +0,0 @@ -# Feature Specification: E2E Sample Client - -**Feature Branch**: `002-sample-client` -**Created**: 2026-01-16 -**Status**: Implemented -**Input**: User description: "Create a sample client for the WorldTides library that exercises all API use cases for end-to-end testing in CI" - -## User Scenarios & Testing *(mandatory)* - -### User Story 1 - Verify All Library API Methods Work (Priority: P1) - -A CI pipeline operator wants to verify that the WorldTides library correctly integrates with the live World Tides API. They manually trigger an E2E workflow that runs the sample client against all supported API methods using a real API key. - -**Why this priority**: Core purpose of the sample client — validates the library works end-to-end against the production API before releases. - -**Independent Test**: Can be fully tested by running `./gradlew :sample-client:test` with a valid API key and observing that all API methods return valid tide data. - -**Acceptance Scenarios**: - -1. 
**Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTideExtremes()`, **Then** the response contains a non-empty list of tide extremes with valid dates, heights, and tide types (High/Low). - -2. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTideHeights()`, **Then** the response contains a non-empty list of tide heights with valid dates and height values. - -3. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTides()` with HEIGHTS only, **Then** the response contains tide heights and extremes is null. - -4. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTides()` with EXTREMES only, **Then** the response contains tide extremes and heights is null. - -5. **Given** a valid World Tides API key is configured, **When** the E2E tests run for `getTides()` with both HEIGHTS and EXTREMES, **Then** the response contains both heights and extremes data. - ---- - -### User Story 2 - Error Handling Verification (Priority: P2) - -A developer wants to verify that the library properly returns errors when given invalid input or credentials. The sample client should demonstrate proper error handling behavior. - -**Why this priority**: Error handling is important but secondary to happy-path functionality. - -**Independent Test**: Can be tested by running the E2E test with an invalid API key and verifying that an error is returned (not a crash). - -**Acceptance Scenarios**: - -1. **Given** an invalid API key, **When** any API method is called, **Then** the callback receives an error result (not a success with empty data). - ---- - -### User Story 3 - CI Integration (Priority: P3) - -A release engineer wants a GitHub Actions workflow that can be manually triggered to run E2E tests. The workflow should only execute on demand to control API usage costs. 
- -**Why this priority**: Enables continuous integration but not blocking for library functionality. - -**Independent Test**: Can be tested by manually triggering the workflow from GitHub Actions UI and verifying it completes successfully. - -**Acceptance Scenarios**: - -1. **Given** the `e2e.yml` workflow exists, **When** a maintainer clicks "Run workflow" from GitHub Actions, **Then** the workflow runs the sample client tests with the configured API key secret. - -2. **Given** the E2E workflow, **When** a push is made to any branch, **Then** the E2E workflow does NOT automatically trigger. - ---- - -### Edge Cases - -- What happens when the API key is missing (empty environment variable)? -- What happens when network connectivity fails during an API call? -- What happens when requesting tide data for an invalid location (middle of land)? - -## Requirements *(mandatory)* - -### Functional Requirements - -- **FR-001**: Sample client MUST call `getTideExtremes()` with valid parameters and validate the response structure -- **FR-002**: Sample client MUST call `getTideHeights()` with valid parameters and validate the response structure -- **FR-003**: Sample client MUST call `getTides()` with single data type (HEIGHTS only) and verify only heights are returned -- **FR-004**: Sample client MUST call `getTides()` with single data type (EXTREMES only) and verify only extremes are returned -- **FR-005**: Sample client MUST call `getTides()` with multiple data types (HEIGHTS + EXTREMES) and verify both are returned -- **FR-006**: Sample client MUST verify error handling when an invalid API key is used -- **FR-007**: Sample client MUST read the API key from the `WORLD_TIDES_API_KEY` environment variable -- **FR-008**: Sample client MUST be a standalone Gradle project (not a submodule of worldtides) -- **FR-009**: E2E workflow MUST only trigger on manual `workflow_dispatch` -- **FR-010**: E2E workflow MUST publish worldtides to mavenLocal before running sample client tests - 
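FR-007's environment lookup can be sketched as below; `resolveApiKey` is an illustrative helper name, not part of the library, and treating a blank value the same as a missing one is an assumption.

```kotlin
// Resolve the API key per FR-007. Returns null when the variable is
// missing or blank so the caller can decide whether to skip or fail
// the E2E run (see the open edge-case question above).
fun resolveApiKey(env: Map<String, String> = System.getenv()): String? =
    env["WORLD_TIDES_API_KEY"]?.takeIf { it.isNotBlank() }

fun main() {
    val key = resolveApiKey()
    println(if (key == null) "WORLD_TIDES_API_KEY not set" else "API key present")
}
```

The `env` parameter defaults to the real environment but can be passed explicitly, which keeps the helper unit-testable without mutating process state.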
-### Key Entities - -- **WorldTides Client**: The main library under test, initialized with an API key via `WorldTides.Builder().build(apiKey)` -- **TideExtremes**: Contains list of `Extreme` objects (date, height, type) -- **TideHeights**: Contains list of `Height` objects (date, height) -- **Tides**: Combined response containing optional heights and extremes -- **TideDataType**: Enum specifying which data types to request (HEIGHTS, EXTREMES) - -## Success Criteria *(mandatory)* - -### Measurable Outcomes - -- **SC-001**: All 5 API method variations are exercised by the sample client tests -- **SC-002**: Sample client tests pass when given a valid API key (verified by manual E2E workflow run) -- **SC-003**: Sample client test for error handling correctly identifies an invalid API key scenario -- **SC-004**: E2E workflow can be triggered manually from GitHub Actions UI -- **SC-005**: E2E workflow does not run automatically on push or pull request events diff --git a/specs/002-sample-client/tasks.md b/specs/002-sample-client/tasks.md deleted file mode 100644 index ebcf5d4..0000000 --- a/specs/002-sample-client/tasks.md +++ /dev/null @@ -1,160 +0,0 @@ -# Tasks: E2E Sample Client - -**Input**: Design documents from `/specs/002-sample-client/` -**Prerequisites**: plan.md ✓, spec.md ✓ -**Approach**: TDD - write tests FIRST, ensure they FAIL, then implement - -## Format: `[ID] [P?] 
[Story] Description` - -- **[P]**: Can run in parallel (different files, no dependencies) -- **[Story]**: Which user story this task belongs to (US1, US2, US3) -- Include exact file paths in descriptions - ---- - -## Phase 1: Setup (Project Initialization) - -**Purpose**: Create standalone sample-client project structure - -- [x] T001 Create `sample-client/` directory at repository root -- [x] T002 Create `sample-client/settings.gradle.kts` with project name "sample-client" -- [x] T003 Create `sample-client/build.gradle.kts` with Kotlin JVM, mavenLocal, JUnit 5, and AssertJ dependencies -- [x] T004 Create directory structure: `sample-client/src/test/kotlin/com/oleksandrkruk/worldtides/sample/` -- [x] T005 Verify project compiles: `cd sample-client && ./gradlew build` (should succeed with empty test class) - -**Checkpoint**: Standalone project structure ready. Can now write tests. - ---- - -## Phase 2: User Story 1 - Verify All Library API Methods Work (Priority: P1) 🎯 MVP - -**Goal**: E2E tests validate all 5 API method variations return correct data - -**Independent Test**: Run `WORLD_TIDES_API_KEY=<key> ./gradlew :sample-client:test` - all tests pass with valid API key - -### Tests for User Story 1 (TDD - Write FIRST, verify FAIL) - -> **TDD**: Write failing tests that define expected behavior, then implement code to make them pass - -- [x] T006 [US1] Create test file `sample-client/src/test/kotlin/com/oleksandrkruk/worldtides/sample/E2ETest.kt` with test class skeleton and API key from env var -- [x] T007 [P] [US1] Write failing test `testGetTideExtremes()` - asserts response has non-empty extremes list with valid dates, heights, and TideType (FR-001) -- [x] T008 [P] [US1] Write failing test `testGetTideHeights()` - asserts response has non-empty heights list with valid dates and height values (FR-002) -- [x] T009 [P] [US1] Write failing test `testGetTidesHeightsOnly()` - asserts heights present and extremes null (FR-003) -- [x] T010 [P] [US1] Write failing test
`testGetTidesExtremesOnly()` - asserts extremes present and heights null (FR-004) -- [x] T011 [P] [US1] Write failing test `testGetTidesBothTypes()` - asserts both heights and extremes present (FR-005) - -### Implementation for User Story 1 - -- [x] T012 [US1] Publish worldtides library: `./gradlew :worldtides:publishToMavenLocal` -- [x] T013 [US1] Verify all US1 tests pass with valid API key: `WORLD_TIDES_API_KEY=<key> ./gradlew :sample-client:test` - -**Checkpoint**: All 5 happy-path API tests pass. User Story 1 complete. - ---- - -## Phase 3: User Story 2 - Error Handling Verification (Priority: P2) - -**Goal**: E2E test verifies error callback is invoked with invalid API key - -**Independent Test**: Run test with invalid API key - test passes (error correctly returned) - -### Tests for User Story 2 (TDD - Write FIRST, verify FAIL) - -- [x] T014 [US2] Write failing test `testInvalidApiKeyReturnsError()` in `sample-client/src/test/kotlin/com/oleksandrkruk/worldtides/sample/E2ETest.kt` - asserts error callback invoked, not success (FR-006) - -### Implementation for User Story 2 - -- [x] T015 [US2] Verify US2 test passes: error test uses hardcoded invalid key and confirms error result - -**Checkpoint**: Error handling verified. User Story 2 complete.
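The invalid-key check in T014 boils down to asserting that the callback delivers a failure result, never a success with empty data. A minimal sketch, with `simulateCall` standing in for a real WorldTides method invoked with a bad key (an assumption, not the library API):

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit
import kotlin.concurrent.thread

// Stand-in for a WorldTides call: rejects the hardcoded invalid key.
fun simulateCall(apiKey: String, onResult: (Result<String>) -> Unit) {
    thread {
        if (apiKey == "invalid-key") onResult(Result.failure(IllegalArgumentException("Invalid API key")))
        else onResult(Result.success("tides"))
    }
}

// Shape of testInvalidApiKeyReturnsError(): the error path must be
// taken, and the latch guards against a callback that never fires.
fun invalidKeyYieldsError(): Boolean {
    val latch = CountDownLatch(1)
    var failed = false
    simulateCall("invalid-key") { result ->
        failed = result.isFailure // error result, not success-with-empty-data
        latch.countDown()
    }
    check(latch.await(10, TimeUnit.SECONDS)) { "callback not invoked" }
    return failed
}
```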
- ---- - -## Phase 4: User Story 3 - CI Integration (Priority: P3) - -**Goal**: GitHub Actions workflow runs E2E tests on manual trigger only - -**Independent Test**: Manually trigger workflow from GitHub Actions UI - tests run successfully - -### Implementation for User Story 3 (No TDD - infrastructure config) - -- [x] T016 [US3] Create `.github/workflows/e2e.yml` with `workflow_dispatch` trigger only (FR-009) -- [x] T017 [US3] Add step to publish worldtides to mavenLocal before running tests (FR-010) -- [x] T018 [US3] Add step to run sample-client tests with `WORLD_TIDES_API_KEY` secret -- [x] T019 [US3] Validate workflow YAML syntax: `yamllint .github/workflows/e2e.yml` or manual review - -**Checkpoint**: CI workflow ready. User Story 3 complete. - ---- - -## Phase 5: Polish & Verification - -**Purpose**: Final validation and documentation - -- [x] T020 Run full test suite locally with valid API key to confirm all tests pass -- [x] T021 [P] Update specs/002-sample-client/ docs to mark feature as implemented -- [x] T022 Commit all changes with descriptive message - ---- - -## Dependencies & Execution Order - -### Phase Dependencies - -- **Setup (Phase 1)**: No dependencies - start immediately -- **US1 (Phase 2)**: Depends on Setup - write tests first, then verify pass -- **US2 (Phase 3)**: Depends on Setup - can run in parallel with US1 -- **US3 (Phase 4)**: Depends on Setup - can run in parallel with US1/US2 -- **Polish (Phase 5)**: Depends on all user stories complete - -### Parallel Opportunities - -```text -After Phase 1 (Setup): -┌─────────────────────────────────────────┐ -│ Phase 2: US1 (API Tests) │ -├─────────────────────────────────────────┤ -│ T007, T008, T009, T010, T011 [P] │ ← Write all 5 tests in parallel -└─────────────────────────────────────────┘ - -After Phase 1 (Setup): -┌─────────────────────────────────────────┐ -│ Phase 3: US2 (Error Test) │ ← Can start in parallel with US1 -└─────────────────────────────────────────┘ - -After Phase 1 
(Setup): -┌─────────────────────────────────────────┐ -│ Phase 4: US3 (CI Workflow) │ ← Can start in parallel with US1/US2 -└─────────────────────────────────────────┘ -``` - ---- - -## Implementation Strategy - -### TDD MVP First (User Story 1 Only) - -1. Complete Phase 1: Setup -2. Write all US1 tests (T007-T011) - verify they compile but would fail without library -3. Publish library to mavenLocal (T012) -4. Run tests (T013) - verify all pass -5. **STOP and VALIDATE**: MVP complete, core E2E coverage achieved - -### Incremental Delivery - -1. Setup → US1 (API tests) → MVP! ✓ -2. Add US2 (error handling) → Enhanced coverage -3. Add US3 (CI workflow) → Full automation - ---- - -## Summary - -| Phase | User Story | Task Count | Parallelizable | -|-------|------------|------------|----------------| -| 1 | Setup | 5 | - | -| 2 | US1 - API Tests | 8 | 5 tests [P] | -| 3 | US2 - Error Handling | 2 | - | -| 4 | US3 - CI Workflow | 4 | - | -| 5 | Polish | 3 | 1 [P] | -| **Total** | | **22** | **6** |