
Releases: n0zer0d4y/athena-protocol

v0.2.0 - MCP Client Environment Variable Support

15 Nov 13:24


Overview

This release adds support for configuring Athena Protocol directly through MCP client environment variables, so users no longer need a local .env file. This significantly improves the experience for npm/npx installations.

New Features

MCP Client Configuration Support

  • Environment Variable Priority System: Implemented hierarchical configuration loading where MCP client env variables take precedence over local .env files
  • NPX Detection: Added intelligent detection of npx execution to skip unnecessary .env file loading
  • Flexible Provider Configuration: Support for configuring any combination of LLM providers through MCP client environment variables
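The priority system and npx detection described above can be sketched as follows. This is an illustrative sketch only: the helper names (`isNpxExecution`, `resolveValue`) and the detection heuristic are assumptions, not the project's actual API.

```typescript
import * as path from "node:path";

// Heuristic npx detection: npx executes the package from an "_npx" cache
// directory, so the script path is a reasonable signal. (Assumption, not
// the project's actual detection logic.)
function isNpxExecution(argvPath: string = process.argv[1] ?? ""): boolean {
  return argvPath.includes(`${path.sep}_npx${path.sep}`);
}

// Priority rule from the release notes: a variable set in the MCP client
// config wins over the same variable in a local .env file.
function resolveValue(
  key: string,
  mcpEnv: Record<string, string | undefined>,
  dotenvValues: Record<string, string>,
): string | undefined {
  return mcpEnv[key] ?? dotenvValues[key];
}
```

Under npx, `.env` loading would simply be skipped, so `dotenvValues` is empty and only MCP client (and system) variables apply.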

GPT-5 Model Support

  • GPT-5 Specific Parameters: Added support for GPT-5 exclusive parameters:
    • OPENAI_MAX_COMPLETION_TOKENS_DEFAULT for controlling completion token limits
    • OPENAI_VERBOSITY_DEFAULT for verbosity control
    • OPENAI_REASONING_EFFORT_DEFAULT for reasoning effort configuration
  • Model-Aware Configuration: System now recognizes GPT-5 models and applies appropriate parameter handling
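One way the model-aware handling might branch is sketched below. The function name `buildRequestParams` and the exact defaults are illustrative assumptions; only the environment variable names come from this release.

```typescript
interface EnvConfig {
  OPENAI_MODEL_DEFAULT?: string;
  OPENAI_MAX_COMPLETION_TOKENS_DEFAULT?: string;
  OPENAI_VERBOSITY_DEFAULT?: string;
  OPENAI_REASONING_EFFORT_DEFAULT?: string;
  LLM_TEMPERATURE_DEFAULT?: string;
  LLM_MAX_TOKENS_DEFAULT?: string;
}

function isGpt5(model: string): boolean {
  return model.startsWith("gpt-5");
}

function buildRequestParams(env: EnvConfig): Record<string, unknown> {
  const model = env.OPENAI_MODEL_DEFAULT ?? "gpt-4o";
  if (isGpt5(model)) {
    // GPT-5 models take completion-token limits, verbosity, and reasoning
    // effort instead of the standard temperature/max-tokens knobs.
    return {
      model,
      max_completion_tokens: Number(env.OPENAI_MAX_COMPLETION_TOKENS_DEFAULT ?? 8192),
      verbosity: env.OPENAI_VERBOSITY_DEFAULT ?? "medium",
      reasoning_effort: env.OPENAI_REASONING_EFFORT_DEFAULT ?? "medium",
    };
  }
  // Non-GPT-5 models keep the standard parameters.
  return {
    model,
    temperature: Number(env.LLM_TEMPERATURE_DEFAULT ?? 0.7),
    max_tokens: Number(env.LLM_MAX_TOKENS_DEFAULT ?? 2000),
  };
}
```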

Google Gemini Integration

  • Native Gemini Support: Added complete support for Google Gemini models
  • Gemini-Specific Configuration: Streamlined configuration options for Gemini models

Documentation

Configuration Guides

  • CLIENT_MCP_CONFIGURATION_EXAMPLES.md: New comprehensive guide with tested MCP client configurations for both GPT-5 and Google Gemini setups
  • README.md Updates: Enhanced installation instructions with clear separation between local and npm usage patterns
  • Future Refactoring Plans: Documented roadmap for GPT-5 parameter optimization in upcoming releases

User Experience Improvements

  • Simplified Setup: Clear distinction between local development (with .env) and npm usage (with MCP env variables)
  • Configuration Validation: Improved error messages and troubleshooting guidance
  • Timeout Configuration: Added explanations for timeout settings optimized for different model types

Technical Changes

Environment Provider Architecture

  • TripleMergedEnvProvider: New environment provider that merges MCP env, .env file, and system environment variables with proper priority
  • ProcessEnvProvider: Direct access to process.env for MCP client variables
  • DotenvProvider: Optional .env file loading with fallback behavior
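A minimal sketch of the three-source merge with the documented priority (MCP client env over .env file over system environment). The class name mirrors the release notes, but this implementation is illustrative, not the actual source.

```typescript
type EnvSource = Record<string, string | undefined>;

class TripleMergedEnvProvider {
  constructor(
    private mcpEnv: EnvSource,    // highest priority: from the MCP client config
    private dotenv: EnvSource,    // middle: values parsed from a local .env file
    private systemEnv: EnvSource, // lowest: inherited process environment
  ) {}

  // First source that defines the key wins.
  get(key: string): string | undefined {
    return this.mcpEnv[key] ?? this.dotenv[key] ?? this.systemEnv[key];
  }
}
```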

Backward Compatibility

  • Preserved Local Development: Existing .env file configurations remain fully functional
  • Graceful Degradation: System works with partial configurations and provides helpful error messages
  • Version String Updates: Updated version identifiers across all server components

Configuration Examples

GPT-5 Setup (New)

{
  "mcpServers": {
    "athena-protocol": {
      "command": "npx",
      "args": ["@n0zer0d4y/athena-protocol"],
      "env": {
        "DEFAULT_LLM_PROVIDER": "openai",
        "OPENAI_API_KEY": "your-key-here",
        "OPENAI_MODEL_DEFAULT": "gpt-5",
        "OPENAI_MAX_COMPLETION_TOKENS_DEFAULT": "8192",
        "OPENAI_VERBOSITY_DEFAULT": "medium",
        "OPENAI_REASONING_EFFORT_DEFAULT": "high",
        "LLM_TEMPERATURE_DEFAULT": "0.7",
        "LLM_MAX_TOKENS_DEFAULT": "2000",
        "LLM_TIMEOUT_DEFAULT": "30000"
      }
    }
  }
}

Google Gemini Setup (New)

{
  "mcpServers": {
    "athena-protocol": {
      "command": "npx",
      "args": ["@n0zer0d4y/athena-protocol"],
      "env": {
        "DEFAULT_LLM_PROVIDER": "google",
        "GOOGLE_API_KEY": "your-key-here",
        "GOOGLE_MODEL_DEFAULT": "gemini-2.5-flash",
        "LLM_TEMPERATURE_DEFAULT": "0.7",
        "LLM_MAX_TOKENS_DEFAULT": "2000",
        "LLM_TIMEOUT_DEFAULT": "30000"
      }
    }
  }
}

Known Limitations

GPT-5 Parameter Requirements

The current implementation requires the standard LLM parameters (LLM_TEMPERATURE_DEFAULT, LLM_MAX_TOKENS_DEFAULT, LLM_TIMEOUT_DEFAULT) to be set for GPT-5 models, even though GPT-5 does not use them. This is a temporary limitation that will be addressed in v0.3.0.

Migration Guide

For Existing Local Users

No changes required. Existing .env file configurations continue to work unchanged.

For New NPM Users

Use the MCP client configuration examples provided in docs/CLIENT_MCP_CONFIGURATION_EXAMPLES.md for immediate setup.

Testing

  • Comprehensive testing with both local and npx execution modes
  • Validation of all documented configuration examples
  • Cross-platform compatibility verification (Windows, macOS, Linux)

Summary

This release represents a significant improvement in user experience by eliminating the need for local file configuration when using Athena Protocol through MCP clients. The foundation is now set for future enhancements and additional provider support.

v0.1.0 - Foundation Release: Enhanced File Analysis & Production Ready

13 Nov 06:46


Initial Public Release

Athena Protocol MCP Server - A systematic thinking validation system for LLM coding agents, acting as an AI tech lead to validate approaches, analyze impacts, and optimize decision-making.

Key Features

  • 5 Core Validation Tools: thinking_validation, impact_analysis, assumption_checker, dependency_mapper, thinking_optimizer
  • Support for 14 LLM Providers: OpenAI, Anthropic, Google, Qwen, Groq, XAI, Mistral, Perplexity, OpenRouter, Ollama, ZAI, Azure, Bedrock, Vertex
  • Precision File Analysis: New analysisTargets parameter with 4 read modes (full, head, tail, range)
  • Docker Support: Production-ready containerization with multi-stage builds
  • npm Package: Ready for distribution via npm registry

Major Improvements

Enhanced File Analysis System

  • NEW: analysisTargets parameter with client-controlled precision
    • full mode: Read entire file when issue location is unclear
    • head mode: Read first N lines for imports/setup analysis
    • tail mode: Read last N lines for recent changes
    • range mode: Read specific line ranges for targeted analysis
    • Priority levels: critical, important, supplementary
  • REMOVED: Legacy filesToAnalyze parameter (previously limited to 100 lines)
  • Smart defaults: mode: "head" with lines: 50 when omitted
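The four read modes and the smart defaults can be sketched as a line-selection function. The `AnalysisTarget` shape and `selectLines` name are assumptions based on the parameter description above, not the project's actual types.

```typescript
interface AnalysisTarget {
  mode?: "full" | "head" | "tail" | "range";
  lines?: number; // used by head/tail; defaults to 50
  start?: number; // 1-based first line, used by range
  end?: number;   // 1-based last line (inclusive), used by range
}

function selectLines(fileLines: string[], target: AnalysisTarget): string[] {
  const mode = target.mode ?? "head"; // smart default: head
  const n = target.lines ?? 50;       // smart default: 50 lines
  switch (mode) {
    case "full":
      return fileLines;
    case "head":
      return fileLines.slice(0, n);
    case "tail":
      return fileLines.slice(-n);
    case "range":
      // slice is end-exclusive, so an inclusive 1-based range maps to
      // slice(start - 1, end).
      return fileLines.slice((target.start ?? 1) - 1, target.end ?? fileLines.length);
    default:
      return fileLines;
  }
}
```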

Tool Output Enhancements

thinking_optimizer now includes comprehensive tacticalPlan output:

  • Classification and grep-first guidance
  • Key findings and decision points
  • Implementation steps and testing plans
  • Risk mitigation and checkpoints
  • Value/effort analysis

Stability & Quality

  • FIXED: JSON parsing errors in MCP client communication
  • FIXED: Stdout contamination from debug logs
  • REMOVED: All informal emojis from code and logs
  • REMOVED: Unused web-search placeholder tool
  • IMPROVED: Error messages and validation feedback

npm & Docker Ready

npm Package Configuration

  • Package name: @n0zer0d4y/athena-protocol
  • Executable: athena-protocol
  • Optimized bundle with .npmignore
  • Pre-publish validation scripts

Docker Deployment

  • Multi-stage Alpine-based build (62MB final image)
  • Non-root user execution
  • Health check endpoint
  • Docker Compose orchestration

Documentation

  • Comprehensive README: Setup, configuration, tool usage, MCP client integration
  • Provider Guide: Detailed setup for all 14 LLM providers with latest models (GPT-5, Claude 4.5, Gemini 2.5, etc.)
  • MCP Client Schema: JSON configuration examples for Cursor and Claude Desktop
  • Important Notices:
    • Memory system marked as experimental (pending refactor)
    • Provider testing status (6 tested, 8 configured)

Known Limitations

  • Tested Providers: Only OpenAI, Google, ZAI, Mistral, OpenRouter, and Groq have been thoroughly tested
  • Memory System: Persistent memory creates thinking-memory.json in project root (refactor planned)

Testing

All test scripts updated to use new analysisTargets parameter:

  • test-live-mcp-tools.cjs
  • test-mcp-thinking-validation.js
  • test-all-tools.js
  • validate-tool-architecture.cjs

Requirements

  • Node.js 18 or higher
  • At least one LLM provider API key
  • TypeScript 5.x (for development)

Quick Start

npm installation:

npm install -g @n0zer0d4y/athena-protocol
athena-protocol

Docker deployment:

docker-compose up -d

Links

Commits in this Release

  • fix: restore and enhance thinking_optimizer tactical plan output
  • refactor: streamline file analysis with analysisTargets parameter
  • fix: resolve MCP client JSON parsing and validation errors
  • refactor: remove unused web-search placeholder tool
  • docs: update model listings and improve documentation quality
  • docs: add notice about memory system pending refactor
  • docs: add MCP client configuration section
  • docs: add provider testing status notice
  • chore: prepare package for npm publishing and Docker deployment

Full Changelog: Initial release