Releases: n0zer0d4y/athena-protocol
v0.2.0 - MCP Client Environment Variable Support
Overview
This release introduces comprehensive MCP client environment variable configuration support, enabling users to configure Athena Protocol directly through MCP client settings without requiring local .env files. This significantly improves the user experience for npm/npx installations.
New Features
MCP Client Configuration Support
- Environment Variable Priority System: Implemented hierarchical configuration loading where MCP client `env` variables take precedence over local `.env` files
- NPX Detection: Added intelligent detection of npx execution to skip unnecessary `.env` file loading
- Flexible Provider Configuration: Support for configuring any combination of LLM providers through MCP client environment variables
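The npx detection mentioned above could look roughly like the sketch below. The heuristic shown (inspecting `npm_command` and `npm_execpath`) is an assumption about how such detection is commonly done, not the project's actual code:

```typescript
// Plausible npx-detection heuristic (an assumption, not the project's code).
// npm v7+ sets npm_command to "exec" for `npx pkg` runs, and older npx
// versions point npm_execpath at an npx-cli.js script.
type Env = Record<string, string | undefined>;

function looksLikeNpx(env: Env): boolean {
  return env.npm_command === "exec" ||
    (env.npm_execpath ?? "").includes("npx");
}

// When npx is detected, local .env loading can simply be skipped.
function shouldLoadDotenv(env: Env): boolean {
  return !looksLikeNpx(env);
}
```

In an npx install there is typically no project-local `.env` file next to the executable, which is why skipping the load avoids spurious warnings.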
GPT-5 Model Support
- GPT-5 Specific Parameters: Added support for GPT-5 exclusive parameters:
  - `OPENAI_MAX_COMPLETION_TOKENS_DEFAULT` for controlling completion token limits
  - `OPENAI_VERBOSITY_DEFAULT` for verbosity control
  - `OPENAI_REASONING_EFFORT_DEFAULT` for reasoning effort configuration
- Model-Aware Configuration: System now recognizes GPT-5 models and applies appropriate parameter handling
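The model-aware handling can be sketched as a branch on the model name; the function and field names here are hypothetical, not the project's actual API:

```typescript
// Illustrative sketch of model-aware parameter selection (assumed shape).
type Env = Record<string, string | undefined>;

function paramsForModel(model: string, env: Env): { [k: string]: string | number } {
  if (model.startsWith("gpt-5")) {
    // GPT-5 models use their own parameter set.
    return {
      max_completion_tokens: Number(env.OPENAI_MAX_COMPLETION_TOKENS_DEFAULT ?? 8192),
      verbosity: env.OPENAI_VERBOSITY_DEFAULT ?? "medium",
      reasoning_effort: env.OPENAI_REASONING_EFFORT_DEFAULT ?? "medium",
    };
  }
  // Other models fall back to the standard LLM_* defaults.
  return {
    temperature: Number(env.LLM_TEMPERATURE_DEFAULT ?? 0.7),
    max_tokens: Number(env.LLM_MAX_TOKENS_DEFAULT ?? 2000),
  };
}
```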
Google Gemini Integration
- Native Gemini Support: Added complete support for Google Gemini models
- Gemini-Specific Configuration: Streamlined configuration options for Gemini models
Documentation
Configuration Guides
- CLIENT_MCP_CONFIGURATION_EXAMPLES.md: New comprehensive guide with tested MCP client configurations for both GPT-5 and Google Gemini setups
- README.md Updates: Enhanced installation instructions with clear separation between local and npm usage patterns
- Future Refactoring Plans: Documented roadmap for GPT-5 parameter optimization in upcoming releases
User Experience Improvements
- Simplified Setup: Clear distinction between local development (with `.env`) and npm usage (with MCP env variables)
- Configuration Validation: Improved error messages and troubleshooting guidance
- Timeout Configuration: Added explanations for timeout settings optimized for different model types
Technical Changes
Environment Provider Architecture
- TripleMergedEnvProvider: New environment provider that merges MCP env, `.env` file, and system environment variables with proper priority
- ProcessEnvProvider: Direct access to `process.env` for MCP client variables
- DotenvProvider: Optional `.env` file loading with fallback behavior
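The priority merge performed by TripleMergedEnvProvider can be sketched as a simple object spread. The exact ordering shown (MCP client env over `.env` file values over system environment) is inferred from the description above, not confirmed against the source:

```typescript
// Minimal sketch of a three-way priority merge (assumed ordering:
// MCP client env > .env file > system environment).
type EnvMap = Record<string, string>;

function mergeEnv(mcpEnv: EnvMap, dotenvFile: EnvMap, systemEnv: EnvMap): EnvMap {
  // Later spreads overwrite earlier ones, so spread order encodes priority.
  return { ...systemEnv, ...dotenvFile, ...mcpEnv };
}
```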
Backward Compatibility
- Preserved Local Development: Existing `.env` file configurations remain fully functional
- Graceful Degradation: System works with partial configurations and provides helpful error messages
- Version String Updates: Updated version identifiers across all server components
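The graceful fallback can be illustrated with a minimal `.env` reader that treats a missing file as an empty configuration rather than an error; this is a sketch of the idea, not the project's actual DotenvProvider:

```typescript
import * as fs from "node:fs";

// Minimal .env reader with graceful fallback: a missing file yields an
// empty map instead of throwing (sketch only, not the real DotenvProvider).
function loadDotenv(path = ".env"): Record<string, string> {
  if (!fs.existsSync(path)) return {};
  const out: Record<string, string> = {};
  for (const line of fs.readFileSync(path, "utf8").split("\n")) {
    if (line.trimStart().startsWith("#")) continue; // skip comments
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (m) out[m[1]] = m[2];
  }
  return out;
}
```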
Configuration Examples
GPT-5 Setup (New)
```json
{
  "mcpServers": {
    "athena-protocol": {
      "command": "npx",
      "args": ["@n0zer0d4y/athena-protocol"],
      "env": {
        "DEFAULT_LLM_PROVIDER": "openai",
        "OPENAI_API_KEY": "your-key-here",
        "OPENAI_MODEL_DEFAULT": "gpt-5",
        "OPENAI_MAX_COMPLETION_TOKENS_DEFAULT": "8192",
        "OPENAI_VERBOSITY_DEFAULT": "medium",
        "OPENAI_REASONING_EFFORT_DEFAULT": "high",
        "LLM_TEMPERATURE_DEFAULT": "0.7",
        "LLM_MAX_TOKENS_DEFAULT": "2000",
        "LLM_TIMEOUT_DEFAULT": "30000"
      }
    }
  }
}
```
Google Gemini Setup (New)
```json
{
  "mcpServers": {
    "athena-protocol": {
      "command": "npx",
      "args": ["@n0zer0d4y/athena-protocol"],
      "env": {
        "DEFAULT_LLM_PROVIDER": "google",
        "GOOGLE_API_KEY": "your-key-here",
        "GOOGLE_MODEL_DEFAULT": "gemini-2.5-flash",
        "LLM_TEMPERATURE_DEFAULT": "0.7",
        "LLM_MAX_TOKENS_DEFAULT": "2000",
        "LLM_TIMEOUT_DEFAULT": "30000"
      }
    }
  }
}
```
Known Limitations
GPT-5 Parameter Requirements
The current implementation requires the standard LLM parameters (`LLM_TEMPERATURE_DEFAULT`, `LLM_MAX_TOKENS_DEFAULT`, `LLM_TIMEOUT_DEFAULT`) to be set for GPT-5 models, even though GPT-5 itself does not use them. This is a temporary limitation that will be addressed in v0.3.0.
Migration Guide
For Existing Local Users
No changes required. Existing .env file configurations continue to work unchanged.
For New NPM Users
Use the MCP client configuration examples provided in `docs/CLIENT_MCP_CONFIGURATION_EXAMPLES.md` for immediate setup.
Testing
- Comprehensive testing with both local and npx execution modes
- Validation of all documented configuration examples
- Cross-platform compatibility verification (Windows, macOS, Linux)
Acknowledgments
This release represents a significant improvement in user experience by eliminating the need for local file configuration when using Athena Protocol through MCP clients. The foundation is now set for future enhancements and additional provider support.
v0.1.0 - Foundation Release: Enhanced File Analysis & Production Ready
Initial Public Release
Athena Protocol MCP Server - A systematic thinking validation system for LLM coding agents, acting as an AI tech lead to validate approaches, analyze impacts, and optimize decision-making.
Key Features
- 5 Core Validation Tools: `thinking_validation`, `impact_analysis`, `assumption_checker`, `dependency_mapper`, `thinking_optimizer`
- 14 LLM Provider Support: OpenAI, Anthropic, Google, Qwen, Groq, XAI, Mistral, Perplexity, OpenRouter, Ollama, ZAI, Azure, Bedrock, Vertex
- Precision File Analysis: New `analysisTargets` parameter with 4 read modes (`full`, `head`, `tail`, `range`)
- Docker Support: Production-ready containerization with multi-stage builds
- npm Package: Ready for distribution via npm registry
Major Improvements
Enhanced File Analysis System
- NEW: `analysisTargets` parameter with client-controlled precision
  - `full` mode: Read entire file when issue location is unclear
  - `head` mode: Read first N lines for imports/setup analysis
  - `tail` mode: Read last N lines for recent changes
  - `range` mode: Read specific line ranges for targeted analysis
  - Priority levels: `critical`, `important`, `supplementary`
- REMOVED: Legacy `filesToAnalyze` parameter (previously limited to 100 lines)
- Smart defaults: `mode: "head"` with `lines: 50` when omitted
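A hypothetical request fragment showing the four read modes side by side; the exact field names (`path`, `lines`, `range`) are assumptions based on the description above, not the documented schema:

```typescript
// Hypothetical analysisTargets payload; field names are assumptions.
const analysisTargets = [
  { path: "src/server.ts", mode: "full", priority: "critical" },
  { path: "src/config.ts", mode: "head", lines: 50, priority: "important" },
  { path: "logs/latest.log", mode: "tail", lines: 100, priority: "supplementary" },
  { path: "src/tools/index.ts", mode: "range", range: { start: 120, end: 180 }, priority: "important" },
];
```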
Tool Output Enhancements
`thinking_optimizer` now includes comprehensive `tacticalPlan` output:
- Classification and grep-first guidance
- Key findings and decision points
- Implementation steps and testing plans
- Risk mitigation and checkpoints
- Value/effort analysis
Stability & Quality
- FIXED: JSON parsing errors in MCP client communication
- FIXED: Stdout contamination from debug logs
- REMOVED: All informal emojis from code and logs
- REMOVED: Unused `web-search` placeholder tool
- IMPROVED: Error messages and validation feedback
npm & Docker Ready
npm Package Configuration
- Package name: `@n0zer0d4y/athena-protocol`
- Executable: `athena-protocol`
- Optimized bundle with `.npmignore`
- Pre-publish validation scripts
Docker Deployment
- Multi-stage Alpine-based build (62MB final image)
- Non-root user execution
- Health check endpoint
- Docker Compose orchestration
Documentation
- Comprehensive README: Setup, configuration, tool usage, MCP client integration
- Provider Guide: Detailed setup for all 14 LLM providers with latest models (GPT-5, Claude 4.5, Gemini 2.5, etc.)
- MCP Client Schema: JSON configuration examples for Cursor and Claude Desktop
- Important Notices:
- Memory system marked as experimental (pending refactor)
- Provider testing status (6 tested, 8 configured)
Known Limitations
- Tested Providers: Only OpenAI, Google, ZAI, Mistral, OpenRouter, and Groq have been thoroughly tested
- Memory System: Persistent memory creates `thinking-memory.json` in project root (refactor planned)
Testing
All test scripts updated to use the new `analysisTargets` parameter:
- `test-live-mcp-tools.cjs`
- `test-mcp-thinking-validation.js`
- `test-all-tools.js`
- `validate-tool-architecture.cjs`
Requirements
- Node.js 18 or higher
- At least one LLM provider API key
- TypeScript 5.x (for development)
Quick Start
npm installation:
```shell
npm install -g @n0zer0d4y/athena-protocol
athena-protocol
```
Docker deployment:
```shell
docker-compose up -d
```
Links
- GitHub Repository: https://github.com/n0zer0d4y/athena-protocol
- npm Package: https://www.npmjs.com/package/@n0zer0d4y/athena-protocol
- Issue Tracker: https://github.com/n0zer0d4y/athena-protocol/issues
Commits in this Release
- fix: restore and enhance thinking_optimizer tactical plan output
- refactor: streamline file analysis with analysisTargets parameter
- fix: resolve MCP client JSON parsing and validation errors
- refactor: remove unused web-search placeholder tool
- docs: update model listings and improve documentation quality
- docs: add notice about memory system pending refactor
- docs: add MCP client configuration section
- docs: add provider testing status notice
- chore: prepare package for npm publishing and Docker deployment
Full Changelog: Initial release