Releases: tang-vu/ContribAI
v6.4.1
See CHANGELOG.md for details.
v6.4.0
See CHANGELOG.md for details.
v6.3.0
See CHANGELOG.md for details.
v6.2.1
See CHANGELOG.md for details.
v6.2.0
Added
- Sprint 18 complete — Dependencies, benchmarks, binary optimization:
  - Criterion benchmark suite (5 benchmarks: AST extraction, framework detection, risk classification)
  - Rust dependabot (weekly automated dependency updates)
  - Test fixtures for benchmarking (Python/Rust/JavaScript samples)
Changed
- Dependencies updated:
  - tower 0.4 → 0.5 (compatible with axum 0.7)
  - Tree-sitter grammar audit (documented 0.23/0.24/0.25 compatibility)
- Binary size optimization: 34MB → 23MB (32% reduction)
- Moved `profile.release` to workspace root (eliminated warning)
- LTO + strip + opt-level=z all applied correctly
Stats
- Binary size: 34MB → 23MB (-32%)
- Benchmarks: 0 → 5 (criterion framework)
- Dependabot: Python+Actions → +Rust (weekly updates)
v6.1.0
Added
- Sprint 17 complete — Code quality & dead code removal:
  - Framework detection from imports (20+ frameworks: Django, React, Rails, etc.)
  - Copilot provider fully wired in all factory functions
  - Session dead code removed (commented out for future feature)
  - 5 new framework detection tests
Fixed
- 6 clippy warnings eliminated — Zero-warning strict lint:
  - Removed unused imports (`OpenOptions`, `std::io::Write`, `warn`, `CopilotProvider`)
  - Fixed `unwrap()` after `is_some()` → `if let Some(p) = path`
  - Derived `Default` for `PluginManager` (manual impl removed)
  - Test helper warnings suppressed with `#[allow(dead_code)]`
Stats
- Tests: 587 → 602 (+15)
- Clippy warnings: 6 → 0
- Dead code: Session module removed, Copilot wired
v6.0.0
Plugin System + Enterprise Mode + i18n
BREAKING CHANGE: Major version bump for plugin system architecture and enterprise features.
Plugin System
Extensible lifecycle hooks for pipeline events:
```yaml
plugins:
  - name: "slack-notifier"
    hooks: [on_pr_created, on_error]
    config:
      webhook_url: "https://hooks.slack.com/test"
      channel: "#devops"
  - name: "custom-audit"
    hooks: [on_analysis_complete, on_pr_merged]
```

Available hooks:
- `on_analysis_complete` — After code analysis finishes
- `on_pr_created` — After a PR is submitted
- `on_pr_merged` — After a PR is merged
- `on_pr_closed` — After a PR is closed (not merged)
- `on_error` — When a pipeline error occurs
Enterprise Mode
```yaml
enterprise:
  enabled: true
  sso_url: "https://sso.mycompany.com/saml"
  audit_log_path: "~/.contribai/logs/audit.log"
  max_daily_cost_usd: 100.0
  allowed_providers: ["copilot", "vertex"]
```

Features:
- SSO integration (SAML/OIDC ready)
- Audit log file path configuration
- Maximum daily cost limit in USD
- Allowed LLM providers whitelist
i18n (Internationalization)
Supported locales:
- English (`en`) — default
- Vietnamese (`vi`)
- Japanese (`ja`)
- Chinese Simplified (`zh-CN`)

Config:

```yaml
locale: "vi"  # or set LANG env var
```

Detected from:
- `locale` config field
- `LANG` environment variable
- Falls back to English
Tests
- 444 total tests (+11 from Sprint 16)
- 4 plugin tests (CRUD + dispatch)
- 7 i18n tests (locale detection + translations)
Binaries
- Linux x86_64: 35.69 MiB
- macOS aarch64: 32.98 MiB
- macOS x86_64: 34.69 MiB
- Windows x86_64: 33.58 MiB
v5.20.0
Client/Server Architecture + TUI Polish + Observability
Complete infrastructure upgrade: remote API access, session management, and structured logging.
Pipeline Server Mode
- `contribai serve` — starts pipeline server with REST API (default port: 9876)
- Bind to custom host/port: `contribai serve --host 0.0.0.0 --port 9876`
REST API Endpoints
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/api/health` | GET | Optional | Server health check |
| `/api/stats` | GET | Optional | Pipeline statistics |
| `/api/sessions` | GET | Required | List active sessions |
| `/api/sessions/create` | POST | Required | Create a new session |
| `/api/run` | POST | Required | Trigger pipeline run |
| `/api/run/target` | POST | Required | Trigger targeted repo analysis |
| `/metrics` | GET | Optional | Prometheus-format metrics |
Session Management
- Sessions table in SQLite for persistent session tracking
- Each session has: `id`, `name`, `mode` (plan/build), `status`, `created_at`
- Session API: create and list sessions via HTTP
- Session CLI: `contribai session list`, `contribai session create`
Structured JSON Logging
- New `core/logging.rs` module for observability
- `LogEntry` struct with: `timestamp`, `level`, `message`, `target`
- Configurable log level, format (json/text), and output file
- Prepares for OpenTelemetry integration
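As a hedged sketch of how these knobs might appear in config (the section and key names below are assumptions; the release only states that level, format, and output file are configurable):

```yaml
logging:                                   # hypothetical section name
  level: "info"                            # log level
  format: "json"                           # "json" or "text"
  file: "~/.contribai/logs/pipeline.log"   # output file (illustrative path)
```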
Binaries
- Linux x86_64: 35.67 MiB
- macOS aarch64: 32.92 MiB
- macOS x86_64: 34.64 MiB
- Windows x86_64: 33.48 MiB
v5.19.0
Agent Modes + Permissions + Small Model + Sessions
Complete feature pack: read-only analysis, rule-based access control, cost optimization, and undo support.
Agent Modes
- `plan` mode: read-only analysis, no code generation or PRs
- `build` mode: full PR flow (default)
- CLI: `contribai run --mode plan`
Rule-Based Permission System
- Granular file/shell access control with glob patterns (`*`, `**`, `?`)
- Actions: `allow` | `ask` | `deny`
- 6 permission types: `file_read`, `file_edit`, `file_create`, `file_delete`, `shell_command`, `pr_create`
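A minimal sketch of how such rules might be written in config (the `permissions` key and rule layout are assumptions; the actions, glob syntax, and permission types are from this release):

```yaml
permissions:                      # hypothetical top-level key
  file_edit:
    - pattern: "src/**/*.rs"      # glob patterns support *, **, ?
      action: allow
    - pattern: "**/.env"
      action: deny                # never touch secrets
  shell_command:
    - pattern: "cargo *"
      action: ask                 # prompt before running
```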
Snapshots + Undo
- Track file changes before/after generation
- `contribai undo` to roll back the last generated changes
- Separate snapshot database for undo support
Small Model Routing
- Dedicated cheap model for lightweight tasks (PR titles, commit messages, context compaction)
- Config: `llm.small_model: "gemini/gemini-3-flash-lite"`
- Automatic fallback if the small model fails to initialize
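In context, the setting would sit next to the main model selection; a sketch assuming illustrative provider and main-model values (only `llm.small_model` is documented in this release):

```yaml
llm:
  provider: "vertex"                          # illustrative
  model: "gemini-3-pro"                       # main model (illustrative)
  small_model: "gemini/gemini-3-flash-lite"   # cheap model for titles, commit messages, compaction
```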
Auto Context Compaction
- Summarize context when it exceeds 80% threshold
- Use small model for cost-efficient summarization
- `ContextBudget` tracker for real-time token usage monitoring
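If the threshold were exposed in config it might look like the fragment below; the key name is purely hypothetical, since the release only states an 80% trigger and small-model summarization:

```yaml
context:
  compact_threshold: 0.8   # hypothetical key: summarize when usage exceeds 80%
```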
Multi-Session Support
- `SessionManager` with `create`/`list`/`kill`/`fork`/`active_count`
- Each session has independent context and state
- 4 new tests for session CRUD operations
LSP Validation Config
- Prepare for LSP-based code validation
- Config: `sandbox.lsp_enabled`, `sandbox.lsp_timeout_secs`, `sandbox.lsp_accept_warnings`
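Putting the three documented keys together (the values shown are illustrative, not documented defaults):

```yaml
sandbox:
  lsp_enabled: true
  lsp_timeout_secs: 30        # illustrative value
  lsp_accept_warnings: false  # illustrative value
```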
Binaries
- Linux x86_64: 35.59 MiB
- macOS aarch64: 32.86 MiB
- macOS x86_64: 34.53 MiB
- Windows x86_64: 33.39 MiB
v5.18.0
Sprint 11: Auth Ecosystem
GitHub Copilot, Auto-Detection, Token Refresh, Fallback Chain.
GitHub Copilot Provider
- Exchange `gh auth` token for Copilot API access
- Access gpt-4o, claude-sonnet-4, gemini-2.5-pro via Copilot subscription
- No separate API key needed — just `gh auth login`
- Auto-refresh token before expiry (5-min TTL, 30s safety buffer)
Enhanced Login Flow
- `contribai login` now detects ALL available auth sources:
  - Copilot (via gh CLI)
  - Vertex AI (via gcloud)
  - API keys (Gemini, OpenAI, Anthropic)
  - Ollama (local)
Provider Fallback Chain
- Config: `llm.fallback: ["vertex/gemini-3-pro", "openai/gpt-4o"]`
- Auto-fallback on rate limit, API errors, token expiry
- `FallbackProvider` wrapper tries the primary first, then each fallback in order
- Logs which provider was used for each request
Config Example
```yaml
llm:
  provider: "copilot"
  model: "gpt-4o"
  fallback:
    - "vertex/gemini-3-pro"
    - "openai/gpt-4o"
```

Auth Detection Output

```
LLM Providers:
  Copilot:   ✅ Token detected (via gh CLI)
  Vertex AI: ✅ gcloud token OK (project: my-project)
  gemini:    ❌ gemini key missing
  Ollama:    ⚪ Not configured
```
Binaries
- Linux x86_64: 35.46 MiB
- macOS aarch64: 32.78 MiB
- macOS x86_64: 34.40 MiB
- Windows x86_64: 33.34 MiB