An AI-powered agent for automated IT infrastructure and code auditing, built on the Microsoft Agent Framework with GitHub Models.
This agent provides comprehensive auditing capabilities including:
- Security Analysis: Scan code for security vulnerabilities, hardcoded secrets, and unsafe patterns
- Code Quality: Analyze code complexity, maintainability, and adherence to best practices
- Compliance Checks: Validate compliance with organizational standards and industry regulations
- Dependency Auditing: Check for outdated or vulnerable dependencies
- Infrastructure Analysis: Audit infrastructure configurations for security and performance
- Architecture Review: Evaluate system architecture and design patterns
- Technical Debt Detection: Identify areas of technical debt and suggest improvements
The agent is built using:
- Microsoft Agent Framework: Flexible agent framework for building AI applications
- GitHub Models: Free-tier access to a range of AI models, including GPT-4.1-mini
- Custom Audit Tools: Specialized tools for different types of audits
- MCP Integration: Model Context Protocol support for extensibility
- Python 3.9 or higher
- GitHub Personal Access Token (PAT) for accessing GitHub Models
- Git (for code repository auditing)
- Clone the repository:

```bash
git clone <your-repo-url>
cd code-audit-agent
```

- Install dependencies:

```bash
# Note: The --pre flag is REQUIRED while Agent Framework is in preview
pip install agent-framework-azure-ai --pre
pip install -r requirements.txt
```

- Configure environment variables:

```bash
cp .env.example .env
# Edit .env and add your GitHub PAT token
```

To create a GitHub PAT:

- Go to https://github.com/settings/tokens
- Click "Generate new token (classic)"
- Select scopes (at minimum: `repo`, `read:org`)
- Copy the generated token into your `.env` file (see the example below)
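For example, a minimal `.env` might contain just the token; the exact variable name should match `.env.example` (`GITHUB_TOKEN` is assumed here):

```bash
# .env (never commit this file)
GITHUB_TOKEN=ghp_your_personal_access_token_here
```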
Basic usage:

```bash
python main.py
```

With a specific audit type:

```bash
python main.py --audit-type security --path /path/to/code
```

Interactive mode:

```bash
python main.py --interactive
```
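How these flags are wired up lives in `main.py` itself; purely as a hypothetical sketch (any argument names beyond those shown above are assumptions), the entry point might look roughly like this:

```python
# Hypothetical sketch of a CLI entry point; the real main.py may differ.
import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run the code audit agent")
    parser.add_argument("--audit-type", default="security",
                        help="Type of audit to run, e.g. security, quality, compliance")
    parser.add_argument("--path", default=".", help="Path to the code to audit")
    parser.add_argument("--interactive", action="store_true",
                        help="Start an interactive session")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    prompt = f"Perform a {args.audit_type} audit on {args.path}"
    print("Entering interactive mode..." if args.interactive else prompt)
```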
You can also drive the agent directly from Python. A security audit example:

```python
import asyncio

from audit_agent import create_audit_agent

async def run_security_audit():
    agent = create_audit_agent()
    thread = agent.get_new_thread()
    async for chunk in agent.run_stream(
        "Perform a security audit on the current codebase, focusing on authentication and data handling",
        thread=thread
    ):
        if chunk.text:
            print(chunk.text, end="", flush=True)

asyncio.run(run_security_audit())
```

A code quality review:

```python
async def run_quality_review():
    agent = create_audit_agent()
    thread = agent.get_new_thread()
    async for chunk in agent.run_stream(
        "Review the code quality in src/ directory and provide recommendations",
        thread=thread
    ):
        if chunk.text:
            print(chunk.text, end="", flush=True)
```

A dependency audit:

```python
async def run_dependency_audit():
    agent = create_audit_agent()
    thread = agent.get_new_thread()
    async for chunk in agent.run_stream(
        "Check all dependencies for known vulnerabilities and suggest updates",
        thread=thread
    ):
        if chunk.text:
            print(chunk.text, end="", flush=True)
```

The agent has access to the following audit tools:
Security:
- `scan_for_secrets`: Detect hardcoded credentials and API keys
- `check_security_headers`: Validate HTTP security headers
- `analyze_authentication`: Review authentication mechanisms
- `check_sql_injection`: Identify potential SQL injection vulnerabilities

Code quality:
- `calculate_complexity`: Measure cyclomatic complexity
- `check_code_style`: Validate code style and formatting
- `analyze_test_coverage`: Calculate and report test coverage
- `detect_code_smells`: Identify common code anti-patterns

Compliance:
- `check_license_compliance`: Verify license compatibility
- `validate_gdpr_compliance`: Check GDPR requirements
- `audit_logging`: Review logging practices

Infrastructure:
- `scan_dockerfile`: Audit Docker configurations
- `check_cloud_config`: Review cloud infrastructure settings
- `analyze_network_security`: Evaluate network security posture
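For illustration, a minimal sketch of what a secret-scanning tool in this style might look like (hypothetical; the actual `scan_for_secrets` lives in `tools/security_tools.py` and may differ):

```python
import re
from pathlib import Path
from typing import Annotated

# Hypothetical patterns; real tooling would use a broader, tested rule set.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # classic GitHub PAT shape
]

def scan_for_secrets(
    path: Annotated[str, "Directory to scan (recursively)"],
) -> str:
    """Report lines that look like hardcoded credentials."""
    findings = []
    for file in Path(path).rglob("*.py"):
        for lineno, line in enumerate(file.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{file}:{lineno}: possible hardcoded secret")
    return "\n".join(findings) or "No obvious secrets found."
```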
The agent uses GitHub Models by default. You can change the model in `config.py`:

```python
# Available models (via GitHub Models):
# - openai/gpt-4.1-mini (default) - Best balance of quality and cost
# - openai/gpt-4.1 - Higher quality, more expensive
# - openai/gpt-5-mini - Latest lightweight model
# - microsoft/phi-4-mini-instruct - Efficient small model
# - meta/llama-3.3-70b-instruct - Open source alternative
MODEL_ID = "openai/gpt-4.1-mini"Customize audit behavior in config.py:
Customize audit behavior in `config.py`:

```python
AUDIT_CONFIG = {
    "max_file_size": 1_000_000,  # 1 MB
    "excluded_dirs": [".git", "node_modules", "__pycache__", "venv"],
    "security_patterns": [...],
    "complexity_threshold": 10,
    "test_coverage_minimum": 80,
}
```
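As a sketch of how an audit tool could honour these settings while walking a repository (illustrative only; the `iter_auditable_files` helper is not part of the project):

```python
from pathlib import Path

from config import AUDIT_CONFIG

def iter_auditable_files(root: str):
    """Yield files that pass the size and directory filters in AUDIT_CONFIG."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if any(part in AUDIT_CONFIG["excluded_dirs"] for part in path.parts):
            continue
        if path.stat().st_size > AUDIT_CONFIG["max_file_size"]:
            continue
        yield path
```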
Project structure:

```text
code-audit-agent/
├── README.md               # This file
├── requirements.txt        # Python dependencies
├── .env.example            # Environment variables template
├── config.py               # Configuration settings
├── main.py                 # Main entry point
├── audit_agent.py          # Agent creation and setup
├── tools/                  # Audit tools
│   ├── __init__.py
│   ├── security_tools.py   # Security scanning tools
│   ├── quality_tools.py    # Code quality tools
│   ├── compliance_tools.py # Compliance checking tools
│   └── infra_tools.py      # Infrastructure audit tools
├── examples/               # Usage examples
│   ├── basic_audit.py
│   ├── security_scan.py
│   └── interactive_mode.py
└── tests/                  # Test suite
    ├── test_agent.py
    └── test_tools.py
```
The agent maintains context across multiple queries when you reuse the same thread:

```python
from audit_agent import create_audit_agent

async def multi_turn_audit():
    agent = create_audit_agent()
    thread = agent.get_new_thread()

    # First query
    await agent.run("Audit the authentication system", thread=thread)

    # Follow-up query (maintains context)
    await agent.run("What are the top 3 most critical issues?", thread=thread)

    # Additional follow-up (run() awaits the full response; run_stream(), shown above, streams it)
    await agent.run("Generate remediation steps for issue #1", thread=thread)
```

Add your own audit tools:
```python
from typing import Annotated

def custom_audit_tool(
    target: Annotated[str, "The target to audit"],
    severity: Annotated[str, "Severity level: low, medium, high"] = "medium"
) -> str:
    """Your custom audit logic"""
    # Implementation
    return f"Audit results for {target}"

# Add to agent
agent = ChatAgent(
    # ... other config
    tools=[custom_audit_tool, *existing_tools]
)
```

Extend with Model Context Protocol (MCP) tools:
```python
from agent_framework import MCPStdioTool

mcp_tools = [
    MCPStdioTool(
        name="Custom MCP Tool",
        description="Your MCP tool description",
        command="npx",
        args=["-y", "your-mcp-package"]
    )
]

agent = ChatAgent(
    # ... other config
    tools=[*audit_tools, *mcp_tools]
)
```

For debugging and monitoring, you can add tracing:
```python
# See tracing documentation for setup
# This enables visualization of agent execution flow
```
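As a generic, framework-agnostic example (assuming the `opentelemetry-sdk` package is installed; this is not the Agent Framework's own tracing API), you could wrap an audit run in a span and print it to the console:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to stdout; swap the exporter for your tracing backend.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("code-audit-agent")

with tracer.start_as_current_span("security-audit"):
    # e.g. asyncio.run(run_security_audit()) from the examples above
    pass
```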
To evaluate agent performance:

```python
# See evaluation documentation for setup
# Run agent against test datasets to measure quality
```
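A minimal, hypothetical harness might run the agent over a few known-bad fixtures and check that expected findings appear in the output (the fixture paths and expected substrings below are illustrative only):

```python
import asyncio

from audit_agent import create_audit_agent

# Hypothetical cases: (prompt, substring expected in a good answer)
CASES = [
    ("Scan tests/fixtures/leaky_config.py for hardcoded secrets", "secret"),
    ("Check requirements.txt for known vulnerable dependencies", "vulnerab"),
]

async def evaluate() -> None:
    agent = create_audit_agent()
    passed = 0
    for prompt, expected in CASES:
        thread = agent.get_new_thread()
        output = []
        async for chunk in agent.run_stream(prompt, thread=thread):
            if chunk.text:
                output.append(chunk.text)
        if expected.lower() in "".join(output).lower():
            passed += 1
    print(f"{passed}/{len(CASES)} evaluation cases passed")

asyncio.run(evaluate())
```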
Why GitHub Models:

- Free to Start: No charge until hitting rate limits
- Quick Setup: Just need a GitHub PAT token
- Model Flexibility: Switch between models with the same API
- Production Ready: Same models available on Microsoft Foundry for scaling
Common issues:

Import Error: No module named 'agent_framework'
- Make sure you installed with the `--pre` flag: `pip install agent-framework-azure-ai --pre`
Authentication Error
- Verify your GitHub PAT token is valid (see the snippet below)
- Ensure the token has the proper scopes (`repo`, `read:org`)
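A quick way to sanity-check the PAT itself, independently of the agent, is to call the GitHub API with it (assumes the `requests` package and that the token is exported as `GITHUB_TOKEN`):

```python
import os

import requests

token = os.environ["GITHUB_TOKEN"]  # match the variable name used in your .env
resp = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(resp.status_code)  # 200 means the token is valid; 401 means it is not
```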
Rate Limits
- GitHub Models has free-tier rate limits
- For production, consider Microsoft Foundry models
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new tools
- Submit a pull request
MIT License - See LICENSE file for details
For issues and questions:
- Create an issue in the repository
- Check existing documentation
- Review example code in `examples/`