Version: 0.1.0
Date: 2026-02-05
This guide walks you through creating, publishing, and installing AAM packages with practical examples.
- Quick Start (Local)
- Creating a Package from an Existing Project
- Creating a Package from Scratch
- Publishing a Package
- Installing a Package
- Adding Dependencies
- Complete Example: Building a Code Review Package
- Dist-Tags
- Governance & Policy Gates
- Quality: Tests & Evals
- Portable Bundles
- MCP Server Integration
- Environment Diagnostics (aam doctor)
- Cross-Platform Conversion (aam convert)
flowchart LR
subgraph Author["Package Author"]
direction TB
subgraph NewPkg["From Scratch"]
Init["aam pkg init"]
CreateNew["Create Artifacts<br/>skills/, agents/,<br/>prompts/, instructions/"]
end
subgraph ExistingPkg["From Existing Project"]
CreatePkg["aam pkg create<br/><i>autodetect + select</i>"]
end
Validate["aam pkg validate"]
Pack["aam pkg pack"]
Publish["aam pkg publish --sign"]
end
subgraph Registry["Registry"]
Store[(Package<br/>Storage)]
end
subgraph Consumer["Package Consumer"]
Search["aam search"]
Install["aam install"]
Deploy["Auto-deploy to<br/>target platforms"]
end
Init --> CreateNew
CreateNew --> Validate
CreatePkg --> Validate
Validate --> Pack
Pack --> Publish
Publish --> Store
Store --> Search
Search --> Install
Install --> Deploy
style Init fill:#e3f2fd
style CreatePkg fill:#e3f2fd
style Publish fill:#fff3e0
style Store fill:#f3e5f5
style Install fill:#e8f5e9
Get up and running in under 5 minutes: no server, no Docker, no database required.
# Install AAM
pip install aam
# Verify installation
aam --version
# aam 0.1.0
# Set up AAM (detects platform, registers default sources)
aam init
# or non-interactive: aam init --yes

# 1. Initialize AAM (one-time setup)
aam init
# 2. Create a local registry
aam registry init ~/my-packages
aam registry add local file:///home/user/my-packages --default
# 3. Create a package from an existing project
aam pkg create
aam pkg validate
aam pkg pack
# 4. Publish to local registry (no server needed)
aam pkg publish --registry local
# 5. Install from local registry
cd another-project/
aam search "my-package"
aam install @author/my-package
# That's it! No Docker, no Postgres, no server process.

# Set up AAM client (platform, default sources)
aam init
# Create a package from an existing project (autodetect skills/agents/etc.)
aam pkg create
# Or create a new package from scratch (unscoped)
aam pkg init my-package
# Or create a scoped package
aam pkg init @author/my-package
# Validate before publishing
aam pkg validate
# Build and publish
aam pkg pack
aam pkg publish
# Install a package (scoped)
aam install @author/my-package
# Install a package (unscoped)
aam install my-package
# Check for updates from sources
aam outdated
# Upgrade outdated packages
aam upgrade
# Browse available source artifacts
aam list --available

You have been working on a project and organically created skills, agents, or instructions, perhaps directly in .cursor/skills/, .cursor/rules/, or other platform-specific locations. Now you want to bundle them into an AAM package so they can be shared, versioned, and installed elsewhere.
The aam pkg create command handles this workflow.
Imagine your project looks like this:
my-project/
├── .cursor/
│   ├── skills/
│   │   ├── code-reviewer/
│   │   │   ├── SKILL.md
│   │   │   └── scripts/
│   │   │       └── analyze.py
│   │   └── deploy-helper/
│   │       └── SKILL.md
│   ├── rules/
│   │   ├── agent-security-auditor.mdc
│   │   └── python-standards.mdc
│   └── prompts/
│       └── refactor-template.md
├── CLAUDE.md
├── src/
│   └── ... (your application code)
└── ... (other project files)
These skills and agents are useful, but they're trapped in your local project. There's no aam.yaml, no package structure: just files you created while working.
Run aam pkg create from your project root:
$ cd my-project/
$ aam pkg create
Scanning for artifacts not managed by AAM...
Found 5 artifacts:
Skills (2):
[x] 1. code-reviewer .cursor/skills/code-reviewer/SKILL.md
[x] 2. deploy-helper .cursor/skills/deploy-helper/SKILL.md
Agents (1):
[x] 3. security-auditor .cursor/rules/agent-security-auditor.mdc
Instructions (1):
[x] 4. python-standards .cursor/rules/python-standards.mdc
Prompts (1):
[x] 5. refactor-template .cursor/prompts/refactor-template.md
Toggle selection with [space], confirm with [enter].
Select/deselect all: [a] | Invert selection: [i]

Select the artifacts you want to include (or deselect ones that are work-in-progress), then fill in the package metadata:
Selected 5 artifacts. Continue? [Y/n] y
Package name [my-project]: my-toolkit
Version [1.0.0]:
Description: Code review and deployment toolkit
Author [spazy]:
License [Apache-2.0]:
How should files be organized?
(c) Copy into AAM package structure
(r) Reference in-place (keep files where they are)
[default: c]
Creating package...
✓ Created aam.yaml
✓ Copied .cursor/skills/code-reviewer/ → skills/code-reviewer/
✓ Copied .cursor/skills/deploy-helper/ → skills/deploy-helper/
✓ Converted .cursor/rules/agent-security-auditor.mdc → agents/security-auditor/
✓ Converted .cursor/rules/python-standards.mdc → instructions/python-standards.md
✓ Copied .cursor/prompts/refactor-template.md → prompts/refactor-template.md
✓ Package created: my-toolkit@1.0.0
  5 artifacts (2 skills, 1 agent, 1 instruction, 1 prompt)
Next steps:
  aam pkg validate  → verify the package is well-formed
  aam pkg pack      → build distributable .aam archive
  aam pkg publish   → publish to registry

AAM scans for known artifact patterns across all supported platforms:
| Artifact Type | Detection Patterns |
|---|---|
| Skills | **/SKILL.md (parent directory = skill), .cursor/skills/*/, .codex/skills/*/, skills/*/ |
| Agents | **/agent.yaml, agents/*/, .cursor/rules/agent-*.mdc |
| Prompts | prompts/*.md, .cursor/prompts/*.md, .github/prompts/*.md |
| Instructions | instructions/*.md, .cursor/rules/*.mdc (non-agent), CLAUDE.md, AGENTS.md, .github/copilot-instructions.md |
The following directories are always excluded from scanning:
- `.aam/packages/` (already-installed AAM packages)
- `node_modules/`, `.venv/`, `__pycache__/`, `.git/`
Files already declared in an existing aam.yaml are also excluded.
When creating the package, you choose how files are organized:
Copies detected artifacts into the standard AAM directory structure. Your original files remain untouched.
$ aam pkg create --organize copy

Result:
my-project/
├── aam.yaml              # NEW: package manifest
├── skills/               # NEW: copied from .cursor/skills/
│   ├── code-reviewer/
│   │   ├── SKILL.md
│   │   └── scripts/
│   │       └── analyze.py
│   └── deploy-helper/
│       └── SKILL.md
├── agents/               # NEW: converted from .cursor/rules/
│   └── security-auditor/
│       ├── agent.yaml
│       └── system-prompt.md
├── instructions/         # NEW: converted from .cursor/rules/
│   └── python-standards.md
├── prompts/              # NEW: copied from .cursor/prompts/
│   └── refactor-template.md
├── .cursor/              # UNTOUCHED: original files remain
│   └── ...
├── src/
└── ...
Does not copy files. The aam.yaml points to artifacts at their current locations. Best for local-only packages or when you want to keep using platform-specific paths.
$ aam pkg create --organize reference

Result (only aam.yaml is created):
# aam.yaml (reference mode)
artifacts:
skills:
- name: code-reviewer
path: .cursor/skills/code-reviewer/
description: "..."
- name: deploy-helper
path: .cursor/skills/deploy-helper/
      description: "..."

Moves files into the AAM structure and removes the originals. Use with caution: this changes your project layout.
$ aam pkg create --organize move

Artifacts found in platform-specific formats are automatically converted to AAM-native format:
Cursor .mdc Rules → Instructions:
A .cursor/rules/python-standards.mdc file like:
---
description: "Python coding standards"
globs: "**/*.py"
alwaysApply: false
---
# Python Standards
...

Gets converted to instructions/python-standards.md:
---
name: python-standards
description: "Python coding standards"
scope: project
globs: "**/*.py"
---
# Python Standards
...

Cursor Agent Rules → Agent Definitions:
A .cursor/rules/agent-security-auditor.mdc file is converted into:
agents/security-auditor/
├── agent.yaml         # Generated agent definition
└── system-prompt.md   # Extracted from the .mdc rule body
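The rule-to-instruction conversion shown earlier amounts to rewriting the frontmatter: `description` and `globs` carry over, a `name` is derived from the filename, and a `scope` is added. A minimal sketch of that mapping (hypothetical, not AAM's actual converter):

```python
def convert_mdc_frontmatter(filename: str, mdc_meta: dict) -> dict:
    """Map Cursor .mdc frontmatter keys to AAM instruction frontmatter."""
    name = filename.removesuffix(".mdc")  # python-standards.mdc -> python-standards
    converted = {
        "name": name,
        "description": mdc_meta.get("description", ""),
        "scope": "project",  # assumption: AAM instructions default to project scope
    }
    if "globs" in mdc_meta:
        converted["globs"] = mdc_meta["globs"]
    # Cursor-specific keys such as alwaysApply have no AAM equivalent; drop them.
    return converted
```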
If autodetection misses files, include them manually:
# Include a specific file as an instruction
$ aam pkg create --include docs/coding-guide.md --include-as instruction
# Include a directory as a skill
$ aam pkg create --include ./tools/linter/ --include-as skill

For CI/CD or scripting, use flags to skip prompts:
$ aam pkg create \
--all \
--name my-toolkit \
--version 1.0.0 \
--description "My toolkit" \
--author spazy \
--organize copy \
  --yes

See what would be detected and created without writing any files:
$ aam pkg create --dry-run
Scanning for artifacts not managed by AAM...
Found 5 artifacts:
skill: code-reviewer .cursor/skills/code-reviewer/SKILL.md
skill: deploy-helper .cursor/skills/deploy-helper/SKILL.md
agent: security-auditor .cursor/rules/agent-security-auditor.mdc
instruction: python-standards .cursor/rules/python-standards.mdc
prompt: refactor-template .cursor/prompts/refactor-template.md
Would create:
aam.yaml
skills/code-reviewer/ (copy from .cursor/skills/code-reviewer/)
skills/deploy-helper/ (copy from .cursor/skills/deploy-helper/)
agents/security-auditor/ (convert from .cursor/rules/agent-security-auditor.mdc)
instructions/python-standards.md (convert from .cursor/rules/python-standards.mdc)
prompts/refactor-template.md (copy from .cursor/prompts/refactor-template.md)
[Dry run: no files written]

Use aam pkg init to create a new package interactively:
$ mkdir python-best-practices && cd python-best-practices
$ aam pkg init
Package name [python-best-practices]: @author/python-best-practices
Version [1.0.0]:
Description: Python coding standards and best practices for AI agents
Author: author
License [Apache-2.0]:
What artifacts will this package contain?
[x] Skills
[x] Agents
[x] Prompts
[x] Instructions
Which platforms should this package support?
[x] Cursor
[x] Claude
[x] GitHub Copilot
[ ] Codex
Created python-best-practices/
├── aam.yaml
├── agents/
├── skills/
├── prompts/
└── instructions/

After initialization, your package looks like this:
python-best-practices/
├── aam.yaml          # Package manifest (required)
├── agents/           # Agent definitions
├── skills/           # Skill definitions
├── prompts/          # Prompt templates
└── instructions/     # Platform instructions/rules
The generated aam.yaml is the heart of your package:
# aam.yaml
name: "@author/python-best-practices" # Scoped package name
version: 1.0.0
description: "Python coding standards and best practices for AI agents"
author: author
license: Apache-2.0
repository: https://github.com/author/python-best-practices
# Declare what this package provides
artifacts:
skills: []
agents: []
prompts: []
instructions: []
# Dependencies on other AAM packages (both scoped and unscoped supported)
dependencies: {}
# Platform-specific configuration
platforms:
cursor:
skill_scope: project
claude:
merge_instructions: true
copilot:
    merge_instructions: true

A skill is more than just a SKILL.md file: it can include scripts, templates, references, and assets that the AI agent can use.
flowchart TB
  subgraph Skill["skills/python-reviewer/"]
    SKILL["SKILL.md<br/><i>Required: Instructions & metadata</i>"]
    subgraph Scripts["scripts/"]
      S1["analyze.py"]
      S2["complexity.py"]
    end
    subgraph Templates["templates/"]
      T1["report.md.j2"]
      T2["summary.j2"]
    end
    subgraph References["references/"]
      R1["style-guide.md"]
      R2["best-practices.md"]
    end
    subgraph Assets["assets/"]
      A1["logo.png"]
      A2["config.json"]
    end
  end
  Agent["AI Agent"]
Agent -->|"reads instructions"| SKILL
Agent -->|"executes"| Scripts
Agent -->|"formats output"| Templates
Agent -->|"loads on demand"| References
Agent -->|"uses in output"| Assets
style SKILL fill:#e8f5e9,stroke:#2e7d32
style Scripts fill:#fff3e0,stroke:#ef6c00
style Templates fill:#e3f2fd,stroke:#1565c0
style References fill:#fce4ec,stroke:#c2185b
style Assets fill:#f3e5f5,stroke:#7b1fa2
Complete Skill Directory Structure:
skills/<skill-name>/
├── SKILL.md           # Required: frontmatter + instructions
├── scripts/           # Optional: executable scripts
│   ├── analyze.py
│   └── generate_report.sh
├── templates/         # Optional: output templates
│   ├── report.md.j2
│   └── summary.html.j2
├── references/        # Optional: documentation loaded on demand
│   ├── style-guide.md
│   └── best-practices.md
└── assets/            # Optional: files used in output
    ├── logo.png
    └── styles.css
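Only `SKILL.md` is required; everything else is optional. The structural check this layout implies can be sketched in a few lines (an illustration, not AAM's actual validator; the "unexpected directory" rule is an assumption for this sketch):

```python
from pathlib import Path

REQUIRED_FILE = "SKILL.md"
OPTIONAL_DIRS = ("scripts", "templates", "references", "assets")

def check_skill_dir(skill_dir: Path) -> list[str]:
    """Return a list of problems found in a skill directory (empty list = OK)."""
    problems = []
    skill_md = skill_dir / REQUIRED_FILE
    if not skill_md.is_file():
        problems.append(f"missing required {REQUIRED_FILE}")
    elif not skill_md.read_text().startswith("---"):
        problems.append(f"{REQUIRED_FILE} has no YAML frontmatter")
    for entry in skill_dir.iterdir():
        if entry.is_dir() and entry.name not in OPTIONAL_DIRS:
            problems.append(f"unexpected directory: {entry.name}")
    return problems
```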
Create a complete skill with all components:
mkdir -p skills/python-reviewer/{scripts,templates,references,assets}

**1. SKILL.md – Main skill definition:**
skills/python-reviewer/SKILL.md:
---
name: python-reviewer
description: Review Python code for best practices, PEP 8 compliance, and common issues. Use when asked to review Python files or suggest improvements.
---
# Python Code Reviewer
## When to Use
- User asks to review Python code
- User asks about Python best practices
- User wants to improve code quality
## Available Scripts
This skill includes executable scripts:
- `scripts/analyze.py` – Static analysis script that checks for common issues
- `scripts/complexity.py` – Calculate cyclomatic complexity metrics
Run scripts when deeper analysis is needed:
```bash
python skills/python-reviewer/scripts/analyze.py <file.py>
```

Use these templates for consistent output:
- `templates/review-report.md.j2` – Full review report template
- `templates/quick-summary.md.j2` – Brief summary template
Load these for detailed guidance:
- Style Guide – PEP 8 quick reference
- Best Practices – Python idioms and patterns
- Type Hints Guide – Typing best practices
## Review Checklist

1. **PEP 8 Compliance**
   - Line length (max 88 for Black, 79 for strict PEP 8)
   - Naming conventions (snake_case for functions/variables, PascalCase for classes)
   - Import ordering (standard library, third-party, local)
2. **Type Hints**
   - Function parameters and return types
   - Variable annotations where helpful
   - Use the `typing` module for complex types
3. **Documentation**
   - Module docstrings
   - Function/method docstrings (Google or NumPy style)
   - Inline comments for complex logic
4. **Error Handling**
   - Specific exception types (not bare `except:`)
   - Context managers for resources
   - Proper error messages
5. **Performance**
   - List comprehensions over loops where appropriate
   - Generator expressions for large datasets
   - Avoid premature optimization
Use the templates/review-report.md.j2 template, or structure feedback as:
## Code Review: [filename]
### Summary
[1-2 sentence overview]
### Issues Found
#### Critical
- [issue with line reference]
#### Suggestions
- [improvement with example]
### Positive Aspects
- [what's done well]
**2. Scripts – Executable tools:**
`skills/python-reviewer/scripts/analyze.py`:
```python
#!/usr/bin/env python3
"""
Static analysis script for Python code review.
Usage: python analyze.py <file.py> [--format json|text]
"""
import ast
import sys
import json
from pathlib import Path
from dataclasses import dataclass, asdict
@dataclass
class Issue:
line: int
column: int
severity: str # critical, high, medium, low
category: str
message: str
suggestion: str | None = None
def analyze_file(filepath: Path) -> list[Issue]:
"""Analyze a Python file for common issues."""
issues = []
content = filepath.read_text()
# Check line lengths
for i, line in enumerate(content.splitlines(), 1):
if len(line) > 88:
issues.append(Issue(
line=i,
column=89,
severity="low",
category="formatting",
message=f"Line exceeds 88 characters ({len(line)} chars)",
suggestion="Break line or use Black formatter"
))
# Parse AST for deeper analysis
tree = ast.parse(content)
for node in ast.walk(tree):
# Check for bare except
if isinstance(node, ast.ExceptHandler) and node.type is None:
issues.append(Issue(
line=node.lineno,
column=node.col_offset,
severity="high",
category="error-handling",
message="Bare 'except:' clause catches all exceptions",
suggestion="Specify exception type: except Exception:"
))
# Check for missing docstrings
if isinstance(node, ast.FunctionDef):
if not ast.get_docstring(node):
issues.append(Issue(
line=node.lineno,
column=node.col_offset,
severity="medium",
category="documentation",
message=f"Function '{node.name}' missing docstring",
suggestion="Add docstring describing purpose and parameters"
))
return issues
def main():
if len(sys.argv) < 2:
print("Usage: python analyze.py <file.py> [--format json|text]")
sys.exit(1)
filepath = Path(sys.argv[1])
output_format = "text"
if "--format" in sys.argv:
idx = sys.argv.index("--format")
output_format = sys.argv[idx + 1]
issues = analyze_file(filepath)
if output_format == "json":
print(json.dumps([asdict(i) for i in issues], indent=2))
else:
for issue in issues:
print(f"{filepath}:{issue.line}:{issue.column} [{issue.severity}] {issue.message}")
if __name__ == "__main__":
    main()
```
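As a standalone sanity check of the bare-`except` rule the script enforces, the same AST walk can be exercised on an inline snippet (a minimal sketch, independent of analyze.py):

```python
import ast

# A small sample containing exactly one bare except clause.
SNIPPET = """
try:
    risky()
except:
    pass
"""

def count_bare_excepts(source: str) -> int:
    """Count except handlers that specify no exception type."""
    tree = ast.parse(source)
    return sum(
        1
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    )

print(count_bare_excepts(SNIPPET))  # -> 1
```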
skills/python-reviewer/scripts/complexity.py:
#!/usr/bin/env python3
"""
Calculate cyclomatic complexity for Python functions.
Usage: python complexity.py <file.py>
"""
import ast
import sys
from pathlib import Path
class ComplexityVisitor(ast.NodeVisitor):
"""Calculate cyclomatic complexity of functions."""
def __init__(self):
self.results = []
def visit_FunctionDef(self, node):
complexity = self._calculate_complexity(node)
self.results.append({
"name": node.name,
"line": node.lineno,
"complexity": complexity,
"rating": self._rate_complexity(complexity)
})
self.generic_visit(node)
def _calculate_complexity(self, node) -> int:
"""Count decision points."""
complexity = 1 # Base complexity
for child in ast.walk(node):
if isinstance(child, (ast.If, ast.While, ast.For, ast.ExceptHandler)):
complexity += 1
elif isinstance(child, ast.BoolOp):
complexity += len(child.values) - 1
elif isinstance(child, ast.comprehension):
complexity += 1
return complexity
def _rate_complexity(self, complexity: int) -> str:
if complexity <= 5:
return "low (good)"
elif complexity <= 10:
return "moderate"
elif complexity <= 20:
return "high (consider refactoring)"
else:
return "very high (refactor recommended)"
def main():
filepath = Path(sys.argv[1])
tree = ast.parse(filepath.read_text())
visitor = ComplexityVisitor()
visitor.visit(tree)
print(f"Complexity Analysis: {filepath}\n")
print(f"{'Function':<30} {'Line':<6} {'Complexity':<12} {'Rating'}")
print("-" * 70)
for result in visitor.results:
print(f"{result['name']:<30} {result['line']:<6} {result['complexity']:<12} {result['rating']}")
if __name__ == "__main__":
    main()

**3. Templates – Output formatting:**
skills/python-reviewer/templates/review-report.md.j2:
# Code Review Report
**File:** {{ filename }}
**Reviewed:** {{ timestamp }}
**Reviewer:** AI Code Reviewer
---
## Summary
{{ summary }}
**Overall Score:** {{ score }}/10
---
## Issues Found
{% if critical_issues %}
### Critical ({{ critical_issues | length }})
{% for issue in critical_issues %}
- **Line {{ issue.line }}**: {{ issue.message }}
- Suggestion: {{ issue.suggestion }}
{% endfor %}
{% endif %}
{% if high_issues %}
### High Priority ({{ high_issues | length }})
{% for issue in high_issues %}
- **Line {{ issue.line }}**: {{ issue.message }}
- Suggestion: {{ issue.suggestion }}
{% endfor %}
{% endif %}
{% if medium_issues %}
### Medium Priority ({{ medium_issues | length }})
{% for issue in medium_issues %}
- **Line {{ issue.line }}**: {{ issue.message }}
{% endfor %}
{% endif %}
---
## Metrics
| Metric | Value |
|--------|-------|
| Lines of Code | {{ loc }} |
| Functions | {{ function_count }} |
| Average Complexity | {{ avg_complexity }} |
| Test Coverage | {{ coverage }}% |
---
## Recommendations
{% for rec in recommendations %}
{{ loop.index }}. {{ rec }}
{% endfor %}
---
*Generated by python-reviewer skill*

`skills/python-reviewer/templates/quick-summary.md.j2`:
## Quick Review: {{ filename }}
**Score:** {{ score }}/10 | **Issues:** {{ issue_count }} ({{ critical_count }} critical)
{% if critical_issues %}
**Critical:** {% for issue in critical_issues %}Line {{ issue.line }}: {{ issue.message }}. {% endfor %}
{% endif %}
**Top recommendations:** {{ recommendations[:3] | join(", ") }}

**4. References – Documentation for the agent:**
skills/python-reviewer/references/pep8-summary.md:
# PEP 8 Quick Reference
## Naming Conventions
| Type | Convention | Example |
|------|-----------|---------|
| Module | lowercase_with_underscores | `my_module.py` |
| Class | CapitalizedWords | `MyClass` |
| Function | lowercase_with_underscores | `my_function()` |
| Variable | lowercase_with_underscores | `my_variable` |
| Constant | UPPERCASE_WITH_UNDERSCORES | `MAX_VALUE` |
| Private | _single_leading_underscore | `_internal` |
## Indentation
- Use 4 spaces per indentation level
- Never mix tabs and spaces
- Continuation lines: align with opening delimiter or use hanging indent
## Line Length
- Maximum 79 characters (72 for docstrings)
- Maximum 88 characters (Black formatter default)
## Imports
Order:
1. Standard library imports
2. Related third-party imports
3. Local application imports
Each group separated by a blank line.
## Whitespace
- No whitespace inside parentheses: `spam(ham[1], {eggs: 2})`
- No whitespace before comma: `if x == 4: print(x, y)`
- Surround operators with single space: `x = 1`

`skills/python-reviewer/references/best-practices.md`:
# Python Best Practices
## Use Context Managers
```python
# Bad
f = open('file.txt')
content = f.read()
f.close()
# Good
with open('file.txt') as f:
    content = f.read()
```

## Prefer Comprehensions

```python
# Bad
squares = []
for x in range(10):
    squares.append(x ** 2)

# Good
squares = [x ** 2 for x in range(10)]
```

## Use f-strings

```python
# Bad
message = "Hello, " + name + "!"
message = "Hello, {}!".format(name)

# Good
message = f"Hello, {name}!"
```

## Use Type Hints

```python
def greet(name: str, times: int = 1) -> str:
    return f"Hello, {name}! " * times
```

## Use Dataclasses

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str
    age: int = 0
```
**5. Assets – Static files:**
`skills/python-reviewer/assets/` can contain:
- Images for reports (logos, badges)
- CSS stylesheets for HTML output
- Configuration files
- Sample data files
**Update `aam.yaml` to include the skill:**
```yaml
artifacts:
skills:
- name: python-reviewer
path: skills/python-reviewer/
      description: "Review Python code for best practices and PEP 8 compliance"
```
Note: The entire skill directory is packaged and deployed. All scripts, templates, references, and assets are available to the AI agent at runtime.
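Because the whole skill directory ships with the package, the packaged file set can be previewed with a short directory walk (an illustrative sketch; `aam pkg pack` performs the real collection):

```python
from pathlib import Path

def collect_skill_files(skill_dir: Path) -> list[str]:
    """List every file under a skill directory, relative to its parent."""
    return sorted(
        str(p.relative_to(skill_dir.parent))
        for p in skill_dir.rglob("*")
        if p.is_file()
    )
```

For the python-reviewer layout above, this would list SKILL.md plus every script, template, reference, and asset beneath it.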
Create prompts/refactor-function.md:
---
name: refactor-function
description: "Prompt template for refactoring a Python function"
variables:
- name: function_code
description: "The function code to refactor"
required: true
- name: focus_area
description: "Specific area to focus on"
required: false
enum: [readability, performance, testability, all]
default: all
---
# Refactor Python Function
Refactor the following Python function with focus on: **{{focus_area}}**
```python
{{function_code}}
```

## Requirements

- Maintain the same functionality and API
- Add or improve type hints
- Add docstring if missing
- Apply PEP 8 conventions
- Suggest unit tests if applicable
Provide:
- The refactored code
- Explanation of changes made
- Any trade-offs or considerations
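The `{{variable}}` placeholders declared in the frontmatter are filled in at render time. A hypothetical substitution pass (AAM's real renderer may differ, e.g. by also enforcing `enum` and `default` values):

```python
import re

def render_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace {{name}} placeholders; a missing variable raises KeyError."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing required variable: {name}")
        return variables[name]
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template)

text = render_prompt(
    "Refactor with focus on: **{{focus_area}}**",
    {"focus_area": "readability"},
)
print(text)  # -> Refactor with focus on: **readability**
```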
Update `aam.yaml`:
```yaml
artifacts:
skills:
- name: python-reviewer
path: skills/python-reviewer/
description: "Review Python code for best practices and PEP 8 compliance"
prompts:
- name: refactor-function
path: prompts/refactor-function.md
      description: "Template for refactoring Python functions"
```
Create instructions/python-standards.md:
---
name: python-standards
description: "Python coding standards for this project"
scope: project
globs: "**/*.py"
---
# Python Coding Standards
When working with Python files in this project, follow these standards:
## Style
- Use Black formatter with default settings (line length 88)
- Sort imports with isort (Black-compatible profile)
- Use double quotes for strings
## Type Hints
- All public functions must have type hints
- Use `from __future__ import annotations` for forward references
- Prefer `list[str]` over `List[str]` (Python 3.9+)
## Testing
- Use pytest for all tests
- Minimum 80% code coverage for new code
- Name test files `test_*.py`
- Name test functions `test_*`
## Documentation
- Use Google-style docstrings
- All public modules, classes, and functions need docstrings
## Error Handling
- Never use bare `except:`
- Create custom exceptions in `exceptions.py`
- Log errors before re-raising

Update aam.yaml:
artifacts:
skills:
- name: python-reviewer
path: skills/python-reviewer/
description: "Review Python code for best practices and PEP 8 compliance"
prompts:
- name: refactor-function
path: prompts/refactor-function.md
description: "Template for refactoring Python functions"
instructions:
- name: python-standards
path: instructions/python-standards.md
      description: "Python coding standards for projects"

Create the agent directory and files:
mkdir -p agents/python-mentor

Create agents/python-mentor/agent.yaml:
name: python-mentor
description: "A Python mentor agent that helps write better Python code"
version: 1.0.0
system_prompt: system-prompt.md
# Skills this agent uses
skills:
- python-reviewer
# Prompts this agent uses
prompts:
- refactor-function
# Tools the agent can access
tools:
- file_read
- file_write
- shell
# Behavioral parameters
parameters:
temperature: 0.3
style: educational
  verbosity: detailed

Create agents/python-mentor/system-prompt.md:
You are a Python Mentor: an expert Python developer focused on teaching and improving code quality.
## Your Role
- Help users write better Python code
- Explain concepts clearly with examples
- Review code constructively, highlighting both issues and strengths
- Suggest improvements with explanations of why they're better
## Your Approach
1. **Be Educational**: Don't just fix code, explain why changes improve it
2. **Be Encouraging**: Acknowledge what's done well before suggesting improvements
3. **Be Practical**: Focus on changes that provide real value
4. **Be Current**: Use modern Python (3.10+) features and idioms
## When Reviewing Code
Use the python-reviewer skill to systematically check:
- PEP 8 compliance
- Type hints
- Documentation
- Error handling
- Performance
## When Refactoring
Use the refactor-function prompt template to ensure consistent, thorough refactoring.
## Communication Style
- Use clear, jargon-free explanations
- Provide code examples for every suggestion
- Reference official Python documentation when relevant
- Offer multiple solutions when trade-offs exist

Update the final aam.yaml:
# aam.yaml - Complete package manifest
name: python-best-practices
version: 1.0.0
description: "Python coding standards and best practices for AI agents"
author: your-username
license: Apache-2.0
repository: https://github.com/your-username/python-best-practices
artifacts:
agents:
- name: python-mentor
path: agents/python-mentor/
description: "Python mentor agent for code review and improvement"
skills:
- name: python-reviewer
path: skills/python-reviewer/
description: "Review Python code for best practices and PEP 8 compliance"
prompts:
- name: refactor-function
path: prompts/refactor-function.md
description: "Template for refactoring Python functions"
instructions:
- name: python-standards
path: instructions/python-standards.md
description: "Python coding standards for projects"
dependencies: {}
platforms:
cursor:
skill_scope: project
deploy_instructions_as: rules
claude:
merge_instructions: true
copilot:
merge_instructions: true
keywords:
- python
- code-review
- best-practices
- pep8
  - mentor

Before publishing, validate your package:
$ aam pkg validate
Validating python-best-practices@1.0.0...
Manifest:
  ✓ name: valid format
  ✓ version: valid semver
  ✓ description: present
  ✓ author: present
Artifacts:
  ✓ agent: python-mentor
    ✓ agent.yaml exists and valid
    ✓ system-prompt.md exists
  ✓ skill: python-reviewer
    ✓ SKILL.md exists and valid
  ✓ prompt: refactor-function
    ✓ prompts/refactor-function.md exists and valid
  ✓ instruction: python-standards
    ✓ instructions/python-standards.md exists and valid
Dependencies:
  ✓ No dependencies declared
✓ Package is valid and ready to publish

First, register on the AAM registry:
$ aam register
Username: your-username
Email: you@example.com
Password: ********
Confirm password: ********
✓ Account created successfully
✓ Verification email sent to you@example.com
Please verify your email, then run `aam login` to authenticate.

$ aam login
Email or username: your-username
Password: ********
✓ Logged in as your-username
✓ API token saved to ~/.aam/credentials.yaml
Token scopes: publish, yank
Token expires: never (revoke with `aam logout`)

$ aam pkg pack
Building @author/python-best-practices@1.0.0...
Adding aam.yaml
Adding agents/python-mentor/agent.yaml
Adding agents/python-mentor/system-prompt.md
Adding skills/python-reviewer/SKILL.md
Adding prompts/refactor-function.md
Adding instructions/python-standards.md
✓ Built python-best-practices-1.0.0.aam (4.2 KB)
Checksum: sha256:a1b2c3d4e5f6...

$ aam pkg publish
Publishing @author/python-best-practices@1.0.0...
Uploading python-best-practices-1.0.0.aam...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
✓ Published @author/python-best-practices@1.0.0
URL: https://registry.aam.dev/packages/@author/python-best-practices
⚠ Package is unsigned. Consider signing with --sign for better security.

$ aam pkg publish --sign
Publishing @author/python-best-practices@1.0.0...
Signing package with Sigstore...
Opening browser for authentication...
✓ Authenticated as author@github
✓ Package signed
✓ Recorded in Rekor transparency log
Uploading python-best-practices-1.0.0.aam...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
✓ Published python-best-practices@1.0.0
URL: https://registry.aam.dev/packages/python-best-practices
Signed by: your-username@github
Transparency log: https://rekor.sigstore.dev/api/v1/log/entries/...

$ aam pkg publish --sign --sign-type gpg --key-id ABC123DEF456
Publishing python-best-practices@1.0.0...
Signing package with GPG key ABC123DEF456...
✓ Package signed
Uploading python-best-practices-1.0.0.aam...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
✓ Published python-best-practices@1.0.0
URL: https://registry.aam.dev/packages/python-best-practices
Signed by: ABC123DEF456

Update the version in aam.yaml:

version: 1.1.0

Then publish:
$ aam pkg validate && aam pkg pack && aam pkg publish --sign
✓ Package is valid
✓ Built python-best-practices-1.1.0.aam (4.5 KB)
✓ Published python-best-practices@1.1.0

If you publish a version with issues, yank it (this marks the version as "do not install" but does not delete it):
$ aam yank python-best-practices@1.0.0 --reason "Security issue in prompt template"
⚠ This will mark python-best-practices@1.0.0 as yanked.
  Existing installations will continue to work.
  New installations will skip this version.
Proceed? [y/N] y
✓ Yanked python-best-practices@1.0.0

$ cd my-project/
$ aam install @author/python-best-practices
Resolving @author/python-best-practices@1.1.0...
+ @author/python-best-practices@1.1.0
Downloading 1 package...
✓ @author/python-best-practices@1.1.0 (4.5 KB)
Verification:
  ✓ Checksum: sha256:a1b2c3d4... matches
  ✓ Signature: Sigstore (author@github)
Deploying to cursor...
  ✓ agent: python-mentor → .cursor/rules/agent-author--python-mentor.mdc
  ✓ skill: python-reviewer → .cursor/skills/author--python-reviewer/
  ✓ prompt: refactor-function → .cursor/prompts/refactor-function.md
  ✓ instruction: python-standards → .cursor/rules/python-standards.mdc
✓ Installed 1 package (1 agent, 1 skill, 1 prompt, 1 instruction)

# Install scoped package with a specific version
$ aam install @author/python-best-practices@1.0.0
Resolving @author/python-best-practices@1.0.0...
+ @author/python-best-practices@1.0.0
...
# Unscoped package with version also works
$ aam install my-package@1.0.0

# Install only to Claude
$ aam install @author/python-best-practices --platform claude
Deploying to claude...
  ✓ skill: python-reviewer → .claude/skills/author--python-reviewer/
  ✓ instruction: python-standards → CLAUDE.md (section added)
✓ Installed 1 package

# From registry – scoped package (default)
aam install @author/python-best-practices
# From registry – unscoped package
aam install python-best-practices
# From git repository
aam install git+https://github.com/author/python-best-practices.git
# From local directory
aam install ./my-local-package/
# From .aam archive file
aam install python-best-practices-1.0.0.aam

Download and resolve dependencies without deploying artifacts:
$ aam install @author/python-best-practices --no-deploy
Resolving @author/python-best-practices@1.1.0...
+ @author/python-best-practices@1.1.0
✓ Downloaded to .aam/packages/@author/python-best-practices/
To deploy later, run: aam deploy

$ aam list
Installed packages:
┏━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Name                  ┃ Version ┃ Source ┃ Artifacts                                     ┃
┡━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ python-best-practices │ 1.1.0   │ local  │ 4 (1 agent, 1 skill, 1 prompt, 1 instruction) │
└───────────────────────┴─────────┴────────┴───────────────────────────────────────────────┘
$ aam list --tree
python-best-practices@1.1.0 (local)
(no dependencies)
$ aam info python-best-practices
python-best-practices@1.1.0
Description: Python coding standards and best practices for AI agents
Author: your-username
License: Apache-2.0
Repository: https://github.com/your-username/python-best-practices
Artifacts:
agent: python-mentor β Python mentor agent for code review
skill: python-reviewer β Review Python code for best practices
prompt: refactor-function β Template for refactoring functions
instruction: python-standards β Python coding standards
Dependencies: none
Source: local
Deployed to:
cursor: .cursor/skills/, .cursor/rules/, .cursor/prompts/

$ aam uninstall python-best-practices
Uninstalling python-best-practices@1.1.0...
Removing deployed artifacts from cursor...
✓ Removed .cursor/rules/agent-author--python-mentor.mdc
✓ Removed .cursor/skills/author--python-reviewer/
✓ Removed .cursor/prompts/refactor-function.md
✓ Removed .cursor/rules/python-standards.mdc
✓ Uninstalled python-best-practices

Suppose you want your package to depend on a code-analysis package. Add it to `aam.yaml`:
name: "@author/python-best-practices"
version: 1.2.0
description: "Python coding standards and best practices for AI agents"
author: author
license: Apache-2.0
artifacts:
  # ... your artifacts ...
dependencies:
  # Scoped dependency - exact version
  "@author/code-analysis": "1.0.0"
  # Scoped dependency - minimum version
  "@org/common-prompts": ">=2.0.0"
  # Unscoped dependency - compatible version (>=1.0.0, <2.0.0)
  linting-rules: "^1.0.0"
  # Approximate version (>=1.0.0, <1.1.0)
  formatting-utils: "~1.0.0"
  # Any version
  utilities: "*"

| Syntax | Meaning | Example |
|---|---|---|
| `1.0.0` | Exact version | Only 1.0.0 |
| `>=1.0.0` | Minimum version | 1.0.0 or higher |
| `^1.0.0` | Compatible | >=1.0.0, <2.0.0 |
| `~1.0.0` | Approximate | >=1.0.0, <1.1.0 |
| `*` | Any version | Latest available |
| `>=1.0.0,<2.0.0` | Range | Between 1.0.0 and 2.0.0 |
Your artifacts can reference artifacts from dependencies.
In an agent (agents/python-mentor/agent.yaml):
name: python-mentor
description: "Python mentor with enhanced analysis"
system_prompt: system-prompt.md
skills:
  - python-reviewer      # From this package
  - code-analyzer        # From code-analysis dependency
  - complexity-checker   # From code-analysis dependency
prompts:
  - refactor-function    # From this package
  - explain-code         # From common-prompts dependency

In a skill (`skills/python-reviewer/SKILL.md`):
---
name: python-reviewer
description: Review Python code using analysis tools from dependencies
---
# Python Code Reviewer
## Dependencies
This skill uses:
- `code-analyzer` skill from `code-analysis` package for static analysis
- `complexity-checker` skill from `code-analysis` package for complexity metrics
## Workflow
1. Run code-analyzer to identify issues
2. Run complexity-checker for metrics
3. Compile findings into structured review

When users install your package, dependencies are resolved automatically:
$ aam install python-best-practices
Resolving python-best-practices@1.2.0...
+ python-best-practices@1.2.0
+ code-analysis@1.0.0 (dependency)
+ common-prompts@2.1.0 (dependency)
+ linting-rules@1.3.0 (dependency)
+ formatting-utils@1.0.2 (dependency)
+ utilities@3.0.0 (dependency)
Downloading 6 packages...
✓ python-best-practices@1.2.0 (4.5 KB)
✓ code-analysis@1.0.0 (12.3 KB)
✓ common-prompts@2.1.0 (3.1 KB)
✓ linting-rules@1.3.0 (2.8 KB)
✓ formatting-utils@1.0.2 (1.5 KB)
✓ utilities@3.0.0 (2.2 KB)
Deploying to cursor...
✓ skill: python-reviewer → .cursor/skills/python-reviewer/
✓ skill: code-analyzer → .cursor/skills/code-analyzer/
✓ skill: complexity-checker → .cursor/skills/complexity-checker/
... (all artifacts deployed)
✓ Installed 6 packages (2 agents, 5 skills, 8 prompts, 3 instructions)

flowchart TD
PBP["python-best-practices@1.2.0"]
CA["code-analysis@1.0.0"]
CP["common-prompts@2.1.0"]
LR["linting-rules@1.3.0"]
FU["formatting-utils@1.0.2"]
UT["utilities@3.0.0"]
PBP --> CA
PBP --> CP
PBP --> LR
PBP --> FU
LR --> UT
style PBP fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
style CA fill:#fff3e0
style CP fill:#fff3e0
style LR fill:#fff3e0
style FU fill:#fff3e0
style UT fill:#f3e5f5
$ aam list --tree
python-best-practices@1.2.0 (local)
├── code-analysis@1.0.0
├── common-prompts@2.1.0
├── linting-rules@1.3.0
│   └── utilities@3.0.0
└── formatting-utils@1.0.2

After installation, AAM creates `.aam/aam-lock.yaml` for reproducible installs:
# .aam/aam-lock.yaml - DO NOT EDIT MANUALLY
lockfile_version: 1
resolved_at: "2026-02-05T14:30:00Z"
packages:
  python-best-practices:
    version: 1.2.0
    source: aam-central
    checksum: sha256:a1b2c3d4...
    dependencies:
      code-analysis: 1.0.0
      common-prompts: 2.1.0
      linting-rules: 1.3.0
      formatting-utils: 1.0.2
      utilities: 3.0.0
  code-analysis:
    version: 1.0.0
    source: aam-central
    checksum: sha256:e5f6g7h8...
    dependencies: {}
  common-prompts:
    version: 2.1.0
    source: aam-central
    checksum: sha256:i9j0k1l2...
    dependencies: {}
  linting-rules:
    version: 1.3.0
    source: aam-central
    checksum: sha256:m3n4o5p6...
    dependencies:
      utilities: 3.0.0
  formatting-utils:
    version: 1.0.2
    source: aam-central
    checksum: sha256:q7r8s9t0...
    dependencies: {}
  utilities:
    version: 3.0.0
    source: aam-central
    checksum: sha256:u1v2w3x4...
    dependencies: {}

Commit this file to git for reproducible installs across your team.
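The `checksum` fields are what make the lockfile tamper-evident: before deploying, an installer can hash each downloaded archive and compare against the pinned value. A minimal sketch of that check, assuming only the `sha256:<hex>` format shown above:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_checksum(archive: Path, locked: str) -> bool:
    """Compare a file's SHA-256 digest against a 'sha256:<hex>' lock entry."""
    algo, _, expected = locked.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported checksum algorithm: {algo}")
    return hashlib.sha256(archive.read_bytes()).hexdigest() == expected

# Demo with a throwaway file standing in for a downloaded .aam archive
with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "python-best-practices-1.2.0.aam"
    pkg.write_bytes(b"archive contents")
    locked = "sha256:" + hashlib.sha256(b"archive contents").hexdigest()
    print(verify_checksum(pkg, locked))                 # True
    print(verify_checksum(pkg, "sha256:" + "0" * 64))   # False
```

Any mismatch means the bytes you downloaded are not the bytes that were locked, so installation should abort.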
# Check which packages have updates available
$ aam outdated
Package Current Latest Source Status
code-analysis 1a2b3c4 9f8e7d6 anthropics/skills outdated
common-prompts 5e6f7a8 b1c2d3e awesome-prompts outdated (modified)
2 outdated, 1 up to date, 0 from registry
# Upgrade all outdated packages
$ aam upgrade
✓ Upgraded code-analysis (1a2b3c4 → 9f8e7d6)
⚠ Skipped common-prompts: Local modifications detected. Use --force to overwrite.
# Force upgrade (overwrite local changes)
$ aam upgrade common-prompts --force
# Dry-run β preview without making changes
$ aam upgrade --dry-run
# JSON output for scripting
$ aam outdated --json

Let's build a complete package from scratch: a code review toolkit.
mkdir code-review-toolkit && cd code-review-toolkit
aam pkg init

Fill in:

- Name: `@author/code-review-toolkit`
- Version: `1.0.0`
- Description: "Comprehensive code review toolkit for multiple languages"
- Author: author
# Create complete skill directories with all components
mkdir -p skills/security-scan/{scripts,templates,references}
mkdir -p skills/performance-check/{scripts,templates,references}
mkdir -p agents/security-reviewer
mkdir -p prompts
mkdir -p instructions

This skill demonstrates a complete structure with scripts, templates, and references.
Directory structure:
skills/security-scan/
├── SKILL.md                     # Main skill definition
├── scripts/
│   ├── scan.py                  # Security scanning script
│   └── check_secrets.sh         # Secret detection script
├── templates/
│   ├── security-report.md.j2    # Full report template
│   └── finding.md.j2            # Single finding template
└── references/
    ├── owasp-top10.md           # OWASP Top 10 reference
    └── cwe-patterns.md          # Common weakness patterns
**1. SKILL.md:**

`skills/security-scan/SKILL.md`:
---
name: security-scan
description: Scan code for common security vulnerabilities. Use when reviewing code for security issues or when asked about secure coding practices.
---
# Security Scanner
## Available Scripts
Run these scripts for automated analysis:
- `scripts/scan.py <file>` - Static security analysis
- `scripts/check_secrets.sh <dir>` - Detect hardcoded secrets
Example:
```bash
python skills/security-scan/scripts/scan.py src/auth.py --format json
```

Use for consistent output:

- `templates/security-report.md.j2` - Full security assessment
- `templates/finding.md.j2` - Individual finding format
Load for detailed guidance:
- OWASP Top 10 - Common web vulnerabilities
- CWE Patterns - Weakness enumeration
- SQL injection patterns
- Command injection patterns
- Path traversal vulnerabilities
- XSS vulnerabilities
- Hardcoded credentials
- Weak password patterns
- Missing authentication checks
- Improper session handling
- Sensitive data in logs
- Unencrypted sensitive data
- Exposed API keys or secrets
- Known vulnerable packages
- Outdated dependencies
When asked to review security:
1. Run `scripts/scan.py` for automated detection
2. Manually review for logic flaws
3. Rate severity: Critical / High / Medium / Low
4. Generate report using template
5. Provide remediation guidance with CWE references
**2. Scripts:**
`skills/security-scan/scripts/scan.py`:
```python
#!/usr/bin/env python3
"""
Security scanner for common vulnerabilities.
Usage: python scan.py <file> [--format text|json] [--severity critical|high|medium|low|all]
"""
import re
import sys
import json
from pathlib import Path
from dataclasses import dataclass, asdict
@dataclass
class Finding:
id: str
severity: str
category: str
title: str
file: str
line: int
code: str
description: str
cwe: str
remediation: str
# Security patterns to detect
PATTERNS = {
"sql_injection": {
"pattern": r'execute\s*\(\s*["\'].*%s.*["\']\s*%',
"severity": "critical",
"cwe": "CWE-89",
"title": "SQL Injection",
"description": "User input directly concatenated into SQL query",
"remediation": "Use parameterized queries or prepared statements"
},
"hardcoded_password": {
"pattern": r'(?i)(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']',
"severity": "high",
"cwe": "CWE-798",
"title": "Hardcoded Credentials",
"description": "Password appears to be hardcoded in source",
"remediation": "Use environment variables or secure vault"
},
"eval_usage": {
"pattern": r'\beval\s*\(',
"severity": "high",
"cwe": "CWE-95",
"title": "Code Injection via eval()",
"description": "eval() can execute arbitrary code",
"remediation": "Use ast.literal_eval() or avoid dynamic evaluation"
},
"shell_injection": {
"pattern": r'os\.system\s*\(|subprocess\.call\s*\([^,]+shell\s*=\s*True',
"severity": "critical",
"cwe": "CWE-78",
"title": "Shell Injection",
"description": "Command executed with shell=True or os.system",
"remediation": "Use subprocess with shell=False and argument list"
}
}
def scan_file(filepath: Path) -> list[Finding]:
findings = []
content = filepath.read_text()
lines = content.splitlines()
for vuln_id, config in PATTERNS.items():
for i, line in enumerate(lines, 1):
if re.search(config["pattern"], line):
findings.append(Finding(
id=f"SEC-{len(findings)+1:03d}",
severity=config["severity"],
category="security",
title=config["title"],
file=str(filepath),
line=i,
code=line.strip(),
description=config["description"],
cwe=config["cwe"],
remediation=config["remediation"]
))
return findings
def main():
filepath = Path(sys.argv[1])
output_format = "text"
if "--format" in sys.argv:
output_format = sys.argv[sys.argv.index("--format") + 1]
findings = scan_file(filepath)
if output_format == "json":
print(json.dumps([asdict(f) for f in findings], indent=2))
else:
for f in findings:
print(f"[{f.severity.upper()}] {f.id}: {f.title}")
print(f" File: {f.file}:{f.line}")
print(f" Code: {f.code}")
print(f" CWE: {f.cwe}")
print(f" Fix: {f.remediation}")
print()
if __name__ == "__main__":
    main()
```
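The detection patterns above are plain regular expressions, so they are easy to sanity-check in isolation. A quick sketch exercising two of them on sample lines:

```python
import re

# Two of the regexes from the PATTERNS table above
EVAL_RE = re.compile(r'\beval\s*\(')
PASSWORD_RE = re.compile(r'(?i)(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']')

samples = {
    'result = eval(user_input)': True,              # flagged: bare eval()
    'ast.literal_eval(user_input)': False,          # not flagged: \b blocks literal_eval
    'password = "hunter2"': True,                   # flagged: hardcoded credential
    'password = os.environ["DB_PASSWORD"]': False,  # not flagged: no quoted literal
}
for line, expected in samples.items():
    flagged = bool(EVAL_RE.search(line) or PASSWORD_RE.search(line))
    print(f"{'FLAG' if flagged else 'ok':4} {line}")
    assert flagged == expected
```

Keeping a table like this next to the patterns makes regressions obvious when a regex is tweaked.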
`skills/security-scan/scripts/check_secrets.sh`:
#!/bin/bash
# Detect hardcoded secrets in source files
# Usage: ./check_secrets.sh <directory>
DIR="${1:-.}"
echo "Scanning for secrets in: $DIR"
echo "================================"
# Patterns to detect
patterns=(
"api[_-]?key\s*[:=]"
"secret[_-]?key\s*[:=]"
"password\s*[:=]"
"private[_-]?key"
"AWS_SECRET"
"BEGIN RSA PRIVATE KEY"
"BEGIN OPENSSH PRIVATE KEY"
)
found=0
for pattern in "${patterns[@]}"; do
results=$(grep -rn -E "$pattern" "$DIR" --include="*.py" --include="*.js" --include="*.yaml" --include="*.yml" --include="*.json" 2>/dev/null)
if [ -n "$results" ]; then
echo "Pattern: $pattern"
echo "$results"
echo ""
((found++))
fi
done
if [ $found -eq 0 ]; then
echo "No secrets detected."
else
echo "================================"
echo "Found $found potential secret patterns!"
fi

**3. Templates:**
`skills/security-scan/templates/security-report.md.j2`:
# Security Assessment Report
**Target:** {{ target }}
**Date:** {{ date }}
**Scanner:** security-scan skill v1.0
---
## Executive Summary
| Severity | Count |
|----------|-------|
| Critical | {{ findings | selectattr('severity', 'eq', 'critical') | list | length }} |
| High | {{ findings | selectattr('severity', 'eq', 'high') | list | length }} |
| Medium | {{ findings | selectattr('severity', 'eq', 'medium') | list | length }} |
| Low | {{ findings | selectattr('severity', 'eq', 'low') | list | length }} |
**Risk Level:** {{ risk_level }}
---
## Findings
{% for finding in findings %}
### {{ finding.id }}: {{ finding.title }}
| Field | Value |
|-------|-------|
| Severity | **{{ finding.severity | upper }}** |
| Location | `{{ finding.file }}:{{ finding.line }}` |
| CWE | [{{ finding.cwe }}](https://cwe.mitre.org/data/definitions/{{ finding.cwe | replace('CWE-', '') }}.html) |
**Description:** {{ finding.description }}
**Vulnerable Code:** `{{ finding.code }}`
**Remediation:** {{ finding.remediation }}
---
{% endfor %}
## Recommendations
1. Address all Critical findings immediately
2. Schedule High findings for next sprint
3. Track Medium/Low in backlog
4. Implement security linting in CI/CD
---
*Generated by security-scan skill*
`skills/security-scan/templates/finding.md.j2`:
## {{ finding.id }}: {{ finding.title }}
**Severity:** {{ finding.severity | upper }}
**Location:** `{{ finding.file }}:{{ finding.line }}`
**CWE:** {{ finding.cwe }}
{{ finding.description }}
```{{ language }}
{{ finding.code }}
```

**Remediation:** {{ finding.remediation }}
**4. References:**
`skills/security-scan/references/owasp-top10.md`:
```markdown
# OWASP Top 10 (2021)
## A01: Broken Access Control
Access control enforces policy such that users cannot act outside their intended permissions.
**What to look for:**
- Missing authorization checks
- IDOR (Insecure Direct Object References)
- CORS misconfiguration
- Path traversal
## A02: Cryptographic Failures
Failures related to cryptography which often lead to sensitive data exposure.
**What to look for:**
- Weak algorithms (MD5, SHA1 for passwords)
- Hardcoded keys
- Missing encryption for sensitive data
- Improper certificate validation
## A03: Injection
User-supplied data is not validated, filtered, or sanitized.
**What to look for:**
- SQL injection
- Command injection
- LDAP injection
- XSS (Cross-site scripting)
## A04: Insecure Design
Missing or ineffective security controls.
**What to look for:**
- Missing rate limiting
- No defense in depth
- Missing input validation at trust boundaries
## A05: Security Misconfiguration
Missing appropriate security hardening.
**What to look for:**
- Default credentials
- Unnecessary features enabled
- Verbose error messages
- Missing security headers
## A06: Vulnerable Components
Using components with known vulnerabilities.
**What to look for:**
- Outdated dependencies
- Unpatched systems
- Unsupported software
## A07: Authentication Failures
Confirmation of identity, authentication, and session management.
**What to look for:**
- Weak passwords allowed
- Missing MFA
- Session fixation
- Credential stuffing vulnerabilities
## A08: Software and Data Integrity Failures
Code and infrastructure that does not protect against integrity violations.
**What to look for:**
- Unsigned updates
- Insecure deserialization
- CI/CD pipeline vulnerabilities
## A09: Security Logging Failures
Without logging and monitoring, breaches cannot be detected.
**What to look for:**
- Missing audit logs
- Logs not monitored
- Sensitive data in logs
## A10: Server-Side Request Forgery (SSRF)
Fetching a remote resource without validating user-supplied URL.
**What to look for:**
- URL parameters to external services
- Unvalidated redirects
- Internal service access
```

`skills/security-scan/references/cwe-patterns.md`:
# Common CWE Patterns
## CWE-89: SQL Injection
**Pattern:**
```python
# Vulnerable
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Safe
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
```

## CWE-78: Command Injection

**Pattern:**

```python
# Vulnerable
os.system(f"ping {host}")
# Safe
subprocess.run(["ping", host], shell=False)
```

## CWE-79: Cross-Site Scripting (XSS)

**Pattern:**

```javascript
// Vulnerable
element.innerHTML = userInput;
// Safe
element.textContent = userInput;
```

## CWE-798: Hardcoded Credentials

**Pattern:**

```python
# Vulnerable
password = "secret123"
# Safe
password = os.environ.get("DB_PASSWORD")
```

## CWE-502: Insecure Deserialization

**Pattern:**

```python
# Vulnerable
data = pickle.loads(user_input)
# Safe
data = json.loads(user_input)
```

### Step 4: Create Skill - Performance Check (Complete Structure)
Another complete skill with scripts and references.
**Directory structure:**
skills/performance-check/
├── SKILL.md
├── scripts/
│   ├── complexity.py            # Cyclomatic complexity analyzer
│   └── profile_wrapper.py       # Performance profiling wrapper
├── templates/
│   └── perf-report.md.j2        # Performance report template
└── references/
    ├── big-o-cheatsheet.md      # Algorithm complexity reference
    └── optimization-patterns.md # Common optimization patterns
**1. SKILL.md:**
`skills/performance-check/SKILL.md`:
```markdown
---
name: performance-check
description: Analyze code for performance issues and optimization opportunities. Use when asked about performance, optimization, or efficiency.
---
# Performance Analyzer
## Available Scripts
- `scripts/complexity.py <file>` - Calculate cyclomatic and cognitive complexity
- `scripts/profile_wrapper.py <script>` - Profile execution time and memory
Example:
```bash
python skills/performance-check/scripts/complexity.py src/main.py
```

- `templates/perf-report.md.j2` - Full performance analysis report
- Big-O Cheatsheet - Complexity reference
- Optimization Patterns - Common fixes
- Identify O(nΒ²) or worse operations
- Suggest more efficient algorithms
- Flag unnecessary iterations
- Large object allocations
- Memory leaks potential
- Inefficient data structures
- Unbatched database queries (N+1)
- Synchronous blocking operations
- Missing caching opportunities
- Python: List vs generator, string concatenation
- JavaScript: DOM manipulation, event handlers
- SQL: Missing indexes, inefficient joins
1. Run `scripts/complexity.py` for static analysis
2. Use `scripts/profile_wrapper.py` for runtime profiling (if applicable)
3. Review findings against the Big-O cheatsheet
4. Apply optimization patterns from references
5. Generate report using template
```
**2. Scripts:**
`skills/performance-check/scripts/complexity.py`:
```python
#!/usr/bin/env python3
"""
Analyze code complexity metrics.
Usage: python complexity.py <file.py> [--format text|json]
"""
import ast
import sys
import json
from pathlib import Path
from dataclasses import dataclass, asdict
@dataclass
class FunctionMetrics:
name: str
line: int
cyclomatic_complexity: int
cognitive_complexity: int
lines_of_code: int
parameters: int
rating: str
class ComplexityAnalyzer(ast.NodeVisitor):
def __init__(self):
self.metrics = []
self._cognitive_nesting = 0
def visit_FunctionDef(self, node):
cc = self._cyclomatic_complexity(node)
cog = self._cognitive_complexity(node)
loc = node.end_lineno - node.lineno + 1
params = len(node.args.args)
self.metrics.append(FunctionMetrics(
name=node.name,
line=node.lineno,
cyclomatic_complexity=cc,
cognitive_complexity=cog,
lines_of_code=loc,
parameters=params,
rating=self._rate(cc, cog)
))
self.generic_visit(node)
def _cyclomatic_complexity(self, node) -> int:
complexity = 1
for child in ast.walk(node):
if isinstance(child, (ast.If, ast.While, ast.For,
ast.ExceptHandler, ast.With,
ast.Assert, ast.comprehension)):
complexity += 1
elif isinstance(child, ast.BoolOp):
complexity += len(child.values) - 1
return complexity
def _cognitive_complexity(self, node) -> int:
complexity = 0
nesting = 0
for child in ast.walk(node):
if isinstance(child, (ast.If, ast.While, ast.For)):
complexity += 1 + nesting
nesting += 1
elif isinstance(child, ast.BoolOp):
complexity += len(child.values) - 1
return complexity
def _rate(self, cc: int, cog: int) -> str:
score = max(cc, cog)
if score <= 5:
return "A (excellent)"
elif score <= 10:
return "B (good)"
elif score <= 20:
return "C (moderate - consider refactoring)"
elif score <= 30:
return "D (complex - refactor recommended)"
else:
return "F (very complex - must refactor)"
def analyze(filepath: Path) -> list[FunctionMetrics]:
tree = ast.parse(filepath.read_text())
analyzer = ComplexityAnalyzer()
analyzer.visit(tree)
return analyzer.metrics
def main():
filepath = Path(sys.argv[1])
output_format = "text"
if "--format" in sys.argv:
output_format = sys.argv[sys.argv.index("--format") + 1]
metrics = analyze(filepath)
if output_format == "json":
print(json.dumps([asdict(m) for m in metrics], indent=2))
else:
print(f"Complexity Analysis: {filepath}\n")
print(f"{'Function':<25} {'Line':<6} {'CC':<4} {'Cog':<4} {'LOC':<5} {'Rating'}")
print("-" * 70)
for m in metrics:
print(f"{m.name:<25} {m.line:<6} {m.cyclomatic_complexity:<4} "
f"{m.cognitive_complexity:<4} {m.lines_of_code:<5} {m.rating}")
if __name__ == "__main__":
    main()
```
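A standalone sketch of the same branch-counting idea, small enough to try in a REPL (it mirrors the `_cyclomatic_complexity` logic above rather than importing it):

```python
import ast

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Count decision points + 1 per function, as in complexity.py above."""
    results = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            cc = 1
            for child in ast.walk(node):
                if isinstance(child, (ast.If, ast.While, ast.For,
                                      ast.ExceptHandler, ast.With,
                                      ast.Assert, ast.comprehension)):
                    cc += 1
                elif isinstance(child, ast.BoolOp):
                    cc += len(child.values) - 1
            results[node.name] = cc
    return results

snippet = """
def classify(n):
    if n < 0 and n != -1:
        return "negative"
    for _ in range(n):
        pass
    return "done"
"""
print(cyclomatic_complexity(snippet))  # {'classify': 4}
```

The `if` contributes 1, the `and` (a two-value `BoolOp`) contributes 1, the `for` contributes 1, plus the base of 1.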
**3. References:**
`skills/performance-check/references/big-o-cheatsheet.md`:
# Big-O Complexity Cheatsheet
## Common Time Complexities
| Notation | Name | Example |
|----------|------|---------|
| O(1) | Constant | Hash table lookup |
| O(log n) | Logarithmic | Binary search |
| O(n) | Linear | Single loop |
| O(n log n) | Linearithmic | Merge sort |
| O(n²) | Quadratic | Nested loops |
| O(2ⁿ) | Exponential | Recursive fibonacci |
| O(n!) | Factorial | Permutations |
## Data Structure Operations
| Structure | Access | Search | Insert | Delete |
|-----------|--------|--------|--------|--------|
| Array | O(1) | O(n) | O(n) | O(n) |
| Linked List | O(n) | O(n) | O(1) | O(1) |
| Hash Table | N/A | O(1)* | O(1)* | O(1)* |
| BST | O(log n)* | O(log n)* | O(log n)* | O(log n)* |
| Heap | N/A | O(n) | O(log n) | O(log n) |
*Average case, worst case may differ
## Python-Specific
| Operation | Time |
|-----------|------|
| `list.append()` | O(1) |
| `list.insert(0, x)` | O(n) |
| `x in list` | O(n) |
| `x in set` | O(1) |
| `dict[key]` | O(1) |
| `list.sort()` | O(n log n) |

`skills/performance-check/references/optimization-patterns.md`:
# Common Optimization Patterns
## Use Set for Membership Testing
```python
# Slow O(n)
if item in large_list:
pass
# Fast O(1)
large_set = set(large_list)
if item in large_set:
pass
```

## Use Generators for Large Sequences

```python
# Memory heavy - loads all into memory
squares = [x**2 for x in range(1000000)]
# Memory efficient - generates on demand
squares = (x**2 for x in range(1000000))
```

## Join Strings Instead of Concatenating

```python
# Slow O(n²)
result = ""
for s in strings:
result += s
# Fast O(n)
result = "".join(strings)
```

## Cache Expensive Results

```python
from functools import lru_cache
@lru_cache(maxsize=128)
def expensive_function(n):
# ... complex calculation
    return result
```

## Batch Database Queries (Avoid N+1)

```python
# Slow - N+1 queries
for user_id in user_ids:
user = db.query(User).get(user_id)
# Fast - single query
users = db.query(User).filter(User.id.in_(user_ids)).all()
```
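The set-membership advice at the top of this file is easy to verify empirically. A small `timeit` sketch (absolute numbers vary by machine, but the set lookup should win by orders of magnitude):

```python
import timeit

data = list(range(100_000))
data_set = set(data)
target = 99_999  # worst case for a linear list scan

list_time = timeit.timeit(lambda: target in data, number=200)
set_time = timeit.timeit(lambda: target in data_set, number=200)

print(f"list membership: {list_time:.4f}s")
print(f"set membership:  {set_time:.4f}s")
```

The one-time cost of building the set pays off as soon as you do more than a handful of lookups.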
### Step 5: Create Agent - Security Reviewer
`agents/security-reviewer/agent.yaml`:
```yaml
name: security-reviewer
description: "Security-focused code reviewer that identifies vulnerabilities"
version: 1.0.0
system_prompt: system-prompt.md
skills:
  - security-scan
prompts:
  - security-report
tools:
  - file_read
  - grep
  - shell
parameters:
  temperature: 0.2
  style: thorough
  output_format: structured
```

`agents/security-reviewer/system-prompt.md`:
You are a Security Code Reviewer β an expert in application security focused on identifying vulnerabilities in code.
## Your Mission
Find security issues before they become incidents. You are thorough, precise, and always explain the "why" behind vulnerabilities.
## Review Process
1. **Understand Context**: What does this code do? What data does it handle?
2. **Threat Model**: Who might attack this? What would they try?
3. **Systematic Scan**: Use security-scan skill for comprehensive checks
4. **Prioritize**: Rate by severity and exploitability
5. **Remediate**: Provide specific, actionable fixes
## Severity Ratings
- **Critical**: Immediate exploitation possible, high impact
- **High**: Exploitation likely, significant impact
- **Medium**: Exploitation requires conditions, moderate impact
- **Low**: Minimal impact or unlikely exploitation
## Communication Style
- Be direct about issues β security requires clarity
- Always explain why something is a vulnerability
- Provide working code fixes, not just descriptions
- Reference OWASP, CWE, or CVE when applicable

`prompts/security-report.md`:
---
name: security-report
description: "Generate a structured security assessment report"
variables:
  - name: target
    description: "What is being reviewed (file, component, system)"
    required: true
  - name: scope
    description: "Scope of review"
    required: false
    default: "full"
    enum: [full, critical-only, quick-scan]
---
# Security Assessment: {{target}}
## Scope: {{scope}}
Perform a security review of the specified target and generate a report.
## Required Sections
1. **Executive Summary**
- Overall risk rating (Critical/High/Medium/Low)
- Key findings count by severity
- Immediate actions required
2. **Findings Detail**
For each finding:
- ID (SEC-001, SEC-002, etc.)
- Title
- Severity
- Location (file:line)
- Description
- Proof of Concept (if safe)
- Remediation
- References (CWE, OWASP)
3. **Recommendations**
- Prioritized action items
- Security improvements
4. **Appendix**
- Tools/methods used
- Scope limitations

`instructions/secure-coding.md`:
---
name: secure-coding
description: "Secure coding guidelines for all languages"
scope: global
---
# Secure Coding Guidelines
Apply these security practices when writing or reviewing code:
## Input Validation
- Validate all input at system boundaries
- Use allowlists over denylists
- Sanitize output based on context (HTML, SQL, etc.)
## Authentication
- Never store passwords in plain text
- Use established libraries for auth (don't roll your own)
- Implement proper session management
## Secrets Management
- Never commit secrets to version control
- Use environment variables or secret managers
- Rotate credentials regularly
## Error Handling
- Don't expose stack traces to users
- Log security events with context
- Fail securely (deny by default)
## Dependencies
- Keep dependencies updated
- Review security advisories
- Minimize dependency surface

`aam.yaml`:
name: "@author/code-review-toolkit"
version: 1.0.0
description: "Comprehensive code review toolkit for security and performance"
author: author
license: Apache-2.0
repository: https://github.com/author/code-review-toolkit
artifacts:
  agents:
    - name: security-reviewer
      path: agents/security-reviewer/
      description: "Security-focused code reviewer"
  skills:
    - name: security-scan
      path: skills/security-scan/
      description: "Scan code for security vulnerabilities"
    - name: performance-check
      path: skills/performance-check/
      description: "Analyze code for performance issues"
  prompts:
    - name: security-report
      path: prompts/security-report.md
      description: "Generate security assessment reports"
  instructions:
    - name: secure-coding
      path: instructions/secure-coding.md
      description: "Secure coding guidelines"
dependencies: {}
platforms:
  cursor:
    skill_scope: project
    deploy_instructions_as: rules
  claude:
    merge_instructions: true
  copilot:
    merge_instructions: true
keywords:
  - security
  - code-review
  - performance
  - vulnerabilities
  - best-practices

# Validate
$ aam pkg validate
✓ Package is valid and ready to publish

# Pack
$ aam pkg pack
✓ Built code-review-toolkit-1.0.0.aam (8.7 KB)

# Publish with signature
$ aam pkg publish --sign
✓ Published @author/code-review-toolkit@1.0.0

$ aam install @author/code-review-toolkit
Resolving @author/code-review-toolkit@1.0.0...
+ @author/code-review-toolkit@1.0.0
Deploying to cursor...
✓ agent: security-reviewer → .cursor/rules/agent-author--security-reviewer.mdc
✓ skill: security-scan → .cursor/skills/author--security-scan/
✓ skill: performance-check → .cursor/skills/author--performance-check/
✓ prompt: security-report → .cursor/prompts/security-report.md
✓ instruction: secure-coding → .cursor/rules/secure-coding.mdc
✓ Installed 1 package (1 agent, 2 skills, 1 prompt, 1 instruction)

Dist-tags let you assign named aliases to specific versions. They're useful for marking "stable" releases, "beta" versions, or org-specific approval gates.
# Publish and tag the version as "beta"
aam publish --tag beta
# The "latest" tag is always set automatically on publish

# Tag a specific version as "stable"
aam dist-tag add @author/my-agent@1.2.0 stable
# List all tags for a package
aam dist-tag ls @author/my-agent
# latest: 1.3.0
# stable: 1.2.0
# beta: 1.3.0-rc.1
# Remove a tag
aam dist-tag rm @author/my-agent beta

# Install the "stable" version (whatever version "stable" points to)
aam install @author/my-agent@stable
# Install the "latest" version (default behavior)
aam install @author/my-agent

Dist-tags work with both local and HTTP registries. For local registries, tags are stored in the metadata.yaml file.
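Conceptually, a dist-tag is just a name-to-version mapping consulted before normal version resolution. A sketch of the lookup (the metadata shape here is illustrative, not AAM's actual schema):

```python
# Hypothetical per-package metadata (shape is illustrative only)
metadata = {
    "versions": ["1.2.0", "1.3.0", "1.3.0-rc.1"],
    "dist_tags": {"latest": "1.3.0", "stable": "1.2.0", "beta": "1.3.0-rc.1"},
}

def resolve(spec: str, meta: dict) -> str:
    """Resolve the part after '@' in 'pkg@<spec>': tags win, then exact versions."""
    if spec in meta["dist_tags"]:
        return meta["dist_tags"][spec]
    if spec in meta["versions"]:
        return spec
    raise KeyError(f"no dist-tag or version matching {spec!r}")

print(resolve("stable", metadata))  # 1.2.0
print(resolve("latest", metadata))  # 1.3.0
```

Because tags are looked up first, retagging "stable" instantly changes what consumers get without republishing anything.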
Governance controls help organizations manage what packages can be installed and who can publish.
Policy gates run locally in the CLI - no server required. Configure them in your project or global config:
# .aam/config.yaml or ~/.aam/config.yaml
governance:
  install_policy:
    allowed_scopes: ["@myorg", "@trusted-vendor"]
    require_signature: true
    require_tag: "stable"
    blocked_packages: ["@sketchy/*"]
  publish_policy:
    require_signature: true

Example: Only allow approved packages:
# With allowed_scopes (and require_tag: "stable") configured, this install fails:
$ aam install @unknown/sketchy-tool
ERROR: Package '@unknown/sketchy-tool' blocked by install policy:
scope '@unknown' is not in allowed_scopes
# But this works:
$ aam install @myorg/approved-tool@stable
✓ Installed @myorg/approved-tool@1.2.0

When using an HTTP registry with require_approval: true:
# Author publishes β package is in "pending" state
$ aam publish
✓ Published @myorg/agent@1.2.0 (pending approval)

# Approver reviews and approves
$ aam approve @myorg/agent@1.2.0 --comment "Reviewed for production"
✓ Approved @myorg/agent@1.2.0

# Now it can be installed
$ aam install @myorg/agent@1.2.0

Every action on the HTTP registry is logged. View the audit log via the API:
# View recent audit events (requires admin token)
curl -H "Authorization: Bearer $TOKEN" \
https://registry.aam.dev/api/v1/audit-log?limit=10

Packages can declare tests and evaluations in `aam.yaml`. Eval results are stored in the registry and help consumers judge package quality.
# aam.yaml
quality:
  tests:
    - name: "unit-tests"
      command: "pytest tests/"
      description: "Unit tests for agent skills"
    - name: "lint-check"
      command: "ruff check ."
      description: "Code quality check"
  evals:
    - name: "accuracy-eval"
      path: "evals/accuracy.yaml"
      description: "Measures accuracy against benchmark dataset"
      metrics:
        - name: "accuracy"
          type: "percentage"
        - name: "latency_p95"
          type: "duration_ms"

# Run all declared tests
$ aam test
Running unit-tests... ✓ passed
Running lint-check... ✓ passed
# Run all declared evals
$ aam eval
Running accuracy-eval...
accuracy: 94.2%
latency_p95: 1200ms
Status: passed
# Run evals and publish results to the registry
$ aam eval --publish
✓ Eval results published for @author/my-agent@1.2.0

Eval results appear in `aam info` and `aam search` output:
$ aam info @author/my-agent
@author/my-agent@1.2.0
...
Eval Results:
accuracy-eval: passed (accuracy: 94.2%, latency_p95: 1200ms)

Portable bundles are pre-compiled, self-contained archives for a specific platform. They're great for sharing via Slack, email, or git without requiring access to a registry.
# Build a bundle for Cursor
$ aam build --target cursor
Building @author/my-agent@1.0.0 for cursor...
Resolving dependencies...
Compiling artifacts for cursor...
✓ Built dist/my-agent-1.0.0-cursor.bundle.aam
# Build for all configured platforms
$ aam build --target all
✓ Built dist/my-agent-1.0.0-cursor.bundle.aam
✓ Built dist/my-agent-1.0.0-copilot.bundle.aam
✓ Built dist/my-agent-1.0.0-claude.bundle.aam

# Install directly from a bundle file - no registry needed
$ aam install ./dist/my-agent-1.0.0-cursor.bundle.aam
Deploying to cursor...
✓ skill: my-skill → .cursor/skills/author--my-skill/
✓ agent: my-agent → .cursor/rules/agent-author--my-agent.mdc
✓ Installed from bundle (1 agent, 1 skill)

Bundles are regular files - share them however works for your team:
- Slack/Teams: Drop the .bundle.aam file in a channel
- Email: Attach to an email
- Git: Commit to a shared repository
- Cloud storage: Upload to Google Drive, Dropbox, etc.
The recipient just runs aam install ./path-to-bundle.aam and they're ready to go.
| Task | Command |
|---|---|
| Getting Started | |
| Set up AAM (platform, sources) | aam init |
| Package Management | |
| Install package (scoped) | aam install @author/name |
| Install package (unscoped) | aam install name |
| Install specific version | aam install @author/name@version |
| Install from specific source | aam install source-name/artifact (e.g., anthropics/skills/skill-creator) |
| Uninstall package | aam uninstall <name> |
| Upgrade outdated packages | aam upgrade [name] |
| Check for outdated packages | aam outdated [--json] |
| Search registry + sources | aam search <query> |
| List installed | aam list |
| Browse available artifacts | aam list --available |
| Show package info | aam info <name> |
| Package Authoring | |
| Create package from existing project | aam pkg create |
| Create package from scratch | aam pkg init <name> |
| Validate package | aam pkg validate |
| Build archive | aam pkg pack |
| Publish to registry | aam pkg publish [--sign] |
| Build portable bundle | aam pkg build --target <platform> |
| Source Management | |
| Add git source | aam source add <url> |
| Scan source | aam source scan <name> |
| Update sources | aam source update [--all] |
| List sources | aam source list |
| Remove source | aam source remove <name> |
| List candidates | aam source candidates |
| Enable default sources | aam source enable-defaults |
| Package Integrity | |
| Verify package | aam verify <name> |
| Diff package | aam diff <name> |
| Configuration | |
| Create local registry | aam registry init <path> |
| Manage dist-tags | aam dist-tag add/rm/ls |
| Utilities | |
| Start MCP server | aam mcp serve [--transport] [--allow-write] |
| Run diagnostics | aam doctor |
| Run tests | aam test |
| Run evals | aam eval [--publish] |
AAM can discover and package artifacts directly from remote git repositories. This lets you browse community skill collections (like openai/skills) and create AAM packages from them without manually downloading files.
# GitHub shorthand
aam source add openai/skills
# HTTPS URL with subdirectory
aam source add https://github.com/openai/skills --path skills/.curated
# Track a specific branch
aam source add openai/skills --ref v2

AAM clones the repository, scans for artifacts, and saves the source in your configuration.
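The owner/repo shorthand is just sugar for a GitHub HTTPS URL. A hypothetical normalizer sketches the idea (illustrative only; AAM's actual resolution logic is not shown here):

```python
def normalize_source_url(spec: str) -> str:
    """Expand the GitHub owner/repo shorthand into a full clone URL.

    Only the shorthand form shown above is handled; explicit URLs pass through.
    """
    if "://" not in spec and spec.count("/") == 1:
        return f"https://github.com/{spec}"
    return spec

print(normalize_source_url("openai/skills"))
# -> https://github.com/openai/skills
```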
# See what's in a source
aam source scan openai/skills
# Filter by type
aam source scan openai/skills --type skill
# List unpackaged candidates across all sources
aam source candidates

You can install artifacts directly from git sources without creating a package first. Use the qualified name format source-name/artifact as a single argument:
# Install skill-creator from anthropics/skills (source name from aam source list)
aam install anthropics/skills/skill-creator
# For sources with a scan path (e.g., openai/skills:.curated)
aam install openai/skills:.curated/code-reviewNote: There is no -s or --source option for aam install. The source is specified in the package argument. If you add anthropics/skills and anthropics/skills:skills as separate sources (with different paths), use the source name that appears in aam list --available or the Source column of aam search. Run aam info source/artifact to see the exact install command for any uninstalled artifact.
# Package specific artifacts from a source
aam create-package --from-source openai/skills --artifacts code-review,code-gen
# Package all artifacts from a source
aam create-package --from-source openai/skills --all

Packages created from sources include provenance metadata that records where the content originated.
# Fetch upstream changes
aam source update openai/skills
# Update all sources
aam source update --all
# Preview changes without fetching
aam source update openai/skills --dry-run

# List all configured sources
aam source list
# Remove a source
aam source remove openai/skills
# Remove and purge cached clone
aam source remove openai/skills --purge-cache

AAM tracks per-file SHA-256 checksums for installed packages. You can verify that installed files haven't been modified:
# Check if files are modified
aam verify my-package
# Verify all installed packages
aam verify --all
# See exact changes
aam diff my-package

When upgrading a package with local modifications, AAM warns you and offers options to back up, skip, view differences, or force the upgrade.
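The mechanism behind verification is straightforward: hash each installed file and compare against the digests recorded at install time. A minimal sketch, assuming a simple relative-path-to-digest mapping (the actual manifest layout is not specified here):

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_files(root: Path, recorded: dict[str, str]) -> list[str]:
    """Return relative paths whose current digest differs from the recorded one."""
    return [
        rel for rel, digest in recorded.items()
        if file_sha256(root / rel) != digest
    ]
```

A non-empty return value corresponds to the "modified files" case that aam diff would then display.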
AAM ships with 4 curated community skill sources:
- github/awesome-copilot - GitHub Copilot skills
- openai/skills:.curated - OpenAI curated skills
- anthropics/skills - Anthropic Claude skills
- microsoft/skills - Microsoft skills
These are registered automatically when you run aam init. You can also enable them at any time:
# Enable all 4 default sources
aam source enable-defaults
# Clone and scan the sources
aam source update --all

If you previously removed a default source, aam source enable-defaults re-enables it by clearing it from the removed_defaults list. Sources that are already configured are skipped.
The AAM MCP (Model Context Protocol) server allows AI agents inside IDEs to manage packages programmatically. Instead of running CLI commands manually, your IDE agent can search registries, install packages, validate manifests, manage remote sources, and verify package integrity through MCP tools.
The MCP server exposes AAM's capabilities as structured tools and resources that IDE agents (Cursor, VS Code, Claude Desktop, Windsurf) can invoke over a standard protocol. This means your AI assistant can:
- Search for and discover packages across registries
- Install, uninstall, and manage packages
- Validate package manifests
- Discover artifacts from remote git sources
- Verify package integrity and view file differences
- Read project context (installed packages, configuration, sources)
- Diagnose environment issues
- Recommend skills based on repository analysis (frontend, backend, LLM usage)
# Start with stdio transport (default, for IDE integration)
aam mcp serve
# Start with HTTP transport on a custom port
aam mcp serve --transport http --port 9000
# Enable write tools for full package management
aam mcp serve --allow-write
# Log to a file at DEBUG level
aam mcp serve --log-file /tmp/aam-mcp.log --log-level DEBUG

Options:
| Flag | Default | Description |
|---|---|---|
| --transport | stdio | Transport protocol: stdio or http |
| --port | 8000 | HTTP port (only used with --transport http) |
| --allow-write | false | Enable mutating tools (install, publish, etc.) |
| --log-file | None | Redirect logs to a file (recommended for stdio) |
| --log-level | INFO | Log level: DEBUG, INFO, WARNING, ERROR |
Safety Model: By default, only read-only tools are exposed (17 tools, including source scanning, verify, diff, and skill recommendation). Write tools (install, uninstall, publish, config set, registry add, source add/remove/update) require the --allow-write flag. This prevents accidental modifications by AI agents.
Add to .cursor/mcp.json in your project root:
{
"mcpServers": {
"aam": {
"command": "aam",
"args": ["mcp", "serve", "--allow-write"],
"env": {}
}
}
}

Add to .vscode/settings.json:
{
"mcp.servers": {
"aam": {
"command": "aam",
"args": ["mcp", "serve", "--allow-write"]
}
}
}

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
"mcpServers": {
"aam": {
"command": "aam",
"args": ["mcp", "serve", "--allow-write"]
}
}
}

Read-only tools (always available, 17 tools):
| Tool | Description |
|---|---|
| aam_search | Search registries for packages by query |
| aam_list | List all installed packages |
| aam_info | Show detailed metadata for an installed package |
| aam_validate | Validate a package manifest and artifacts |
| aam_config_get | Get configuration value(s) |
| aam_registry_list | List configured registries |
| aam_doctor | Run environment diagnostics |
| aam_source_list | List all configured remote git sources |
| aam_source_scan | Scan a source for artifacts (with optional type filter) |
| aam_source_candidates | List unpackaged artifact candidates across sources |
| aam_source_diff | Preview upstream changes for a source (dry-run update) |
| aam_verify | Verify integrity of installed package files |
| aam_diff | Show unified diff of modified files in installed packages |
| aam_outdated | Check for outdated source-installed packages |
| aam_available | List all available artifacts from configured sources |
| aam_recommend_skills | Recommend skills based on repository analysis |
| aam_init_info | Get client initialization status and detected platform |
Write tools (require --allow-write, 12 tools):
| Tool | Description |
|---|---|
| aam_install | Install packages from registries or local sources |
| aam_uninstall | Remove an installed package |
| aam_publish | Publish a package to a registry |
| aam_create_package | Create a package from a project or remote source (from_source parameter) |
| aam_config_set | Set a configuration value |
| aam_registry_add | Add a new registry source |
| aam_init_package | Scaffold a brand-new package with directories and manifest |
| aam_source_add | Add a remote git repository as an artifact source |
| aam_source_remove | Remove a configured source (with optional cache purge) |
| aam_source_update | Fetch upstream changes for one or all sources |
| aam_upgrade | Upgrade outdated source-installed packages (supports dry_run and force) |
| aam_init | Initialize the AAM client for a specific AI platform (platform, skip_sources) |
Skill recommendation (aam_recommend_skills):
When you ask "what skills should I use for this repo?" or "find skills for my React + Python LLM app," the agent can call aam_recommend_skills(path=None, limit=15). The tool analyzes the project (package.json, pyproject.toml, structure) to detect frontend (React, Vue), backend (Python, FastAPI), LLM usage, and docs, then returns ranked skill recommendations from configured sources with rationale.
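The heuristics behind that analysis can be sketched in a few lines: look at the manifest files the paragraph mentions and map dependencies to traits. This is an illustrative approximation, not AAM's actual detector, and the specific dependency names checked are assumptions:

```python
import json
from pathlib import Path

def detect_stack(root: Path) -> list[str]:
    """Infer coarse repo traits from common manifest files (illustrative heuristics)."""
    traits = []
    pkg = root / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        if "react" in deps or "vue" in deps:
            traits.append("frontend")
    pyproject = root / "pyproject.toml"
    if pyproject.exists():
        traits.append("backend-python")
        text = pyproject.read_text()
        # Very rough LLM-usage signal: an LLM SDK named in the dependencies
        if "openai" in text or "anthropic" in text:
            traits.append("llm")
    return traits
```

Each detected trait would then be matched against skill metadata from configured sources to produce the ranked recommendations.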
Source operation error codes:
| Error Code | Condition |
|---|---|
| AAM_SOURCE_NOT_FOUND | Named source not in config |
| AAM_SOURCE_ALREADY_EXISTS | Duplicate source name on add |
| AAM_SOURCE_URL_INVALID | URL failed validation |
| AAM_GIT_CLONE_FAILED | Git clone failed after retries |
| AAM_PACKAGE_NOT_INSTALLED | Package not found for verify/diff |
Resources provide passive data access: agents can read them without invoking tools.
| Resource URI | Description |
|---|---|
| aam://config | Full merged AAM configuration |
| aam://packages/installed | List of installed packages |
| aam://packages/{name} | Detailed metadata for a specific package |
| aam://registries | List of configured registries |
| aam://manifest | Parsed aam.yaml from the current directory |
| aam://sources | List of all configured remote git sources |
| aam://sources/{id} | Source details with artifact list (use -- for / in names) |
| aam://sources/{id}/candidates | Unpackaged candidates from a source |
| aam://init_status | Client initialization status and detected platform |
Searching for packages:
"Search for code review skills in the AAM registry." → Agent calls aam_search(query="code review", package_type="skill")

Installing a package:

"Install the asvc-auditor package." → Agent calls aam_install(packages=["asvc-auditor"])

Discovering skills from a git source:

"Add the openai/skills repository and show me what's available." → Agent calls aam_source_add(source="openai/skills"), then aam_source_scan(source_name="openai/skills")

Checking package integrity:

"Has anyone modified the installed code-review package?" → Agent calls aam_verify(package_name="code-review"), then aam_diff(package_name="code-review") if changes found

Checking project health:

"Run diagnostics on my AAM setup." → Agent calls aam_doctor()
| Issue | Solution |
|---|---|
| Server not starting | Check aam mcp serve --help for options. Ensure fastmcp is installed. |
| Tools not showing in IDE | Verify the MCP config file path and command value. |
| Write tools not available | Add --allow-write to the server args in your IDE config. |
| No output in stdio mode | Logs go to stderr by default. Use --log-file to capture them. |
The aam doctor command runs comprehensive diagnostics to identify issues with your AAM setup.
aam doctor

| Check | Description |
|---|---|
| Python version | Verifies Python >= 3.11 |
| Configuration | Loads and validates ~/.aam/config.yaml and .aam/config.yaml |
| Registries | Checks each configured registry is accessible |
| Package integrity | Verifies installed package manifests are valid |
| Incomplete installs | Detects leftover staging directories from interrupted installs |
AAM Environment Diagnostics
✓ Python 3.12.1
✓ Configuration loaded (2 registries configured)
✓ Registry 'local' accessible at /home/user/my-registry
⚠ Registry 'team' path not found: /shared/packages
  Run 'aam registry init /shared/packages' to create it, or 'aam registry remove team' to remove it.
✓ Package 'asvc-auditor@1.0.0': manifest valid, 3 artifacts
✓ No incomplete installations detected

5/6 checks passed, 1 warning
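Each line of that report comes from an independent check that returns a pass/fail status plus a message. A sketch of one such check, the Python version gate from the table above (the return shape is illustrative, not AAM's internal API):

```python
import sys

def check_python(minimum: tuple[int, int] = (3, 11)) -> tuple[bool, str]:
    """One doctor-style check: is the interpreter new enough?"""
    ok = sys.version_info[:2] >= minimum
    mark = "✓" if ok else "⚠"
    return ok, f"{mark} Python {sys.version_info.major}.{sys.version_info.minor}"
```

A runner would collect the booleans to produce the "N/M checks passed" summary.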
| Issue | Fix |
|---|---|
| Registry path not found | aam registry init <path> or aam registry remove <name> |
| Package manifest error | aam install <package> --force to reinstall |
| Incomplete installation | Remove .aam/.tmp/ directory, then re-run the install |
| Configuration error | Check ~/.aam/config.yaml for syntax errors |
The aam convert command converts AI agent configurations between platforms,
making it easy to migrate or maintain configs across Cursor, Copilot, Claude, and Codex.
# Convert all Cursor configs to Copilot format
aam convert -s cursor -t copilot
# Convert only instructions from Copilot to Claude
aam convert -s copilot -t claude --type instruction
# Preview what would happen without writing files
aam convert -s cursor -t copilot --dry-run

| Artifact Type | Behavior |
|---|---|
| Skills | Direct copy: the universal SKILL.md format works everywhere |
| Instructions | Field mapping (e.g. Cursor globs → Copilot applyTo). Platform-only fields generate warnings |
| Agents | Metadata filtered to target platform's supported fields. Codex gets AGENTS.md sections |
| Prompts | Frontmatter stripped or added as needed. Codex gets AGENTS.md sections |
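The field mapping for instructions amounts to renaming known frontmatter keys and flagging the rest. A minimal sketch, assuming only the globs → applyTo rename shown in the table (any other field names here are hypothetical):

```python
def map_instruction_fields(frontmatter: dict) -> tuple[dict, list[str]]:
    """Map Cursor instruction frontmatter toward Copilot's, collecting warnings."""
    renames = {"globs": "applyTo", "description": "description"}
    mapped, warnings = {}, []
    for key, value in frontmatter.items():
        if key in renames:
            mapped[renames[key]] = value
        else:
            # Platform-only field: surface a warning instead of silently dropping it
            warnings.append(f"field '{key}' has no Copilot equivalent; dropped")
    return mapped, warnings
```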
Some metadata cannot be converted between platforms. The convert command warns you about each lost field and suggests workarounds:
$ aam convert -s cursor -t claude --verbose
INSTRUCTIONS:
  ✓ .cursor/rules/python-style.mdc → CLAUDE.md (appended)
  ⚠ Glob-scoped instruction converted to always-on. Original globs: **/*.py
    The target platform does not support file-scoped instructions.
    The instruction will apply globally. Consider adding file-path
    references in the instruction text to indicate intended scope.

By default, existing target files are skipped. Use --force to overwrite
(a .bak backup is created automatically):
aam convert -s codex -t cursor --force

Claude (CLAUDE.md) and Codex (AGENTS.md) use single files for instructions.
When multiple source files convert to the same target, they are appended with
section markers:
<!-- BEGIN AAM CONVERTED: python-style -->
## Python Standards (applies to: **/*.py)
Use type hints...
<!-- END AAM CONVERTED: python-style -->

Re-running the conversion updates existing sections in place.
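The in-place update works because the BEGIN/END comments make each section addressable. A sketch of that upsert logic, using the marker format shown above (illustrative, not AAM's implementation):

```python
import re

def upsert_section(doc: str, name: str, body: str) -> str:
    """Insert or replace a converted section, keyed by its BEGIN/END markers."""
    begin = f"<!-- BEGIN AAM CONVERTED: {name} -->"
    end = f"<!-- END AAM CONVERTED: {name} -->"
    section = f"{begin}\n{body}\n{end}"
    pattern = re.compile(re.escape(begin) + r".*?" + re.escape(end), re.DOTALL)
    if pattern.search(doc):
        # Section exists: replace its contents in place (lambda avoids
        # backslash-escape surprises in the replacement text)
        return pattern.sub(lambda _: section, doc)
    # First conversion: append the section to the end of the file
    return doc.rstrip("\n") + "\n\n" + section + "\n"
```

Running the function twice with the same name never duplicates the section, which is what makes re-running a conversion safe.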
Migrating from Cursor to Copilot:
aam convert -s cursor -t copilot --dry-run # Preview first
aam convert -s cursor -t copilot           # Run conversion

Setting up Claude Code alongside Cursor:
aam convert -s cursor -t claude

Converting Codex AGENTS.md to Cursor rules:
aam convert -s codex -t cursor --force

For more details, see:
- DESIGN.md - Architecture and concepts
- HTTP_REGISTRY_SPEC.md - Registry API specification