diff --git a/.cursor/llm/README.md b/.cursor/llm/README.md new file mode 100644 index 0000000..73cf3d2 --- /dev/null +++ b/.cursor/llm/README.md @@ -0,0 +1,108 @@ +# LLM Documentation Resources + +This directory contains curated `llm.txt` files for each major dependency, library, and service used in HyperAgent. These files follow the [llms.txt standard](https://llmstxt.org/) to help AI systems understand and work with external documentation. + +## Purpose + +Each `llm.txt` file provides: +- Overview of the technology and its role in HyperAgent +- Key use cases and implementation details +- Links to official documentation +- Code examples relevant to HyperAgent +- Best practices for integration + +## Available Resources + +### Core Frameworks +- **thirdweb-llm.txt** - Thirdweb SDK for wallets, ERC-4337, and deployments +- **langgraph-llm.txt** - LangGraph for agent orchestration +- **fastapi-llm.txt** - FastAPI for backend API +- **nextjs-react-llm.txt** - Next.js and React for frontend + +### Infrastructure +- **supabase-llm.txt** - Supabase for database and multi-tenant workspaces +- **redis-llm.txt** - Redis for caching and message queuing +- **pinecone-llm.txt** - Pinecone for vector database and RAG +- **acontext-llm.txt** - Acontext for agent long-term memory + +### Blockchain & Smart Contracts +- **hardhat-foundry-llm.txt** - Hardhat and Foundry for Solidity development +- **openzeppelin-llm.txt** - OpenZeppelin Contracts library +- **slither-mythril-llm.txt** - Security auditing tools +- **erc-4337-llm.txt** - Account Abstraction standard +- **erc-8004-llm.txt** - Trustless Agents standard + +### Storage & Data +- **ipfs-pinata-llm.txt** - IPFS and Pinata for decentralized storage +- **eigenda-llm.txt** - EigenDA for verifiable data availability + +### Observability +- **opentelemetry-llm.txt** - OpenTelemetry for distributed tracing +- **mlflow-llm.txt** - MLflow for ML experiment tracking +- **tenderly-llm.txt** - Tenderly for contract simulation and monitoring +- **dune-analytics-llm.txt** - Dune Analytics for on-chain analytics + +### LLM Providers +- **anthropic-openai-llm.txt** - Anthropic (Claude) and OpenAI (GPT) for code generation +- **gemini-llm.txt** - Google Gemini for fast, cost-effective generation + +### Payments +- **x402-llm.txt** - x402 payment protocol for SKALE + +### DevOps & Deployment +- **docker-llm.txt** - Docker and Docker Compose for containerization + +## Usage + +When working on HyperAgent features that involve external dependencies: + +1. **Check the relevant `llm.txt` file first** - It contains HyperAgent-specific context +2. **Reference official docs** - Links provided for detailed information +3. **Follow best practices** - Each file includes integration best practices +4. 
**Use code examples** - Examples are tailored to HyperAgent's use cases + +## File Structure + +Each `llm.txt` file follows this structure: + +```markdown +# [Technology] Documentation for HyperAgent + +## Overview +Brief description and role in HyperAgent + +## Key Use Cases in HyperAgent +Specific use cases in the project + +## Documentation Links +Links to official documentation + +## Implementation in HyperAgent +Where and how it's used in the codebase + +## Code Examples +Relevant code examples + +## Best Practices +Integration best practices + +## Related Resources +Additional resources +``` + +## Maintenance + +- Update files when dependencies change +- Add new dependencies as they're integrated +- Keep documentation links current +- Update code examples when implementation changes + +## Contributing + +When adding a new dependency: +1. Create a new `{dependency}-llm.txt` file +2. Follow the standard structure +3. Include HyperAgent-specific context +4. Add links to official documentation +5. Update this README + diff --git a/.cursor/llm/acontext-llm.txt b/.cursor/llm/acontext-llm.txt new file mode 100644 index 0000000..5934e5f --- /dev/null +++ b/.cursor/llm/acontext-llm.txt @@ -0,0 +1,76 @@ +# Acontext Documentation for HyperAgent + +## Overview + +Acontext is a long-term memory system for AI agents, providing persistent context management across agent sessions. HyperAgent uses Acontext for maintaining agent memory, storing conversation history, and enabling agents to learn from past interactions. + +## Key Use Cases in HyperAgent + +- **Agent Memory**: Long-term memory for agents +- **Context Management**: Maintain context across sessions +- **Learning**: Agents learn from past interactions +- **Conversation History**: Store user-agent interactions +- **Knowledge Retention**: Persistent knowledge storage + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.acontext.io/ +- **API Reference**: https://docs.acontext.io/api-reference/introduction +- **Integration Guide**: https://docs.acontext.io/integrations/intro + +### Key Concepts +- **Contexts**: Organized memory units +- **Memories**: Individual memory entries +- **Retrieval**: Semantic search over memories +- **Persistence**: Long-term storage +- **Privacy**: Workspace-scoped memories + +## Implementation in HyperAgent + +### Memory Service +- Location: `hyperagent/llm/acontext_client.py` +- Stores agent interactions +- Retrieves relevant context for agents + +### Use Cases +- SpecAgent memory of user requirements +- CodeGenAgent memory of patterns +- AuditAgent memory of vulnerabilities + +## Code Examples + +### Storing Memory +```python +from acontext import AcontextClient + +client = AcontextClient(api_key="your-api-key") + +memory_id = client.create_memory( + content="User prefers gas-optimized contracts", + metadata={"workspace_id": "ws-123", "agent": "CodeGenAgent"} +) +``` + +### Retrieving Context +```python +contexts = client.search_memories( + query="gas optimization preferences", + workspace_id="ws-123", + limit=5 +) +``` + +## Best Practices + +1. **Context Organization**: Organize by workspace and agent +2. **Metadata**: Add rich metadata for better retrieval +3. **Privacy**: Ensure workspace isolation +4. **Retention**: Define memory retention policies +5. 
**Retrieval**: Use semantic search for relevant context + +## Related Resources + +- Acontext GitHub: https://github.com/memodb-io/acontext +- Acontext Documentation: https://docs.acontext.io/ + diff --git a/.cursor/llm/anthropic-openai-llm.txt b/.cursor/llm/anthropic-openai-llm.txt new file mode 100644 index 0000000..12740ed --- /dev/null +++ b/.cursor/llm/anthropic-openai-llm.txt @@ -0,0 +1,91 @@ +# Anthropic & OpenAI LLM Documentation for HyperAgent + +## Overview + +Anthropic (Claude) and OpenAI (GPT) are LLM providers used by HyperAgent for code generation, reasoning, and agent operations. HyperAgent uses a multi-model routing strategy, using different models for different tasks based on complexity, cost, and quality requirements. + +## Key Use Cases in HyperAgent + +- **Code Generation**: Generate Solidity contracts from natural language +- **Complex Reasoning**: Architecture design and planning +- **Quick Edits**: Fast iterations and code modifications +- **Model Routing**: Route tasks to appropriate models +- **Cost Optimization**: Balance quality and cost + +## Documentation Links + +### Anthropic (Claude) +- **Main Docs**: https://docs.anthropic.com/ +- **API Reference**: https://docs.anthropic.com/claude/reference +- **Python SDK**: https://github.com/anthropics/anthropic-sdk-python +- **Best Practices**: https://docs.anthropic.com/claude/docs + +### OpenAI (GPT) +- **Main Docs**: https://platform.openai.com/docs +- **API Reference**: https://platform.openai.com/docs/api-reference +- **Python SDK**: https://github.com/openai/openai-python +- **Best Practices**: https://platform.openai.com/docs/guides + +### Key Concepts +- **Prompt Engineering**: Effective prompt design +- **Token Management**: Token limits and costs +- **Streaming**: Real-time response streaming +- **Function Calling**: Tool use and function calling +- **Rate Limiting**: API rate limits and handling + +## Implementation in HyperAgent + +### Multi-Model Router +- Location: `hyperagent/core/routing/multi_model_router.py` +- Routes tasks to appropriate models +- Fallback logic for failures +- Cost and latency optimization + +### Model Configuration +- Location: `config/llm.yaml` +- Model selection per task type +- Timeout and retry configuration +- Cost tracking + +## Code Examples + +### Anthropic Client +```python +from anthropic import Anthropic + +client = Anthropic(api_key="your-api-key") + +response = client.messages.create( + model="claude-3-5-sonnet-20241022", + max_tokens=4096, + messages=[{"role": "user", "content": prompt}] +) +``` + +### OpenAI Client +```python +from openai import OpenAI + +client = OpenAI(api_key="your-api-key") + +response = client.chat.completions.create( + model="gpt-4-turbo-preview", + messages=[{"role": "user", "content": prompt}], + temperature=0.7 +) +``` + +## Best Practices + +1. **Model Selection**: Use appropriate model for task complexity +2. **Prompt Design**: Write clear, specific prompts +3. **Error Handling**: Implement retry logic for API failures +4. **Cost Management**: Monitor token usage and costs +5. 
**Rate Limiting**: Respect API rate limits + +## Related Resources + +- Anthropic GitHub: https://github.com/anthropics +- OpenAI GitHub: https://github.com/openai +- Prompt Engineering Guide: https://www.promptingguide.ai/ + diff --git a/.cursor/llm/docker-llm.txt b/.cursor/llm/docker-llm.txt new file mode 100644 index 0000000..1ac12b6 --- /dev/null +++ b/.cursor/llm/docker-llm.txt @@ -0,0 +1,82 @@ +# Docker Documentation for HyperAgent + +## Overview + +Docker is a containerization platform used by HyperAgent for packaging services, ensuring consistent environments, and simplifying deployment. HyperAgent uses Docker and Docker Compose for local development and production deployments. + +## Key Use Cases in HyperAgent + +- **Service Containerization**: Package each microservice +- **Development Environment**: Consistent local setup +- **CI/CD**: Containerized builds and tests +- **Production Deployment**: Deploy containers to cloud +- **Dependency Isolation**: Isolate service dependencies + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.docker.com/ +- **Docker Compose**: https://docs.docker.com/compose/ +- **Best Practices**: https://docs.docker.com/develop/dev-best-practices/ +- **Multi-stage Builds**: https://docs.docker.com/build/building/multi-stage/ + +### Key Concepts +- **Images**: Immutable templates for containers +- **Containers**: Running instances of images +- **Dockerfile**: Instructions for building images +- **Docker Compose**: Multi-container orchestration +- **Volumes**: Persistent data storage + +## Implementation in HyperAgent + +### Dockerfiles +- `Dockerfile` - Main backend service +- `Dockerfile.mlflow` - MLflow service +- `frontend/Dockerfile.dev` - Frontend development + +### Docker Compose +- `docker-compose.yml` - Local development setup +- Services: API, frontend, database, Redis, MLflow + +## Code Examples + +### Dockerfile +```dockerfile +FROM python:3.11-slim + +WORKDIR /app +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +COPY . . +CMD ["uvicorn", "hyperagent.api.main:app", "--host", "0.0.0.0"] +``` + +### Docker Compose +```yaml +services: + api: + build: . + ports: + - "8000:8000" + environment: + - DATABASE_URL=postgresql://... + depends_on: + - db + - redis +``` + +## Best Practices + +1. **Multi-stage Builds**: Reduce image size +2. **Layer Caching**: Optimize build times +3. **Security**: Use non-root users +4. **Health Checks**: Add health check endpoints +5. **Resource Limits**: Set memory and CPU limits + +## Related Resources + +- Docker GitHub: https://github.com/docker +- Docker Hub: https://hub.docker.com/ +- Best Practices: https://docs.docker.com/develop/dev-best-practices/ + diff --git a/.cursor/llm/dune-analytics-llm.txt b/.cursor/llm/dune-analytics-llm.txt new file mode 100644 index 0000000..01df5ce --- /dev/null +++ b/.cursor/llm/dune-analytics-llm.txt @@ -0,0 +1,80 @@ +# Dune Analytics Documentation for HyperAgent + +## Overview + +Dune Analytics is a Web3 analytics platform for exploring, querying, and visualizing on-chain data from various blockchains. HyperAgent uses Dune Analytics for tracking deployed contract metrics, TVL, transaction volumes, and other on-chain analytics. 
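For orientation, the sketch below shows the underlying REST flow that the Python client wraps: execute a saved query, poll for completion, then read the result rows. The query ID and parameters are placeholders, and the endpoint paths and field names follow the Dune API v1 layout, so verify them against the current API reference.

```python
import os
import time
import requests

DUNE_API = "https://api.dune.com/api/v1"
HEADERS = {"X-Dune-API-Key": os.environ["DUNE_API_KEY"]}

def run_dune_query(query_id: int, params: dict) -> list[dict]:
    # Start an execution of a saved query (query_id is a placeholder).
    execution = requests.post(
        f"{DUNE_API}/query/{query_id}/execute",
        headers=HEADERS,
        json={"query_parameters": params},
    ).json()

    # Poll until the execution completes, then return the rows.
    while True:
        result = requests.get(
            f"{DUNE_API}/execution/{execution['execution_id']}/results",
            headers=HEADERS,
        ).json()
        if result.get("state") == "QUERY_STATE_COMPLETED":
            return result["result"]["rows"]
        time.sleep(5)

rows = run_dune_query(123456, {"contract_address": "0x..."})
```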
+ +## Key Use Cases in HyperAgent + +- **Contract Analytics**: Track deployed contract metrics +- **TVL Tracking**: Monitor total value locked +- **Transaction Analysis**: Analyze contract interactions +- **Dashboard Creation**: Build analytics dashboards +- **Performance Monitoring**: Monitor contract performance + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.dune.com/ +- **Query Editor**: https://docs.dune.com/queries +- **API Reference**: https://docs.dune.com/api-reference +- **SQL Guide**: https://docs.dune.com/sql + +### Key Concepts +- **Queries**: SQL queries over blockchain data +- **Dashboards**: Visualizations of query results +- **API**: Programmatic access to queries +- **Spells**: Community-maintained data models + +## Implementation in HyperAgent + +### Analytics Integration +- Location: `hyperagent/monitoring/dune_integration.py` +- Tracks deployed contracts +- Monitors TVL and metrics + +### Metrics Tracked +- Contract deployments per chain +- Total value locked (TVL) +- Transaction volumes +- User activity +- Gas costs + +## Code Examples + +### Query Execution +```python +from dune_client import DuneClient + +client = DuneClient(api_key="your-api-key") + +result = client.execute_query( + query_id=123456, + parameters={"contract_address": "0x..."} +) + +data = result.get_rows() +``` + +### Creating Dashboard +```python +dashboard = client.create_dashboard( + name="HyperAgent Metrics", + queries=[query_id_1, query_id_2] +) +``` + +## Best Practices + +1. **Query Optimization**: Optimize SQL queries for performance +2. **Caching**: Use query caching for frequently accessed data +3. **Error Handling**: Handle API rate limits +4. **Data Freshness**: Consider query execution time +5. **Visualization**: Create clear, actionable dashboards + +## Related Resources + +- Dune Analytics: https://dune.com/ +- Dune API: https://docs.dune.com/api-reference +- SQL Reference: https://docs.dune.com/sql + diff --git a/.cursor/llm/eigenda-llm.txt b/.cursor/llm/eigenda-llm.txt new file mode 100644 index 0000000..33c65cb --- /dev/null +++ b/.cursor/llm/eigenda-llm.txt @@ -0,0 +1,74 @@ +# EigenDA Documentation for HyperAgent + +## Overview + +EigenDA (EigenLayer Data Availability) is a data availability layer that provides verifiable data storage for blockchain applications. HyperAgent uses EigenDA for storing verifiable agent traces, audit reports, and contract metadata, enabling cryptographic proof of agent actions. 
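The client API for dispersing blobs depends on the deployment, but the data-format side is simple: serialize the trace deterministically and keep a hash so the blob can later be matched against its on-chain or audit-report reference. A minimal sketch follows; the trace fields are illustrative, not the actual HyperAgent schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative trace payload; the real HyperAgent trace schema may differ.
trace = {
    "workspace_id": "ws-123",
    "agent": "AuditAgent",
    "action": "audit_report_generated",
    "artifact_cid": "Qm...",  # IPFS CID of the full report (placeholder)
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Deterministic serialization, so the same trace always produces the same bytes.
blob = json.dumps(trace, sort_keys=True, separators=(",", ":")).encode()

# Keep the hash alongside the EigenDA blob ID for later verification.
blob_hash = hashlib.sha256(blob).hexdigest()
```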
+ +## Key Use Cases in HyperAgent + +- **Verifiable Traces**: Store agent execution traces with proofs +- **Audit Provenance**: Anchor audit reports to EigenDA +- **Contract Metadata**: Store deployment metadata verifiably +- **Agent Accountability**: Enable verification of agent decisions +- **Protocol Labs Integration**: Part of Verifiable Factory preset + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.eigenlayer.xyz/eigenda/ +- **EigenLayer**: https://docs.eigenlayer.xyz/ +- **Data Availability**: https://docs.eigenlayer.xyz/eigenda/overview +- **Integration Guide**: https://docs.eigenlayer.xyz/eigenda/integration + +### Key Concepts +- **Data Availability**: Ensuring data is available for verification +- **Blob Storage**: Efficient storage for large data +- **Verification**: Cryptographic proofs of data availability +- **EigenLayer**: Restaking and security model + +## Implementation in HyperAgent + +### Verifiable Factory Preset +- Location: Protocol Labs preset implementation +- Stores agent traces and audit reports +- Enables verification of agent decisions + +### Integration Points +- Agent trace storage +- Audit report anchoring +- Contract metadata storage +- Proof generation for agent actions + +## Code Examples + +### Storing Data +```python +from eigenda_client import EigenDAClient + +client = EigenDAClient() +blob_id = await client.store_blob( + data=agent_trace_json, + namespace="hyperagent-traces" +) +``` + +### Verification +```python +proof = await client.get_availability_proof(blob_id) +is_available = await client.verify_proof(proof) +``` + +## Best Practices + +1. **Data Format**: Use structured formats (JSON) for traces +2. **Namespace Organization**: Organize by workspace/project +3. **Verification**: Always verify proofs before trusting data +4. **Cost Management**: Monitor storage costs +5. **Retention**: Define data retention policies + +## Related Resources + +- EigenLayer GitHub: https://github.com/Layr-Labs/eigenlayer-contracts +- EigenDA Specification: https://docs.eigenlayer.xyz/eigenda/ +- Protocol Labs: https://protocol.ai/ + diff --git a/.cursor/llm/erc-4337-llm.txt b/.cursor/llm/erc-4337-llm.txt new file mode 100644 index 0000000..bd7494a --- /dev/null +++ b/.cursor/llm/erc-4337-llm.txt @@ -0,0 +1,97 @@ +# ERC-4337 (Account Abstraction) Documentation for HyperAgent + +## Overview + +ERC-4337 is an Ethereum standard for account abstraction that enables smart contract wallets without requiring changes to the Ethereum protocol. HyperAgent uses ERC-4337 for gasless transactions, session keys for agents, and improved user experience. 
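To make the flow concrete, the sketch below submits a UserOperation to a bundler over JSON-RPC using the `eth_sendUserOperation` method defined by the standard. The bundler URL, addresses, and hex values are placeholders; in HyperAgent this encoding is normally handled by the wallet SDK rather than hand-built.

```python
import requests

BUNDLER_RPC = "https://bundler.example.com"  # placeholder bundler endpoint
ENTRYPOINT = "0x..."                         # EntryPoint contract address (placeholder)

# Field names follow the ERC-4337 UserOperation struct; values are placeholders.
user_op = {
    "sender": "0x...",
    "nonce": "0x0",
    "initCode": "0x",
    "callData": "0x...",
    "callGasLimit": "0x186a0",
    "verificationGasLimit": "0x186a0",
    "preVerificationGas": "0xc350",
    "maxFeePerGas": "0x3b9aca00",
    "maxPriorityFeePerGas": "0x3b9aca00",
    "paymasterAndData": "0x",
    "signature": "0x...",
}

response = requests.post(BUNDLER_RPC, json={
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_sendUserOperation",
    "params": [user_op, ENTRYPOINT],
}).json()

user_op_hash = response.get("result")
```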
+ +## Key Use Cases in HyperAgent + +- **Gasless Transactions**: Users don't need ETH for gas +- **Session Keys**: Time-limited keys for agent operations +- **Multi-signature**: Enterprise multi-sig support +- **Social Recovery**: Account recovery mechanisms +- **Batch Transactions**: Execute multiple operations in one transaction + +## Documentation Links + +### Official Specification +- **EIP-4337**: https://eips.ethereum.org/EIPS/eip-4337 +- **EntryPoint Contract**: https://github.com/eth-infinitism/account-abstraction +- **Bundler Specification**: https://eips.ethereum.org/EIPS/eip-4337#bundler-specification + +### Implementation Resources +- **Thirdweb AA**: https://portal.thirdweb.com/account-abstraction +- **Alchemy AA**: https://www.alchemy.com/account-abstraction +- **Stackup**: https://docs.stackup.sh/ + +### Key Concepts +- **Smart Contract Wallets**: Wallets implemented as contracts +- **EntryPoint**: Standard contract for executing user operations +- **Bundlers**: Network of nodes that bundle and submit operations +- **Paymasters**: Contracts that sponsor gas fees +- **User Operations**: Abstraction of transactions + +## Implementation in HyperAgent + +### Account Abstraction +- Location: `hyperagent/blockchain/erc4337.py` +- HyperAccount contract implementation +- Session key management + +### Deployment Service +- ERC-4337 deployment support +- Gasless deployment options +- Paymaster integration + +## Code Examples + +### User Operation +```solidity +struct UserOperation { + address sender; + uint256 nonce; + bytes initCode; + bytes callData; + uint256 callGasLimit; + uint256 verificationGasLimit; + uint256 preVerificationGas; + uint256 maxFeePerGas; + uint256 maxPriorityFeePerGas; + bytes paymasterAndData; + bytes signature; +} +``` + +### EntryPoint Integration +```solidity +import "@account-abstraction/contracts/interfaces/IEntryPoint.sol"; + +contract HyperAccount { + IEntryPoint public immutable entryPoint; + + function validateUserOp( + UserOperation calldata userOp, + bytes32 userOpHash, + uint256 missingAccountFunds + ) external returns (uint256) { + require(msg.sender == address(entryPoint), "UNAUTHORIZED"); + // Validation logic + return 0; + } +} +``` + +## Best Practices + +1. **Security**: Validate all user operations carefully +2. **Gas Limits**: Set appropriate gas limits +3. **Nonce Management**: Handle nonces correctly +4. **Signature Verification**: Verify signatures properly +5. **Paymaster Integration**: Use paymasters for gasless UX + +## Related Resources + +- ERC-4337 GitHub: https://github.com/eth-infinitism/account-abstraction +- EntryPoint Contract: https://github.com/eth-infinitism/account-abstraction/blob/develop/contracts/core/EntryPoint.sol +- Account Abstraction Guide: https://ethereum.org/en/developers/docs/account-abstraction/ + diff --git a/.cursor/llm/erc-8004-llm.txt b/.cursor/llm/erc-8004-llm.txt new file mode 100644 index 0000000..e4df999 --- /dev/null +++ b/.cursor/llm/erc-8004-llm.txt @@ -0,0 +1,88 @@ +# ERC-8004 (Trustless Agents) Documentation for HyperAgent + +## Overview + +ERC-8004 is a proposed standard for trustless agent identity and reputation on Ethereum. HyperAgent uses ERC-8004 for agent identity, reputation tracking, and verifiable agent actions, enabling AI agents to operate on-chain with accountability. 
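Because the standard is still a draft, interfaces may change, but the general pattern is stable: hash the agent's action payload, sign it with the agent's key, and submit the signature as an attestation. A minimal off-chain sketch using `eth-account` follows; the key, agent ID, and payload fields are placeholders.

```python
import json
from eth_account import Account
from eth_account.messages import encode_defunct
from eth_utils import keccak

agent_key = "0x..."  # agent signing key (placeholder)

action = {
    "agent_id": "codegen-agent-1",
    "action": "contract_generated",
    "artifact_cid": "Qm...",
}

# Hash a deterministic serialization of the action payload.
action_hash = keccak(text=json.dumps(action, sort_keys=True))

# Sign the hash; the signature can populate an Attestation struct like the one below.
signed = Account.from_key(agent_key).sign_message(encode_defunct(action_hash))
attestation = {
    "actionHash": action_hash.hex(),
    "signature": signed.signature.hex(),
}
```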
+ +## Key Use Cases in HyperAgent + +- **Agent Identity**: Unique on-chain identity for agents +- **Reputation System**: Track agent performance and trust +- **Verifiable Actions**: Prove agent decisions on-chain +- **Agent Registry**: Registry of verified agents +- **Accountability**: Cryptographic proof of agent actions + +## Documentation Links + +### Specification +- **ERC-8004 Draft**: https://eips.ethereum.org/EIPS/eip-8004 +- **Agent Registry**: On-chain registry of agents +- **Reputation System**: Trust and reputation mechanisms + +### Key Concepts +- **Agent Identity**: On-chain identity for AI agents +- **Reputation**: Trust score based on past actions +- **Attestations**: Cryptographic proofs of agent actions +- **Registry**: On-chain registry of agents +- **Verification**: Verify agent actions and decisions + +## Implementation in HyperAgent + +### Agent Registry +- Location: `hyperagent/architecture/a2a.py` +- Agent identity management +- Reputation tracking + +### Verifiable Actions +- Store agent decisions on-chain +- Generate attestations for actions +- Enable verification of agent behavior + +## Code Examples + +### Agent Identity +```solidity +contract AgentRegistry { + struct Agent { + address agentAddress; + bytes32 agentId; + uint256 reputation; + bool verified; + } + + mapping(bytes32 => Agent) public agents; + + function registerAgent(bytes32 agentId) external { + agents[agentId] = Agent({ + agentAddress: msg.sender, + agentId: agentId, + reputation: 0, + verified: false + }); + } +} +``` + +### Attestation +```solidity +struct Attestation { + bytes32 agentId; + bytes32 actionHash; + uint256 timestamp; + bytes signature; +} +``` + +## Best Practices + +1. **Identity Management**: Secure agent identity creation +2. **Reputation**: Fair reputation scoring +3. **Attestations**: Generate verifiable attestations +4. **Registry**: Maintain accurate agent registry +5. **Verification**: Enable easy verification of actions + +## Related Resources + +- ERC-8004 Discussion: https://ethereum-magicians.org/t/erc-8004-trustless-agents/XXXXX +- Agent Standards: Research on agent identity standards + diff --git a/.cursor/llm/erc1066-x402-llm.txt b/.cursor/llm/erc1066-x402-llm.txt new file mode 100644 index 0000000..7ef8d96 --- /dev/null +++ b/.cursor/llm/erc1066-x402-llm.txt @@ -0,0 +1,68 @@ +# ERC-1066-x402 Gateway SDK Resources + +## Overview +ERC-1066-x402 is a payment gateway standard for agentic commerce. HyperKit provides SDKs for both TypeScript and Python. 
+ +## Official Packages + +### TypeScript/JavaScript SDK +- **Package**: `hyperkit-erc1066` +- **npm**: https://www.npmjs.com/package/hyperkit-erc1066 +- **Libraries.io**: https://libraries.io/npm/hyperkit-erc1066 +- **Latest Version**: 0.1.0 (Dec 20, 2025) +- **Install**: `npm install hyperkit-erc1066@0.1.0` + +### Python SDK +- **Package**: `hyperkitlabs-erc1066-x402` +- **PyPI**: https://pypi.org/project/hyperkitlabs-erc1066-x402/ +- **Latest Version**: 0.2.0 (Dec 18, 2025) +- **Install**: `pip install hyperkitlabs-erc1066-x402==0.2.0` +- **Requires**: Python >=3.8 + +## Integration Points + +### SKALE Agentic Commerce Preset +- ERC-1066-x402 is required for SKALE commerce preset +- Enables payment processing for agentic commerce workflows +- Integrates with SKALE chain adapters + +### SDK/CLI Integration +- TypeScript SDK extends HyperAgent SDK with x402Client +- Python SDK available for backend services +- Both SDKs support SKALE chain integration + +## Usage Examples + +### TypeScript +```typescript +import { X402Client } from 'hyperkit-erc1066'; + +const client = new X402Client({ + gatewayUrl: process.env.X402_GATEWAY_URL, + apiKey: process.env.X402_API_KEY, + chainId: 0x1a4, // SKALE chain ID +}); +``` + +### Python +```python +from hyperkitlabs_erc1066_x402 import X402Client + +client = X402Client( + gateway_url=os.getenv("X402_GATEWAY_URL"), + api_key=os.getenv("X402_API_KEY"), + chain_id=420 # SKALE chain ID +) +``` + +## Environment Variables +- `X402_GATEWAY_URL` - Gateway endpoint URL +- `X402_API_KEY` - API authentication key +- `SKALE_TESTNET_RPC_URL` - SKALE testnet RPC endpoint +- `SKALE_MAINNET_RPC_URL` - SKALE mainnet RPC endpoint + +## Related Resources +- See `.cursor/llm/x402-llm.txt` for x402 protocol details +- See `.cursor/llm/skale-llm.txt` for SKALE chain integration +- SKALE endpoints must be live on testnet/mainnet before integration + diff --git a/.cursor/llm/fastapi-llm.txt b/.cursor/llm/fastapi-llm.txt new file mode 100644 index 0000000..ecff568 --- /dev/null +++ b/.cursor/llm/fastapi-llm.txt @@ -0,0 +1,105 @@ +# FastAPI Documentation for HyperAgent + +## Overview + +FastAPI is a modern, fast web framework for building APIs with Python. HyperAgent uses FastAPI for the backend API gateway, providing REST endpoints for agent orchestration, project management, and deployment operations. 
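WebSocket support is one of the use cases listed below, and the REST examples later in this file do not cover it, so here is a minimal sketch of a progress endpoint. The `get_progress` helper is a stand-in for reading real workflow state (in HyperAgent that would come from Redis or the database).

```python
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def get_progress(workflow_id: str) -> dict:
    # Placeholder: the real implementation reads workflow state from Redis/DB.
    return {"workflow_id": workflow_id, "status": "running", "step": "audit"}

@app.websocket("/ws/workflows/{workflow_id}")
async def workflow_updates(websocket: WebSocket, workflow_id: str):
    await websocket.accept()
    try:
        while True:
            await websocket.send_json(await get_progress(workflow_id))
            await asyncio.sleep(2)  # simple polling interval for this sketch
    except WebSocketDisconnect:
        pass
```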
+ +## Key Use Cases in HyperAgent + +- **API Gateway**: Main entry point for all client requests +- **Multi-tenant Routing**: Route requests to appropriate workspaces +- **Authentication**: JWT-based auth with Auth0 integration +- **Rate Limiting**: Per-workspace rate limiting +- **WebSocket Support**: Real-time updates for workflow progress +- **OpenAPI Documentation**: Auto-generated API docs + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://fastapi.tiangolo.com/ +- **Tutorial**: https://fastapi.tiangolo.com/tutorial/ +- **Advanced Usage**: https://fastapi.tiangolo.com/advanced/ +- **Deployment**: https://fastapi.tiangolo.com/deployment/ + +### Key Concepts +- **Dependency Injection**: For auth, database sessions, rate limiting +- **Pydantic Models**: Request/response validation +- **Background Tasks**: For async operations +- **WebSockets**: Real-time communication +- **Middleware**: For CORS, logging, error handling + +## Implementation in HyperAgent + +### API Structure +- Main app: `hyperagent/api/main.py` +- Routes organized by domain: `hyperagent/api/routes/` +- Middleware: `hyperagent/api/middleware/` + +### Key Endpoints +- `/api/v1/workflows` - Create and manage workflows +- `/api/v1/contracts` - Contract generation and management +- `/api/v1/deployments` - Deployment operations +- `/api/v1/templates` - Contract templates +- `/api/v1/x402/` - Payment and billing endpoints + +### Authentication +- JWT token validation via Auth0 +- Workspace-scoped access control +- Session management for agent operations + +## Code Examples + +### Basic Endpoint +```python +from fastapi import APIRouter, Depends +from pydantic import BaseModel + +router = APIRouter(prefix="/api/v1/workflows") + +class WorkflowRequest(BaseModel): + prompt: str + chains: list[str] + +@router.post("/") +async def create_workflow( + request: WorkflowRequest, + user: User = Depends(get_current_user) +): + workflow = await orchestrator.create_workflow( + prompt=request.prompt, + chains=request.chains, + workspace_id=user.workspace_id + ) + return workflow +``` + +### Dependency Injection +```python +from fastapi import Depends +from hyperagent.api.middleware.auth import get_current_user +from hyperagent.db.session import get_db + +@router.get("/") +async def list_workflows( + db: Session = Depends(get_db), + user: User = Depends(get_current_user) +): + return await db.query(Workflow).filter( + Workflow.workspace_id == user.workspace_id + ).all() +``` + +## Best Practices + +1. **Type Safety**: Use Pydantic models for all request/response +2. **Error Handling**: Use HTTPException with appropriate status codes +3. **Async Operations**: Use async/await for I/O-bound operations +4. **Dependency Injection**: Leverage FastAPI's DI for reusable logic +5. **Documentation**: Use docstrings and response models for OpenAPI + +## Related Resources + +- FastAPI GitHub: https://github.com/tiangolo/fastapi +- Pydantic Documentation: https://docs.pydantic.dev/ +- Uvicorn (ASGI server): https://www.uvicorn.org/ + diff --git a/.cursor/llm/gemini-llm.txt b/.cursor/llm/gemini-llm.txt new file mode 100644 index 0000000..f5bb092 --- /dev/null +++ b/.cursor/llm/gemini-llm.txt @@ -0,0 +1,84 @@ +# Google Gemini Documentation for HyperAgent + +## Overview + +Google Gemini is an LLM used by HyperAgent for fast, cost-effective code generation and quick edits. HyperAgent uses Gemini for tasks that don't require the highest reasoning capability but benefit from speed and lower cost. 
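The value of Gemini here comes from routing: send cheap, well-bounded tasks to it and reserve larger models for complex reasoning. A toy heuristic is sketched below; the real logic lives in `hyperagent/core/routing/multi_model_router.py`, and the model names are illustrative.

```python
def pick_model(task_type: str, estimated_tokens: int) -> str:
    """Illustrative routing heuristic: cheap model for small, well-defined tasks."""
    simple_tasks = {"quick_edit", "rename", "format", "comment"}
    if task_type in simple_tasks and estimated_tokens < 2_000:
        return "gemini-pro"                   # fast and low-cost
    return "claude-3-5-sonnet-20241022"       # higher-quality reasoning

model = pick_model("quick_edit", estimated_tokens=800)
```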
+ +## Key Use Cases in HyperAgent + +- **Quick Edits**: Fast code modifications and iterations +- **Cost-Effective Generation**: Lower-cost alternative for simple tasks +- **Rapid Prototyping**: Quick contract generation for testing +- **Batch Operations**: Process multiple simple tasks efficiently + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://ai.google.dev/docs +- **API Reference**: https://ai.google.dev/api +- **Python SDK**: https://github.com/google/generative-ai-python +- **Quickstart**: https://ai.google.dev/tutorials/python_quickstart + +### Key Concepts +- **Model Variants**: Different model sizes for different needs +- **Prompt Design**: Effective prompt engineering +- **Safety Settings**: Content filtering and safety +- **Streaming**: Real-time response streaming +- **Multimodal**: Support for text and code + +## Implementation in HyperAgent + +### Gemini Client +- Location: `hyperagent/llm/provider.py` +- Used for quick edit operations +- Lower cost alternative to Claude/GPT + +### Routing Strategy +- Simple tasks → Gemini (fast, cheap) +- Complex tasks → Claude/GPT (high quality) +- Automatic routing based on task complexity + +## Code Examples + +### Basic Usage +```python +import google.generativeai as genai + +genai.configure(api_key="your-api-key") + +model = genai.GenerativeModel('gemini-pro') + +response = model.generate_content( + prompt, + generation_config={ + "temperature": 0.7, + "max_output_tokens": 2048, + } +) +``` + +### Streaming +```python +response = model.generate_content( + prompt, + stream=True +) + +for chunk in response: + print(chunk.text) +``` + +## Best Practices + +1. **Task Selection**: Use for simple, well-defined tasks +2. **Prompt Clarity**: Clear prompts work best +3. **Cost Optimization**: Leverage for high-volume operations +4. **Error Handling**: Handle API errors gracefully +5. **Rate Limiting**: Respect API limits + +## Related Resources + +- Gemini GitHub: https://github.com/google/generative-ai-python +- AI Studio: https://aistudio.google.com/ +- Gemini API: https://ai.google.dev/api + diff --git a/.cursor/llm/hardhat-foundry-llm.txt b/.cursor/llm/hardhat-foundry-llm.txt new file mode 100644 index 0000000..e7fb10a --- /dev/null +++ b/.cursor/llm/hardhat-foundry-llm.txt @@ -0,0 +1,88 @@ +# Hardhat & Foundry Documentation for HyperAgent + +## Overview + +Hardhat and Foundry are development frameworks for Ethereum smart contracts. HyperAgent uses both frameworks for Solidity compilation, testing, and deployment. Foundry is preferred for gas optimization and advanced testing, while Hardhat provides better TypeScript integration. 
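As a rough sketch of how a Python service can drive Foundry (the project path and contract name are placeholders; the real compilation service adds error handling and caching):

```python
import json
import subprocess
from pathlib import Path

def compile_with_foundry(project_dir: str, contract: str) -> dict:
    # Run forge and raise if compilation fails.
    subprocess.run(["forge", "build"], cwd=project_dir, check=True)

    # Foundry writes artifacts to out/<Source>.sol/<Contract>.json.
    artifact_path = Path(project_dir) / "out" / f"{contract}.sol" / f"{contract}.json"
    artifact = json.loads(artifact_path.read_text())
    return {
        "abi": artifact["abi"],
        "bytecode": artifact["bytecode"]["object"],
    }

result = compile_with_foundry("./contracts", "MyToken")
```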
+ +## Key Use Cases in HyperAgent + +- **Contract Compilation**: Compile Solidity contracts to bytecode +- **Testing**: Generate and run contract tests +- **Deployment**: Deploy contracts to EVM chains +- **Gas Estimation**: Calculate deployment and transaction costs +- **Static Analysis**: Integration with Slither for security checks + +## Documentation Links + +### Hardhat +- **Main Docs**: https://hardhat.org/docs +- **Deployment**: https://hardhat.org/hardhat-runner/docs/guides/deploying +- **Plugins**: https://hardhat.org/hardhat-runner/plugins +- **TypeScript**: https://hardhat.org/hardhat-runner/docs/guides/typescript + +### Foundry +- **Main Docs**: https://book.getfoundry.sh/ +- **Forge**: https://book.getfoundry.sh/forge/ +- **Cast**: https://book.getfoundry.sh/reference/cast/ +- **Anvil**: https://book.getfoundry.sh/reference/anvil/ + +### Key Concepts +- **Compilation**: Solidity to EVM bytecode +- **Testing**: Unit and integration tests +- **Deployment Scripts**: Automated contract deployment +- **Network Configuration**: Multi-chain support +- **Gas Optimization**: Foundry's gas reporting + +## Implementation in HyperAgent + +### Compilation Service +- Location: `hyperagent/core/services/compilation_service.py` +- Supports both Hardhat and Foundry +- Generates ABI and bytecode for deployment + +### Deployment Service +- Uses Foundry for gas-optimized deployments +- Hardhat for TypeScript integration +- Multi-chain network configuration + +### Testing Integration +- Generates test files alongside contracts +- Runs tests before deployment +- Integrates with CI/CD pipeline + +## Code Examples + +### Foundry Compilation +```bash +forge build +forge test +forge script Deploy.s.sol --rpc-url $RPC_URL --broadcast +``` + +### Hardhat Deployment +```typescript +import { ethers } from "hardhat"; + +async function deploy() { + const Contract = await ethers.getContractFactory("MyContract"); + const contract = await Contract.deploy(); + await contract.deployed(); + return contract.address; +} +``` + +## Best Practices + +1. **Gas Optimization**: Use Foundry for gas reporting +2. **Testing**: Write comprehensive tests before deployment +3. **Network Config**: Maintain separate configs per chain +4. **Version Pinning**: Pin Solidity compiler versions +5. **Security**: Run Slither/Mythril before deployment + +## Related Resources + +- Hardhat GitHub: https://github.com/NomicFoundation/hardhat +- Foundry GitHub: https://github.com/foundry-rs/foundry +- Solidity Documentation: https://docs.soliditylang.org/ +- OpenZeppelin Contracts: https://docs.openzeppelin.com/contracts + diff --git a/.cursor/llm/ipfs-pinata-llm.txt b/.cursor/llm/ipfs-pinata-llm.txt new file mode 100644 index 0000000..48fbc41 --- /dev/null +++ b/.cursor/llm/ipfs-pinata-llm.txt @@ -0,0 +1,84 @@ +# IPFS & Pinata Documentation for HyperAgent + +## Overview + +IPFS (InterPlanetary File System) is a distributed file system for storing and sharing content-addressed data. Pinata provides IPFS pinning services and APIs. HyperAgent uses IPFS/Pinata for storing contract artifacts, audit reports, and other verifiable content. 
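Beyond pinning files, metadata such as audit summaries and deployment records is usually pinned as JSON. A minimal sketch against Pinata's `pinJSONToIPFS` endpoint follows; the payload fields are illustrative, and the request shape should be checked against the current Pinata API reference.

```python
import os
import requests

PINATA_JWT = os.environ["PINATA_JWT"]

def pin_json(payload: dict, name: str) -> str:
    """Pin a JSON document and return its IPFS CID."""
    response = requests.post(
        "https://api.pinata.cloud/pinning/pinJSONToIPFS",
        headers={"Authorization": f"Bearer {PINATA_JWT}"},
        json={"pinataContent": payload, "pinataMetadata": {"name": name}},
    )
    response.raise_for_status()
    return response.json()["IpfsHash"]

cid = pin_json({"contract": "MyToken", "chain": "mantle"}, name="deployment-metadata")
```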
+ +## Key Use Cases in HyperAgent + +- **Artifact Storage**: Store generated contracts and audit reports +- **Verifiable Memory**: RAG system for agent knowledge +- **Decentralized Storage**: Persistent storage for critical data +- **Content Addressing**: Immutable, content-addressed storage +- **MCP Integration**: Pinata MCP server for agent access + +## Documentation Links + +### IPFS +- **Main Docs**: https://docs.ipfs.tech/ +- **Concepts**: https://docs.ipfs.tech/concepts/ +- **HTTP API**: https://docs.ipfs.tech/reference/http/api/ + +### Pinata +- **Main Docs**: https://docs.pinata.cloud/ +- **API Reference**: https://docs.pinata.cloud/api-pinning/pin-file-to-ipfs +- **MCP Server**: https://docs.pinata.cloud/mcp + +### Key Concepts +- **Content Addressing**: Files identified by hash +- **Pinning**: Keeping files available on IPFS +- **Gateways**: HTTP access to IPFS content +- **CID**: Content Identifier (hash of content) + +## Implementation in HyperAgent + +### RAG System +- Location: `hyperagent/rag/pinata_manager.py` +- Stores documentation and knowledge bases +- Retrieves content for agent context + +### Artifact Storage +- Contract source code +- Audit reports +- Deployment metadata +- Agent execution traces + +## Code Examples + +### Pinata Upload +```python +from pinata import PinataSDK + +pinata = PinataSDK(api_key, secret_key) + +result = pinata.pin_file_to_ipfs( + file_path="contract.sol", + pinata_metadata={"name": "MyContract"} +) + +ipfs_hash = result["IpfsHash"] +``` + +### IPFS Retrieval +```python +import requests + +ipfs_hash = "Qm..." +gateway_url = f"https://gateway.pinata.cloud/ipfs/{ipfs_hash}" +content = requests.get(gateway_url).text +``` + +## Best Practices + +1. **Pinning**: Always pin important content +2. **Metadata**: Add descriptive metadata to pins +3. **Gateways**: Use reliable IPFS gateways +4. **Backup**: Consider multiple pinning services +5. **Cost Management**: Monitor pinning costs + +## Related Resources + +- IPFS GitHub: https://github.com/ipfs/ipfs +- Pinata GitHub: https://github.com/PinataCloud +- IPFS Documentation: https://docs.ipfs.tech/ + diff --git a/.cursor/llm/langgraph-llm.txt b/.cursor/llm/langgraph-llm.txt new file mode 100644 index 0000000..11a4ee6 --- /dev/null +++ b/.cursor/llm/langgraph-llm.txt @@ -0,0 +1,102 @@ +# LangGraph Documentation for HyperAgent + +## Overview + +LangGraph is a library for building stateful, multi-actor applications with LLMs. HyperAgent uses LangGraph for orchestrating agent workflows, managing state transitions, and coordinating between specialized agents. 
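The examples later in this file show the full HyperAgent pipeline; the self-contained toy graph below focuses on one detail that matters for long-running workflows, checkpointing by `thread_id`. It assumes a recent `langgraph` version where `MemorySaver` lives in `langgraph.checkpoint.memory`.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    prompt: str
    code: str

def generate(state: State) -> dict:
    # Toy node: a real node would call the CodeGenAgent here.
    return {"code": f"// generated from: {state['prompt']}"}

graph = StateGraph(State)
graph.add_node("generate", generate)
graph.set_entry_point("generate")
graph.add_edge("generate", END)

# The checkpointer lets a workflow be resumed or inspected by thread_id.
app = graph.compile(checkpointer=MemorySaver())
result = app.invoke(
    {"prompt": "ERC20 token", "code": ""},
    config={"configurable": {"thread_id": "workflow-123"}},
)
```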
+ +## Key Use Cases in HyperAgent + +- **Agent Orchestration**: Coordinate SpecAgent, CodeGenAgent, AuditAgent, DeployAgent +- **Workflow Management**: Define and execute multi-step agent pipelines +- **State Management**: Track project state through generation → audit → deploy +- **Error Handling**: Implement retry logic and error recovery +- **Conditional Routing**: Route tasks to appropriate agents based on context + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://langchain-ai.github.io/langgraph/ +- **Python SDK**: https://python.langchain.com/docs/langgraph +- **Concepts**: https://langchain-ai.github.io/langgraph/concepts/ +- **Tutorials**: https://langchain-ai.github.io/langgraph/tutorials/ + +### Key Concepts +- **StateGraph**: Core abstraction for defining agent workflows +- **Nodes**: Individual agent steps or operations +- **Edges**: Transitions between nodes (conditional or unconditional) +- **State**: Shared state object passed between nodes +- **Checkpoints**: Persistence layer for state management + +## Implementation in HyperAgent + +### Orchestrator Service +- Main LangGraph workflow in `hyperagent/core/orchestrator.py` +- Defines agent pipeline: Spec → Design → Generate → Audit → Test → Deploy +- Manages state transitions and error recovery + +### Agent Nodes +- Each agent (SpecAgent, CodeGenAgent, etc.) is a LangGraph node +- Nodes receive state, perform operations, return updated state +- Support for async operations and tool calling + +### State Schema +- Project state includes: prompts, generated code, audit results, deployment status +- Versioned state for reproducibility +- Checkpointed for recovery and debugging + +## Code Examples + +### Basic Workflow Definition +```python +from langgraph.graph import StateGraph, END + +workflow = StateGraph(ProjectState) + +# Add nodes +workflow.add_node("spec", spec_agent) +workflow.add_node("generate", codegen_agent) +workflow.add_node("audit", audit_agent) +workflow.add_node("deploy", deploy_agent) + +# Define edges +workflow.set_entry_point("spec") +workflow.add_edge("spec", "generate") +workflow.add_conditional_edges( + "generate", + should_audit, + {"audit": "audit", "deploy": "deploy"} +) +workflow.add_edge("audit", "deploy") +workflow.add_edge("deploy", END) + +# Compile and run +app = workflow.compile() +result = await app.ainvoke(initial_state) +``` + +### State Management +```python +from typing import TypedDict + +class ProjectState(TypedDict): + prompt: str + generated_code: str + audit_results: dict + deployment_status: str + errors: list +``` + +## Best Practices + +1. **State Design**: Keep state minimal and focused +2. **Error Handling**: Implement retry logic at node level +3. **Checkpointing**: Use checkpoints for long-running workflows +4. **Conditional Routing**: Use conditional edges for dynamic workflows +5. **Async Operations**: Leverage async for I/O-bound agent operations + +## Related Resources + +- LangChain Documentation: https://python.langchain.com/ +- LangGraph GitHub: https://github.com/langchain-ai/langgraph +- State Management Guide: https://langchain-ai.github.io/langgraph/concepts/low_level/ + diff --git a/.cursor/llm/llm.txt b/.cursor/llm/llm.txt new file mode 100644 index 0000000..f9b8160 --- /dev/null +++ b/.cursor/llm/llm.txt @@ -0,0 +1,82 @@ +# HyperAgent LLM Documentation Index + +This is the main index file for HyperAgent's LLM documentation resources. 
Each major dependency, library, and service has its own dedicated `llm.txt` file following the [llms.txt standard](https://llmstxt.org/). + +## Purpose + +These files help AI systems understand external dependencies and their integration with HyperAgent. Each file contains: +- Technology overview and HyperAgent-specific use cases +- Links to official documentation +- Implementation details and code examples +- Best practices for integration + +## Available Documentation Files + +### Core Frameworks & Languages +- **[thirdweb-llm.txt](thirdweb-llm.txt)** - Thirdweb SDK for wallets, ERC-4337, EIP-7702, and multi-chain deployment +- **[langgraph-llm.txt](langgraph-llm.txt)** - LangGraph for agent orchestration and workflow management +- **[fastapi-llm.txt](fastapi-llm.txt)** - FastAPI for backend API gateway and REST endpoints +- **[nextjs-react-llm.txt](nextjs-react-llm.txt)** - Next.js 14+ and React for frontend UI + +### Infrastructure & Data +- **[supabase-llm.txt](supabase-llm.txt)** - Supabase (PostgreSQL) for database, multi-tenant workspaces, and RLS +- **[redis-llm.txt](redis-llm.txt)** - Redis for caching, message queuing, and session management +- **[pinecone-llm.txt](pinecone-llm.txt)** - Pinecone vector database for RAG and semantic search +- **[acontext-llm.txt](acontext-llm.txt)** - Acontext for agent long-term memory and context management + +### Smart Contract Development +- **[hardhat-foundry-llm.txt](hardhat-foundry-llm.txt)** - Hardhat and Foundry for Solidity compilation, testing, and deployment +- **[openzeppelin-llm.txt](openzeppelin-llm.txt)** - OpenZeppelin Contracts library for secure contract templates +- **[slither-mythril-llm.txt](slither-mythril-llm.txt)** - Slither, Mythril, MythX, and Echidna for security auditing + +### Blockchain Standards +- **[erc-4337-llm.txt](erc-4337-llm.txt)** - ERC-4337 Account Abstraction for gasless transactions and smart wallets +- **[erc-8004-llm.txt](erc-8004-llm.txt)** - ERC-8004 Trustless Agents for agent identity and reputation +- **[x402-llm.txt](x402-llm.txt)** - x402 payment protocol for pay-per-use services on SKALE +- **[erc1066-x402-llm.txt](erc1066-x402-llm.txt)** - ERC-1066-x402 SDKs (npm: hyperkit-erc1066, PyPI: hyperkitlabs-erc1066-x402) + +### Storage & Data Availability +- **[ipfs-pinata-llm.txt](ipfs-pinata-llm.txt)** - IPFS and Pinata for decentralized artifact storage and RAG +- **[eigenda-llm.txt](eigenda-llm.txt)** - EigenDA for verifiable data availability and agent traces + +### Observability & Monitoring +- **[opentelemetry-llm.txt](opentelemetry-llm.txt)** - OpenTelemetry for distributed tracing and metrics +- **[mlflow-llm.txt](mlflow-llm.txt)** - MLflow for LLM experiment tracking and model routing +- **[tenderly-llm.txt](tenderly-llm.txt)** - Tenderly for contract simulation, debugging, and monitoring +- **[dune-analytics-llm.txt](dune-analytics-llm.txt)** - Dune Analytics for on-chain analytics and dashboards + +### LLM Providers +- **[anthropic-openai-llm.txt](anthropic-openai-llm.txt)** - Anthropic (Claude) and OpenAI (GPT) for code generation and reasoning +- **[gemini-llm.txt](gemini-llm.txt)** - Google Gemini for fast, cost-effective code generation + +### DevOps & Deployment +- **[docker-llm.txt](docker-llm.txt)** - Docker and Docker Compose for containerization and deployment + +## How to Use + +1. **Before implementing a feature** involving an external dependency, read the relevant `llm.txt` file +2. **Reference official docs** using the provided links for detailed information +3. 
**Follow best practices** outlined in each file +4. **Use code examples** as starting points for implementation + +## File Naming Convention + +Files follow the pattern: `{dependency}-llm.txt` + +Examples: +- `thirdweb-llm.txt` - Thirdweb documentation +- `langgraph-llm.txt` - LangGraph documentation +- `erc-4337-llm.txt` - ERC-4337 standard documentation + +## Maintenance + +- Files are updated when dependencies change +- New dependencies get their own `llm.txt` file +- Documentation links are kept current +- Code examples reflect current implementation + +## Related Resources + +- Main HyperAgent Spec: `docs/HyperAgent Spec.md` +- Project Naming Conventions: `.cursor/llm/llm.txt` (original file) +- Skills Directory: `.cursor/skills/` diff --git a/.cursor/llm/mlflow-llm.txt b/.cursor/llm/mlflow-llm.txt new file mode 100644 index 0000000..e5c15dd --- /dev/null +++ b/.cursor/llm/mlflow-llm.txt @@ -0,0 +1,81 @@ +# MLflow Documentation for HyperAgent + +## Overview + +MLflow is an open-source platform for managing the machine learning lifecycle, including experiment tracking, model packaging, and deployment. HyperAgent uses MLflow for tracking LLM model performance, routing decisions, and agent experiment results. + +## Key Use Cases in HyperAgent + +- **Model Tracking**: Track which LLM models perform best +- **Experiment Tracking**: Compare different agent configurations +- **Routing Decisions**: Log model routing choices and outcomes +- **Performance Metrics**: Track latency, cost, and quality metrics +- **Model Registry**: Version and manage LLM configurations + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://mlflow.org/docs/latest/index.html +- **Tracking**: https://mlflow.org/docs/latest/tracking.html +- **Python API**: https://mlflow.org/docs/latest/python_api/index.html +- **Quickstart**: https://mlflow.org/docs/latest/quickstart.html + +### Key Concepts +- **Experiments**: Group related runs +- **Runs**: Individual execution records +- **Metrics**: Numerical measurements +- **Parameters**: Input configurations +- **Artifacts**: Files and models + +## Implementation in HyperAgent + +### Tracking Service +- Location: `hyperagent/monitoring/mlflow_tracker.py` +- Tracks agent runs and model performance +- Logs routing decisions and outcomes + +### Metrics Tracked +- Agent execution time +- LLM response quality +- Cost per operation +- Success rates +- Error rates + +## Code Examples + +### Basic Tracking +```python +import mlflow + +mlflow.set_experiment("hyperagent-agents") + +with mlflow.start_run(): + mlflow.log_param("agent", "CodeGenAgent") + mlflow.log_param("model", "claude-opus-4.5") + mlflow.log_metric("execution_time", 2.5) + mlflow.log_metric("cost", 0.15) + mlflow.log_artifact("generated_contract.sol") +``` + +### Model Registry +```python +mlflow.register_model( + model_uri="runs:/run-id/model", + name="CodeGenAgent-v1" +) +``` + +## Best Practices + +1. **Experiment Organization**: Use clear experiment names +2. **Metric Consistency**: Use consistent metric names +3. **Artifact Management**: Store important artifacts +4. **Model Versioning**: Version models in registry +5. 
**Cost Tracking**: Track costs for optimization + +## Related Resources + +- MLflow GitHub: https://github.com/mlflow/mlflow +- MLflow Tracking: https://mlflow.org/docs/latest/tracking.html +- Best Practices: https://mlflow.org/docs/latest/tracking.html#best-practices + diff --git a/.cursor/llm/nextjs-react-llm.txt b/.cursor/llm/nextjs-react-llm.txt new file mode 100644 index 0000000..3f5d391 --- /dev/null +++ b/.cursor/llm/nextjs-react-llm.txt @@ -0,0 +1,103 @@ +# Next.js & React Documentation for HyperAgent + +## Overview + +Next.js is a React framework for production with features like server-side rendering, static site generation, and API routes. HyperAgent uses Next.js 14+ with App Router for the frontend, providing a modern, type-safe UI for managing workflows, viewing deployments, and monitoring agent operations. + +## Key Use Cases in HyperAgent + +- **Dashboard UI**: Project management and workflow visualization +- **Run Pipeline UI**: Real-time workflow progress tracking +- **Deployment Interface**: Multi-chain deployment configuration +- **Template Selection**: Preset and template browsing +- **Analytics**: Usage metrics and spending dashboards + +## Documentation Links + +### Official Documentation +- **Next.js Docs**: https://nextjs.org/docs +- **App Router**: https://nextjs.org/docs/app +- **React Docs**: https://react.dev/ +- **TypeScript**: https://www.typescriptlang.org/docs/ + +### Key Concepts +- **App Router**: File-based routing with layouts and loading states +- **Server Components**: React Server Components for data fetching +- **Client Components**: Interactive UI with "use client" directive +- **Tailwind CSS**: Utility-first CSS framework +- **shadcn/ui**: Component library built on Radix UI + +## Implementation in HyperAgent + +### Frontend Structure +- Location: `frontend/` directory +- App Router structure: `frontend/app/` +- Components: `frontend/components/` +- Hooks: `frontend/hooks/` + +### Key Pages +- `/` - Dashboard with project overview +- `/workflows` - Workflow management +- `/workflows/create` - Create new workflow +- `/workflows/[id]` - Workflow detail and progress +- `/deployments` - Deployment history +- `/templates` - Template library + +### State Management +- TanStack Query for server state +- Zustand for client state +- WebSocket hooks for real-time updates + +## Code Examples + +### Server Component +```typescript +// app/workflows/page.tsx +export default async function WorkflowsPage() { + const workflows = await fetchWorkflows(); + + return ( +
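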
+    <div>
+      <h1>Workflows</h1>
+      <ul>
+        {workflows.map(workflow => (
+          <li key={workflow.id}>{workflow.name}</li>
+        ))}
+      </ul>
+    </div>
+ ); +} +``` + +### Client Component with State +```typescript +// components/workflows/WorkflowProgress.tsx +"use client"; + +import { useWorkflowProgress } from "@/hooks/useWorkflowProgress"; + +export function WorkflowProgress({ workflowId }: { workflowId: string }) { + const { progress, status } = useWorkflowProgress(workflowId); + + return ( +
+    <div>
+      <progress value={progress} max={100} />
+      <span>{status}</span>
+    </div>
+ ); +} +``` + +## Best Practices + +1. **Server Components**: Use for data fetching and static content +2. **Client Components**: Only when interactivity is needed +3. **Type Safety**: Use TypeScript for all components +4. **Error Boundaries**: Implement error handling for robust UX +5. **Loading States**: Show loading indicators for async operations + +## Related Resources + +- Next.js GitHub: https://github.com/vercel/next.js +- React GitHub: https://github.com/facebook/react +- Tailwind CSS: https://tailwindcss.com/docs +- shadcn/ui: https://ui.shadcn.com/ + diff --git a/.cursor/llm/opentelemetry-llm.txt b/.cursor/llm/opentelemetry-llm.txt new file mode 100644 index 0000000..e6797c0 --- /dev/null +++ b/.cursor/llm/opentelemetry-llm.txt @@ -0,0 +1,83 @@ +# OpenTelemetry Documentation for HyperAgent + +## Overview + +OpenTelemetry is an open standard for observability, providing APIs, SDKs, and tools for generating, collecting, and exporting telemetry data (metrics, logs, traces). HyperAgent uses OpenTelemetry for instrumenting services, tracking agent operations, and monitoring system performance. + +## Key Use Cases in HyperAgent + +- **Distributed Tracing**: Track requests across microservices +- **Metrics Collection**: Performance and business metrics +- **Agent Instrumentation**: Monitor agent execution times +- **Error Tracking**: Capture and analyze errors +- **Performance Monitoring**: Identify bottlenecks + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://opentelemetry.io/docs/ +- **Python**: https://opentelemetry.io/docs/instrumentation/python/ +- **Concepts**: https://opentelemetry.io/docs/concepts/ +- **Specification**: https://opentelemetry.io/docs/specs/ + +### Key Concepts +- **Traces**: Distributed request tracking +- **Spans**: Individual operations within a trace +- **Metrics**: Numerical measurements over time +- **Logs**: Structured log events +- **Exporters**: Send data to backends (Prometheus, Datadog, etc.) + +## Implementation in HyperAgent + +### Instrumentation +- Location: `hyperagent/monitoring/` +- Services instrumented: API, orchestrator, agents +- Export to Prometheus and MLflow + +### Key Metrics +- Agent execution time +- API request latency +- Error rates +- Deployment success rates +- Gas costs per deployment + +## Code Examples + +### Basic Instrumentation +```python +from opentelemetry import trace +from opentelemetry.sdk.trace import TracerProvider + +tracer = trace.get_tracer(__name__) + +@tracer.start_as_current_span("generate_contract") +def generate_contract(prompt: str): + with tracer.start_as_current_span("llm_call"): + # LLM operation + pass +``` + +### Metrics +```python +from opentelemetry import metrics + +meter = metrics.get_meter(__name__) +counter = meter.create_counter("contracts_generated") + +counter.add(1, {"chain": "mantle", "type": "erc20"}) +``` + +## Best Practices + +1. **Span Naming**: Use descriptive span names +2. **Attributes**: Add relevant context to spans +3. **Sampling**: Configure sampling for production +4. **Exporters**: Use appropriate exporters for your stack +5. 
**Error Handling**: Don't let instrumentation break main logic + +## Related Resources + +- OpenTelemetry GitHub: https://github.com/open-telemetry +- Python SDK: https://github.com/open-telemetry/opentelemetry-python +- Prometheus Exporter: https://opentelemetry.io/docs/instrumentation/python/exporters/ + diff --git a/.cursor/llm/openzeppelin-llm.txt b/.cursor/llm/openzeppelin-llm.txt new file mode 100644 index 0000000..7dceb5e --- /dev/null +++ b/.cursor/llm/openzeppelin-llm.txt @@ -0,0 +1,81 @@ +# OpenZeppelin Contracts Documentation for HyperAgent + +## Overview + +OpenZeppelin Contracts is a library of secure, reusable smart contracts for Ethereum and other EVM-compatible blockchains. HyperAgent uses OpenZeppelin contracts as submodules and references them in generated contract templates. + +## Key Use Cases in HyperAgent + +- **Contract Templates**: Base contracts for ERC20, ERC721, ERC1155 +- **Security Standards**: Battle-tested security patterns +- **Access Control**: Roles and permissions management +- **Upgradeable Contracts**: Proxy patterns for upgradeability +- **Token Standards**: Standard token implementations + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.openzeppelin.com/contracts +- **API Reference**: https://docs.openzeppelin.com/contracts/api/overview +- **Upgrades**: https://docs.openzeppelin.com/upgrades +- **Defender**: https://docs.openzeppelin.com/defender + +### Key Contracts +- **ERC20**: https://docs.openzeppelin.com/contracts/erc20 +- **ERC721**: https://docs.openzeppelin.com/contracts/erc721 +- **ERC1155**: https://docs.openzeppelin.com/contracts/erc1155 +- **Access Control**: https://docs.openzeppelin.com/contracts/access-control +- **Upgradeable**: https://docs.openzeppelin.com/contracts/upgradeable + +## Implementation in HyperAgent + +### Submodule Structure +- Location: `external/openzeppelin-contracts/` (submodule) +- Pinned to specific version tags +- Referenced in contract templates + +### Contract Templates +- ERC20 templates use OpenZeppelin's ERC20 +- Access control uses OpenZeppelin's AccessControl +- Upgradeable contracts use OpenZeppelin's upgradeable patterns + +## Code Examples + +### Using ERC20 +```solidity +import "@openzeppelin/contracts/token/ERC20/ERC20.sol"; + +contract MyToken is ERC20 { + constructor() ERC20("MyToken", "MTK") { + _mint(msg.sender, 1000000 * 10**decimals()); + } +} +``` + +### Access Control +```solidity +import "@openzeppelin/contracts/access/AccessControl.sol"; + +contract MyContract is AccessControl { + bytes32 public constant ADMIN_ROLE = keccak256("ADMIN_ROLE"); + + constructor() { + _grantRole(ADMIN_ROLE, msg.sender); + } +} +``` + +## Best Practices + +1. **Version Pinning**: Always pin to specific OZ version +2. **Security Updates**: Regularly update to latest secure version +3. **Submodule Management**: Update via PRs, not direct commits +4. **Testing**: Test with OZ contracts before deployment +5. 
**Documentation**: Reference OZ docs in generated contracts + +## Related Resources + +- OpenZeppelin GitHub: https://github.com/OpenZeppelin/openzeppelin-contracts +- Security Center: https://security.openzeppelin.com/ +- Forum: https://forum.openzeppelin.com/ + diff --git a/.cursor/llm/pinecone-llm.txt b/.cursor/llm/pinecone-llm.txt new file mode 100644 index 0000000..b5247af --- /dev/null +++ b/.cursor/llm/pinecone-llm.txt @@ -0,0 +1,89 @@ +# Pinecone Documentation for HyperAgent + +## Overview + +Pinecone is a vector database for building AI applications with semantic search capabilities. HyperAgent uses Pinecone for RAG (Retrieval-Augmented Generation), storing embeddings of Solidity documentation, security best practices, and contract templates for agent context retrieval. + +## Key Use Cases in HyperAgent + +- **RAG System**: Retrieve relevant documentation for code generation +- **Template Retrieval**: Find similar contract templates +- **Security Patterns**: Retrieve security best practices +- **Code Examples**: Find relevant code snippets +- **Knowledge Base**: Store agent knowledge embeddings + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.pinecone.io/ +- **Python Client**: https://docs.pinecone.io/python-client +- **Quickstart**: https://docs.pinecone.io/guides/get-started/quickstart +- **API Reference**: https://docs.pinecone.io/api-reference + +### Key Concepts +- **Vectors**: High-dimensional embeddings +- **Indexes**: Collections of vectors +- **Metadata**: Additional data stored with vectors +- **Querying**: Semantic similarity search +- **Namespaces**: Logical separation of data + +## Implementation in HyperAgent + +### RAG Service +- Location: `hyperagent/rag/vector_store.py` +- Stores Solidity docs and security patterns +- Retrieves context for CodeGenAgent + +### Embeddings +- Generated from documentation +- Stored in Pinecone indexes +- Retrieved based on query similarity + +## Code Examples + +### Creating Index +```python +from pinecone import Pinecone + +pc = Pinecone(api_key="your-api-key") +index = pc.create_index( + name="hyperagent-docs", + dimension=1536, # OpenAI embedding dimension + metric="cosine" +) +``` + +### Upserting Vectors +```python +index.upsert(vectors=[ + { + "id": "doc-1", + "values": embedding_vector, + "metadata": {"title": "ERC20 Guide", "type": "docs"} + } +]) +``` + +### Querying +```python +results = index.query( + vector=query_embedding, + top_k=5, + include_metadata=True +) +``` + +## Best Practices + +1. **Dimension Consistency**: Use same dimension for all vectors +2. **Metadata Filtering**: Use metadata for efficient filtering +3. **Batch Operations**: Batch upserts for better performance +4. **Index Management**: Monitor index size and performance +5. **Cost Optimization**: Use appropriate index types + +## Related Resources + +- Pinecone GitHub: https://github.com/pinecone-io/pinecone-python-client +- Embeddings Guide: https://docs.pinecone.io/guides/embeddings +- Best Practices: https://docs.pinecone.io/guides/best-practices + diff --git a/.cursor/llm/redis-llm.txt b/.cursor/llm/redis-llm.txt new file mode 100644 index 0000000..a11014a --- /dev/null +++ b/.cursor/llm/redis-llm.txt @@ -0,0 +1,85 @@ +# Redis Documentation for HyperAgent + +## Overview + +Redis is an in-memory data structure store used as a database, cache, and message broker. HyperAgent uses Redis for caching, session management, message queuing for agent execution, and real-time data storage. 
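+
+As a minimal sketch of the queuing pattern mentioned above (the Code Examples section below covers basic operations and Pub/Sub), agent tasks can be pushed onto a Redis list by the API and popped by a background worker. The queue name, task fields, and connection settings here are assumptions, not HyperAgent's actual configuration:
+
+```python
+import json
+
+import redis
+
+r = redis.Redis(host="localhost", port=6379, db=0)
+
+def enqueue_task(workflow_id: str, agent: str) -> None:
+    # Producer: push a task onto the left end of the list
+    task = {"workflow_id": workflow_id, "agent": agent}
+    r.lpush("agent:tasks", json.dumps(task))
+
+def worker_loop() -> None:
+    # Consumer: block until a task arrives, then process it
+    while True:
+        item = r.brpop("agent:tasks", timeout=5)
+        if item is None:
+            continue  # nothing queued within the timeout window
+        _queue, payload = item
+        task = json.loads(payload)
+        print(f"Running {task['agent']} for workflow {task['workflow_id']}")
+```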
+ +## Key Use Cases in HyperAgent + +- **Caching**: Cache LLM responses and compiled contracts +- **Message Queue**: Queue agent tasks for background processing +- **Session Management**: Store user sessions and workspace state +- **Rate Limiting**: Track API rate limits per workspace +- **Real-time Data**: Store temporary workflow state + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://redis.io/docs/ +- **Python Client**: https://redis.readthedocs.io/ +- **Commands**: https://redis.io/commands/ +- **Data Types**: https://redis.io/docs/data-types/ + +### Key Concepts +- **Strings**: Simple key-value storage +- **Hashes**: Field-value maps +- **Lists**: Ordered collections +- **Sets**: Unordered unique collections +- **Pub/Sub**: Publish-subscribe messaging + +## Implementation in HyperAgent + +### Caching +- Location: `hyperagent/cache/` +- Cache LLM responses +- Cache compiled contract bytecode +- TTL-based expiration + +### Message Queue +- Agent task queuing +- Background job processing +- Worker pool management + +## Code Examples + +### Basic Operations +```python +import redis + +r = redis.Redis(host='localhost', port=6379, db=0) + +# Set/get +r.set("key", "value", ex=3600) # Expire in 1 hour +value = r.get("key") + +# Hash operations +r.hset("workflow:123", "status", "running") +status = r.hget("workflow:123", "status") +``` + +### Pub/Sub +```python +# Publisher +r.publish("workflow:updates", json.dumps({"id": 123, "status": "done"})) + +# Subscriber +pubsub = r.pubsub() +pubsub.subscribe("workflow:updates") +for message in pubsub.listen(): + data = json.loads(message["data"]) +``` + +## Best Practices + +1. **Connection Pooling**: Use connection pools for production +2. **TTL**: Set appropriate expiration times +3. **Memory Management**: Monitor memory usage +4. **Persistence**: Configure RDB or AOF for durability +5. **Security**: Use authentication and TLS in production + +## Related Resources + +- Redis GitHub: https://github.com/redis/redis +- Python Client: https://github.com/redis/redis-py +- Redis Cloud: https://redis.com/cloud/ + diff --git a/.cursor/llm/slither-mythril-llm.txt b/.cursor/llm/slither-mythril-llm.txt new file mode 100644 index 0000000..2a4c88f --- /dev/null +++ b/.cursor/llm/slither-mythril-llm.txt @@ -0,0 +1,80 @@ +# Slither & Mythril Documentation for HyperAgent + +## Overview + +Slither and Mythril are static analysis tools for Solidity smart contracts. HyperAgent uses these tools in the AuditAgent to automatically detect security vulnerabilities, gas optimization opportunities, and code quality issues before deployment. 
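+
+A common way to drive Slither from a pipeline is to shell out to its CLI with JSON output and filter the findings. This is a minimal sketch assuming the `slither` binary is installed and on PATH; the severity filter is illustrative, not the actual AuditAgent logic:
+
+```python
+import json
+import subprocess
+
+def run_slither(contract_path: str) -> list[dict]:
+    # `--json -` writes the report as JSON to stdout.
+    # Slither exits non-zero when findings exist, so check=True is not used.
+    proc = subprocess.run(
+        ["slither", contract_path, "--json", "-"],
+        capture_output=True,
+        text=True,
+    )
+    report = json.loads(proc.stdout)
+    return report.get("results", {}).get("detectors", [])
+
+for finding in run_slither("contracts/MyContract.sol"):
+    if finding.get("impact") in ("High", "Medium"):
+        print(f"[{finding['impact']}] {finding['check']}: {finding['description']}")
+```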
+ +## Key Use Cases in HyperAgent + +- **Security Auditing**: Automated vulnerability detection +- **Gas Optimization**: Identify gas-saving opportunities +- **Code Quality**: Check for best practices and patterns +- **Pre-deployment Checks**: Validate contracts before deployment +- **Continuous Integration**: Run in CI/CD pipeline + +## Documentation Links + +### Slither +- **Main Docs**: https://github.com/crytic/slither +- **Detectors**: https://github.com/crytic/slither/wiki/Detector-Documentation +- **Python API**: https://github.com/crytic/slither#slither-as-a-library +- **Installation**: https://github.com/crytic/slither#installation + +### Mythril +- **Main Docs**: https://mythril-classic.readthedocs.io/ +- **GitHub**: https://github.com/ConsenSys/mythril +- **Usage**: https://mythril-classic.readthedocs.io/en/master/usage.html + +### Additional Tools +- **MythX**: https://docs.mythx.io/ +- **Echidna**: https://github.com/crytic/echidna + +## Implementation in HyperAgent + +### AuditAgent Integration +- Location: `hyperagent/agents/audit.py` +- Wrappers: `hyperagent/security/slither_wrapper.py`, `mythril_wrapper.py` +- Service: `hyperagent/core/services/audit_service.py` + +### Security Checks +- Reentrancy vulnerabilities +- Access control issues +- Integer overflow/underflow +- Gas optimization opportunities +- Best practice violations + +## Code Examples + +### Slither Analysis +```python +from slither import Slither + +slither = Slither("contracts/MyContract.sol") +results = slither.run_detectors() + +for detector in results: + for finding in detector: + print(f"{finding.check}: {finding.description}") +``` + +### Mythril Analysis +```bash +myth analyze contracts/MyContract.sol +myth analyze contracts/MyContract.sol --execution-timeout 300 +``` + +## Best Practices + +1. **Run Before Deployment**: Always audit before deploying +2. **Fix Critical Issues**: Address high-severity findings +3. **Gas Optimization**: Review gas-related findings +4. **CI Integration**: Automate in deployment pipeline +5. **False Positives**: Tune detectors to reduce noise + +## Related Resources + +- Slither GitHub: https://github.com/crytic/slither +- Mythril GitHub: https://github.com/ConsenSys/mythril +- Echidna GitHub: https://github.com/crytic/echidna +- Secureum: https://secureum.xyz/ + diff --git a/.cursor/llm/supabase-llm.txt b/.cursor/llm/supabase-llm.txt new file mode 100644 index 0000000..26b654e --- /dev/null +++ b/.cursor/llm/supabase-llm.txt @@ -0,0 +1,94 @@ +# Supabase Documentation for HyperAgent + +## Overview + +Supabase is an open-source Firebase alternative providing PostgreSQL database, authentication, real-time subscriptions, and storage. HyperAgent uses Supabase for relational data storage, multi-tenant workspace management, and Row Level Security (RLS) for data isolation. 
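+
+RLS policies like the one shown under Code Examples below read a per-session setting such as `app.workspace_id`. A minimal sketch of setting that value from the application, assuming a SQLAlchemy session and the same setting name as the example policy (the helper name is hypothetical):
+
+```python
+from sqlalchemy import text
+from sqlalchemy.orm import Session
+
+def bind_workspace(db: Session, workspace_id: str) -> None:
+    # set_config(..., true) scopes the value to the current transaction,
+    # so policies reading current_setting('app.workspace_id') can see it
+    db.execute(
+        text("SELECT set_config('app.workspace_id', :ws, true)"),
+        {"ws": workspace_id},
+    )
+
+# Call before running workspace-scoped queries in a request handler:
+# bind_workspace(db, "ws-123")
+```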
+ +## Key Use Cases in HyperAgent + +- **Workspace Management**: Multi-tenant workspace isolation +- **Project Storage**: Store projects, runs, and artifacts +- **User Management**: Authentication and authorization +- **Row Level Security**: Workspace-scoped data access +- **Real-time Subscriptions**: Live updates for workflow progress + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://supabase.com/docs +- **PostgreSQL Guide**: https://supabase.com/docs/guides/database +- **Row Level Security**: https://supabase.com/docs/guides/auth/row-level-security +- **Python Client**: https://supabase.com/docs/reference/python +- **Realtime**: https://supabase.com/docs/guides/realtime + +### Key Concepts +- **Row Level Security (RLS)**: Policy-based access control +- **PostgreSQL Functions**: Stored procedures and triggers +- **Realtime**: WebSocket-based live updates +- **Storage**: File storage with access policies +- **Auth**: Built-in authentication with JWT + +## Implementation in HyperAgent + +### Database Schema +- Tables: `workspaces`, `projects`, `runs`, `artefacts`, `deployments` +- RLS policies for workspace isolation +- Foreign key relationships for data integrity + +### Key Tables +- `workspaces` - Tenant isolation +- `projects` - User projects +- `runs` - Workflow execution history +- `artefacts` - Generated code and audit reports +- `deployment_audits` - Deployment tracking + +### Connection Pooling +- Use Supabase connection pooler for production +- Session management via SQLAlchemy +- Migration management with Alembic + +## Code Examples + +### Database Connection +```python +from supabase import create_client, Client + +supabase: Client = create_client( + supabase_url=os.getenv("SUPABASE_URL"), + supabase_key=os.getenv("SUPABASE_KEY") +) +``` + +### Row Level Security +```sql +-- Example RLS policy +CREATE POLICY "Users can only see their workspace projects" +ON projects +FOR SELECT +USING (workspace_id = current_setting('app.workspace_id')::uuid); +``` + +### Querying Data +```python +from hyperagent.db.session import get_db + +def get_projects(workspace_id: str, db: Session): + return db.query(Project).filter( + Project.workspace_id == workspace_id + ).all() +``` + +## Best Practices + +1. **RLS Policies**: Always implement RLS for multi-tenant data +2. **Connection Pooling**: Use Supabase pooler for production +3. **Migrations**: Use Alembic for schema versioning +4. **Indexes**: Add indexes for frequently queried columns +5. **Backups**: Enable automatic backups in Supabase dashboard + +## Related Resources + +- Supabase GitHub: https://github.com/supabase/supabase +- PostgreSQL Documentation: https://www.postgresql.org/docs/ +- Alembic (Migrations): https://alembic.sqlalchemy.org/ + diff --git a/.cursor/llm/tenderly-llm.txt b/.cursor/llm/tenderly-llm.txt new file mode 100644 index 0000000..1fe969a --- /dev/null +++ b/.cursor/llm/tenderly-llm.txt @@ -0,0 +1,86 @@ +# Tenderly Documentation for HyperAgent + +## Overview + +Tenderly is a blockchain development platform providing simulation, debugging, and monitoring tools for smart contracts. HyperAgent uses Tenderly for contract simulation, gas estimation, transaction debugging, and post-deployment monitoring. 
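+
+When no SDK fits the environment, the simulation API can be called over HTTP. This is a rough sketch only; the account and project slugs are placeholders, and request field names should be verified against the current API reference:
+
+```python
+import os
+
+import requests
+
+SIMULATE_URL = "https://api.tenderly.co/api/v1/account/{account}/project/{project}/simulate"
+
+def simulate(from_addr: str, to_addr: str, data: str, network_id: str = "1") -> dict:
+    # Placeholder slugs and payload; confirm fields against the Tenderly docs
+    resp = requests.post(
+        SIMULATE_URL.format(account="my-account", project="my-project"),
+        headers={"X-Access-Key": os.environ["TENDERLY_ACCESS_KEY"]},
+        json={
+            "network_id": network_id,
+            "from": from_addr,
+            "to": to_addr,
+            "input": data,
+            "value": "0",
+            "save": True,  # keep the simulation visible in the dashboard
+        },
+        timeout=30,
+    )
+    resp.raise_for_status()
+    return resp.json()
+```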
+ +## Key Use Cases in HyperAgent + +- **Contract Simulation**: Simulate contract interactions before deployment +- **Gas Estimation**: Accurate gas cost calculations +- **Transaction Debugging**: Debug failed transactions +- **Monitoring**: Monitor deployed contracts +- **Alerting**: Set up alerts for contract events + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://docs.tenderly.co/ +- **Simulations**: https://docs.tenderly.co/simulations-and-forks +- **Debugger**: https://docs.tenderly.co/debugger +- **Monitoring**: https://docs.tenderly.co/monitoring +- **API Reference**: https://docs.tenderly.co/api + +### Key Concepts +- **Simulations**: Test transactions before execution +- **Forks**: Local blockchain forks for testing +- **Debugger**: Step-through transaction execution +- **Monitoring**: Track contract state and events +- **Alerts**: Notifications for contract events + +## Implementation in HyperAgent + +### MonitorAgent Integration +- Location: `hyperagent/agents/monitoring.py` +- Simulates contracts before deployment +- Monitors deployed contracts +- Provides gas estimates + +### Simulation Service +- Pre-deployment validation +- Gas optimization analysis +- Transaction failure detection + +## Code Examples + +### Simulation +```python +from tenderly_sdk import Tenderly + +tenderly = Tenderly(api_key="your-api-key") + +simulation = tenderly.simulate_transaction( + network_id=1, + from_address="0x...", + to_address="0x...", + input="0x...", + value=0 +) + +print(f"Gas used: {simulation.gas_used}") +print(f"Status: {simulation.status}") +``` + +### Monitoring +```python +tenderly.monitor_contract( + network_id=1, + address="0x...", + name="MyContract" +) +``` + +## Best Practices + +1. **Pre-deployment Simulation**: Always simulate before deploying +2. **Gas Optimization**: Use simulations to optimize gas +3. **Error Detection**: Catch errors in simulation +4. **Monitoring**: Set up monitoring for production contracts +5. **Alerts**: Configure alerts for critical events + +## Related Resources + +- Tenderly GitHub: https://github.com/Tenderly +- Tenderly Dashboard: https://dashboard.tenderly.co/ +- API Documentation: https://docs.tenderly.co/api + diff --git a/.cursor/llm/thirdweb-llm.txt b/.cursor/llm/thirdweb-llm.txt new file mode 100644 index 0000000..d056909 --- /dev/null +++ b/.cursor/llm/thirdweb-llm.txt @@ -0,0 +1,90 @@ +# Thirdweb Documentation for HyperAgent + +## Overview + +Thirdweb is a Web3 development platform that provides SDKs, tools, and infrastructure for building decentralized applications. HyperAgent uses Thirdweb primarily for wallet management, account abstraction (ERC-4337), and EIP-7702 EOA support. 
+ +## Key Use Cases in HyperAgent + +- **Account Abstraction**: ERC-4337 smart contract wallets for gasless transactions +- **EOA Support**: EIP-7702 for enabling smart contract features on EOAs +- **Multi-Chain Deployment**: Unified SDK for deploying across EVM chains +- **Wallet Management**: Secure wallet creation and management +- **x402 Payments**: Integration with x402 payment protocol on SKALE + +## Documentation Links + +### Official Documentation +- **Main Docs**: https://portal.thirdweb.com/ +- **SDK Documentation**: https://portal.thirdweb.com/sdk +- **React SDK**: https://portal.thirdweb.com/react +- **TypeScript SDK**: https://portal.thirdweb.com/typescript +- **Account Abstraction**: https://portal.thirdweb.com/account-abstraction +- **Deployment**: https://portal.thirdweb.com/deploy + +### Key Concepts +- **Smart Wallets**: ERC-4337 account abstraction implementation +- **EOAs with EIP-7702**: Traditional wallets with smart contract capabilities +- **Contract Deployment**: Simplified deployment across chains +- **Wallet SDK**: Client-side wallet management + +## Implementation in HyperAgent + +### DeployAgent Integration +- Uses Thirdweb SDK for multi-chain contract deployment +- Supports ERC-4337 for gasless deployments +- Integrates with x402 for payment processing on SKALE + +### Wallet Management +- Smart wallet creation for users +- Session key management for agent operations +- Multi-signature support for enterprise use cases + +### Chain Support +- Mantle, Avalanche, BNB, SKALE, Arbitrum, Base, Polygon +- Unified API across all supported chains + +## Code Examples + +### Deploying Contracts +```typescript +import { ThirdwebSDK } from "@thirdweb-dev/sdk"; + +const sdk = ThirdwebSDK.fromPrivateKey( + privateKey, + chainId +); + +const contract = await sdk.deployer.deployContract( + contractName, + constructorParams +); +``` + +### Account Abstraction +```typescript +import { SmartWallet } from "@thirdweb-dev/wallets"; + +const wallet = new SmartWallet({ + factoryAddress: "0x...", + chain: supportedChains.mantle, +}); + +await wallet.connect({ + personalWallet: personalWallet, +}); +``` + +## Best Practices + +1. **Gas Optimization**: Use account abstraction for gasless transactions +2. **Security**: Always validate contract addresses and parameters +3. **Error Handling**: Implement retry logic for network operations +4. **Rate Limiting**: Respect API rate limits for production use + +## Related Resources + +- ERC-4337 Specification: https://eips.ethereum.org/EIPS/eip-4337 +- EIP-7702 Specification: https://eips.ethereum.org/EIPS/eip-7702 +- Thirdweb GitHub: https://github.com/thirdweb-dev + diff --git a/.cursor/llm/x402-llm.txt b/.cursor/llm/x402-llm.txt new file mode 100644 index 0000000..79305ec --- /dev/null +++ b/.cursor/llm/x402-llm.txt @@ -0,0 +1,80 @@ +# x402 Payment Protocol Documentation for HyperAgent + +## Overview + +x402 is a payment protocol for pay-per-use services on blockchain networks, particularly optimized for SKALE. HyperAgent uses x402 for payment processing, spending controls, and billing for agent operations and deployments. 
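+
+Independent of the client SDK, spending controls reduce to checking accumulated usage against per-workspace limits before a payment is sent. A minimal sketch of that check with amounts in wei; the names and the source of the running totals are hypothetical:
+
+```python
+from dataclasses import dataclass
+
+@dataclass
+class SpendingLimits:
+    daily_limit_wei: int
+    monthly_limit_wei: int
+
+def can_spend(amount_wei: int, spent_today_wei: int, spent_month_wei: int,
+              limits: SpendingLimits) -> bool:
+    # Reject the payment if it would push the workspace over either window
+    if spent_today_wei + amount_wei > limits.daily_limit_wei:
+        return False
+    if spent_month_wei + amount_wei > limits.monthly_limit_wei:
+        return False
+    return True
+
+# Check before calling the payment facilitator:
+# if can_spend(amount, today_total, month_total, limits): process_payment(...)
+```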
+ +## Key Use Cases in HyperAgent + +- **Pay-per-Use**: Users pay for agent operations as they use them +- **Spending Controls**: Set limits on spending per workspace +- **Payment Analytics**: Track spending and usage +- **SKALE Integration**: Optimized for SKALE network +- **Billing Management**: Automated billing and invoicing + +## Documentation Links + +### Official Documentation +- **x402 Protocol**: https://docs.x402.io/ +- **SKALE Integration**: https://docs.skale.network/x402 +- **API Reference**: https://docs.x402.io/api + +### Key Concepts +- **Payment Facilitation**: Automated payment processing +- **Spending Limits**: Per-workspace spending controls +- **Payment History**: Track all payments +- **Analytics**: Spending and usage analytics + +## Implementation in HyperAgent + +### Payment Service +- Location: `hyperagent/billing/` +- x402 client integration +- Spending controls +- Payment history tracking + +### API Endpoints +- `/api/v1/x402/analytics` - Payment analytics +- `/api/v1/x402/spending-controls` - Manage spending limits +- `/api/v1/x402/workflows` - Workflow payment tracking + +## Code Examples + +### Payment Processing +```typescript +import { x402Client } from "@hyperagent/sdk"; + +const client = new x402Client({ + network: "skale", + facilitatorAddress: "0x..." +}); + +await client.processPayment({ + amount: ethers.parseEther("0.1"), + recipient: "0x...", + metadata: { workflowId: "wf-123" } +}); +``` + +### Spending Controls +```typescript +await client.setSpendingLimit({ + workspaceId: "ws-123", + dailyLimit: ethers.parseEther("10"), + monthlyLimit: ethers.parseEther("100") +}); +``` + +## Best Practices + +1. **Spending Limits**: Always set spending limits +2. **Payment Tracking**: Track all payments +3. **Error Handling**: Handle payment failures gracefully +4. **Analytics**: Monitor spending patterns +5. **Security**: Secure payment processing + +## Related Resources + +- x402 Documentation: https://docs.x402.io/ +- SKALE x402: https://docs.skale.network/x402 + diff --git a/.cursor/rules/AGENT.mdc b/.cursor/rules/AGENT.mdc new file mode 100644 index 0000000..8eb31d7 --- /dev/null +++ b/.cursor/rules/AGENT.mdc @@ -0,0 +1,54 @@ +--- +alwaysApply: true +--- + +# Agent Resource Requirements + +## Mandatory Pre-Action Resource Check + +Before taking any action, decision, or implementing any feature, you MUST: + +1. **Check `.cursor/skills/` directory first** + - Review all available skills and their documentation + - Read relevant `SKILL.md` files in each skill subdirectory + - Check `references/` folders for templates and guidelines + - Identify applicable skills for the current task + - Follow the patterns and best practices defined in these skills + +2. **Check `.cursor/llm/` directory second** + - Review all LLM-related resources and configurations + - Read any documentation, prompts, or guidelines + - Understand LLM usage patterns and constraints + - Apply LLM-specific rules and configurations + +3. **Apply Resources Before Action** + - Do not proceed with implementation until you have reviewed relevant resources + - Incorporate patterns, templates, and guidelines from these directories + - Ensure your actions align with the established practices + - Reference specific skills or LLM resources when applicable + +## Resource Discovery Process + +1. **Identify Task Type**: Determine what kind of task you're handling +2. **Search Skills**: Look for relevant skills in `.cursor/skills/` that match the task +3. 
**Read Documentation**: Read the `SKILL.md` and any reference files +4. **Check LLM Resources**: Review `.cursor/llm/` for any relevant configurations +5. **Apply Learnings**: Use the patterns and guidelines before taking action + +## Example Workflow + +``` +Task: Create a GitHub workflow +1. Check .cursor/skills/github-workflow-automation/ +2. Read SKILL.md and any references +3. Check .cursor/llm/ for any LLM-related guidelines +4. Apply the patterns and templates found +5. Proceed with implementation +``` + +## Enforcement + +- These resources are authoritative and must be followed +- Ignoring these resources may result in inconsistent or incorrect implementations +- Always prioritize resources from these directories over generic approaches +- When in doubt, consult these directories before making decisions diff --git a/.cursor/rules/AGENTS.mdc b/.cursor/rules/AGENTS.mdc new file mode 100644 index 0000000..8eb31d7 --- /dev/null +++ b/.cursor/rules/AGENTS.mdc @@ -0,0 +1,54 @@ +--- +alwaysApply: true +--- + +# Agent Resource Requirements + +## Mandatory Pre-Action Resource Check + +Before taking any action, decision, or implementing any feature, you MUST: + +1. **Check `.cursor/skills/` directory first** + - Review all available skills and their documentation + - Read relevant `SKILL.md` files in each skill subdirectory + - Check `references/` folders for templates and guidelines + - Identify applicable skills for the current task + - Follow the patterns and best practices defined in these skills + +2. **Check `.cursor/llm/` directory second** + - Review all LLM-related resources and configurations + - Read any documentation, prompts, or guidelines + - Understand LLM usage patterns and constraints + - Apply LLM-specific rules and configurations + +3. **Apply Resources Before Action** + - Do not proceed with implementation until you have reviewed relevant resources + - Incorporate patterns, templates, and guidelines from these directories + - Ensure your actions align with the established practices + - Reference specific skills or LLM resources when applicable + +## Resource Discovery Process + +1. **Identify Task Type**: Determine what kind of task you're handling +2. **Search Skills**: Look for relevant skills in `.cursor/skills/` that match the task +3. **Read Documentation**: Read the `SKILL.md` and any reference files +4. **Check LLM Resources**: Review `.cursor/llm/` for any relevant configurations +5. **Apply Learnings**: Use the patterns and guidelines before taking action + +## Example Workflow + +``` +Task: Create a GitHub workflow +1. Check .cursor/skills/github-workflow-automation/ +2. Read SKILL.md and any references +3. Check .cursor/llm/ for any LLM-related guidelines +4. Apply the patterns and templates found +5. 
Proceed with implementation +``` + +## Enforcement + +- These resources are authoritative and must be followed +- Ignoring these resources may result in inconsistent or incorrect implementations +- Always prioritize resources from these directories over generic approaches +- When in doubt, consult these directories before making decisions diff --git a/.cursor/rules/AI Text Humanizer Specialist.mdc b/.cursor/rules/AI Text Humanizer Specialist.mdc new file mode 100644 index 0000000..bda344d --- /dev/null +++ b/.cursor/rules/AI Text Humanizer Specialist.mdc @@ -0,0 +1,102 @@ +--- +alwaysApply: true +description: Expert AI Text Humanizer who transforms robotic AI-generated content into natural +--- + + +# FOLLOW THIS WRITING STYLE: +- SHOULD use clear, simple language. +- SHOULD be spartan and informative. +- SHOULD use short, impactful sentences. +- SHOULD use active voice; avoid passive voice. +- SHOULD focus on practical, actionable insights. +- SHOULD use bullet point lists in social media posts. +- SHOULD use data and examples to support claims when possible. +- SHOULD use “you” and “your” to directly address the reader. +- AVOID using em dashes (—) anywhere in your response. Use only commas, periods, or other standard punctuation. If you need to connect ideas, use a period or a semicolon, but never an em dash. +- AVOID constructions like "...not just this, but also this". +- AVOID metaphors and clichés. • AVOID generalizations. +- AVOID common setup language in any sentence, including: in conclusion, in closing, etc. +- AVOID output warnings or notes, just the output requested. +- AVOID unnecessary adjectives and adverbs. +- AVOID hashtags. +- AVOID semicolons. +- AVOID markdown. +- AVOID asterisks. +- AVOID these words: “can, may, just, that, very, really, literally, actually, certainly, probably, basically, could, maybe, delve, embark, enlightening, esteemed, shed light, craft, crafting, imagine, realm, game-changer, unlock, discover, skyrocket, abyss, not alone, in a world where, revolutionize, disruptive, utilize, utilizing, dive deep, tapestry, illuminate, unveil, pivotal, intricate, elucidate, hence, furthermore, realm, however, harness, exciting, groundbreaking, cutting-edge, remarkable, it, remains to be seen, glimpse into, navigating, landscape, stark, testament, in summary, in conclusion, moreover, boost, skyrocketing, opened up, powerful, inquiries, ever-evolving" +- IMPORTANT: Review your response and ensure no em dashes! + +[ROLE] +You are an Expert AI Text Humanizer who transforms robotic AI-generated content into natural, human-like writing. You eliminate AI "tells" (repetitive patterns, stiff phrasing, uniform sentences) while keeping 100% of the original meaning for professional, academic, or creative use. + +Your task is to: +1. **ANALYZE** the input text for common AI patterns: + - Uniform sentence lengths (all 15-20 words) + - Repetitive phrases ("In addition", "furthermore", "delve into") + - Overly formal tone ("One must consider") + - Predictable list structures and passive voice + - Generic words ("utilize", "leverage", "commence") + +2. 
**HUMANIZE** by applying these specific techniques: + - **Vary sentence length**: Mix short punchy sentences (5-10 words) with longer ones (25-40 words) + - **Replace AI words**: "delve" → "explore", "utilize" → "use", "commence" → "start" + - **Add natural flow**: Contractions ("it's" not "it is"), idioms ("nail it" not "succeed") + - **Restructure**: Active voice, conversational transitions, rhetorical questions + - **Boost burstiness**: High variance in complexity for human unpredictability + +3. **PRESERVE** original meaning 100% - only change style, never content +4. **OUTPUT** clean, readable humanized text ready for immediate use + +Target audiences: Bloggers, marketers, students, professionals needing undetectable content +Success criteria: Passes GPTZero/Turnitin (95%+ human score), Flesch readability 60-80 +Use cases: Emails, resumes, blog posts, social media, academic drafts (cite AI origin ethically) + +**BEFORE (AI Text):** +"In today's fast-paced digital landscape, one must utilize effective time management strategies to enhance productivity. Furthermore, leveraging automation tools can significantly streamline workflows." + +**AFTER (Humanized):** +"Struggling to keep up? Here are three simple time hacks that actually work. Automation? Game-changer. You'll save hours every week." + +**TRANSFORMATION BREAKDOWN:** +Uniform sentences → Short + long mix +"utilize...leverage" → "use...game-changer" +"one must...significantly" → "Here are...You'll save" +Formal → Conversational + contractions +**COMMON AI FIXES CHEAT SHEET:** +AI PATTERN → HUMAN FIX +"In addition" → "Plus", "Also", "On top of that" +"Delve into" → "Dig into", "Explore", "Look at" +"Utilize" → "Use", "Apply", "Try" +"Commence" → "Start", "Begin", "Kick off" +"One must consider" → "Think about", "You'll want to" +"Significantly" → "A lot", "Big time", "Huge" +Passive → Active: "Was developed" → "We built" +**HUMANIZATION MODES (Choose one):** +BASIC: Light fixes (synonyms + contractions) - 70% human +ADVANCED: Full rewrite (structure + burstiness) - 95%+ human +ACADEMIC: Formal but natural (no slang) +CASUAL: Conversational (blog/social media) +CONCISE: Same meaning, 20% fewer words +[PERSONA - NICE-TO-HAVE] +Friendly writing coach who makes AI text sound like YOU wrote it +Never changes facts - only eliminates robot patterns +Confident it will pass any AI detector + +**Clear Before/After sections** +**Bullet-point transformation summary** +**Ready-to-copy final text** +**Mode and confidence score** + +Natural, engaging, confident +Conversational but professional +"Your writing, but better" + +**STEP-BY-STEP PROCESS YOU FOLLOW:** +1. Read input → Spot 3-5 AI tells +2. Rewrite using cheat sheet fixes +3. Vary 30% sentence lengths dramatically +4. Add 2-3 human touches (questions, idioms) +5. Read aloud → Tweak for natural flow +6. Final check → 95%+ human confidence + +--- \ No newline at end of file diff --git a/.cursor/rules/Project_Manager.mdc b/.cursor/rules/Project_Manager.mdc new file mode 100644 index 0000000..37ec4a9 --- /dev/null +++ b/.cursor/rules/Project_Manager.mdc @@ -0,0 +1,479 @@ +--- +alwaysApply: true +description: Expert Project Manager and Scrum Master skilled in planning, tracking, and delivering projects using Agile +--- + +# FOLLOW THIS WRITING STYLE: +- SHOULD use clear, simple language. +- Avoid text tends to include domain-specific terms and overly complex language Verbose and overly complex sentence structure +- SHOULD be spartan and informative. +- SHOULD use short, impactful sentences. 
+- SHOULD use active voice; avoid passive voice. +- SHOULD focus on practical, actionable insights. +- SHOULD use bullet point lists in social media posts. +- SHOULD use data and examples to support claims when possible. +- SHOULD use "you" and "your" to directly address the reader. +- AVOID using em dashes (—) anywhere in your response. Use only commas, periods, or other standard punctuation. If you need to connect ideas, use a period or a semicolon, but never an em dash. +- AVOID constructions like "...not just this, but also this". +- AVOID metaphors and clichés. • AVOID generalizations. +- AVOID common setup language in any sentence, including: in conclusion, in closing, etc. +- AVOID output warnings or notes, just the output requested. +- AVOID unnecessary adjectives and adverbs. +- AVOID hashtags. +- AVOID semicolons. +- AVOID markdown. +- AVOID asterisks. +- AVOID these words: "can, may, just, that, very, really, literally, actually, certainly, probably, basically, could, maybe, delve, embark, enlightening, esteemed, shed light, craft, crafting, imagine, realm, game-changer, unlock, discover, skyrocket, abyss, not alone, in a world where, revolutionize, disruptive, utilize, utilizing, dive deep, tapestry, illuminate, unveil, pivotal, intricate, elucidate, hence, furthermore, realm, however, harness, exciting, groundbreaking, cutting-edge, remarkable, it, remains to be seen, glimpse into, navigating, landscape, stark, testament, in summary, in conclusion, moreover, boost, skyrocketing, opened up, powerful, inquiries, ever-evolving" +- IMPORTANT: Review your response and ensure no em dashes! + +[ROLE] +You are an Expert Project Manager and Scrum Master skilled in planning, tracking, and delivering projects using Agile methodologies. You excel in resource management, facilitating team collaboration, and ensuring timely delivery of high-quality outcomes. + +Your task is to: +1. DEVELOP a comprehensive project management strategy and documentation based on input or project context. +2. STRUCTURE the document to include detailed coverage of: + - Planning and Tracking Projects: Define project scope, timelines, milestones, and resource allocation. + - Managing Resources and Timelines: Explain capacity planning, workload balancing, and schedule management. + - Facilitating Agile Ceremonies: Describe roles and processes for standups, sprint planning, reviews, and retrospectives. + - Removing Blockers and Ensuring Delivery: Outline strategies for risk identification, impediment removal, and continuous progress. +3. EXPLAIN concepts and terms such as: + - Agile, Scrum, Kanban, Sprints, Gantt Charts, and Risk Management + - Burn-down Charts and Stakeholder Management Techniques +4. EMPHASIZE best practices including: + - Transparent Progress Tracking via dashboards and reports + - Continuous Improvement through Retrospectives + - Aligning Stakeholders through clear communication and involvement +5. ORGANIZE the document with numbered sections and subsections ensuring clarity and usability. +6. OUTPUT the document professionally in Markdown format, suitable for project teams, executives, and stakeholders. + +- The document targets project teams, Scrum Masters, product owners, and leadership. +- Must balance formal project management frameworks with Agile flexibility. +- Encourage collaboration, transparency, and adaptability. +- Maintain a clear, motivating, and authoritative tone. + +Example Document Structure: +1. Introduction +1.1 Role and Responsibilities +1.2 Project Objectives and Scope +2. 
Project Planning and Tracking +2.1 Developing Project Plans +2.2 Timeline and Milestones Management +2.3 Resource Allocation and Capacity Planning +3. Agile Methodology and Ceremonies +3.1 Scrum Framework Overview +3.2 Facilitation of Standups, Sprint Planning, Reviews, and Retrospectives +3.3 Kanban and Continuous Flow +4. Risk and Issue Management +4.1 Identifying and Tracking Risks +4.2 Removing Blockers +4.3 Mitigation Strategies +5. Progress Reporting and Stakeholder Management +5.1 Burn-down and Burn-up Charts +5.2 Communication Plans and Stakeholder Engagement +6. Best Practices in Project Management +6.1 Transparency and Visibility +6.2 Continuous Improvement +6.3 Stakeholder Alignment +References (if applicable) + +- Clear, organized, and motivating writing style +- Actionable insights with practical examples +- Emphasizes leadership, communication, and team cohesion + +- Use Markdown with clear headings and bullet points +- Suitable for team onboarding, executive summaries, or process documentation + +- Confident, collaborative, and facilitative +- Positive and forward-looking +- Focused on delivering value and team empowerment + +--- + +[HYPERKIT PROJECT CONTEXT] + +## 1. PROBLEM STATEMENT + +Current Web3 developer workflow is fragmented, expensive, and slow: +- Learn Solidity: 6 months +- Study patterns: 2 months +- Write code: 2 weeks +- Audit costs: $5k-50k +- Deploy to 1 chain: manual process +- Cross-chain deployment: Repeat 3-5 times + +TOTAL: 8-10 months, $50k-200k investment, 1-2 chains supported, 95% bug rate + +## 2. GOAL + +Build HyperKit: AI-native autonomous dApp lifecycle management platform. + +Target outcomes: +- Build production-grade smart contracts in under 2 minutes +- Deploy across 100+ blockchain networks +- Auto-audit with TEE-verified security +- Monitor TVL/gas/revenue in real-time +- Earn creator points converted to HYPE tokens + +## 3. SUCCESS METRICS + +Year 1 Targets: +- 10,000+ dApps deployed via HyperKit +- $100M TVL across deployed dApps +- $10M annual revenue from x402 + subscriptions +- 2,000+ active contributors earning HYPE tokens +- 95%+ build success rate (AI + human audit) +- Average build time under 87 seconds + +## 4. CORE VISION & MISSION + +Vision: Enable 10,000+ developers to build production-grade dApps in under 2 minutes, earning sustainable creator income via HYPE tokenomics. + +Mission: Transform Web3 development from months of manual work to minutes of AI-assisted creation. + +HyperKit Path: +1. Write prompt: "Build AMM on Mantle + Solana" +2. HyperAgent generates code: 15 seconds +3. AI audit + TEE verification: 20 seconds +4. Deploy to 2 chains: 30 seconds +5. Auto-monitoring active: Real-time + +TOTAL: 90 seconds, $0.15 cost, 2+ chains, under 5% bug rate + +## 5. 
MODULAR ROLE ARCHITECTURE + +### Aaron (CTO/Project Architect) +Responsibilities: +- Backend architecture, APIs, smart contract integrations +- Code audits for security, correctness, performance +- Platform reliability and scalability +- Technical decision-making and direction +- Complex engineering problem solving + +### Justine (CPOO/Product Lead) +Responsibilities: +- Product features, design, user experience +- Smart contract writing, deployment, auditing +- Wireframes, design themes, platform look/feel +- Documentation, proposals, technical specs +- Partnership coordination (Mantle, grants) +- Backend architecture support + +### Tristan (CMFO/Frontend) +Responsibilities: +- UI and frontend systems development +- Translate wireframes to interactive interfaces +- Public pitches, demos, presentations +- Marketing campaigns, social outreach +- Pitch decks, demo scripts, brand messaging +- User onboarding and engagement + +## 6. SCOPE (MVP) - 8 Weeks + +### Sprint 1-2: Infrastructure & Foundation +- GitHub monorepo with Turborepo +- CI/CD pipeline with GitHub Actions +- PostgreSQL + Redis database setup +- MLflow + Prometheus monitoring +- RPC provider pooling (Alchemy, QuickNode, Helius) +- Contract templates repository (5 templates) +- Mantle partnership agreement + +### Sprint 3-4: Core HyperAgent +- Claude 4.5 integration for code generation +- ROMA planner with GPT-5 +- Multi-model router with fallback logic +- Slither integration for static analysis +- Error handling and retry mechanisms + +### Sprint 5-6: Account Abstraction & Deployment +- ERC-4337 contract deployment +- EntryPoint 0.7 integration +- Foundry setup for Solidity compilation +- Gas estimation engine +- Mantle testnet deployment + +### Sprint 7-8: Testing & MVP Launch +- End-to-end testing (50+ scenarios) +- Load testing (100 concurrent builds) +- Basic dashboard with build history +- Documentation and API docs +- Closed alpha (50 users) +- Bug bounty program + +## 7. USERS AND FLOWS + +### Primary User: Web3 Developer +Flow: +1. Connect wallet (MetaMask/Phantom) +2. Enter natural language prompt +3. Select target chain(s) +4. Review generated code +5. Approve deployment +6. Monitor via dashboard +7. Earn points for contributions + +### Secondary User: DeFi Builder +Flow: +1. Select template (DEX, Lending, Vault) +2. Customize parameters +3. AI generates optimized code +4. Audit report generated +5. Multi-chain deployment +6. Revenue tracking active + +### Enterprise User +Flow: +1. Custom onboarding +2. White-label dashboard access +3. Dedicated support channel +4. SLA guarantees +5. Custom integrations + +## 8. TECHNICAL PLAN + +### Backend Stack +- Language: Python 3.11+ +- Framework: FastAPI 0.100+ +- AI/LLM: Anthropic SDK, OpenAI API, Google Gemini, Together.ai +- Data: Pinecone (vectors), PostgreSQL, Redis +- Smart Contract: Slither, Foundry, Anchor CLI + +### Frontend Stack +- Framework: Next.js 14 (App Router) +- Language: TypeScript 5.x +- Styling: Tailwind CSS 3.x, shadcn/ui +- State: TanStack Query, Zustand +- Web3: ethers.js 6.x, @solana/web3.js, wagmi 2.x + +### Infrastructure +- Hosting: Vercel (frontend), Render (backend) +- Database: PlanetScale, Redis Cloud +- RPC: Alchemy, QuickNode, Helius +- Monitoring: Datadog, Sentry, Grafana + +## 9. 
MINIMAL BACKBONE CODE + +### API Endpoint Structure +```python +# backend/api/builds.py +from fastapi import APIRouter, Depends +from pydantic import BaseModel + +router = APIRouter(prefix="/api/v1/builds") + +class BuildRequest(BaseModel): + prompt: str + chains: list[str] + template: str | None = None + +class BuildResponse(BaseModel): + build_id: str + status: str + estimated_time: int + +@router.post("/", response_model=BuildResponse) +async def create_build(request: BuildRequest): + build_id = await hyperagent.create_build( + prompt=request.prompt, + chains=request.chains + ) + return BuildResponse( + build_id=build_id, + status="pending", + estimated_time=87 + ) + +@router.get("/{build_id}/status") +async def get_status(build_id: str): + return await hyperagent.get_build_status(build_id) +``` + +### Multi-Model Router +```python +# backend/hyperagent/router.py +class MultiModelRouter: + MODEL_CONFIG = { + "solidity_codegen": { + "primary": "claude-opus-4.5", + "fallback": "llama-3.1-405b", + "timeout": 30 + }, + "gas_optimization": { + "primary": "llama-3.1-405b", + "fallback": "gpt-4-turbo", + "timeout": 20 + } + } + + async def route_task(self, task: str, context: dict): + config = self.MODEL_CONFIG[task] + try: + return await self.call_model( + config["primary"], + task, + context, + timeout=config["timeout"] + ) + except TimeoutError: + return await self.call_model( + config["fallback"], + task, + context + ) +``` + +### ERC-4337 Account +```solidity +// packages/aa/src/HyperAccount.sol +pragma solidity ^0.8.24; + +import {IAccount} from "account-abstraction/interfaces/IAccount.sol"; + +contract HyperAccount is IAccount { + address public immutable entryPoint; + mapping(bytes32 => SessionKey) public sessionKeys; + + struct SessionKey { + address agent; + uint48 expiresAt; + uint96 spendLimit; + bool active; + } + + function validateUserOp( + UserOperation calldata userOp, + bytes32 userOpHash, + uint256 missingAccountFunds + ) external override returns (uint256) { + require(msg.sender == entryPoint, "UNAUTHORIZED"); + // Validate session key and spend limits + return 0; + } + + function createSessionKey( + bytes32 keyId, + address agent, + uint48 ttl, + uint96 spendLimit + ) external onlyOwner { + sessionKeys[keyId] = SessionKey({ + agent: agent, + expiresAt: uint48(block.timestamp) + ttl, + spendLimit: spendLimit, + active: true + }); + } +} +``` + +### Dashboard Component +```typescript +// frontend/app/dashboard/page.tsx +export default function Dashboard() { + const { builds } = useBuildHistory(); + const { points } = useUserPoints(); + + return ( +
+    <div>
+      {/* Card, CardHeader, CardTitle, CardContent, and BuildCard are assumed component names */}
+      <Card>
+        <CardHeader>
+          <CardTitle>Recent Builds</CardTitle>
+        </CardHeader>
+        <CardContent>
+          {builds.recent.map(build => (
+            <BuildCard key={build.id} build={build} />
+          ))}
+        </CardContent>
+      </Card>
+    </div>
+ ); +} +``` + +## 10. ACCEPTANCE CRITERIA + +### MVP Launch Criteria +- [ ] User connects wallet and creates build in under 2 minutes +- [ ] AI generates valid Solidity code for 5 template types +- [ ] Slither audit passes with no critical issues +- [ ] Contract deploys to Mantle testnet successfully +- [ ] Dashboard shows real-time build progress +- [ ] Build history persists across sessions +- [ ] Error messages are clear and actionable +- [ ] API response time under 2 seconds (p95) +- [ ] 50 alpha users onboarded +- [ ] Zero critical security vulnerabilities + +### Sprint Acceptance +Each sprint deliverable must meet: +- Code review approved by 2 team members +- Unit test coverage above 80% +- Integration tests passing +- Documentation updated +- No P0 bugs open +- Performance benchmarks met + +### Quality Gates +- Build success rate: 95%+ +- API uptime: 99.5%+ +- Response time p95: under 2 seconds +- Error rate: under 0.1% +- Security scan: Zero critical findings + +--- + +[TASK STRUCTURE] + +Each task file follows this structure: + +## Metadata +- Assignee, Role, Sprint, Priority, Status +- Due Date, Estimated Hours, Actual Hours + +## Problem +- What specific issue does this task solve? +- What is the current state? +- What pain point does it address? + +## Goal +- What is the desired outcome? +- What does success look like? +- How does it contribute to MVP? + +## Success Metrics +- Measurable criteria for completion +- Performance targets +- Quality benchmarks + +## Technical Scope +- Files/components to create or modify +- Dependencies required +- Integration points + +## Minimal Code +- Skeleton or pseudocode +- Key functions/classes +- Interface definitions + +## Acceptance Criteria +- Specific testable requirements +- Edge cases to handle +- Review checklist + +## Dependencies +- Blocking tasks +- External dependencies +- Team coordination needs + +## Progress Log +- Date, Update, Hours spent + +--- + +[NOW PROCEED] +Provide project details, team context, or existing planning documents to generate a structured project management strategy document tailored to your organization's needs. diff --git a/.cursor/rules/README.mdc b/.cursor/rules/README.mdc new file mode 100644 index 0000000..92a3f50 --- /dev/null +++ b/.cursor/rules/README.mdc @@ -0,0 +1,989 @@ +--- +alwaysApply: true +description: Professional README Standards for GitHub OSS Projects - Design, Structure, and Best Practices +globs: + - "README.md" + - "README.mdx" + - "*/README.md" +--- + +# Professional README Standards for GitHub OSS Projects + +This ruleset establishes comprehensive standards for creating professional, visually appealing, and effective README files for open-source projects on GitHub. A README is the front door to your project—it determines whether visitors stay to explore or leave. This guide ensures READMEs are clear, welcoming, and conversion-focused. 
+ +--- + +## Core README Principles + +### Purpose of a README + +A README serves multiple critical functions: + +- **First Impression**: The first item visitors see when arriving at your repository +- **Project Elevator Pitch**: Quickly communicates what the project does and why it matters +- **Onboarding Guide**: Provides the minimum information needed to get started +- **Decision Tool**: Helps potential users decide if the project meets their needs +- **Credibility**: Demonstrates professionalism and project maturity +- **Support Reduction**: Well-written READMEs reduce support requests by pre-answering common questions + +According to GitHub, a README typically includes information about: +- What the project does +- Why the project is useful +- How users can get started with the project +- Where users can get help +- Who maintains and contributes to the project + +### Guiding Design Principles + +**Clarity First**: Use clear, plain language accessible to your target audience (beginners to experts as appropriate). Define technical terms and acronyms on first use. + +**Visual Hierarchy**: Use consistent formatting, meaningful headings, and whitespace to guide readers through content. Make scanning easy by using lists, tables, and visual breaks. + +**Progressive Disclosure**: Present information in order of importance. Critical information first, detailed information later. Link to full documentation for extensive topics. + +**Professional Appearance**: Maintain consistent styling, proper spacing, and visual appeal. A well-formatted README signals a well-maintained project. + +**Mobile-Friendly**: Ensure the README displays well on mobile devices. Test rendering on GitHub and various screen sizes. + +--- + +## README File Location and Recognition + +GitHub automatically recognizes and displays README files in the following priority order: + +1. `.github/README.md` (hidden directory) +2. `README.md` (root directory) — **Recommended location** +3. `docs/README.md` (documentation directory) + +**Best Practice**: Place your main README.md in the repository root for maximum visibility. This is the standard location developers expect. + +**Multiple READMEs**: You may create additional README files in subdirectories to document specific components or modules. Each should follow this ruleset's standards. + +**File Size Limit**: GitHub truncates README content beyond 500 KiB. Most projects will never reach this limit, but be aware if you embed very large content. + +--- + +## README Structure and Required Sections + +### Section Order and Content + +Order matters. Visitors make quick decisions about whether to engage with your project. Present information strategically. + +**Recommended Section Order:** + +``` +1. Project Title +2. Badges & Shields (optional but recommended) +3. Brief Description / Tagline +4. Table of Contents +5. Features / Key Benefits +6. Demo or Screenshots +7. Quick Start / Installation +8. Usage Examples +9. Documentation Links +10. Contributing +11. License +12. Citation (if applicable) +13. Authors / Acknowledgments +``` + +### 1. Project Title + +**Requirements**: +- Use H1 heading (single `#`) +- Make it clear, descriptive, and memorable +- Include the project name exactly as styled in official branding +- Keep to one line when possible +- Avoid vague titles + +**Example**: +``` +# Hyperkit: AI-Powered Smart Contract Auditing & Generation Platform +``` + +**Not Recommended**: +``` +# My Project +# README +``` + +### 2. 
Badges & Shields (Optional but Recommended) + +Badges provide at-a-glance information about project status, health, and key metrics. Use sparingly—2-5 badges maximum. + +**Recommended Badge Categories**: + +**Build & CI Status**: +- GitHub Actions build status +- Test pass/fail rate +- Deployment status + +**Code Quality**: +- Code coverage percentage +- Code quality scores +- Linting/testing indicators + +**Project Information**: +- License type +- Version/Release +- Last updated date +- GitHub stars (engagement indicator) + +**Development Status**: +- Active/Inactive status +- Development stage (alpha, beta, stable) +- Maintenance status + +**Social/Engagement**: +- Downloads count +- Contributors count +- Activity indicators + +**Badge Implementation**: + +Use [shields.io](https://shields.io) for consistent, professional badge creation: + +```markdown + +![Build Status](https://img.shields.io/github/actions/workflow/status/username/repo/ci.yml?branch=main) +![License](https://img.shields.io/github/license/username/repo) +![Version](https://img.shields.io/github/v/release/username/repo) +![Code Coverage](https://img.shields.io/codecov/c/github/username/repo) + +``` + +**Best Practices for Badges**: +- Keep badge count minimal (2-5 maximum) +- Use the same badge style consistently +- Ensure badges link to relevant information +- Update badges automatically via CI/CD when possible +- Wrap badges in HTML comments with `` and `` +- Test that badge URLs are current and working + +### 3. Brief Description / Tagline + +**Requirements**: +- 1-4 sentences maximum +- Answer: "What does this project do?" +- Be specific, not generic +- Include the problem it solves +- Mention primary use cases + +**Example**: +``` +Hyperkit is an AI-powered platform for auditing and generating smart contracts. +It leverages large language models to identify vulnerabilities, optimize code, +and accelerate smart contract development across multiple blockchains. +``` + +**Not Recommended**: +``` +A project for smart contracts. +Software development tools. +``` + +### 4. Table of Contents + +**When to Include**: +- Include if README has 5 or more major sections +- Omit for short READMEs (under 500 words) +- Always include for comprehensive project documentation + +**Format**: +- Use markdown links to section headings +- Use appropriate heading levels matching document hierarchy +- Update automatically if possible (use tools like `markdown-toc`) +- Place immediately after brief description + +**Example**: +```markdown +## Table of Contents + +- [Features](#features) +- [Installation](#installation) +- [Quick Start](#quick-start) +- [Usage](#usage) +- [API Reference](#api-reference) +- [Contributing](#contributing) +- [License](#license) +- [Acknowledgments](#acknowledgments) +``` + +**Automatic Generation**: +```bash +# Using npm package markdown-toc +npm install -g markdown-toc +markdown-toc -i README.md +``` + +### 5. Features & Key Benefits + +**Purpose**: Highlight what makes your project valuable and different. 
+ +**Format**: +- Use unordered bullet list (5-10 items maximum) +- Start each bullet with strong action verb or key noun +- Include emoji strategically (1 per section maximum) for visual break +- Be specific about capabilities, not generic claims +- Focus on user benefits, not technical implementation + +**Example**: +```markdown +## Features + +- 🔍 **Automated Vulnerability Detection** - Scans smart contracts for security issues using advanced ML models +- ⚡ **Multi-Chain Support** - Deploy across Ethereum, Polygon, Avalanche, and more +- 🎯 **Code Optimization** - Automatically suggests gas optimizations and refactoring +- 🧪 **Testing Integration** - Generate unit tests alongside audit reports +- 📊 **Visual Reports** - Clear, actionable vulnerability reports with severity ratings +- 🔄 **Cross-Chain Bridges** - Built-in support for token bridges and multi-chain protocols +``` + +**Not Recommended**: +```markdown +- Good features +- Nice performance +- Works well +``` + +### 6. Demo or Screenshots + +**Purpose**: Show, don't tell. Visual content dramatically increases engagement. + +**Best Practices**: + +**Screenshots**: +- Include 2-4 high-quality screenshots showing key features +- Crop to essential UI elements only (not entire monitor screen) +- Add captions or labels explaining what viewers see +- Store in repository in `assets/images/` or `docs/images/` directory +- Use relative paths for image links +- Use HTML `` tag for size control instead of markdown + +**GIFs**: +- Show key workflows or interactions (30-60 seconds maximum) +- Use high quality (but reasonable file size) +- Include alternative text +- Label clearly (e.g., "Demo: Smart Contract Auditing Workflow") + +**Diagrams**: +- Include architecture diagrams using ASCII art, Mermaid, or hosted images +- Show component relationships +- Explain data flow visually + +**Image Implementation**: + +```markdown +## Demo + +### Smart Contract Analysis in Action + +Hyperkit audit interface showing vulnerability detection + +**Features demonstrated**: +- Real-time vulnerability scanning +- Severity-based categorization +- Code suggestion recommendations +``` + +**File Organization**: +``` +repository-root/ +├── README.md +├── assets/ +│ └── images/ +│ ├── dashboard.png +│ ├── audit-flow.gif +│ └── architecture.png +``` + +### 7. Quick Start / Installation + +**Purpose**: Minimize friction for new users to get started immediately. + +**Requirements**: +- Assume minimal prior knowledge but not zero knowledge +- Keep to 5-10 steps maximum +- Include expected output or success indicator +- Link to detailed setup guide for complex projects +- Provide platform-specific instructions if applicable + +**Standard Structure**: + +```markdown +## Quick Start + +### Prerequisites + +- Node.js 18.0 or higher +- npm 8.0 or higher +- Git + +### Installation + +1. **Clone the repository** + ```bash + git clone https://github.com/username/hyperkit.git + cd hyperkit + ``` + +2. **Install dependencies** + ```bash + npm install + ``` + +3. **Set up environment variables** + ```bash + cp .env.example .env + # Edit .env with your configuration + ``` + +4. **Start development server** + ```bash + npm run dev + ``` + +5. **Verify installation** + Open http://localhost:3000 in your browser. You should see the Hyperkit dashboard. + +For detailed setup instructions, see [Installation Guide](./docs/installation.md). +``` + +**Best Practices**: +- Show exact commands users can copy-paste +- Explain each step briefly +- Show expected output: "You should see..." 
+- Provide success confirmation +- Mention common issues and solutions +- Link to detailed documentation + +### 8. Usage Examples + +**Purpose**: Show how to use the project for common scenarios. + +**Requirements**: +- Provide 2-4 clear, working examples +- Progress from simple to complex +- Use realistic, meaningful data (not `foo`, `bar`, `baz`) +- Show complete code including imports +- Include expected output or console results +- Comment only on non-obvious lines (not line-by-line) + +**Example**: + +```markdown +## Usage + +### Basic Smart Contract Audit + +```python +from hyperkit import SmartContractAuditor + +# Initialize auditor +auditor = SmartContractAuditor( + model="gpt-4", + chains=["ethereum", "polygon"] +) + +# Audit a contract +results = auditor.audit("path/to/contract.sol") + +# View findings +for finding in results.vulnerabilities: + print(f"[{finding.severity}] {finding.title}") + print(f" Details: {finding.description}") +``` + +Output: +``` +[HIGH] Unprotected reentrancy attack + Details: Function transfers funds before updating state... + +[MEDIUM] Missing input validation + Details: Function accepts user input without validation... +``` + +### Multi-Chain Deployment + +```bash +hyperkit deploy \ + --contract=./contracts/Token.sol \ + --networks=ethereum,polygon,avalanche \ + --report=detailed +``` + +### Integration with Existing Tools + +See [Examples Directory](./docs/examples/) for more advanced use cases. +``` + +### 9. Documentation Links + +**Purpose**: Guide users to comprehensive documentation for topics beyond the README scope. + +**Structure**: + +```markdown +## Documentation + +For more information, visit: + +- **[User Guide](./docs/user-guide.md)** - Complete feature documentation +- **[API Reference](./docs/api/)** - Detailed API documentation +- **[Developer Setup](./docs/development/setup.md)** - Development environment setup +- **[Architecture](./docs/architecture.md)** - System design and component overview +- **[FAQ](./docs/faq.md)** - Frequently asked questions +- **[Changelog](./CHANGELOG.md)** - Version history and updates +``` + +### 10. Contributing + +**Purpose**: Welcome and guide potential contributors. + +**Minimum Content**: + +```markdown +## Contributing + +We welcome contributions! To contribute: + +1. **Fork** the repository +2. **Create a feature branch** (`git checkout -b feature/amazing-feature`) +3. **Make your changes** and commit (`git commit -m 'Add amazing feature'`) +4. **Push to your branch** (`git push origin feature/amazing-feature`) +5. **Open a Pull Request** + +For detailed contribution guidelines, see [CONTRIBUTING.md](./CONTRIBUTING.md). + +### Code of Conduct + +Please note we have a [Code of Conduct](./CODE_OF_CONDUCT.md). +By participating, you are expected to uphold this code. +``` + +**Best Practices**: +- Link to detailed CONTRIBUTING.md for full guidelines +- Make contribution process obvious and welcoming +- Mention code of conduct +- Provide multiple ways to contribute (code, docs, testing, issues) + +### 11. License + +**Purpose**: Clarify intellectual property and usage rights. + +**Standard Format**: + +```markdown +## License + +This project is licensed under the [MIT License](./LICENSE) - +see the LICENSE file for details. + +Contributions to this project are accepted under the same license. 
+``` + +**Recommended Licenses** (by use case): + +| License | Best For | +|---------|----------| +| **MIT** | Permissive, most popular for open source | +| **Apache 2.0** | Commercial-friendly with patent protection | +| **GPL-3.0** | Copyleft; ensures derived works remain open | +| **AGPL-3.0** | Network copyleft for SaaS/cloud services | +| **ISC** | Minimal, simple permissive license | + +**License File**: +- Always include a separate `LICENSE` file in repository root +- Use exact license text from [choosealicense.com](https://choosealicense.com) +- Do not modify license text + +### 12. Citation (Optional) + +**When to Include**: +- Academic or research-focused projects +- Projects that should be cited in published work +- Projects with published papers or documentation + +**Format** (BibTeX): + +```markdown +## Citation + +If you use Hyperkit in your research, please cite: + +```bibtex +@software{hyperkit2025, + author = {Your Name}, + title = {Hyperkit: AI-Powered Smart Contract Auditing}, + year = {2025}, + url = {https://github.com/username/hyperkit} +} +``` + +Or use this APA format: + +Your Name. (2025). Hyperkit: AI-powered smart contract auditing platform. +Retrieved from https://github.com/username/hyperkit +``` + +### 13. Authors & Acknowledgments + +**Purpose**: Credit creators, contributors, and inspirations. Build community. + +**Structure**: + +```markdown +## Authors + +- **[Your Name](https://github.com/yourname)** - Creator and Maintainer +- **[Contributor Name](https://github.com/contributor)** - Major Contributions + +## Acknowledgments + +Special thanks to: + +- [Project Inspiration](https://github.com/inspiration) for the original concept +- [Library/Tool Name](https://github.com/library) for core functionality +- All [contributors](./CONTRIBUTORS.md) who have helped improve this project + +### Contributors + +See [CONTRIBUTORS.md](./CONTRIBUTORS.md) for a full list of contributors. +``` + +--- + +## Formatting and Styling Standards + +### Heading Hierarchy + +**Correct Structure**: +``` +# Title (H1) — One per document +## Major Section (H2) +### Subsection (H3) +#### Details (H4) +``` + +**Do Not**: +- Use multiple H1 headings in one document +- Skip heading levels (e.g., H2 → H4) +- Use headings for styling purposes only + +### Lists + +**Unordered Lists** (bullet points): +- Use for items without inherent order +- Use one of: `-`, `*`, or `+` consistently +- Do not exceed 10 items per list +- Use for features, prerequisites, steps with alternatives + +**Ordered Lists** (numbered): +- Use for step-by-step procedures +- Use for prioritized items +- Use for ranked lists +- Restart numbering for each list +- Avoid nesting more than 2 levels + +**Example**: +```markdown +### Installation Steps + +1. Clone the repository + - Using HTTPS: `git clone https://...` + - Using SSH: `git clone git@...` + +2. Install dependencies + ```bash + npm install + ``` + +3. 
Configure environment + - Copy `.env.example` to `.env` + - Update with your settings +``` + +### Code Blocks + +**Requirements**: +- Always specify language for syntax highlighting +- Keep examples under 20 lines +- Include sufficient context (imports, setup) +- Add inline comments only for non-obvious code +- Show both input and output when relevant + +**Example**: +```markdown +```python +from hyperkit import SmartContractAuditor + +# Initialize with model and chains +auditor = SmartContractAuditor(model="gpt-4") + +# Audit contract +results = auditor.audit("contract.sol") +``` +``` + +**Language Identifiers**: +- `python` - Python code +- `javascript` or `js` - JavaScript +- `typescript` or `ts` - TypeScript +- `bash` or `shell` - Shell commands +- `json` - JSON data +- `yaml` or `yml` - YAML configuration +- `sql` - SQL queries +- `markdown` - Markdown content + +### Tables + +**Usage**: +- Use for comparing features, options, or structured data +- Include clear headers +- Align columns consistently +- Provide explanatory text above complex tables + +**Example**: +```markdown +### Supported Networks + +| Network | Mainnet | Testnet | Support Level | +|---------|---------|---------|---------------| +| Ethereum | ✓ | ✓ | Full | +| Polygon | ✓ | ✓ | Full | +| Avalanche | ✓ | ✓ | Experimental | +``` + +### Emphasis + +**Guidelines**: +- Limit emphasis to ~10% of total text +- Use **bold** for: UI elements, command names, key terms +- Use *italics* for: file names, variable names, conceptual terms +- Use `code formatting` for: code, commands, technical terminology +- Avoid ALL CAPS for emphasis; use bold instead + +**Example**: +```markdown +Run the `npm install` command to install **dependencies**. +Configure your *.env* file with API keys. +``` + +### Relative Links + +**Best Practice**: Use relative links for internal documentation to support cloned repositories. 
+ +**Example**: +```markdown + +[Contribution Guidelines](./CONTRIBUTING.md) +[Development Setup](./docs/development/setup.md) + + +[Contribution Guidelines](https://github.com/username/repo/blob/main/CONTRIBUTING.md) +``` + +### Images and Media + +**File Organization**: +``` +repository-root/ +├── README.md +├── docs/ +├── src/ +└── assets/ + ├── images/ + │ ├── demo-screenshot.png + │ ├── architecture.png + │ └── workflow.gif + └── videos/ (optional) +``` + +**Image Implementation**: +```markdown + +![Alt text describing image](./assets/images/demo.png) + + +Dashboard showing smart contract audit results +``` + +**Image Specifications**: +- Use descriptive alt text for accessibility +- Crop to relevant content only +- Compress images to reasonable file size +- Use PNG for screenshots (lossless) +- Use GIF or WebP for animations +- Use JPG for photographs +- Specify width; height adjusts automatically to maintain aspect ratio + +### Emoji Usage + +**Strategic Emoji Placement**: +- Use emojis in section headers for visual break (maximum 1 per section) +- Use emojis in feature lists for quick visual scanning +- Avoid excessive emoji use (unprofessional appearance) +- Ensure emoji choices are semantic and helpful +- Support screen readers by including descriptive text + +**Recommended Emojis by Context**: +``` +🚀 Launch, quick start, getting started +📖 Documentation, guides, learning +🔧 Configuration, setup, installation +⚙️ Settings, configuration, advanced options +🐛 Bugs, issues, troubleshooting +✨ Features, highlights, new features +📊 Analytics, metrics, data +🔍 Search, find, discover +⚡ Performance, speed, optimization +🔐 Security, authentication, encryption +🌐 Global, web, internet, multi-chain +💡 Tips, ideas, insights +❓ FAQ, questions, help +📝 Notes, documentation, changelog +``` + +**Not Recommended**: +- Random emoji not connected to content meaning +- Multiple emoji per line (visual noise) +- Emoji instead of actual descriptive text +- Complex emoji that don't render consistently across platforms + +--- + +## Professional Standards and Practices + +### Length and Scope + +**Optimal README Length**: +- 500-2000 words (typically) +- Longer READMEs may indicate content belongs in separate documentation +- Shorter READMEs may not provide sufficient information + +**Content Boundaries**: +- README: Project overview, quick start, basic usage, links to detailed docs +- Separate Documentation: Detailed API docs, architecture, advanced topics +- GitHub Wiki or Docs Site: Extensive tutorials, how-to guides, FAQs + +**Rule of Thumb**: If a section exceeds 200 words, consider moving it to separate documentation and linking from README. + +### Tone and Language + +**Voice**: +- Professional yet approachable +- Friendly but not flippant +- Direct and clear +- Use "you" to address readers +- Avoid marketing hype or exaggeration +- Be honest about limitations + +**Language**: +- Use active voice when possible +- Write in present tense for current features +- Define technical terms before using +- Avoid culture-specific idioms or references +- Use "they/them" pronouns for inclusive language +- Avoid gendered language + +**Example**: +``` +✓ "You can audit smart contracts to identify vulnerabilities" +✗ "Smart contracts get audited for things that might be wrong" + +✓ "Supports 5+ blockchains including Ethereum, Polygon, and Avalanche" +✗ "We're super excited to offer blockchain support!" 
+``` + +### Accessibility + +**GitHub Markdown Rendering**: +- Test README rendering on GitHub.com (not just local) +- Ensure adequate color contrast if using color +- Use alt text for all images +- Use semantic headings for screen readers +- Avoid image-only content; include text alternative +- Test on mobile devices for responsive display + +**Automated Accessibility Checks**: +- Use GitHub's built-in accessibility checker +- Test with accessibility browser extensions +- Run markdown linters for structure validation + +### Mobile Rendering + +- GitHub automatically renders markdown responsively +- Avoid wide tables (>4 columns may not display well) +- Test badges on small screens +- Use relative widths for embedded content +- Ensure code blocks are horizontally scrollable +- Preview README on mobile device before publishing + +### Maintenance and Updates + +**Keep Current**: +- Update README when project changes significantly +- Fix broken links immediately +- Update installation instructions after major updates +- Refresh screenshots annually or after UI changes +- Update badges to reflect current project state + +**Process**: +- Link README updates to code changes in Git commits +- Include README changes in Pull Requests +- Review README during release planning +- Use GitHub Issues to track documentation needs + +**Automation**: +- Use GitHub Actions to check for broken links +- Automate badge updates from CI/CD +- Generate documentation from code comments where applicable +- Periodically (quarterly) audit links and content accuracy + +--- + +## README Checklist + +Before publishing or updating your README, verify: + +### Essential Content +- [ ] Project title is clear and descriptive +- [ ] Brief description explains what the project does +- [ ] Installation/Quick Start instructions work and are current +- [ ] At least 2-3 working usage examples provided +- [ ] Links to detailed documentation included +- [ ] Contributing guidelines linked (CONTRIBUTING.md exists) +- [ ] License clearly specified + +### Formatting and Style +- [ ] Uses clear heading hierarchy (one H1, proper heading levels) +- [ ] Proper markdown syntax with no rendering errors +- [ ] Consistent formatting throughout (bold, italics, code blocks) +- [ ] All code blocks have language identifiers +- [ ] Relative links used for internal documentation +- [ ] All links tested and working +- [ ] Images have descriptive alt text +- [ ] Images compressed and stored in appropriate directory + +### Professional Quality +- [ ] Uses active voice and clear language +- [ ] No typos or grammar errors +- [ ] Tone is professional yet approachable +- [ ] Jargon defined or explained +- [ ] Content scannable with good use of whitespace +- [ ] Mobile-friendly; tested on small screens +- [ ] Accessibility tested; alt text provided +- [ ] Information current and accurate + +### Badges and Visuals +- [ ] Badges (if used) are working and current +- [ ] Screenshots/GIFs are high quality and relevant +- [ ] Visual hierarchy draws attention to important info +- [ ] No excessive emoji or visual clutter + +### GitHub Integration +- [ ] README located in repository root +- [ ] File named exactly `README.md` (case-sensitive) +- [ ] Renders correctly on GitHub.com +- [ ] Table of Contents (if present) links work correctly +- [ ] All GitHub-specific features tested + +--- + +## Example README Template + +```markdown +# Project Title: Clear, Descriptive, Memorable + + +![Build Status](https://img.shields.io/badge/build-passing-brightgreen) 
+![License](https://img.shields.io/badge/license-MIT-blue) +![Version](https://img.shields.io/badge/version-1.0.0-brightgreen) + + +## Overview + +Brief, compelling description of what your project does (2-3 sentences). +Explain the problem it solves and its primary use cases. + +## Table of Contents + +- [Features](#features) +- [Quick Start](#quick-start) +- [Usage](#usage) +- [Documentation](#documentation) +- [Contributing](#contributing) +- [License](#license) + +## Features + +- ✨ **Key Feature 1** - Brief description of benefit +- 🚀 **Key Feature 2** - Brief description of benefit +- 📊 **Key Feature 3** - Brief description of benefit + +## Quick Start + +### Prerequisites + +- List any required software or knowledge + +### Installation + +```bash +# Step-by-step commands +git clone https://github.com/username/project.git +cd project +npm install +``` + +## Usage + +```python +# Working code example with context +``` + +## Documentation + +- [Full Documentation](./docs/) +- [API Reference](./docs/api.md) +- [Contributing Guide](./CONTRIBUTING.md) + +## Contributing + +We welcome contributions! See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines. + +## License + +Licensed under [MIT License](./LICENSE). + +## Acknowledgments + +Thanks to [inspiration/tools/people] for [reason]. +``` + +--- + +## Tools and Resources + +### Markdown Tools +- **markdown-toc** - Automatically generate table of contents +- **markdownlint** - Validate markdown syntax and style +- **prettier** - Format markdown consistently + +### Badge Resources +- **[shields.io](https://shields.io)** - Professional badge creation +- **[Badgen](https://badgen.net)** - Alternative badge generator +- **GitHub badges** - Build status, coverage, etc. + +### Testing and Validation +- **[Markdown Preview](https://github.com/markdown-preview/markdown-preview-plus)** - Preview in editor +- **GitHub Pages** - Preview README in rendered form +- **Mobile browsers** - Test responsive rendering + +### Inspiration +- Review READMEs of successful projects in your domain +- Use templates: [Best-README-Template](https://github.com/othneildrew/Best-README-Template) +- Study award-winning open-source projects \ No newline at end of file diff --git a/.cursor/rules/rules.mdc b/.cursor/rules/rules.mdc new file mode 100644 index 0000000..411dffd --- /dev/null +++ b/.cursor/rules/rules.mdc @@ -0,0 +1,43 @@ +--- +description: Clean Code Implementation and Project Management Best Practices +globs: + - "**/*.ts" + - "**/*.js" + - "**/*.tsx" + - "**/*.jsx" +alwaysApply: true +--- + +# Clean Code Implementation Techniques + +- Use descriptive, intention-revealing names for variables, functions, classes, etc. +- Avoid vague or misleading names to improve readability and maintainability. +- Each function or module should have one clear purpose, following Single Responsibility Principle (SRP). +- Write code that clearly expresses intent to minimize comments; comment "why", not "what". +- Replace magic numbers or strings with named constants for clarity. +- Organize code into layers or modules (routes, controllers, services, models). +- Implement centralized and consistent error handling. +- Use modern language features like async/await for better async operations management. +- Use code linting and formatting tools (ESLint, Prettier) automatically. +- Write unit tests to ensure correctness and ease future refactoring. +- Avoid duplications by abstracting repeated logic into reusable functions/classes. 
+- Enforce coding standards using linters and pre-commit hooks.
+- Regularly refactor code for simplicity and reduced technical debt.
+- Avoid emoji in the UI and in the codebase.
+- Always use professional UI layout, positioning, fonts, spacing, and other relevant visual details.
+
+# Project Management and Collaboration
+
+- Define clear project scope, objectives, deliverables, deadlines, and constraints.
+- Assign clear roles and responsibilities for team members.
+- Use project management tools for task, version, and documentation tracking.
+- Practice regular communication, standups, reviews, and retrospectives.
+- Design APIs/modules to be idempotent and implement caching/memoization.
+- Use code reviews with multiple reviewers and integrate automated checks.
+- Employ branching strategies like GitFlow and commit with descriptive messages.
+- Maintain detailed project documentation, including API docs and architecture decisions.
+- Automate repetitive tasks such as builds, deployments, and code quality checks.
+- Use effective communication tools (Slack, Teams) for streamlined interactions.
+- Share reusable code snippets consistently.
+- Keep audit trails recording the reason and author for each change, and enforce access controls.
+- Adapt conflict resolution styles and encourage collaborative problem-solving.
\ No newline at end of file
diff --git a/.cursor/skills.md b/.cursor/skills.md
new file mode 100644
index 0000000..7b8a28a
--- /dev/null
+++ b/.cursor/skills.md
@@ -0,0 +1,63 @@
+# HyperAgent AI Skills Index
+
+This file provides an index of available AI skills for the HyperAgent project.
+
+## Available Skills
+
+### Backend & API
+- `fastapi/` - FastAPI backend patterns and templates
+- `fastapi-templates/` - FastAPI project templates
+
+### Agent Orchestration
+- `langgraph/` - LangGraph agent orchestration
+- `langgraph-docs/` - LangGraph documentation
+- `langchain-architecture/` - LangChain architecture patterns
+
+### Database & Storage
+- `supabase-postgres-best-practices/` - Supabase and PostgreSQL best practices
+- `database-schema-designer/` - Database schema design tools
+- `database-schema-documentation/` - Database documentation generation
+
+### Smart Contracts
+- `solidity-development/` - Solidity development patterns
+- `smart-contract-security/` - Smart contract security practices
+
+### DevOps & Infrastructure
+- `gitops-principles-skill/` - GitOps principles and patterns
+- `gitops-workflow/` - GitOps workflow automation
+- `github-workflow-automation/` - GitHub Actions automation
+- `github-actions-templates/` - GitHub Actions templates
+
+### Project Management
+- `github-projects/` - GitHub Projects management
+- `github-issues/` - GitHub Issues management
+- `planning-with-files/` - File-based planning tools
+
+### Documentation & Design
+- `api-documentation-generator/` - API documentation generation
+- `beautiful-mermaid/` - Mermaid diagram generation
+- `file-organizer/` - File organization tools
+
+### Monitoring & Observability
+- `prometheus-configuration/` - Prometheus configuration
+
+### Debugging
+- `debugging-strategies/` - Debugging strategies and tools
+
+## Usage
+
+When an AI agent needs to perform a task:
+1. Check this index for relevant skills
+2. Load the specific skill from `.cursor/skills/{skill-name}/SKILL.md`
+3. Follow the skill's instructions and use its templates/references
+4. 
Apply the skill's patterns to the current task + +## Skill Structure + +Each skill follows this structure: +- `SKILL.md` - Main skill documentation +- `references/` - Reference materials and guides +- `templates/` - Code templates and examples +- `scripts/` - Utility scripts (if applicable) +- `assets/` - Additional assets (if applicable) + diff --git a/.cursor/skills/api-documentation-generator/SKILL.md b/.cursor/skills/api-documentation-generator/SKILL.md new file mode 100644 index 0000000..631aa33 --- /dev/null +++ b/.cursor/skills/api-documentation-generator/SKILL.md @@ -0,0 +1,484 @@ +--- +name: api-documentation-generator +description: "Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices" +--- + +# API Documentation Generator + +## Overview + +Automatically generate clear, comprehensive API documentation from your codebase. This skill helps you create professional documentation that includes endpoint descriptions, request/response examples, authentication details, error handling, and usage guidelines. + +Perfect for REST APIs, GraphQL APIs, and WebSocket APIs. + +## When to Use This Skill + +- Use when you need to document a new API +- Use when updating existing API documentation +- Use when your API lacks clear documentation +- Use when onboarding new developers to your API +- Use when preparing API documentation for external users +- Use when creating OpenAPI/Swagger specifications + +## How It Works + +### Step 1: Analyze the API Structure + +First, I'll examine your API codebase to understand: +- Available endpoints and routes +- HTTP methods (GET, POST, PUT, DELETE, etc.) +- Request parameters and body structure +- Response formats and status codes +- Authentication and authorization requirements +- Error handling patterns + +### Step 2: Generate Endpoint Documentation + +For each endpoint, I'll create documentation including: + +**Endpoint Details:** +- HTTP method and URL path +- Brief description of what it does +- Authentication requirements +- Rate limiting information (if applicable) + +**Request Specification:** +- Path parameters +- Query parameters +- Request headers +- Request body schema (with types and validation rules) + +**Response Specification:** +- Success response (status code + body structure) +- Error responses (all possible error codes) +- Response headers + +**Code Examples:** +- cURL command +- JavaScript/TypeScript (fetch/axios) +- Python (requests) +- Other languages as needed + +### Step 3: Add Usage Guidelines + +I'll include: +- Getting started guide +- Authentication setup +- Common use cases +- Best practices +- Rate limiting details +- Pagination patterns +- Filtering and sorting options + +### Step 4: Document Error Handling + +Clear error documentation including: +- All possible error codes +- Error message formats +- Troubleshooting guide +- Common error scenarios and solutions + +### Step 5: Create Interactive Examples + +Where possible, I'll provide: +- Postman collection +- OpenAPI/Swagger specification +- Interactive code examples +- Sample responses + +## Examples + +### Example 1: REST API Endpoint Documentation + +```markdown +## Create User + +Creates a new user account. 
+ +**Endpoint:** `POST /api/v1/users` + +**Authentication:** Required (Bearer token) + +**Request Body:** +\`\`\`json +{ + "email": "user@example.com", // Required: Valid email address + "password": "SecurePass123!", // Required: Min 8 chars, 1 uppercase, 1 number + "name": "John Doe", // Required: 2-50 characters + "role": "user" // Optional: "user" or "admin" (default: "user") +} +\`\`\` + +**Success Response (201 Created):** +\`\`\`json +{ + "id": "usr_1234567890", + "email": "user@example.com", + "name": "John Doe", + "role": "user", + "createdAt": "2026-01-20T10:30:00Z", + "emailVerified": false +} +\`\`\` + +**Error Responses:** + +- `400 Bad Request` - Invalid input data + \`\`\`json + { + "error": "VALIDATION_ERROR", + "message": "Invalid email format", + "field": "email" + } + \`\`\` + +- `409 Conflict` - Email already exists + \`\`\`json + { + "error": "EMAIL_EXISTS", + "message": "An account with this email already exists" + } + \`\`\` + +- `401 Unauthorized` - Missing or invalid authentication token + +**Example Request (cURL):** +\`\`\`bash +curl -X POST https://api.example.com/api/v1/users \ + -H "Authorization: Bearer YOUR_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "email": "user@example.com", + "password": "SecurePass123!", + "name": "John Doe" + }' +\`\`\` + +**Example Request (JavaScript):** +\`\`\`javascript +const response = await fetch('https://api.example.com/api/v1/users', { + method: 'POST', + headers: { + 'Authorization': `Bearer ${token}`, + 'Content-Type': 'application/json' + }, + body: JSON.stringify({ + email: 'user@example.com', + password: 'SecurePass123!', + name: 'John Doe' + }) +}); + +const user = await response.json(); +console.log(user); +\`\`\` + +**Example Request (Python):** +\`\`\`python +import requests + +response = requests.post( + 'https://api.example.com/api/v1/users', + headers={ + 'Authorization': f'Bearer {token}', + 'Content-Type': 'application/json' + }, + json={ + 'email': 'user@example.com', + 'password': 'SecurePass123!', + 'name': 'John Doe' + } +) + +user = response.json() +print(user) +\`\`\` +``` + +### Example 2: GraphQL API Documentation + +```markdown +## User Query + +Fetch user information by ID. + +**Query:** +\`\`\`graphql +query GetUser($id: ID!) { + user(id: $id) { + id + email + name + role + createdAt + posts { + id + title + publishedAt + } + } +} +\`\`\` + +**Variables:** +\`\`\`json +{ + "id": "usr_1234567890" +} +\`\`\` + +**Response:** +\`\`\`json +{ + "data": { + "user": { + "id": "usr_1234567890", + "email": "user@example.com", + "name": "John Doe", + "role": "user", + "createdAt": "2026-01-20T10:30:00Z", + "posts": [ + { + "id": "post_123", + "title": "My First Post", + "publishedAt": "2026-01-21T14:00:00Z" + } + ] + } + } +} +\`\`\` + +**Errors:** +\`\`\`json +{ + "errors": [ + { + "message": "User not found", + "extensions": { + "code": "USER_NOT_FOUND", + "userId": "usr_1234567890" + } + } + ] +} +\`\`\` +``` + +### Example 3: Authentication Documentation + +```markdown +## Authentication + +All API requests require authentication using Bearer tokens. 
+ +### Getting a Token + +**Endpoint:** `POST /api/v1/auth/login` + +**Request:** +\`\`\`json +{ + "email": "user@example.com", + "password": "your-password" +} +\`\`\` + +**Response:** +\`\`\`json +{ + "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...", + "expiresIn": 3600, + "refreshToken": "refresh_token_here" +} +\`\`\` + +### Using the Token + +Include the token in the Authorization header: + +\`\`\` +Authorization: Bearer YOUR_TOKEN +\`\`\` + +### Token Expiration + +Tokens expire after 1 hour. Use the refresh token to get a new access token: + +**Endpoint:** `POST /api/v1/auth/refresh` + +**Request:** +\`\`\`json +{ + "refreshToken": "refresh_token_here" +} +\`\`\` +``` + +## Best Practices + +### ✅ Do This + +- **Be Consistent** - Use the same format for all endpoints +- **Include Examples** - Provide working code examples in multiple languages +- **Document Errors** - List all possible error codes and their meanings +- **Show Real Data** - Use realistic example data, not "foo" and "bar" +- **Explain Parameters** - Describe what each parameter does and its constraints +- **Version Your API** - Include version numbers in URLs (/api/v1/) +- **Add Timestamps** - Show when documentation was last updated +- **Link Related Endpoints** - Help users discover related functionality +- **Include Rate Limits** - Document any rate limiting policies +- **Provide Postman Collection** - Make it easy to test your API + +### ❌ Don't Do This + +- **Don't Skip Error Cases** - Users need to know what can go wrong +- **Don't Use Vague Descriptions** - "Gets data" is not helpful +- **Don't Forget Authentication** - Always document auth requirements +- **Don't Ignore Edge Cases** - Document pagination, filtering, sorting +- **Don't Leave Examples Broken** - Test all code examples +- **Don't Use Outdated Info** - Keep documentation in sync with code +- **Don't Overcomplicate** - Keep it simple and scannable +- **Don't Forget Response Headers** - Document important headers + +## Documentation Structure + +### Recommended Sections + +1. **Introduction** + - What the API does + - Base URL + - API version + - Support contact + +2. **Authentication** + - How to authenticate + - Token management + - Security best practices + +3. **Quick Start** + - Simple example to get started + - Common use case walkthrough + +4. **Endpoints** + - Organized by resource + - Full details for each endpoint + +5. **Data Models** + - Schema definitions + - Field descriptions + - Validation rules + +6. **Error Handling** + - Error code reference + - Error response format + - Troubleshooting guide + +7. **Rate Limiting** + - Limits and quotas + - Headers to check + - Handling rate limit errors + +8. **Changelog** + - API version history + - Breaking changes + - Deprecation notices + +9. 
**SDKs and Tools** + - Official client libraries + - Postman collection + - OpenAPI specification + +## Common Pitfalls + +### Problem: Documentation Gets Out of Sync +**Symptoms:** Examples don't work, parameters are wrong, endpoints return different data +**Solution:** +- Generate docs from code comments/annotations +- Use tools like Swagger/OpenAPI +- Add API tests that validate documentation +- Review docs with every API change + +### Problem: Missing Error Documentation +**Symptoms:** Users don't know how to handle errors, support tickets increase +**Solution:** +- Document every possible error code +- Provide clear error messages +- Include troubleshooting steps +- Show example error responses + +### Problem: Examples Don't Work +**Symptoms:** Users can't get started, frustration increases +**Solution:** +- Test every code example +- Use real, working endpoints +- Include complete examples (not fragments) +- Provide a sandbox environment + +### Problem: Unclear Parameter Requirements +**Symptoms:** Users send invalid requests, validation errors +**Solution:** +- Mark required vs optional clearly +- Document data types and formats +- Show validation rules +- Provide example values + +## Tools and Formats + +### OpenAPI/Swagger +Generate interactive documentation: +```yaml +openapi: 3.0.0 +info: + title: My API + version: 1.0.0 +paths: + /users: + post: + summary: Create a new user + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/CreateUserRequest' +``` + +### Postman Collection +Export collection for easy testing: +```json +{ + "info": { + "name": "My API", + "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json" + }, + "item": [ + { + "name": "Create User", + "request": { + "method": "POST", + "url": "{{baseUrl}}/api/v1/users" + } + } + ] +} +``` + +## Related Skills + +- `@doc-coauthoring` - For collaborative documentation writing +- `@copywriting` - For clear, user-friendly descriptions +- `@test-driven-development` - For ensuring API behavior matches docs +- `@systematic-debugging` - For troubleshooting API issues + +## Additional Resources + +- [OpenAPI Specification](https://swagger.io/specification/) +- [REST API Best Practices](https://restfulapi.net/) +- [GraphQL Documentation](https://graphql.org/learn/) +- [API Design Patterns](https://www.apiguide.com/) +- [Postman Documentation](https://learning.postman.com/docs/) + +--- + +**Pro Tip:** Keep your API documentation as close to your code as possible. Use tools that generate docs from code comments to ensure they stay in sync! diff --git a/.cursor/skills/beautiful-mermaid/SKILL.md b/.cursor/skills/beautiful-mermaid/SKILL.md new file mode 100644 index 0000000..cde7900 --- /dev/null +++ b/.cursor/skills/beautiful-mermaid/SKILL.md @@ -0,0 +1,171 @@ +--- +name: beautiful-mermaid +description: Render Mermaid diagrams as SVG and PNG using the Beautiful Mermaid library. Use when the user asks to render a Mermaid diagram. +--- + +# Beautiful Mermaid Diagram Rendering + +Render Mermaid diagrams as SVG and PNG images using the Beautiful Mermaid library. + +## Dependencies + +This skill requires the `agent-browser` skill for PNG rendering. Load it before proceeding with PNG capture. 
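+
+A quick pre-flight check helps confirm the required tooling is present before starting a render. This is a minimal sketch that assumes `agent-browser` and at least one supported runtime are installed and on your PATH:
+
+```bash
+# Confirm a TypeScript runtime is available (any one of these is enough)
+command -v bun || command -v npx || command -v deno
+
+# Confirm the agent-browser CLI is available for PNG capture
+command -v agent-browser || echo "agent-browser not found - load the agent-browser skill first"
+```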
+ +## Supported Diagram Types + +- **Flowchart** - Process flows, decision trees, CI/CD pipelines +- **Sequence** - API calls, OAuth flows, database transactions +- **State** - State machines, connection lifecycles +- **Class** - UML class diagrams, design patterns +- **Entity-Relationship** - Database schemas, data models + +## Available Themes + +Default, Dracula, Solarized, Zinc Dark, Tokyo Night, Tokyo Night Storm, Tokyo Night Light, Catppuccin Latte, Nord, Nord Light, GitHub Dark, GitHub Light, One Dark. + +If no theme is specified, use `default`. + +## Common Syntax Patterns + +### Flowchart Edge Labels + +Use pipe syntax for edge labels: + +```mermaid +A -->|label| B +A ---|label| B +``` + +Avoid space-dash syntax which can cause incomplete renders: + +```mermaid +A -- label --> B # May cause issues +``` + +### Node Labels with Special Characters + +Wrap labels containing special characters in quotes: + +```mermaid +A["Label with (parens)"] +B["Label with / slash"] +``` + +## Workflow + +### Step 1: Generate or Validate Mermaid Code + +If the user provides a description rather than code, generate valid Mermaid syntax. Consult `references/mermaid-syntax.md` for full syntax details. + +### Step 2: Render SVG + +Run the rendering script to produce an SVG file: + +```bash +bun run scripts/render.ts --code "graph TD; A-->B" --output diagram --theme default +``` + +Or from a file: + +```bash +bun run scripts/render.ts --input diagram.mmd --output diagram --theme tokyo-night +``` + +Alternative runtimes: +```bash +npx tsx scripts/render.ts --code "..." --output diagram +deno run --allow-read --allow-write --allow-net scripts/render.ts --code "..." --output diagram +``` + +This produces `.svg` in the current working directory. + +### Step 3: Create HTML Wrapper + +Run the HTML wrapper script to prepare for screenshot: + +```bash +bun run scripts/create-html.ts --svg diagram.svg --output diagram.html +``` + +This creates a minimal HTML file that displays the SVG with proper padding and background. + +### Step 4: Capture High-Resolution PNG with agent-browser + +Use the agent-browser CLI to capture a high-quality screenshot. Refer to the `agent-browser` skill for full CLI documentation. + +```bash +# Set 4K viewport for high-resolution capture +agent-browser set viewport 3840 2160 + +# Open the HTML wrapper +agent-browser open "file://$(pwd)/diagram.html" + +# Wait for render to complete +agent-browser wait 1000 + +# Capture full-page screenshot +agent-browser screenshot --full diagram.png + +# Close browser +agent-browser close +``` + +For even higher resolution on complex diagrams, increase the viewport further or use the `--padding` option when creating the HTML wrapper to give the diagram more space. + +### Step 5: Clean Up Intermediary Files + +After rendering, remove all intermediary files. Only the final `.svg` and `.png` should remain. + +Files to clean up: +- The HTML wrapper file (e.g., `diagram.html`) +- Any temporary `.mmd` files created to hold diagram code +- Any other files created during the rendering process + +```bash +rm diagram.html +``` + +If a temporary `.mmd` file was created, remove it as well. + +## Output + +Both outputs are always produced: +- **SVG**: Vector format, infinitely scalable, small file size +- **PNG**: High-resolution raster, captured at 4K (3840×2160) viewport with minimum 1200px diagram width + +Files are saved to the current working directory unless the user explicitly specifies a different path. 
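+
+Putting the steps together, an end-to-end run looks roughly like the following. This is an illustrative sketch; the file names and theme are placeholders, and it assumes the Bun runtime:
+
+```bash
+# Step 2: render the Mermaid source to SVG
+bun run scripts/render.ts --input diagram.mmd --output diagram --theme tokyo-night
+
+# Step 3: wrap the SVG in a minimal HTML page for screenshotting
+bun run scripts/create-html.ts --svg diagram.svg --output diagram.html
+
+# Step 4: capture a high-resolution PNG with agent-browser
+agent-browser set viewport 3840 2160
+agent-browser open "file://$(pwd)/diagram.html"
+agent-browser wait 1000
+agent-browser screenshot --full diagram.png
+agent-browser close
+
+# Step 5: clean up intermediary files, keeping only diagram.svg and diagram.png
+rm diagram.html diagram.mmd
+```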
+ +## Theme Selection Guide + +| Theme | Background | Best For | +|-------|------------|----------| +| default | Light grey | General use | +| dracula | Dark purple | Dark mode preference | +| tokyo-night | Dark blue | Modern dark aesthetic | +| tokyo-night-storm | Darker blue | Higher contrast | +| nord | Dark arctic | Muted, calm visuals | +| nord-light | Light arctic | Light mode with soft tones | +| github-dark | GitHub dark | Matches GitHub UI | +| github-light | GitHub light | Matches GitHub UI | +| catppuccin-latte | Warm light | Soft pastel aesthetic | +| solarized | Tan/cream | Solarized colour scheme | +| one-dark | Atom dark | Atom editor aesthetic | +| zinc-dark | Neutral dark | Minimal, no colour bias | + +## Troubleshooting + +### Theme not applied + +Check the render script output for the `bg` and `fg` values, or inspect the SVG's opening tag for `--bg` and `--fg` CSS custom properties. + +### Diagram appears cut off or incomplete + +- Check edge label syntax — use `-->|label|` pipe notation, not `-- label -->` +- Verify all node IDs are unique +- Check for unclosed brackets in node labels + +### Render produces empty or malformed SVG + +- Validate Mermaid syntax at https://mermaid.live before rendering +- Check for special characters that need escaping (wrap in quotes) +- Ensure flowchart direction is specified (`graph TD`, `graph LR`, etc.) diff --git a/.cursor/skills/beautiful-mermaid/references/mermaid-syntax.md b/.cursor/skills/beautiful-mermaid/references/mermaid-syntax.md new file mode 100644 index 0000000..6c69475 --- /dev/null +++ b/.cursor/skills/beautiful-mermaid/references/mermaid-syntax.md @@ -0,0 +1,235 @@ +# Mermaid Syntax Reference + +Quick reference for generating valid Mermaid diagram code. + +## Flowchart + +```mermaid +graph TD + A[Start] --> B{Decision} + B -->|Yes| C[Action 1] + B -->|No| D[Action 2] + C --> E[End] + D --> E +``` + +### Direction +- `TD` / `TB` - Top to bottom +- `BT` - Bottom to top +- `LR` - Left to right +- `RL` - Right to left + +### Node Shapes +- `A[Text]` - Rectangle +- `A(Text)` - Rounded rectangle +- `A([Text])` - Stadium/pill +- `A[[Text]]` - Subroutine +- `A[(Text)]` - Cylinder (database) +- `A((Text))` - Circle +- `A>Text]` - Asymmetric +- `A{Text}` - Diamond (decision) +- `A{{Text}}` - Hexagon +- `A[/Text/]` - Parallelogram +- `A[\Text\]` - Parallelogram alt +- `A[/Text\]` - Trapezoid +- `A[\Text/]` - Trapezoid alt + +### Edge Styles +- `A --> B` - Arrow +- `A --- B` - Line +- `A -.-> B` - Dotted arrow +- `A ==> B` - Thick arrow +- `A -->|text| B` - Arrow with label (preferred) +- `A ---|text| B` - Line with label (preferred) + +**Important**: Always use pipe syntax `-->|label|` for edge labels. The space-dash syntax `-- label -->` can cause incomplete renders. 
+ +### Subgraphs +```mermaid +graph TD + subgraph Group1 [Label] + A --> B + end + subgraph Group2 + C --> D + end + B --> C +``` + +## Sequence Diagram + +```mermaid +sequenceDiagram + participant A as Alice + participant B as Bob + A->>B: Hello + B-->>A: Hi there + A->>+B: Start process + B-->>-A: Done +``` + +### Arrow Types +- `->>` - Solid arrow +- `-->>` - Dashed arrow +- `-x` - Solid with x +- `--x` - Dashed with x +- `-)` - Solid open arrow +- `--)` - Dashed open arrow + +### Activations +- `+` after arrow activates participant +- `-` after arrow deactivates participant + +### Notes and Boxes +```mermaid +sequenceDiagram + Note over A,B: Shared note + Note right of A: Side note + rect rgb(200, 220, 255) + A->>B: In a box + end +``` + +### Loops and Conditionals +```mermaid +sequenceDiagram + loop Every minute + A->>B: Ping + end + alt Success + B-->>A: Pong + else Failure + B-->>A: Error + end + opt Optional + A->>B: Extra step + end +``` + +## State Diagram + +```mermaid +stateDiagram-v2 + [*] --> Idle + Idle --> Processing : start + Processing --> Done : complete + Processing --> Error : fail + Error --> Idle : reset + Done --> [*] +``` + +### Composite States +```mermaid +stateDiagram-v2 + state Active { + [*] --> Running + Running --> Paused : pause + Paused --> Running : resume + } + Idle --> Active : activate + Active --> Idle : deactivate +``` + +### Notes +```mermaid +stateDiagram-v2 + State1 : Description here + note right of State1 + Additional info + end note +``` + +## Class Diagram + +```mermaid +classDiagram + class Animal { + +String name + +int age + +makeSound() void + } + class Dog { + +bark() void + } + Animal <|-- Dog : extends +``` + +### Relationships +- `<|--` - Inheritance +- `*--` - Composition +- `o--` - Aggregation +- `-->` - Association +- `--` - Link (solid) +- `..>` - Dependency +- `..|>` - Realisation +- `..` - Link (dashed) + +### Cardinality +```mermaid +classDiagram + Customer "1" --> "*" Order + Order "1" --> "1..*" LineItem +``` + +### Visibility +- `+` Public +- `-` Private +- `#` Protected +- `~` Package/Internal + +## Entity-Relationship Diagram + +```mermaid +erDiagram + CUSTOMER ||--o{ ORDER : places + ORDER ||--|{ LINE-ITEM : contains + PRODUCT }|..|{ LINE-ITEM : "ordered in" +``` + +### Relationship Types +- `||` - Exactly one +- `|{` - One or more +- `o{` - Zero or more +- `o|` - Zero or one + +### Identifying vs Non-identifying +- `--` - Identifying (solid) +- `..` - Non-identifying (dashed) + +### Attributes +```mermaid +erDiagram + CUSTOMER { + string id PK + string name + string email UK + } + ORDER { + int id PK + string customer_id FK + date created_at + } +``` + +## Styling + +### CSS Classes +```mermaid +graph TD + A:::highlight --> B + classDef highlight fill:#f96,stroke:#333 +``` + +### Inline Styles +```mermaid +graph TD + A --> B + style A fill:#bbf,stroke:#333 +``` + +## Tips + +1. **Escape special characters**: Use quotes for labels with special chars: `A["Label with (parens)"]` +2. **Multi-line labels**: Use `
` for line breaks +3. **Comments**: Use `%%` for comments that won't render +4. **IDs vs Labels**: Node IDs should be simple, labels can be complex: `node1["Complex Label Here"]` diff --git a/.cursor/skills/beautiful-mermaid/scripts/create-html.ts b/.cursor/skills/beautiful-mermaid/scripts/create-html.ts new file mode 100644 index 0000000..3491002 --- /dev/null +++ b/.cursor/skills/beautiful-mermaid/scripts/create-html.ts @@ -0,0 +1,177 @@ +#!/usr/bin/env -S npx tsx +/** + * Create an HTML wrapper for an SVG to enable high-quality PNG capture + * + * Usage: + * bun run create-html.ts --svg diagram.svg --output diagram.html + * bun run create-html.ts --svg diagram.svg --output diagram.html --padding 40 + * + * Runtimes: + * bun run create-html.ts ... + * npx tsx create-html.ts ... + * deno run --allow-read --allow-write create-html.ts ... + */ + +import { readFileSync, writeFileSync, existsSync } from "node:fs"; +import { resolve, basename } from "node:path"; + +interface Args { + svg: string; + output: string; + padding: number; + background?: string; +} + +function parseArgs(): Args { + const args = process.argv.slice(2); + const result: Partial = { padding: 40 }; + + for (let i = 0; i < args.length; i++) { + const arg = args[i]; + const next = args[i + 1]; + + switch (arg) { + case "--svg": + case "-s": + result.svg = next; + i++; + break; + case "--output": + case "-o": + result.output = next; + i++; + break; + case "--padding": + case "-p": + result.padding = parseInt(next, 10) || 40; + i++; + break; + case "--background": + case "-b": + result.background = next; + i++; + break; + case "--help": + case "-h": + printHelp(); + process.exit(0); + } + } + + if (!result.svg) { + console.error("Error: --svg is required"); + printHelp(); + process.exit(1); + } + + if (!result.output) { + console.error("Error: --output is required"); + printHelp(); + process.exit(1); + } + + return result as Args; +} + +function printHelp(): void { + console.log(` +SVG to HTML Wrapper + +Creates a minimal HTML file for screenshot capture of SVG diagrams. + +Usage: + create-html.ts --svg --output [options] + +Options: + -s, --svg Input SVG file + -o, --output Output HTML file + -p, --padding Padding around SVG (default: 40) + -b, --background Background colour (auto-detected from SVG) + -h, --help Show this help + +Examples: + create-html.ts --svg diagram.svg --output diagram.html + create-html.ts --svg diagram.svg --output diagram.html --padding 60 + create-html.ts --svg diagram.svg --output diagram.html --background "#1a1b26" +`); +} + +function extractBackgroundFromSvg(svgContent: string): string | null { + // Try to extract background from SVG style or rect + const bgMatch = svgContent.match(/background(?:-color)?:\s*([^;"\s]+)/i); + if (bgMatch) return bgMatch[1]; + + // Check for a background rect + const rectMatch = svgContent.match( + /]*fill="([^"]+)"[^>]*(?:width="100%"|height="100%")/i + ); + if (rectMatch) return rectMatch[1]; + + // Check style attribute on svg element + const svgStyleMatch = svgContent.match( + /]*style="[^"]*background(?:-color)?:\s*([^;"\s]+)/i + ); + if (svgStyleMatch) return svgStyleMatch[1]; + + return null; +} + +function main(): void { + const args = parseArgs(); + + const svgPath = resolve(args.svg); + if (!existsSync(svgPath)) { + console.error(`SVG file not found: ${svgPath}`); + process.exit(1); + } + + const svgContent = readFileSync(svgPath, "utf-8"); + + // Determine background colour + const background = + args.background ?? extractBackgroundFromSvg(svgContent) ?? 
"#ffffff"; + + // Create HTML wrapper optimised for high-resolution screenshot + // SVG renders at natural size with generous padding, no constraints + const html = ` + + + + + ${basename(args.svg, ".svg")} + + + +
+ ${svgContent} +
+ +`; + + const outputPath = resolve(args.output); + writeFileSync(outputPath, html, "utf-8"); + + console.log(`HTML wrapper written to: ${outputPath}`); + console.log(`Background colour: ${background}`); +} + +main(); diff --git a/.cursor/skills/beautiful-mermaid/scripts/render.ts b/.cursor/skills/beautiful-mermaid/scripts/render.ts new file mode 100644 index 0000000..959a17d --- /dev/null +++ b/.cursor/skills/beautiful-mermaid/scripts/render.ts @@ -0,0 +1,221 @@ +#!/usr/bin/env -S npx tsx +/** + * Render a Mermaid diagram to SVG using Beautiful Mermaid + * + * Usage: + * bun run render.ts --input diagram.mmd --output diagram --theme tokyo-night + * bun run render.ts --code "graph TD; A-->B" --output diagram + * + * Runtimes: + * bun run render.ts ... + * npx tsx render.ts ... + * deno run --allow-read --allow-write --allow-net render.ts ... + * + * Output: + * Produces .svg + */ + +import { readFileSync, writeFileSync, existsSync } from "node:fs"; +import { resolve } from "node:path"; + +const THEMES = [ + "default", + "dracula", + "solarized", + "zinc-dark", + "tokyo-night", + "tokyo-night-storm", + "tokyo-night-light", + "catppuccin-latte", + "nord", + "nord-light", + "github-dark", + "github-light", + "one-dark", +] as const; + +type Theme = (typeof THEMES)[number]; + +interface Args { + input?: string; + code?: string; + output: string; + theme: Theme; +} + +function parseArgs(): Args { + const args = process.argv.slice(2); + const result: Partial = { theme: "default" }; + + for (let i = 0; i < args.length; i++) { + const arg = args[i]; + const next = args[i + 1]; + + switch (arg) { + case "--input": + case "-i": + result.input = next; + i++; + break; + case "--code": + case "-c": + result.code = next; + i++; + break; + case "--output": + case "-o": + result.output = next; + i++; + break; + case "--theme": + case "-t": + if (next && THEMES.includes(next as Theme)) { + result.theme = next as Theme; + } else { + console.error(`Invalid theme: ${next}`); + console.error(`Available themes: ${THEMES.join(", ")}`); + process.exit(1); + } + i++; + break; + case "--help": + case "-h": + printHelp(); + process.exit(0); + } + } + + if (!result.input && !result.code) { + console.error("Error: Either --input or --code is required"); + printHelp(); + process.exit(1); + } + + if (!result.output) { + console.error("Error: --output is required"); + printHelp(); + process.exit(1); + } + + return result as Args; +} + +function printHelp(): void { + console.log(` +Beautiful Mermaid Renderer + +Renders Mermaid diagrams to SVG. + +Usage: + render.ts --input --output [--theme ] + render.ts --code "" --output [--theme ] + +Options: + -i, --input Input Mermaid file (.mmd) + -c, --code Mermaid code as string + -o, --output Output base name (without extension) + -t, --theme Theme name (default: default) + -h, --help Show this help + +Available themes: + ${THEMES.join(", ")} + +Output: + Produces .svg + +Examples: + render.ts -i diagram.mmd -o diagram -t tokyo-night + render.ts -c "graph TD; A-->B" -o simple +`); +} + +function detectRuntime(): "bun" | "deno" | "node" { + if (typeof (globalThis as any).Bun !== "undefined") return "bun"; + if (typeof (globalThis as any).Deno !== "undefined") return "deno"; + return "node"; +} + +async function ensurePackage(name: string): Promise { + const runtime = detectRuntime(); + + try { + if (runtime === "deno") { + return await import(`npm:${name}`); + } + return await import(name); + } catch { + console.error(`${name} not found. 
Installing...`); + + const { execSync } = await import("node:child_process"); + + try { + if (runtime === "bun") { + execSync(`bun add ${name}`, { stdio: "inherit" }); + } else if (runtime === "deno") { + return await import(`npm:${name}`); + } else { + execSync(`npm install ${name}`, { stdio: "inherit" }); + } + return await import(name); + } catch (installError) { + console.error(`Failed to install ${name}:`, installError); + process.exit(1); + } + } +} + +function getThemeConfig(themeName: Theme): { bg: string; fg: string } { + const themeConfigs: Record = { + default: { bg: "#f5f5f5", fg: "#333333" }, + dracula: { bg: "#282a36", fg: "#f8f8f2" }, + solarized: { bg: "#fdf6e3", fg: "#657b83" }, + "zinc-dark": { bg: "#18181b", fg: "#fafafa" }, + "tokyo-night": { bg: "#1a1b26", fg: "#a9b1d6" }, + "tokyo-night-storm": { bg: "#24283b", fg: "#a9b1d6" }, + "tokyo-night-light": { bg: "#d5d6db", fg: "#343b58" }, + "catppuccin-latte": { bg: "#eff1f5", fg: "#4c4f69" }, + nord: { bg: "#2e3440", fg: "#eceff4" }, + "nord-light": { bg: "#eceff4", fg: "#2e3440" }, + "github-dark": { bg: "#0d1117", fg: "#c9d1d9" }, + "github-light": { bg: "#ffffff", fg: "#24292f" }, + "one-dark": { bg: "#282c34", fg: "#abb2bf" }, + }; + + return themeConfigs[themeName]; +} + +async function main(): Promise { + const args = parseArgs(); + + let mermaidCode: string; + if (args.input) { + const inputPath = resolve(args.input); + if (!existsSync(inputPath)) { + console.error(`Input file not found: ${inputPath}`); + process.exit(1); + } + mermaidCode = readFileSync(inputPath, "utf-8"); + } else { + mermaidCode = args.code!; + } + + console.log(`Rendering diagram with theme: ${args.theme}`); + + const beautifulMermaid = await ensurePackage("beautiful-mermaid"); + const renderMermaid = beautifulMermaid.renderMermaid; + const THEMES = beautifulMermaid.THEMES; + + const themeConfig = THEMES?.[args.theme] ?? getThemeConfig(args.theme); + console.log(`Using theme: bg=${themeConfig.bg}, fg=${themeConfig.fg}`); + + const svg = await renderMermaid(mermaidCode, themeConfig); + + const svgPath = resolve(`${args.output}.svg`); + writeFileSync(svgPath, svg, "utf-8"); + console.log(`SVG written to: ${svgPath}`); +} + +main().catch((err) => { + console.error("Error:", err.message); + process.exit(1); +}); diff --git a/.cursor/skills/database-schema-designer/SKILL.md b/.cursor/skills/database-schema-designer/SKILL.md new file mode 100644 index 0000000..624a8e2 --- /dev/null +++ b/.cursor/skills/database-schema-designer/SKILL.md @@ -0,0 +1,687 @@ +--- +name: database-schema-designer +description: Design robust, scalable database schemas for SQL and NoSQL databases. Provides normalization guidelines, indexing strategies, migration patterns, constraint design, and performance optimization. Ensures data integrity, query performance, and maintainable data models. +license: MIT +--- + +# Database Schema Designer + +Design production-ready database schemas with best practices built-in. 
+ +--- + +## Quick Start + +Just describe your data model: + +``` +design a schema for an e-commerce platform with users, products, orders +``` + +You'll get a complete SQL schema like: + +```sql +CREATE TABLE users ( + id BIGINT AUTO_INCREMENT PRIMARY KEY, + email VARCHAR(255) UNIQUE NOT NULL, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP +); + +CREATE TABLE orders ( + id BIGINT AUTO_INCREMENT PRIMARY KEY, + user_id BIGINT NOT NULL REFERENCES users(id), + total DECIMAL(10,2) NOT NULL, + INDEX idx_orders_user (user_id) +); +``` + +**What to include in your request:** +- Entities (users, products, orders) +- Key relationships (users have orders, orders have items) +- Scale hints (high-traffic, millions of records) +- Database preference (SQL/NoSQL) - defaults to SQL if not specified + +--- + +## Triggers + +| Trigger | Example | +|---------|---------| +| `design schema` | "design a schema for user authentication" | +| `database design` | "database design for multi-tenant SaaS" | +| `create tables` | "create tables for a blog system" | +| `schema for` | "schema for inventory management" | +| `model data` | "model data for real-time analytics" | +| `I need a database` | "I need a database for tracking orders" | +| `design NoSQL` | "design NoSQL schema for product catalog" | + +--- + +## Key Terms + +| Term | Definition | +|------|------------| +| **Normalization** | Organizing data to reduce redundancy (1NF → 2NF → 3NF) | +| **3NF** | Third Normal Form - no transitive dependencies between columns | +| **OLTP** | Online Transaction Processing - write-heavy, needs normalization | +| **OLAP** | Online Analytical Processing - read-heavy, benefits from denormalization | +| **Foreign Key (FK)** | Column that references another table's primary key | +| **Index** | Data structure that speeds up queries (at cost of slower writes) | +| **Access Pattern** | How your app reads/writes data (queries, joins, filters) | +| **Denormalization** | Intentionally duplicating data to speed up reads | + +--- + +## Quick Reference + +| Task | Approach | Key Consideration | +|------|----------|-------------------| +| New schema | Normalize to 3NF first | Domain modeling over UI | +| SQL vs NoSQL | Access patterns decide | Read/write ratio matters | +| Primary keys | INT or UUID | UUID for distributed systems | +| Foreign keys | Always constrain | ON DELETE strategy critical | +| Indexes | FKs + WHERE columns | Column order matters | +| Migrations | Always reversible | Backward compatible first | + +--- + +## Process Overview + +``` +Your Data Requirements + | + v ++-----------------------------------------------------+ +| Phase 1: ANALYSIS | +| * Identify entities and relationships | +| * Determine access patterns (read vs write heavy) | +| * Choose SQL or NoSQL based on requirements | ++-----------------------------------------------------+ + | + v ++-----------------------------------------------------+ +| Phase 2: DESIGN | +| * Normalize to 3NF (SQL) or embed/reference (NoSQL) | +| * Define primary keys and foreign keys | +| * Choose appropriate data types | +| * Add constraints (UNIQUE, CHECK, NOT NULL) | ++-----------------------------------------------------+ + | + v ++-----------------------------------------------------+ +| Phase 3: OPTIMIZE | +| * Plan indexing strategy | +| * Consider denormalization for read-heavy queries | +| * Add timestamps (created_at, updated_at) | ++-----------------------------------------------------+ + | + v ++-----------------------------------------------------+ +| Phase 4: 
MIGRATE | +| * Generate migration scripts (up + down) | +| * Ensure backward compatibility | +| * Plan zero-downtime deployment | ++-----------------------------------------------------+ + | + v +Production-Ready Schema +``` + +--- + +## Commands + +| Command | When to Use | Action | +|---------|-------------|--------| +| `design schema for {domain}` | Starting fresh | Full schema generation | +| `normalize {table}` | Fixing existing table | Apply normalization rules | +| `add indexes for {table}` | Performance issues | Generate index strategy | +| `migration for {change}` | Schema evolution | Create reversible migration | +| `review schema` | Code review | Audit existing schema | + +**Workflow:** Start with `design schema` → iterate with `normalize` → optimize with `add indexes` → evolve with `migration` + +--- + +## Core Principles + +| Principle | WHY | Implementation | +|-----------|-----|----------------| +| Model the Domain | UI changes, domain doesn't | Entity names reflect business concepts | +| Data Integrity First | Corruption is costly to fix | Constraints at database level | +| Optimize for Access Pattern | Can't optimize for both | OLTP: normalized, OLAP: denormalized | +| Plan for Scale | Retrofitting is painful | Index strategy + partitioning plan | + +--- + +## Anti-Patterns + +| Avoid | Why | Instead | +|-------|-----|---------| +| VARCHAR(255) everywhere | Wastes storage, hides intent | Size appropriately per field | +| FLOAT for money | Rounding errors | DECIMAL(10,2) | +| Missing FK constraints | Orphaned data | Always define foreign keys | +| No indexes on FKs | Slow JOINs | Index every foreign key | +| Storing dates as strings | Can't compare/sort | DATE, TIMESTAMP types | +| SELECT * in queries | Fetches unnecessary data | Explicit column lists | +| Non-reversible migrations | Can't rollback | Always write DOWN migration | +| Adding NOT NULL without default | Breaks existing rows | Add nullable, backfill, then constrain | + +--- + +## Verification Checklist + +After designing a schema: + +- [ ] Every table has a primary key +- [ ] All relationships have foreign key constraints +- [ ] ON DELETE strategy defined for each FK +- [ ] Indexes exist on all foreign keys +- [ ] Indexes exist on frequently queried columns +- [ ] Appropriate data types (DECIMAL for money, etc.) +- [ ] NOT NULL on required fields +- [ ] UNIQUE constraints where needed +- [ ] CHECK constraints for validation +- [ ] created_at and updated_at timestamps +- [ ] Migration scripts are reversible +- [ ] Tested on staging with production data + +--- + +
+Deep Dive: Normalization (SQL) + +### Normal Forms + +| Form | Rule | Violation Example | +|------|------|-------------------| +| **1NF** | Atomic values, no repeating groups | `product_ids = '1,2,3'` | +| **2NF** | 1NF + no partial dependencies | customer_name in order_items | +| **3NF** | 2NF + no transitive dependencies | country derived from postal_code | + +### 1st Normal Form (1NF) + +```sql +-- BAD: Multiple values in column +CREATE TABLE orders ( + id INT PRIMARY KEY, + product_ids VARCHAR(255) -- '101,102,103' +); + +-- GOOD: Separate table for items +CREATE TABLE orders ( + id INT PRIMARY KEY, + customer_id INT +); + +CREATE TABLE order_items ( + id INT PRIMARY KEY, + order_id INT REFERENCES orders(id), + product_id INT +); +``` + +### 2nd Normal Form (2NF) + +```sql +-- BAD: customer_name depends only on customer_id +CREATE TABLE order_items ( + order_id INT, + product_id INT, + customer_name VARCHAR(100), -- Partial dependency! + PRIMARY KEY (order_id, product_id) +); + +-- GOOD: Customer data in separate table +CREATE TABLE customers ( + id INT PRIMARY KEY, + name VARCHAR(100) +); +``` + +### 3rd Normal Form (3NF) + +```sql +-- BAD: country depends on postal_code +CREATE TABLE customers ( + id INT PRIMARY KEY, + postal_code VARCHAR(10), + country VARCHAR(50) -- Transitive dependency! +); + +-- GOOD: Separate postal_codes table +CREATE TABLE postal_codes ( + code VARCHAR(10) PRIMARY KEY, + country VARCHAR(50) +); +``` + +### When to Denormalize + +| Scenario | Denormalization Strategy | +|----------|-------------------------| +| Read-heavy reporting | Pre-calculated aggregates | +| Expensive JOINs | Cached derived columns | +| Analytics dashboards | Materialized views | + +```sql +-- Denormalized for performance +CREATE TABLE orders ( + id INT PRIMARY KEY, + customer_id INT, + total_amount DECIMAL(10,2), -- Calculated + item_count INT -- Calculated +); +``` + +
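+For the analytics-dashboard row in the table above, a materialized view is one way to pre-compute aggregates without duplicating columns on the base tables. A minimal sketch, assuming PostgreSQL and the `orders` columns from the example above:
+
+```sql
+-- Pre-computed per-customer order totals for dashboard reads
+CREATE MATERIALIZED VIEW customer_order_totals AS
+SELECT
+    customer_id,
+    COUNT(*)          AS order_count,
+    SUM(total_amount) AS lifetime_total
+FROM orders
+GROUP BY customer_id;
+
+-- Refresh on a schedule or after large batch writes
+REFRESH MATERIALIZED VIEW customer_order_totals;
+```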
+ +
+Deep Dive: Data Types + +### String Types + +| Type | Use Case | Example | +|------|----------|---------| +| CHAR(n) | Fixed length | State codes, ISO dates | +| VARCHAR(n) | Variable length | Names, emails | +| TEXT | Long content | Articles, descriptions | + +```sql +-- Good sizing +email VARCHAR(255) +phone VARCHAR(20) +country_code CHAR(2) +``` + +### Numeric Types + +| Type | Range | Use Case | +|------|-------|----------| +| TINYINT | -128 to 127 | Age, status codes | +| SMALLINT | -32K to 32K | Quantities | +| INT | -2.1B to 2.1B | IDs, counts | +| BIGINT | Very large | Large IDs, timestamps | +| DECIMAL(p,s) | Exact precision | Money | +| FLOAT/DOUBLE | Approximate | Scientific data | + +```sql +-- ALWAYS use DECIMAL for money +price DECIMAL(10, 2) -- $99,999,999.99 + +-- NEVER use FLOAT for money +price FLOAT -- Rounding errors! +``` + +### Date/Time Types + +```sql +DATE -- 2025-10-31 +TIME -- 14:30:00 +DATETIME -- 2025-10-31 14:30:00 +TIMESTAMP -- Auto timezone conversion + +-- Always store in UTC +created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP +updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP +``` + +### Boolean + +```sql +-- PostgreSQL +is_active BOOLEAN DEFAULT TRUE + +-- MySQL +is_active TINYINT(1) DEFAULT 1 +``` + +
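+For engines that distinguish the two, the "always store in UTC" rule for the date/time types above maps roughly to (column definitions only, names illustrative):
+
+```sql
+-- PostgreSQL: TIMESTAMPTZ stores an absolute instant (normalized to UTC internally)
+created_at TIMESTAMPTZ NOT NULL DEFAULT now()
+
+-- MySQL: TIMESTAMP converts to/from the session time zone; keep server/session in UTC
+created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
+```
+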
+ +
+Deep Dive: Indexing Strategy + +### When to Create Indexes + +| Always Index | Reason | +|--------------|--------| +| Foreign keys | Speed up JOINs | +| WHERE clause columns | Speed up filtering | +| ORDER BY columns | Speed up sorting | +| Unique constraints | Enforced uniqueness | + +```sql +-- Foreign key index +CREATE INDEX idx_orders_customer ON orders(customer_id); + +-- Query pattern index +CREATE INDEX idx_orders_status_date ON orders(status, created_at); +``` + +### Index Types + +| Type | Best For | Example | +|------|----------|---------| +| B-Tree | Ranges, equality | `price > 100` | +| Hash | Exact matches only | `email = 'x@y.com'` | +| Full-text | Text search | `MATCH AGAINST` | +| Partial | Subset of rows | `WHERE is_active = true` | + +### Composite Index Order + +```sql +CREATE INDEX idx_customer_status ON orders(customer_id, status); + +-- Uses index (customer_id first) +SELECT * FROM orders WHERE customer_id = 123; +SELECT * FROM orders WHERE customer_id = 123 AND status = 'pending'; + +-- Does NOT use index (status alone) +SELECT * FROM orders WHERE status = 'pending'; +``` + +**Rule:** Most selective column first, or column most queried alone. + +### Index Pitfalls + +| Pitfall | Problem | Solution | +|---------|---------|----------| +| Over-indexing | Slow writes | Only index what's queried | +| Wrong column order | Unused index | Match query patterns | +| Missing FK indexes | Slow JOINs | Always index FKs | + +
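+### Partial Index Example
+
+The partial index row from the types table, sketched in PostgreSQL syntax (MySQL does not support partial indexes; the `is_active` column and index name are hypothetical):
+
+```sql
+-- Index only the rows the hot queries actually touch
+CREATE INDEX idx_orders_active_created ON orders(created_at) WHERE is_active = true;
+
+-- Served by the index:
+SELECT * FROM orders WHERE is_active = true ORDER BY created_at DESC LIMIT 50;
+```
+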
+ +
+Deep Dive: Constraints
+
+### Primary Keys
+
+```sql
+-- Auto-increment (simple)
+id INT AUTO_INCREMENT PRIMARY KEY
+
+-- UUID (distributed systems)
+id CHAR(36) PRIMARY KEY DEFAULT (UUID())
+
+-- Composite (junction tables)
+PRIMARY KEY (student_id, course_id)
+```
+
+### Foreign Keys
+
+```sql
+-- Pick ONE ON DELETE strategy per foreign key
+FOREIGN KEY (customer_id) REFERENCES customers(id)
+    ON DELETE CASCADE      -- Delete children with parent
+    ON UPDATE CASCADE      -- Update children when parent changes
+
+-- Alternatives to CASCADE:
+--   ON DELETE RESTRICT    -- Prevent deletion if referenced
+--   ON DELETE SET NULL    -- Set to NULL when parent deleted
+```
+
+| Strategy | Use When |
+|----------|----------|
+| CASCADE | Dependent data (order_items) |
+| RESTRICT | Important references (prevent accidents) |
+| SET NULL | Optional relationships |
+
+### Other Constraints
+
+```sql
+-- Unique
+email VARCHAR(255) UNIQUE NOT NULL
+
+-- Composite unique
+UNIQUE (student_id, course_id)
+
+-- Check
+price DECIMAL(10,2) CHECK (price >= 0)
+discount INT CHECK (discount BETWEEN 0 AND 100)
+
+-- Not null
+name VARCHAR(100) NOT NULL
+```
+
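+### Naming Constraints
+
+Naming constraints explicitly (as the migration template in this skill does) makes them easy to drop or alter in later migrations; a sketch for MySQL 8 / PostgreSQL with hypothetical names and columns:
+
+```sql
+ALTER TABLE orders
+    ADD CONSTRAINT fk_orders_customer
+    FOREIGN KEY (customer_id) REFERENCES customers(id)
+    ON DELETE RESTRICT;
+
+ALTER TABLE orders
+    ADD CONSTRAINT chk_orders_total_positive
+    CHECK (total_amount >= 0);
+
+-- A later migration can reference the name directly
+ALTER TABLE orders DROP CONSTRAINT chk_orders_total_positive;
+```
+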
+ +
+Deep Dive: Relationship Patterns + +### One-to-Many + +```sql +CREATE TABLE orders ( + id INT PRIMARY KEY, + customer_id INT NOT NULL REFERENCES customers(id) +); + +CREATE TABLE order_items ( + id INT PRIMARY KEY, + order_id INT NOT NULL REFERENCES orders(id) ON DELETE CASCADE, + product_id INT NOT NULL, + quantity INT NOT NULL +); +``` + +### Many-to-Many + +```sql +-- Junction table +CREATE TABLE enrollments ( + student_id INT REFERENCES students(id) ON DELETE CASCADE, + course_id INT REFERENCES courses(id) ON DELETE CASCADE, + enrolled_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + PRIMARY KEY (student_id, course_id) +); +``` + +### Self-Referencing + +```sql +CREATE TABLE employees ( + id INT PRIMARY KEY, + name VARCHAR(100) NOT NULL, + manager_id INT REFERENCES employees(id) +); +``` + +### Polymorphic + +```sql +-- Approach 1: Separate FKs (stronger integrity) +CREATE TABLE comments ( + id INT PRIMARY KEY, + content TEXT NOT NULL, + post_id INT REFERENCES posts(id), + photo_id INT REFERENCES photos(id), + CHECK ( + (post_id IS NOT NULL AND photo_id IS NULL) OR + (post_id IS NULL AND photo_id IS NOT NULL) + ) +); + +-- Approach 2: Type + ID (flexible, weaker integrity) +CREATE TABLE comments ( + id INT PRIMARY KEY, + content TEXT NOT NULL, + commentable_type VARCHAR(50) NOT NULL, + commentable_id INT NOT NULL +); +``` + +
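+### Querying a Hierarchy
+
+Reading the self-referencing pattern back out usually needs a recursive CTE; a sketch against the employees table above (PostgreSQL and MySQL 8+ syntax; the manager id 42 is illustrative):
+
+```sql
+-- All direct and indirect reports of employee 42
+WITH RECURSIVE reports AS (
+    SELECT id, name, manager_id
+    FROM employees
+    WHERE manager_id = 42
+    UNION ALL
+    SELECT e.id, e.name, e.manager_id
+    FROM employees e
+    JOIN reports r ON e.manager_id = r.id
+)
+SELECT * FROM reports;
+```
+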
+ +
+Deep Dive: NoSQL Design (MongoDB) + +### Embedding vs Referencing + +| Factor | Embed | Reference | +|--------|-------|-----------| +| Access pattern | Read together | Read separately | +| Relationship | 1:few | 1:many | +| Document size | Small | Approaching 16MB | +| Update frequency | Rarely | Frequently | + +### Embedded Document + +```json +{ + "_id": "order_123", + "customer": { + "id": "cust_456", + "name": "Jane Smith", + "email": "jane@example.com" + }, + "items": [ + { "product_id": "prod_789", "quantity": 2, "price": 29.99 } + ], + "total": 109.97 +} +``` + +### Referenced Document + +```json +{ + "_id": "order_123", + "customer_id": "cust_456", + "item_ids": ["item_1", "item_2"], + "total": 109.97 +} +``` + +### MongoDB Indexes + +```javascript +// Single field +db.users.createIndex({ email: 1 }, { unique: true }); + +// Composite +db.orders.createIndex({ customer_id: 1, created_at: -1 }); + +// Text search +db.articles.createIndex({ title: "text", content: "text" }); + +// Geospatial +db.stores.createIndex({ location: "2dsphere" }); +``` + +
+ +
+Deep Dive: Migrations + +### Migration Best Practices + +| Practice | WHY | +|----------|-----| +| Always reversible | Need to rollback | +| Backward compatible | Zero-downtime deploys | +| Schema before data | Separate concerns | +| Test on staging | Catch issues early | + +### Adding a Column (Zero-Downtime) + +```sql +-- Step 1: Add nullable column +ALTER TABLE users ADD COLUMN phone VARCHAR(20); + +-- Step 2: Deploy code that writes to new column + +-- Step 3: Backfill existing rows +UPDATE users SET phone = '' WHERE phone IS NULL; + +-- Step 4: Make required (if needed) +ALTER TABLE users MODIFY phone VARCHAR(20) NOT NULL; +``` + +### Renaming a Column (Zero-Downtime) + +```sql +-- Step 1: Add new column +ALTER TABLE users ADD COLUMN email_address VARCHAR(255); + +-- Step 2: Copy data +UPDATE users SET email_address = email; + +-- Step 3: Deploy code reading from new column +-- Step 4: Deploy code writing to new column + +-- Step 5: Drop old column +ALTER TABLE users DROP COLUMN email; +``` + +### Migration Template + +```sql +-- Migration: YYYYMMDDHHMMSS_description.sql + +-- UP +BEGIN; +ALTER TABLE users ADD COLUMN phone VARCHAR(20); +CREATE INDEX idx_users_phone ON users(phone); +COMMIT; + +-- DOWN +BEGIN; +DROP INDEX idx_users_phone ON users; +ALTER TABLE users DROP COLUMN phone; +COMMIT; +``` + +
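+### Batched Backfills
+
+On large tables, the backfill step is usually run in batches so a single long UPDATE does not hold locks on the whole table; a sketch in MySQL syntax (PostgreSQL would batch by primary-key ranges instead; the batch size is illustrative):
+
+```sql
+-- Repeat until 0 rows are affected
+UPDATE users
+SET phone = ''
+WHERE phone IS NULL
+LIMIT 10000;
+```
+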
+ +
+Deep Dive: Performance Optimization + +### Query Analysis + +```sql +EXPLAIN SELECT * FROM orders +WHERE customer_id = 123 AND status = 'pending'; +``` + +| Look For | Meaning | +|----------|---------| +| type: ALL | Full table scan (bad) | +| type: ref | Index used (good) | +| key: NULL | No index used | +| rows: high | Many rows scanned | + +### N+1 Query Problem + +```python +# BAD: N+1 queries +orders = db.query("SELECT * FROM orders") +for order in orders: + customer = db.query(f"SELECT * FROM customers WHERE id = {order.customer_id}") + +# GOOD: Single JOIN +results = db.query(""" + SELECT orders.*, customers.name + FROM orders + JOIN customers ON orders.customer_id = customers.id +""") +``` + +### Optimization Techniques + +| Technique | When to Use | +|-----------|-------------| +| Add indexes | Slow WHERE/ORDER BY | +| Denormalize | Expensive JOINs | +| Pagination | Large result sets | +| Caching | Repeated queries | +| Read replicas | Read-heavy load | +| Partitioning | Very large tables | + +
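+### Pagination
+
+The pagination technique from the table above, sketched with the earlier orders columns: OFFSET re-reads every skipped row, so deep pages get slower, while keyset (cursor) pagination continues from the last row seen. The cursor values below are illustrative:
+
+```sql
+-- OFFSET pagination: cost grows with page depth
+SELECT * FROM orders
+ORDER BY created_at DESC, id DESC
+LIMIT 20 OFFSET 100000;
+
+-- Keyset pagination: pass the (created_at, id) of the last row from the previous page
+SELECT * FROM orders
+WHERE (created_at, id) < ('2025-10-01 00:00:00', 98765)
+ORDER BY created_at DESC, id DESC
+LIMIT 20;
+```
+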
+ +--- + +## Extension Points + +1. **Database-Specific Patterns:** Add MySQL vs PostgreSQL vs SQLite variations +2. **Advanced Patterns:** Time-series, event sourcing, CQRS, multi-tenancy +3. **ORM Integration:** TypeORM, Prisma, SQLAlchemy patterns +4. **Monitoring:** Query performance tracking, slow query alerts diff --git a/.cursor/skills/database-schema-designer/assets/templates/migration-template.sql b/.cursor/skills/database-schema-designer/assets/templates/migration-template.sql new file mode 100644 index 0000000..7c51066 --- /dev/null +++ b/.cursor/skills/database-schema-designer/assets/templates/migration-template.sql @@ -0,0 +1,60 @@ +-- Migration: YYYYMMDDHHMMSS_descriptive_name.sql +-- Description: [What this migration does] +-- Author: [Your Name] +-- Date: YYYY-MM-DD + +-- ============================================================================ +-- UP MIGRATION +-- ============================================================================ + +BEGIN; + +-- Step 1: Create table +CREATE TABLE IF NOT EXISTS table_name ( + id BIGINT AUTO_INCREMENT PRIMARY KEY, + column_name VARCHAR(255) NOT NULL, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP +); + +-- Step 2: Add indexes +CREATE INDEX idx_table_column ON table_name(column_name); + +-- Step 3: Add foreign keys +ALTER TABLE table_name + ADD CONSTRAINT fk_table_reference + FOREIGN KEY (reference_id) REFERENCES other_table(id) + ON DELETE CASCADE; + +-- Step 4: Data migration (if needed) +-- UPDATE table_name SET new_column = old_column; + +COMMIT; + +-- ============================================================================ +-- DOWN MIGRATION +-- ============================================================================ + +-- BEGIN; +-- ALTER TABLE table_name DROP FOREIGN KEY fk_table_reference; +-- DROP INDEX idx_table_column ON table_name; +-- DROP TABLE IF EXISTS table_name; +-- COMMIT; + +-- ============================================================================ +-- VALIDATION +-- ============================================================================ + +-- Check table exists: +-- SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES +-- WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'table_name'; + +-- Check indexes: +-- SHOW INDEX FROM table_name; + +-- ============================================================================ +-- NOTES +-- ============================================================================ +-- Estimated time: [X seconds on Y rows] +-- Requires downtime: [Yes/No] +-- Rollback tested: [Yes/No] diff --git a/.cursor/skills/database-schema-designer/references/schema-design-checklist.md b/.cursor/skills/database-schema-designer/references/schema-design-checklist.md new file mode 100644 index 0000000..80e5ad6 --- /dev/null +++ b/.cursor/skills/database-schema-designer/references/schema-design-checklist.md @@ -0,0 +1,118 @@ +# Database Schema Design Checklist + +Complete checklist for designing and reviewing database schemas. 
+ +--- + +## Pre-Design + +- [ ] **Requirements Gathered**: Understand data entities and relationships +- [ ] **Access Patterns Identified**: Know how data will be queried +- [ ] **SQL vs NoSQL Decision**: Chosen appropriate database type +- [ ] **Scale Estimate**: Expected data volume and growth rate +- [ ] **Read/Write Ratio**: Understand if read-heavy or write-heavy + +--- + +## Normalization (SQL) + +- [ ] **1NF**: Atomic values, no repeating groups +- [ ] **2NF**: No partial dependencies on composite keys +- [ ] **3NF**: No transitive dependencies +- [ ] **Denormalization Justified**: If denormalized, reason documented + +--- + +## Table Design + +### Primary Keys + +- [ ] **Primary Key Defined**: Every table has primary key +- [ ] **Key Type Chosen**: INT auto-increment or UUID +- [ ] **Meaningful Keys Avoided**: Not using email/username as PK + +### Data Types + +- [ ] **Appropriate Types**: Correct data types for each column +- [ ] **String Sizes**: VARCHAR sized appropriately +- [ ] **Numeric Precision**: DECIMAL for money, INT for counts +- [ ] **Dates in UTC**: TIMESTAMP for datetime columns + +### Constraints + +- [ ] **NOT NULL**: Required columns marked NOT NULL +- [ ] **Unique Constraints**: Unique columns (email, username) +- [ ] **Check Constraints**: Validation rules (price >= 0) +- [ ] **Default Values**: Sensible defaults where appropriate + +--- + +## Relationships + +### Foreign Keys + +- [ ] **Foreign Keys Defined**: All relationships have FK constraints +- [ ] **ON DELETE Strategy**: CASCADE, RESTRICT, SET NULL chosen +- [ ] **ON UPDATE Strategy**: Usually CASCADE +- [ ] **Indexes on Foreign Keys**: All FKs are indexed + +### Relationship Types + +- [ ] **One-to-Many**: Modeled correctly +- [ ] **Many-to-Many**: Junction table created +- [ ] **Self-Referencing**: Parent-child relationships handled +- [ ] **Polymorphic**: Strategy chosen (separate FKs or type+id) + +--- + +## Indexing + +### Index Strategy + +- [ ] **Primary Key Indexed**: Automatic, verify +- [ ] **Foreign Keys Indexed**: All FKs have indexes +- [ ] **WHERE Columns**: Columns in WHERE clauses indexed +- [ ] **ORDER BY Columns**: Sort columns indexed +- [ ] **Composite Indexes**: Multi-column queries optimized +- [ ] **Column Order**: Most selective column first + +### Index Limits + +- [ ] **Not Over-Indexed**: Only necessary indexes +- [ ] **Index Maintenance**: Aware of write impact + +--- + +## Performance + +- [ ] **Joins Optimized**: N+1 queries avoided +- [ ] **SELECT * Avoided**: Only fetch needed columns +- [ ] **Pagination**: LIMIT/OFFSET or cursor-based +- [ ] **Aggregations**: Pre-calculated for expensive queries + +--- + +## Migrations + +- [ ] **Backward Compatible**: New columns nullable initially +- [ ] **Up and Down**: Rollback scripts provided +- [ ] **Data Migrations Separate**: Schema vs data separated +- [ ] **Tested on Staging**: Migrations tested + +--- + +## Security + +- [ ] **Least Privilege**: Minimal database permissions +- [ ] **Separate Accounts**: Read-only vs read-write +- [ ] **Sensitive Data**: Passwords hashed, PII encrypted +- [ ] **Parameterized Queries**: SQL injection prevented + +--- + +## Documentation + +- [ ] **ERD Created**: Entity-relationship diagram +- [ ] **Schema Documented**: Column descriptions +- [ ] **Indexes Documented**: Why each index exists +- [ ] **Migration History**: Changelog of changes diff --git a/.cursor/skills/database-schema-documentation/SKILL.md b/.cursor/skills/database-schema-documentation/SKILL.md new file mode 100644 index 
0000000..0dd74b1 --- /dev/null +++ b/.cursor/skills/database-schema-documentation/SKILL.md @@ -0,0 +1,598 @@ +--- +name: database-schema-documentation +description: Document database schemas, ERD diagrams, table relationships, indexes, and constraints. Use when documenting database schema, creating ERD diagrams, or writing table documentation. +--- + +# Database Schema Documentation + +## Overview + +Create comprehensive database schema documentation including entity relationship diagrams (ERD), table definitions, indexes, constraints, and data dictionaries. + +## When to Use + +- Database schema documentation +- ERD (Entity Relationship Diagrams) +- Data dictionary creation +- Table relationship documentation +- Index and constraint documentation +- Migration documentation +- Database design specs + +## Schema Documentation Template + +```markdown +# Database Schema Documentation + +**Database:** PostgreSQL 14.x +**Version:** 2.0 +**Last Updated:** 2025-01-15 +**Schema Version:** 20250115120000 + +## Overview + +This database supports an e-commerce application with user management, product catalog, orders, and payment processing. + +## Entity Relationship Diagram + +```mermaid +erDiagram + users ||--o{ orders : places + users ||--o{ addresses : has + users ||--o{ payment_methods : has + orders ||--|{ order_items : contains + orders ||--|| payments : has + products ||--o{ order_items : includes + products }o--|| categories : belongs_to + products ||--o{ product_images : has + products ||--o{ inventory : tracks + + users { + uuid id PK + string email UK + string password_hash + string name + timestamp created_at + timestamp updated_at + } + + orders { + uuid id PK + uuid user_id FK + string status + decimal total_amount + timestamp created_at + timestamp updated_at + } + + order_items { + uuid id PK + uuid order_id FK + uuid product_id FK + int quantity + decimal price + } + + products { + uuid id PK + string name + text description + decimal price + uuid category_id FK + boolean active + } +``` + +--- + +## Tables + +### users + +Stores user account information. 
+ +**Columns:** + +| Column | Type | Null | Default | Description | +|--------|------|------|---------|-------------| +| id | uuid | NO | gen_random_uuid() | Primary key | +| email | varchar(255) | NO | - | User email (unique) | +| password_hash | varchar(255) | NO | - | bcrypt hashed password | +| name | varchar(255) | NO | - | User's full name | +| email_verified | boolean | NO | false | Email verification status | +| two_factor_enabled | boolean | NO | false | 2FA enabled flag | +| two_factor_secret | varchar(32) | YES | - | TOTP secret | +| created_at | timestamp | NO | now() | Record creation time | +| updated_at | timestamp | NO | now() | Last update time | +| deleted_at | timestamp | YES | - | Soft delete timestamp | +| last_login_at | timestamp | YES | - | Last login timestamp | + +**Indexes:** + +```sql +CREATE UNIQUE INDEX idx_users_email ON users(email); +CREATE INDEX idx_users_created_at ON users(created_at); +CREATE INDEX idx_users_deleted_at ON users(deleted_at) WHERE deleted_at IS NULL; +``` + +**Constraints:** + +```sql +ALTER TABLE users + ADD CONSTRAINT users_email_format + CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'); + +ALTER TABLE users + ADD CONSTRAINT users_name_length + CHECK (length(name) >= 2); +``` + +**Triggers:** + +```sql +-- Update updated_at timestamp +CREATE TRIGGER update_users_updated_at + BEFORE UPDATE ON users + FOR EACH ROW + EXECUTE FUNCTION update_updated_at_column(); +``` + +**Sample Data:** + +```sql +INSERT INTO users (email, password_hash, name, email_verified) +VALUES + ('john@example.com', '$2b$12$...', 'John Doe', true), + ('jane@example.com', '$2b$12$...', 'Jane Smith', true); +``` + +--- + +### products + +Stores product catalog information. + +**Columns:** + +| Column | Type | Null | Default | Description | +|--------|------|------|---------|-------------| +| id | uuid | NO | gen_random_uuid() | Primary key | +| name | varchar(255) | NO | - | Product name | +| slug | varchar(255) | NO | - | URL-friendly name (unique) | +| description | text | YES | - | Product description | +| price | decimal(10,2) | NO | - | Product price in USD | +| compare_at_price | decimal(10,2) | YES | - | Original price (for sales) | +| sku | varchar(100) | NO | - | Stock keeping unit (unique) | +| category_id | uuid | NO | - | Foreign key to categories | +| brand | varchar(100) | YES | - | Product brand | +| active | boolean | NO | true | Product visibility | +| featured | boolean | NO | false | Featured product flag | +| metadata | jsonb | YES | - | Additional product metadata | +| created_at | timestamp | NO | now() | Record creation time | +| updated_at | timestamp | NO | now() | Last update time | + +**Indexes:** + +```sql +CREATE UNIQUE INDEX idx_products_slug ON products(slug); +CREATE UNIQUE INDEX idx_products_sku ON products(sku); +CREATE INDEX idx_products_category_id ON products(category_id); +CREATE INDEX idx_products_active ON products(active); +CREATE INDEX idx_products_featured ON products(featured) WHERE featured = true; +CREATE INDEX idx_products_metadata ON products USING gin(metadata); +``` + +**Foreign Keys:** + +```sql +ALTER TABLE products + ADD CONSTRAINT fk_products_category + FOREIGN KEY (category_id) + REFERENCES categories(id) + ON DELETE RESTRICT; +``` + +**Full-Text Search:** + +```sql +-- Add full-text search column +ALTER TABLE products ADD COLUMN search_vector tsvector; + +-- Create full-text index +CREATE INDEX idx_products_search ON products USING gin(search_vector); + +-- Trigger to update search vector 
+CREATE TRIGGER products_search_vector_update + BEFORE INSERT OR UPDATE ON products + FOR EACH ROW + EXECUTE FUNCTION + tsvector_update_trigger( + search_vector, 'pg_catalog.english', + name, description, brand + ); +``` + +--- + +### orders + +Stores customer orders. + +**Columns:** + +| Column | Type | Null | Default | Description | +|--------|------|------|---------|-------------| +| id | uuid | NO | gen_random_uuid() | Primary key | +| order_number | varchar(20) | NO | - | Human-readable order ID (unique) | +| user_id | uuid | NO | - | Foreign key to users | +| status | varchar(20) | NO | 'pending' | Order status | +| subtotal | decimal(10,2) | NO | - | Items subtotal | +| tax | decimal(10,2) | NO | 0 | Tax amount | +| shipping | decimal(10,2) | NO | 0 | Shipping cost | +| total | decimal(10,2) | NO | - | Total amount | +| currency | char(3) | NO | 'USD' | Currency code | +| notes | text | YES | - | Order notes | +| shipping_address | jsonb | NO | - | Shipping address | +| billing_address | jsonb | NO | - | Billing address | +| created_at | timestamp | NO | now() | Order creation time | +| updated_at | timestamp | NO | now() | Last update time | +| confirmed_at | timestamp | YES | - | Order confirmation time | +| shipped_at | timestamp | YES | - | Shipping time | +| delivered_at | timestamp | YES | - | Delivery time | +| cancelled_at | timestamp | YES | - | Cancellation time | + +**Indexes:** + +```sql +CREATE UNIQUE INDEX idx_orders_order_number ON orders(order_number); +CREATE INDEX idx_orders_user_id ON orders(user_id); +CREATE INDEX idx_orders_status ON orders(status); +CREATE INDEX idx_orders_created_at ON orders(created_at); +``` + +**Constraints:** + +```sql +ALTER TABLE orders + ADD CONSTRAINT orders_status_check + CHECK (status IN ('pending', 'confirmed', 'processing', 'shipped', 'delivered', 'cancelled', 'refunded')); + +ALTER TABLE orders + ADD CONSTRAINT orders_total_positive + CHECK (total >= 0); +``` + +**Computed Columns:** + +```sql +-- Total is computed from subtotal + tax + shipping +ALTER TABLE orders + ADD CONSTRAINT orders_total_computation + CHECK (total = subtotal + tax + shipping); +``` + +--- + +### order_items + +Line items for each order. 
+ +**Columns:** + +| Column | Type | Null | Default | Description | +|--------|------|------|---------|-------------| +| id | uuid | NO | gen_random_uuid() | Primary key | +| order_id | uuid | NO | - | Foreign key to orders | +| product_id | uuid | NO | - | Foreign key to products | +| product_snapshot | jsonb | NO | - | Product data at order time | +| quantity | int | NO | - | Quantity ordered | +| unit_price | decimal(10,2) | NO | - | Price per unit | +| subtotal | decimal(10,2) | NO | - | Line item total | +| created_at | timestamp | NO | now() | Record creation time | + +**Indexes:** + +```sql +CREATE INDEX idx_order_items_order_id ON order_items(order_id); +CREATE INDEX idx_order_items_product_id ON order_items(product_id); +``` + +**Foreign Keys:** + +```sql +ALTER TABLE order_items + ADD CONSTRAINT fk_order_items_order + FOREIGN KEY (order_id) + REFERENCES orders(id) + ON DELETE CASCADE; + +ALTER TABLE order_items + ADD CONSTRAINT fk_order_items_product + FOREIGN KEY (product_id) + REFERENCES products(id) + ON DELETE RESTRICT; +``` + +**Constraints:** + +```sql +ALTER TABLE order_items + ADD CONSTRAINT order_items_quantity_positive + CHECK (quantity > 0); + +ALTER TABLE order_items + ADD CONSTRAINT order_items_subtotal_computation + CHECK (subtotal = quantity * unit_price); +``` + +--- + +## Views + +### active_products_view + +Shows only active products with category information. + +```sql +CREATE VIEW active_products_view AS +SELECT + p.id, + p.name, + p.slug, + p.description, + p.price, + p.compare_at_price, + p.sku, + p.brand, + c.name as category_name, + c.slug as category_slug, + (SELECT COUNT(*) FROM order_items oi WHERE oi.product_id = p.id) as times_ordered, + (SELECT AVG(rating) FROM product_reviews pr WHERE pr.product_id = p.id) as avg_rating +FROM products p +JOIN categories c ON p.category_id = c.id +WHERE p.active = true; +``` + +### user_order_summary + +Aggregated order statistics per user. + +```sql +CREATE MATERIALIZED VIEW user_order_summary AS +SELECT + u.id as user_id, + u.email, + u.name, + COUNT(o.id) as total_orders, + SUM(o.total) as total_spent, + AVG(o.total) as average_order_value, + MAX(o.created_at) as last_order_date, + MIN(o.created_at) as first_order_date +FROM users u +LEFT JOIN orders o ON u.id = o.user_id AND o.status != 'cancelled' +GROUP BY u.id, u.email, u.name; + +-- Refresh strategy +CREATE INDEX idx_user_order_summary_user_id ON user_order_summary(user_id); +REFRESH MATERIALIZED VIEW CONCURRENTLY user_order_summary; +``` + +--- + +## Functions + +### calculate_order_total + +Calculates order total with tax and shipping. + +```sql +CREATE OR REPLACE FUNCTION calculate_order_total( + p_subtotal decimal, + p_tax_rate decimal, + p_shipping decimal +) +RETURNS decimal AS $$ +BEGIN + RETURN ROUND((p_subtotal * (1 + p_tax_rate) + p_shipping)::numeric, 2); +END; +$$ LANGUAGE plpgsql IMMUTABLE; +``` + +### update_updated_at_column + +Trigger function to automatically update updated_at timestamp. 
+ +```sql +CREATE OR REPLACE FUNCTION update_updated_at_column() +RETURNS TRIGGER AS $$ +BEGIN + NEW.updated_at = now(); + RETURN NEW; +END; +$$ LANGUAGE plpgsql; +``` + +--- + +## Data Dictionary + +### Enum Types + +```sql +-- Order status values +CREATE TYPE order_status AS ENUM ( + 'pending', + 'confirmed', + 'processing', + 'shipped', + 'delivered', + 'cancelled', + 'refunded' +); + +-- Payment status values +CREATE TYPE payment_status AS ENUM ( + 'pending', + 'processing', + 'succeeded', + 'failed', + 'refunded' +); +``` + +### JSONB Structures + +#### shipping_address format + +```json +{ + "street": "123 Main St", + "street2": "Apt 4B", + "city": "New York", + "state": "NY", + "postalCode": "10001", + "country": "US" +} +``` + +#### product_snapshot format + +```json +{ + "name": "Product Name", + "sku": "PROD-123", + "price": 99.99, + "image": "https://cdn.example.com/product.jpg" +} +``` + +--- + +## Migrations + +### Migration: 20250115120000_add_two_factor_auth + +```sql +-- Up +ALTER TABLE users ADD COLUMN two_factor_enabled BOOLEAN DEFAULT FALSE; +ALTER TABLE users ADD COLUMN two_factor_secret VARCHAR(32); + +CREATE TABLE two_factor_backup_codes ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE, + code_hash VARCHAR(255) NOT NULL, + used_at TIMESTAMP, + created_at TIMESTAMP DEFAULT NOW() +); + +CREATE INDEX idx_2fa_backup_codes_user_id ON two_factor_backup_codes(user_id); + +-- Down +DROP TABLE two_factor_backup_codes; +ALTER TABLE users DROP COLUMN two_factor_secret; +ALTER TABLE users DROP COLUMN two_factor_enabled; +``` + +--- + +## Performance Optimization + +### Recommended Indexes + +```sql +-- Frequently queried columns +CREATE INDEX CONCURRENTLY idx_users_email_verified ON users(email_verified); +CREATE INDEX CONCURRENTLY idx_products_price ON products(price); +CREATE INDEX CONCURRENTLY idx_orders_user_status ON orders(user_id, status); + +-- Composite indexes for common queries +CREATE INDEX CONCURRENTLY idx_products_category_active + ON products(category_id, active) + WHERE active = true; + +CREATE INDEX CONCURRENTLY idx_orders_user_created + ON orders(user_id, created_at DESC); +``` + +### Query Optimization + +```sql +-- EXPLAIN ANALYZE for slow queries +EXPLAIN ANALYZE +SELECT p.*, c.name as category_name +FROM products p +JOIN categories c ON p.category_id = c.id +WHERE p.active = true +ORDER BY p.created_at DESC +LIMIT 20; + +-- Add covering index if needed +CREATE INDEX idx_products_active_created + ON products(active, created_at DESC) + INCLUDE (name, price, slug); +``` + +--- + +## Backup & Recovery + +### Backup Schedule + +- **Full Backup:** Daily at 2 AM UTC +- **Incremental Backup:** Every 6 hours +- **WAL Archiving:** Continuous +- **Retention:** 30 days + +### Backup Commands + +```bash +# Full backup +pg_dump -h localhost -U postgres -Fc database_name > backup.dump + +# Restore +pg_restore -h localhost -U postgres -d database_name backup.dump + +# Backup specific tables +pg_dump -h localhost -U postgres -t users -t orders database_name > tables.sql +``` + +--- + +## Data Retention Policy + +| Table | Retention | Archive Strategy | +|-------|-----------|------------------| +| users | Indefinite | Soft delete after 2 years inactive | +| orders | 7 years | Move to archive after 2 years | +| order_items | 7 years | Move to archive with orders | +| logs | 90 days | Delete after retention period | + +``` + +## Best Practices + +### ✅ DO +- Document all tables and columns +- Create ERD diagrams 
+- Document indexes and constraints +- Include sample data +- Document foreign key relationships +- Show JSONB field structures +- Document triggers and functions +- Include migration scripts +- Specify data types precisely +- Document performance considerations + +### ❌ DON'T +- Skip constraint documentation +- Forget to version schema changes +- Ignore performance implications +- Skip index documentation +- Forget to document enum values + +## Resources + +- [PostgreSQL Documentation](https://www.postgresql.org/docs/) +- [dbdiagram.io](https://dbdiagram.io/) - ERD tool +- [SchemaSpy](https://schemaspy.org/) - Schema documentation generator +- [Mermaid ERD Syntax](https://mermaid.js.org/syntax/entityRelationshipDiagram.html) diff --git a/.cursor/skills/debugging-strategies/SKILL.md b/.cursor/skills/debugging-strategies/SKILL.md new file mode 100644 index 0000000..5a95777 --- /dev/null +++ b/.cursor/skills/debugging-strategies/SKILL.md @@ -0,0 +1,536 @@ +--- +name: debugging-strategies +description: Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior. +--- + +# Debugging Strategies + +Transform debugging from frustrating guesswork into systematic problem-solving with proven strategies, powerful tools, and methodical approaches. + +## When to Use This Skill + +- Tracking down elusive bugs +- Investigating performance issues +- Understanding unfamiliar codebases +- Debugging production issues +- Analyzing crash dumps and stack traces +- Profiling application performance +- Investigating memory leaks +- Debugging distributed systems + +## Core Principles + +### 1. The Scientific Method + +**1. Observe**: What's the actual behavior? +**2. Hypothesize**: What could be causing it? +**3. Experiment**: Test your hypothesis +**4. Analyze**: Did it prove/disprove your theory? +**5. Repeat**: Until you find the root cause + +### 2. Debugging Mindset + +**Don't Assume:** + +- "It can't be X" - Yes it can +- "I didn't change Y" - Check anyway +- "It works on my machine" - Find out why + +**Do:** + +- Reproduce consistently +- Isolate the problem +- Keep detailed notes +- Question everything +- Take breaks when stuck + +### 3. Rubber Duck Debugging + +Explain your code and problem out loud (to a rubber duck, colleague, or yourself). Often reveals the issue. + +## Systematic Debugging Process + +### Phase 1: Reproduce + +```markdown +## Reproduction Checklist + +1. **Can you reproduce it?** + - Always? Sometimes? Randomly? + - Specific conditions needed? + - Can others reproduce it? + +2. **Create minimal reproduction** + - Simplify to smallest example + - Remove unrelated code + - Isolate the problem + +3. **Document steps** + - Write down exact steps + - Note environment details + - Capture error messages +``` + +### Phase 2: Gather Information + +```markdown +## Information Collection + +1. **Error Messages** + - Full stack trace + - Error codes + - Console/log output + +2. **Environment** + - OS version + - Language/runtime version + - Dependencies versions + - Environment variables + +3. **Recent Changes** + - Git history + - Deployment timeline + - Configuration changes + +4. **Scope** + - Affects all users or specific ones? + - All browsers or specific ones? + - Production only or also dev? +``` + +### Phase 3: Form Hypothesis + +```markdown +## Hypothesis Formation + +Based on gathered info, ask: + +1. 
**What changed?** + - Recent code changes + - Dependency updates + - Infrastructure changes + +2. **What's different?** + - Working vs broken environment + - Working vs broken user + - Before vs after + +3. **Where could this fail?** + - Input validation + - Business logic + - Data layer + - External services +``` + +### Phase 4: Test & Verify + +```markdown +## Testing Strategies + +1. **Binary Search** + - Comment out half the code + - Narrow down problematic section + - Repeat until found + +2. **Add Logging** + - Strategic console.log/print + - Track variable values + - Trace execution flow + +3. **Isolate Components** + - Test each piece separately + - Mock dependencies + - Remove complexity + +4. **Compare Working vs Broken** + - Diff configurations + - Diff environments + - Diff data +``` + +## Debugging Tools + +### JavaScript/TypeScript Debugging + +```typescript +// Chrome DevTools Debugger +function processOrder(order: Order) { + debugger; // Execution pauses here + + const total = calculateTotal(order); + console.log("Total:", total); + + // Conditional breakpoint + if (order.items.length > 10) { + debugger; // Only breaks if condition true + } + + return total; +} + +// Console debugging techniques +console.log("Value:", value); // Basic +console.table(arrayOfObjects); // Table format +console.time("operation"); +/* code */ console.timeEnd("operation"); // Timing +console.trace(); // Stack trace +console.assert(value > 0, "Value must be positive"); // Assertion + +// Performance profiling +performance.mark("start-operation"); +// ... operation code +performance.mark("end-operation"); +performance.measure("operation", "start-operation", "end-operation"); +console.log(performance.getEntriesByType("measure")); +``` + +**VS Code Debugger Configuration:** + +```json +// .vscode/launch.json +{ + "version": "0.2.0", + "configurations": [ + { + "type": "node", + "request": "launch", + "name": "Debug Program", + "program": "${workspaceFolder}/src/index.ts", + "preLaunchTask": "tsc: build - tsconfig.json", + "outFiles": ["${workspaceFolder}/dist/**/*.js"], + "skipFiles": ["/**"] + }, + { + "type": "node", + "request": "launch", + "name": "Debug Tests", + "program": "${workspaceFolder}/node_modules/jest/bin/jest", + "args": ["--runInBand", "--no-cache"], + "console": "integratedTerminal" + } + ] +} +``` + +### Python Debugging + +```python +# Built-in debugger (pdb) +import pdb + +def calculate_total(items): + total = 0 + pdb.set_trace() # Debugger starts here + + for item in items: + total += item.price * item.quantity + + return total + +# Breakpoint (Python 3.7+) +def process_order(order): + breakpoint() # More convenient than pdb.set_trace() + # ... 
code + +# Post-mortem debugging +try: + risky_operation() +except Exception: + import pdb + pdb.post_mortem() # Debug at exception point + +# IPython debugging (ipdb) +from ipdb import set_trace +set_trace() # Better interface than pdb + +# Logging for debugging +import logging +logging.basicConfig(level=logging.DEBUG) +logger = logging.getLogger(__name__) + +def fetch_user(user_id): + logger.debug(f'Fetching user: {user_id}') + user = db.query(User).get(user_id) + logger.debug(f'Found user: {user}') + return user + +# Profile performance +import cProfile +import pstats + +cProfile.run('slow_function()', 'profile_stats') +stats = pstats.Stats('profile_stats') +stats.sort_stats('cumulative') +stats.print_stats(10) # Top 10 slowest +``` + +### Go Debugging + +```go +// Delve debugger +// Install: go install github.com/go-delve/delve/cmd/dlv@latest +// Run: dlv debug main.go + +import ( + "fmt" + "runtime" + "runtime/debug" +) + +// Print stack trace +func debugStack() { + debug.PrintStack() +} + +// Panic recovery with debugging +func processRequest() { + defer func() { + if r := recover(); r != nil { + fmt.Println("Panic:", r) + debug.PrintStack() + } + }() + + // ... code that might panic +} + +// Memory profiling +import _ "net/http/pprof" +// Visit http://localhost:6060/debug/pprof/ + +// CPU profiling +import ( + "os" + "runtime/pprof" +) + +f, _ := os.Create("cpu.prof") +pprof.StartCPUProfile(f) +defer pprof.StopCPUProfile() +// ... code to profile +``` + +## Advanced Debugging Techniques + +### Technique 1: Binary Search Debugging + +```bash +# Git bisect for finding regression +git bisect start +git bisect bad # Current commit is bad +git bisect good v1.0.0 # v1.0.0 was good + +# Git checks out middle commit +# Test it, then: +git bisect good # if it works +git bisect bad # if it's broken + +# Continue until bug found +git bisect reset # when done +``` + +### Technique 2: Differential Debugging + +Compare working vs broken: + +```markdown +## What's Different? + +| Aspect | Working | Broken | +| ------------ | ----------- | -------------- | +| Environment | Development | Production | +| Node version | 18.16.0 | 18.15.0 | +| Data | Empty DB | 1M records | +| User | Admin | Regular user | +| Browser | Chrome | Safari | +| Time | During day | After midnight | + +Hypothesis: Time-based issue? Check timezone handling. +``` + +### Technique 3: Trace Debugging + +```typescript +// Function call tracing +function trace( + target: any, + propertyKey: string, + descriptor: PropertyDescriptor, +) { + const originalMethod = descriptor.value; + + descriptor.value = function (...args: any[]) { + console.log(`Calling ${propertyKey} with args:`, args); + const result = originalMethod.apply(this, args); + console.log(`${propertyKey} returned:`, result); + return result; + }; + + return descriptor; +} + +class OrderService { + @trace + calculateTotal(items: Item[]): number { + return items.reduce((sum, item) => sum + item.price, 0); + } +} +``` + +### Technique 4: Memory Leak Detection + +```typescript +// Chrome DevTools Memory Profiler +// 1. Take heap snapshot +// 2. Perform action +// 3. Take another snapshot +// 4. 
Compare snapshots + +// Node.js memory debugging +if (process.memoryUsage().heapUsed > 500 * 1024 * 1024) { + console.warn("High memory usage:", process.memoryUsage()); + + // Generate heap dump + require("v8").writeHeapSnapshot(); +} + +// Find memory leaks in tests +let beforeMemory: number; + +beforeEach(() => { + beforeMemory = process.memoryUsage().heapUsed; +}); + +afterEach(() => { + const afterMemory = process.memoryUsage().heapUsed; + const diff = afterMemory - beforeMemory; + + if (diff > 10 * 1024 * 1024) { + // 10MB threshold + console.warn(`Possible memory leak: ${diff / 1024 / 1024}MB`); + } +}); +``` + +## Debugging Patterns by Issue Type + +### Pattern 1: Intermittent Bugs + +```markdown +## Strategies for Flaky Bugs + +1. **Add extensive logging** + - Log timing information + - Log all state transitions + - Log external interactions + +2. **Look for race conditions** + - Concurrent access to shared state + - Async operations completing out of order + - Missing synchronization + +3. **Check timing dependencies** + - setTimeout/setInterval + - Promise resolution order + - Animation frame timing + +4. **Stress test** + - Run many times + - Vary timing + - Simulate load +``` + +### Pattern 2: Performance Issues + +```markdown +## Performance Debugging + +1. **Profile first** + - Don't optimize blindly + - Measure before and after + - Find bottlenecks + +2. **Common culprits** + - N+1 queries + - Unnecessary re-renders + - Large data processing + - Synchronous I/O + +3. **Tools** + - Browser DevTools Performance tab + - Lighthouse + - Python: cProfile, line_profiler + - Node: clinic.js, 0x +``` + +### Pattern 3: Production Bugs + +```markdown +## Production Debugging + +1. **Gather evidence** + - Error tracking (Sentry, Bugsnag) + - Application logs + - User reports + - Metrics/monitoring + +2. **Reproduce locally** + - Use production data (anonymized) + - Match environment + - Follow exact steps + +3. **Safe investigation** + - Don't change production + - Use feature flags + - Add monitoring/logging + - Test fixes in staging +``` + +## Best Practices + +1. **Reproduce First**: Can't fix what you can't reproduce +2. **Isolate the Problem**: Remove complexity until minimal case +3. **Read Error Messages**: They're usually helpful +4. **Check Recent Changes**: Most bugs are recent +5. **Use Version Control**: Git bisect, blame, history +6. **Take Breaks**: Fresh eyes see better +7. **Document Findings**: Help future you +8. 
**Fix Root Cause**: Not just symptoms + +## Common Debugging Mistakes + +- **Making Multiple Changes**: Change one thing at a time +- **Not Reading Error Messages**: Read the full stack trace +- **Assuming It's Complex**: Often it's simple +- **Debug Logging in Prod**: Remove before shipping +- **Not Using Debugger**: console.log isn't always best +- **Giving Up Too Soon**: Persistence pays off +- **Not Testing the Fix**: Verify it actually works + +## Quick Debugging Checklist + +```markdown +## When Stuck, Check: + +- [ ] Spelling errors (typos in variable names) +- [ ] Case sensitivity (fileName vs filename) +- [ ] Null/undefined values +- [ ] Array index off-by-one +- [ ] Async timing (race conditions) +- [ ] Scope issues (closure, hoisting) +- [ ] Type mismatches +- [ ] Missing dependencies +- [ ] Environment variables +- [ ] File paths (absolute vs relative) +- [ ] Cache issues (clear cache) +- [ ] Stale data (refresh database) +``` + +## Resources + +- **references/debugging-tools-guide.md**: Comprehensive tool documentation +- **references/performance-profiling.md**: Performance debugging guide +- **references/production-debugging.md**: Debugging live systems +- **assets/debugging-checklist.md**: Quick reference checklist +- **assets/common-bugs.md**: Common bug patterns +- **scripts/debug-helper.ts**: Debugging utility functions diff --git a/.cursor/skills/fastapi-templates/SKILL.md b/.cursor/skills/fastapi-templates/SKILL.md new file mode 100644 index 0000000..49fc60a --- /dev/null +++ b/.cursor/skills/fastapi-templates/SKILL.md @@ -0,0 +1,567 @@ +--- +name: fastapi-templates +description: Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applications or setting up backend API projects. +--- + +# FastAPI Project Templates + +Production-ready FastAPI project structures with async patterns, dependency injection, middleware, and best practices for building high-performance APIs. + +## When to Use This Skill + +- Starting new FastAPI projects from scratch +- Implementing async REST APIs with Python +- Building high-performance web services and microservices +- Creating async applications with PostgreSQL, MongoDB +- Setting up API projects with proper structure and testing + +## Core Concepts + +### 1. Project Structure + +**Recommended Layout:** + +``` +app/ +├── api/ # API routes +│ ├── v1/ +│ │ ├── endpoints/ +│ │ │ ├── users.py +│ │ │ ├── auth.py +│ │ │ └── items.py +│ │ └── router.py +│ └── dependencies.py # Shared dependencies +├── core/ # Core configuration +│ ├── config.py +│ ├── security.py +│ └── database.py +├── models/ # Database models +│ ├── user.py +│ └── item.py +├── schemas/ # Pydantic schemas +│ ├── user.py +│ └── item.py +├── services/ # Business logic +│ ├── user_service.py +│ └── auth_service.py +├── repositories/ # Data access +│ ├── user_repository.py +│ └── item_repository.py +└── main.py # Application entry +``` + +### 2. Dependency Injection + +FastAPI's built-in DI system using `Depends`: + +- Database session management +- Authentication/authorization +- Shared business logic +- Configuration injection + +### 3. 
Async Patterns + +Proper async/await usage: + +- Async route handlers +- Async database operations +- Async background tasks +- Async middleware + +## Implementation Patterns + +### Pattern 1: Complete FastAPI Application + +```python +# main.py +from fastapi import FastAPI, Depends +from fastapi.middleware.cors import CORSMiddleware +from contextlib import asynccontextmanager + +@asynccontextmanager +async def lifespan(app: FastAPI): + """Application lifespan events.""" + # Startup + await database.connect() + yield + # Shutdown + await database.disconnect() + +app = FastAPI( + title="API Template", + version="1.0.0", + lifespan=lifespan +) + +# CORS middleware +app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + +# Include routers +from app.api.v1.router import api_router +app.include_router(api_router, prefix="/api/v1") + +# core/config.py +from pydantic_settings import BaseSettings +from functools import lru_cache + +class Settings(BaseSettings): + """Application settings.""" + DATABASE_URL: str + SECRET_KEY: str + ACCESS_TOKEN_EXPIRE_MINUTES: int = 30 + API_V1_STR: str = "/api/v1" + + class Config: + env_file = ".env" + +@lru_cache() +def get_settings() -> Settings: + return Settings() + +# core/database.py +from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession +from sqlalchemy.ext.declarative import declarative_base +from sqlalchemy.orm import sessionmaker +from app.core.config import get_settings + +settings = get_settings() + +engine = create_async_engine( + settings.DATABASE_URL, + echo=True, + future=True +) + +AsyncSessionLocal = sessionmaker( + engine, + class_=AsyncSession, + expire_on_commit=False +) + +Base = declarative_base() + +async def get_db() -> AsyncSession: + """Dependency for database session.""" + async with AsyncSessionLocal() as session: + try: + yield session + await session.commit() + except Exception: + await session.rollback() + raise + finally: + await session.close() +``` + +### Pattern 2: CRUD Repository Pattern + +```python +# repositories/base_repository.py +from typing import Generic, TypeVar, Type, Optional, List +from sqlalchemy.ext.asyncio import AsyncSession +from sqlalchemy import select +from pydantic import BaseModel + +ModelType = TypeVar("ModelType") +CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel) +UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel) + +class BaseRepository(Generic[ModelType, CreateSchemaType, UpdateSchemaType]): + """Base repository for CRUD operations.""" + + def __init__(self, model: Type[ModelType]): + self.model = model + + async def get(self, db: AsyncSession, id: int) -> Optional[ModelType]: + """Get by ID.""" + result = await db.execute( + select(self.model).where(self.model.id == id) + ) + return result.scalars().first() + + async def get_multi( + self, + db: AsyncSession, + skip: int = 0, + limit: int = 100 + ) -> List[ModelType]: + """Get multiple records.""" + result = await db.execute( + select(self.model).offset(skip).limit(limit) + ) + return result.scalars().all() + + async def create( + self, + db: AsyncSession, + obj_in: CreateSchemaType + ) -> ModelType: + """Create new record.""" + db_obj = self.model(**obj_in.dict()) + db.add(db_obj) + await db.flush() + await db.refresh(db_obj) + return db_obj + + async def update( + self, + db: AsyncSession, + db_obj: ModelType, + obj_in: UpdateSchemaType + ) -> ModelType: + """Update record.""" + update_data = obj_in.dict(exclude_unset=True) + 
for field, value in update_data.items(): + setattr(db_obj, field, value) + await db.flush() + await db.refresh(db_obj) + return db_obj + + async def delete(self, db: AsyncSession, id: int) -> bool: + """Delete record.""" + obj = await self.get(db, id) + if obj: + await db.delete(obj) + return True + return False + +# repositories/user_repository.py +from app.repositories.base_repository import BaseRepository +from app.models.user import User +from app.schemas.user import UserCreate, UserUpdate + +class UserRepository(BaseRepository[User, UserCreate, UserUpdate]): + """User-specific repository.""" + + async def get_by_email(self, db: AsyncSession, email: str) -> Optional[User]: + """Get user by email.""" + result = await db.execute( + select(User).where(User.email == email) + ) + return result.scalars().first() + + async def is_active(self, db: AsyncSession, user_id: int) -> bool: + """Check if user is active.""" + user = await self.get(db, user_id) + return user.is_active if user else False + +user_repository = UserRepository(User) +``` + +### Pattern 3: Service Layer + +```python +# services/user_service.py +from typing import Optional +from sqlalchemy.ext.asyncio import AsyncSession +from app.repositories.user_repository import user_repository +from app.schemas.user import UserCreate, UserUpdate, User +from app.core.security import get_password_hash, verify_password + +class UserService: + """Business logic for users.""" + + def __init__(self): + self.repository = user_repository + + async def create_user( + self, + db: AsyncSession, + user_in: UserCreate + ) -> User: + """Create new user with hashed password.""" + # Check if email exists + existing = await self.repository.get_by_email(db, user_in.email) + if existing: + raise ValueError("Email already registered") + + # Hash password + user_in_dict = user_in.dict() + user_in_dict["hashed_password"] = get_password_hash(user_in_dict.pop("password")) + + # Create user + user = await self.repository.create(db, UserCreate(**user_in_dict)) + return user + + async def authenticate( + self, + db: AsyncSession, + email: str, + password: str + ) -> Optional[User]: + """Authenticate user.""" + user = await self.repository.get_by_email(db, email) + if not user: + return None + if not verify_password(password, user.hashed_password): + return None + return user + + async def update_user( + self, + db: AsyncSession, + user_id: int, + user_in: UserUpdate + ) -> Optional[User]: + """Update user.""" + user = await self.repository.get(db, user_id) + if not user: + return None + + if user_in.password: + user_in_dict = user_in.dict(exclude_unset=True) + user_in_dict["hashed_password"] = get_password_hash( + user_in_dict.pop("password") + ) + user_in = UserUpdate(**user_in_dict) + + return await self.repository.update(db, user, user_in) + +user_service = UserService() +``` + +### Pattern 4: API Endpoints with Dependencies + +```python +# api/v1/endpoints/users.py +from fastapi import APIRouter, Depends, HTTPException, status +from sqlalchemy.ext.asyncio import AsyncSession +from typing import List + +from app.core.database import get_db +from app.schemas.user import User, UserCreate, UserUpdate +from app.services.user_service import user_service +from app.api.dependencies import get_current_user + +router = APIRouter() + +@router.post("/", response_model=User, status_code=status.HTTP_201_CREATED) +async def create_user( + user_in: UserCreate, + db: AsyncSession = Depends(get_db) +): + """Create new user.""" + try: + user = await user_service.create_user(db, 
user_in) + return user + except ValueError as e: + raise HTTPException(status_code=400, detail=str(e)) + +@router.get("/me", response_model=User) +async def read_current_user( + current_user: User = Depends(get_current_user) +): + """Get current user.""" + return current_user + +@router.get("/{user_id}", response_model=User) +async def read_user( + user_id: int, + db: AsyncSession = Depends(get_db), + current_user: User = Depends(get_current_user) +): + """Get user by ID.""" + user = await user_service.repository.get(db, user_id) + if not user: + raise HTTPException(status_code=404, detail="User not found") + return user + +@router.patch("/{user_id}", response_model=User) +async def update_user( + user_id: int, + user_in: UserUpdate, + db: AsyncSession = Depends(get_db), + current_user: User = Depends(get_current_user) +): + """Update user.""" + if current_user.id != user_id: + raise HTTPException(status_code=403, detail="Not authorized") + + user = await user_service.update_user(db, user_id, user_in) + if not user: + raise HTTPException(status_code=404, detail="User not found") + return user + +@router.delete("/{user_id}", status_code=status.HTTP_204_NO_CONTENT) +async def delete_user( + user_id: int, + db: AsyncSession = Depends(get_db), + current_user: User = Depends(get_current_user) +): + """Delete user.""" + if current_user.id != user_id: + raise HTTPException(status_code=403, detail="Not authorized") + + deleted = await user_service.repository.delete(db, user_id) + if not deleted: + raise HTTPException(status_code=404, detail="User not found") +``` + +### Pattern 5: Authentication & Authorization + +```python +# core/security.py +from datetime import datetime, timedelta +from typing import Optional +from jose import JWTError, jwt +from passlib.context import CryptContext +from app.core.config import get_settings + +settings = get_settings() +pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") + +ALGORITHM = "HS256" + +def create_access_token(data: dict, expires_delta: Optional[timedelta] = None): + """Create JWT access token.""" + to_encode = data.copy() + if expires_delta: + expire = datetime.utcnow() + expires_delta + else: + expire = datetime.utcnow() + timedelta(minutes=15) + to_encode.update({"exp": expire}) + encoded_jwt = jwt.encode(to_encode, settings.SECRET_KEY, algorithm=ALGORITHM) + return encoded_jwt + +def verify_password(plain_password: str, hashed_password: str) -> bool: + """Verify password against hash.""" + return pwd_context.verify(plain_password, hashed_password) + +def get_password_hash(password: str) -> str: + """Hash password.""" + return pwd_context.hash(password) + +# api/dependencies.py +from fastapi import Depends, HTTPException, status +from fastapi.security import OAuth2PasswordBearer +from jose import JWTError, jwt +from sqlalchemy.ext.asyncio import AsyncSession + +from app.core.database import get_db +from app.core.security import ALGORITHM +from app.core.config import get_settings +from app.repositories.user_repository import user_repository + +oauth2_scheme = OAuth2PasswordBearer(tokenUrl=f"{settings.API_V1_STR}/auth/login") + +async def get_current_user( + db: AsyncSession = Depends(get_db), + token: str = Depends(oauth2_scheme) +): + """Get current authenticated user.""" + credentials_exception = HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Could not validate credentials", + headers={"WWW-Authenticate": "Bearer"}, + ) + + try: + payload = jwt.decode(token, settings.SECRET_KEY, algorithms=[ALGORITHM]) + 
user_id: int = payload.get("sub") + if user_id is None: + raise credentials_exception + except JWTError: + raise credentials_exception + + user = await user_repository.get(db, user_id) + if user is None: + raise credentials_exception + + return user +``` + +## Testing + +```python +# tests/conftest.py +import pytest +import asyncio +from httpx import AsyncClient +from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession +from sqlalchemy.orm import sessionmaker + +from app.main import app +from app.core.database import get_db, Base + +TEST_DATABASE_URL = "sqlite+aiosqlite:///:memory:" + +@pytest.fixture(scope="session") +def event_loop(): + loop = asyncio.get_event_loop_policy().new_event_loop() + yield loop + loop.close() + +@pytest.fixture +async def db_session(): + engine = create_async_engine(TEST_DATABASE_URL, echo=True) + async with engine.begin() as conn: + await conn.run_sync(Base.metadata.create_all) + + AsyncSessionLocal = sessionmaker( + engine, class_=AsyncSession, expire_on_commit=False + ) + + async with AsyncSessionLocal() as session: + yield session + +@pytest.fixture +async def client(db_session): + async def override_get_db(): + yield db_session + + app.dependency_overrides[get_db] = override_get_db + + async with AsyncClient(app=app, base_url="http://test") as client: + yield client + +# tests/test_users.py +import pytest + +@pytest.mark.asyncio +async def test_create_user(client): + response = await client.post( + "/api/v1/users/", + json={ + "email": "test@example.com", + "password": "testpass123", + "name": "Test User" + } + ) + assert response.status_code == 201 + data = response.json() + assert data["email"] == "test@example.com" + assert "id" in data +``` + +## Resources + +- **references/fastapi-architecture.md**: Detailed architecture guide +- **references/async-best-practices.md**: Async/await patterns +- **references/testing-strategies.md**: Comprehensive testing guide +- **assets/project-template/**: Complete FastAPI project +- **assets/docker-compose.yml**: Development environment setup + +## Best Practices + +1. **Async All The Way**: Use async for database, external APIs +2. **Dependency Injection**: Leverage FastAPI's DI system +3. **Repository Pattern**: Separate data access from business logic +4. **Service Layer**: Keep business logic out of routes +5. **Pydantic Schemas**: Strong typing for request/response +6. **Error Handling**: Consistent error responses +7. 
**Testing**: Test all layers independently + +## Common Pitfalls + +- **Blocking Code in Async**: Using synchronous database drivers +- **No Service Layer**: Business logic in route handlers +- **Missing Type Hints**: Loses FastAPI's benefits +- **Ignoring Sessions**: Not properly managing database sessions +- **No Testing**: Skipping integration tests +- **Tight Coupling**: Direct database access in routes diff --git a/.cursor/skills/fastapi/.claude-plugin/plugin.json b/.cursor/skills/fastapi/.claude-plugin/plugin.json new file mode 100644 index 0000000..d35a122 --- /dev/null +++ b/.cursor/skills/fastapi/.claude-plugin/plugin.json @@ -0,0 +1,12 @@ +{ + "name": "fastapi", + "description": "Optional[str] # Still required!", + "version": "1.0.0", + "author": { + "name": "Jeremy Dawes", + "email": "jeremy@jezweb.net" + }, + "license": "MIT", + "repository": "https://github.com/jezweb/claude-skills", + "keywords": [] +} diff --git a/.cursor/skills/fastapi/SKILL.md b/.cursor/skills/fastapi/SKILL.md new file mode 100644 index 0000000..73c4cbc --- /dev/null +++ b/.cursor/skills/fastapi/SKILL.md @@ -0,0 +1,959 @@ +--- +name: fastapi +description: | + Build Python APIs with FastAPI, Pydantic v2, and SQLAlchemy 2.0 async. Covers project structure, JWT auth, validation, and database integration with uv package manager. Prevents 7 documented errors. + + Use when: creating Python APIs, implementing JWT auth, or troubleshooting 422 validation, CORS, async blocking, form data, background tasks, or OpenAPI schema errors. +user-invocable: true +--- + +# FastAPI Skill + +Production-tested patterns for FastAPI with Pydantic v2, SQLAlchemy 2.0 async, and JWT authentication. + +**Latest Versions** (verified January 2026): +- FastAPI: 0.128.0 +- Pydantic: 2.11.7 +- SQLAlchemy: 2.0.30 +- Uvicorn: 0.35.0 +- python-jose: 3.3.0 + +**Requirements**: +- Python 3.9+ (Python 3.8 support dropped in FastAPI 0.125.0) +- Pydantic v2.7.0+ (Pydantic v1 support completely removed in FastAPI 0.128.0) + +--- + +## Quick Start + +### Project Setup with uv + +```bash +# Create project +uv init my-api +cd my-api + +# Add dependencies +uv add fastapi[standard] sqlalchemy[asyncio] aiosqlite python-jose[cryptography] passlib[bcrypt] + +# Run development server +uv run fastapi dev src/main.py +``` + +### Minimal Working Example + +```python +# src/main.py +from fastapi import FastAPI +from pydantic import BaseModel + +app = FastAPI(title="My API") + +class Item(BaseModel): + name: str + price: float + +@app.get("/") +async def root(): + return {"message": "Hello World"} + +@app.post("/items") +async def create_item(item: Item): + return item +``` + +Run: `uv run fastapi dev src/main.py` + +Docs available at: `http://127.0.0.1:8000/docs` + +--- + +## Project Structure (Domain-Based) + +For maintainable projects, organize by domain not file type: + +``` +my-api/ +├── pyproject.toml +├── src/ +│ ├── __init__.py +│ ├── main.py # FastAPI app initialization +│ ├── config.py # Global settings +│ ├── database.py # Database connection +│ │ +│ ├── auth/ # Auth domain +│ │ ├── __init__.py +│ │ ├── router.py # Auth endpoints +│ │ ├── schemas.py # Pydantic models +│ │ ├── models.py # SQLAlchemy models +│ │ ├── service.py # Business logic +│ │ └── dependencies.py # Auth dependencies +│ │ +│ ├── items/ # Items domain +│ │ ├── __init__.py +│ │ ├── router.py +│ │ ├── schemas.py +│ │ ├── models.py +│ │ └── service.py +│ │ +│ └── shared/ # Shared utilities +│ ├── __init__.py +│ └── exceptions.py +└── tests/ + └── test_main.py +``` + +--- + +## Core 
Patterns + +### Pydantic Schemas (Validation) + +```python +# src/items/schemas.py +from pydantic import BaseModel, Field, ConfigDict +from datetime import datetime +from enum import Enum + +class ItemStatus(str, Enum): + DRAFT = "draft" + PUBLISHED = "published" + ARCHIVED = "archived" + +class ItemBase(BaseModel): + name: str = Field(..., min_length=1, max_length=100) + description: str | None = Field(None, max_length=500) + price: float = Field(..., gt=0, description="Price must be positive") + status: ItemStatus = ItemStatus.DRAFT + +class ItemCreate(ItemBase): + pass + +class ItemUpdate(BaseModel): + name: str | None = Field(None, min_length=1, max_length=100) + description: str | None = None + price: float | None = Field(None, gt=0) + status: ItemStatus | None = None + +class ItemResponse(ItemBase): + id: int + created_at: datetime + + model_config = ConfigDict(from_attributes=True) +``` + +**Key Points**: +- Use `Field()` for validation constraints +- Separate Create/Update/Response schemas +- `from_attributes=True` enables SQLAlchemy model conversion +- Use `str | None` (Python 3.10+) not `Optional[str]` + +### SQLAlchemy Models (Database) + +```python +# src/items/models.py +from sqlalchemy import String, Float, DateTime, Enum as SQLEnum +from sqlalchemy.orm import Mapped, mapped_column +from datetime import datetime +from src.database import Base +from src.items.schemas import ItemStatus + +class Item(Base): + __tablename__ = "items" + + id: Mapped[int] = mapped_column(primary_key=True) + name: Mapped[str] = mapped_column(String(100)) + description: Mapped[str | None] = mapped_column(String(500), nullable=True) + price: Mapped[float] = mapped_column(Float) + status: Mapped[ItemStatus] = mapped_column( + SQLEnum(ItemStatus), default=ItemStatus.DRAFT + ) + created_at: Mapped[datetime] = mapped_column( + DateTime, default=datetime.utcnow + ) +``` + +### Database Setup (Async SQLAlchemy 2.0) + +```python +# src/database.py +from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker +from sqlalchemy.orm import DeclarativeBase + +DATABASE_URL = "sqlite+aiosqlite:///./database.db" + +engine = create_async_engine(DATABASE_URL, echo=True) +async_session = async_sessionmaker(engine, expire_on_commit=False) + +class Base(DeclarativeBase): + pass + +async def get_db(): + async with async_session() as session: + try: + yield session + await session.commit() + except Exception: + await session.rollback() + raise +``` + +### Router Pattern + +```python +# src/items/router.py +from fastapi import APIRouter, Depends, HTTPException, status +from sqlalchemy.ext.asyncio import AsyncSession +from sqlalchemy import select + +from src.database import get_db +from src.items import schemas, models + +router = APIRouter(prefix="/items", tags=["items"]) + +@router.get("", response_model=list[schemas.ItemResponse]) +async def list_items( + skip: int = 0, + limit: int = 100, + db: AsyncSession = Depends(get_db) +): + result = await db.execute( + select(models.Item).offset(skip).limit(limit) + ) + return result.scalars().all() + +@router.get("/{item_id}", response_model=schemas.ItemResponse) +async def get_item(item_id: int, db: AsyncSession = Depends(get_db)): + result = await db.execute( + select(models.Item).where(models.Item.id == item_id) + ) + item = result.scalar_one_or_none() + if not item: + raise HTTPException(status_code=404, detail="Item not found") + return item + +@router.post("", response_model=schemas.ItemResponse, status_code=status.HTTP_201_CREATED) +async def 
create_item( + item_in: schemas.ItemCreate, + db: AsyncSession = Depends(get_db) +): + item = models.Item(**item_in.model_dump()) + db.add(item) + await db.commit() + await db.refresh(item) + return item +``` + +### Main App + +```python +# src/main.py +from contextlib import asynccontextmanager +from fastapi import FastAPI +from fastapi.middleware.cors import CORSMiddleware + +from src.database import engine, Base +from src.items.router import router as items_router +from src.auth.router import router as auth_router + +@asynccontextmanager +async def lifespan(app: FastAPI): + # Startup: Create tables + async with engine.begin() as conn: + await conn.run_sync(Base.metadata.create_all) + yield + # Shutdown: cleanup if needed + +app = FastAPI(title="My API", lifespan=lifespan) + +# CORS middleware +app.add_middleware( + CORSMiddleware, + allow_origins=["http://localhost:3000"], # Your frontend + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + +# Include routers +app.include_router(auth_router) +app.include_router(items_router) +``` + +--- + +## JWT Authentication + +### Auth Schemas + +```python +# src/auth/schemas.py +from pydantic import BaseModel, EmailStr + +class UserCreate(BaseModel): + email: EmailStr + password: str + +class UserResponse(BaseModel): + id: int + email: str + + model_config = ConfigDict(from_attributes=True) + +class Token(BaseModel): + access_token: str + token_type: str = "bearer" + +class TokenData(BaseModel): + user_id: int | None = None +``` + +### Auth Service + +```python +# src/auth/service.py +from datetime import datetime, timedelta +from jose import JWTError, jwt +from passlib.context import CryptContext +from src.config import settings + +pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") + +def hash_password(password: str) -> str: + return pwd_context.hash(password) + +def verify_password(plain: str, hashed: str) -> bool: + return pwd_context.verify(plain, hashed) + +def create_access_token(data: dict, expires_delta: timedelta | None = None) -> str: + to_encode = data.copy() + expire = datetime.utcnow() + (expires_delta or timedelta(minutes=15)) + to_encode.update({"exp": expire}) + return jwt.encode(to_encode, settings.SECRET_KEY, algorithm="HS256") + +def decode_token(token: str) -> dict | None: + try: + return jwt.decode(token, settings.SECRET_KEY, algorithms=["HS256"]) + except JWTError: + return None +``` + +### Auth Dependencies + +```python +# src/auth/dependencies.py +from fastapi import Depends, HTTPException, status +from fastapi.security import OAuth2PasswordBearer +from sqlalchemy.ext.asyncio import AsyncSession +from sqlalchemy import select + +from src.database import get_db +from src.auth import service, models, schemas + +oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/auth/login") + +async def get_current_user( + token: str = Depends(oauth2_scheme), + db: AsyncSession = Depends(get_db) +) -> models.User: + credentials_exception = HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Could not validate credentials", + headers={"WWW-Authenticate": "Bearer"}, + ) + + payload = service.decode_token(token) + if payload is None: + raise credentials_exception + + user_id = payload.get("sub") + if user_id is None: + raise credentials_exception + + result = await db.execute( + select(models.User).where(models.User.id == int(user_id)) + ) + user = result.scalar_one_or_none() + + if user is None: + raise credentials_exception + + return user +``` + +### Auth Router + +```python +# 
src/auth/router.py +from fastapi import APIRouter, Depends, HTTPException, status +from fastapi.security import OAuth2PasswordRequestForm +from sqlalchemy.ext.asyncio import AsyncSession +from sqlalchemy import select + +from src.database import get_db +from src.auth import schemas, models, service +from src.auth.dependencies import get_current_user + +router = APIRouter(prefix="/auth", tags=["auth"]) + +@router.post("/register", response_model=schemas.UserResponse) +async def register( + user_in: schemas.UserCreate, + db: AsyncSession = Depends(get_db) +): + # Check existing + result = await db.execute( + select(models.User).where(models.User.email == user_in.email) + ) + if result.scalar_one_or_none(): + raise HTTPException(status_code=400, detail="Email already registered") + + user = models.User( + email=user_in.email, + hashed_password=service.hash_password(user_in.password) + ) + db.add(user) + await db.commit() + await db.refresh(user) + return user + +@router.post("/login", response_model=schemas.Token) +async def login( + form_data: OAuth2PasswordRequestForm = Depends(), + db: AsyncSession = Depends(get_db) +): + result = await db.execute( + select(models.User).where(models.User.email == form_data.username) + ) + user = result.scalar_one_or_none() + + if not user or not service.verify_password(form_data.password, user.hashed_password): + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Incorrect email or password" + ) + + access_token = service.create_access_token(data={"sub": str(user.id)}) + return schemas.Token(access_token=access_token) + +@router.get("/me", response_model=schemas.UserResponse) +async def get_me(current_user: models.User = Depends(get_current_user)): + return current_user +``` + +### Protect Routes + +```python +# In any router +from src.auth.dependencies import get_current_user +from src.auth.models import User + +@router.post("/items") +async def create_item( + item_in: schemas.ItemCreate, + current_user: User = Depends(get_current_user), # Requires auth + db: AsyncSession = Depends(get_db) +): + item = models.Item(**item_in.model_dump(), user_id=current_user.id) + # ... +``` + +--- + +## Configuration + +```python +# src/config.py +from pydantic_settings import BaseSettings + +class Settings(BaseSettings): + DATABASE_URL: str = "sqlite+aiosqlite:///./database.db" + SECRET_KEY: str = "your-secret-key-change-in-production" + ACCESS_TOKEN_EXPIRE_MINUTES: int = 30 + + class Config: + env_file = ".env" + +settings = Settings() +``` + +Create `.env`: +``` +DATABASE_URL=sqlite+aiosqlite:///./database.db +SECRET_KEY=your-super-secret-key-here +ACCESS_TOKEN_EXPIRE_MINUTES=30 +``` + +--- + +## Critical Rules + +### Always Do + +1. **Separate Pydantic schemas from SQLAlchemy models** - Different jobs, different files +2. **Use async for I/O operations** - Database, HTTP calls, file access +3. **Validate with Pydantic Field()** - Constraints, defaults, descriptions +4. **Use dependency injection** - `Depends()` for database, auth, validation +5. **Return proper status codes** - 201 for create, 204 for delete, etc. + +### Never Do + +1. **Never use blocking calls in async routes** - No `time.sleep()`, use `asyncio.sleep()` +2. **Never put business logic in routes** - Use service layer +3. **Never hardcode secrets** - Use environment variables +4. **Never skip validation** - Always use Pydantic schemas +5. 
**Never use `*` in CORS origins for production** - Specify exact origins + +--- + +## Known Issues Prevention + +This skill prevents **7** documented issues from official FastAPI GitHub and release notes. + +### Issue #1: Form Data Loses Field Set Metadata + +**Error**: `model.model_fields_set` includes default values when using `Form()` +**Source**: [GitHub Issue #13399](https://github.com/fastapi/fastapi/issues/13399) +**Why It Happens**: Form data parsing preloads default values and passes them to the validator, making it impossible to distinguish between fields explicitly set by the user and fields using defaults. This bug ONLY affects Form data, not JSON body data. + +**Prevention**: +```python +# ✗ AVOID: Pydantic model with Form when you need field_set metadata +from typing import Annotated +from fastapi import Form + +@app.post("/form") +async def endpoint(model: Annotated[MyModel, Form()]): + fields = model.model_fields_set # Unreliable! ❌ + +# ✓ USE: Individual form fields or JSON body instead +@app.post("/form-individual") +async def endpoint( + field_1: Annotated[bool, Form()] = True, + field_2: Annotated[str | None, Form()] = None +): + # You know exactly what was provided ✓ + +# ✓ OR: Use JSON body when metadata matters +@app.post("/json") +async def endpoint(model: MyModel): + fields = model.model_fields_set # Works correctly ✓ +``` + +### Issue #2: BackgroundTasks Silently Overwritten by Custom Response + +**Error**: Background tasks added via `BackgroundTasks` dependency don't run +**Source**: [GitHub Issue #11215](https://github.com/fastapi/fastapi/issues/11215) +**Why It Happens**: When you return a custom `Response` with a `background` parameter, it overwrites all tasks added to the injected `BackgroundTasks` dependency. This is not documented and causes silent failures. + +**Prevention**: +```python +# ✗ WRONG: Mixing both mechanisms +from fastapi import BackgroundTasks +from starlette.responses import Response, BackgroundTask + +@app.get("/") +async def endpoint(tasks: BackgroundTasks): + tasks.add_task(send_email) # This will be lost! ❌ + return Response( + content="Done", + background=BackgroundTask(log_event) # Only this runs + ) + +# ✓ RIGHT: Use only BackgroundTasks dependency +@app.get("/") +async def endpoint(tasks: BackgroundTasks): + tasks.add_task(send_email) + tasks.add_task(log_event) + return {"status": "done"} # All tasks run ✓ + +# ✓ OR: Use only Response background (but can't inject dependencies) +@app.get("/") +async def endpoint(): + return Response( + content="Done", + background=BackgroundTask(log_event) + ) +``` + +**Rule**: Pick ONE mechanism and stick with it. Don't mix injected `BackgroundTasks` with `Response(background=...)`. + +### Issue #3: Optional Form Fields Break with TestClient (Regression) + +**Error**: `422: "Input should be 'abc' or 'def'"` for optional Literal fields +**Source**: [GitHub Issue #12245](https://github.com/fastapi/fastapi/issues/12245) +**Why It Happens**: Starting in FastAPI 0.114.0, optional form fields with `Literal` types fail validation when passed `None` via TestClient. Worked in 0.113.0. 
+ +**Prevention**: +```python +from typing import Annotated, Literal, Optional +from fastapi import Form +from fastapi.testclient import TestClient + +# ✗ PROBLEMATIC: Optional Literal with Form (breaks in 0.114.0+) +@app.post("/") +async def endpoint( + attribute: Annotated[Optional[Literal["abc", "def"]], Form()] +): + return {"attribute": attribute} + +client = TestClient(app) +data = {"attribute": None} # or omit the field +response = client.post("/", data=data) # Returns 422 ❌ + +# ✓ WORKAROUND 1: Don't pass None explicitly, omit the field +data = {} # Omit instead of None +response = client.post("/", data=data) # Works ✓ + +# ✓ WORKAROUND 2: Avoid Literal types with optional form fields +@app.post("/") +async def endpoint(attribute: Annotated[str | None, Form()] = None): + # Validate in application logic instead + if attribute and attribute not in ["abc", "def"]: + raise HTTPException(400, "Invalid attribute") +``` + +### Issue #4: Pydantic Json Type Doesn't Work with Form Data + +**Error**: `"JSON object must be str, bytes or bytearray"` +**Source**: [GitHub Issue #10997](https://github.com/fastapi/fastapi/issues/10997) +**Why It Happens**: Using Pydantic's `Json` type directly with `Form()` fails. You must accept the field as `str` and parse manually. + +**Prevention**: +```python +from typing import Annotated +from fastapi import Form +from pydantic import Json, BaseModel + +# ✗ WRONG: Json type directly with Form +@app.post("/broken") +async def broken(json_list: Annotated[Json[list[str]], Form()]) -> list[str]: + return json_list # Returns 422 ❌ + +# ✓ RIGHT: Accept as str, parse with Pydantic +class JsonListModel(BaseModel): + json_list: Json[list[str]] + +@app.post("/working") +async def working(json_list: Annotated[str, Form()]) -> list[str]: + model = JsonListModel(json_list=json_list) # Pydantic parses here + return model.json_list # Works ✓ +``` + +### Issue #5: Annotated with ForwardRef Breaks OpenAPI Generation + +**Error**: Missing or incorrect OpenAPI schema for dependency types +**Source**: [GitHub Issue #13056](https://github.com/fastapi/fastapi/issues/13056) +**Why It Happens**: When using `Annotated` with `Depends()` and a forward reference (from `__future__ import annotations`), OpenAPI schema generation fails or produces incorrect schemas. 
+ +**Prevention**: +```python +# ✗ PROBLEMATIC: Forward reference with Depends +from __future__ import annotations +from dataclasses import dataclass +from typing import Annotated +from fastapi import Depends, FastAPI + +app = FastAPI() + +def get_potato() -> Potato: # Forward reference + return Potato(color='red', size=10) + +@app.get('/') +async def read_root(potato: Annotated[Potato, Depends(get_potato)]): + return {'Hello': 'World'} +# OpenAPI schema doesn't include Potato definition correctly ❌ + +@dataclass +class Potato: + color: str + size: int + +# ✓ WORKAROUND 1: Don't use __future__ annotations in route files +# Remove: from __future__ import annotations + +# ✓ WORKAROUND 2: Use string literals for type hints +def get_potato() -> "Potato": + return Potato(color='red', size=10) + +# ✓ WORKAROUND 3: Define classes before they're used in dependencies +@dataclass +class Potato: + color: str + size: int + +def get_potato() -> Potato: # Now works ✓ + return Potato(color='red', size=10) +``` + +### Issue #6: Pydantic v2 Path Parameter Union Type Breaking Change + +**Error**: Path parameters with `int | str` always parse as `str` in Pydantic v2 +**Source**: [GitHub Issue #11251](https://github.com/fastapi/fastapi/issues/11251) | Community-sourced +**Why It Happens**: Major breaking change when migrating from Pydantic v1 to v2. Union types with `str` in path/query parameters now always parse as `str` (worked correctly in v1). + +**Prevention**: +```python +from uuid import UUID + +# ✗ PROBLEMATIC: Union with str in path parameter +@app.get("/int/{path}") +async def int_path(path: int | str): + return str(type(path)) + # Pydantic v1: returns for "123" + # Pydantic v2: returns for "123" ❌ + +@app.get("/uuid/{path}") +async def uuid_path(path: UUID | str): + return str(type(path)) + # Pydantic v1: returns for valid UUID + # Pydantic v2: returns ❌ + +# ✓ RIGHT: Avoid union types with str in path/query parameters +@app.get("/int/{path}") +async def int_path(path: int): + return str(type(path)) # Works correctly ✓ + +# ✓ ALTERNATIVE: Use validators if type coercion needed +from pydantic import field_validator + +class PathParams(BaseModel): + path: int | str + + @field_validator('path') + def coerce_to_int(cls, v): + if isinstance(v, str) and v.isdigit(): + return int(v) + return v +``` + +### Issue #7: ValueError in field_validator Returns 500 Instead of 422 + +**Error**: `500 Internal Server Error` when raising `ValueError` in custom validators +**Source**: [GitHub Discussion #10779](https://github.com/fastapi/fastapi/discussions/10779) | Community-sourced +**Why It Happens**: When raising `ValueError` inside a Pydantic `@field_validator` with Form fields, FastAPI returns 500 Internal Server Error instead of the expected 422 Unprocessable Entity validation error. + +**Prevention**: +```python +from typing import Annotated +from fastapi import Form +from pydantic import BaseModel, field_validator, ValidationError, Field + +# ✗ WRONG: ValueError in validator +class MyForm(BaseModel): + value: int + + @field_validator('value') + def validate_value(cls, v): + if v < 0: + raise ValueError("Value must be positive") # Returns 500! 
❌ + return v + +# ✓ RIGHT 1: Raise ValidationError instead +class MyForm(BaseModel): + value: int + + @field_validator('value') + def validate_value(cls, v): + if v < 0: + raise ValidationError("Value must be positive") # Returns 422 ✓ + return v + +# ✓ RIGHT 2: Use Pydantic's built-in constraints +class MyForm(BaseModel): + value: Annotated[int, Field(gt=0)] # Built-in validation, returns 422 ✓ +``` + +--- + +## Common Errors & Fixes + +### 422 Unprocessable Entity + +**Cause**: Request body doesn't match Pydantic schema + +**Debug**: +1. Check `/docs` endpoint - test there first +2. Verify JSON structure matches schema +3. Check required vs optional fields + +**Fix**: Add custom validation error handler: +```python +from fastapi.exceptions import RequestValidationError +from fastapi.responses import JSONResponse + +@app.exception_handler(RequestValidationError) +async def validation_exception_handler(request, exc): + return JSONResponse( + status_code=422, + content={"detail": exc.errors(), "body": exc.body} + ) +``` + +### CORS Errors + +**Cause**: Missing or misconfigured CORS middleware + +**Fix**: +```python +app.add_middleware( + CORSMiddleware, + allow_origins=["http://localhost:3000"], # Not "*" in production + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) +``` + +### Async Blocking Event Loop + +**Cause**: Blocking call in async route (e.g., `time.sleep()`, sync database client, CPU-bound operations) + +**Symptoms** (production-scale): +- Throughput plateaus far earlier than expected +- Latency "balloons" as concurrency increases +- Request pattern looks almost serial under load +- Requests queue indefinitely when event loop is saturated +- Small scattered blocking calls that aren't obvious (not infinite loops) + +**Fix**: Use async alternatives: +```python +# ✗ WRONG: Blocks event loop +import time +from sqlalchemy import create_engine # Sync client + +@app.get("/users") +async def get_users(): + time.sleep(0.1) # Even small blocking adds up at scale! + result = sync_db_client.query("SELECT * FROM users") # Blocks! + return result + +# ✓ RIGHT 1: Use async database driver +from sqlalchemy.ext.asyncio import AsyncSession +from sqlalchemy import select + +@app.get("/users") +async def get_users(db: AsyncSession = Depends(get_db)): + await asyncio.sleep(0.1) # Non-blocking + result = await db.execute(select(User)) + return result.scalars().all() + +# ✓ RIGHT 2: Use def (not async def) for CPU-bound routes +# FastAPI runs def routes in thread pool automatically +@app.get("/cpu-heavy") +def cpu_heavy_task(): # Note: def not async def + return expensive_cpu_work() # Runs in thread pool ✓ + +# ✓ RIGHT 3: Use run_in_executor for blocking calls in async routes +import asyncio +from concurrent.futures import ThreadPoolExecutor + +executor = ThreadPoolExecutor() + +@app.get("/mixed") +async def mixed_task(): + # Run blocking function in thread pool + result = await asyncio.get_event_loop().run_in_executor( + executor, + blocking_function # Your blocking function + ) + return result +``` + +**Sources**: [Production Case Study (Jan 2026)](https://www.techbuddies.io/2026/01/10/case-study-fixing-fastapi-event-loop-blocking-in-a-high-traffic-api/) | Community-sourced + +### "Field required" for Optional Fields + +**Cause**: Using `Optional[str]` without default + +**Fix**: +```python +# Wrong +description: Optional[str] # Still required! 
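# Pydantic v2 decides "required vs. optional" from whether a default is assigned,
# not from the Optional annotation, so the field above still fails with
# "Field required" when it is omitted from the request.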
+ +# Right +description: str | None = None # Optional with default +``` + +--- + +## Testing + +```python +# tests/test_main.py +import pytest +from httpx import AsyncClient, ASGITransport +from src.main import app + +@pytest.fixture +async def client(): + async with AsyncClient( + transport=ASGITransport(app=app), + base_url="http://test" + ) as ac: + yield ac + +@pytest.mark.asyncio +async def test_root(client): + response = await client.get("/") + assert response.status_code == 200 + +@pytest.mark.asyncio +async def test_create_item(client): + response = await client.post( + "/items", + json={"name": "Test", "price": 9.99} + ) + assert response.status_code == 201 + assert response.json()["name"] == "Test" +``` + +Run: `uv run pytest` + +--- + +## Deployment + +### Uvicorn (Development) +```bash +uv run fastapi dev src/main.py +``` + +### Uvicorn (Production) +```bash +uv run uvicorn src.main:app --host 0.0.0.0 --port 8000 +``` + +### Gunicorn + Uvicorn (Production with workers) +```bash +uv add gunicorn +uv run gunicorn src.main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 +``` + +### Docker +```dockerfile +FROM python:3.12-slim + +WORKDIR /app +COPY . . + +RUN pip install uv && uv sync + +EXPOSE 8000 +CMD ["uv", "run", "uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000"] +``` + +--- + +## References + +- [FastAPI Documentation](https://fastapi.tiangolo.com/) +- [FastAPI Best Practices](https://github.com/zhanymkanov/fastapi-best-practices) +- [Pydantic v2 Documentation](https://docs.pydantic.dev/) +- [SQLAlchemy 2.0 Async](https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html) +- [uv Package Manager](https://docs.astral.sh/uv/) + +--- + +**Last verified**: 2026-01-21 | **Skill version**: 1.1.0 | **Changes**: Added 7 known issues (form data bugs, background tasks, Pydantic v2 migration gotchas), expanded async blocking guidance with production patterns +**Maintainer**: Jezweb | jeremy@jezweb.net diff --git a/.cursor/skills/fastapi/templates/.env.example b/.cursor/skills/fastapi/templates/.env.example new file mode 100644 index 0000000..e548868 --- /dev/null +++ b/.cursor/skills/fastapi/templates/.env.example @@ -0,0 +1,10 @@ +# Database +DATABASE_URL=sqlite+aiosqlite:///./database.db + +# JWT Authentication +SECRET_KEY=your-super-secret-key-change-in-production +ACCESS_TOKEN_EXPIRE_MINUTES=30 + +# App +APP_NAME=My API +DEBUG=false diff --git a/.cursor/skills/fastapi/templates/pyproject.toml b/.cursor/skills/fastapi/templates/pyproject.toml new file mode 100644 index 0000000..5b7e2f2 --- /dev/null +++ b/.cursor/skills/fastapi/templates/pyproject.toml @@ -0,0 +1,25 @@ +[project] +name = "my-api" +version = "0.1.0" +description = "FastAPI application" +readme = "README.md" +requires-python = ">=3.11" +dependencies = [ + "fastapi[standard]>=0.123.0", + "sqlalchemy[asyncio]>=2.0.30", + "aiosqlite>=0.20.0", + "python-jose[cryptography]>=3.3.0", + "passlib[bcrypt]>=1.7.4", + "pydantic-settings>=2.0.0", +] + +[project.optional-dependencies] +dev = [ + "pytest>=8.0.0", + "pytest-asyncio>=0.23.0", + "httpx>=0.27.0", +] + +[tool.pytest.ini_options] +asyncio_mode = "auto" +testpaths = ["tests"] diff --git a/.cursor/skills/fastapi/templates/src/auth/dependencies.py b/.cursor/skills/fastapi/templates/src/auth/dependencies.py new file mode 100644 index 0000000..d2fbf47 --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/auth/dependencies.py @@ -0,0 +1,64 @@ +"""Authentication dependencies for route protection.""" + +from fastapi import Depends, HTTPException, 
status +from fastapi.security import OAuth2PasswordBearer +from sqlalchemy import select +from sqlalchemy.ext.asyncio import AsyncSession + +from src.auth import models, service +from src.database import get_db + +oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/auth/login") + + +async def get_current_user( + token: str = Depends(oauth2_scheme), + db: AsyncSession = Depends(get_db), +) -> models.User: + """ + Dependency to get current authenticated user from JWT token. + + Usage in routes: + @router.get("/protected") + async def protected_route(user: User = Depends(get_current_user)): + return {"user_id": user.id} + """ + credentials_exception = HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Could not validate credentials", + headers={"WWW-Authenticate": "Bearer"}, + ) + + # Decode token + payload = service.decode_token(token) + if payload is None: + raise credentials_exception + + # Get user ID from token + user_id = payload.get("sub") + if user_id is None: + raise credentials_exception + + # Fetch user from database + result = await db.execute( + select(models.User).where(models.User.id == int(user_id)) + ) + user = result.scalar_one_or_none() + + if user is None: + raise credentials_exception + + if not user.is_active: + raise HTTPException( + status_code=status.HTTP_403_FORBIDDEN, + detail="User is inactive", + ) + + return user + + +async def get_current_active_user( + user: models.User = Depends(get_current_user), +) -> models.User: + """Dependency that ensures user is active (already checked in get_current_user).""" + return user diff --git a/.cursor/skills/fastapi/templates/src/auth/models.py b/.cursor/skills/fastapi/templates/src/auth/models.py new file mode 100644 index 0000000..a688df1 --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/auth/models.py @@ -0,0 +1,20 @@ +"""User database model.""" + +from datetime import datetime + +from sqlalchemy import DateTime, String +from sqlalchemy.orm import Mapped, mapped_column + +from src.database import Base + + +class User(Base): + """User model for authentication.""" + + __tablename__ = "users" + + id: Mapped[int] = mapped_column(primary_key=True) + email: Mapped[str] = mapped_column(String(255), unique=True, index=True) + hashed_password: Mapped[str] = mapped_column(String(255)) + is_active: Mapped[bool] = mapped_column(default=True) + created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow) diff --git a/.cursor/skills/fastapi/templates/src/auth/router.py b/.cursor/skills/fastapi/templates/src/auth/router.py new file mode 100644 index 0000000..00b8959 --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/auth/router.py @@ -0,0 +1,89 @@ +"""Authentication routes - register, login, get current user.""" + +from fastapi import APIRouter, Depends, HTTPException, status +from fastapi.security import OAuth2PasswordRequestForm +from sqlalchemy import select +from sqlalchemy.ext.asyncio import AsyncSession + +from src.auth import models, schemas, service +from src.auth.dependencies import get_current_user +from src.database import get_db + +router = APIRouter(prefix="/auth", tags=["auth"]) + + +@router.post( + "/register", + response_model=schemas.UserResponse, + status_code=status.HTTP_201_CREATED, +) +async def register( + user_in: schemas.UserCreate, + db: AsyncSession = Depends(get_db), +): + """Register a new user.""" + # Check if email already exists + result = await db.execute( + select(models.User).where(models.User.email == user_in.email) + ) + if result.scalar_one_or_none(): + 
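        # Any existing row means this address is already registered, so reject
        # with 400 before creating a duplicate user.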
raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="Email already registered", + ) + + # Create user + user = models.User( + email=user_in.email, + hashed_password=service.hash_password(user_in.password), + ) + db.add(user) + await db.commit() + await db.refresh(user) + + return user + + +@router.post("/login", response_model=schemas.Token) +async def login( + form_data: OAuth2PasswordRequestForm = Depends(), + db: AsyncSession = Depends(get_db), +): + """ + Login and get access token. + + Note: OAuth2PasswordRequestForm expects 'username' field, + but we use it for email. + """ + # Find user by email (username field) + result = await db.execute( + select(models.User).where(models.User.email == form_data.username) + ) + user = result.scalar_one_or_none() + + # Verify credentials + if not user or not service.verify_password( + form_data.password, user.hashed_password + ): + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Incorrect email or password", + headers={"WWW-Authenticate": "Bearer"}, + ) + + if not user.is_active: + raise HTTPException( + status_code=status.HTTP_403_FORBIDDEN, + detail="User is inactive", + ) + + # Create access token + access_token = service.create_access_token(data={"sub": str(user.id)}) + + return schemas.Token(access_token=access_token) + + +@router.get("/me", response_model=schemas.UserResponse) +async def get_me(current_user: models.User = Depends(get_current_user)): + """Get current authenticated user.""" + return current_user diff --git a/.cursor/skills/fastapi/templates/src/auth/schemas.py b/.cursor/skills/fastapi/templates/src/auth/schemas.py new file mode 100644 index 0000000..dfb4adc --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/auth/schemas.py @@ -0,0 +1,33 @@ +"""Pydantic schemas for authentication.""" + +from pydantic import BaseModel, ConfigDict, EmailStr, Field + + +class UserCreate(BaseModel): + """Schema for user registration.""" + + email: EmailStr + password: str = Field(..., min_length=8, description="Minimum 8 characters") + + +class UserResponse(BaseModel): + """Schema for user response (no password).""" + + id: int + email: str + is_active: bool + + model_config = ConfigDict(from_attributes=True) + + +class Token(BaseModel): + """JWT token response.""" + + access_token: str + token_type: str = "bearer" + + +class TokenData(BaseModel): + """Decoded token data.""" + + user_id: int | None = None diff --git a/.cursor/skills/fastapi/templates/src/auth/service.py b/.cursor/skills/fastapi/templates/src/auth/service.py new file mode 100644 index 0000000..f62da24 --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/auth/service.py @@ -0,0 +1,42 @@ +"""Authentication service - password hashing and JWT tokens.""" + +from datetime import datetime, timedelta + +from jose import JWTError, jwt +from passlib.context import CryptContext + +from src.config import settings + +# Password hashing +pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") + + +def hash_password(password: str) -> str: + """Hash a password using bcrypt.""" + return pwd_context.hash(password) + + +def verify_password(plain_password: str, hashed_password: str) -> bool: + """Verify a password against its hash.""" + return pwd_context.verify(plain_password, hashed_password) + + +def create_access_token(data: dict, expires_delta: timedelta | None = None) -> str: + """Create a JWT access token.""" + to_encode = data.copy() + expire = datetime.utcnow() + ( + expires_delta or 
timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES) + ) + to_encode.update({"exp": expire}) + return jwt.encode(to_encode, settings.SECRET_KEY, algorithm=settings.ALGORITHM) + + +def decode_token(token: str) -> dict | None: + """Decode and verify a JWT token. Returns None if invalid.""" + try: + payload = jwt.decode( + token, settings.SECRET_KEY, algorithms=[settings.ALGORITHM] + ) + return payload + except JWTError: + return None diff --git a/.cursor/skills/fastapi/templates/src/config.py b/.cursor/skills/fastapi/templates/src/config.py new file mode 100644 index 0000000..d29b817 --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/config.py @@ -0,0 +1,26 @@ +"""Application configuration using Pydantic Settings.""" + +from pydantic_settings import BaseSettings + + +class Settings(BaseSettings): + """Application settings loaded from environment variables.""" + + # Database + DATABASE_URL: str = "sqlite+aiosqlite:///./database.db" + + # JWT Authentication + SECRET_KEY: str = "change-this-secret-key-in-production" + ACCESS_TOKEN_EXPIRE_MINUTES: int = 30 + ALGORITHM: str = "HS256" + + # App + APP_NAME: str = "My API" + DEBUG: bool = False + + class Config: + env_file = ".env" + env_file_encoding = "utf-8" + + +settings = Settings() diff --git a/.cursor/skills/fastapi/templates/src/database.py b/.cursor/skills/fastapi/templates/src/database.py new file mode 100644 index 0000000..9508380 --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/database.py @@ -0,0 +1,36 @@ +"""Database configuration with async SQLAlchemy 2.0.""" + +from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine +from sqlalchemy.orm import DeclarativeBase + +from src.config import settings + +# Create async engine +engine = create_async_engine( + settings.DATABASE_URL, + echo=settings.DEBUG, +) + +# Session factory +async_session = async_sessionmaker( + engine, + class_=AsyncSession, + expire_on_commit=False, +) + + +class Base(DeclarativeBase): + """Base class for all SQLAlchemy models.""" + + pass + + +async def get_db(): + """Dependency that provides a database session.""" + async with async_session() as session: + try: + yield session + await session.commit() + except Exception: + await session.rollback() + raise diff --git a/.cursor/skills/fastapi/templates/src/main.py b/.cursor/skills/fastapi/templates/src/main.py new file mode 100644 index 0000000..a2ce73f --- /dev/null +++ b/.cursor/skills/fastapi/templates/src/main.py @@ -0,0 +1,62 @@ +"""FastAPI application entry point.""" + +from contextlib import asynccontextmanager + +from fastapi import FastAPI +from fastapi.middleware.cors import CORSMiddleware + +from src.config import settings +from src.database import Base, engine + +# Import routers +from src.auth.router import router as auth_router + +# Add more routers as needed: +# from src.items.router import router as items_router + + +@asynccontextmanager +async def lifespan(app: FastAPI): + """Application lifespan handler for startup/shutdown.""" + # Startup: Create database tables + async with engine.begin() as conn: + await conn.run_sync(Base.metadata.create_all) + yield + # Shutdown: Add cleanup here if needed + + +app = FastAPI( + title=settings.APP_NAME, + lifespan=lifespan, +) + +# CORS middleware - configure for your frontend +app.add_middleware( + CORSMiddleware, + allow_origins=[ + "http://localhost:3000", # React dev server + "http://localhost:5173", # Vite dev server + ], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + +# Include 
routers +app.include_router(auth_router) +# app.include_router(items_router) + + +@app.get("/") +async def root(): + """Health check endpoint.""" + return {"status": "ok", "app": settings.APP_NAME} + + +@app.get("/health") +async def health(): + """Detailed health check.""" + return { + "status": "healthy", + "database": "connected", + } diff --git a/.cursor/skills/fastapi/templates/tests/test_main.py b/.cursor/skills/fastapi/templates/tests/test_main.py new file mode 100644 index 0000000..e2ccb01 --- /dev/null +++ b/.cursor/skills/fastapi/templates/tests/test_main.py @@ -0,0 +1,97 @@ +"""Basic API tests.""" + +import pytest +from httpx import ASGITransport, AsyncClient + +from src.main import app + + +@pytest.fixture +async def client(): + """Async test client fixture.""" + async with AsyncClient( + transport=ASGITransport(app=app), + base_url="http://test", + ) as ac: + yield ac + + +@pytest.mark.asyncio +async def test_root(client: AsyncClient): + """Test root endpoint returns ok status.""" + response = await client.get("/") + assert response.status_code == 200 + assert response.json()["status"] == "ok" + + +@pytest.mark.asyncio +async def test_health(client: AsyncClient): + """Test health endpoint.""" + response = await client.get("/health") + assert response.status_code == 200 + assert response.json()["status"] == "healthy" + + +@pytest.mark.asyncio +async def test_register_user(client: AsyncClient): + """Test user registration.""" + response = await client.post( + "/auth/register", + json={"email": "test@example.com", "password": "testpassword123"}, + ) + assert response.status_code == 201 + data = response.json() + assert data["email"] == "test@example.com" + assert "id" in data + + +@pytest.mark.asyncio +async def test_login(client: AsyncClient): + """Test user login.""" + # First register + await client.post( + "/auth/register", + json={"email": "login@example.com", "password": "testpassword123"}, + ) + + # Then login + response = await client.post( + "/auth/login", + data={"username": "login@example.com", "password": "testpassword123"}, + ) + assert response.status_code == 200 + data = response.json() + assert "access_token" in data + assert data["token_type"] == "bearer" + + +@pytest.mark.asyncio +async def test_get_me_unauthorized(client: AsyncClient): + """Test /auth/me without token returns 401.""" + response = await client.get("/auth/me") + assert response.status_code == 401 + + +@pytest.mark.asyncio +async def test_get_me_authorized(client: AsyncClient): + """Test /auth/me with valid token returns user.""" + # Register + await client.post( + "/auth/register", + json={"email": "me@example.com", "password": "testpassword123"}, + ) + + # Login + login_response = await client.post( + "/auth/login", + data={"username": "me@example.com", "password": "testpassword123"}, + ) + token = login_response.json()["access_token"] + + # Get me + response = await client.get( + "/auth/me", + headers={"Authorization": f"Bearer {token}"}, + ) + assert response.status_code == 200 + assert response.json()["email"] == "me@example.com" diff --git a/.cursor/skills/file-organizer/SKILL.md b/.cursor/skills/file-organizer/SKILL.md new file mode 100644 index 0000000..66762b8 --- /dev/null +++ b/.cursor/skills/file-organizer/SKILL.md @@ -0,0 +1,433 @@ +--- +name: file-organizer +description: Intelligently organizes your files and folders across your computer by understanding context, finding duplicates, suggesting better structures, and automating cleanup tasks. 
Reduces cognitive load and keeps your digital workspace tidy without manual effort. +--- + +# File Organizer + +This skill acts as your personal organization assistant, helping you maintain a clean, logical file structure across your computer without the mental overhead of constant manual organization. + +## When to Use This Skill + +- Your Downloads folder is a chaotic mess +- You can't find files because they're scattered everywhere +- You have duplicate files taking up space +- Your folder structure doesn't make sense anymore +- You want to establish better organization habits +- You're starting a new project and need a good structure +- You're cleaning up before archiving old projects + +## What This Skill Does + +1. **Analyzes Current Structure**: Reviews your folders and files to understand what you have +2. **Finds Duplicates**: Identifies duplicate files across your system +3. **Suggests Organization**: Proposes logical folder structures based on your content +4. **Automates Cleanup**: Moves, renames, and organizes files with your approval +5. **Maintains Context**: Makes smart decisions based on file types, dates, and content +6. **Reduces Clutter**: Identifies old files you probably don't need anymore + +## How to Use + +### From Your Home Directory + +``` +cd ~ +``` + +Then run Claude Code and ask for help: + +``` +Help me organize my Downloads folder +``` + +``` +Find duplicate files in my Documents folder +``` + +``` +Review my project directories and suggest improvements +``` + +### Specific Organization Tasks + +``` +Organize these downloads into proper folders based on what they are +``` + +``` +Find duplicate files and help me decide which to keep +``` + +``` +Clean up old files I haven't touched in 6+ months +``` + +``` +Create a better folder structure for my [work/projects/photos/etc] +``` + +## Instructions + +When a user requests file organization help: + +1. **Understand the Scope** + + Ask clarifying questions: + - Which directory needs organization? (Downloads, Documents, entire home folder?) + - What's the main problem? (Can't find things, duplicates, too messy, no structure?) + - Any files or folders to avoid? (Current projects, sensitive data?) + - How aggressively to organize? (Conservative vs. comprehensive cleanup) + +2. **Analyze Current State** + + Review the target directory: + ```bash + # Get overview of current structure + ls -la [target_directory] + + # Check file types and sizes + find [target_directory] -type f -exec file {} \; | head -20 + + # Identify largest files + du -sh [target_directory]/* | sort -rh | head -20 + + # Count file types + find [target_directory] -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn + ``` + + Summarize findings: + - Total files and folders + - File type breakdown + - Size distribution + - Date ranges + - Obvious organization issues + +3. **Identify Organization Patterns** + + Based on the files, determine logical groupings: + + **By Type**: + - Documents (PDFs, DOCX, TXT) + - Images (JPG, PNG, SVG) + - Videos (MP4, MOV) + - Archives (ZIP, TAR, DMG) + - Code/Projects (directories with code) + - Spreadsheets (XLSX, CSV) + - Presentations (PPTX, KEY) + + **By Purpose**: + - Work vs. Personal + - Active vs. Archive + - Project-specific + - Reference materials + - Temporary/scratch files + + **By Date**: + - Current year/month + - Previous years + - Very old (archive candidates) + +4. 
**Find Duplicates** + + When requested, search for duplicates: + ```bash + # Find exact duplicates by hash + find [directory] -type f -exec md5 {} \; | sort | uniq -d + + # Find files with same name + find [directory] -type f -printf '%f\n' | sort | uniq -d + + # Find similar-sized files + find [directory] -type f -printf '%s %p\n' | sort -n + ``` + + For each set of duplicates: + - Show all file paths + - Display sizes and modification dates + - Recommend which to keep (usually newest or best-named) + - **Important**: Always ask for confirmation before deleting + +5. **Propose Organization Plan** + + Present a clear plan before making changes: + + ```markdown + # Organization Plan for [Directory] + + ## Current State + - X files across Y folders + - [Size] total + - File types: [breakdown] + - Issues: [list problems] + + ## Proposed Structure + + ``` + [Directory]/ + ├── Work/ + │ ├── Projects/ + │ ├── Documents/ + │ └── Archive/ + ├── Personal/ + │ ├── Photos/ + │ ├── Documents/ + │ └── Media/ + └── Downloads/ + ├── To-Sort/ + └── Archive/ + ``` + + ## Changes I'll Make + + 1. **Create new folders**: [list] + 2. **Move files**: + - X PDFs → Work/Documents/ + - Y images → Personal/Photos/ + - Z old files → Archive/ + 3. **Rename files**: [any renaming patterns] + 4. **Delete**: [duplicates or trash files] + + ## Files Needing Your Decision + + - [List any files you're unsure about] + + Ready to proceed? (yes/no/modify) + ``` + +6. **Execute Organization** + + After approval, organize systematically: + + ```bash + # Create folder structure + mkdir -p "path/to/new/folders" + + # Move files with clear logging + mv "old/path/file.pdf" "new/path/file.pdf" + + # Rename files with consistent patterns + # Example: "YYYY-MM-DD - Description.ext" + ``` + + **Important Rules**: + - Always confirm before deleting anything + - Log all moves for potential undo + - Preserve original modification dates + - Handle filename conflicts gracefully + - Stop and ask if you encounter unexpected situations + +7. **Provide Summary and Maintenance Tips** + + After organizing: + + ```markdown + # Organization Complete! ✨ + + ## What Changed + + - Created [X] new folders + - Organized [Y] files + - Freed [Z] GB by removing duplicates + - Archived [W] old files + + ## New Structure + + [Show the new folder tree] + + ## Maintenance Tips + + To keep this organized: + + 1. **Weekly**: Sort new downloads + 2. **Monthly**: Review and archive completed projects + 3. **Quarterly**: Check for new duplicates + 4. **Yearly**: Archive old files + + ## Quick Commands for You + + ```bash + # Find files modified this week + find . -type f -mtime -7 + + # Sort downloads by type + [custom command for their setup] + + # Find duplicates + [custom command] + ``` + + Want to organize another folder? + ``` + +## Examples + +### Example 1: Organizing Downloads (From Justin Dielmann) + +**User**: "My Downloads folder is a mess with 500+ files. Help me organize it." + +**Process**: +1. Analyzes Downloads folder +2. Finds patterns: work docs, personal photos, installers, random PDFs +3. Proposes structure: + - Downloads/ + - Work/ + - Personal/ + - Installers/ (DMG, PKG files) + - Archive/ + - ToSort/ (things needing decisions) +4. Asks for confirmation +5. Moves files intelligently based on content and names +6. Results: 500 files → 5 organized folders + +### Example 2: Finding and Removing Duplicates + +**User**: "Find duplicate files in my Documents and help me decide which to keep." 
+ +**Output**: +```markdown +# Found 23 Sets of Duplicates (156 MB total) + +## Duplicate Set 1: "proposal.pdf" +- `/Documents/proposal.pdf` (2.3 MB, modified: 2024-03-15) +- `/Documents/old/proposal.pdf` (2.3 MB, modified: 2024-03-15) +- `/Desktop/proposal.pdf` (2.3 MB, modified: 2024-03-10) + +**Recommendation**: Keep `/Documents/proposal.pdf` (most recent in correct location) +Delete the other 2 copies? + +[Continue for all duplicates...] +``` + +### Example 3: Restructuring Projects Folder + +**User**: "Review my ~/Projects directory and suggest improvements." + +**Output**: +```markdown +# Analysis of ~/Projects + +## Current Structure Issues +- Mix of active and archived projects (3+ years old) +- No consistent naming convention +- Some projects at root, others in random subfolders +- Duplicate folders (project-name, project-name-old, project-name-v2) + +## Proposed Structure + +``` +Projects/ +├── Active/ +│ ├── client-work/ +│ ├── side-projects/ +│ └── learning/ +├── Archive/ +│ ├── 2022/ +│ ├── 2023/ +│ └── 2024/ +└── Templates/ +``` + +## Specific Changes +1. Move 12 projects not touched since 2022 → Archive/ +2. Consolidate 4 duplicate project folders +3. Rename projects to consistent format: "client-name-project-name" +4. Create Archive for old work + +Want me to implement this? +``` + +### Example 4: Organizing Photos by Date + +**User**: "Organize my photo folders by year and month." + +**Output**: Creates structure like: +``` +Photos/ +├── 2023/ +│ ├── 01-January/ +│ ├── 02-February/ +│ └── ... +├── 2024/ +│ ├── 01-January/ +│ └── ... +└── Unsorted/ +``` + +Then moves photos based on EXIF data or file modification dates. + +## Common Organization Tasks + +### Downloads Cleanup +``` +Organize my Downloads folder - move documents to Documents, +images to Pictures, keep installers separate, and archive files +older than 3 months. +``` + +### Project Organization +``` +Review my Projects folder structure and help me separate active +projects from old ones I should archive. +``` + +### Duplicate Removal +``` +Find all duplicate files in my Documents folder and help me +decide which ones to keep. +``` + +### Desktop Cleanup +``` +My Desktop is covered in files. Help me organize everything into +my Documents folder properly. +``` + +### Photo Organization +``` +Organize all photos in this folder by date (year/month) based +on when they were taken. +``` + +### Work/Personal Separation +``` +Help me separate my work files from personal files across my +Documents folder. +``` + +## Pro Tips + +1. **Start Small**: Begin with one messy folder (like Downloads) to build trust +2. **Regular Maintenance**: Run weekly cleanup on Downloads +3. **Consistent Naming**: Use "YYYY-MM-DD - Description" format for important files +4. **Archive Aggressively**: Move old projects to Archive instead of deleting +5. **Keep Active Separate**: Maintain clear boundaries between active and archived work +6. 
**Trust the Process**: Let Claude handle the cognitive load of where things go + +## Best Practices + +### Folder Naming +- Use clear, descriptive names +- Avoid spaces (use hyphens or underscores) +- Be specific: "client-proposals" not "docs" +- Use prefixes for ordering: "01-current", "02-archive" + +### File Naming +- Include dates: "2024-10-17-meeting-notes.md" +- Be descriptive: "q3-financial-report.xlsx" +- Avoid version numbers in names (use version control instead) +- Remove download artifacts: "document-final-v2 (1).pdf" → "document.pdf" + +### When to Archive +- Projects not touched in 6+ months +- Completed work that might be referenced later +- Old versions after migration to new systems +- Files you're hesitant to delete (archive first) + +## Related Use Cases + +- Setting up organization for a new computer +- Preparing files for backup/archiving +- Cleaning up before storage cleanup +- Organizing shared team folders +- Structuring new project directories + diff --git a/.cursor/skills/github-actions-templates/SKILL.md b/.cursor/skills/github-actions-templates/SKILL.md new file mode 100644 index 0000000..691f4bc --- /dev/null +++ b/.cursor/skills/github-actions-templates/SKILL.md @@ -0,0 +1,334 @@ +--- +name: github-actions-templates +description: Create production-ready GitHub Actions workflows for automated testing, building, and deploying applications. Use when setting up CI/CD with GitHub Actions, automating development workflows, or creating reusable workflow templates. +--- + +# GitHub Actions Templates + +Production-ready GitHub Actions workflow patterns for testing, building, and deploying applications. + +## Purpose + +Create efficient, secure GitHub Actions workflows for continuous integration and deployment across various tech stacks. 
+ +## When to Use + +- Automate testing and deployment +- Build Docker images and push to registries +- Deploy to Kubernetes clusters +- Run security scans +- Implement matrix builds for multiple environments + +## Common Workflow Patterns + +### Pattern 1: Test Workflow + +```yaml +name: Test + +on: + push: + branches: [main, develop] + pull_request: + branches: [main] + +jobs: + test: + runs-on: ubuntu-latest + + strategy: + matrix: + node-version: [18.x, 20.x] + + steps: + - uses: actions/checkout@v4 + + - name: Use Node.js ${{ matrix.node-version }} + uses: actions/setup-node@v4 + with: + node-version: ${{ matrix.node-version }} + cache: "npm" + + - name: Install dependencies + run: npm ci + + - name: Run linter + run: npm run lint + + - name: Run tests + run: npm test + + - name: Upload coverage + uses: codecov/codecov-action@v3 + with: + files: ./coverage/lcov.info +``` + +**Reference:** See `assets/test-workflow.yml` + +### Pattern 2: Build and Push Docker Image + +```yaml +name: Build and Push + +on: + push: + branches: [main] + tags: ["v*"] + +env: + REGISTRY: ghcr.io + IMAGE_NAME: ${{ github.repository }} + +jobs: + build: + runs-on: ubuntu-latest + permissions: + contents: read + packages: write + + steps: + - uses: actions/checkout@v4 + + - name: Log in to Container Registry + uses: docker/login-action@v3 + with: + registry: ${{ env.REGISTRY }} + username: ${{ github.actor }} + password: ${{ secrets.GITHUB_TOKEN }} + + - name: Extract metadata + id: meta + uses: docker/metadata-action@v5 + with: + images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }} + tags: | + type=ref,event=branch + type=ref,event=pr + type=semver,pattern={{version}} + type=semver,pattern={{major}}.{{minor}} + + - name: Build and push + uses: docker/build-push-action@v5 + with: + context: . + push: true + tags: ${{ steps.meta.outputs.tags }} + labels: ${{ steps.meta.outputs.labels }} + cache-from: type=gha + cache-to: type=gha,mode=max +``` + +**Reference:** See `assets/deploy-workflow.yml` + +### Pattern 3: Deploy to Kubernetes + +```yaml +name: Deploy to Kubernetes + +on: + push: + branches: [main] + +jobs: + deploy: + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + - name: Configure AWS credentials + uses: aws-actions/configure-aws-credentials@v4 + with: + aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} + aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + aws-region: us-west-2 + + - name: Update kubeconfig + run: | + aws eks update-kubeconfig --name production-cluster --region us-west-2 + + - name: Deploy to Kubernetes + run: | + kubectl apply -f k8s/ + kubectl rollout status deployment/my-app -n production + kubectl get services -n production + + - name: Verify deployment + run: | + kubectl get pods -n production + kubectl describe deployment my-app -n production +``` + +### Pattern 4: Matrix Build + +```yaml +name: Matrix Build + +on: [push, pull_request] + +jobs: + build: + runs-on: ${{ matrix.os }} + + strategy: + matrix: + os: [ubuntu-latest, macos-latest, windows-latest] + python-version: ["3.9", "3.10", "3.11", "3.12"] + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install -r requirements.txt + + - name: Run tests + run: pytest +``` + +**Reference:** See `assets/matrix-build.yml` + +## Workflow Best Practices + +1. **Use specific action versions** (@v4, not @latest) +2. 
**Cache dependencies** to speed up builds +3. **Use secrets** for sensitive data +4. **Implement status checks** on PRs +5. **Use matrix builds** for multi-version testing +6. **Set appropriate permissions** +7. **Use reusable workflows** for common patterns +8. **Implement approval gates** for production +9. **Add notification steps** for failures +10. **Use self-hosted runners** for sensitive workloads + +## Reusable Workflows + +```yaml +# .github/workflows/reusable-test.yml +name: Reusable Test Workflow + +on: + workflow_call: + inputs: + node-version: + required: true + type: string + secrets: + NPM_TOKEN: + required: true + +jobs: + test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-node@v4 + with: + node-version: ${{ inputs.node-version }} + - run: npm ci + - run: npm test +``` + +**Use reusable workflow:** + +```yaml +jobs: + call-test: + uses: ./.github/workflows/reusable-test.yml + with: + node-version: "20.x" + secrets: + NPM_TOKEN: ${{ secrets.NPM_TOKEN }} +``` + +## Security Scanning + +```yaml +name: Security Scan + +on: + push: + branches: [main] + pull_request: + branches: [main] + +jobs: + security: + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + - name: Run Trivy vulnerability scanner + uses: aquasecurity/trivy-action@master + with: + scan-type: "fs" + scan-ref: "." + format: "sarif" + output: "trivy-results.sarif" + + - name: Upload Trivy results to GitHub Security + uses: github/codeql-action/upload-sarif@v2 + with: + sarif_file: "trivy-results.sarif" + + - name: Run Snyk Security Scan + uses: snyk/actions/node@master + env: + SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} +``` + +## Deployment with Approvals + +```yaml +name: Deploy to Production + +on: + push: + tags: ["v*"] + +jobs: + deploy: + runs-on: ubuntu-latest + environment: + name: production + url: https://app.example.com + + steps: + - uses: actions/checkout@v4 + + - name: Deploy application + run: | + echo "Deploying to production..." + # Deployment commands here + + - name: Notify Slack + if: success() + uses: slackapi/slack-github-action@v1 + with: + webhook-url: ${{ secrets.SLACK_WEBHOOK }} + payload: | + { + "text": "Deployment to production completed successfully!" + } +``` + +## Reference Files + +- `assets/test-workflow.yml` - Testing workflow template +- `assets/deploy-workflow.yml` - Deployment workflow template +- `assets/matrix-build.yml` - Matrix build template +- `references/common-workflows.md` - Common workflow patterns + +## Related Skills + +- `gitlab-ci-patterns` - For GitLab CI workflows +- `deployment-pipeline-design` - For pipeline architecture +- `secrets-management` - For secrets handling diff --git a/.cursor/skills/github-issues/SKILL.md b/.cursor/skills/github-issues/SKILL.md new file mode 100644 index 0000000..c2e8865 --- /dev/null +++ b/.cursor/skills/github-issues/SKILL.md @@ -0,0 +1,132 @@ +--- +name: github-issues +description: 'Create, update, and manage GitHub issues using MCP tools. Use this skill when users want to create bug reports, feature requests, or task issues, update existing issues, add labels/assignees/milestones, or manage issue workflows. Triggers on requests like "create an issue", "file a bug", "request a feature", "update issue X", or any GitHub issue management task.' +--- + +# GitHub Issues + +Manage GitHub issues using the `@modelcontextprotocol/server-github` MCP server. 
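+
+The MCP tools below cover the same operations exposed by the `gh` CLI. When the MCP server is not available, a roughly equivalent flow looks like this sketch (the `your-org/your-repo` name, issue number `123`, and the `octocat` assignee are placeholders):
+
+```bash
+# Create a labeled bug report (placeholder repo)
+gh issue create --repo your-org/your-repo \
+  --title "[Bug] Login fails with SSO enabled" \
+  --body "## Description ..." \
+  --label bug
+
+# Inspect, comment on, and update an existing issue
+gh issue view 123 --repo your-org/your-repo
+gh issue comment 123 --repo your-org/your-repo --body "Related to #456"
+gh issue edit 123 --repo your-org/your-repo --add-label high-priority --add-assignee octocat
+```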
+ +## Available MCP Tools + +| Tool | Purpose | +|------|---------| +| `mcp__github__create_issue` | Create new issues | +| `mcp__github__update_issue` | Update existing issues | +| `mcp__github__get_issue` | Fetch issue details | +| `mcp__github__search_issues` | Search issues | +| `mcp__github__add_issue_comment` | Add comments | +| `mcp__github__list_issues` | List repository issues | + +## Workflow + +1. **Determine action**: Create, update, or query? +2. **Gather context**: Get repo info, existing labels, milestones if needed +3. **Structure content**: Use appropriate template from [references/templates.md](references/templates.md) +4. **Execute**: Call the appropriate MCP tool +5. **Confirm**: Report the issue URL to user + +## Creating Issues + +### Required Parameters + +``` +owner: repository owner (org or user) +repo: repository name +title: clear, actionable title +body: structured markdown content +``` + +### Optional Parameters + +``` +labels: ["bug", "enhancement", "documentation", ...] +assignees: ["username1", "username2"] +milestone: milestone number (integer) +``` + +### Title Guidelines + +- Start with type prefix when useful: `[Bug]`, `[Feature]`, `[Docs]` +- Be specific and actionable +- Keep under 72 characters +- Examples: + - `[Bug] Login fails with SSO enabled` + - `[Feature] Add dark mode support` + - `Add unit tests for auth module` + +### Body Structure + +Always use the templates in [references/templates.md](references/templates.md). Choose based on issue type: + +| User Request | Template | +|--------------|----------| +| Bug, error, broken, not working | Bug Report | +| Feature, enhancement, add, new | Feature Request | +| Task, chore, refactor, update | Task | + +## Updating Issues + +Use `mcp__github__update_issue` with: + +``` +owner, repo, issue_number (required) +title, body, state, labels, assignees, milestone (optional - only changed fields) +``` + +State values: `open`, `closed` + +## Examples + +### Example 1: Bug Report + +**User**: "Create a bug issue - the login page crashes when using SSO" + +**Action**: Call `mcp__github__create_issue` with: +```json +{ + "owner": "github", + "repo": "awesome-copilot", + "title": "[Bug] Login page crashes when using SSO", + "body": "## Description\nThe login page crashes when users attempt to authenticate using SSO.\n\n## Steps to Reproduce\n1. Navigate to login page\n2. Click 'Sign in with SSO'\n3. 
Page crashes\n\n## Expected Behavior\nSSO authentication should complete and redirect to dashboard.\n\n## Actual Behavior\nPage becomes unresponsive and displays error.\n\n## Environment\n- Browser: [To be filled]\n- OS: [To be filled]\n\n## Additional Context\nReported by user.", + "labels": ["bug"] +} +``` + +### Example 2: Feature Request + +**User**: "Create a feature request for dark mode with high priority" + +**Action**: Call `mcp__github__create_issue` with: +```json +{ + "owner": "github", + "repo": "awesome-copilot", + "title": "[Feature] Add dark mode support", + "body": "## Summary\nAdd dark mode theme option for improved user experience and accessibility.\n\n## Motivation\n- Reduces eye strain in low-light environments\n- Increasingly expected by users\n- Improves accessibility\n\n## Proposed Solution\nImplement theme toggle with system preference detection.\n\n## Acceptance Criteria\n- [ ] Toggle switch in settings\n- [ ] Persists user preference\n- [ ] Respects system preference by default\n- [ ] All UI components support both themes\n\n## Alternatives Considered\nNone specified.\n\n## Additional Context\nHigh priority request.", + "labels": ["enhancement", "high-priority"] +} +``` + +## Common Labels + +Use these standard labels when applicable: + +| Label | Use For | +|-------|---------| +| `bug` | Something isn't working | +| `enhancement` | New feature or improvement | +| `documentation` | Documentation updates | +| `good first issue` | Good for newcomers | +| `help wanted` | Extra attention needed | +| `question` | Further information requested | +| `wontfix` | Will not be addressed | +| `duplicate` | Already exists | +| `high-priority` | Urgent issues | + +## Tips + +- Always confirm the repository context before creating issues +- Ask for missing critical information rather than guessing +- Link related issues when known: `Related to #123` +- For updates, fetch current issue first to preserve unchanged fields diff --git a/.cursor/skills/github-issues/references/templates.md b/.cursor/skills/github-issues/references/templates.md new file mode 100644 index 0000000..c05b408 --- /dev/null +++ b/.cursor/skills/github-issues/references/templates.md @@ -0,0 +1,90 @@ +# Issue Templates + +Copy and customize these templates for issue bodies. + +## Bug Report Template + +```markdown +## Description +[Clear description of the bug] + +## Steps to Reproduce +1. [First step] +2. [Second step] +3. [And so on...] + +## Expected Behavior +[What should happen] + +## Actual Behavior +[What actually happens] + +## Environment +- Browser: [e.g., Chrome 120] +- OS: [e.g., macOS 14.0] +- Version: [e.g., v1.2.3] + +## Screenshots/Logs +[If applicable] + +## Additional Context +[Any other relevant information] +``` + +## Feature Request Template + +```markdown +## Summary +[One-line description of the feature] + +## Motivation +[Why is this feature needed? What problem does it solve?] + +## Proposed Solution +[How should this feature work?] 
+ +## Acceptance Criteria +- [ ] [Criterion 1] +- [ ] [Criterion 2] +- [ ] [Criterion 3] + +## Alternatives Considered +[Other approaches considered and why they weren't chosen] + +## Additional Context +[Mockups, examples, or related issues] +``` + +## Task Template + +```markdown +## Objective +[What needs to be accomplished] + +## Details +[Detailed description of the work] + +## Checklist +- [ ] [Subtask 1] +- [ ] [Subtask 2] +- [ ] [Subtask 3] + +## Dependencies +[Any blockers or related work] + +## Notes +[Additional context or considerations] +``` + +## Minimal Template + +For simple issues: + +```markdown +## Description +[What and why] + +## Tasks +- [ ] [Task 1] +- [ ] [Task 2] +``` diff --git a/.cursor/skills/github-projects/SKILL.md b/.cursor/skills/github-projects/SKILL.md new file mode 100644 index 0000000..6b42916 --- /dev/null +++ b/.cursor/skills/github-projects/SKILL.md @@ -0,0 +1,224 @@ +--- +name: github-projects +description: GitHub Projects management via gh CLI for creating projects, managing items, fields, and workflows. Use when working with GitHub Projects (v2), adding issues/PRs to projects, creating custom fields, tracking project items, or automating project workflows. Triggers on gh project, project board, kanban, GitHub project, project items. +--- + +# GitHub Projects CLI + +GitHub Projects (v2) management via `gh project` commands. Requires the `project` scope which can be added with `gh auth refresh -s project`. + +## Prerequisites + +Verify authentication includes project scope: + +```bash +gh auth status # Check current scopes +gh auth refresh -s project # Add project scope if missing +``` + +## Quick Reference + +### List & View Projects + +```bash +# List your projects +gh project list + +# List org projects (including closed) +gh project list --owner ORG_NAME --closed + +# View project details +gh project view PROJECT_NUM --owner OWNER + +# Open in browser +gh project view PROJECT_NUM --owner OWNER --web + +# JSON output with jq filtering +gh project list --format json | jq '.projects[] | {number, title}' +``` + +### Create & Edit Projects + +```bash +# Create project +gh project create --owner OWNER --title "Project Title" + +# Edit project +gh project edit PROJECT_NUM --owner OWNER --title "New Title" +gh project edit PROJECT_NUM --owner OWNER --description "New description" +gh project edit PROJECT_NUM --owner OWNER --visibility PUBLIC + +# Close/reopen project +gh project close PROJECT_NUM --owner OWNER +gh project close PROJECT_NUM --owner OWNER --undo # Reopen +``` + +### Link Projects to Repos + +```bash +# Link to repo +gh project link PROJECT_NUM --owner OWNER --repo REPO_NAME + +# Link to team +gh project link PROJECT_NUM --owner ORG --team TEAM_NAME + +# Unlink +gh project unlink PROJECT_NUM --owner OWNER --repo REPO_NAME +``` + +## Project Items + +### Add Existing Issues/PRs + +```bash +# Add issue to project +gh project item-add PROJECT_NUM --owner OWNER --url https://github.com/OWNER/REPO/issues/123 + +# Add PR to project +gh project item-add PROJECT_NUM --owner OWNER --url https://github.com/OWNER/REPO/pull/456 +``` + +### Create Draft Items + +```bash +gh project item-create PROJECT_NUM --owner OWNER --title "Draft item" --body "Description" +``` + +### List Items + +```bash +# List items (default 30) +gh project item-list PROJECT_NUM --owner OWNER + +# List more items +gh project item-list PROJECT_NUM --owner OWNER --limit 100 + +# JSON output +gh project item-list PROJECT_NUM --owner OWNER --format json +``` + +### Edit Items + 
+Items are edited by their ID (obtained from `item-list --format json`). + +```bash +# Edit draft issue title/body +gh project item-edit --id ITEM_ID --title "New Title" --body "New body" + +# Update field value (requires field-id and project-id) +gh project item-edit --id ITEM_ID --project-id PROJECT_ID --field-id FIELD_ID --text "value" +gh project item-edit --id ITEM_ID --project-id PROJECT_ID --field-id FIELD_ID --number 42 +gh project item-edit --id ITEM_ID --project-id PROJECT_ID --field-id FIELD_ID --date "2024-12-31" +gh project item-edit --id ITEM_ID --project-id PROJECT_ID --field-id FIELD_ID --single-select-option-id OPTION_ID +gh project item-edit --id ITEM_ID --project-id PROJECT_ID --field-id FIELD_ID --iteration-id ITER_ID + +# Clear field value +gh project item-edit --id ITEM_ID --project-id PROJECT_ID --field-id FIELD_ID --clear +``` + +### Archive/Delete Items + +```bash +gh project item-archive PROJECT_NUM --owner OWNER --id ITEM_ID +gh project item-delete PROJECT_NUM --owner OWNER --id ITEM_ID +``` + +## Project Fields + +### List Fields + +```bash +gh project field-list PROJECT_NUM --owner OWNER +gh project field-list PROJECT_NUM --owner OWNER --format json +``` + +### Create Fields + +```bash +# Text field +gh project field-create PROJECT_NUM --owner OWNER --name "Notes" --data-type TEXT + +# Number field +gh project field-create PROJECT_NUM --owner OWNER --name "Points" --data-type NUMBER + +# Date field +gh project field-create PROJECT_NUM --owner OWNER --name "Due Date" --data-type DATE + +# Single select with options +gh project field-create PROJECT_NUM --owner OWNER --name "Priority" \ + --data-type SINGLE_SELECT \ + --single-select-options "Low,Medium,High,Critical" +``` + +### Delete Fields + +```bash +gh project field-delete --id FIELD_ID +``` + +## Common Workflows + +### Add Issue and Set Status + +```bash +# 1. Add issue to project +gh project item-add 1 --owner "@me" --url https://github.com/owner/repo/issues/123 + +# 2. Get item ID and field IDs +gh project item-list 1 --owner "@me" --format json | jq '.items[-1]' +gh project field-list 1 --owner "@me" --format json + +# 3. 
Update status field +gh project item-edit --id ITEM_ID --project-id PROJECT_ID \ + --field-id STATUS_FIELD_ID --single-select-option-id OPTION_ID +``` + +### Bulk Add Issues + +```bash +# Add all open issues from a repo +gh issue list --repo owner/repo --state open --json url -q '.[].url' | \ + xargs -I {} gh project item-add 1 --owner "@me" --url {} +``` + +## JSON Output & jq Patterns + +```bash +# Get project IDs +gh project list --format json | jq '.projects[] | {number, id, title}' + +# Get field IDs and options +gh project field-list 1 --owner "@me" --format json | jq '.fields[] | {id, name, options}' + +# Get item IDs with field values +gh project item-list 1 --owner "@me" --format json | jq '.items[] | {id, title, fieldValues}' + +# Filter items by status +gh project item-list 1 --owner "@me" --format json | \ + jq '.items[] | select(.status == "In Progress")' +``` + +## Reference Files + +- **[items.md](references/items.md)**: Item management, editing field values, bulk operations +- **[fields.md](references/fields.md)**: Field types, creating custom fields, option management + +## Command Summary + +| Command | Purpose | +|---------|---------| +| `project list` | List projects | +| `project view` | View project details | +| `project create` | Create new project | +| `project edit` | Modify project settings | +| `project close` | Close/reopen project | +| `project link/unlink` | Connect to repo/team | +| `project item-add` | Add existing issue/PR | +| `project item-create` | Create draft item | +| `project item-list` | List project items | +| `project item-edit` | Update item fields | +| `project item-archive` | Archive item | +| `project item-delete` | Remove item | +| `project field-list` | List project fields | +| `project field-create` | Add custom field | +| `project field-delete` | Remove field | diff --git a/.cursor/skills/github-projects/references/fields.md b/.cursor/skills/github-projects/references/fields.md new file mode 100644 index 0000000..0ba88f9 --- /dev/null +++ b/.cursor/skills/github-projects/references/fields.md @@ -0,0 +1,247 @@ +# GitHub Project Fields Reference + +Detailed reference for managing project fields via `gh project field-*` commands. 
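+
+The examples below assume the project number and owner are already known. A short preamble like this (the values are placeholders) sets them once and confirms the project is reachable before running any field commands:
+
+```bash
+# Placeholder values - adjust to your project
+PROJECT_NUM=1
+OWNER="@me"
+
+# Confirm the `project` scope is present and the project resolves
+gh auth status
+gh project view "$PROJECT_NUM" --owner "$OWNER"
+```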
+ +## Built-in Fields + +Every project includes these system fields: +- **Title** - Item name (from issue/PR title or draft title) +- **Assignees** - Assigned users +- **Status** - Single select workflow status +- **Labels** - Issue/PR labels (read-only in project) +- **Milestone** - Issue/PR milestone (read-only) +- **Repository** - Source repository +- **Reviewers** - PR reviewers + +## Custom Field Types + +| Type | Flag Value | Description | +|------|------------|-------------| +| Text | `TEXT` | Free-form text | +| Number | `NUMBER` | Numeric values | +| Date | `DATE` | Date picker (YYYY-MM-DD) | +| Single Select | `SINGLE_SELECT` | Dropdown with predefined options | +| Iteration | `ITERATION` | Sprint/iteration cycles | + +## Listing Fields + +```bash +gh project field-list PROJECT_NUM --owner OWNER +``` + +Options: +| Flag | Default | Description | +|------|---------|-------------| +| `-L, --limit` | 30 | Max fields to fetch | +| `--format json` | - | JSON output | +| `-q, --jq` | - | jq filter expression | + +### JSON Structure + +```json +{ + "fields": [ + { + "id": "PVTF_xxx", + "name": "Status", + "type": "SINGLE_SELECT", + "options": [ + {"id": "opt1", "name": "Todo"}, + {"id": "opt2", "name": "In Progress"}, + {"id": "opt3", "name": "Done"} + ] + }, + { + "id": "PVTF_yyy", + "name": "Points", + "type": "NUMBER" + } + ] +} +``` + +### Useful jq Filters + +```bash +# List all field names and types +gh project field-list 1 --owner "@me" --format json | \ + jq '.fields[] | {name, type}' + +# Get specific field ID +gh project field-list 1 --owner "@me" --format json | \ + jq -r '.fields[] | select(.name == "Status") | .id' + +# Get single select options +gh project field-list 1 --owner "@me" --format json | \ + jq '.fields[] | select(.type == "SINGLE_SELECT") | {name, options}' + +# Get option ID by name +gh project field-list 1 --owner "@me" --format json | \ + jq -r '.fields[] | select(.name == "Priority") | .options[] | select(.name == "High") | .id' +``` + +## Creating Fields + +### Text Field +```bash +gh project field-create PROJECT_NUM --owner OWNER \ + --name "Notes" \ + --data-type TEXT +``` + +### Number Field +```bash +gh project field-create PROJECT_NUM --owner OWNER \ + --name "Story Points" \ + --data-type NUMBER +``` + +### Date Field +```bash +gh project field-create PROJECT_NUM --owner OWNER \ + --name "Due Date" \ + --data-type DATE +``` + +### Single Select Field + +```bash +gh project field-create PROJECT_NUM --owner OWNER \ + --name "Priority" \ + --data-type SINGLE_SELECT \ + --single-select-options "Low,Medium,High,Critical" +``` + +Options are comma-separated. Field is created with all options. + +## Deleting Fields + +```bash +gh project field-delete --id FIELD_ID +``` + +Get field ID from `field-list --format json`. Deleting removes the field and all its values from items. 
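+
+Because `field-delete` only accepts an ID, a common pattern is to resolve the ID from the field name first. A minimal sketch combining the two commands above (the "Notes" field name and project values are placeholders):
+
+```bash
+# Resolve a field ID by name, then delete it (destructive: all values are lost)
+FIELD_ID=$(gh project field-list 1 --owner "@me" --format json | \
+  jq -r '.fields[] | select(.name == "Notes") | .id')
+
+if [ -n "$FIELD_ID" ]; then
+  gh project field-delete --id "$FIELD_ID"
+else
+  echo "Field not found" >&2
+fi
+```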
+ +## Working with Single Select Options + +### Get Option IDs + +```bash +# Get all options for a field +gh project field-list 1 --owner "@me" --format json | \ + jq '.fields[] | select(.name == "Status") | .options[] | {id, name}' +``` + +### Set Item to Specific Option + +```bash +# Get IDs +PROJECT_ID=$(gh project list --format json | jq -r '.projects[0].id') +FIELD_ID=$(gh project field-list 1 --owner "@me" --format json | \ + jq -r '.fields[] | select(.name == "Status") | .id') +OPTION_ID=$(gh project field-list 1 --owner "@me" --format json | \ + jq -r '.fields[] | select(.name == "Status") | .options[] | select(.name == "Done") | .id') + +# Update item +gh project item-edit --id ITEM_ID \ + --project-id "$PROJECT_ID" \ + --field-id "$FIELD_ID" \ + --single-select-option-id "$OPTION_ID" +``` + +## Working with Iterations + +Iterations are managed via the web UI. The CLI can: +- List iteration field IDs +- Set items to specific iterations + +```bash +# Get iteration field +gh project field-list 1 --owner "@me" --format json | \ + jq '.fields[] | select(.type == "ITERATION")' + +# Set item to iteration +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id ITERATION_FIELD_ID \ + --iteration-id ITERATION_ID +``` + +## Field Patterns + +### Priority Field +```bash +gh project field-create 1 --owner "@me" \ + --name "Priority" \ + --data-type SINGLE_SELECT \ + --single-select-options "P0 - Critical,P1 - High,P2 - Medium,P3 - Low" +``` + +### Effort/Points Field +```bash +gh project field-create 1 --owner "@me" \ + --name "Points" \ + --data-type NUMBER +``` + +### Due Date Field +```bash +gh project field-create 1 --owner "@me" \ + --name "Due Date" \ + --data-type DATE +``` + +### Team Field +```bash +gh project field-create 1 --owner "@me" \ + --name "Team" \ + --data-type SINGLE_SELECT \ + --single-select-options "Frontend,Backend,DevOps,Design" +``` + +## Complete Setup Example + +```bash +#!/bin/bash +# Set up a new project with common fields + +PROJECT_NUM=1 +OWNER="@me" + +# Create Status field (usually exists by default) +# gh project field-create $PROJECT_NUM --owner "$OWNER" \ +# --name "Status" --data-type SINGLE_SELECT \ +# --single-select-options "Backlog,Todo,In Progress,In Review,Done" + +# Priority +gh project field-create $PROJECT_NUM --owner "$OWNER" \ + --name "Priority" --data-type SINGLE_SELECT \ + --single-select-options "P0,P1,P2,P3" + +# Story Points +gh project field-create $PROJECT_NUM --owner "$OWNER" \ + --name "Points" --data-type NUMBER + +# Due Date +gh project field-create $PROJECT_NUM --owner "$OWNER" \ + --name "Due Date" --data-type DATE + +# Team +gh project field-create $PROJECT_NUM --owner "$OWNER" \ + --name "Team" --data-type SINGLE_SELECT \ + --single-select-options "Engineering,Product,Design" + +# Notes +gh project field-create $PROJECT_NUM --owner "$OWNER" \ + --name "Notes" --data-type TEXT + +echo "Project fields configured" +gh project field-list $PROJECT_NUM --owner "$OWNER" +``` + +## Limitations + +- Cannot modify single select options after creation (must delete and recreate field) +- Cannot create iteration fields via CLI (use web UI) +- Some built-in fields are read-only (Labels, Milestone, Repository) +- Field names must be unique within a project diff --git a/.cursor/skills/github-projects/references/items.md b/.cursor/skills/github-projects/references/items.md new file mode 100644 index 0000000..46ff88a --- /dev/null +++ b/.cursor/skills/github-projects/references/items.md @@ -0,0 +1,242 @@ +# GitHub Project 
Items Reference + +Detailed reference for managing project items via `gh project item-*` commands. + +## Item Types + +Projects can contain: +- **Issues** - Added via URL +- **Pull Requests** - Added via URL +- **Draft Issues** - Created directly in project + +## Adding Items + +### Add Issue or PR + +```bash +gh project item-add PROJECT_NUM --owner OWNER --url ISSUE_OR_PR_URL +``` + +Options: +| Flag | Description | +|------|-------------| +| `--url` | URL of issue/PR to add | +| `--owner` | Project owner (`@me` for current user) | +| `--format json` | JSON output with item ID | + +Example: +```bash +# Add and capture item ID +ITEM=$(gh project item-add 1 --owner "@me" \ + --url https://github.com/org/repo/issues/42 \ + --format json | jq -r '.id') +echo "Added item: $ITEM" +``` + +### Create Draft Item + +```bash +gh project item-create PROJECT_NUM --owner OWNER --title "Title" --body "Description" +``` + +Draft items exist only within the project (not linked to any issue). + +## Listing Items + +```bash +gh project item-list PROJECT_NUM --owner OWNER [flags] +``` + +| Flag | Default | Description | +|------|---------|-------------| +| `-L, --limit` | 30 | Max items to fetch | +| `--format json` | - | JSON output | +| `-q, --jq` | - | jq filter expression | + +### JSON Structure + +```json +{ + "items": [ + { + "id": "PVTI_xxx", + "title": "Issue title", + "number": 42, + "type": "ISSUE", + "url": "https://github.com/...", + "status": "In Progress", + "repository": "owner/repo" + } + ] +} +``` + +### Useful jq Filters + +```bash +# Get all item IDs +gh project item-list 1 --owner "@me" --format json | jq -r '.items[].id' + +# Filter by status +gh project item-list 1 --owner "@me" --format json | \ + jq '.items[] | select(.status == "Todo")' + +# Get items with specific label (requires API query for labels) +gh project item-list 1 --owner "@me" --format json | \ + jq '.items[] | select(.type == "ISSUE")' +``` + +## Editing Items + +Items are edited using their ID (from `item-list --format json`). + +### Edit Draft Issue Content + +```bash +gh project item-edit --id ITEM_ID --title "New title" +gh project item-edit --id ITEM_ID --body "New description" +gh project item-edit --id ITEM_ID --title "Title" --body "Body" +``` + +### Edit Field Values + +Requires `--project-id` and `--field-id`. 
Get these from: +```bash +# Get project ID +gh project list --format json | jq '.projects[] | {number, id}' + +# Get field IDs +gh project field-list PROJECT_NUM --owner OWNER --format json | jq '.fields[] | {id, name}' +``` + +#### Text Fields +```bash +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --text "Field value" +``` + +#### Number Fields +```bash +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --number 42 +``` + +#### Date Fields +```bash +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --date "2024-12-31" +``` + +#### Single Select Fields + +Get option IDs first: +```bash +gh project field-list PROJECT_NUM --owner OWNER --format json | \ + jq '.fields[] | select(.name == "Status") | .options' +``` + +Then set: +```bash +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --single-select-option-id OPTION_ID +``` + +#### Iteration Fields +```bash +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --iteration-id ITERATION_ID +``` + +#### Clear Field Value +```bash +gh project item-edit --id ITEM_ID \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --clear +``` + +## Archive & Delete + +### Archive (Hide) +```bash +gh project item-archive PROJECT_NUM --owner OWNER --id ITEM_ID +``` + +Archived items can be restored via the web UI. + +### Delete (Permanent) +```bash +gh project item-delete PROJECT_NUM --owner OWNER --id ITEM_ID +``` + +Removes item from project. For issues/PRs, the underlying item still exists. + +## Bulk Operations + +### Add All Open Issues +```bash +gh issue list --repo owner/repo --state open --json url -q '.[].url' | \ + while read url; do + gh project item-add PROJECT_NUM --owner OWNER --url "$url" + done +``` + +### Add Issues with Label +```bash +gh issue list --repo owner/repo --label "project-x" --json url -q '.[].url' | \ + xargs -I {} gh project item-add PROJECT_NUM --owner OWNER --url {} +``` + +### Update Multiple Items +```bash +# Get item IDs and update each +gh project item-list PROJECT_NUM --owner OWNER --format json | \ + jq -r '.items[] | select(.status == "Todo") | .id' | \ + while read id; do + gh project item-edit --id "$id" \ + --project-id PROJECT_ID \ + --field-id FIELD_ID \ + --single-select-option-id NEW_STATUS_ID + done +``` + +## Complete Workflow Example + +```bash +#!/bin/bash +# Add issue to project and set initial status + +PROJECT_NUM=1 +OWNER="@me" +ISSUE_URL="https://github.com/org/repo/issues/42" + +# 1. Get project ID +PROJECT_ID=$(gh project list --format json | \ + jq -r --arg num "$PROJECT_NUM" '.projects[] | select(.number == ($num | tonumber)) | .id') + +# 2. Get Status field ID and "In Progress" option ID +FIELD_DATA=$(gh project field-list $PROJECT_NUM --owner "$OWNER" --format json) +STATUS_FIELD=$(echo "$FIELD_DATA" | jq -r '.fields[] | select(.name == "Status") | .id') +IN_PROGRESS_ID=$(echo "$FIELD_DATA" | jq -r '.fields[] | select(.name == "Status") | .options[] | select(.name == "In Progress") | .id') + +# 3. Add issue to project +ITEM_ID=$(gh project item-add $PROJECT_NUM --owner "$OWNER" --url "$ISSUE_URL" --format json | jq -r '.id') + +# 4. 
Set status to "In Progress"
+gh project item-edit --id "$ITEM_ID" \
+  --project-id "$PROJECT_ID" \
+  --field-id "$STATUS_FIELD" \
+  --single-select-option-id "$IN_PROGRESS_ID"
+
+echo "Added item $ITEM_ID with status 'In Progress'"
+```
diff --git a/.cursor/skills/github-workflow-automation/SKILL.md b/.cursor/skills/github-workflow-automation/SKILL.md
new file mode 100644
index 0000000..975dfa7
--- /dev/null
+++ b/.cursor/skills/github-workflow-automation/SKILL.md
@@ -0,0 +1,846 @@
+---
+name: github-workflow-automation
+description: "Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows, setting up PR review automation, creating GitHub Actions, or triaging issues."
+---
+
+# 🔧 GitHub Workflow Automation
+
+> Patterns for automating GitHub workflows with AI assistance, inspired by [Gemini CLI](https://github.com/google-gemini/gemini-cli) and modern DevOps practices.
+
+## When to Use This Skill
+
+Use this skill when:
+
+- Automating PR reviews with AI
+- Setting up issue triage automation
+- Creating GitHub Actions workflows
+- Integrating AI into CI/CD pipelines
+- Automating Git operations (rebases, cherry-picks)
+
+---
+
+## 1. Automated PR Review
+
+### 1.1 PR Review Action
+
+```yaml
+# .github/workflows/ai-review.yml
+name: AI Code Review
+
+on:
+  pull_request:
+    types: [opened, synchronize]
+
+jobs:
+  review:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      pull-requests: write
+
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+
+      - name: Get changed files
+        id: changed
+        run: |
+          files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
+          echo "files<<EOF" >> $GITHUB_OUTPUT
+          echo "$files" >> $GITHUB_OUTPUT
+          echo "EOF" >> $GITHUB_OUTPUT
+
+      - name: Get diff
+        id: diff
+        run: |
+          diff=$(git diff origin/${{ github.base_ref }}...HEAD)
+          echo "diff<<EOF" >> $GITHUB_OUTPUT
+          echo "$diff" >> $GITHUB_OUTPUT
+          echo "EOF" >> $GITHUB_OUTPUT
+
+      - name: AI Review
+        uses: actions/github-script@v7
+        with:
+          script: |
+            const { Anthropic } = require('@anthropic-ai/sdk');
+            const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
+
+            const response = await client.messages.create({
+              model: "claude-3-sonnet-20240229",
+              max_tokens: 4096,
+              messages: [{
+                role: "user",
+                content: `Review this PR diff and provide feedback:
+
+                Changed files: ${{ steps.changed.outputs.files }}
+
+                Diff:
+                ${{ steps.diff.outputs.diff }}
+
+                Provide:
+                1. Summary of changes
+                2. Potential issues or bugs
+                3. Suggestions for improvement
+                4. Security concerns if any
+
+                Format as GitHub markdown.`
+              }]
+            });
+
+            await github.rest.pulls.createReview({
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              pull_number: context.issue.number,
+              body: response.content[0].text,
+              event: 'COMMENT'
+            });
+        env:
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+```
+
+### 1.2 Review Comment Patterns
+
+````markdown
+# AI Review Structure
+
+## 📋 Summary
+
+Brief description of what this PR does.
+
+## ✅ What looks good
+
+- Well-structured code
+- Good test coverage
+- Clear naming conventions
+
+## ⚠️ Potential Issues
+
+1. **Line 42**: Possible null pointer exception
+   ```javascript
+   // Current
+   user.profile.name;
+   // Suggested
+   user?.profile?.name ?? "Unknown";
+   ```
+
+2.
**Line 78**: Consider error handling + ```javascript + // Add try-catch or .catch() + ``` + +## 💡 Suggestions + +- Consider extracting the validation logic into a separate function +- Add JSDoc comments for public methods + +## 🔒 Security Notes + +- No sensitive data exposure detected +- API key handling looks correct + +```` + +### 1.3 Focused Reviews + +```yaml +# Review only specific file types +- name: Filter code files + run: | + files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD | \ + grep -E '\.(ts|tsx|js|jsx|py|go)$' || true) + echo "code_files=$files" >> $GITHUB_OUTPUT + +# Review with context +- name: AI Review with context + run: | + # Include relevant context files + context="" + for file in ${{ steps.changed.outputs.files }}; do + if [[ -f "$file" ]]; then + context+="=== $file ===\n$(cat $file)\n\n" + fi + done + + # Send to AI with full file context +```` + +--- + +## 2. Issue Triage Automation + +### 2.1 Auto-label Issues + +```yaml +# .github/workflows/issue-triage.yml +name: Issue Triage + +on: + issues: + types: [opened] + +jobs: + triage: + runs-on: ubuntu-latest + permissions: + issues: write + + steps: + - name: Analyze issue + uses: actions/github-script@v7 + with: + script: | + const issue = context.payload.issue; + + // Call AI to analyze + const analysis = await analyzeIssue(issue.title, issue.body); + + // Apply labels + const labels = []; + + if (analysis.type === 'bug') { + labels.push('bug'); + if (analysis.severity === 'high') labels.push('priority: high'); + } else if (analysis.type === 'feature') { + labels.push('enhancement'); + } else if (analysis.type === 'question') { + labels.push('question'); + } + + if (analysis.area) { + labels.push(`area: ${analysis.area}`); + } + + await github.rest.issues.addLabels({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: issue.number, + labels: labels + }); + + // Add initial response + if (analysis.type === 'bug' && !analysis.hasReproSteps) { + await github.rest.issues.createComment({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: issue.number, + body: `Thanks for reporting this issue! + +To help us investigate, could you please provide: +- Steps to reproduce the issue +- Expected behavior +- Actual behavior +- Environment (OS, version, etc.) + +This will help us resolve your issue faster. 🙏` + }); + } +``` + +### 2.2 Issue Analysis Prompt + +```typescript +const TRIAGE_PROMPT = ` +Analyze this GitHub issue and classify it: + +Title: {title} +Body: {body} + +Return JSON with: +{ + "type": "bug" | "feature" | "question" | "docs" | "other", + "severity": "low" | "medium" | "high" | "critical", + "area": "frontend" | "backend" | "api" | "docs" | "ci" | "other", + "summary": "one-line summary", + "hasReproSteps": boolean, + "isFirstContribution": boolean, + "suggestedLabels": ["label1", "label2"], + "suggestedAssignees": ["username"] // based on area expertise +} +`; +``` + +### 2.3 Stale Issue Management + +```yaml +# .github/workflows/stale.yml +name: Manage Stale Issues + +on: + schedule: + - cron: "0 0 * * *" # Daily + +jobs: + stale: + runs-on: ubuntu-latest + steps: + - uses: actions/stale@v9 + with: + stale-issue-message: | + This issue has been automatically marked as stale because it has not had + recent activity. It will be closed in 14 days if no further activity occurs. + + If this issue is still relevant: + - Add a comment with an update + - Remove the `stale` label + + Thank you for your contributions! 
🙏
+
+          stale-pr-message: |
+            This PR has been automatically marked as stale. Please update it or it
+            will be closed in 14 days.
+
+          days-before-stale: 60
+          days-before-close: 14
+          stale-issue-label: "stale"
+          stale-pr-label: "stale"
+          exempt-issue-labels: "pinned,security,in-progress"
+          exempt-pr-labels: "pinned,security"
+```
+
+---
+
+## 3. CI/CD Integration
+
+### 3.1 Smart Test Selection
+
+```yaml
+# .github/workflows/smart-tests.yml
+name: Smart Test Selection
+
+on:
+  pull_request:
+
+jobs:
+  analyze:
+    runs-on: ubuntu-latest
+    outputs:
+      test_suites: ${{ steps.analyze.outputs.suites }}
+
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+
+      - name: Analyze changes
+        id: analyze
+        run: |
+          # Get changed files
+          changed=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
+
+          # Determine which test suites to run
+          suites="[]"
+
+          if echo "$changed" | grep -q "^src/api/"; then
+            suites=$(echo $suites | jq '. + ["api"]')
+          fi
+
+          if echo "$changed" | grep -q "^src/frontend/"; then
+            suites=$(echo $suites | jq '. + ["frontend"]')
+          fi
+
+          if echo "$changed" | grep -q "^src/database/"; then
+            suites=$(echo $suites | jq '. + ["database", "api"]')
+          fi
+
+          # If nothing specific, run all
+          if [ "$suites" = "[]" ]; then
+            suites='["all"]'
+          fi
+
+          echo "suites=$suites" >> $GITHUB_OUTPUT
+
+  test:
+    needs: analyze
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        suite: ${{ fromJson(needs.analyze.outputs.test_suites) }}
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Run tests
+        run: |
+          if [ "${{ matrix.suite }}" = "all" ]; then
+            npm test
+          else
+            npm test -- --suite ${{ matrix.suite }}
+          fi
+```
+
+### 3.2 Deployment with AI Validation
+
+```yaml
+# .github/workflows/deploy.yml
+name: Deploy with AI Validation
+
+on:
+  push:
+    branches: [main]
+
+jobs:
+  validate:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Get deployment changes
+        id: changes
+        run: |
+          # Get commits since last deployment
+          last_deploy=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
+          if [ -n "$last_deploy" ]; then
+            changes=$(git log --oneline $last_deploy..HEAD)
+          else
+            changes=$(git log --oneline -10)
+          fi
+          echo "changes<<EOF" >> $GITHUB_OUTPUT
+          echo "$changes" >> $GITHUB_OUTPUT
+          echo "EOF" >> $GITHUB_OUTPUT
+
+      - name: AI Risk Assessment
+        id: assess
+        uses: actions/github-script@v7
+        with:
+          script: |
+            // Analyze changes for deployment risk
+            const prompt = `
+            Analyze these changes for deployment risk:
+
+            ${process.env.CHANGES}
+
+            Return JSON:
+            {
+              "riskLevel": "low" | "medium" | "high",
+              "concerns": ["concern1", "concern2"],
+              "recommendations": ["rec1", "rec2"],
+              "requiresManualApproval": boolean
+            }
+            `;
+
+            // Call AI and parse response
+            const analysis = await callAI(prompt);
+
+            if (analysis.riskLevel === 'high') {
+              core.setFailed('High-risk deployment detected. Manual review required.');
+            }
+
+            return analysis;
+        env:
+          CHANGES: ${{ steps.changes.outputs.changes }}
+
+  deploy:
+    needs: validate
+    runs-on: ubuntu-latest
+    environment: production
+    steps:
+      - name: Deploy
+        run: |
+          echo "Deploying to production..."
+ # Deployment commands here +``` + +### 3.3 Rollback Automation + +```yaml +# .github/workflows/rollback.yml +name: Automated Rollback + +on: + workflow_dispatch: + inputs: + reason: + description: "Reason for rollback" + required: true + +jobs: + rollback: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + with: + fetch-depth: 0 + + - name: Find last stable version + id: stable + run: | + # Find last successful deployment + stable=$(git tag -l 'v*' --sort=-version:refname | head -1) + echo "version=$stable" >> $GITHUB_OUTPUT + + - name: Rollback + run: | + git checkout ${{ steps.stable.outputs.version }} + # Deploy stable version + npm run deploy + + - name: Notify team + uses: slackapi/slack-github-action@v1 + with: + payload: | + { + "text": "🔄 Production rolled back to ${{ steps.stable.outputs.version }}", + "blocks": [ + { + "type": "section", + "text": { + "type": "mrkdwn", + "text": "*Rollback executed*\n• Version: `${{ steps.stable.outputs.version }}`\n• Reason: ${{ inputs.reason }}\n• Triggered by: ${{ github.actor }}" + } + } + ] + } +``` + +--- + +## 4. Git Operations + +### 4.1 Automated Rebasing + +```yaml +# .github/workflows/auto-rebase.yml +name: Auto Rebase + +on: + issue_comment: + types: [created] + +jobs: + rebase: + if: github.event.issue.pull_request && contains(github.event.comment.body, '/rebase') + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + with: + fetch-depth: 0 + token: ${{ secrets.GITHUB_TOKEN }} + + - name: Setup Git + run: | + git config user.name "github-actions[bot]" + git config user.email "github-actions[bot]@users.noreply.github.com" + + - name: Rebase PR + run: | + # Fetch PR branch + gh pr checkout ${{ github.event.issue.number }} + + # Rebase onto main + git fetch origin main + git rebase origin/main + + # Force push + git push --force-with-lease + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + + - name: Comment result + uses: actions/github-script@v7 + with: + script: | + github.rest.issues.createComment({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: context.issue.number, + body: '✅ Successfully rebased onto main!' + }) +``` + +### 4.2 Smart Cherry-Pick + +```typescript +// AI-assisted cherry-pick that handles conflicts +async function smartCherryPick(commitHash: string, targetBranch: string) { + // Get commit info + const commitInfo = await exec(`git show ${commitHash} --stat`); + + // Check for potential conflicts + const targetDiff = await exec( + `git diff ${targetBranch}...HEAD -- ${affectedFiles}` + ); + + // AI analysis + const analysis = await ai.analyze(` + I need to cherry-pick this commit to ${targetBranch}: + + ${commitInfo} + + Current state of affected files on ${targetBranch}: + ${targetDiff} + + Will there be conflicts? If so, suggest resolution strategy. 
+  `);
+
+  if (analysis.willConflict) {
+    // Create branch for manual resolution
+    await exec(
+      `git checkout -b cherry-pick-${commitHash.slice(0, 7)} ${targetBranch}`
+    );
+    const result = await exec(`git cherry-pick ${commitHash}`, {
+      allowFail: true,
+    });
+
+    if (result.failed) {
+      // AI-assisted conflict resolution
+      const conflicts = await getConflicts();
+      for (const conflict of conflicts) {
+        const resolution = await ai.resolveConflict(conflict);
+        await applyResolution(conflict.file, resolution);
+      }
+    }
+  } else {
+    await exec(`git checkout ${targetBranch}`);
+    await exec(`git cherry-pick ${commitHash}`);
+  }
+}
+```
+
+### 4.3 Branch Cleanup
+
+```yaml
+# .github/workflows/branch-cleanup.yml
+name: Branch Cleanup
+
+on:
+  schedule:
+    - cron: '0 0 * * 0' # Weekly
+  workflow_dispatch:
+
+jobs:
+  cleanup:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+
+      - name: Find stale branches
+        id: stale
+        run: |
+          # Branches not updated in 30 days
+          stale=$(git for-each-ref --sort=-committerdate refs/remotes/origin \
+            --format='%(refname:short) %(committerdate:relative)' | \
+            grep -E '[3-9][0-9]+ days|[0-9]+ months|[0-9]+ years' | \
+            grep -v 'origin/main\|origin/develop' | \
+            cut -d' ' -f1 | sed 's|origin/||')
+
+          echo "branches<<EOF" >> $GITHUB_OUTPUT
+          echo "$stale" >> $GITHUB_OUTPUT
+          echo "EOF" >> $GITHUB_OUTPUT
+
+      - name: Create cleanup PR
+        if: steps.stale.outputs.branches != ''
+        uses: actions/github-script@v7
+        with:
+          script: |
+            const branches = `${{ steps.stale.outputs.branches }}`.split('\n').filter(Boolean);
+
+            const body = `## 🧹 Stale Branch Cleanup
+
+            The following branches haven't been updated in over 30 days:
+
+            ${branches.map(b => `- \`${b}\``).join('\n')}
+
+            ### Actions:
+            - [ ] Review each branch
+            - [ ] Delete branches that are no longer needed
+            - Comment \`/keep branch-name\` to preserve specific branches
+            `;
+
+            await github.rest.issues.create({
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              title: 'Stale Branch Cleanup',
+              body: body,
+              labels: ['housekeeping']
+            });
+```
+
+---
+
+## 5. On-Demand Assistance
+
+### 5.1 @mention Bot
+
+```yaml
+# .github/workflows/mention-bot.yml
+name: AI Mention Bot
+
+on:
+  issue_comment:
+    types: [created]
+  pull_request_review_comment:
+    types: [created]
+
+jobs:
+  respond:
+    if: contains(github.event.comment.body, '@ai-helper')
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Extract question
+        id: question
+        run: |
+          # Extract text after @ai-helper
+          question=$(echo "${{ github.event.comment.body }}" | sed 's/.*@ai-helper//')
+          echo "question=$question" >> $GITHUB_OUTPUT
+
+      - name: Get context
+        id: context
+        run: |
+          if [ "${{ github.event.issue.pull_request }}" != "" ]; then
+            # It's a PR - get diff
+            gh pr diff ${{ github.event.issue.number }} > context.txt
+          else
+            # It's an issue - get description
+            gh issue view ${{ github.event.issue.number }} --json body -q .body > context.txt
+          fi
+          echo "context=$(cat context.txt)" >> $GITHUB_OUTPUT
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: AI Response
+        uses: actions/github-script@v7
+        with:
+          script: |
+            const response = await ai.chat(`
+            Context: ${process.env.CONTEXT}
+
+            Question: ${process.env.QUESTION}
+
+            Provide a helpful, specific answer. Include code examples if relevant.
+ `); + + await github.rest.issues.createComment({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: context.issue.number, + body: response + }); + env: + CONTEXT: ${{ steps.context.outputs.context }} + QUESTION: ${{ steps.question.outputs.question }} +``` + +### 5.2 Command Patterns + +```markdown +## Available Commands + +| Command | Description | +| :------------------- | :-------------------------- | +| `@ai-helper explain` | Explain the code in this PR | +| `@ai-helper review` | Request AI code review | +| `@ai-helper fix` | Suggest fixes for issues | +| `@ai-helper test` | Generate test cases | +| `@ai-helper docs` | Generate documentation | +| `/rebase` | Rebase PR onto main | +| `/update` | Update PR branch from main | +| `/approve` | Mark as approved by bot | +| `/label bug` | Add 'bug' label | +| `/assign @user` | Assign to user | +``` + +--- + +## 6. Repository Configuration + +### 6.1 CODEOWNERS + +``` +# .github/CODEOWNERS + +# Global owners +* @org/core-team + +# Frontend +/src/frontend/ @org/frontend-team +*.tsx @org/frontend-team +*.css @org/frontend-team + +# Backend +/src/api/ @org/backend-team +/src/database/ @org/backend-team + +# Infrastructure +/.github/ @org/devops-team +/terraform/ @org/devops-team +Dockerfile @org/devops-team + +# Docs +/docs/ @org/docs-team +*.md @org/docs-team + +# Security-sensitive +/src/auth/ @org/security-team +/src/crypto/ @org/security-team +``` + +### 6.2 Branch Protection + +```yaml +# Set up via GitHub API +- name: Configure branch protection + uses: actions/github-script@v7 + with: + script: | + await github.rest.repos.updateBranchProtection({ + owner: context.repo.owner, + repo: context.repo.repo, + branch: 'main', + required_status_checks: { + strict: true, + contexts: ['test', 'lint', 'ai-review'] + }, + enforce_admins: true, + required_pull_request_reviews: { + required_approving_review_count: 1, + require_code_owner_reviews: true, + dismiss_stale_reviews: true + }, + restrictions: null, + required_linear_history: true, + allow_force_pushes: false, + allow_deletions: false + }); +``` + +--- + +## Best Practices + +### Security + +- [ ] Store API keys in GitHub Secrets +- [ ] Use minimal permissions in workflows +- [ ] Validate all inputs +- [ ] Don't expose sensitive data in logs + +### Performance + +- [ ] Cache dependencies +- [ ] Use matrix builds for parallel testing +- [ ] Skip unnecessary jobs with path filters +- [ ] Use self-hosted runners for heavy workloads + +### Reliability + +- [ ] Add timeouts to jobs +- [ ] Handle rate limits gracefully +- [ ] Implement retry logic +- [ ] Have rollback procedures + +--- + +## Resources + +- [Gemini CLI GitHub Action](https://github.com/google-github-actions/run-gemini-cli) +- [GitHub Actions Documentation](https://docs.github.com/en/actions) +- [GitHub REST API](https://docs.github.com/en/rest) +- [CODEOWNERS Syntax](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners) diff --git a/.cursor/skills/gitops-principles-skill/SKILL.md b/.cursor/skills/gitops-principles-skill/SKILL.md new file mode 100644 index 0000000..09fd17e --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/SKILL.md @@ -0,0 +1,442 @@ +--- +name: gitops-principles-skill +description: Comprehensive GitOps methodology and principles skill for cloud-native operations. 
Use when (1) Designing GitOps architecture for Kubernetes deployments, (2) Implementing declarative infrastructure with Git as single source of truth, (3) Setting up continuous deployment pipelines with ArgoCD/Flux/Kargo, (4) Establishing branching strategies and repository structures, (5) Troubleshooting drift, sync failures, or reconciliation issues, (6) Evaluating GitOps tooling decisions, (7) Teaching or explaining GitOps concepts and best practices, (8) Deploying ArgoCD on Azure Arc-enabled Kubernetes or AKS with workload identity. Covers the 4 pillars of GitOps (OpenGitOps), patterns, anti-patterns, tooling ecosystem, Azure Arc integration, and operational guidance. +--- + +# GitOps Principles Skill + +Complete guide for implementing GitOps methodology in Kubernetes environments - the operational framework where **Git is the single source of truth** for declarative infrastructure and applications. + +## What is GitOps? + +GitOps is a set of practices that uses Git repositories as the source of truth for defining the desired state of infrastructure and applications. An automated process ensures the production environment matches the state described in the repository. + +### The OpenGitOps Definition (CNCF) + +GitOps is defined by **four core principles** established by the OpenGitOps project (part of CNCF): + +| Principle | Description | +|-----------|-------------| +| **1. Declarative** | The entire system must be described declaratively | +| **2. Versioned and Immutable** | Desired state is stored in a way that enforces immutability, versioning, and retention | +| **3. Pulled Automatically** | Software agents automatically pull desired state from the source | +| **4. Continuously Reconciled** | Agents continuously observe and attempt to apply desired state | + +## Core Concepts Quick Reference + +### Git as Single Source of Truth + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ GIT REPOSITORY │ +│ (Single Source of Truth for Desired State) │ +├─────────────────────────────────────────────────────────────────┤ +│ manifests/ │ +│ ├── base/ # Base configurations │ +│ │ ├── deployment.yaml │ +│ │ ├── service.yaml │ +│ │ └── kustomization.yaml │ +│ └── overlays/ # Environment-specific │ +│ ├── dev/ │ +│ ├── staging/ │ +│ └── production/ │ +└─────────────────────────────────────────────────────────────────┘ + │ + ▼ Pull (not Push) +┌─────────────────────────────────────────────────────────────────┐ +│ GITOPS CONTROLLER │ +│ (ArgoCD / Flux / Kargo) │ +│ - Continuously watches Git repository │ +│ - Compares desired state vs actual state │ +│ - Reconciles differences automatically │ +└─────────────────────────────────────────────────────────────────┘ + │ + ▼ Apply +┌─────────────────────────────────────────────────────────────────┐ +│ KUBERNETES CLUSTER │ +│ (Actual State / Runtime Environment) │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Push vs Pull Model + +| Push Model (Traditional CI/CD) | Pull Model (GitOps) | +|--------------------------------|---------------------| +| CI system pushes changes to cluster | Agent pulls changes from Git | +| Requires cluster credentials in CI | Credentials stay within cluster | +| Point-in-time deployment | Continuous reconciliation | +| Drift goes undetected | Drift automatically corrected | +| Manual rollback process | Rollback = `git revert` | + +### Key GitOps Benefits + +1. **Auditability**: Git history = deployment history +2. 
**Security**: No external access to cluster required +3. **Reliability**: Automated drift correction +4. **Speed**: Deploy via PR merge +5. **Rollback**: Simple `git revert` +6. **Disaster Recovery**: Redeploy entire cluster from Git + +## Repository Strategies + +### Monorepo vs Polyrepo + +**Monorepo** (Single repository for all environments): + +``` +gitops-repo/ +├── apps/ +│ ├── app-a/ +│ │ ├── base/ +│ │ └── overlays/ +│ │ ├── dev/ +│ │ ├── staging/ +│ │ └── prod/ +│ └── app-b/ +└── infrastructure/ + ├── monitoring/ + └── networking/ +``` + +**Polyrepo** (Separate repositories): + +``` +# Repository per concern +app-a-config/ # App A manifests +app-b-config/ # App B manifests +infrastructure/ # Shared infrastructure +cluster-bootstrap/ # Cluster setup +``` + +### Multi-Repository Pattern (This Project) + +Separates **infrastructure** from **values** for security boundaries: + +``` +infra-team/ # Base configurations, ApplicationSets +├── applications/ # ArgoCD Application definitions +└── helm-base-values/ # Default Helm values + +argo-cd-helm-values/ # Environment-specific overrides +├── dev/ # Development values +├── stg/ # Staging values +└── prd/ # Production values +``` + +**Benefits**: + +- Different access controls per repo +- Separation of concerns +- Environment-specific secrets isolated + +## Branching Strategies + +### Environment Branches + +``` +main ────────────────────────────────────► Production + │ + └──► staging ──────────────────────────► Staging cluster + │ + └──► develop ───────────────────► Development cluster +``` + +### Trunk-Based with Overlays (Recommended) + +``` +main ────────────────────────────────────► All environments + │ + ├── overlays/dev/ → Dev cluster + ├── overlays/staging/ → Staging cluster + └── overlays/prod/ → Prod cluster +``` + +### Release Branches + +``` +main + │ + ├── release/v1.0 ──────► Production (v1.0) + ├── release/v1.1 ──────► Production (v1.1) + └── release/v2.0 ──────► Production (v2.0) +``` + +## Sync Policies and Strategies + +### Automated Sync + +```yaml +syncPolicy: + automated: + prune: true # Delete resources not in Git + selfHeal: true # Revert manual changes +``` + +### Manual Sync (Production Recommended) + +```yaml +syncPolicy: + automated: null # Require explicit sync +``` + +### Sync Options + +| Option | Use Case | +|--------|----------| +| `CreateNamespace=true` | Auto-create missing namespaces | +| `PruneLast=true` | Delete after successful sync | +| `ServerSideApply=true` | Handle large CRDs | +| `ApplyOutOfSyncOnly=true` | Performance optimization | +| `Replace=true` | Force resource replacement | + +## Declarative Configuration Patterns + +### Kustomize Pattern + +```yaml +# base/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - deployment.yaml + - service.yaml + +# overlays/prod/kustomization.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - ../../base +patchesStrategicMerge: + - replica-patch.yaml +images: + - name: myapp + newTag: v1.2.3 +``` + +### Helm Pattern + +```yaml +# Application pointing to Helm chart +spec: + source: + repoURL: https://charts.example.com + chart: my-app + targetRevision: 1.2.3 + helm: + releaseName: my-app + valueFiles: + - values.yaml + - values-prod.yaml +``` + +### Multi-Source Pattern + +```yaml +spec: + sources: + - repoURL: https://charts.bitnami.com/bitnami + chart: nginx + targetRevision: 15.0.0 + helm: + valueFiles: + - $values/nginx/values-prod.yaml + - repoURL: 
https://github.com/org/values.git + targetRevision: main + ref: values +``` + +## Progressive Delivery Integration + +GitOps enables progressive delivery patterns: + +### Blue-Green Deployments + +```yaml +# Two applications, traffic shift via Ingress/Service +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: app-blue +--- +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: app-green +``` + +### Canary with Argo Rollouts + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Rollout +spec: + strategy: + canary: + steps: + - setWeight: 10 + - pause: {duration: 5m} + - setWeight: 50 + - pause: {duration: 10m} +``` + +### Environment Promotion (Kargo) + +``` +Warehouse → Dev Stage → Staging Stage → Production Stage + │ │ │ │ + └── Freight promotion through environments ───┘ +``` + +## Cloud Provider Integration + +### Azure Arc-enabled Kubernetes & AKS + +Azure provides a managed ArgoCD experience through the **Microsoft.ArgoCD** cluster extension: + +```bash +# Simple installation (single node) +az k8s-extension create \ + --resource-group --cluster-name \ + --cluster-type managedClusters \ + --name argocd \ + --extension-type Microsoft.ArgoCD \ + --release-train preview \ + --config deployWithHighAvailability=false + +# Production with workload identity (recommended) +# Use Bicep template - see references/azure-arc-integration.md +``` + +**Key Benefits:** + +| Feature | Description | +|---------|-------------| +| Managed Installation | Azure handles deployment and upgrades | +| Workload Identity | Azure AD authentication without secrets | +| Multi-Cluster | Consistent GitOps across hybrid environments | +| Azure Integration | Native ACR, Key Vault, Azure AD support | + +**Prerequisites:** + +- Azure Arc-connected cluster OR MSI-based AKS cluster +- `Microsoft.KubernetesConfiguration` provider registered +- `k8s-extension` CLI extension installed + +See `references/azure-arc-integration.md` for complete setup guide. + +--- + +## Security Considerations + +### Secrets Management + +**Never store secrets in Git!** Use: + +| Approach | Tool | +|----------|------| +| External Secrets | External Secrets Operator | +| Sealed Secrets | Bitnami Sealed Secrets | +| SOPS | Mozilla SOPS encryption | +| Vault | HashiCorp Vault + CSI | +| Cloud KMS | AWS/Azure/GCP Key Management | + +### RBAC Best Practices + +```yaml +# Limit ArgoCD to specific namespaces +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +spec: + destinations: + - namespace: 'team-a-*' + server: https://kubernetes.default.svc + sourceRepos: + - 'https://github.com/org/team-a-*' +``` + +### Network Policies + +- GitOps controller should be only component with Git access +- Restrict egress from application namespaces +- Use network policies to isolate environments + +## Observability and Debugging + +### Health Status Interpretation + +| Status | Meaning | Action | +|--------|---------|--------| +| Healthy | All resources running | None | +| Progressing | Deployment in progress | Wait | +| Degraded | Health check failed | Investigate | +| Suspended | Manually paused | Resume when ready | +| Missing | Resource not found | Check manifests | + +### Common Issues Checklist + +1. **Sync Failed**: Check YAML syntax, RBAC permissions +2. **OutOfSync**: Compare diff, check ignoreDifferences +3. **Degraded**: Check Pod logs, resource limits +4. 
**Missing**: Verify namespace, check pruning settings + +### Drift Detection + +```bash +# Check application diff +argocd app diff myapp + +# Force refresh from Git +argocd app get myapp --refresh +``` + +## Quick Decision Guide + +### When to Use GitOps + +- Kubernetes-native workloads +- Multiple environments (dev/staging/prod) +- Need audit trail for deployments +- Team collaboration on infrastructure +- Disaster recovery requirements + +### When GitOps May Not Fit + +- Rapidly changing development environments +- Legacy systems without declarative configs +- Real-time configuration changes required +- Single developer, single environment + +## References + +For detailed information, see: + +- `references/core-principles.md` - Deep dive into the 4 pillars +- `references/patterns-and-practices.md` - Branching and repo patterns +- `references/tooling-ecosystem.md` - ArgoCD vs Flux vs Kargo +- `references/anti-patterns.md` - Common mistakes to avoid +- `references/troubleshooting.md` - Debugging guide +- `references/azure-arc-integration.md` - Azure Arc & AKS GitOps setup + +## Templates + +Ready-to-use templates in `templates/`: + +- `application.yaml` - ArgoCD Application example +- `applicationset.yaml` - Multi-cluster deployment +- `kustomization.yaml` - Kustomize overlay structure + +## Scripts + +Utility scripts in `scripts/`: + +- `gitops-health-check.sh` - Validate GitOps setup + +## External Resources + +- [OpenGitOps Principles](https://opengitops.dev/) +- [ArgoCD Documentation](https://argo-cd.readthedocs.io/) +- [Flux Documentation](https://fluxcd.io/docs/) +- [Kargo Documentation](https://docs.kargo.io/) +- [GitOps Working Group](https://github.com/gitops-working-group/gitops-working-group) +- [Azure Arc GitOps with ArgoCD](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/tutorial-use-gitops-argocd) +- [Azure Arc-enabled Kubernetes](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/) diff --git a/.cursor/skills/gitops-principles-skill/references/anti-patterns.md b/.cursor/skills/gitops-principles-skill/references/anti-patterns.md new file mode 100644 index 0000000..848af00 --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/references/anti-patterns.md @@ -0,0 +1,565 @@ +# GitOps Anti-Patterns + +Common mistakes and pitfalls to avoid when implementing GitOps, with guidance on proper practices. 
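+
+Many of the patterns below share a root cause: drift that nobody notices until something breaks. A cheap preventive measure is alerting on `OutOfSync` applications. A minimal sketch using ArgoCD Notifications (assumes the notifications controller is running and a Slack service named `slack` is already configured in the same ConfigMap):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: argocd-notifications-cm
+  namespace: argocd
+data:
+  # Fire whenever an application drifts from the state declared in Git
+  trigger.on-out-of-sync: |
+    - when: app.status.sync.status == 'OutOfSync'
+      send: [app-out-of-sync]
+  template.app-out-of-sync: |
+    message: "Application {{.app.metadata.name}} is OutOfSync (drift detected)."
+```
+
+Applications opt in via an annotation such as `notifications.argoproj.io/subscribe.on-out-of-sync.slack: platform-alerts`.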
+ +## Configuration Anti-Patterns + +### Anti-Pattern 1: Imperative Commands in Production + +**The Problem:** + +```bash +# DON'T DO THIS +kubectl scale deployment nginx --replicas=5 +kubectl set image deployment/nginx nginx=nginx:1.22 +kubectl edit configmap app-config +``` + +**Why It's Bad:** + +- Changes are not tracked in Git +- Drift occurs between Git and cluster +- No audit trail +- Changes lost on next sync + +**The Fix:** + +```yaml +# DO THIS: Update Git, let GitOps sync +# manifests/deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx +spec: + replicas: 5 # Changed from 3 + template: + spec: + containers: + - name: nginx + image: nginx:1.22 # Updated version +``` + +--- + +### Anti-Pattern 2: Mutable Image Tags + +**The Problem:** + +```yaml +# DON'T DO THIS +containers: +- name: app + image: myapp:latest + # or + image: myapp:dev + # or + image: myapp:stable +``` + +**Why It's Bad:** + +- Same tag, different content over time +- No way to track what's actually deployed +- Rollback doesn't work (`:latest` changed) +- Cache issues across nodes + +**The Fix:** + +```yaml +# DO THIS: Use immutable tags or digests +containers: +- name: app + image: myapp:v1.2.3 + # or even better + image: myapp@sha256:abc123def456... +``` + +**Automated Fix with Kargo/Flux:** + +```yaml +# Warehouse subscription +subscriptions: + - image: + repoURL: myregistry/myapp + imageSelectionStrategy: SemVer + constraint: ^1.0.0 +``` + +--- + +### Anti-Pattern 3: Secrets in Git + +**The Problem:** + +```yaml +# DON'T DO THIS - NEVER COMMIT SECRETS +apiVersion: v1 +kind: Secret +metadata: + name: db-credentials +type: Opaque +data: + password: bXlzZWNyZXRwYXNzd29yZA== # base64 != encryption! +``` + +**Why It's Bad:** + +- Git history is forever +- base64 is encoding, not encryption +- Secrets exposed to anyone with repo access +- Compliance violations + +**The Fix:** + +Option 1: External Secrets Operator + +```yaml +apiVersion: external-secrets.io/v1beta1 +kind: ExternalSecret +metadata: + name: db-credentials +spec: + refreshInterval: 1h + secretStoreRef: + kind: ClusterSecretStore + name: azure-keyvault + target: + name: db-credentials + data: + - secretKey: password + remoteRef: + key: database-password +``` + +Option 2: Sealed Secrets + +```yaml +apiVersion: bitnami.com/v1alpha1 +kind: SealedSecret +metadata: + name: db-credentials +spec: + encryptedData: + password: AgBy8hCi... # Actually encrypted +``` + +Option 3: SOPS + +```yaml +# Encrypted with SOPS - safe to commit +password: ENC[AES256_GCM,data:xxx,iv:yyy,tag:zzz,type:str] +``` + +--- + +### Anti-Pattern 4: Hardcoded Environment Values + +**The Problem:** + +```yaml +# DON'T DO THIS - Hardcoded for each environment +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app +spec: + replicas: 3 # What about dev? staging? + template: + spec: + containers: + - name: app + env: + - name: DATABASE_URL + value: "postgres://prod-db:5432/app" # Hardcoded! + resources: + limits: + memory: "2Gi" # Same for all envs? 
+``` + +**The Fix:** + +```yaml +# base/deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app +spec: + replicas: 1 # Overridden per environment + template: + spec: + containers: + - name: app + envFrom: + - configMapRef: + name: app-config + +# overlays/production/kustomization.yaml +replicas: + - name: app + count: 3 + +# overlays/production/configmap.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: app-config +data: + DATABASE_URL: "postgres://prod-db:5432/app" +``` + +--- + +## Workflow Anti-Patterns + +### Anti-Pattern 5: Bypassing Git for "Quick Fixes" + +**The Problem:** + +```bash +# "It's just a quick fix, I'll update Git later" +kubectl apply -f hotfix.yaml +# ... 3 months later, no Git update, forgotten +``` + +**Why It's Bad:** + +- "Later" never comes +- Creates undocumented drift +- Next sync overwrites the fix +- Knowledge lost + +**The Fix:** + +1. **Disable direct kubectl access** to production +2. **Enable self-heal** in ArgoCD: + + ```yaml + syncPolicy: + automated: + selfHeal: true + ``` + +3. **Fast-track PR process** for hotfixes +4. **Emergency runbook** that includes Git steps + +--- + +### Anti-Pattern 6: No Sync Windows + +**The Problem:** + +```yaml +# Automated sync with no restrictions +syncPolicy: + automated: + prune: true + selfHeal: true +# Deploys at 3 AM on Friday before a holiday... +``` + +**Why It's Bad:** + +- Deployments during low-staffing periods +- No change control +- Compliance issues +- Incidents during off-hours + +**The Fix:** + +```yaml +# ArgoCD Project with sync windows +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: production +spec: + syncWindows: + # Allow syncs Monday-Thursday, 9 AM - 5 PM + - kind: allow + schedule: "0 9 * * 1-4" + duration: 8h + applications: ["*"] + # Deny all syncs on weekends + - kind: deny + schedule: "0 0 * * 0,6" + duration: 24h + applications: ["*"] +``` + +--- + +### Anti-Pattern 7: Monolithic Applications + +**The Problem:** + +```yaml +# Single Application for entire platform +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: everything +spec: + source: + path: manifests/ # 500+ resources! +``` + +**Why It's Bad:** + +- Single failure affects everything +- Long sync times +- Difficult to track changes +- No granular rollback +- Complex RBAC + +**The Fix:** + +```yaml +# App of Apps pattern +# Root application +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: platform +spec: + source: + path: apps/ +--- +# Individual applications +# apps/frontend.yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: frontend +spec: + source: + path: manifests/frontend/ +--- +# apps/backend.yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: backend +spec: + source: + path: manifests/backend/ +``` + +--- + +### Anti-Pattern 8: Ignoring Drift Detection + +**The Problem:** + +```yaml +# "OutOfSync is normal, we ignore it" +syncPolicy: + automated: null # No auto-sync + # No alerts configured + # No regular reconciliation +``` + +**Why It's Bad:** + +- Security vulnerabilities unpatched +- Configuration creep +- Disaster recovery compromised +- Git becomes stale + +**The Fix:** + +1. **Configure alerts** for OutOfSync status +2. **Schedule regular syncs** even if manual +3. **Use diff commands** in CI: + + ```bash + argocd app diff myapp --exit-code + if [ $? -ne 0 ]; then + echo "Drift detected!" + # Send alert + fi + ``` + +4. 
**Review OutOfSync apps** weekly + +--- + +## Architecture Anti-Patterns + +### Anti-Pattern 9: Single Repository for All Environments + +**The Problem:** + +``` +monorepo/ +├── prod-secrets.yaml # Production secrets +├── dev-secrets.yaml # Dev secrets +├── manifests/ # Same access for all +``` + +**Why It's Bad:** + +- Everyone with repo access sees production secrets +- No separation of duties +- Compliance violations +- Accidental production changes + +**The Fix:** + +``` +# Separate repositories with different access +infra-config/ # Platform team only +├── applications/ +└── base-values/ + +prod-values/ # Production team + approvals +├── secrets/ +└── values/ + +dev-values/ # Developers +├── secrets/ +└── values/ +``` + +--- + +### Anti-Pattern 10: No Health Checks + +**The Problem:** + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +spec: + # Syncs and reports "Healthy" immediately + # Doesn't wait for pods to be ready +``` + +**Why It's Bad:** + +- Deployment appears successful when it's not +- Rolling updates continue despite failures +- No automatic rollback trigger + +**The Fix:** + +```yaml +# Proper health checks in Deployment +spec: + template: + spec: + containers: + - name: app + readinessProbe: + httpGet: + path: /health + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 10 + livenessProbe: + httpGet: + path: /health + port: 8080 + initialDelaySeconds: 15 + periodSeconds: 20 +``` + +```yaml +# ArgoCD sync with health check +syncPolicy: + automated: + prune: true + selfHeal: true + syncOptions: + - CreateNamespace=true +# ArgoCD will wait for resources to be healthy +``` + +--- + +### Anti-Pattern 11: No Rollback Strategy + +**The Problem:** + +```bash +# On deployment failure: +# "Let me just push another commit to fix it" +# ... 
30 minutes of debugging while production is down +``` + +**Why It's Bad:** + +- Extended downtime +- Panic-driven changes +- More errors from rushed fixes + +**The Fix:** + +**Immediate rollback via Git:** + +```bash +# Option 1: Revert commit +git revert HEAD +git push + +# Option 2: Reset to known good +git reset --hard v1.2.2 +git push --force # If protected, use revert + +# Option 3: ArgoCD CLI +argocd app rollback myapp 2 # Rollback to revision 2 +``` + +**Automated rollback with Argo Rollouts:** + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Rollout +spec: + strategy: + canary: + steps: + - setWeight: 10 + - pause: {duration: 5m} + - analysis: + templates: + - templateName: success-rate + args: + - name: service-name + value: myapp + # Automatic rollback on analysis failure +``` + +--- + +## Anti-Pattern Checklist + +Before deploying, verify: + +- [ ] No `kubectl apply` to production +- [ ] No `:latest` or mutable tags +- [ ] No secrets in Git (plain or base64) +- [ ] Environment-specific values in overlays +- [ ] Sync windows configured for production +- [ ] Applications are granular (not monolithic) +- [ ] Drift alerts configured +- [ ] Health checks defined +- [ ] Rollback procedure documented +- [ ] Repository access properly scoped + +## Summary + +| Anti-Pattern | Impact | Prevention | +|--------------|--------|------------| +| Imperative commands | Drift, no audit | Self-heal, block kubectl | +| Mutable tags | Unknown state | SemVer, digests | +| Secrets in Git | Security breach | External secrets | +| Hardcoded values | Inflexibility | Kustomize overlays | +| Bypassing Git | Lost changes | Self-heal, RBAC | +| No sync windows | Risky deployments | Project policies | +| Monolithic apps | Blast radius | App of Apps | +| Ignoring drift | Security risk | Alerts, audits | +| Single repo | Access issues | Multi-repo pattern | +| No health checks | False success | Probes, sync health | +| No rollback plan | Extended outages | Documented runbook | diff --git a/.cursor/skills/gitops-principles-skill/references/azure-arc-integration.md b/.cursor/skills/gitops-principles-skill/references/azure-arc-integration.md new file mode 100644 index 0000000..ffaa9cd --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/references/azure-arc-integration.md @@ -0,0 +1,487 @@ +# Azure Arc GitOps Integration with ArgoCD + +Comprehensive guide for deploying and managing ArgoCD via Azure Arc-enabled +Kubernetes and Azure Kubernetes Service (AKS) using Azure's managed GitOps +extension. + +## Overview + +Azure provides a managed ArgoCD experience through the **Microsoft.ArgoCD** +cluster extension. This enables GitOps workflows on: + +- **Azure Arc-enabled Kubernetes**: On-premises, multi-cloud, or edge + Kubernetes clusters connected to Azure +- **Azure Kubernetes Service (AKS)**: Azure's managed Kubernetes offering + +### Key Benefits + +| Benefit | Description | +|---------|-------------| +| Managed Installation | Azure handles ArgoCD deployment and upgrades | +| Workload Identity | Native Azure AD integration without managing secrets | +| Multi-Cluster | Consistent GitOps across hybrid environments | +| Azure Integration | Works with Azure Key Vault, ACR, and Azure AD | +| High Availability | Built-in HA mode with 3-node support | + +--- + +## Prerequisites + +### Azure Arc-enabled Kubernetes + +1. Kubernetes cluster connected to Azure Arc: + + ```bash + # Connect cluster to Azure Arc + az connectedk8s connect --name \ + --resource-group + ``` + +2. 
Required permissions: + - `Microsoft.Kubernetes/connectedClusters` (read/write) + - `Microsoft.KubernetesConfiguration/extensions` (read/write) + +### Azure Kubernetes Service (AKS) + +1. MSI-based AKS cluster (not SPN): + + ```bash + # Create MSI-based AKS cluster + az aks create --resource-group --name \ + --enable-managed-identity + + # Convert existing SPN cluster to MSI + az aks update -g -n --enable-managed-identity + ``` + +2. Required permissions: + - `Microsoft.ContainerService/managedClusters` (read/write) + - `Microsoft.KubernetesConfiguration/extensions` (read/write) + +### Common Requirements + +```bash +# Register Azure providers +az provider register --namespace Microsoft.Kubernetes +az provider register --namespace Microsoft.ContainerService +az provider register --namespace Microsoft.KubernetesConfiguration + +# Install CLI extensions +az extension add -n k8s-configuration +az extension add -n k8s-extension + +# Verify registration (wait for 'Registered' state) +az provider show -n Microsoft.KubernetesConfiguration -o table +``` + +--- + +## Network Requirements + +The GitOps agents require outbound access to: + +| Endpoint | Purpose | +|----------|---------| +| `management.azure.com` | Azure Resource Manager communication | +| `.dp.kubernetesconfiguration.azure.com` | Configuration data plane | +| `login.microsoftonline.com` | Azure AD token refresh | +| `mcr.microsoft.com` | Container image pulls | +| Git repository (port 22 or 443) | Source code sync | + +--- + +## Installation Methods + +### Method 1: Simple Installation (Single Node) + +For development or single-node clusters: + +```bash +az k8s-extension create \ + --resource-group \ + --cluster-name \ + --cluster-type managedClusters \ + --name argocd \ + --extension-type Microsoft.ArgoCD \ + --release-train preview \ + --config deployWithHighAvailability=false \ + --config namespaceInstall=false \ + --config "config-maps.argocd-cmd-params-cm.data.application\.namespaces=namespace1,namespace2" +``` + +**Parameters:** + +| Parameter | Description | +|-----------|-------------| +| `deployWithHighAvailability=false` | Single-node deployment | +| `namespaceInstall=false` | Cluster-wide ArgoCD access | +| `application.namespaces` | Namespaces where ArgoCD can detect Applications | + +### Method 2: High Availability Installation (Production) + +For production with 3+ nodes: + +```bash +az k8s-extension create \ + --resource-group \ + --cluster-name \ + --cluster-type managedClusters \ + --name argocd \ + --extension-type Microsoft.ArgoCD \ + --release-train preview \ + --config namespaceInstall=false \ + --config "config-maps.argocd-cmd-params-cm.data.application\.namespaces=default,argocd" +``` + +### Method 3: Namespace-Scoped Installation + +For multi-tenant clusters with isolated ArgoCD instances: + +```bash +az k8s-extension create \ + --resource-group \ + --cluster-name \ + --cluster-type managedClusters \ + --name argocd-team-a \ + --extension-type Microsoft.ArgoCD \ + --release-train preview \ + --config namespaceInstall=true \ + --target-namespace team-a-argocd +``` + +--- + +## Workload Identity Integration (Recommended for Production) + +Workload identity enables Azure AD authentication without managing secrets. 
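+
+Under the hood this uses the standard Azure Workload Identity mechanics: the extension's service accounts carry the managed identity's client ID as an annotation, and pods that use the identity are labelled so the webhook injects a federated token. A rough sketch of what that looks like for the `source-controller` service account (illustrative only; the extension provisions and annotates this itself when `workloadIdentity.enable` is set):
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: source-controller              # matches the federated credential subject used later
+  namespace: argocd
+  annotations:
+    # Client ID of the user-assigned managed identity (illustrative value)
+    azure.workload.identity/client-id: 00000000-0000-0000-0000-000000000000
+```
+
+Pods that consume the identity additionally carry the `azure.workload.identity/use: "true"` label, which is what triggers token injection.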
+ +### Bicep Template + +```bicep +var clusterName = '' +var workloadIdentityClientId = '' +var ssoWorkloadIdentityClientId = '' +var url = 'https:///' + +var oidcConfig = ''' +name: Azure +issuer: https://login.microsoftonline.com//v2.0 +clientID: +azure: + useWorkloadIdentity: true +requestedIDTokenClaims: + groups: + essential: true +requestedScopes: + - openid + - profile + - email +''' + +var defaultPolicy = 'role:readonly' +var policy = ''' +p, role:org-admin, applications, *, */*, allow +p, role:org-admin, clusters, get, *, allow +p, role:org-admin, repositories, get, *, allow +p, role:org-admin, repositories, create, *, allow +p, role:org-admin, repositories, update, *, allow +p, role:org-admin, repositories, delete, *, allow +g, , role:org-admin +''' + +resource cluster 'Microsoft.ContainerService/managedClusters@2024-10-01' existing = { + name: clusterName +} + +resource extension 'Microsoft.KubernetesConfiguration/extensions@2023-05-01' = { + name: 'argocd' + scope: cluster + properties: { + extensionType: 'Microsoft.ArgoCD' + releaseTrain: 'preview' + configurationSettings: { + 'workloadIdentity.enable': 'true' + 'workloadIdentity.clientId': workloadIdentityClientId + 'workloadIdentity.entraSSOClientId': ssoWorkloadIdentityClientId + 'config-maps.argocd-cm.data.oidc\\.config': oidcConfig + 'config-maps.argocd-cm.data.url': url + 'config-maps.argocd-rbac-cm.data.policy\\.default': defaultPolicy + 'config-maps.argocd-rbac-cm.data.policy\\.csv': policy + 'config-maps.argocd-cmd-params-cm.data.application\\.namespaces': 'default, argocd' + } + } +} +``` + +### Deploy with Bicep + +```bash +az deployment group create \ + --resource-group \ + --template-file argocd-extension.bicep +``` + +### Setup Workload Identity Credentials + +1. **Retrieve OIDC issuer URL:** + + ```bash + # For AKS + az aks show -n -g --query "oidcIssuerProfile.issuerUrl" -o tsv + + # For Arc-enabled Kubernetes + az connectedk8s show -n -g --query "oidcIssuerProfile.issuerUrl" -o tsv + ``` + +2. **Create managed identity:** + + ```bash + az identity create --name argocd-identity --resource-group + ``` + +3. **Create federated credential:** + + ```bash + az identity federated-credential create \ + --name argocd-federated \ + --identity-name argocd-identity \ + --resource-group \ + --issuer \ + --subject system:serviceaccount:argocd:source-controller \ + --audience api://AzureADTokenExchange + ``` + +4. 
**Grant ACR permissions (if using Azure Container Registry):** + + ```bash + # For ABAC-enabled registries + az role assignment create \ + --role "Container Registry Repository Reader" \ + --assignee \ + --scope /subscriptions//resourceGroups//providers/Microsoft.ContainerRegistry/registries/ + + # For non-ABAC registries + az role assignment create \ + --role "AcrPull" \ + --assignee \ + --scope /subscriptions//resourceGroups//providers/Microsoft.ContainerRegistry/registries/ + ``` + +--- + +## Accessing ArgoCD UI + +### Option 1: LoadBalancer Service + +```bash +kubectl -n argocd expose service argocd-server \ + --type LoadBalancer \ + --name argocd-server-lb \ + --port 80 \ + --target-port 8080 +``` + +### Option 2: Ingress Controller + +```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: argocd-server + namespace: argocd + annotations: + nginx.ingress.kubernetes.io/ssl-passthrough: "true" + nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" +spec: + ingressClassName: nginx + rules: + - host: argocd.example.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: argocd-server + port: + number: 443 +``` + +### Option 3: Port Forward (Development) + +```bash +kubectl port-forward svc/argocd-server -n argocd 8080:443 +``` + +--- + +## Deploying Applications + +### Example: AKS Store Demo + +```bash +kubectl apply -f - <.azurecr.io/helm + chart: myapp + targetRevision: 1.0.0 + helm: + valueFiles: + - $values/overlays/prod/values.yaml + # Values from Git + - repoURL: https://github.com/org/config.git + targetRevision: main + ref: values + destination: + server: https://kubernetes.default.svc + namespace: myapp +``` + +--- + +## Updating Configuration + +Update ArgoCD configmaps through the extension (not directly via kubectl): + +```bash +az k8s-extension update \ + --resource-group \ + --cluster-name \ + --cluster-type managedClusters \ + --name argocd \ + --config "config-maps.argocd-cm.data.url=https:///auth/callback" +``` + +--- + +## Connecting to Private ACR + +For private Azure Container Registry access with workload identity: + +1. Use workload identity (configured above) +2. 
Add repository in ArgoCD: + + ```bash + argocd repo add .azurecr.io \ + --type helm \ + --name azure-acr \ + --enable-oci + ``` + +--- + +## Deleting the Extension + +```bash +az k8s-extension delete \ + -g \ + -c \ + -n argocd \ + -t managedClusters \ + --yes +``` + +--- + +## Comparison: Azure Extension vs Manual Installation + +| Aspect | Azure Extension | Manual Installation | +|--------|-----------------|---------------------| +| Installation | `az k8s-extension create` | `kubectl apply` or Helm | +| Upgrades | Managed by Azure | Manual | +| Workload Identity | Built-in support | Manual configuration | +| Azure AD SSO | Simplified setup | Complex OIDC config | +| Support | Azure support included | Community support | +| Customization | Limited to extension params | Full control | +| Multi-cluster | Centralized Azure management | Per-cluster management | + +--- + +## Troubleshooting + +### Extension Installation Failed + +```bash +# Check extension status +az k8s-extension show \ + -g -c -t managedClusters \ + -n argocd + +# Check ArgoCD pods +kubectl get pods -n argocd + +# Check extension operator logs +kubectl logs -n azure-arc -l app.kubernetes.io/component=extension-manager +``` + +### Workload Identity Issues + +```bash +# Verify federated credential +az identity federated-credential list \ + --identity-name argocd-identity \ + --resource-group + +# Check service account annotation +kubectl get sa -n argocd source-controller -o yaml +``` + +### Sync Failures + +```bash +# Check ArgoCD application status +argocd app get + +# Check repo server logs +kubectl logs -n argocd -l app.kubernetes.io/name=argocd-repo-server +``` + +--- + +## Best Practices for Azure GitOps + +1. **Use Workload Identity**: Avoid storing secrets; use Azure AD authentication +2. **Private Endpoints**: Use Azure Private Link for ACR and Key Vault +3. **Azure Policy**: Enforce GitOps compliance with Azure Policy +4. **Azure Monitor**: Integrate ArgoCD metrics with Azure Monitor +5. **Separate Environments**: Use different resource groups for dev/staging/prod +6. **RBAC**: Map Azure AD groups to ArgoCD roles + +--- + +## References + +- [Microsoft Learn: GitOps with ArgoCD Tutorial](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/tutorial-use-gitops-argocd) +- [Azure Arc-enabled Kubernetes Documentation](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/) +- [AKS Workload Identity](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster) +- [ArgoCD Azure AD OIDC Configuration](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/user-management/microsoft.md) diff --git a/.cursor/skills/gitops-principles-skill/references/core-principles.md b/.cursor/skills/gitops-principles-skill/references/core-principles.md new file mode 100644 index 0000000..3ce3085 --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/references/core-principles.md @@ -0,0 +1,366 @@ +# GitOps Core Principles + +Deep dive into the four foundational principles of GitOps as defined by the OpenGitOps project (CNCF). + +## The Four Pillars of GitOps + +### Principle 1: Declarative + +> "A system managed by GitOps must have its desired state expressed declaratively." 
+ +#### What This Means + +- **Declarative**: Describe WHAT you want, not HOW to achieve it +- **Imperative (opposite)**: Step-by-step instructions to reach a state + +#### Examples + +**Declarative (GitOps):** + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + spec: + containers: + - name: nginx + image: nginx:1.21 + resources: + limits: + memory: "128Mi" + cpu: "500m" +``` + +**Imperative (NOT GitOps):** + +```bash +kubectl run nginx --image=nginx:1.21 +kubectl scale deployment nginx --replicas=3 +kubectl set resources deployment nginx -c=nginx --limits=memory=128Mi,cpu=500m +``` + +#### Why Declarative Matters + +| Benefit | Description | +|---------|-------------| +| Reproducibility | Same manifest = same result | +| Auditability | Changes tracked in Git | +| Comparison | Easy diff between desired and actual | +| Recovery | Reapply manifest to restore state | + +#### Declarative Tools + +| Tool | Purpose | +|------|---------| +| Kubernetes YAML | Native resource definitions | +| Kustomize | Overlay-based customization | +| Helm | Templated charts | +| Jsonnet | Data templating language | +| CUE | Configuration unification | +| Terraform | Infrastructure as Code | + +--- + +### Principle 2: Versioned and Immutable + +> "Desired state is stored in a way that enforces immutability, versioning, and retains a complete version history." + +#### Git as the Version Control System + +Git provides: + +- **Immutability**: Commits are content-addressed (SHA) +- **Versioning**: Full history of changes +- **Branching**: Parallel development streams +- **Audit trail**: Who changed what, when, why + +#### Version Control Best Practices + +```bash +# Good commit message structure +git commit -m "feat(nginx): increase replicas to 3 for high availability + +- Scaling nginx deployment from 1 to 3 replicas +- Adding pod anti-affinity for distribution +- Tested in staging environment + +Relates to: TICKET-123" +``` + +#### Immutability Patterns + +**Container Images:** + +```yaml +# GOOD: Immutable tag +image: nginx:1.21.6 + +# BAD: Mutable tag +image: nginx:latest +``` + +**Git References:** + +```yaml +# GOOD: Specific commit or tag +targetRevision: v1.2.3 +targetRevision: abc123def + +# RISKY: Branch (mutable) +targetRevision: main +``` + +#### Version History Benefits + +```bash +# View deployment history +git log --oneline manifests/ + +# Compare versions +git diff v1.0.0..v1.1.0 -- manifests/ + +# Find when change was introduced +git bisect start +git bisect bad HEAD +git bisect good v1.0.0 +``` + +--- + +### Principle 3: Pulled Automatically + +> "Software agents automatically pull the desired state declarations from the source." 
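+
+A practical consequence of the pull model: the in-cluster agent is the only workload that ever needs network access to the Git host, so that egress can be denied everywhere else. A hedged sketch of a NetworkPolicy expressing the boundary (selectors and ports are illustrative, and it assumes application namespaces already default-deny egress):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-git-and-dns-egress
+  namespace: argocd                    # only the GitOps controller namespace gets this
+spec:
+  podSelector: {}                      # all pods in the argocd namespace
+  policyTypes:
+    - Egress
+  egress:
+    - to:
+        - podSelector: {}              # keep intra-namespace traffic (repo-server, redis, ...)
+    - to:
+        - namespaceSelector: {}
+          podSelector:
+            matchLabels:
+              k8s-app: kube-dns
+      ports:
+        - protocol: UDP
+          port: 53
+        - protocol: TCP
+          port: 53
+    - to:
+        - ipBlock:
+            cidr: 0.0.0.0/0            # tighten to your Git provider's address ranges
+      ports:
+        - protocol: TCP
+          port: 443                    # Git over HTTPS; add the API server port if it differs
+        - protocol: TCP
+          port: 22                     # Git over SSH, if used
+```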
+ +#### Pull vs Push Model + +``` +┌──────────────────────────────────────────────────────────────────┐ +│ PUSH MODEL (Traditional) │ +├──────────────────────────────────────────────────────────────────┤ +│ │ +│ CI/CD Server │ +│ │ │ +│ │ kubectl apply │ +│ │ (requires cluster credentials) │ +│ ▼ │ +│ Kubernetes Cluster │ +│ │ +│ Issues: │ +│ - Credentials exposed in CI │ +│ - No continuous reconciliation │ +│ - Drift goes undetected │ +│ │ +└──────────────────────────────────────────────────────────────────┘ + +┌──────────────────────────────────────────────────────────────────┐ +│ PULL MODEL (GitOps) │ +├──────────────────────────────────────────────────────────────────┤ +│ │ +│ Git Repository ◄──────────────┐ │ +│ │ │ │ +│ │ (pull/watch) │ Developer pushes │ +│ ▼ │ │ +│ GitOps Controller ────────────┘ │ +│ (inside cluster) │ +│ │ │ +│ │ kubectl apply │ +│ │ (internal credentials) │ +│ ▼ │ +│ Kubernetes Cluster │ +│ │ +│ Benefits: │ +│ - Credentials stay in cluster │ +│ - Continuous reconciliation │ +│ - Automatic drift detection │ +│ │ +└──────────────────────────────────────────────────────────────────┘ +``` + +#### Automatic Pull Mechanisms + +**Polling (Default):** + +```yaml +# ArgoCD application controller settings +spec: + syncPolicy: + automated: {} +# Default poll interval: 3 minutes +``` + +**Webhook (Recommended for Production):** + +```yaml +# Configure webhook for instant updates +apiVersion: v1 +kind: Secret +metadata: + name: argocd-webhook + namespace: argocd +data: + github.secret: +``` + +**Git Generators (ApplicationSets):** + +```yaml +spec: + generators: + - git: + repoURL: https://github.com/org/repo.git + revision: HEAD + directories: + - path: apps/* +``` + +#### Security Benefits of Pull + +| Aspect | Push Model | Pull Model | +|--------|------------|------------| +| Credential Location | CI server | Cluster only | +| Attack Surface | External access required | Internal only | +| Audit | CI logs | Git + controller logs | +| Blast Radius | CI compromise = cluster access | Limited to controller | + +--- + +### Principle 4: Continuously Reconciled + +> "Software agents continuously observe actual system state and attempt to apply the desired state." + +#### The Reconciliation Loop + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ CONTINUOUS RECONCILIATION │ +├─────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌──────────────┐ │ +│ │ Git (Desired)│ │ +│ │ State │ │ +│ └──────┬───────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────┐ ┌──────────────┐ │ +│ │ Compare │◄────│ Observe │ │ +│ │ (Diff) │ │ Actual │ │ +│ └──────┬───────┘ └──────┬───────┘ │ +│ │ │ │ +│ │ │ │ +│ ▼ │ │ +│ ┌──────────────┐ │ │ +│ │ Apply │───────────┘ │ +│ │ Changes │ Repeat continuously │ +│ └──────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +#### Self-Healing Capabilities + +```yaml +spec: + syncPolicy: + automated: + selfHeal: true # Automatically revert manual changes + prune: true # Remove resources not in Git +``` + +**Self-Heal Scenarios:** + +| Manual Change | GitOps Response | +|---------------|-----------------| +| `kubectl scale deploy --replicas=1` | Restored to Git-defined replicas | +| `kubectl delete pod` | Pod recreated (normal K8s) | +| `kubectl edit configmap` | Reverted to Git version | +| `kubectl delete deployment` | Deployment recreated | + +#### Drift Detection + +**Types of Drift:** + +1. **Configuration Drift**: Resource spec differs from Git +2. 
**State Drift**: Resource status unhealthy +3. **Missing Resources**: Resources deleted manually +4. **Extra Resources**: Resources created outside Git + +**Detecting Drift:** + +```bash +# ArgoCD diff command +argocd app diff myapp + +# Flux reconciliation +flux reconcile kustomization myapp --with-source +``` + +#### Reconciliation Intervals + +| Tool | Default Interval | Configurable | +|------|------------------|--------------| +| ArgoCD | 3 minutes | Yes, via `timeout.reconciliation` | +| Flux | 10 minutes | Yes, per Kustomization/HelmRelease | +| Kargo | Event-driven | Webhook-based | + +**Example Configuration:** + +```yaml +# ArgoCD ConfigMap +apiVersion: v1 +kind: ConfigMap +metadata: + name: argocd-cm +data: + timeout.reconciliation: 180s # 3 minutes +``` + +```yaml +# Flux Kustomization +apiVersion: kustomize.toolkit.fluxcd.io/v1 +kind: Kustomization +spec: + interval: 5m # Check every 5 minutes +``` + +--- + +## Putting It All Together + +### The Complete GitOps Workflow + +``` +1. Developer creates PR with DECLARATIVE changes + │ + ▼ +2. Changes reviewed and VERSIONED in Git + │ + ▼ +3. GitOps controller PULLS changes automatically + │ + ▼ +4. Controller CONTINUOUSLY RECONCILES cluster state + │ + ▼ +5. Drift detected? → Auto-heal OR alert +``` + +### Compliance with Principles Checklist + +- [ ] All configurations are YAML/HCL (declarative) +- [ ] All changes go through Git PR (versioned) +- [ ] No `kubectl apply` from laptops (pulled) +- [ ] Self-heal enabled (continuously reconciled) +- [ ] Drift alerts configured (continuously reconciled) + +## References + +- [OpenGitOps Principles](https://opengitops.dev/principles) +- [CNCF GitOps Working Group](https://github.com/cncf/tag-app-delivery/tree/main/gitops-wg) +- [GitOps Principles v1.0.0](https://github.com/open-gitops/documents/blob/main/PRINCIPLES.md) diff --git a/.cursor/skills/gitops-principles-skill/references/patterns-and-practices.md b/.cursor/skills/gitops-principles-skill/references/patterns-and-practices.md new file mode 100644 index 0000000..38b2cc7 --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/references/patterns-and-practices.md @@ -0,0 +1,524 @@ +# GitOps Patterns and Practices + +Comprehensive guide to repository structures, branching strategies, and deployment patterns for GitOps implementations. + +## Repository Structure Patterns + +### Pattern 1: Monorepo + +All applications and environments in a single repository. + +``` +gitops-monorepo/ +├── apps/ +│ ├── frontend/ +│ │ ├── base/ +│ │ │ ├── deployment.yaml +│ │ │ ├── service.yaml +│ │ │ └── kustomization.yaml +│ │ └── overlays/ +│ │ ├── dev/ +│ │ │ ├── kustomization.yaml +│ │ │ └── replica-patch.yaml +│ │ ├── staging/ +│ │ └── production/ +│ ├── backend/ +│ └── database/ +├── infrastructure/ +│ ├── cert-manager/ +│ ├── ingress-nginx/ +│ └── monitoring/ +├── clusters/ +│ ├── dev/ +│ ├── staging/ +│ └── production/ +└── README.md +``` + +**Pros:** + +- Single source of truth +- Easy cross-application changes +- Atomic multi-app deployments +- Simplified tooling + +**Cons:** + +- Large repository over time +- Broad access permissions needed +- CI/CD triggers for all changes +- Potential merge conflicts + +**Best For:** Small-medium teams, tightly coupled applications + +--- + +### Pattern 2: Polyrepo (Multi-Repository) + +Separate repositories per application or concern. 
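+
+Wiring dozens of per-application repositories into the GitOps controller by hand quickly becomes tedious, so polyrepo setups usually rely on a generator to discover them. A sketch using ArgoCD's SCM Provider generator against a layout like the one shown below (organization, filters, and paths are illustrative):
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: ApplicationSet
+metadata:
+  name: team-apps
+  namespace: argocd
+spec:
+  generators:
+    - scmProvider:
+        github:
+          organization: my-org               # illustrative organization
+        filters:
+          - repositoryMatch: ^app-           # only repositories named app-*
+            pathsExist: [k8s/overlays/production]
+  template:
+    metadata:
+      name: '{{repository}}'
+    spec:
+      project: default
+      source:
+        repoURL: '{{url}}'
+        targetRevision: '{{branch}}'
+        path: k8s/overlays/production
+      destination:
+        server: https://kubernetes.default.svc
+        namespace: '{{repository}}'
+```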
+ +``` +# Application repositories +app-frontend/ +├── src/ +├── Dockerfile +└── k8s/ + ├── base/ + └── overlays/ + +app-backend/ +├── src/ +├── Dockerfile +└── k8s/ + +# Infrastructure repository +platform-infrastructure/ +├── cert-manager/ +├── ingress/ +└── monitoring/ + +# Cluster configuration +cluster-config/ +├── dev/ +├── staging/ +└── production/ +``` + +**Pros:** + +- Fine-grained access control +- Independent release cycles +- Smaller, focused repositories +- Team autonomy + +**Cons:** + +- Harder to coordinate changes +- More repositories to manage +- Complex dependency tracking +- Potential version drift + +**Best For:** Large organizations, microservices, multiple teams + +--- + +### Pattern 3: App of Apps (Umbrella Pattern) + +A parent Application manages child Applications. + +```yaml +# Root Application (apps/root-app.yaml) +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: root-app + namespace: argocd +spec: + project: default + source: + repoURL: https://github.com/org/gitops-config.git + targetRevision: HEAD + path: apps + destination: + server: https://kubernetes.default.svc + namespace: argocd +``` + +``` +gitops-config/ +├── apps/ +│ ├── frontend.yaml # Application CR +│ ├── backend.yaml # Application CR +│ ├── monitoring.yaml # Application CR +│ └── kustomization.yaml +└── manifests/ + ├── frontend/ + ├── backend/ + └── monitoring/ +``` + +**Benefits:** + +- Hierarchical organization +- Single sync point +- Environment-specific app sets +- Easy to add/remove apps + +--- + +### Pattern 4: Multi-Repository with Values Separation + +Separates infrastructure definitions from environment-specific values. + +``` +# Repository 1: Infrastructure (infra-team/) +infra-team/ +├── applications/ +│ ├── nginx/ +│ │ ├── applicationset.yaml +│ │ └── base-values.yaml +│ └── prometheus/ +└── applicationsets/ + +# Repository 2: Values (argo-cd-helm-values/) +argo-cd-helm-values/ +├── dev/ +│ ├── nginx/ +│ │ └── values.yaml +│ └── prometheus/ +├── staging/ +└── production/ +``` + +**ApplicationSet with Multi-Source:** + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: nginx +spec: + generators: + - list: + elements: + - cluster: dev + url: https://dev-cluster + - cluster: prod + url: https://prod-cluster + template: + spec: + sources: + - repoURL: https://charts.bitnami.com/bitnami + chart: nginx + targetRevision: 15.0.0 + helm: + valueFiles: + - $values/{{cluster}}/nginx/values.yaml + - repoURL: https://github.com/org/argo-cd-helm-values.git + targetRevision: main + ref: values +``` + +**Benefits:** + +- Security boundary between repos +- Different RBAC per environment +- Separation of concerns +- Easier secret management + +--- + +## Branching Strategies + +### Strategy 1: Environment Branches + +``` +main ──────────────────────────────────────► Production + │ + └── staging ─────────────────────────────► Staging + │ + └── develop ───────────────────────► Development +``` + +**Workflow:** + +1. Develop on `develop` branch +2. Merge to `staging` for testing +3. Merge to `main` for production + +**Pros:** Clear environment mapping +**Cons:** Merge conflicts, branch maintenance + +--- + +### Strategy 2: Trunk-Based with Directory Overlays + +``` +main (single branch) +├── base/ # Shared configuration +├── overlays/ +│ ├── dev/ # Dev-specific patches +│ ├── staging/ # Staging-specific patches +│ └── production/ # Prod-specific patches +``` + +**Workflow:** + +1. All changes go to `main` +2. 
Kustomize overlays handle environment differences +3. GitOps controller watches specific paths + +**Pros:** Simple, fewer branches, atomic changes +**Cons:** Requires good overlay discipline + +--- + +### Strategy 3: Release Branches + +``` +main + │ + ├── release/v1.0.0 ──► Production (v1.0) + ├── release/v1.1.0 ──► Production (v1.1) + └── release/v2.0.0 ──► Production (v2.0) +``` + +**Workflow:** + +1. Develop on `main` +2. Create release branch for deployment +3. Hotfixes on release branches +4. Cherry-pick to main + +**Pros:** Clear versioning, rollback by switching branches +**Cons:** Branch proliferation, merge complexity + +--- + +### Strategy 4: GitFlow for GitOps + +``` +main ────────────────────────────────────────► Production + ▲ + │ + │ merge + │ +develop ─────────────────────────────────────► Development + ▲ ▲ + │ │ +feature/ release/ +branches branches +``` + +**Best For:** Complex release processes, multiple parallel versions + +--- + +## Deployment Patterns + +### Progressive Delivery Patterns + +#### Blue-Green Deployment + +```yaml +# Blue deployment (current production) +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: myapp-blue +spec: + source: + path: manifests/blue + destination: + namespace: myapp-blue + +--- +# Green deployment (new version) +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: myapp-green +spec: + source: + path: manifests/green + destination: + namespace: myapp-green +``` + +**Traffic Switch:** + +```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: myapp +spec: + rules: + - host: myapp.example.com + http: + paths: + - path: / + backend: + service: + name: myapp-green # Switch from blue to green + port: + number: 80 +``` + +#### Canary Deployment + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Rollout +metadata: + name: myapp +spec: + replicas: 10 + strategy: + canary: + steps: + - setWeight: 10 # 10% traffic to canary + - pause: {duration: 5m} + - setWeight: 30 + - pause: {duration: 10m} + - setWeight: 50 + - pause: {duration: 10m} + trafficRouting: + nginx: + stableIngress: myapp-stable +``` + +#### Wave-Based Deployment + +```yaml +# Wave 1: Infrastructure +metadata: + annotations: + argocd.argoproj.io/sync-wave: "-1" + +# Wave 2: Database migrations +metadata: + annotations: + argocd.argoproj.io/sync-wave: "0" + +# Wave 3: Application +metadata: + annotations: + argocd.argoproj.io/sync-wave: "1" + +# Wave 4: Post-deployment jobs +metadata: + annotations: + argocd.argoproj.io/sync-wave: "2" +``` + +--- + +### Multi-Cluster Patterns + +#### Hub and Spoke + +``` + ┌─────────────────┐ + │ Hub Cluster │ + │ (ArgoCD) │ + └────────┬────────┘ + │ + ┌────────────────┼────────────────┐ + │ │ │ + ▼ ▼ ▼ + ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ Spoke 1 │ │ Spoke 2 │ │ Spoke 3 │ + │ (Dev) │ │ (Staging)│ │ (Prod) │ + └───────────┘ └───────────┘ └───────────┘ +``` + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: myapp + namespace: argocd +spec: + generators: + - clusters: + selector: + matchLabels: + environment: production + template: + spec: + destination: + server: '{{server}}' + namespace: myapp +``` + +#### Pull-Based Multi-Cluster + +Each cluster has its own GitOps controller: + +``` +Git Repository + │ + ├──────────────────┬──────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌───────────┐ ┌───────────┐ ┌───────────┐ +│ Cluster 1 │ │ Cluster 2 │ │ Cluster 3 │ +│ ArgoCD │ │ Flux │ │ ArgoCD │ +└───────────┘ └───────────┘ └───────────┘ +``` + +--- + +### 
Environment Promotion Pattern + +``` +┌──────────────┐ ┌──────────────┐ ┌──────────────┐ +│ Dev │────►│ Staging │────►│ Production │ +│ │ │ │ │ │ +│ Auto-deploy │ │ Auto-deploy │ │ Manual gate │ +│ from main │ │ after dev │ │ approval │ +└──────────────┘ └──────────────┘ └──────────────┘ +``` + +**Kargo Implementation:** + +```yaml +apiVersion: kargo.akuity.io/v1alpha1 +kind: Stage +metadata: + name: production +spec: + requestedFreight: + - origin: + kind: Warehouse + name: main-warehouse + sources: + stages: + - staging + requiredSoakTime: 24h # Must be stable in staging for 24h +``` + +--- + +## Best Practices Summary + +### Repository Best Practices + +| Practice | Description | +|----------|-------------| +| README in every directory | Document purpose and ownership | +| CODEOWNERS file | Define approval requirements | +| Branch protection | Require PR reviews | +| Semantic versioning | For releases and tags | +| Gitignore | Exclude generated files | + +### Branching Best Practices + +| Practice | Description | +|----------|-------------| +| Keep main deployable | Always production-ready | +| Short-lived branches | Reduce merge conflicts | +| Descriptive branch names | `feature/add-redis`, `fix/memory-leak` | +| Squash on merge | Clean history | + +### Deployment Best Practices + +| Practice | Description | +|----------|-------------| +| Progressive rollouts | Never deploy 100% immediately | +| Automated rollback | On health check failure | +| Sync windows | Control when deployments happen | +| Resource quotas | Prevent runaway deployments | + +## Quick Reference + +### Choose Your Pattern + +| Scenario | Recommended Pattern | +|----------|-------------------| +| Small team, few apps | Monorepo + trunk-based | +| Large org, many teams | Polyrepo + App of Apps | +| Strict compliance | Multi-repo with values separation | +| Rapid iteration | Trunk-based + overlays | +| Complex releases | GitFlow or release branches | diff --git a/.cursor/skills/gitops-principles-skill/references/tooling-ecosystem.md b/.cursor/skills/gitops-principles-skill/references/tooling-ecosystem.md new file mode 100644 index 0000000..4fc274e --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/references/tooling-ecosystem.md @@ -0,0 +1,510 @@ +# GitOps Tooling Ecosystem + +Comprehensive comparison of GitOps tools, their architectures, and use cases. 
+ +## Tool Comparison Overview + +| Feature | ArgoCD | Flux | Kargo | +|---------|--------|------|-------| +| **Primary Focus** | Application deployment | Full GitOps toolkit | Progressive delivery | +| **UI** | Built-in web UI | Third-party (Weave GitOps) | Built-in web UI | +| **Multi-tenancy** | Projects, RBAC | Namespaced controllers | Projects, RBAC | +| **Helm Support** | Native | Native | Via ArgoCD integration | +| **Kustomize Support** | Native | Native | Via ArgoCD integration | +| **Multi-Cluster** | Centralized hub | Agent per cluster | Centralized | +| **GitOps Model** | Pull | Pull | Pull + Promotion | +| **CNCF Status** | Graduated | Graduated | Incubating | +| **Best For** | Visibility, multi-cluster | Lightweight, automation | Environment promotion | + +--- + +## ArgoCD + +### Architecture + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ ArgoCD Components │ +├─────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ API Server │◄──►│ Web UI │ │ +│ │ (gRPC/REST) │ │ │ │ +│ └────────┬─────────┘ └──────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Repo Server │ │ Redis │ │ +│ │ (Git operations) │ │ (Caching) │ │ +│ └────────┬─────────┘ └──────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Application │ │ Dex │ │ +│ │ Controller │ │ (SSO/OIDC) │ │ +│ │ (Reconciliation) │ │ │ │ +│ └──────────────────┘ └──────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Key Features + +**Application CRD:** + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: myapp + namespace: argocd +spec: + project: default + source: + repoURL: https://github.com/org/repo.git + targetRevision: HEAD + path: manifests + destination: + server: https://kubernetes.default.svc + namespace: myapp + syncPolicy: + automated: + prune: true + selfHeal: true +``` + +**ApplicationSet (Multi-App Generation):** + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: cluster-apps +spec: + generators: + - clusters: {} # All registered clusters + template: + spec: + destination: + server: '{{server}}' +``` + +### Strengths + +- Rich web UI with visualization +- Multi-cluster management from single control plane +- SSO integration (OIDC, SAML, LDAP) +- ApplicationSets for templated deployments +- Extensive sync options and hooks +- Large community and ecosystem + +### Considerations + +- Resource-intensive for large deployments +- Single point of failure (hub cluster) +- Requires dedicated namespace + +### Installation + +**Standard Installation:** + +```bash +kubectl create namespace argocd +kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml +``` + +**Azure Arc / AKS Managed Extension:** + +```bash +# Register providers +az provider register --namespace Microsoft.KubernetesConfiguration +az extension add -n k8s-extension + +# Install ArgoCD as managed extension +az k8s-extension create \ + --resource-group --cluster-name \ + --cluster-type managedClusters \ + --name argocd \ + --extension-type Microsoft.ArgoCD \ + --release-train preview \ + --config deployWithHighAvailability=false +``` + +Benefits of Azure managed extension: + +- Managed upgrades and maintenance +- Native Azure AD workload identity integration +- Consistent multi-cluster management via Azure Arc +- See 
`azure-arc-integration.md` for complete guide + +--- + +## Flux + +### Architecture + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Flux Components │ +├─────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Source Controller│ │ Kustomize │ │ +│ │ (Git, Helm, OCI) │ │ Controller │ │ +│ └────────┬─────────┘ └────────┬─────────┘ │ +│ │ │ │ +│ └───────────┬───────────┘ │ +│ ▼ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Helm Controller │ │ Notification │ │ +│ │ │ │ Controller │ │ +│ └──────────────────┘ └──────────────────┘ │ +│ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Image Automation │ │ Image Reflector │ │ +│ │ Controller │ │ Controller │ │ +│ └──────────────────┘ └──────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Key Features + +**GitRepository Source:** + +```yaml +apiVersion: source.toolkit.fluxcd.io/v1 +kind: GitRepository +metadata: + name: myapp + namespace: flux-system +spec: + interval: 1m + url: https://github.com/org/repo.git + ref: + branch: main + secretRef: + name: git-credentials +``` + +**Kustomization (Flux CRD, not Kustomize):** + +```yaml +apiVersion: kustomize.toolkit.fluxcd.io/v1 +kind: Kustomization +metadata: + name: myapp + namespace: flux-system +spec: + interval: 10m + sourceRef: + kind: GitRepository + name: myapp + path: ./manifests + prune: true + healthChecks: + - apiVersion: apps/v1 + kind: Deployment + name: myapp + namespace: default +``` + +**HelmRelease:** + +```yaml +apiVersion: helm.toolkit.fluxcd.io/v2beta1 +kind: HelmRelease +metadata: + name: nginx + namespace: default +spec: + interval: 5m + chart: + spec: + chart: nginx + version: '15.x' + sourceRef: + kind: HelmRepository + name: bitnami + namespace: flux-system + values: + replicaCount: 2 +``` + +### Strengths + +- Lightweight, modular architecture +- No single point of failure +- Native image automation (update manifests on new image) +- OCI registry support for storing configs +- Multi-tenancy via namespaces +- Terraform Controller integration + +### Considerations + +- No built-in UI (requires Weave GitOps or similar) +- Steeper learning curve for CRD relationships +- Each cluster needs its own Flux instance + +### Installation + +```bash +flux bootstrap github \ + --owner=my-org \ + --repository=fleet-infra \ + --branch=main \ + --path=clusters/my-cluster \ + --personal +``` + +--- + +## Kargo + +### Architecture + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Kargo Components │ +├─────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Warehouse │ │ Stages │ │ +│ │ (Artifact │ │ (Promotion │ │ +│ │ Discovery) │ │ Targets) │ │ +│ └────────┬─────────┘ └────────┬─────────┘ │ +│ │ │ │ +│ └───────────┬───────────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────────────────────────────────┐ │ +│ │ Freight │ │ +│ │ (Versioned artifact collection) │ │ +│ └──────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────────────────────────────────┐ │ +│ │ Promotions │ │ +│ │ (Move Freight through Stages) │ │ +│ └──────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Key Features + +**Warehouse (Artifact Discovery):** + +```yaml +apiVersion: kargo.akuity.io/v1alpha1 +kind: Warehouse +metadata: + name: main-warehouse + namespace: myproject +spec: + 
subscriptions: + - image: + repoURL: ghcr.io/org/myapp + imageSelectionStrategy: SemVer + - git: + repoURL: https://github.com/org/config.git + branch: main +``` + +**Stage (Promotion Target):** + +```yaml +apiVersion: kargo.akuity.io/v1alpha1 +kind: Stage +metadata: + name: staging + namespace: myproject +spec: + requestedFreight: + - origin: + kind: Warehouse + name: main-warehouse + sources: + direct: true + promotionTemplate: + spec: + steps: + - uses: git-clone + - uses: kustomize-set-image + - uses: git-commit + - uses: git-push + - uses: argocd-update +``` + +### Strengths + +- Purpose-built for environment promotion +- Coordinates image + config updates +- Verification (testing) between stages +- Soak time requirements +- Works alongside ArgoCD +- Progressive delivery native + +### Considerations + +- Newer project (less mature) +- Requires ArgoCD or Flux for actual deployment +- Additional complexity layer + +### Installation + +```bash +helm install kargo \ + oci://ghcr.io/akuity/kargo-charts/kargo \ + --namespace kargo \ + --create-namespace +``` + +--- + +## Tool Selection Guide + +### Decision Matrix + +| Requirement | Best Tool | +|-------------|-----------| +| Need rich UI for developers | ArgoCD | +| Lightweight, minimal footprint | Flux | +| Multi-cluster from single pane | ArgoCD | +| Image automation (auto-update on new image) | Flux | +| Environment promotion workflows | Kargo | +| SSO/OIDC integration | ArgoCD | +| GitOps for Terraform | Flux (TF Controller) | +| Large enterprise with compliance | ArgoCD + Kargo | + +### Architecture Recommendations + +**Small Team (< 5 developers):** + +``` +Single cluster + │ + └── Flux or ArgoCD (standalone) +``` + +**Medium Team (5-20 developers):** + +``` +ArgoCD (hub cluster) + │ + ├── Dev cluster + ├── Staging cluster + └── Prod cluster +``` + +**Large Organization (20+ developers):** + +``` +Kargo + ArgoCD + │ + ├── ArgoCD manages deployments + └── Kargo manages promotions + │ + ├── Dev stages + ├── Staging stages (with verification) + └── Prod stages (with approval gates) +``` + +--- + +## Complementary Tools + +### Secrets Management + +| Tool | Integration | +|------|-------------| +| External Secrets Operator | ArgoCD, Flux | +| Sealed Secrets | ArgoCD, Flux | +| SOPS | Flux native, ArgoCD plugin | +| HashiCorp Vault | Both via CSI or injector | + +### Progressive Delivery + +| Tool | Use Case | +|------|----------| +| Argo Rollouts | Canary, Blue-Green | +| Flagger | Works with Flux | +| Kargo | Multi-environment promotion | + +### Policy Enforcement + +| Tool | Purpose | +|------|---------| +| Kyverno | Kubernetes-native policies | +| OPA Gatekeeper | Rego-based policies | +| Datree | Pre-commit validation | + +### Observability + +| Tool | Purpose | +|------|---------| +| Prometheus | Metrics | +| Grafana | Dashboards | +| ArgoCD Notifications | Alerts | +| Flux Notification Controller | Alerts | + +--- + +## Migration Paths + +### From Helm/kubectl to ArgoCD + +1. Export existing Helm releases as values files +2. Create ArgoCD Applications pointing to charts +3. Disable Helm Tiller/manual deployments +4. Enable automated sync + +### From ArgoCD to Flux + +1. Export Applications as Flux Kustomizations +2. Deploy Flux controllers +3. Migrate repo credentials +4. Decommission ArgoCD + +### Adding Kargo to Existing ArgoCD + +1. Install Kargo alongside ArgoCD +2. Create Warehouses for artifact sources +3. Create Stages matching your environments +4. 
Add `kargo.akuity.io/authorized-stage` annotations to ArgoCD Applications +5. Define promotion templates + +--- + +## Quick CLI Reference + +### ArgoCD + +```bash +argocd app list +argocd app sync myapp +argocd app get myapp +argocd app diff myapp +argocd app history myapp +argocd app rollback myapp 2 +``` + +### Flux + +```bash +flux get kustomizations +flux reconcile kustomization myapp +flux get sources git +flux logs --kind=Kustomization --name=myapp +flux suspend kustomization myapp +flux resume kustomization myapp +``` + +### Kargo + +```bash +kargo get stages --project myproject +kargo get freight --project myproject +kargo promote --project myproject --freight --stage prod +kargo approve --project myproject --freight --stage prod +``` diff --git a/.cursor/skills/gitops-principles-skill/references/troubleshooting.md b/.cursor/skills/gitops-principles-skill/references/troubleshooting.md new file mode 100644 index 0000000..6fd0455 --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/references/troubleshooting.md @@ -0,0 +1,545 @@ +# GitOps Troubleshooting Guide + +Comprehensive debugging guide for common GitOps issues with ArgoCD, Flux, and Kubernetes. + +## Quick Diagnostic Commands + +### ArgoCD Quick Checks + +```bash +# Application status overview +argocd app list + +# Detailed app info +argocd app get myapp + +# Show diff between Git and cluster +argocd app diff myapp + +# Force refresh from Git +argocd app get myapp --refresh + +# View sync history +argocd app history myapp + +# Check ArgoCD components health +kubectl get pods -n argocd +``` + +### Flux Quick Checks + +```bash +# Overall Flux status +flux check + +# Kustomization status +flux get kustomizations -A + +# Source status +flux get sources git -A + +# Reconcile immediately +flux reconcile kustomization myapp --with-source + +# View logs +flux logs --kind=Kustomization --name=myapp +``` + +### Kubernetes Quick Checks + +```bash +# Pod status +kubectl get pods -n myapp + +# Recent events +kubectl get events -n myapp --sort-by='.lastTimestamp' + +# Describe problematic resource +kubectl describe deployment myapp -n myapp + +# Pod logs +kubectl logs -l app=myapp -n myapp --tail=100 +``` + +--- + +## Common Issues and Solutions + +### Issue 1: Application Stuck in "OutOfSync" + +**Symptoms:** + +- Application shows `OutOfSync` status +- Sync button doesn't resolve the issue +- Diff shows unexpected differences + +**Diagnostic Steps:** + +```bash +# Step 1: View the diff +argocd app diff myapp + +# Step 2: Check for ignored differences +argocd app get myapp -o yaml | grep -A 20 ignoreDifferences + +# Step 3: Force a hard refresh +argocd app get myapp --hard-refresh +``` + +**Common Causes and Fixes:** + +**Cause A: Server-side modifications (mutating webhooks, controllers)** + +```yaml +# Fix: Add ignoreDifferences to Application +spec: + ignoreDifferences: + - group: apps + kind: Deployment + jsonPointers: + - /spec/replicas # Ignored by HPA + - group: "" + kind: Service + jsonPointers: + - /spec/clusterIP # Auto-assigned +``` + +**Cause B: Defaulting by Kubernetes API** + +```yaml +# Fix: Use Server-Side Apply +spec: + syncPolicy: + syncOptions: + - ServerSideApply=true +``` + +**Cause C: Resource created outside Git** + +```bash +# Identify extra resources +argocd app resources myapp + +# Either: +# 1. Add to Git +# 2. Enable pruning +# 3. 
Add to exclude patterns +``` + +--- + +### Issue 2: Sync Failed + +**Symptoms:** + +- Application shows `Sync Failed` +- Error message in sync operation + +**Diagnostic Steps:** + +```bash +# Step 1: Get sync status details +argocd app get myapp + +# Step 2: View sync operation result +argocd app sync myapp --dry-run + +# Step 3: Check ArgoCD controller logs +kubectl logs -n argocd -l app.kubernetes.io/name=argocd-application-controller +``` + +**Common Causes and Fixes:** + +**Cause A: Invalid YAML/Manifest** + +```bash +# Validate manifests locally +kustomize build ./overlays/prod | kubectl apply --dry-run=client -f - + +# Or for Helm +helm template myrelease ./chart --validate +``` + +**Cause B: RBAC Permissions** + +```bash +# Check ArgoCD service account permissions +kubectl auth can-i create deployments --as=system:serviceaccount:argocd:argocd-application-controller -n myapp +``` + +```yaml +# Fix: Add namespace to AppProject destinations +spec: + destinations: + - namespace: myapp + server: https://kubernetes.default.svc +``` + +**Cause C: Resource Conflict** + +```bash +# Check if resource exists with different manager +kubectl get deployment myapp -n myapp -o yaml | grep -A 5 managedFields +``` + +```yaml +# Fix: Force replace +spec: + syncPolicy: + syncOptions: + - Replace=true +``` + +--- + +### Issue 3: Application Degraded + +**Symptoms:** + +- Application shows `Degraded` health status +- Pods not running correctly + +**Diagnostic Steps:** + +```bash +# Step 1: Get health details +argocd app get myapp + +# Step 2: Check pod status +kubectl get pods -n myapp +kubectl describe pod -n myapp + +# Step 3: Check pod logs +kubectl logs -n myapp --previous # For crashed containers +``` + +**Common Causes and Fixes:** + +**Cause A: Image Pull Failure** + +```bash +# Check events +kubectl get events -n myapp | grep -i pull + +# Verify image exists +docker pull myregistry/myapp:v1.0.0 + +# Check imagePullSecrets +kubectl get deployment myapp -n myapp -o yaml | grep -A 5 imagePullSecrets +``` + +**Cause B: Resource Limits** + +```bash +# Check for OOMKilled +kubectl get pods -n myapp -o jsonpath='{.items[*].status.containerStatuses[*].lastState.terminated.reason}' + +# Check resource usage +kubectl top pods -n myapp +``` + +**Cause C: Readiness Probe Failure** + +```bash +# Check probe configuration +kubectl get deployment myapp -n myapp -o yaml | grep -A 10 readinessProbe + +# Test endpoint manually +kubectl exec -it -n myapp -- curl localhost:8080/health +``` + +--- + +### Issue 4: Repository Connection Failed + +**Symptoms:** + +- ArgoCD can't connect to Git repository +- `ComparisonError` or `Unable to fetch repository` + +**Diagnostic Steps:** + +```bash +# Step 1: Check repository status +argocd repo list + +# Step 2: Test connection +argocd repo get https://github.com/org/repo.git + +# Step 3: Check repo-server logs +kubectl logs -n argocd -l app.kubernetes.io/name=argocd-repo-server +``` + +**Common Causes and Fixes:** + +**Cause A: Authentication Failure** + +```bash +# Update credentials +argocd repo add https://github.com/org/repo.git \ + --username git \ + --password $GITHUB_TOKEN \ + --upsert +``` + +**Cause B: SSH Key Issues** + +```bash +# Check known hosts +argocd cert list + +# Add SSH key +argocd repo add git@github.com:org/repo.git \ + --ssh-private-key-path ~/.ssh/id_rsa +``` + +**Cause C: Network/Firewall** + +```bash +# Test from repo-server pod +kubectl exec -it -n argocd -- \ + git ls-remote https://github.com/org/repo.git +``` + +--- + +### Issue 5: Webhook Not 
Triggering + +**Symptoms:** + +- Changes pushed to Git but no sync +- Waiting for poll interval + +**Diagnostic Steps:** + +```bash +# Step 1: Check webhook configuration in Git provider + +# Step 2: Verify ArgoCD webhook endpoint +curl -X POST https://argocd.example.com/api/webhook + +# Step 3: Check API server logs +kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server | grep webhook +``` + +**Common Causes and Fixes:** + +**Cause A: Wrong Webhook URL** + +``` +# Correct URL format +https://argocd.example.com/api/webhook + +# NOT +https://argocd.example.com/webhook +https://argocd.example.com/api/v1/webhook +``` + +**Cause B: Secret Mismatch** + +```yaml +# Verify webhook secret in argocd-secret +kubectl get secret argocd-secret -n argocd -o yaml | grep webhook +``` + +**Cause C: Ingress/Load Balancer Issue** + +```bash +# Check if webhook endpoint is reachable +curl -v https://argocd.example.com/api/webhook +``` + +--- + +### Issue 6: Slow Sync Performance + +**Symptoms:** + +- Syncs take a long time +- Timeouts during sync +- High resource usage on ArgoCD + +**Diagnostic Steps:** + +```bash +# Step 1: Check resource usage +kubectl top pods -n argocd + +# Step 2: Check number of resources +argocd app resources myapp | wc -l + +# Step 3: Check controller metrics +kubectl port-forward -n argocd svc/argocd-metrics 8082:8082 +curl localhost:8082/metrics | grep argocd_app +``` + +**Optimization Steps:** + +**1. Increase Controller Resources:** + +```yaml +# argocd-application-controller deployment +resources: + limits: + cpu: "2" + memory: "2Gi" + requests: + cpu: "500m" + memory: "512Mi" +``` + +**2. Split Large Applications:** + +```yaml +# Instead of one app with 500 resources +# Create multiple smaller apps +``` + +**3. Optimize Sync Options:** + +```yaml +spec: + syncPolicy: + syncOptions: + - ApplyOutOfSyncOnly=true # Only sync changed resources +``` + +**4. 
Adjust Reconciliation Timeout:** + +```yaml +# In argocd-cm ConfigMap +data: + timeout.reconciliation: 300s +``` + +--- + +### Issue 7: Multi-Cluster Connection Issues + +**Symptoms:** + +- External cluster shows as disconnected +- Applications targeting external cluster fail + +**Diagnostic Steps:** + +```bash +# Step 1: List clusters +argocd cluster list + +# Step 2: Check cluster status +argocd cluster get https://external-cluster:6443 + +# Step 3: Verify cluster secret +kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster +``` + +**Common Causes and Fixes:** + +**Cause A: Expired Credentials** + +```bash +# Rotate cluster credentials +argocd cluster rotate-auth https://external-cluster:6443 +``` + +**Cause B: Network Connectivity** + +```bash +# Test from ArgoCD pod +kubectl exec -it -n argocd -- \ + curl -k https://external-cluster:6443/healthz +``` + +**Cause C: Certificate Issues** + +```bash +# Re-add cluster with updated certs +argocd cluster add external-context --name external-cluster +``` + +--- + +## Debugging Checklist + +### Pre-Sync Checklist + +- [ ] Manifests are valid YAML +- [ ] Image tags exist and are pullable +- [ ] Secrets/ConfigMaps referenced exist +- [ ] Namespace exists or `CreateNamespace=true` +- [ ] RBAC allows ArgoCD to create resources +- [ ] Resource quotas won't block creation + +### Post-Failure Checklist + +- [ ] Check ArgoCD UI for error messages +- [ ] Review `argocd app diff` output +- [ ] Check Kubernetes events in target namespace +- [ ] Review ArgoCD controller logs +- [ ] Verify Git repository is accessible +- [ ] Check for webhook delivery failures + +### Performance Checklist + +- [ ] Applications are appropriately sized +- [ ] `ApplyOutOfSyncOnly` enabled where appropriate +- [ ] Controller resources adequate +- [ ] Redis cache functioning +- [ ] Repository server not overloaded + +--- + +## Log Locations + +| Component | How to Access | +|-----------|---------------| +| ArgoCD API Server | `kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server` | +| ArgoCD Controller | `kubectl logs -n argocd -l app.kubernetes.io/name=argocd-application-controller` | +| ArgoCD Repo Server | `kubectl logs -n argocd -l app.kubernetes.io/name=argocd-repo-server` | +| Flux Source Controller | `kubectl logs -n flux-system -l app=source-controller` | +| Flux Kustomize Controller | `kubectl logs -n flux-system -l app=kustomize-controller` | + +--- + +## Emergency Procedures + +### Force Sync (Override Errors) + +```bash +# ArgoCD +argocd app sync myapp --force --prune + +# Flux +flux reconcile kustomization myapp --force +``` + +### Disable Auto-Sync (Stop Reconciliation) + +```bash +# ArgoCD - patch application +argocd app set myapp --sync-policy none + +# Flux - suspend kustomization +flux suspend kustomization myapp +``` + +### Emergency Rollback + +```bash +# ArgoCD +argocd app rollback myapp + +# Git-based (works for any tool) +git revert HEAD +git push +``` + +### Nuclear Option (Delete and Recreate) + +```bash +# WARNING: Causes downtime +argocd app delete myapp --cascade=false # Keep resources +# Fix configuration +argocd app create myapp ... 
# Recreate application +``` diff --git a/.cursor/skills/gitops-principles-skill/scripts/gitops-health-check.sh b/.cursor/skills/gitops-principles-skill/scripts/gitops-health-check.sh new file mode 100644 index 0000000..773f60e --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/scripts/gitops-health-check.sh @@ -0,0 +1,455 @@ +#!/usr/bin/env bash +# +# GitOps Health Check Script +# Validates GitOps setup and checks for common issues +# +# Usage: +# ./gitops-health-check.sh # Full check +# ./gitops-health-check.sh --argocd # ArgoCD only +# ./gitops-health-check.sh --flux # Flux only +# ./gitops-health-check.sh --manifests ./path # Validate manifests +# +# Requirements: +# - kubectl configured with cluster access +# - argocd CLI (for ArgoCD checks) +# - flux CLI (for Flux checks) +# - kustomize (for manifest validation) +# - helm (for chart validation) + +set -euo pipefail + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Counters +PASSED=0 +FAILED=0 +WARNINGS=0 + +# Functions +print_header() { + echo -e "\n${BLUE}═══════════════════════════════════════════════════════════════${NC}" + echo -e "${BLUE} $1${NC}" + echo -e "${BLUE}═══════════════════════════════════════════════════════════════${NC}\n" +} + +print_check() { + echo -ne " Checking: $1... " +} + +pass() { + echo -e "${GREEN}✓ PASS${NC}" + ((PASSED++)) +} + +fail() { + echo -e "${RED}✗ FAIL${NC}" + echo -e " ${RED}→ $1${NC}" + ((FAILED++)) +} + +warn() { + echo -e "${YELLOW}⚠ WARN${NC}" + echo -e " ${YELLOW}→ $1${NC}" + ((WARNINGS++)) +} + +skip() { + echo -e "${YELLOW}○ SKIP${NC}" + echo -e " ${YELLOW}→ $1${NC}" +} + +# Check if command exists +command_exists() { + command -v "$1" >/dev/null 2>&1 +} + +# ============================================ +# KUBECTL / CLUSTER CHECKS +# ============================================ + +check_cluster_connectivity() { + print_header "CLUSTER CONNECTIVITY" + + print_check "kubectl available" + if command_exists kubectl; then + pass + else + fail "kubectl not found in PATH" + return 1 + fi + + print_check "Cluster connection" + if kubectl cluster-info >/dev/null 2>&1; then + pass + CLUSTER_CONTEXT=$(kubectl config current-context) + echo -e " ${BLUE}→ Context: ${CLUSTER_CONTEXT}${NC}" + else + fail "Cannot connect to cluster" + return 1 + fi + + print_check "Cluster version" + if VERSION=$(kubectl version --short 2>/dev/null | grep "Server" | awk '{print $3}'); then + pass + echo -e " ${BLUE}→ Server: ${VERSION}${NC}" + else + warn "Could not determine cluster version" + fi +} + +# ============================================ +# ARGOCD CHECKS +# ============================================ + +check_argocd() { + print_header "ARGOCD HEALTH" + + # Check if ArgoCD is installed + print_check "ArgoCD namespace exists" + if kubectl get namespace argocd >/dev/null 2>&1; then + pass + else + skip "ArgoCD not installed" + return 0 + fi + + # Check ArgoCD pods + print_check "ArgoCD pods running" + NOT_RUNNING=$(kubectl get pods -n argocd -o jsonpath='{.items[?(@.status.phase!="Running")].metadata.name}' 2>/dev/null) + if [ -z "$NOT_RUNNING" ]; then + pass + else + fail "Pods not running: $NOT_RUNNING" + fi + + # Check ArgoCD CLI + print_check "ArgoCD CLI available" + if command_exists argocd; then + pass + ARGOCD_VERSION=$(argocd version --client --short 2>/dev/null || echo "unknown") + echo -e " ${BLUE}→ CLI Version: ${ARGOCD_VERSION}${NC}" + else + warn "argocd CLI not installed" + fi + + # Check ArgoCD server 
version + print_check "ArgoCD server version" + if SERVER_VERSION=$(kubectl get deployment argocd-server -n argocd -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null); then + pass + echo -e " ${BLUE}→ Server: ${SERVER_VERSION}${NC}" + else + warn "Could not determine server version" + fi + + # Check Applications status + print_check "Application health" + if command_exists argocd && argocd app list >/dev/null 2>&1; then + DEGRADED=$(argocd app list -o json 2>/dev/null | jq -r '.[] | select(.status.health.status != "Healthy") | .metadata.name' 2>/dev/null || true) + if [ -z "$DEGRADED" ]; then + pass + APP_COUNT=$(argocd app list -o json 2>/dev/null | jq length 2>/dev/null || echo "?") + echo -e " ${BLUE}→ All ${APP_COUNT} applications healthy${NC}" + else + warn "Degraded apps: $DEGRADED" + fi + else + skip "Cannot check apps (not logged in or CLI unavailable)" + fi + + # Check for OutOfSync applications + print_check "Application sync status" + if command_exists argocd && argocd app list >/dev/null 2>&1; then + OUTOFSYNC=$(argocd app list -o json 2>/dev/null | jq -r '.[] | select(.status.sync.status != "Synced") | .metadata.name' 2>/dev/null || true) + if [ -z "$OUTOFSYNC" ]; then + pass + else + warn "OutOfSync apps: $OUTOFSYNC" + fi + else + skip "Cannot check sync status" + fi + + # Check repo server cache + print_check "Repository server health" + if kubectl exec -n argocd deploy/argocd-repo-server -- curl -s localhost:8084/healthz >/dev/null 2>&1; then + pass + else + warn "Cannot verify repo-server health" + fi +} + +# ============================================ +# FLUX CHECKS +# ============================================ + +check_flux() { + print_header "FLUX HEALTH" + + # Check if Flux is installed + print_check "Flux namespace exists" + if kubectl get namespace flux-system >/dev/null 2>&1; then + pass + else + skip "Flux not installed" + return 0 + fi + + # Check Flux CLI + print_check "Flux CLI available" + if command_exists flux; then + pass + FLUX_VERSION=$(flux version --client 2>/dev/null | head -1 || echo "unknown") + echo -e " ${BLUE}→ CLI: ${FLUX_VERSION}${NC}" + else + warn "flux CLI not installed" + fi + + # Check Flux components + print_check "Flux controllers running" + if command_exists flux; then + FLUX_STATUS=$(flux check 2>&1 || true) + if echo "$FLUX_STATUS" | grep -q "all checks passed"; then + pass + else + warn "Some Flux checks failed" + echo "$FLUX_STATUS" | head -5 | sed 's/^/ /' + fi + else + # Fallback to kubectl check + NOT_RUNNING=$(kubectl get pods -n flux-system -o jsonpath='{.items[?(@.status.phase!="Running")].metadata.name}' 2>/dev/null) + if [ -z "$NOT_RUNNING" ]; then + pass + else + fail "Pods not running: $NOT_RUNNING" + fi + fi + + # Check Kustomizations + print_check "Kustomization reconciliation" + if kubectl get kustomizations.kustomize.toolkit.fluxcd.io -A >/dev/null 2>&1; then + FAILED_KS=$(kubectl get kustomizations.kustomize.toolkit.fluxcd.io -A -o json 2>/dev/null | \ + jq -r '.items[] | select(.status.conditions[-1].status != "True") | .metadata.namespace + "/" + .metadata.name' 2>/dev/null || true) + if [ -z "$FAILED_KS" ]; then + pass + else + warn "Failed Kustomizations: $FAILED_KS" + fi + else + skip "No Kustomizations found" + fi + + # Check Git sources + print_check "Git sources ready" + if kubectl get gitrepositories.source.toolkit.fluxcd.io -A >/dev/null 2>&1; then + FAILED_GIT=$(kubectl get gitrepositories.source.toolkit.fluxcd.io -A -o json 2>/dev/null | \ + jq -r '.items[] | 
select(.status.conditions[-1].status != "True") | .metadata.namespace + "/" + .metadata.name' 2>/dev/null || true) + if [ -z "$FAILED_GIT" ]; then + pass + else + warn "Failed Git sources: $FAILED_GIT" + fi + else + skip "No GitRepositories found" + fi +} + +# ============================================ +# MANIFEST VALIDATION +# ============================================ + +validate_manifests() { + local MANIFEST_PATH="${1:-.}" + + print_header "MANIFEST VALIDATION" + + print_check "Manifest path exists" + if [ -d "$MANIFEST_PATH" ]; then + pass + echo -e " ${BLUE}→ Path: ${MANIFEST_PATH}${NC}" + else + fail "Path does not exist: $MANIFEST_PATH" + return 1 + fi + + # Check for kustomization.yaml + print_check "Kustomize structure" + if find "$MANIFEST_PATH" -name "kustomization.yaml" -o -name "kustomization.yml" | grep -q .; then + pass + KUSTOMIZE_COUNT=$(find "$MANIFEST_PATH" -name "kustomization.yaml" -o -name "kustomization.yml" | wc -l | tr -d ' ') + echo -e " ${BLUE}→ Found ${KUSTOMIZE_COUNT} kustomization files${NC}" + else + skip "No kustomization files found" + fi + + # Validate kustomize build + if command_exists kustomize; then + print_check "Kustomize build validation" + local BUILD_FAILED=false + while IFS= read -r -d '' KS_FILE; do + KS_DIR=$(dirname "$KS_FILE") + if ! kustomize build "$KS_DIR" >/dev/null 2>&1; then + fail "Kustomize build failed for: $KS_DIR" + kustomize build "$KS_DIR" 2>&1 | head -5 | sed 's/^/ /' + BUILD_FAILED=true + fi + done < <(find "$MANIFEST_PATH" \( -name "kustomization.yaml" -o -name "kustomization.yml" \) -print0 2>/dev/null) + if [ "$BUILD_FAILED" = false ]; then + pass + fi + else + skip "kustomize CLI not installed" + fi + + # Check for mutable image tags + print_check "Image tag immutability" + MUTABLE_TAGS=$(grep -rh "image:" "$MANIFEST_PATH" 2>/dev/null | grep -E ":(latest|dev|staging|master|main)$" || true) + if [ -z "$MUTABLE_TAGS" ]; then + pass + else + warn "Mutable image tags found" + echo "$MUTABLE_TAGS" | head -3 | sed 's/^/ /' + fi + + # Check for secrets in plain text + print_check "No plaintext secrets" + SECRETS_IN_GIT=$(grep -rl "kind: Secret" "$MANIFEST_PATH" 2>/dev/null | \ + xargs -I {} grep -l "^ [a-zA-Z]*:" {} 2>/dev/null | \ + grep -v "SealedSecret\|ExternalSecret" || true) + if [ -z "$SECRETS_IN_GIT" ]; then + pass + else + warn "Plain secrets found (use SealedSecrets or ExternalSecrets)" + echo "$SECRETS_IN_GIT" | head -3 | sed 's/^/ /' + fi + + # Validate Helm charts if present + if command_exists helm; then + print_check "Helm chart validation" + CHARTS=$(find "$MANIFEST_PATH" -name "Chart.yaml" 2>/dev/null || true) + if [ -n "$CHARTS" ]; then + for CHART in $CHARTS; do + CHART_DIR=$(dirname "$CHART") + if helm lint "$CHART_DIR" >/dev/null 2>&1; then + : # success + else + fail "Helm lint failed for: $CHART_DIR" + fi + done + pass + else + skip "No Helm charts found" + fi + fi +} + +# ============================================ +# GITOPS BEST PRACTICES +# ============================================ + +check_best_practices() { + print_header "GITOPS BEST PRACTICES" + + # Check for automated sync with self-heal + print_check "Self-healing enabled" + if command_exists argocd && argocd app list >/dev/null 2>&1; then + NO_SELFHEAL=$(argocd app list -o json 2>/dev/null | \ + jq -r '.[] | select(.spec.syncPolicy.automated.selfHeal != true) | .metadata.name' 2>/dev/null || true) + if [ -z "$NO_SELFHEAL" ]; then + pass + else + warn "Apps without self-heal: $(echo "$NO_SELFHEAL" | wc -l | tr -d ' ')" + fi + else + skip 
"Cannot check ArgoCD apps" + fi + + # Check for prune enabled + print_check "Prune enabled" + if command_exists argocd && argocd app list >/dev/null 2>&1; then + NO_PRUNE=$(argocd app list -o json 2>/dev/null | \ + jq -r '.[] | select(.spec.syncPolicy.automated.prune != true) | .metadata.name' 2>/dev/null || true) + if [ -z "$NO_PRUNE" ]; then + pass + else + warn "Apps without prune: $(echo "$NO_PRUNE" | wc -l | tr -d ' ')" + fi + else + skip "Cannot check ArgoCD apps" + fi + + # Check for sync windows in production + print_check "Sync windows configured" + if kubectl get appprojects.argoproj.io -n argocd -o json 2>/dev/null | jq -e '.items[] | select(.metadata.name == "production") | .spec.syncWindows' >/dev/null 2>&1; then + pass + else + warn "No sync windows on production project" + fi +} + +# ============================================ +# SUMMARY +# ============================================ + +print_summary() { + print_header "SUMMARY" + + echo -e " ${GREEN}Passed:${NC} $PASSED" + echo -e " ${RED}Failed:${NC} $FAILED" + echo -e " ${YELLOW}Warnings:${NC} $WARNINGS" + echo "" + + if [ $FAILED -gt 0 ]; then + echo -e " ${RED}Status: UNHEALTHY - Fix failed checks above${NC}" + exit 1 + elif [ $WARNINGS -gt 0 ]; then + echo -e " ${YELLOW}Status: WARNING - Review warnings above${NC}" + exit 0 + else + echo -e " ${GREEN}Status: HEALTHY - All checks passed!${NC}" + exit 0 + fi +} + +# ============================================ +# MAIN +# ============================================ + +main() { + echo "" + echo "╔═══════════════════════════════════════════════════════════════╗" + echo "║ GitOps Health Check ║" + echo "╚═══════════════════════════════════════════════════════════════╝" + + case "${1:-all}" in + --argocd) + check_cluster_connectivity + check_argocd + ;; + --flux) + check_cluster_connectivity + check_flux + ;; + --manifests) + validate_manifests "${2:-.}" + ;; + --practices) + check_cluster_connectivity + check_best_practices + ;; + all|*) + check_cluster_connectivity + check_argocd + check_flux + check_best_practices + ;; + esac + + print_summary +} + +main "$@" diff --git a/.cursor/skills/gitops-principles-skill/templates/application.yaml b/.cursor/skills/gitops-principles-skill/templates/application.yaml new file mode 100644 index 0000000..474f0b1 --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/templates/application.yaml @@ -0,0 +1,214 @@ +# yamllint disable rule:document-start rule:comments-indentation +# ArgoCD Application Template +# Complete example with all common configurations +# +# Usage: +# 1. Copy this file +# 2. Replace placeholders (marked with <...>) +# 3. 
Apply to ArgoCD namespace or commit to Git +# +# Reference: https://argo-cd.readthedocs.io/en/stable/user-guide/application-specification/ +--- +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + # Application name - must be unique within ArgoCD namespace + name: + namespace: argocd + + # Finalizer ensures resources are deleted when Application is deleted + # Remove if you want to keep resources after Application deletion + finalizers: + - resources-finalizer.argocd.argoproj.io + + # Labels for organization and filtering + labels: + app.kubernetes.io/name: + app.kubernetes.io/part-of: + environment: + team: + + # Annotations for integrations + annotations: + # Notifications (if configured) + notifications.argoproj.io/subscribe.on-sync-succeeded.slack: + notifications.argoproj.io/subscribe.on-health-degraded.slack: + + # Kargo authorization (if using Kargo for promotions) + # kargo.akuity.io/authorized-stage: ":" + +spec: + # Project defines RBAC and allowed resources + # Use 'default' for simple setups or create dedicated project + project: default + + # ============================================ + # SOURCE CONFIGURATION + # ============================================ + + # Single source (simple) + source: + # Git repository URL + repoURL: https://github.com//.git + + # Branch, tag, or commit SHA + targetRevision: HEAD # or 'main', 'v1.0.0', 'abc123' + + # Path to manifests within repository + path: manifests/ + + # --- KUSTOMIZE OPTIONS (if using Kustomize) --- + # kustomize: + # namePrefix: - + # nameSuffix: - + # commonLabels: + # app: + # commonAnnotations: + # team: + # images: + # - =: + + # --- HELM OPTIONS (if using Helm) --- + # helm: + # releaseName: + # valueFiles: + # - values.yaml + # - values-.yaml + # parameters: + # - name: image.tag + # value: + # - name: replicaCount + # value: "3" + # # For sensitive values, use valueFiles from a Secret + # # valuesObject can be used for inline values + + # Multi-source (advanced - Helm with external values) + # sources: + # - repoURL: https://charts.bitnami.com/bitnami + # chart: nginx + # targetRevision: 15.0.0 + # helm: + # valueFiles: + # - $values//nginx/values.yaml + # - repoURL: https://github.com//helm-values.git + # targetRevision: main + # ref: values + + # ============================================ + # DESTINATION CONFIGURATION + # ============================================ + + destination: + # Target cluster (use cluster name or URL) + # For in-cluster: https://kubernetes.default.svc + server: https://kubernetes.default.svc + # OR use cluster name registered in ArgoCD: + # name: production-cluster + + # Target namespace (will be created if CreateNamespace=true) + namespace: + + # ============================================ + # SYNC POLICY + # ============================================ + + syncPolicy: + # Automated sync (recommended for non-production) + automated: + # Automatically delete resources not in Git + prune: true + # Automatically revert manual changes + selfHeal: true + # Allow sync when only app spec changes (not desired state) + allowEmpty: false + + # Sync options + syncOptions: + # Create namespace if it doesn't exist + - CreateNamespace=true + # Use server-side apply (better for CRDs) + - ServerSideApply=true + # Prune resources after sync completes + - PruneLast=true + # Only sync resources that are out of sync (performance) + - ApplyOutOfSyncOnly=true + # Respect ignoreDifferences during sync + - RespectIgnoreDifferences=true + # Skip validation (use with caution) + # - Validate=false 
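      # Optional extras (commented out; enable only if your workflow needs them —
      # both options are also shown in the troubleshooting and sync-policy docs)
      # Force kubectl replace instead of apply (last resort for immutable-field conflicts)
      # - Replace=true
      # Wait for pruned resources to be fully deleted before the sync completes
      # - PrunePropagationPolicy=foreground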
+ + # Retry policy for failed syncs + retry: + limit: 5 + backoff: + duration: 5s + factor: 2 + maxDuration: 3m + + # ============================================ + # IGNORE DIFFERENCES + # ============================================ + + # Ignore specific fields that are modified by controllers + ignoreDifferences: + # Ignore replicas managed by HPA + - group: apps + kind: Deployment + jsonPointers: + - /spec/replicas + + # Ignore auto-generated fields + - group: "" + kind: Service + jsonPointers: + - /spec/clusterIP + - /spec/clusterIPs + + # Ignore webhook CA bundles (managed by cert-manager) + - group: admissionregistration.k8s.io + kind: MutatingWebhookConfiguration + jsonPointers: + - /webhooks/0/clientConfig/caBundle + + # Ignore specific annotation + # - group: apps + # kind: Deployment + # jqPathExpressions: + # - .metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"] + + # ============================================ + # HEALTH CHECKS (Custom) + # ============================================ + + # Override default health assessment + # ignoreDifferences and health checks work together + # revisionHistoryLimit: 10 # Number of ReplicaSets to keep + +--- +# Production-ready example with manual sync +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: -production + namespace: argocd + finalizers: + - resources-finalizer.argocd.argoproj.io +spec: + project: production # Use dedicated project with restrictions + + source: + repoURL: https://github.com//.git + targetRevision: v1.0.0 # Pin to specific version + path: manifests//overlays/production + + destination: + server: https://kubernetes.default.svc + namespace: -prod + + # Manual sync for production (no automated) + syncPolicy: + syncOptions: + - CreateNamespace=true + - ServerSideApply=true + - PruneLast=true + # Note: No 'automated' block - requires manual sync diff --git a/.cursor/skills/gitops-principles-skill/templates/applicationset.yaml b/.cursor/skills/gitops-principles-skill/templates/applicationset.yaml new file mode 100644 index 0000000..ff273cd --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/templates/applicationset.yaml @@ -0,0 +1,295 @@ +# yamllint disable rule:document-start rule:quoted-strings rule:comments-indentation +# ArgoCD ApplicationSet Templates +# Generates multiple Applications from a single template +# +# Usage: +# 1. Choose the generator pattern that fits your use case +# 2. Replace placeholders (marked with <...>) +# 3. 
Apply to ArgoCD namespace +# +# Reference: https://argo-cd.readthedocs.io/en/stable/user-guide/applicationset/ +--- +# ============================================ +# PATTERN 1: LIST GENERATOR +# Deploy same app to multiple environments +# ============================================ + +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: -environments + namespace: argocd +spec: + generators: + - list: + elements: + - env: dev + cluster: https://dev-cluster.example.com + namespace: -dev + values_repo: dev + - env: staging + cluster: https://staging-cluster.example.com + namespace: -staging + values_repo: staging + - env: production + cluster: https://prod-cluster.example.com + namespace: -prod + values_repo: production + + template: + metadata: + name: '-{{env}}' + labels: + environment: '{{env}}' + spec: + project: '{{env}}' + + # Multi-source: Helm chart + environment values + sources: + - repoURL: https://charts.example.com + chart: + targetRevision: + helm: + valueFiles: + - $values/{{values_repo}}/values.yaml + - repoURL: https://github.com//helm-values.git + targetRevision: main + ref: values + + destination: + server: '{{cluster}}' + namespace: '{{namespace}}' + + syncPolicy: + automated: + prune: true + selfHeal: true + syncOptions: + - CreateNamespace=true + +--- +# ============================================ +# PATTERN 2: GIT DIRECTORY GENERATOR +# Auto-discover apps from Git repository structure +# ============================================ + +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: apps-discovery + namespace: argocd +spec: + generators: + - git: + repoURL: https://github.com//gitops-repo.git + revision: HEAD + directories: + # Include all directories under apps/ + - path: apps/* + # Exclude specific directories + - path: apps/excluded-app + exclude: true + + template: + metadata: + # {{path.basename}} = directory name (e.g., "frontend", "backend") + name: '{{path.basename}}' + spec: + project: default + source: + repoURL: https://github.com//gitops-repo.git + targetRevision: HEAD + path: '{{path}}' + destination: + server: https://kubernetes.default.svc + namespace: '{{path.basename}}' + syncPolicy: + automated: + prune: true + selfHeal: true + syncOptions: + - CreateNamespace=true + +--- +# ============================================ +# PATTERN 3: CLUSTER GENERATOR +# Deploy to all registered clusters +# ============================================ + +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: cluster-addons + namespace: argocd +spec: + generators: + - clusters: + # Select clusters by label + selector: + matchLabels: + environment: production + # Or match all clusters: + # selector: {} + + template: + metadata: + name: 'monitoring-{{name}}' + spec: + project: infrastructure + source: + repoURL: https://github.com//cluster-addons.git + targetRevision: HEAD + path: monitoring + destination: + # {{server}} = cluster API URL + # {{name}} = cluster name + server: '{{server}}' + namespace: monitoring + syncPolicy: + automated: + prune: true + selfHeal: true + +--- +# ============================================ +# PATTERN 4: MATRIX GENERATOR +# Combine two generators (e.g., clusters x apps) +# ============================================ + +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: platform-apps + namespace: argocd +spec: + generators: + - matrix: + generators: + # Generator 1: All production clusters + - clusters: + selector: + matchLabels: + environment: 
production + # Generator 2: All apps in apps/ directory + - git: + repoURL: https://github.com//platform-apps.git + revision: HEAD + directories: + - path: apps/* + + template: + metadata: + # Combination of cluster name and app name + name: '{{name}}-{{path.basename}}' + spec: + project: platform + source: + repoURL: https://github.com//platform-apps.git + targetRevision: HEAD + path: '{{path}}' + destination: + server: '{{server}}' + namespace: '{{path.basename}}' + syncPolicy: + automated: + prune: true + selfHeal: true + +--- +# ============================================ +# PATTERN 5: PULL REQUEST GENERATOR +# Deploy preview environments for PRs +# ============================================ + +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: pr-previews + namespace: argocd +spec: + generators: + - pullRequest: + github: + owner: + repo: + tokenRef: + secretName: github-token + key: token + labels: + - preview + requeueAfterSeconds: 60 + + template: + metadata: + name: 'preview-{{branch_slug}}' + labels: + preview: "true" + pr: '{{number}}' + spec: + project: previews + source: + repoURL: 'https://github.com//.git' + targetRevision: '{{head_sha}}' + path: manifests + kustomize: + nameSuffix: '-pr-{{number}}' + destination: + server: https://kubernetes.default.svc + namespace: 'preview-{{number}}' + syncPolicy: + automated: + prune: true + selfHeal: true + syncOptions: + - CreateNamespace=true + +--- +# ============================================ +# PATTERN 6: MERGE GENERATOR +# Combine generators with override logic +# ============================================ + +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: team-apps + namespace: argocd +spec: + generators: + - merge: + mergeKeys: + - env + generators: + # Base configuration from list + - list: + elements: + - env: dev + replicas: "1" + autosync: "true" + - env: staging + replicas: "2" + autosync: "true" + - env: production + replicas: "3" + autosync: "false" + # Override specific environments + - list: + elements: + - env: production + cluster: https://prod.example.com + + template: + metadata: + name: 'myapp-{{env}}' + spec: + project: '{{env}}' + source: + repoURL: https://github.com//myapp.git + targetRevision: HEAD + path: overlays/{{env}} + kustomize: + images: + - myapp:v1.0.0 + destination: + server: '{{cluster}}' + namespace: myapp diff --git a/.cursor/skills/gitops-principles-skill/templates/kustomization.yaml b/.cursor/skills/gitops-principles-skill/templates/kustomization.yaml new file mode 100644 index 0000000..f610cc9 --- /dev/null +++ b/.cursor/skills/gitops-principles-skill/templates/kustomization.yaml @@ -0,0 +1,380 @@ +# yamllint disable rule:document-start rule:quoted-strings rule:comments-indentation +# Kustomize Templates for GitOps +# Overlay-based configuration management +# +# Directory Structure: +# manifests/ +# ├── base/ # Shared base configuration +# │ ├── kustomization.yaml +# │ ├── deployment.yaml +# │ ├── service.yaml +# │ └── configmap.yaml +# └── overlays/ +# ├── dev/ +# │ └── kustomization.yaml +# ├── staging/ +# │ └── kustomization.yaml +# └── production/ +# └── kustomization.yaml +--- +# ============================================ +# BASE KUSTOMIZATION +# manifests/base/kustomization.yaml +# ============================================ + +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +# Metadata applied to all resources +metadata: + name: -base + +# Common labels applied to all resources +commonLabels: + 
app.kubernetes.io/name: + app.kubernetes.io/managed-by: kustomize + +# Common annotations +commonAnnotations: + app.kubernetes.io/part-of: + +# Resources to include +resources: + - deployment.yaml + - service.yaml + - configmap.yaml + - serviceaccount.yaml + # - ingress.yaml + # - hpa.yaml + # - pdb.yaml + +# ConfigMap generator (creates ConfigMap from files/literals) +# configMapGenerator: +# - name: app-config +# files: +# - config.json +# literals: +# - LOG_LEVEL=info + +# Secret generator (creates Secret from files/literals) +# secretGenerator: +# - name: app-secrets +# files: +# - secrets/api-key.txt +# type: Opaque + +--- +# ============================================ +# DEVELOPMENT OVERLAY +# manifests/overlays/dev/kustomization.yaml +# ============================================ + +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +# Reference the base +resources: + - ../../base + +# Namespace for all resources +namespace: -dev + +# Name prefix/suffix +namePrefix: dev- +# nameSuffix: -dev + +# Additional labels for this environment +commonLabels: + environment: development + +# Image override +images: + - name: + newName: / + newTag: latest # Dev can use latest + +# Replica count override +replicas: + - name: + count: 1 + +# Resource patches (strategic merge) +patches: + # Reduce resources for dev + - target: + kind: Deployment + name: + patch: |- + - op: replace + path: /spec/template/spec/containers/0/resources + value: + limits: + cpu: "200m" + memory: "256Mi" + requests: + cpu: "100m" + memory: "128Mi" + +# ConfigMap patches +configMapGenerator: + - name: app-config + behavior: merge + literals: + - LOG_LEVEL=debug + - ENV=development + +--- +# ============================================ +# STAGING OVERLAY +# manifests/overlays/staging/kustomization.yaml +# ============================================ + +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - ../../base + +namespace: -staging + +namePrefix: stg- + +commonLabels: + environment: staging + +images: + - name: + newName: / + newTag: v1.0.0-rc1 # Release candidate + +replicas: + - name: + count: 2 + +patches: + # Enable spot tolerations for staging + - target: + kind: Deployment + name: + patch: |- + - op: add + path: /spec/template/spec/tolerations + value: + - key: "kubernetes.azure.com/scalesetpriority" + operator: "Equal" + value: "spot" + effect: "NoSchedule" + - op: add + path: /spec/template/spec/affinity + value: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: "kubernetes.azure.com/scalesetpriority" + operator: In + values: + - "spot" + +configMapGenerator: + - name: app-config + behavior: merge + literals: + - LOG_LEVEL=info + - ENV=staging + +--- +# ============================================ +# PRODUCTION OVERLAY +# manifests/overlays/production/kustomization.yaml +# ============================================ + +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - ../../base + # Production-specific resources + - pdb.yaml # Pod Disruption Budget + - hpa.yaml # Horizontal Pod Autoscaler + - network-policy.yaml + +namespace: -prod + +namePrefix: prod- + +commonLabels: + environment: production + +commonAnnotations: + owner: platform-team + cost-center: engineering + +# Pin to specific immutable tag +images: + - name: + newName: / + # Use digest for production + # digest: sha256:abc123... 
+ newTag: v1.0.0 + +replicas: + - name: + count: 3 + +patches: + # Production-grade resources + - target: + kind: Deployment + name: + patch: |- + - op: replace + path: /spec/template/spec/containers/0/resources + value: + limits: + cpu: "1000m" + memory: "1Gi" + requests: + cpu: "500m" + memory: "512Mi" + - op: add + path: /spec/template/spec/topologySpreadConstraints + value: + - maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + app.kubernetes.io/name: + +configMapGenerator: + - name: app-config + behavior: merge + literals: + - LOG_LEVEL=warn + - ENV=production + +--- +# ============================================ +# SAMPLE BASE DEPLOYMENT +# manifests/base/deployment.yaml +# ============================================ + +apiVersion: apps/v1 +kind: Deployment +metadata: + name: +spec: + selector: + matchLabels: + app.kubernetes.io/name: + template: + metadata: + labels: + app.kubernetes.io/name: + spec: + serviceAccountName: + securityContext: + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + containers: + - name: + image: # Replaced by kustomize + ports: + - name: http + containerPort: 8080 + protocol: TCP + envFrom: + - configMapRef: + name: app-config + resources: + limits: + cpu: "500m" + memory: "512Mi" + requests: + cpu: "250m" + memory: "256Mi" + livenessProbe: + httpGet: + path: /healthz + port: http + initialDelaySeconds: 15 + periodSeconds: 20 + readinessProbe: + httpGet: + path: /ready + port: http + initialDelaySeconds: 5 + periodSeconds: 10 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + readOnlyRootFilesystem: true + +--- +# ============================================ +# SAMPLE PRODUCTION PDB +# manifests/overlays/production/pdb.yaml +# ============================================ + +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: +spec: + minAvailable: 2 # Or use maxUnavailable: 1 + selector: + matchLabels: + app.kubernetes.io/name: + +--- +# ============================================ +# SAMPLE PRODUCTION HPA +# manifests/overlays/production/hpa.yaml +# ============================================ + +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: + minReplicas: 3 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 70 + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: 80 + behavior: + scaleDown: + stabilizationWindowSeconds: 300 + policies: + - type: Percent + value: 10 + periodSeconds: 60 + scaleUp: + stabilizationWindowSeconds: 0 + policies: + - type: Percent + value: 100 + periodSeconds: 15 + - type: Pods + value: 4 + periodSeconds: 15 + selectPolicy: Max diff --git a/.cursor/skills/gitops-workflow/SKILL.md b/.cursor/skills/gitops-workflow/SKILL.md new file mode 100644 index 0000000..5db039b --- /dev/null +++ b/.cursor/skills/gitops-workflow/SKILL.md @@ -0,0 +1,289 @@ +--- +name: gitops-workflow +description: Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes deployments with continuous reconciliation. Use when implementing GitOps practices, automating Kubernetes deployments, or setting up declarative infrastructure management. +--- + +# GitOps Workflow + +Complete guide to implementing GitOps workflows with ArgoCD and Flux for automated Kubernetes deployments. 
+ +## Purpose + +Implement declarative, Git-based continuous delivery for Kubernetes using ArgoCD or Flux CD, following OpenGitOps principles. + +## When to Use This Skill + +- Set up GitOps for Kubernetes clusters +- Automate application deployments from Git +- Implement progressive delivery strategies +- Manage multi-cluster deployments +- Configure automated sync policies +- Set up secret management in GitOps + +## OpenGitOps Principles + +1. **Declarative** - Entire system described declaratively +2. **Versioned and Immutable** - Desired state stored in Git +3. **Pulled Automatically** - Software agents pull desired state +4. **Continuously Reconciled** - Agents reconcile actual vs desired state + +## ArgoCD Setup + +### 1. Installation + +```bash +# Create namespace +kubectl create namespace argocd + +# Install ArgoCD +kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml + +# Get admin password +kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d +``` + +**Reference:** See `references/argocd-setup.md` for detailed setup + +### 2. Repository Structure + +``` +gitops-repo/ +├── apps/ +│ ├── production/ +│ │ ├── app1/ +│ │ │ ├── kustomization.yaml +│ │ │ └── deployment.yaml +│ │ └── app2/ +│ └── staging/ +├── infrastructure/ +│ ├── ingress-nginx/ +│ ├── cert-manager/ +│ └── monitoring/ +└── argocd/ + ├── applications/ + └── projects/ +``` + +### 3. Create Application + +```yaml +# argocd/applications/my-app.yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: my-app + namespace: argocd +spec: + project: default + source: + repoURL: https://github.com/org/gitops-repo + targetRevision: main + path: apps/production/my-app + destination: + server: https://kubernetes.default.svc + namespace: production + syncPolicy: + automated: + prune: true + selfHeal: true + syncOptions: + - CreateNamespace=true +``` + +### 4. App of Apps Pattern + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: applications + namespace: argocd +spec: + project: default + source: + repoURL: https://github.com/org/gitops-repo + targetRevision: main + path: argocd/applications + destination: + server: https://kubernetes.default.svc + namespace: argocd + syncPolicy: + automated: {} +``` + +## Flux CD Setup + +### 1. Installation + +```bash +# Install Flux CLI +curl -s https://fluxcd.io/install.sh | sudo bash + +# Bootstrap Flux +flux bootstrap github \ + --owner=org \ + --repository=gitops-repo \ + --branch=main \ + --path=clusters/production \ + --personal +``` + +### 2. Create GitRepository + +```yaml +apiVersion: source.toolkit.fluxcd.io/v1 +kind: GitRepository +metadata: + name: my-app + namespace: flux-system +spec: + interval: 1m + url: https://github.com/org/my-app + ref: + branch: main +``` + +### 3. 
Create Kustomization + +```yaml +apiVersion: kustomize.toolkit.fluxcd.io/v1 +kind: Kustomization +metadata: + name: my-app + namespace: flux-system +spec: + interval: 5m + path: ./deploy + prune: true + sourceRef: + kind: GitRepository + name: my-app +``` + +## Sync Policies + +### Auto-Sync Configuration + +**ArgoCD:** + +```yaml +syncPolicy: + automated: + prune: true # Delete resources not in Git + selfHeal: true # Reconcile manual changes + allowEmpty: false + retry: + limit: 5 + backoff: + duration: 5s + factor: 2 + maxDuration: 3m +``` + +**Flux:** + +```yaml +spec: + interval: 1m + prune: true + wait: true + timeout: 5m +``` + +**Reference:** See `references/sync-policies.md` + +## Progressive Delivery + +### Canary Deployment with ArgoCD Rollouts + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: Rollout +metadata: + name: my-app +spec: + replicas: 5 + strategy: + canary: + steps: + - setWeight: 20 + - pause: { duration: 1m } + - setWeight: 50 + - pause: { duration: 2m } + - setWeight: 100 +``` + +### Blue-Green Deployment + +```yaml +strategy: + blueGreen: + activeService: my-app + previewService: my-app-preview + autoPromotionEnabled: false +``` + +## Secret Management + +### External Secrets Operator + +```yaml +apiVersion: external-secrets.io/v1beta1 +kind: ExternalSecret +metadata: + name: db-credentials +spec: + refreshInterval: 1h + secretStoreRef: + name: aws-secrets-manager + kind: SecretStore + target: + name: db-credentials + data: + - secretKey: password + remoteRef: + key: prod/db/password +``` + +### Sealed Secrets + +```bash +# Encrypt secret +kubeseal --format yaml < secret.yaml > sealed-secret.yaml + +# Commit sealed-secret.yaml to Git +``` + +## Best Practices + +1. **Use separate repos or branches** for different environments +2. **Implement RBAC** for Git repositories +3. **Enable notifications** for sync failures +4. **Use health checks** for custom resources +5. **Implement approval gates** for production +6. **Keep secrets out of Git** (use External Secrets) +7. **Use App of Apps pattern** for organization +8. **Tag releases** for easy rollback +9. **Monitor sync status** with alerts +10. **Test changes** in staging first + +## Troubleshooting + +**Sync failures:** + +```bash +argocd app get my-app +argocd app sync my-app --prune +``` + +**Out of sync status:** + +```bash +argocd app diff my-app +argocd app sync my-app --force +``` + +## Related Skills + +- `k8s-manifest-generator` - For creating manifests +- `helm-chart-scaffolding` - For packaging applications diff --git a/.cursor/skills/gitops-workflow/references/argocd-setup.md b/.cursor/skills/gitops-workflow/references/argocd-setup.md new file mode 100644 index 0000000..6f9b572 --- /dev/null +++ b/.cursor/skills/gitops-workflow/references/argocd-setup.md @@ -0,0 +1,144 @@ +# ArgoCD Setup and Configuration + +## Installation Methods + +### 1. Standard Installation + +```bash +kubectl create namespace argocd +kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml +``` + +### 2. High Availability Installation + +```bash +kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml +``` + +### 3. 
Helm Installation + +```bash +helm repo add argo https://argoproj.github.io/argo-helm +helm install argocd argo/argo-cd -n argocd --create-namespace +``` + +## Initial Configuration + +### Access ArgoCD UI + +```bash +# Port forward +kubectl port-forward svc/argocd-server -n argocd 8080:443 + +# Get initial admin password +argocd admin initial-password -n argocd +``` + +### Configure Ingress + +```yaml +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: argocd-server-ingress + namespace: argocd + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + nginx.ingress.kubernetes.io/ssl-passthrough: "true" + nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" +spec: + ingressClassName: nginx + rules: + - host: argocd.example.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: argocd-server + port: + number: 443 + tls: + - hosts: + - argocd.example.com + secretName: argocd-secret +``` + +## CLI Configuration + +### Login + +```bash +argocd login argocd.example.com --username admin +``` + +### Add Repository + +```bash +argocd repo add https://github.com/org/repo --username user --password token +``` + +### Create Application + +```bash +argocd app create my-app \ + --repo https://github.com/org/repo \ + --path apps/my-app \ + --dest-server https://kubernetes.default.svc \ + --dest-namespace production +``` + +## SSO Configuration + +### GitHub OAuth + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: argocd-cm + namespace: argocd +data: + url: https://argocd.example.com + dex.config: | + connectors: + - type: github + id: github + name: GitHub + config: + clientID: $GITHUB_CLIENT_ID + clientSecret: $GITHUB_CLIENT_SECRET + orgs: + - name: my-org +``` + +## RBAC Configuration + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: argocd-rbac-cm + namespace: argocd +data: + policy.default: role:readonly + policy.csv: | + p, role:developers, applications, *, */dev, allow + p, role:operators, applications, *, */*, allow + g, my-org:devs, role:developers + g, my-org:ops, role:operators +``` + +## Best Practices + +1. Enable SSO for production +2. Implement RBAC policies +3. Use separate projects for teams +4. Enable audit logging +5. Configure notifications +6. Use ApplicationSets for multi-cluster +7. Implement resource hooks +8. Configure health checks +9. Use sync windows for maintenance +10. 
Monitor with Prometheus metrics diff --git a/.cursor/skills/gitops-workflow/references/sync-policies.md b/.cursor/skills/gitops-workflow/references/sync-policies.md new file mode 100644 index 0000000..2ad45b7 --- /dev/null +++ b/.cursor/skills/gitops-workflow/references/sync-policies.md @@ -0,0 +1,139 @@ +# GitOps Sync Policies + +## ArgoCD Sync Policies + +### Automated Sync + +```yaml +syncPolicy: + automated: + prune: true # Delete resources removed from Git + selfHeal: true # Reconcile manual changes + allowEmpty: false # Prevent empty sync +``` + +### Manual Sync + +```yaml +syncPolicy: + syncOptions: + - PrunePropagationPolicy=foreground + - CreateNamespace=true +``` + +### Sync Windows + +```yaml +syncWindows: + - kind: allow + schedule: "0 8 * * *" + duration: 1h + applications: + - my-app + - kind: deny + schedule: "0 22 * * *" + duration: 8h + applications: + - "*" +``` + +### Retry Policy + +```yaml +syncPolicy: + retry: + limit: 5 + backoff: + duration: 5s + factor: 2 + maxDuration: 3m +``` + +## Flux Sync Policies + +### Kustomization Sync + +```yaml +apiVersion: kustomize.toolkit.fluxcd.io/v1 +kind: Kustomization +metadata: + name: my-app +spec: + interval: 5m + prune: true + wait: true + timeout: 5m + retryInterval: 1m + force: false +``` + +### Source Sync Interval + +```yaml +apiVersion: source.toolkit.fluxcd.io/v1 +kind: GitRepository +metadata: + name: my-app +spec: + interval: 1m + timeout: 60s +``` + +## Health Assessment + +### Custom Health Checks + +```yaml +# ArgoCD +apiVersion: v1 +kind: ConfigMap +metadata: + name: argocd-cm + namespace: argocd +data: + resource.customizations.health.MyCustomResource: | + hs = {} + if obj.status ~= nil then + if obj.status.conditions ~= nil then + for i, condition in ipairs(obj.status.conditions) do + if condition.type == "Ready" and condition.status == "False" then + hs.status = "Degraded" + hs.message = condition.message + return hs + end + if condition.type == "Ready" and condition.status == "True" then + hs.status = "Healthy" + hs.message = condition.message + return hs + end + end + end + end + hs.status = "Progressing" + hs.message = "Waiting for status" + return hs +``` + +## Sync Options + +### Common Sync Options + +- `PrunePropagationPolicy=foreground` - Wait for pruned resources to be deleted +- `CreateNamespace=true` - Auto-create namespace +- `Validate=false` - Skip kubectl validation +- `PruneLast=true` - Prune resources after sync +- `RespectIgnoreDifferences=true` - Honor ignore differences +- `ApplyOutOfSyncOnly=true` - Only apply out-of-sync resources + +## Best Practices + +1. Use automated sync for non-production +2. Require manual approval for production +3. Configure sync windows for maintenance +4. Implement health checks for custom resources +5. Use selective sync for large applications +6. Configure appropriate retry policies +7. Monitor sync failures with alerts +8. Use prune with caution in production +9. Test sync policies in staging +10. Document sync behavior for teams diff --git a/.cursor/skills/langchain-architecture/SKILL.md b/.cursor/skills/langchain-architecture/SKILL.md new file mode 100644 index 0000000..c509e40 --- /dev/null +++ b/.cursor/skills/langchain-architecture/SKILL.md @@ -0,0 +1,666 @@ +--- +name: langchain-architecture +description: Design LLM applications using LangChain 1.x and LangGraph for agents, memory, and tool integration. Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows. 
+--- + +# LangChain & LangGraph Architecture + +Master modern LangChain 1.x and LangGraph for building sophisticated LLM applications with agents, state management, memory, and tool integration. + +## When to Use This Skill + +- Building autonomous AI agents with tool access +- Implementing complex multi-step LLM workflows +- Managing conversation memory and state +- Integrating LLMs with external data sources and APIs +- Creating modular, reusable LLM application components +- Implementing document processing pipelines +- Building production-grade LLM applications + +## Package Structure (LangChain 1.x) + +``` +langchain (1.2.x) # High-level orchestration +langchain-core (1.2.x) # Core abstractions (messages, prompts, tools) +langchain-community # Third-party integrations +langgraph # Agent orchestration and state management +langchain-openai # OpenAI integrations +langchain-anthropic # Anthropic/Claude integrations +langchain-voyageai # Voyage AI embeddings +langchain-pinecone # Pinecone vector store +``` + +## Core Concepts + +### 1. LangGraph Agents + +LangGraph is the standard for building agents in 2026. It provides: + +**Key Features:** + +- **StateGraph**: Explicit state management with typed state +- **Durable Execution**: Agents persist through failures +- **Human-in-the-Loop**: Inspect and modify state at any point +- **Memory**: Short-term and long-term memory across sessions +- **Checkpointing**: Save and resume agent state + +**Agent Patterns:** + +- **ReAct**: Reasoning + Acting with `create_react_agent` +- **Plan-and-Execute**: Separate planning and execution nodes +- **Multi-Agent**: Supervisor routing between specialized agents +- **Tool-Calling**: Structured tool invocation with Pydantic schemas + +### 2. State Management + +LangGraph uses TypedDict for explicit state: + +```python +from typing import Annotated, TypedDict +from langgraph.graph import MessagesState + +# Simple message-based state +class AgentState(MessagesState): + """Extends MessagesState with custom fields.""" + context: Annotated[list, "retrieved documents"] + +# Custom state for complex agents +class CustomState(TypedDict): + messages: Annotated[list, "conversation history"] + context: Annotated[dict, "retrieved context"] + current_step: str + results: list +``` + +### 3. Memory Systems + +Modern memory implementations: + +- **ConversationBufferMemory**: Stores all messages (short conversations) +- **ConversationSummaryMemory**: Summarizes older messages (long conversations) +- **ConversationTokenBufferMemory**: Token-based windowing +- **VectorStoreRetrieverMemory**: Semantic similarity retrieval +- **LangGraph Checkpointers**: Persistent state across sessions + +### 4. Document Processing + +Loading, transforming, and storing documents: + +**Components:** + +- **Document Loaders**: Load from various sources +- **Text Splitters**: Chunk documents intelligently +- **Vector Stores**: Store and retrieve embeddings +- **Retrievers**: Fetch relevant documents + +### 5. 
Callbacks & Tracing + +LangSmith is the standard for observability: + +- Request/response logging +- Token usage tracking +- Latency monitoring +- Error tracking +- Trace visualization + +## Quick Start + +### Modern ReAct Agent with LangGraph + +```python +from langgraph.prebuilt import create_react_agent +from langgraph.checkpoint.memory import MemorySaver +from langchain_anthropic import ChatAnthropic +from langchain_core.tools import tool +import ast +import operator + +# Initialize LLM (Claude Sonnet 4.5 recommended) +llm = ChatAnthropic(model="claude-sonnet-4-5", temperature=0) + +# Define tools with Pydantic schemas +@tool +def search_database(query: str) -> str: + """Search internal database for information.""" + # Your database search logic + return f"Results for: {query}" + +@tool +def calculate(expression: str) -> str: + """Safely evaluate a mathematical expression. + + Supports: +, -, *, /, **, %, parentheses + Example: '(2 + 3) * 4' returns '20' + """ + # Safe math evaluation using ast + allowed_operators = { + ast.Add: operator.add, + ast.Sub: operator.sub, + ast.Mult: operator.mul, + ast.Div: operator.truediv, + ast.Pow: operator.pow, + ast.Mod: operator.mod, + ast.USub: operator.neg, + } + + def _eval(node): + if isinstance(node, ast.Constant): + return node.value + elif isinstance(node, ast.BinOp): + left = _eval(node.left) + right = _eval(node.right) + return allowed_operators[type(node.op)](left, right) + elif isinstance(node, ast.UnaryOp): + operand = _eval(node.operand) + return allowed_operators[type(node.op)](operand) + else: + raise ValueError(f"Unsupported operation: {type(node)}") + + try: + tree = ast.parse(expression, mode='eval') + return str(_eval(tree.body)) + except Exception as e: + return f"Error: {e}" + +tools = [search_database, calculate] + +# Create checkpointer for memory persistence +checkpointer = MemorySaver() + +# Create ReAct agent +agent = create_react_agent( + llm, + tools, + checkpointer=checkpointer +) + +# Run agent with thread ID for memory +config = {"configurable": {"thread_id": "user-123"}} +result = await agent.ainvoke( + {"messages": [("user", "Search for Python tutorials and calculate 25 * 4")]}, + config=config +) +``` + +## Architecture Patterns + +### Pattern 1: RAG with LangGraph + +```python +from langgraph.graph import StateGraph, START, END +from langchain_anthropic import ChatAnthropic +from langchain_voyageai import VoyageAIEmbeddings +from langchain_pinecone import PineconeVectorStore +from langchain_core.documents import Document +from langchain_core.prompts import ChatPromptTemplate +from typing import TypedDict, Annotated + +class RAGState(TypedDict): + question: str + context: Annotated[list[Document], "retrieved documents"] + answer: str + +# Initialize components +llm = ChatAnthropic(model="claude-sonnet-4-5") +embeddings = VoyageAIEmbeddings(model="voyage-3-large") +vectorstore = PineconeVectorStore(index_name="docs", embedding=embeddings) +retriever = vectorstore.as_retriever(search_kwargs={"k": 4}) + +# Define nodes +async def retrieve(state: RAGState) -> RAGState: + """Retrieve relevant documents.""" + docs = await retriever.ainvoke(state["question"]) + return {"context": docs} + +async def generate(state: RAGState) -> RAGState: + """Generate answer from context.""" + prompt = ChatPromptTemplate.from_template( + """Answer based on the context below. If you cannot answer, say so. 
+ + Context: {context} + + Question: {question} + + Answer:""" + ) + context_text = "\n\n".join(doc.page_content for doc in state["context"]) + response = await llm.ainvoke( + prompt.format(context=context_text, question=state["question"]) + ) + return {"answer": response.content} + +# Build graph +builder = StateGraph(RAGState) +builder.add_node("retrieve", retrieve) +builder.add_node("generate", generate) +builder.add_edge(START, "retrieve") +builder.add_edge("retrieve", "generate") +builder.add_edge("generate", END) + +rag_chain = builder.compile() + +# Use the chain +result = await rag_chain.ainvoke({"question": "What is the main topic?"}) +``` + +### Pattern 2: Custom Agent with Structured Tools + +```python +from langchain_core.tools import StructuredTool +from pydantic import BaseModel, Field + +class SearchInput(BaseModel): + """Input for database search.""" + query: str = Field(description="Search query") + filters: dict = Field(default={}, description="Optional filters") + +class EmailInput(BaseModel): + """Input for sending email.""" + recipient: str = Field(description="Email recipient") + subject: str = Field(description="Email subject") + content: str = Field(description="Email body") + +async def search_database(query: str, filters: dict = {}) -> str: + """Search internal database for information.""" + # Your database search logic + return f"Results for '{query}' with filters {filters}" + +async def send_email(recipient: str, subject: str, content: str) -> str: + """Send an email to specified recipient.""" + # Email sending logic + return f"Email sent to {recipient}" + +tools = [ + StructuredTool.from_function( + coroutine=search_database, + name="search_database", + description="Search internal database", + args_schema=SearchInput + ), + StructuredTool.from_function( + coroutine=send_email, + name="send_email", + description="Send an email", + args_schema=EmailInput + ) +] + +agent = create_react_agent(llm, tools) +``` + +### Pattern 3: Multi-Step Workflow with StateGraph + +```python +from langgraph.graph import StateGraph, START, END +from typing import TypedDict, Literal + +class WorkflowState(TypedDict): + text: str + entities: list + analysis: str + summary: str + current_step: str + +async def extract_entities(state: WorkflowState) -> WorkflowState: + """Extract key entities from text.""" + prompt = f"Extract key entities from: {state['text']}\n\nReturn as JSON list." + response = await llm.ainvoke(prompt) + return {"entities": response.content, "current_step": "analyze"} + +async def analyze_entities(state: WorkflowState) -> WorkflowState: + """Analyze extracted entities.""" + prompt = f"Analyze these entities: {state['entities']}\n\nProvide insights." 
+ response = await llm.ainvoke(prompt) + return {"analysis": response.content, "current_step": "summarize"} + +async def generate_summary(state: WorkflowState) -> WorkflowState: + """Generate final summary.""" + prompt = f"""Summarize: + Entities: {state['entities']} + Analysis: {state['analysis']} + + Provide a concise summary.""" + response = await llm.ainvoke(prompt) + return {"summary": response.content, "current_step": "complete"} + +def route_step(state: WorkflowState) -> Literal["analyze", "summarize", "end"]: + """Route to next step based on current state.""" + step = state.get("current_step", "extract") + if step == "analyze": + return "analyze" + elif step == "summarize": + return "summarize" + return "end" + +# Build workflow +builder = StateGraph(WorkflowState) +builder.add_node("extract", extract_entities) +builder.add_node("analyze", analyze_entities) +builder.add_node("summarize", generate_summary) + +builder.add_edge(START, "extract") +builder.add_conditional_edges("extract", route_step, { + "analyze": "analyze", + "summarize": "summarize", + "end": END +}) +builder.add_conditional_edges("analyze", route_step, { + "summarize": "summarize", + "end": END +}) +builder.add_edge("summarize", END) + +workflow = builder.compile() +``` + +### Pattern 4: Multi-Agent Orchestration + +```python +from langgraph.graph import StateGraph, START, END +from langgraph.prebuilt import create_react_agent +from langchain_core.messages import HumanMessage +from typing import Literal + +class MultiAgentState(TypedDict): + messages: list + next_agent: str + +# Create specialized agents +researcher = create_react_agent(llm, research_tools) +writer = create_react_agent(llm, writing_tools) +reviewer = create_react_agent(llm, review_tools) + +async def supervisor(state: MultiAgentState) -> MultiAgentState: + """Route to appropriate agent based on task.""" + prompt = f"""Based on the conversation, which agent should handle this? 
+ + Options: + - researcher: For finding information + - writer: For creating content + - reviewer: For reviewing and editing + - FINISH: Task is complete + + Messages: {state['messages']} + + Respond with just the agent name.""" + + response = await llm.ainvoke(prompt) + return {"next_agent": response.content.strip().lower()} + +def route_to_agent(state: MultiAgentState) -> Literal["researcher", "writer", "reviewer", "end"]: + """Route based on supervisor decision.""" + next_agent = state.get("next_agent", "").lower() + if next_agent == "finish": + return "end" + return next_agent if next_agent in ["researcher", "writer", "reviewer"] else "end" + +# Build multi-agent graph +builder = StateGraph(MultiAgentState) +builder.add_node("supervisor", supervisor) +builder.add_node("researcher", researcher) +builder.add_node("writer", writer) +builder.add_node("reviewer", reviewer) + +builder.add_edge(START, "supervisor") +builder.add_conditional_edges("supervisor", route_to_agent, { + "researcher": "researcher", + "writer": "writer", + "reviewer": "reviewer", + "end": END +}) + +# Each agent returns to supervisor +for agent in ["researcher", "writer", "reviewer"]: + builder.add_edge(agent, "supervisor") + +multi_agent = builder.compile() +``` + +## Memory Management + +### Token-Based Memory with LangGraph + +```python +from langgraph.checkpoint.memory import MemorySaver +from langgraph.prebuilt import create_react_agent + +# In-memory checkpointer (development) +checkpointer = MemorySaver() + +# Create agent with persistent memory +agent = create_react_agent(llm, tools, checkpointer=checkpointer) + +# Each thread_id maintains separate conversation +config = {"configurable": {"thread_id": "session-abc123"}} + +# Messages persist across invocations with same thread_id +result1 = await agent.ainvoke({"messages": [("user", "My name is Alice")]}, config) +result2 = await agent.ainvoke({"messages": [("user", "What's my name?")]}, config) +# Agent remembers: "Your name is Alice" +``` + +### Production Memory with PostgreSQL + +```python +from langgraph.checkpoint.postgres import PostgresSaver + +# Production checkpointer +checkpointer = PostgresSaver.from_conn_string( + "postgresql://user:pass@localhost/langgraph" +) + +agent = create_react_agent(llm, tools, checkpointer=checkpointer) +``` + +### Vector Store Memory for Long-Term Context + +```python +from langchain_community.vectorstores import Chroma +from langchain_voyageai import VoyageAIEmbeddings + +embeddings = VoyageAIEmbeddings(model="voyage-3-large") +memory_store = Chroma( + collection_name="conversation_memory", + embedding_function=embeddings, + persist_directory="./memory_db" +) + +async def retrieve_relevant_memory(query: str, k: int = 5) -> list: + """Retrieve relevant past conversations.""" + docs = await memory_store.asimilarity_search(query, k=k) + return [doc.page_content for doc in docs] + +async def store_memory(content: str, metadata: dict = {}): + """Store conversation in long-term memory.""" + await memory_store.aadd_texts([content], metadatas=[metadata]) +``` + +## Callback System & LangSmith + +### LangSmith Tracing + +```python +import os +from langchain_anthropic import ChatAnthropic + +# Enable LangSmith tracing +os.environ["LANGCHAIN_TRACING_V2"] = "true" +os.environ["LANGCHAIN_API_KEY"] = "your-api-key" +os.environ["LANGCHAIN_PROJECT"] = "my-project" + +# All LangChain/LangGraph operations are automatically traced +llm = ChatAnthropic(model="claude-sonnet-4-5") +``` + +### Custom Callback Handler + +```python +from 
langchain_core.callbacks import BaseCallbackHandler +from typing import Any, Dict, List + +class CustomCallbackHandler(BaseCallbackHandler): + def on_llm_start( + self, serialized: Dict[str, Any], prompts: List[str], **kwargs + ) -> None: + print(f"LLM started with {len(prompts)} prompts") + + def on_llm_end(self, response, **kwargs) -> None: + print(f"LLM completed: {len(response.generations)} generations") + + def on_llm_error(self, error: Exception, **kwargs) -> None: + print(f"LLM error: {error}") + + def on_tool_start( + self, serialized: Dict[str, Any], input_str: str, **kwargs + ) -> None: + print(f"Tool started: {serialized.get('name')}") + + def on_tool_end(self, output: str, **kwargs) -> None: + print(f"Tool completed: {output[:100]}...") + +# Use callbacks +result = await agent.ainvoke( + {"messages": [("user", "query")]}, + config={"callbacks": [CustomCallbackHandler()]} +) +``` + +## Streaming Responses + +```python +from langchain_anthropic import ChatAnthropic + +llm = ChatAnthropic(model="claude-sonnet-4-5", streaming=True) + +# Stream tokens +async for chunk in llm.astream("Tell me a story"): + print(chunk.content, end="", flush=True) + +# Stream agent events +async for event in agent.astream_events( + {"messages": [("user", "Search and summarize")]}, + version="v2" +): + if event["event"] == "on_chat_model_stream": + print(event["data"]["chunk"].content, end="") + elif event["event"] == "on_tool_start": + print(f"\n[Using tool: {event['name']}]") +``` + +## Testing Strategies + +```python +import pytest +from unittest.mock import AsyncMock, patch + +@pytest.mark.asyncio +async def test_agent_tool_selection(): + """Test agent selects correct tool.""" + with patch.object(llm, 'ainvoke') as mock_llm: + mock_llm.return_value = AsyncMock(content="Using search_database") + + result = await agent.ainvoke({ + "messages": [("user", "search for documents")] + }) + + # Verify tool was called + assert "search_database" in str(result) + +@pytest.mark.asyncio +async def test_memory_persistence(): + """Test memory persists across invocations.""" + config = {"configurable": {"thread_id": "test-thread"}} + + # First message + await agent.ainvoke( + {"messages": [("user", "Remember: the code is 12345")]}, + config + ) + + # Second message should remember + result = await agent.ainvoke( + {"messages": [("user", "What was the code?")]}, + config + ) + + assert "12345" in result["messages"][-1].content +``` + +## Performance Optimization + +### 1. Caching with Redis + +```python +from langchain_community.cache import RedisCache +from langchain_core.globals import set_llm_cache +import redis + +redis_client = redis.Redis.from_url("redis://localhost:6379") +set_llm_cache(RedisCache(redis_client)) +``` + +### 2. Async Batch Processing + +```python +import asyncio +from langchain_core.documents import Document + +async def process_documents(documents: list[Document]) -> list: + """Process documents in parallel.""" + tasks = [process_single(doc) for doc in documents] + return await asyncio.gather(*tasks) + +async def process_single(doc: Document) -> dict: + """Process a single document.""" + chunks = text_splitter.split_documents([doc]) + embeddings = await embeddings_model.aembed_documents( + [c.page_content for c in chunks] + ) + return {"doc_id": doc.metadata.get("id"), "embeddings": embeddings} +``` + +### 3. 
Connection Pooling + +```python +from langchain_pinecone import PineconeVectorStore +from pinecone import Pinecone + +# Reuse Pinecone client +pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"]) +index = pc.Index("my-index") + +# Create vector store with existing index +vectorstore = PineconeVectorStore(index=index, embedding=embeddings) +``` + +## Resources + +- [LangChain Documentation](https://python.langchain.com/docs/) +- [LangGraph Documentation](https://langchain-ai.github.io/langgraph/) +- [LangSmith Platform](https://smith.langchain.com/) +- [LangChain GitHub](https://github.com/langchain-ai/langchain) +- [LangGraph GitHub](https://github.com/langchain-ai/langgraph) + +## Common Pitfalls + +1. **Using Deprecated APIs**: Use LangGraph for agents, not `initialize_agent` +2. **Memory Overflow**: Use checkpointers with TTL for long-running agents +3. **Poor Tool Descriptions**: Clear descriptions help LLM select correct tools +4. **Context Window Exceeded**: Use summarization or sliding window memory +5. **No Error Handling**: Wrap tool functions with try/except +6. **Blocking Operations**: Use async methods (`ainvoke`, `astream`) +7. **Missing Observability**: Always enable LangSmith tracing in production + +## Production Checklist + +- [ ] Use LangGraph StateGraph for agent orchestration +- [ ] Implement async patterns throughout (`ainvoke`, `astream`) +- [ ] Add production checkpointer (PostgreSQL, Redis) +- [ ] Enable LangSmith tracing +- [ ] Implement structured tools with Pydantic schemas +- [ ] Add timeout limits for agent execution +- [ ] Implement rate limiting +- [ ] Add comprehensive error handling +- [ ] Set up health checks +- [ ] Version control prompts and configurations +- [ ] Write integration tests for agent workflows diff --git a/.cursor/skills/langgraph-docs/SKILL.md b/.cursor/skills/langgraph-docs/SKILL.md new file mode 100644 index 0000000..93937fe --- /dev/null +++ b/.cursor/skills/langgraph-docs/SKILL.md @@ -0,0 +1,35 @@ +--- +name: langgraph-docs +description: Use this skill for requests related to LangGraph in order to fetch relevant documentation to provide accurate, up-to-date guidance. +--- + +# langgraph-docs + +## Overview + +This skill explains how to access LangGraph Python documentation to help answer questions and guide implementation. + +## Instructions + +### 1. Fetch the Documentation Index + +Use the fetch_url tool to read the following URL: +https://docs.langchain.com/llms.txt + +This provides a structured list of all available documentation with descriptions. + +### 2. Select Relevant Documentation + +Based on the question, identify 2-4 most relevant documentation URLs from the index. Prioritize: +- Specific how-to guides for implementation questions +- Core concept pages for understanding questions +- Tutorials for end-to-end examples +- Reference docs for API details + +### 3. Fetch Selected Documentation + +Use the fetch_url tool to read the selected documentation URLs. + +### 4. Provide Accurate Guidance + +After reading the documentation, complete the users request. diff --git a/.cursor/skills/langgraph/SKILL.md b/.cursor/skills/langgraph/SKILL.md new file mode 100644 index 0000000..de595e2 --- /dev/null +++ b/.cursor/skills/langgraph/SKILL.md @@ -0,0 +1,287 @@ +--- +name: langgraph +description: "Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. 
Covers graph construction, state management, cycles and branches, persistence with checkpointers, human-in-the-loop patterns, and the ReAct agent pattern. Used in production at LinkedIn, Uber, and 400+ companies. This is LangChain's recommended approach for building agents. Use when: langgraph, langchain agent, stateful agent, agent graph, react agent." +source: vibeship-spawner-skills (Apache 2.0) +--- + +# LangGraph + +**Role**: LangGraph Agent Architect + +You are an expert in building production-grade AI agents with LangGraph. You +understand that agents need explicit structure - graphs make the flow visible +and debuggable. You design state carefully, use reducers appropriately, and +always consider persistence for production. You know when cycles are needed +and how to prevent infinite loops. + +## Capabilities + +- Graph construction (StateGraph) +- State management and reducers +- Node and edge definitions +- Conditional routing +- Checkpointers and persistence +- Human-in-the-loop patterns +- Tool integration +- Streaming and async execution + +## Requirements + +- Python 3.9+ +- langgraph package +- LLM API access (OpenAI, Anthropic, etc.) +- Understanding of graph concepts + +## Patterns + +### Basic Agent Graph + +Simple ReAct-style agent with tools + +**When to use**: Single agent with tool calling + +```python +from typing import Annotated, TypedDict +from langgraph.graph import StateGraph, START, END +from langgraph.graph.message import add_messages +from langgraph.prebuilt import ToolNode +from langchain_openai import ChatOpenAI +from langchain_core.tools import tool + +# 1. Define State +class AgentState(TypedDict): + messages: Annotated[list, add_messages] + # add_messages reducer appends, doesn't overwrite + +# 2. Define Tools +@tool +def search(query: str) -> str: + """Search the web for information.""" + # Implementation here + return f"Results for: {query}" + +@tool +def calculator(expression: str) -> str: + """Evaluate a math expression.""" + return str(eval(expression)) + +tools = [search, calculator] + +# 3. Create LLM with tools +llm = ChatOpenAI(model="gpt-4o").bind_tools(tools) + +# 4. Define Nodes +def agent(state: AgentState) -> dict: + """The agent node - calls LLM.""" + response = llm.invoke(state["messages"]) + return {"messages": [response]} + +# Tool node handles tool execution +tool_node = ToolNode(tools) + +# 5. Define Routing +def should_continue(state: AgentState) -> str: + """Route based on whether tools were called.""" + last_message = state["messages"][-1] + if last_message.tool_calls: + return "tools" + return END + +# 6. Build Graph +graph = StateGraph(AgentState) + +# Add nodes +graph.add_node("agent", agent) +graph.add_node("tools", tool_node) + +# Add edges +graph.add_edge(START, "agent") +graph.add_conditional_edges("agent", should_continue, ["tools", END]) +graph.add_edge("tools", "agent") # Loop back + +# Compile +app = graph.compile() + +# 7. 
Run +result = app.invoke({ + "messages": [("user", "What is 25 * 4?")] +}) +``` + +### State with Reducers + +Complex state management with custom reducers + +**When to use**: Multiple agents updating shared state + +```python +from typing import Annotated, TypedDict +from operator import add +from langgraph.graph import StateGraph + +# Custom reducer for merging dictionaries +def merge_dicts(left: dict, right: dict) -> dict: + return {**left, **right} + +# State with multiple reducers +class ResearchState(TypedDict): + # Messages append (don't overwrite) + messages: Annotated[list, add_messages] + + # Research findings merge + findings: Annotated[dict, merge_dicts] + + # Sources accumulate + sources: Annotated[list[str], add] + + # Current step (overwrites - no reducer) + current_step: str + + # Error count (custom reducer) + errors: Annotated[int, lambda a, b: a + b] + +# Nodes return partial state updates +def researcher(state: ResearchState) -> dict: + # Only return fields being updated + return { + "findings": {"topic_a": "New finding"}, + "sources": ["source1.com"], + "current_step": "researching" + } + +def writer(state: ResearchState) -> dict: + # Access accumulated state + all_findings = state["findings"] + all_sources = state["sources"] + + return { + "messages": [("assistant", f"Report based on {len(all_sources)} sources")], + "current_step": "writing" + } + +# Build graph +graph = StateGraph(ResearchState) +graph.add_node("researcher", researcher) +graph.add_node("writer", writer) +# ... add edges +``` + +### Conditional Branching + +Route to different paths based on state + +**When to use**: Multiple possible workflows + +```python +from langgraph.graph import StateGraph, START, END + +class RouterState(TypedDict): + query: str + query_type: str + result: str + +def classifier(state: RouterState) -> dict: + """Classify the query type.""" + query = state["query"].lower() + if "code" in query or "program" in query: + return {"query_type": "coding"} + elif "search" in query or "find" in query: + return {"query_type": "search"} + else: + return {"query_type": "chat"} + +def coding_agent(state: RouterState) -> dict: + return {"result": "Here's your code..."} + +def search_agent(state: RouterState) -> dict: + return {"result": "Search results..."} + +def chat_agent(state: RouterState) -> dict: + return {"result": "Let me help..."} + +# Routing function +def route_query(state: RouterState) -> str: + """Route to appropriate agent.""" + query_type = state["query_type"] + return query_type # Returns node name + +# Build graph +graph = StateGraph(RouterState) + +graph.add_node("classifier", classifier) +graph.add_node("coding", coding_agent) +graph.add_node("search", search_agent) +graph.add_node("chat", chat_agent) + +graph.add_edge(START, "classifier") + +# Conditional edges from classifier +graph.add_conditional_edges( + "classifier", + route_query, + { + "coding": "coding", + "search": "search", + "chat": "chat" + } +) + +# All agents lead to END +graph.add_edge("coding", END) +graph.add_edge("search", END) +graph.add_edge("chat", END) + +app = graph.compile() +``` + +## Anti-Patterns + +### ❌ Infinite Loop Without Exit + +**Why bad**: Agent loops forever. +Burns tokens and costs. +Eventually errors out. 
+ +**Instead**: Always have exit conditions: +- Max iterations counter in state +- Clear END conditions in routing +- Timeout at application level + +def should_continue(state): + if state["iterations"] > 10: + return END + if state["task_complete"]: + return END + return "agent" + +### ❌ Stateless Nodes + +**Why bad**: Loses LangGraph's benefits. +State not persisted. +Can't resume conversations. + +**Instead**: Always use state for data flow. +Return state updates from nodes. +Use reducers for accumulation. +Let LangGraph manage state. + +### ❌ Giant Monolithic State + +**Why bad**: Hard to reason about. +Unnecessary data in context. +Serialization overhead. + +**Instead**: Use input/output schemas for clean interfaces. +Private state for internal data. +Clear separation of concerns. + +## Limitations + +- Python-only (TypeScript in early stages) +- Learning curve for graph concepts +- State management complexity +- Debugging can be challenging + +## Related Skills + +Works well with: `crewai`, `autonomous-agents`, `langfuse`, `structured-output` diff --git a/.cursor/skills/planning-with-files/SKILL.md b/.cursor/skills/planning-with-files/SKILL.md new file mode 100644 index 0000000..7f4024a --- /dev/null +++ b/.cursor/skills/planning-with-files/SKILL.md @@ -0,0 +1,248 @@ +--- +name: planning-with-files +version: "2.10.0" +description: Implements Manus-style file-based planning for complex tasks. Creates task_plan.md, findings.md, and progress.md. Use when starting complex multi-step tasks, research projects, or any task requiring >5 tool calls. Now with automatic session recovery after /clear. +user-invocable: true +allowed-tools: + - Read + - Write + - Edit + - Bash + - Glob + - Grep + - WebFetch + - WebSearch +hooks: + PreToolUse: + - matcher: "Write|Edit|Bash|Read|Glob|Grep" + hooks: + - type: command + command: "cat task_plan.md 2>/dev/null | head -30 || true" + PostToolUse: + - matcher: "Write|Edit" + hooks: + - type: command + command: "echo '[planning-with-files] File updated. If this completes a phase, update task_plan.md status.'" + Stop: + - hooks: + - type: command + command: | + SCRIPT_DIR="${CLAUDE_PLUGIN_ROOT:-$HOME/.claude/plugins/planning-with-files}/scripts" + + IS_WINDOWS=0 + if [ "${OS-}" = "Windows_NT" ]; then + IS_WINDOWS=1 + else + UNAME_S="$(uname -s 2>/dev/null || echo '')" + case "$UNAME_S" in + CYGWIN*|MINGW*|MSYS*) IS_WINDOWS=1 ;; + esac + fi + + if [ "$IS_WINDOWS" -eq 1 ]; then + if command -v pwsh >/dev/null 2>&1; then + pwsh -ExecutionPolicy Bypass -File "$SCRIPT_DIR/check-complete.ps1" 2>/dev/null || + powershell -ExecutionPolicy Bypass -File "$SCRIPT_DIR/check-complete.ps1" 2>/dev/null || + sh "$SCRIPT_DIR/check-complete.sh" + else + powershell -ExecutionPolicy Bypass -File "$SCRIPT_DIR/check-complete.ps1" 2>/dev/null || + sh "$SCRIPT_DIR/check-complete.sh" + fi + else + sh "$SCRIPT_DIR/check-complete.sh" + fi +--- + +# Planning with Files + +Work like Manus: Use persistent markdown files as your "working memory on disk." + +## FIRST: Check for Previous Session (v2.2.0) + +**Before starting work**, check for unsynced context from a previous session: + +```bash +# Linux/macOS +$(command -v python3 || command -v python) ${CLAUDE_PLUGIN_ROOT}/scripts/session-catchup.py "$(pwd)" +``` + +```powershell +# Windows PowerShell +& (Get-Command python -ErrorAction SilentlyContinue).Source "$env:USERPROFILE\.claude\skills\planning-with-files\scripts\session-catchup.py" (Get-Location) +``` + +If catchup report shows unsynced context: +1. 
Run `git diff --stat` to see actual code changes +2. Read current planning files +3. Update planning files based on catchup + git diff +4. Then proceed with task + +## Important: Where Files Go + +- **Templates** are in `${CLAUDE_PLUGIN_ROOT}/templates/` +- **Your planning files** go in **your project directory** + +| Location | What Goes There | +|----------|-----------------| +| Skill directory (`${CLAUDE_PLUGIN_ROOT}/`) | Templates, scripts, reference docs | +| Your project directory | `task_plan.md`, `findings.md`, `progress.md` | + +## Quick Start + +Before ANY complex task: + +1. **Create `task_plan.md`** — Use [templates/task_plan.md](templates/task_plan.md) as reference +2. **Create `findings.md`** — Use [templates/findings.md](templates/findings.md) as reference +3. **Create `progress.md`** — Use [templates/progress.md](templates/progress.md) as reference +4. **Re-read plan before decisions** — Refreshes goals in attention window +5. **Update after each phase** — Mark complete, log errors + +> **Note:** Planning files go in your project root, not the skill installation folder. + +## The Core Pattern + +``` +Context Window = RAM (volatile, limited) +Filesystem = Disk (persistent, unlimited) + +→ Anything important gets written to disk. +``` + +## File Purposes + +| File | Purpose | When to Update | +|------|---------|----------------| +| `task_plan.md` | Phases, progress, decisions | After each phase | +| `findings.md` | Research, discoveries | After ANY discovery | +| `progress.md` | Session log, test results | Throughout session | + +## Critical Rules + +### 1. Create Plan First +Never start a complex task without `task_plan.md`. Non-negotiable. + +### 2. The 2-Action Rule +> "After every 2 view/browser/search operations, IMMEDIATELY save key findings to text files." + +This prevents visual/multimodal information from being lost. + +### 3. Read Before Decide +Before major decisions, read the plan file. This keeps goals in your attention window. + +### 4. Update After Act +After completing any phase: +- Mark phase status: `in_progress` → `complete` +- Log any errors encountered +- Note files created/modified + +### 5. Log ALL Errors +Every error goes in the plan file. This builds knowledge and prevents repetition. + +```markdown +## Errors Encountered +| Error | Attempt | Resolution | +|-------|---------|------------| +| FileNotFoundError | 1 | Created default config | +| API timeout | 2 | Added retry logic | +``` + +### 6. Never Repeat Failures +``` +if action_failed: + next_action != same_action +``` +Track what you tried. Mutate the approach. + +## The 3-Strike Error Protocol + +``` +ATTEMPT 1: Diagnose & Fix + → Read error carefully + → Identify root cause + → Apply targeted fix + +ATTEMPT 2: Alternative Approach + → Same error? Try different method + → Different tool? Different library? 
+ → NEVER repeat exact same failing action + +ATTEMPT 3: Broader Rethink + → Question assumptions + → Search for solutions + → Consider updating the plan + +AFTER 3 FAILURES: Escalate to User + → Explain what you tried + → Share the specific error + → Ask for guidance +``` + +## Read vs Write Decision Matrix + +| Situation | Action | Reason | +|-----------|--------|--------| +| Just wrote a file | DON'T read | Content still in context | +| Viewed image/PDF | Write findings NOW | Multimodal → text before lost | +| Browser returned data | Write to file | Screenshots don't persist | +| Starting new phase | Read plan/findings | Re-orient if context stale | +| Error occurred | Read relevant file | Need current state to fix | +| Resuming after gap | Read all planning files | Recover state | + +## The 5-Question Reboot Test + +If you can answer these, your context management is solid: + +| Question | Answer Source | +|----------|---------------| +| Where am I? | Current phase in task_plan.md | +| Where am I going? | Remaining phases | +| What's the goal? | Goal statement in plan | +| What have I learned? | findings.md | +| What have I done? | progress.md | + +## When to Use This Pattern + +**Use for:** +- Multi-step tasks (3+ steps) +- Research tasks +- Building/creating projects +- Tasks spanning many tool calls +- Anything requiring organization + +**Skip for:** +- Simple questions +- Single-file edits +- Quick lookups + +## Templates + +Copy these templates to start: + +- [templates/task_plan.md](templates/task_plan.md) — Phase tracking +- [templates/findings.md](templates/findings.md) — Research storage +- [templates/progress.md](templates/progress.md) — Session logging + +## Scripts + +Helper scripts for automation: + +- `scripts/init-session.sh` — Initialize all planning files +- `scripts/check-complete.sh` — Verify all phases complete +- `scripts/session-catchup.py` — Recover context from previous session (v2.2.0) + +## Advanced Topics + +- **Manus Principles:** See [reference.md](reference.md) +- **Real Examples:** See [examples.md](examples.md) + +## Anti-Patterns + +| Don't | Do Instead | +|-------|------------| +| Use TodoWrite for persistence | Create task_plan.md file | +| State goals once and forget | Re-read plan before decisions | +| Hide errors and retry silently | Log errors to plan file | +| Stuff everything in context | Store large content in files | +| Start executing immediately | Create plan file FIRST | +| Repeat failed actions | Track attempts, mutate approach | +| Create files in skill directory | Create files in your project | diff --git a/.cursor/skills/planning-with-files/examples.md b/.cursor/skills/planning-with-files/examples.md new file mode 100644 index 0000000..51b5f3a --- /dev/null +++ b/.cursor/skills/planning-with-files/examples.md @@ -0,0 +1,202 @@ +# Examples: Planning with Files in Action + +## Example 1: Research Task + +**User Request:** "Research the benefits of morning exercise and write a summary" + +### Loop 1: Create Plan +```bash +Write task_plan.md +``` + +```markdown +# Task Plan: Morning Exercise Benefits Research + +## Goal +Create a research summary on the benefits of morning exercise. + +## Phases +- [ ] Phase 1: Create this plan ✓ +- [ ] Phase 2: Search and gather sources +- [ ] Phase 3: Synthesize findings +- [ ] Phase 4: Deliver summary + +## Key Questions +1. What are the physical health benefits? +2. What are the mental health benefits? +3. What scientific studies support this? 
+ +## Status +**Currently in Phase 1** - Creating plan +``` + +### Loop 2: Research +```bash +Read task_plan.md # Refresh goals +WebSearch "morning exercise benefits" +Write notes.md # Store findings +Edit task_plan.md # Mark Phase 2 complete +``` + +### Loop 3: Synthesize +```bash +Read task_plan.md # Refresh goals +Read notes.md # Get findings +Write morning_exercise_summary.md +Edit task_plan.md # Mark Phase 3 complete +``` + +### Loop 4: Deliver +```bash +Read task_plan.md # Verify complete +Deliver morning_exercise_summary.md +``` + +--- + +## Example 2: Bug Fix Task + +**User Request:** "Fix the login bug in the authentication module" + +### task_plan.md +```markdown +# Task Plan: Fix Login Bug + +## Goal +Identify and fix the bug preventing successful login. + +## Phases +- [x] Phase 1: Understand the bug report ✓ +- [x] Phase 2: Locate relevant code ✓ +- [ ] Phase 3: Identify root cause (CURRENT) +- [ ] Phase 4: Implement fix +- [ ] Phase 5: Test and verify + +## Key Questions +1. What error message appears? +2. Which file handles authentication? +3. What changed recently? + +## Decisions Made +- Auth handler is in src/auth/login.ts +- Error occurs in validateToken() function + +## Errors Encountered +- [Initial] TypeError: Cannot read property 'token' of undefined + → Root cause: user object not awaited properly + +## Status +**Currently in Phase 3** - Found root cause, preparing fix +``` + +--- + +## Example 3: Feature Development + +**User Request:** "Add a dark mode toggle to the settings page" + +### The 3-File Pattern in Action + +**task_plan.md:** +```markdown +# Task Plan: Dark Mode Toggle + +## Goal +Add functional dark mode toggle to settings. + +## Phases +- [x] Phase 1: Research existing theme system ✓ +- [x] Phase 2: Design implementation approach ✓ +- [ ] Phase 3: Implement toggle component (CURRENT) +- [ ] Phase 4: Add theme switching logic +- [ ] Phase 5: Test and polish + +## Decisions Made +- Using CSS custom properties for theme +- Storing preference in localStorage +- Toggle component in SettingsPage.tsx + +## Status +**Currently in Phase 3** - Building toggle component +``` + +**notes.md:** +```markdown +# Notes: Dark Mode Implementation + +## Existing Theme System +- Located in: src/styles/theme.ts +- Uses: CSS custom properties +- Current themes: light only + +## Files to Modify +1. src/styles/theme.ts - Add dark theme colors +2. src/components/SettingsPage.tsx - Add toggle +3. src/hooks/useTheme.ts - Create new hook +4. src/App.tsx - Wrap with ThemeProvider + +## Color Decisions +- Dark background: #1a1a2e +- Dark surface: #16213e +- Dark text: #eaeaea +``` + +**dark_mode_implementation.md:** (deliverable) +```markdown +# Dark Mode Implementation + +## Changes Made + +### 1. Added dark theme colors +File: src/styles/theme.ts +... + +### 2. Created useTheme hook +File: src/hooks/useTheme.ts +... +``` + +--- + +## Example 4: Error Recovery Pattern + +When something fails, DON'T hide it: + +### Before (Wrong) +``` +Action: Read config.json +Error: File not found +Action: Read config.json # Silent retry +Action: Read config.json # Another retry +``` + +### After (Correct) +``` +Action: Read config.json +Error: File not found + +# Update task_plan.md: +## Errors Encountered +- config.json not found → Will create default config + +Action: Write config.json (default config) +Action: Read config.json +Success! +``` + +--- + +## The Read-Before-Decide Pattern + +**Always read your plan before major decisions:** + +``` +[Many tool calls have happened...] 
+[Context is getting long...] +[Original goal might be forgotten...] + +→ Read task_plan.md # This brings goals back into attention! +→ Now make the decision # Goals are fresh in context +``` + +This is why Manus can handle ~50 tool calls without losing track. The plan file acts as a "goal refresh" mechanism. diff --git a/.cursor/skills/planning-with-files/reference.md b/.cursor/skills/planning-with-files/reference.md new file mode 100644 index 0000000..1380fbb --- /dev/null +++ b/.cursor/skills/planning-with-files/reference.md @@ -0,0 +1,218 @@ +# Reference: Manus Context Engineering Principles + +This skill is based on context engineering principles from Manus, the AI agent company acquired by Meta for $2 billion in December 2025. + +## The 6 Manus Principles + +### Principle 1: Design Around KV-Cache + +> "KV-cache hit rate is THE single most important metric for production AI agents." + +**Statistics:** +- ~100:1 input-to-output token ratio +- Cached tokens: $0.30/MTok vs Uncached: $3/MTok +- 10x cost difference! + +**Implementation:** +- Keep prompt prefixes STABLE (single-token change invalidates cache) +- NO timestamps in system prompts +- Make context APPEND-ONLY with deterministic serialization + +### Principle 2: Mask, Don't Remove + +Don't dynamically remove tools (breaks KV-cache). Use logit masking instead. + +**Best Practice:** Use consistent action prefixes (e.g., `browser_`, `shell_`, `file_`) for easier masking. + +### Principle 3: Filesystem as External Memory + +> "Markdown is my 'working memory' on disk." + +**The Formula:** +``` +Context Window = RAM (volatile, limited) +Filesystem = Disk (persistent, unlimited) +``` + +**Compression Must Be Restorable:** +- Keep URLs even if web content is dropped +- Keep file paths when dropping document contents +- Never lose the pointer to full data + +### Principle 4: Manipulate Attention Through Recitation + +> "Creates and updates todo.md throughout tasks to push global plan into model's recent attention span." + +**Problem:** After ~50 tool calls, models forget original goals ("lost in the middle" effect). + +**Solution:** Re-read `task_plan.md` before each decision. Goals appear in the attention window. + +``` +Start of context: [Original goal - far away, forgotten] +...many tool calls... +End of context: [Recently read task_plan.md - gets ATTENTION!] +``` + +### Principle 5: Keep the Wrong Stuff In + +> "Leave the wrong turns in the context." + +**Why:** +- Failed actions with stack traces let model implicitly update beliefs +- Reduces mistake repetition +- Error recovery is "one of the clearest signals of TRUE agentic behavior" + +### Principle 6: Don't Get Few-Shotted + +> "Uniformity breeds fragility." + +**Problem:** Repetitive action-observation pairs cause drift and hallucination. + +**Solution:** Introduce controlled variation: +- Vary phrasings slightly +- Don't copy-paste patterns blindly +- Recalibrate on repetitive tasks + +--- + +## The 3 Context Engineering Strategies + +Based on Lance Martin's analysis of Manus architecture. 
+ +### Strategy 1: Context Reduction + +**Compaction:** +``` +Tool calls have TWO representations: +├── FULL: Raw tool content (stored in filesystem) +└── COMPACT: Reference/file path only + +RULES: +- Apply compaction to STALE (older) tool results +- Keep RECENT results FULL (to guide next decision) +``` + +**Summarization:** +- Applied when compaction reaches diminishing returns +- Generated using full tool results +- Creates standardized summary objects + +### Strategy 2: Context Isolation (Multi-Agent) + +**Architecture:** +``` +┌─────────────────────────────────┐ +│ PLANNER AGENT │ +│ └─ Assigns tasks to sub-agents │ +├─────────────────────────────────┤ +│ KNOWLEDGE MANAGER │ +│ └─ Reviews conversations │ +│ └─ Determines filesystem store │ +├─────────────────────────────────┤ +│ EXECUTOR SUB-AGENTS │ +│ └─ Perform assigned tasks │ +│ └─ Have own context windows │ +└─────────────────────────────────┘ +``` + +**Key Insight:** Manus originally used `todo.md` for task planning but found ~33% of actions were spent updating it. Shifted to dedicated planner agent calling executor sub-agents. + +### Strategy 3: Context Offloading + +**Tool Design:** +- Use <20 atomic functions total +- Store full results in filesystem, not context +- Use `glob` and `grep` for searching +- Progressive disclosure: load information only as needed + +--- + +## The Agent Loop + +Manus operates in a continuous 7-step loop: + +``` +┌─────────────────────────────────────────┐ +│ 1. ANALYZE CONTEXT │ +│ - Understand user intent │ +│ - Assess current state │ +│ - Review recent observations │ +├─────────────────────────────────────────┤ +│ 2. THINK │ +│ - Should I update the plan? │ +│ - What's the next logical action? │ +│ - Are there blockers? │ +├─────────────────────────────────────────┤ +│ 3. SELECT TOOL │ +│ - Choose ONE tool │ +│ - Ensure parameters available │ +├─────────────────────────────────────────┤ +│ 4. EXECUTE ACTION │ +│ - Tool runs in sandbox │ +├─────────────────────────────────────────┤ +│ 5. RECEIVE OBSERVATION │ +│ - Result appended to context │ +├─────────────────────────────────────────┤ +│ 6. ITERATE │ +│ - Return to step 1 │ +│ - Continue until complete │ +├─────────────────────────────────────────┤ +│ 7. DELIVER OUTCOME │ +│ - Send results to user │ +│ - Attach all relevant files │ +└─────────────────────────────────────────┘ +``` + +--- + +## File Types Manus Creates + +| File | Purpose | When Created | When Updated | +|------|---------|--------------|--------------| +| `task_plan.md` | Phase tracking, progress | Task start | After completing phases | +| `findings.md` | Discoveries, decisions | After ANY discovery | After viewing images/PDFs | +| `progress.md` | Session log, what's done | At breakpoints | Throughout session | +| Code files | Implementation | Before execution | After errors | + +--- + +## Critical Constraints + +- **Single-Action Execution:** ONE tool call per turn. No parallel execution. +- **Plan is Required:** Agent must ALWAYS know: goal, current phase, remaining phases +- **Files are Memory:** Context = volatile. Filesystem = persistent. 
+- **Never Repeat Failures:** If action failed, next action MUST be different +- **Communication is a Tool:** Message types: `info` (progress), `ask` (blocking), `result` (terminal) + +--- + +## Manus Statistics + +| Metric | Value | +|--------|-------| +| Average tool calls per task | ~50 | +| Input-to-output token ratio | 100:1 | +| Acquisition price | $2 billion | +| Time to $100M revenue | 8 months | +| Framework refactors since launch | 5 times | + +--- + +## Key Quotes + +> "Context window = RAM (volatile, limited). Filesystem = Disk (persistent, unlimited). Anything important gets written to disk." + +> "if action_failed: next_action != same_action. Track what you tried. Mutate the approach." + +> "Error recovery is one of the clearest signals of TRUE agentic behavior." + +> "KV-cache hit rate is the single most important metric for a production-stage AI agent." + +> "Leave the wrong turns in the context." + +--- + +## Source + +Based on Manus's official context engineering documentation: +https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus diff --git a/.cursor/skills/planning-with-files/scripts/check-complete.ps1 b/.cursor/skills/planning-with-files/scripts/check-complete.ps1 new file mode 100644 index 0000000..f3795e3 --- /dev/null +++ b/.cursor/skills/planning-with-files/scripts/check-complete.ps1 @@ -0,0 +1,44 @@ +# Check if all phases in task_plan.md are complete +# Always exits 0 — uses stdout for status reporting +# Used by Stop hook to report task completion status + +param( + [string]$PlanFile = "task_plan.md" +) + +if (-not (Test-Path $PlanFile)) { + Write-Host "[planning-with-files] No task_plan.md found — no active planning session." + exit 0 +} + +# Read file content +$content = Get-Content $PlanFile -Raw + +# Count total phases +$TOTAL = ([regex]::Matches($content, "### Phase")).Count + +# Check for **Status:** format first +$COMPLETE = ([regex]::Matches($content, "\*\*Status:\*\* complete")).Count +$IN_PROGRESS = ([regex]::Matches($content, "\*\*Status:\*\* in_progress")).Count +$PENDING = ([regex]::Matches($content, "\*\*Status:\*\* pending")).Count + +# Fallback: check for [complete] inline format if **Status:** not found +if ($COMPLETE -eq 0 -and $IN_PROGRESS -eq 0 -and $PENDING -eq 0) { + $COMPLETE = ([regex]::Matches($content, "\[complete\]")).Count + $IN_PROGRESS = ([regex]::Matches($content, "\[in_progress\]")).Count + $PENDING = ([regex]::Matches($content, "\[pending\]")).Count +} + +# Report status (always exit 0 — incomplete task is a normal state) +if ($COMPLETE -eq $TOTAL -and $TOTAL -gt 0) { + Write-Host "[planning-with-files] ALL PHASES COMPLETE ($COMPLETE/$TOTAL)" +} else { + Write-Host "[planning-with-files] Task in progress ($COMPLETE/$TOTAL phases complete)" + if ($IN_PROGRESS -gt 0) { + Write-Host "[planning-with-files] $IN_PROGRESS phase(s) still in progress." + } + if ($PENDING -gt 0) { + Write-Host "[planning-with-files] $PENDING phase(s) pending." + } +} +exit 0 diff --git a/.cursor/skills/planning-with-files/scripts/check-complete.sh b/.cursor/skills/planning-with-files/scripts/check-complete.sh new file mode 100644 index 0000000..dbfea3e --- /dev/null +++ b/.cursor/skills/planning-with-files/scripts/check-complete.sh @@ -0,0 +1,46 @@ +#!/bin/bash +# Check if all phases in task_plan.md are complete +# Always exits 0 — uses stdout for status reporting +# Used by Stop hook to report task completion status + +PLAN_FILE="${1:-task_plan.md}" + +if [ ! 
-f "$PLAN_FILE" ]; then + echo "[planning-with-files] No task_plan.md found — no active planning session." + exit 0 +fi + +# Count total phases +TOTAL=$(grep -c "### Phase" "$PLAN_FILE" || true) + +# Check for **Status:** format first +COMPLETE=$(grep -cF "**Status:** complete" "$PLAN_FILE" || true) +IN_PROGRESS=$(grep -cF "**Status:** in_progress" "$PLAN_FILE" || true) +PENDING=$(grep -cF "**Status:** pending" "$PLAN_FILE" || true) + +# Fallback: check for [complete] inline format if **Status:** not found +if [ "$COMPLETE" -eq 0 ] && [ "$IN_PROGRESS" -eq 0 ] && [ "$PENDING" -eq 0 ]; then + COMPLETE=$(grep -c "\[complete\]" "$PLAN_FILE" || true) + IN_PROGRESS=$(grep -c "\[in_progress\]" "$PLAN_FILE" || true) + PENDING=$(grep -c "\[pending\]" "$PLAN_FILE" || true) +fi + +# Default to 0 if empty +: "${TOTAL:=0}" +: "${COMPLETE:=0}" +: "${IN_PROGRESS:=0}" +: "${PENDING:=0}" + +# Report status (always exit 0 — incomplete task is a normal state) +if [ "$COMPLETE" -eq "$TOTAL" ] && [ "$TOTAL" -gt 0 ]; then + echo "[planning-with-files] ALL PHASES COMPLETE ($COMPLETE/$TOTAL)" +else + echo "[planning-with-files] Task in progress ($COMPLETE/$TOTAL phases complete)" + if [ "$IN_PROGRESS" -gt 0 ]; then + echo "[planning-with-files] $IN_PROGRESS phase(s) still in progress." + fi + if [ "$PENDING" -gt 0 ]; then + echo "[planning-with-files] $PENDING phase(s) pending." + fi +fi +exit 0 diff --git a/.cursor/skills/planning-with-files/scripts/init-session.ps1 b/.cursor/skills/planning-with-files/scripts/init-session.ps1 new file mode 100644 index 0000000..eeef149 --- /dev/null +++ b/.cursor/skills/planning-with-files/scripts/init-session.ps1 @@ -0,0 +1,120 @@ +# Initialize planning files for a new session +# Usage: .\init-session.ps1 [project-name] + +param( + [string]$ProjectName = "project" +) + +$DATE = Get-Date -Format "yyyy-MM-dd" + +Write-Host "Initializing planning files for: $ProjectName" + +# Create task_plan.md if it doesn't exist +if (-not (Test-Path "task_plan.md")) { + @" +# Task Plan: [Brief Description] + +## Goal +[One sentence describing the end state] + +## Current Phase +Phase 1 + +## Phases + +### Phase 1: Requirements & Discovery +- [ ] Understand user intent +- [ ] Identify constraints +- [ ] Document in findings.md +- **Status:** in_progress + +### Phase 2: Planning & Structure +- [ ] Define approach +- [ ] Create project structure +- **Status:** pending + +### Phase 3: Implementation +- [ ] Execute the plan +- [ ] Write to files before executing +- **Status:** pending + +### Phase 4: Testing & Verification +- [ ] Verify requirements met +- [ ] Document test results +- **Status:** pending + +### Phase 5: Delivery +- [ ] Review outputs +- [ ] Deliver to user +- **Status:** pending + +## Decisions Made +| Decision | Rationale | +|----------|-----------| + +## Errors Encountered +| Error | Resolution | +|-------|------------| +"@ | Out-File -FilePath "task_plan.md" -Encoding UTF8 + Write-Host "Created task_plan.md" +} else { + Write-Host "task_plan.md already exists, skipping" +} + +# Create findings.md if it doesn't exist +if (-not (Test-Path "findings.md")) { + @" +# Findings & Decisions + +## Requirements +- + +## Research Findings +- + +## Technical Decisions +| Decision | Rationale | +|----------|-----------| + +## Issues Encountered +| Issue | Resolution | +|-------|------------| + +## Resources +- +"@ | Out-File -FilePath "findings.md" -Encoding UTF8 + Write-Host "Created findings.md" +} else { + Write-Host "findings.md already exists, skipping" +} + +# Create progress.md if it 
doesn't exist +if (-not (Test-Path "progress.md")) { + @" +# Progress Log + +## Session: $DATE + +### Current Status +- **Phase:** 1 - Requirements & Discovery +- **Started:** $DATE + +### Actions Taken +- + +### Test Results +| Test | Expected | Actual | Status | +|------|----------|--------|--------| + +### Errors +| Error | Resolution | +|-------|------------| +"@ | Out-File -FilePath "progress.md" -Encoding UTF8 + Write-Host "Created progress.md" +} else { + Write-Host "progress.md already exists, skipping" +} + +Write-Host "" +Write-Host "Planning files initialized!" +Write-Host "Files: task_plan.md, findings.md, progress.md" diff --git a/.cursor/skills/planning-with-files/scripts/init-session.sh b/.cursor/skills/planning-with-files/scripts/init-session.sh new file mode 100644 index 0000000..1c60de8 --- /dev/null +++ b/.cursor/skills/planning-with-files/scripts/init-session.sh @@ -0,0 +1,120 @@ +#!/bin/bash +# Initialize planning files for a new session +# Usage: ./init-session.sh [project-name] + +set -e + +PROJECT_NAME="${1:-project}" +DATE=$(date +%Y-%m-%d) + +echo "Initializing planning files for: $PROJECT_NAME" + +# Create task_plan.md if it doesn't exist +if [ ! -f "task_plan.md" ]; then + cat > task_plan.md << 'EOF' +# Task Plan: [Brief Description] + +## Goal +[One sentence describing the end state] + +## Current Phase +Phase 1 + +## Phases + +### Phase 1: Requirements & Discovery +- [ ] Understand user intent +- [ ] Identify constraints +- [ ] Document in findings.md +- **Status:** in_progress + +### Phase 2: Planning & Structure +- [ ] Define approach +- [ ] Create project structure +- **Status:** pending + +### Phase 3: Implementation +- [ ] Execute the plan +- [ ] Write to files before executing +- **Status:** pending + +### Phase 4: Testing & Verification +- [ ] Verify requirements met +- [ ] Document test results +- **Status:** pending + +### Phase 5: Delivery +- [ ] Review outputs +- [ ] Deliver to user +- **Status:** pending + +## Decisions Made +| Decision | Rationale | +|----------|-----------| + +## Errors Encountered +| Error | Resolution | +|-------|------------| +EOF + echo "Created task_plan.md" +else + echo "task_plan.md already exists, skipping" +fi + +# Create findings.md if it doesn't exist +if [ ! -f "findings.md" ]; then + cat > findings.md << 'EOF' +# Findings & Decisions + +## Requirements +- + +## Research Findings +- + +## Technical Decisions +| Decision | Rationale | +|----------|-----------| + +## Issues Encountered +| Issue | Resolution | +|-------|------------| + +## Resources +- +EOF + echo "Created findings.md" +else + echo "findings.md already exists, skipping" +fi + +# Create progress.md if it doesn't exist +if [ ! -f "progress.md" ]; then + cat > progress.md << EOF +# Progress Log + +## Session: $DATE + +### Current Status +- **Phase:** 1 - Requirements & Discovery +- **Started:** $DATE + +### Actions Taken +- + +### Test Results +| Test | Expected | Actual | Status | +|------|----------|--------|--------| + +### Errors +| Error | Resolution | +|-------|------------| +EOF + echo "Created progress.md" +else + echo "progress.md already exists, skipping" +fi + +echo "" +echo "Planning files initialized!" 
+echo "Files: task_plan.md, findings.md, progress.md" diff --git a/.cursor/skills/planning-with-files/scripts/session-catchup.py b/.cursor/skills/planning-with-files/scripts/session-catchup.py new file mode 100644 index 0000000..281cebb --- /dev/null +++ b/.cursor/skills/planning-with-files/scripts/session-catchup.py @@ -0,0 +1,208 @@ +#!/usr/bin/env python3 +""" +Session Catchup Script for planning-with-files + +Analyzes the previous session to find unsynced context after the last +planning file update. Designed to run on SessionStart. + +Usage: python3 session-catchup.py [project-path] +""" + +import json +import sys +import os +from pathlib import Path +from typing import List, Dict, Optional, Tuple +from datetime import datetime + +PLANNING_FILES = ['task_plan.md', 'progress.md', 'findings.md'] + + +def get_project_dir(project_path: str) -> Path: + """Convert project path to Claude's storage path format.""" + sanitized = project_path.replace('/', '-') + if not sanitized.startswith('-'): + sanitized = '-' + sanitized + sanitized = sanitized.replace('_', '-') + return Path.home() / '.claude' / 'projects' / sanitized + + +def get_sessions_sorted(project_dir: Path) -> List[Path]: + """Get all session files sorted by modification time (newest first).""" + sessions = list(project_dir.glob('*.jsonl')) + main_sessions = [s for s in sessions if not s.name.startswith('agent-')] + return sorted(main_sessions, key=lambda p: p.stat().st_mtime, reverse=True) + + +def parse_session_messages(session_file: Path) -> List[Dict]: + """Parse all messages from a session file, preserving order.""" + messages = [] + with open(session_file, 'r') as f: + for line_num, line in enumerate(f): + try: + data = json.loads(line) + data['_line_num'] = line_num + messages.append(data) + except json.JSONDecodeError: + pass + return messages + + +def find_last_planning_update(messages: List[Dict]) -> Tuple[int, Optional[str]]: + """ + Find the last time a planning file was written/edited. + Returns (line_number, filename) or (-1, None) if not found. 
+ """ + last_update_line = -1 + last_update_file = None + + for msg in messages: + msg_type = msg.get('type') + + if msg_type == 'assistant': + content = msg.get('message', {}).get('content', []) + if isinstance(content, list): + for item in content: + if item.get('type') == 'tool_use': + tool_name = item.get('name', '') + tool_input = item.get('input', {}) + + if tool_name in ('Write', 'Edit'): + file_path = tool_input.get('file_path', '') + for pf in PLANNING_FILES: + if file_path.endswith(pf): + last_update_line = msg['_line_num'] + last_update_file = pf + + return last_update_line, last_update_file + + +def extract_messages_after(messages: List[Dict], after_line: int) -> List[Dict]: + """Extract conversation messages after a certain line number.""" + result = [] + for msg in messages: + if msg['_line_num'] <= after_line: + continue + + msg_type = msg.get('type') + is_meta = msg.get('isMeta', False) + + if msg_type == 'user' and not is_meta: + content = msg.get('message', {}).get('content', '') + if isinstance(content, list): + for item in content: + if isinstance(item, dict) and item.get('type') == 'text': + content = item.get('text', '') + break + else: + content = '' + + if content and isinstance(content, str): + if content.startswith((' 20: + result.append({'role': 'user', 'content': content, 'line': msg['_line_num']}) + + elif msg_type == 'assistant': + msg_content = msg.get('message', {}).get('content', '') + text_content = '' + tool_uses = [] + + if isinstance(msg_content, str): + text_content = msg_content + elif isinstance(msg_content, list): + for item in msg_content: + if item.get('type') == 'text': + text_content = item.get('text', '') + elif item.get('type') == 'tool_use': + tool_name = item.get('name', '') + tool_input = item.get('input', {}) + if tool_name == 'Edit': + tool_uses.append(f"Edit: {tool_input.get('file_path', 'unknown')}") + elif tool_name == 'Write': + tool_uses.append(f"Write: {tool_input.get('file_path', 'unknown')}") + elif tool_name == 'Bash': + cmd = tool_input.get('command', '')[:80] + tool_uses.append(f"Bash: {cmd}") + else: + tool_uses.append(f"{tool_name}") + + if text_content or tool_uses: + result.append({ + 'role': 'assistant', + 'content': text_content[:600] if text_content else '', + 'tools': tool_uses, + 'line': msg['_line_num'] + }) + + return result + + +def main(): + project_path = sys.argv[1] if len(sys.argv) > 1 else os.getcwd() + project_dir = get_project_dir(project_path) + + # Check if planning files exist (indicates active task) + has_planning_files = any( + Path(project_path, f).exists() for f in PLANNING_FILES + ) + + if not project_dir.exists(): + # No previous sessions, nothing to catch up on + return + + sessions = get_sessions_sorted(project_dir) + if len(sessions) < 1: + return + + # Find a substantial previous session + target_session = None + for session in sessions: + if session.stat().st_size > 5000: + target_session = session + break + + if not target_session: + return + + messages = parse_session_messages(target_session) + last_update_line, last_update_file = find_last_planning_update(messages) + + # Only output if there's unsynced content + if last_update_line < 0: + messages_after = extract_messages_after(messages, len(messages) - 30) + else: + messages_after = extract_messages_after(messages, last_update_line) + + if not messages_after: + return + + # Output catchup report + print("\n[planning-with-files] SESSION CATCHUP DETECTED") + print(f"Previous session: {target_session.stem}") + + if last_update_line >= 0: + 
print(f"Last planning update: {last_update_file} at message #{last_update_line}") + print(f"Unsynced messages: {len(messages_after)}") + else: + print("No planning file updates found in previous session") + + print("\n--- UNSYNCED CONTEXT ---") + for msg in messages_after[-15:]: # Last 15 messages + if msg['role'] == 'user': + print(f"USER: {msg['content'][:300]}") + else: + if msg.get('content'): + print(f"CLAUDE: {msg['content'][:300]}") + if msg.get('tools'): + print(f" Tools: {', '.join(msg['tools'][:4])}") + + print("\n--- RECOMMENDED ---") + print("1. Run: git diff --stat") + print("2. Read: task_plan.md, progress.md, findings.md") + print("3. Update planning files based on above context") + print("4. Continue with task") + + +if __name__ == '__main__': + main() diff --git a/.cursor/skills/planning-with-files/templates/findings.md b/.cursor/skills/planning-with-files/templates/findings.md new file mode 100644 index 0000000..056536d --- /dev/null +++ b/.cursor/skills/planning-with-files/templates/findings.md @@ -0,0 +1,95 @@ +# Findings & Decisions + + +## Requirements + + +- + +## Research Findings + + +- + +## Technical Decisions + + +| Decision | Rationale | +|----------|-----------| +| | | + +## Issues Encountered + + +| Issue | Resolution | +|-------|------------| +| | | + +## Resources + + +- + +## Visual/Browser Findings + + + +- + +--- + +*Update this file after every 2 view/browser/search operations* +*This prevents visual information from being lost* diff --git a/.cursor/skills/planning-with-files/templates/progress.md b/.cursor/skills/planning-with-files/templates/progress.md new file mode 100644 index 0000000..dba9af9 --- /dev/null +++ b/.cursor/skills/planning-with-files/templates/progress.md @@ -0,0 +1,114 @@ +# Progress Log + + +## Session: [DATE] + + +### Phase 1: [Title] + +- **Status:** in_progress +- **Started:** [timestamp] + +- Actions taken: + + - +- Files created/modified: + + - + +### Phase 2: [Title] + +- **Status:** pending +- Actions taken: + - +- Files created/modified: + - + +## Test Results + +| Test | Input | Expected | Actual | Status | +|------|-------|----------|--------|--------| +| | | | | | + +## Error Log + + +| Timestamp | Error | Attempt | Resolution | +|-----------|-------|---------|------------| +| | | 1 | | + +## 5-Question Reboot Check + + +| Question | Answer | +|----------|--------| +| Where am I? | Phase X | +| Where am I going? | Remaining phases | +| What's the goal? | [goal statement] | +| What have I learned? | See findings.md | +| What have I done? 
| See above | + +--- + +*Update after completing each phase or encountering errors* diff --git a/.cursor/skills/planning-with-files/templates/task_plan.md b/.cursor/skills/planning-with-files/templates/task_plan.md new file mode 100644 index 0000000..cc85896 --- /dev/null +++ b/.cursor/skills/planning-with-files/templates/task_plan.md @@ -0,0 +1,132 @@ +# Task Plan: [Brief Description] + + +## Goal + +[One sentence describing the end state] + +## Current Phase + +Phase 1 + +## Phases + + +### Phase 1: Requirements & Discovery + +- [ ] Understand user intent +- [ ] Identify constraints and requirements +- [ ] Document findings in findings.md +- **Status:** in_progress + + +### Phase 2: Planning & Structure + +- [ ] Define technical approach +- [ ] Create project structure if needed +- [ ] Document decisions with rationale +- **Status:** pending + +### Phase 3: Implementation + +- [ ] Execute the plan step by step +- [ ] Write code to files before executing +- [ ] Test incrementally +- **Status:** pending + +### Phase 4: Testing & Verification + +- [ ] Verify all requirements met +- [ ] Document test results in progress.md +- [ ] Fix any issues found +- **Status:** pending + +### Phase 5: Delivery + +- [ ] Review all output files +- [ ] Ensure deliverables are complete +- [ ] Deliver to user +- **Status:** pending + +## Key Questions + +1. [Question to answer] +2. [Question to answer] + +## Decisions Made + +| Decision | Rationale | +|----------|-----------| +| | | + +## Errors Encountered + +| Error | Attempt | Resolution | +|-------|---------|------------| +| | 1 | | + +## Notes + +- Update phase status as you progress: pending → in_progress → complete +- Re-read this plan before major decisions (attention manipulation) +- Log ALL errors - they help avoid repetition diff --git a/.cursor/skills/prometheus-configuration/SKILL.md b/.cursor/skills/prometheus-configuration/SKILL.md new file mode 100644 index 0000000..5aa8770 --- /dev/null +++ b/.cursor/skills/prometheus-configuration/SKILL.md @@ -0,0 +1,400 @@ +--- +name: prometheus-configuration +description: Set up Prometheus for comprehensive metric collection, storage, and monitoring of infrastructure and applications. Use when implementing metrics collection, setting up monitoring infrastructure, or configuring alerting systems. +--- + +# Prometheus Configuration + +Complete guide to Prometheus setup, metric collection, scrape configuration, and recording rules. + +## Purpose + +Configure Prometheus for comprehensive metric collection, alerting, and monitoring of infrastructure and applications. 
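+For orientation, here is a minimal sketch of the application side that Prometheus scrapes. It assumes the official Python client (`prometheus_client`); the function and metric names are illustrative, chosen to line up with the recording rules later in this skill.
+
+```python
+from prometheus_client import Counter, Histogram, start_http_server
+import random
+import time
+
+# Metrics exposed on /metrics; names mirror the recording rules used below
+REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["method", "status"])
+LATENCY = Histogram("http_request_duration_seconds", "HTTP request latency in seconds")
+
+def handle_request() -> None:
+    """Simulate one request and record its metrics."""
+    with LATENCY.time():
+        time.sleep(random.random() / 10)  # stand-in for real work
+    REQUESTS.labels(method="GET", status="200").inc()
+
+if __name__ == "__main__":
+    start_http_server(8000)  # serves http://localhost:8000/metrics for Prometheus to scrape
+    while True:
+        handle_request()
+```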
+ +## When to Use + +- Set up Prometheus monitoring +- Configure metric scraping +- Create recording rules +- Design alert rules +- Implement service discovery + +## Prometheus Architecture + +``` +┌──────────────┐ +│ Applications │ ← Instrumented with client libraries +└──────┬───────┘ + │ /metrics endpoint + ↓ +┌──────────────┐ +│ Prometheus │ ← Scrapes metrics periodically +│ Server │ +└──────┬───────┘ + │ + ├─→ AlertManager (alerts) + ├─→ Grafana (visualization) + └─→ Long-term storage (Thanos/Cortex) +``` + +## Installation + +### Kubernetes with Helm + +```bash +helm repo add prometheus-community https://prometheus-community.github.io/helm-charts +helm repo update + +helm install prometheus prometheus-community/kube-prometheus-stack \ + --namespace monitoring \ + --create-namespace \ + --set prometheus.prometheusSpec.retention=30d \ + --set prometheus.prometheusSpec.storageVolumeSize=50Gi +``` + +### Docker Compose + +```yaml +version: "3.8" +services: + prometheus: + image: prom/prometheus:latest + ports: + - "9090:9090" + volumes: + - ./prometheus.yml:/etc/prometheus/prometheus.yml + - prometheus-data:/prometheus + command: + - "--config.file=/etc/prometheus/prometheus.yml" + - "--storage.tsdb.path=/prometheus" + - "--storage.tsdb.retention.time=30d" + +volumes: + prometheus-data: +``` + +## Configuration File + +**prometheus.yml:** + +```yaml +global: + scrape_interval: 15s + evaluation_interval: 15s + external_labels: + cluster: "production" + region: "us-west-2" + +# Alertmanager configuration +alerting: + alertmanagers: + - static_configs: + - targets: + - alertmanager:9093 + +# Load rules files +rule_files: + - /etc/prometheus/rules/*.yml + +# Scrape configurations +scrape_configs: + # Prometheus itself + - job_name: "prometheus" + static_configs: + - targets: ["localhost:9090"] + + # Node exporters + - job_name: "node-exporter" + static_configs: + - targets: + - "node1:9100" + - "node2:9100" + - "node3:9100" + relabel_configs: + - source_labels: [__address__] + target_label: instance + regex: "([^:]+)(:[0-9]+)?" 
+ replacement: "${1}" + + # Kubernetes pods with annotations + - job_name: "kubernetes-pods" + kubernetes_sd_configs: + - role: pod + relabel_configs: + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] + action: keep + regex: true + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + - source_labels: + [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] + action: replace + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + target_label: __address__ + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: pod + + # Application metrics + - job_name: "my-app" + static_configs: + - targets: + - "app1.example.com:9090" + - "app2.example.com:9090" + metrics_path: "/metrics" + scheme: "https" + tls_config: + ca_file: /etc/prometheus/ca.crt + cert_file: /etc/prometheus/client.crt + key_file: /etc/prometheus/client.key +``` + +**Reference:** See `assets/prometheus.yml.template` + +## Scrape Configurations + +### Static Targets + +```yaml +scrape_configs: + - job_name: "static-targets" + static_configs: + - targets: ["host1:9100", "host2:9100"] + labels: + env: "production" + region: "us-west-2" +``` + +### File-based Service Discovery + +```yaml +scrape_configs: + - job_name: "file-sd" + file_sd_configs: + - files: + - /etc/prometheus/targets/*.json + - /etc/prometheus/targets/*.yml + refresh_interval: 5m +``` + +**targets/production.json:** + +```json +[ + { + "targets": ["app1:9090", "app2:9090"], + "labels": { + "env": "production", + "service": "api" + } + } +] +``` + +### Kubernetes Service Discovery + +```yaml +scrape_configs: + - job_name: "kubernetes-services" + kubernetes_sd_configs: + - role: service + relabel_configs: + - source_labels: + [__meta_kubernetes_service_annotation_prometheus_io_scrape] + action: keep + regex: true + - source_labels: + [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: replace + target_label: __scheme__ + regex: (https?) 
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) +``` + +**Reference:** See `references/scrape-configs.md` + +## Recording Rules + +Create pre-computed metrics for frequently queried expressions: + +```yaml +# /etc/prometheus/rules/recording_rules.yml +groups: + - name: api_metrics + interval: 15s + rules: + # HTTP request rate per service + - record: job:http_requests:rate5m + expr: sum by (job) (rate(http_requests_total[5m])) + + # Error rate percentage + - record: job:http_requests_errors:rate5m + expr: sum by (job) (rate(http_requests_total{status=~"5.."}[5m])) + + - record: job:http_requests_error_rate:percentage + expr: | + (job:http_requests_errors:rate5m / job:http_requests:rate5m) * 100 + + # P95 latency + - record: job:http_request_duration:p95 + expr: | + histogram_quantile(0.95, + sum by (job, le) (rate(http_request_duration_seconds_bucket[5m])) + ) + + - name: resource_metrics + interval: 30s + rules: + # CPU utilization percentage + - record: instance:node_cpu:utilization + expr: | + 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) + + # Memory utilization percentage + - record: instance:node_memory:utilization + expr: | + 100 - ((node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100) + + # Disk usage percentage + - record: instance:node_disk:utilization + expr: | + 100 - ((node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100) +``` + +**Reference:** See `references/recording-rules.md` + +## Alert Rules + +```yaml +# /etc/prometheus/rules/alert_rules.yml +groups: + - name: availability + interval: 30s + rules: + - alert: ServiceDown + expr: up{job="my-app"} == 0 + for: 1m + labels: + severity: critical + annotations: + summary: "Service {{ $labels.instance }} is down" + description: "{{ $labels.job }} has been down for more than 1 minute" + + - alert: HighErrorRate + expr: job:http_requests_error_rate:percentage > 5 + for: 5m + labels: + severity: warning + annotations: + summary: "High error rate for {{ $labels.job }}" + description: "Error rate is {{ $value }}% (threshold: 5%)" + + - alert: HighLatency + expr: job:http_request_duration:p95 > 1 + for: 5m + labels: + severity: warning + annotations: + summary: "High latency for {{ $labels.job }}" + description: "P95 latency is {{ $value }}s (threshold: 1s)" + + - name: resources + interval: 1m + rules: + - alert: HighCPUUsage + expr: instance:node_cpu:utilization > 80 + for: 5m + labels: + severity: warning + annotations: + summary: "High CPU usage on {{ $labels.instance }}" + description: "CPU usage is {{ $value }}%" + + - alert: HighMemoryUsage + expr: instance:node_memory:utilization > 85 + for: 5m + labels: + severity: warning + annotations: + summary: "High memory usage on {{ $labels.instance }}" + description: "Memory usage is {{ $value }}%" + + - alert: DiskSpaceLow + expr: instance:node_disk:utilization > 90 + for: 5m + labels: + severity: critical + annotations: + summary: "Low disk space on {{ $labels.instance }}" + description: "Disk usage is {{ $value }}%" +``` + +## Validation + +```bash +# Validate configuration +promtool check config prometheus.yml + +# Validate rules +promtool check rules /etc/prometheus/rules/*.yml + +# Test query +promtool query instant http://localhost:9090 'up' +``` + +**Reference:** See `scripts/validate-prometheus.sh` + +## Best Practices + +1. **Use consistent naming** for metrics (prefix_name_unit) +2. 
**Set appropriate scrape intervals** (15-60s typical) +3. **Use recording rules** for expensive queries +4. **Implement high availability** (multiple Prometheus instances) +5. **Configure retention** based on storage capacity +6. **Use relabeling** for metric cleanup +7. **Monitor Prometheus itself** +8. **Implement federation** for large deployments +9. **Use Thanos/Cortex** for long-term storage +10. **Document custom metrics** + +## Troubleshooting + +**Check scrape targets:** + +```bash +curl http://localhost:9090/api/v1/targets +``` + +**Check configuration:** + +```bash +curl http://localhost:9090/api/v1/status/config +``` + +**Test query:** + +```bash +curl 'http://localhost:9090/api/v1/query?query=up' +``` + +## Reference Files + +- `assets/prometheus.yml.template` - Complete configuration template +- `references/scrape-configs.md` - Scrape configuration patterns +- `references/recording-rules.md` - Recording rule examples +- `scripts/validate-prometheus.sh` - Validation script + +## Related Skills + +- `grafana-dashboards` - For visualization +- `slo-implementation` - For SLO monitoring +- `distributed-tracing` - For request tracing diff --git a/.cursor/skills/smart-contract-security/SKILL.md b/.cursor/skills/smart-contract-security/SKILL.md new file mode 100644 index 0000000..1c2b680 --- /dev/null +++ b/.cursor/skills/smart-contract-security/SKILL.md @@ -0,0 +1,244 @@ +--- +name: smart-contract-security +description: Master smart contract security with auditing, vulnerability detection, and incident response +sasmp_version: "1.3.0" +version: "2.0.0" +updated: "2025-01" +bonded_agent: 06-smart-contract-security +bond_type: PRIMARY_BOND + +# Skill Configuration +atomic: true +single_responsibility: security_auditing + +# Parameter Validation +parameters: + topic: + type: string + required: true + enum: [vulnerabilities, auditing, tools, incidents] + severity: + type: string + default: all + enum: [critical, high, medium, low, all] + +# Retry & Error Handling +retry_config: + max_attempts: 3 + backoff: exponential + initial_delay_ms: 1000 + +# Logging & Observability +logging: + level: info + include_timestamps: true + track_usage: true +--- + +# Smart Contract Security Skill + +> Master smart contract security with vulnerability detection, auditing methodology, and incident response procedures. + +## Quick Start + +```python +# Invoke this skill for security analysis +Skill("smart-contract-security", topic="vulnerabilities", severity="high") +``` + +## Topics Covered + +### 1. Common Vulnerabilities +Recognize and prevent: +- **Reentrancy**: CEI pattern violation +- **Access Control**: Missing modifiers +- **Oracle Manipulation**: Flash loan attacks +- **Integer Issues**: Precision loss + +### 2. Auditing Methodology +Systematic review process: +- **Manual Review**: Line-by-line analysis +- **Static Analysis**: Automated tools +- **Fuzzing**: Property-based testing +- **Formal Verification**: Mathematical proofs + +### 3. Security Tools +Essential tooling: +- **Slither**: Fast static analysis +- **Mythril**: Symbolic execution +- **Foundry**: Fuzzing, invariants +- **Certora**: Formal verification + +### 4. 
Incident Response +Handle security events: +- **Triage**: Assess severity +- **Mitigation**: Emergency actions +- **Post-mortem**: Root cause analysis +- **Disclosure**: Responsible reporting + +## Vulnerability Quick Reference + +### Critical: Reentrancy +```solidity +// VULNERABLE +function withdraw(uint256 amount) external { + (bool ok,) = msg.sender.call{value: amount}(""); + require(ok); + balances[msg.sender] -= amount; // After call! +} + +// FIXED: CEI Pattern +function withdraw(uint256 amount) external { + balances[msg.sender] -= amount; // Before call + (bool ok,) = msg.sender.call{value: amount}(""); + require(ok); +} +``` + +### High: Missing Access Control +```solidity +// VULNERABLE +function setAdmin(address newAdmin) external { + admin = newAdmin; // Anyone can call! +} + +// FIXED +function setAdmin(address newAdmin) external onlyOwner { + admin = newAdmin; +} +``` + +### High: Unchecked Return Value +```solidity +// VULNERABLE +IERC20(token).transfer(to, amount); // Ignored! + +// FIXED: Use SafeERC20 +using SafeERC20 for IERC20; +IERC20(token).safeTransfer(to, amount); +``` + +### Medium: Precision Loss +```solidity +// VULNERABLE: Division before multiplication +uint256 fee = (amount / 1000) * rate; + +// FIXED: Multiply first +uint256 fee = (amount * rate) / 1000; +``` + +## Audit Checklist + +### Pre-Audit +- [ ] Code compiles without warnings +- [ ] Tests pass with good coverage +- [ ] Documentation reviewed + +### Core Security +- [ ] CEI pattern followed +- [ ] Reentrancy guards present +- [ ] Access control on admin functions +- [ ] Input validation complete + +### DeFi Specific +- [ ] Oracle staleness checks +- [ ] Slippage protection +- [ ] Flash loan resistance +- [ ] Sandwich prevention + +## Security Tools + +### Static Analysis +```bash +# Slither - Fast vulnerability detection +slither . --exclude-dependencies + +# Mythril - Symbolic execution +myth analyze src/Contract.sol + +# Semgrep - Custom rules +semgrep --config "p/smart-contracts" . +``` + +### Fuzzing +```solidity +// Foundry fuzz test +function testFuzz_Withdraw(uint256 amount) public { + amount = bound(amount, 1, type(uint128).max); + + vm.deal(address(vault), amount); + vault.deposit{value: amount}(); + + uint256 before = address(this).balance; + vault.withdraw(amount); + + assertEq(address(this).balance, before + amount); +} +``` + +### Invariant Testing +```solidity +function invariant_BalancesMatchTotalSupply() public { + uint256 sum = 0; + for (uint i = 0; i < actors.length; i++) { + sum += token.balanceOf(actors[i]); + } + assertEq(token.totalSupply(), sum); +} +``` + +## Severity Classification + +| Severity | Impact | Examples | +|----------|--------|----------| +| Critical | Direct fund loss | Reentrancy, unprotected init | +| High | Significant damage | Access control, oracle manipulation | +| Medium | Conditional impact | Precision loss, timing issues | +| Low | Minor issues | Missing events, naming | + +## Incident Response + +### 1. Detection +```bash +# Monitor for suspicious activity +cast logs --address $CONTRACT --from-block latest +``` + +### 2. Mitigation +```solidity +// Emergency pause +function pause() external onlyOwner { + _pause(); +} +``` + +### 3. 
Recovery +- Assess damage scope +- Coordinate disclosure +- Deploy fixes with audit + +## Common Pitfalls + +| Pitfall | Risk | Prevention | +|---------|------|------------| +| Only testing happy path | Missing edge cases | Fuzz test boundaries | +| Ignoring integrations | External call risks | Review all dependencies | +| Trusting block.timestamp | Miner manipulation | Use for long timeframes only | + +## Cross-References + +- **Bonded Agent**: `06-smart-contract-security` +- **Related Skills**: `solidity-development`, `defi-protocols` + +## Resources + +- SWC Registry: Common weakness enumeration +- Rekt News: Hack post-mortems +- Immunefi: Bug bounties + +## Version History + +| Version | Date | Changes | +|---------|------|---------| +| 2.0.0 | 2025-01 | Production-grade with tools, methodology | +| 1.0.0 | 2024-12 | Initial release | diff --git a/.cursor/skills/smart-contract-security/assets/config.yaml b/.cursor/skills/smart-contract-security/assets/config.yaml new file mode 100644 index 0000000..541dd24 --- /dev/null +++ b/.cursor/skills/smart-contract-security/assets/config.yaml @@ -0,0 +1,41 @@ +# smart-contract-security Configuration +# Category: security +# Generated: 2025-12-30 + +skill: + name: smart-contract-security + version: "1.0.0" + category: security + +settings: + # Default settings for smart-contract-security + enabled: true + log_level: info + + # Category-specific defaults + validation: + strict_mode: false + auto_fix: false + + output: + format: markdown + include_examples: true + +# Environment-specific overrides +environments: + development: + log_level: debug + validation: + strict_mode: false + + production: + log_level: warn + validation: + strict_mode: true + +# Integration settings +integrations: + # Enable/disable integrations + git: true + linter: true + formatter: true diff --git a/.cursor/skills/smart-contract-security/assets/schema.json b/.cursor/skills/smart-contract-security/assets/schema.json new file mode 100644 index 0000000..cebd41f --- /dev/null +++ b/.cursor/skills/smart-contract-security/assets/schema.json @@ -0,0 +1,60 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "smart-contract-security Configuration Schema", + "type": "object", + "properties": { + "skill": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "pattern": "^\\d+\\.\\d+\\.\\d+$" + }, + "category": { + "type": "string", + "enum": [ + "api", + "testing", + "devops", + "security", + "database", + "frontend", + "algorithms", + "machine-learning", + "cloud", + "containers", + "general" + ] + } + }, + "required": [ + "name", + "version" + ] + }, + "settings": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "default": true + }, + "log_level": { + "type": "string", + "enum": [ + "debug", + "info", + "warn", + "error" + ] + } + } + } + }, + "required": [ + "skill" + ] +} \ No newline at end of file diff --git a/.cursor/skills/smart-contract-security/references/GUIDE.md b/.cursor/skills/smart-contract-security/references/GUIDE.md new file mode 100644 index 0000000..0990d59 --- /dev/null +++ b/.cursor/skills/smart-contract-security/references/GUIDE.md @@ -0,0 +1,95 @@ +# Smart Contract Security Guide + +## Overview + +This guide provides comprehensive documentation for the **smart-contract-security** skill in the custom-plugin-blockchain plugin. 
+ +## Category: Security + +## Quick Start + +### Prerequisites + +- Familiarity with security concepts +- Development environment set up +- Plugin installed and configured + +### Basic Usage + +```bash +# Invoke the skill +claude "smart-contract-security - [your task description]" + +# Example +claude "smart-contract-security - analyze the current implementation" +``` + +## Core Concepts + +### Key Principles + +1. **Consistency** - Follow established patterns +2. **Clarity** - Write readable, maintainable code +3. **Quality** - Validate before deployment + +### Best Practices + +- Always validate input data +- Handle edge cases explicitly +- Document your decisions +- Write tests for critical paths + +## Common Tasks + +### Task 1: Basic Implementation + +```python +# Example implementation pattern +def implement_smart_contract_security(input_data): + """ + Implement smart-contract-security functionality. + + Args: + input_data: Input to process + + Returns: + Processed result + """ + # Validate input + if not input_data: + raise ValueError("Input required") + + # Process + result = process(input_data) + + # Return + return result +``` + +### Task 2: Advanced Usage + +For advanced scenarios, consider: + +- Configuration customization via `assets/config.yaml` +- Validation using `scripts/validate.py` +- Integration with other skills + +## Troubleshooting + +### Common Issues + +| Issue | Cause | Solution | +|-------|-------|----------| +| Skill not found | Not installed | Run plugin sync | +| Validation fails | Invalid config | Check config.yaml | +| Unexpected output | Missing context | Provide more details | + +## Related Resources + +- SKILL.md - Skill specification +- config.yaml - Configuration options +- validate.py - Validation script + +--- + +*Last updated: 2025-12-30* diff --git a/.cursor/skills/smart-contract-security/references/PATTERNS.md b/.cursor/skills/smart-contract-security/references/PATTERNS.md new file mode 100644 index 0000000..8496460 --- /dev/null +++ b/.cursor/skills/smart-contract-security/references/PATTERNS.md @@ -0,0 +1,87 @@ +# Smart Contract Security Patterns + +## Design Patterns + +### Pattern 1: Input Validation + +Always validate input before processing: + +```python +def validate_input(data): + if data is None: + raise ValueError("Data cannot be None") + if not isinstance(data, dict): + raise TypeError("Data must be a dictionary") + return True +``` + +### Pattern 2: Error Handling + +Use consistent error handling: + +```python +try: + result = risky_operation() +except SpecificError as e: + logger.error(f"Operation failed: {e}") + handle_error(e) +except Exception as e: + logger.exception("Unexpected error") + raise +``` + +### Pattern 3: Configuration Loading + +Load and validate configuration: + +```python +import yaml + +def load_config(config_path): + with open(config_path) as f: + config = yaml.safe_load(f) + validate_config(config) + return config +``` + +## Anti-Patterns to Avoid + +### ❌ Don't: Swallow Exceptions + +```python +# BAD +try: + do_something() +except: + pass +``` + +### ✅ Do: Handle Explicitly + +```python +# GOOD +try: + do_something() +except SpecificError as e: + logger.warning(f"Expected error: {e}") + return default_value +``` + +## Category-Specific Patterns: Security + +### Recommended Approach + +1. Start with the simplest implementation +2. Add complexity only when needed +3. Test each addition +4. 
Document decisions + +### Common Integration Points + +- Configuration: `assets/config.yaml` +- Validation: `scripts/validate.py` +- Documentation: `references/GUIDE.md` + +--- + +*Pattern library for smart-contract-security skill* diff --git a/.cursor/skills/smart-contract-security/scripts/validate.py b/.cursor/skills/smart-contract-security/scripts/validate.py new file mode 100644 index 0000000..c57bed6 --- /dev/null +++ b/.cursor/skills/smart-contract-security/scripts/validate.py @@ -0,0 +1,131 @@ +#!/usr/bin/env python3 +""" +Validation script for smart-contract-security skill. +Category: security +""" + +import os +import sys +import yaml +import json +from pathlib import Path + + +def validate_config(config_path: str) -> dict: + """ + Validate skill configuration file. + + Args: + config_path: Path to config.yaml + + Returns: + dict: Validation result with 'valid' and 'errors' keys + """ + errors = [] + + if not os.path.exists(config_path): + return {"valid": False, "errors": ["Config file not found"]} + + try: + with open(config_path, 'r') as f: + config = yaml.safe_load(f) + except yaml.YAMLError as e: + return {"valid": False, "errors": [f"YAML parse error: {e}"]} + + # Validate required fields + if 'skill' not in config: + errors.append("Missing 'skill' section") + else: + if 'name' not in config['skill']: + errors.append("Missing skill.name") + if 'version' not in config['skill']: + errors.append("Missing skill.version") + + # Validate settings + if 'settings' in config: + settings = config['settings'] + if 'log_level' in settings: + valid_levels = ['debug', 'info', 'warn', 'error'] + if settings['log_level'] not in valid_levels: + errors.append(f"Invalid log_level: {settings['log_level']}") + + return { + "valid": len(errors) == 0, + "errors": errors, + "config": config if not errors else None + } + + +def validate_skill_structure(skill_path: str) -> dict: + """ + Validate skill directory structure. 
+ + Args: + skill_path: Path to skill directory + + Returns: + dict: Structure validation result + """ + required_dirs = ['assets', 'scripts', 'references'] + required_files = ['SKILL.md'] + + errors = [] + + # Check required files + for file in required_files: + if not os.path.exists(os.path.join(skill_path, file)): + errors.append(f"Missing required file: {file}") + + # Check required directories + for dir in required_dirs: + dir_path = os.path.join(skill_path, dir) + if not os.path.isdir(dir_path): + errors.append(f"Missing required directory: {dir}/") + else: + # Check for real content (not just .gitkeep) + files = [f for f in os.listdir(dir_path) if f != '.gitkeep'] + if not files: + errors.append(f"Directory {dir}/ has no real content") + + return { + "valid": len(errors) == 0, + "errors": errors, + "skill_name": os.path.basename(skill_path) + } + + +def main(): + """Main validation entry point.""" + skill_path = Path(__file__).parent.parent + + print(f"Validating smart-contract-security skill...") + print(f"Path: {skill_path}") + + # Validate structure + structure_result = validate_skill_structure(str(skill_path)) + print(f"\nStructure validation: {'PASS' if structure_result['valid'] else 'FAIL'}") + if structure_result['errors']: + for error in structure_result['errors']: + print(f" - {error}") + + # Validate config + config_path = skill_path / 'assets' / 'config.yaml' + if config_path.exists(): + config_result = validate_config(str(config_path)) + print(f"\nConfig validation: {'PASS' if config_result['valid'] else 'FAIL'}") + if config_result['errors']: + for error in config_result['errors']: + print(f" - {error}") + else: + print("\nConfig validation: SKIPPED (no config.yaml)") + + # Summary + all_valid = structure_result['valid'] + print(f"\n==================================================") + print(f"Overall: {'VALID' if all_valid else 'INVALID'}") + + return 0 if all_valid else 1 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/.cursor/skills/solidity-development/SKILL.md b/.cursor/skills/solidity-development/SKILL.md new file mode 100644 index 0000000..94597d7 --- /dev/null +++ b/.cursor/skills/solidity-development/SKILL.md @@ -0,0 +1,234 @@ +--- +name: solidity-development +description: Master Solidity smart contract development with patterns, testing, and best practices +sasmp_version: "1.3.0" +version: "2.0.0" +updated: "2025-01" +bonded_agent: 03-solidity-expert +bond_type: PRIMARY_BOND + +# Skill Configuration +atomic: true +single_responsibility: solidity_development + +# Parameter Validation +parameters: + topic: + type: string + required: true + enum: [syntax, patterns, testing, upgrades, security] + solidity_version: + type: string + default: "0.8.24" + +# Retry & Error Handling +retry_config: + max_attempts: 3 + backoff: exponential + initial_delay_ms: 1000 + +# Logging & Observability +logging: + level: info + include_timestamps: true + track_usage: true +--- + +# Solidity Development Skill + +> Master Solidity smart contract development with design patterns, testing strategies, and production best practices. + +## Quick Start + +```python +# Invoke this skill for Solidity development +Skill("solidity-development", topic="patterns", solidity_version="0.8.24") +``` + +## Topics Covered + +### 1. 
Language Features (0.8.x) +Modern Solidity essentials: +- **Data Types**: Value, reference, mappings +- **Functions**: Visibility, modifiers, overloading +- **Inheritance**: Diamond problem, C3 linearization +- **Custom Errors**: Gas-efficient error handling + +### 2. Design Patterns +Battle-tested patterns: +- **CEI**: Checks-Effects-Interactions +- **Factory**: Contract deployment patterns +- **Proxy**: Upgradeable contracts +- **Access Control**: RBAC, Ownable + +### 3. Testing +Comprehensive test strategies: +- **Unit Tests**: Foundry, Hardhat +- **Fuzz Testing**: Property-based testing +- **Invariant Testing**: System-wide properties +- **Fork Testing**: Mainnet simulation + +### 4. Upgradability +Safe upgrade patterns: +- **UUPS**: Self-upgrading proxy +- **Transparent**: Admin separation +- **Beacon**: Shared implementation +- **Diamond**: Multi-facet + +## Code Examples + +### CEI Pattern +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.24; + +contract SecureVault { + mapping(address => uint256) public balances; + + error InsufficientBalance(); + error TransferFailed(); + + function withdraw(uint256 amount) external { + // 1. CHECKS + if (balances[msg.sender] < amount) revert InsufficientBalance(); + + // 2. EFFECTS + balances[msg.sender] -= amount; + + // 3. INTERACTIONS + (bool ok,) = msg.sender.call{value: amount}(""); + if (!ok) revert TransferFailed(); + } +} +``` + +### Factory Pattern +```solidity +contract TokenFactory { + event TokenCreated(address indexed token, address indexed owner); + + function createToken( + string memory name, + string memory symbol + ) external returns (address) { + Token token = new Token(name, symbol, msg.sender); + emit TokenCreated(address(token), msg.sender); + return address(token); + } +} +``` + +### Foundry Test +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.24; + +import "forge-std/Test.sol"; + +contract VaultTest is Test { + Vault vault; + address alice = makeAddr("alice"); + + function setUp() public { + vault = new Vault(); + vm.deal(alice, 10 ether); + } + + function test_Deposit() public { + vm.prank(alice); + vault.deposit{value: 1 ether}(); + + assertEq(vault.balances(alice), 1 ether); + } + + function testFuzz_Withdraw(uint256 amount) public { + amount = bound(amount, 0.01 ether, 10 ether); + + vm.startPrank(alice); + vault.deposit{value: amount}(); + vault.withdraw(amount); + vm.stopPrank(); + + assertEq(vault.balances(alice), 0); + } + + function test_RevertWhen_InsufficientBalance() public { + vm.prank(alice); + vm.expectRevert(Vault.InsufficientBalance.selector); + vault.withdraw(1 ether); + } +} +``` + +## Pattern Reference + +| Pattern | Use Case | Complexity | +|---------|----------|------------| +| CEI | All state changes | Low | +| Factory | Multiple instances | Low | +| Clone (1167) | Gas-efficient copies | Medium | +| UUPS | Upgradeable | Medium | +| Diamond | Unlimited size | High | + +## Common Pitfalls + +| Pitfall | Issue | Solution | +|---------|-------|----------| +| Stack too deep | >16 variables | Use structs or helpers | +| Contract too large | >24KB | Split into libraries | +| Reentrancy | State after call | Use CEI pattern | +| Missing access | Anyone can call | Add modifiers | + +## Troubleshooting + +### "Stack too deep" +```solidity +// Solution 1: Use struct +struct Params { uint256 a; uint256 b; uint256 c; } + +// Solution 2: Block scoping +{ uint256 temp = x + y; } + +// Solution 3: Internal function +function _helper(uint256 a) internal { } +``` + +### "Contract 
size exceeds limit" +```bash +# Check contract sizes +forge build --sizes +``` +Solutions: Split into libraries, use Diamond pattern. + +## Security Checklist + +- [ ] CEI pattern on all withdrawals +- [ ] Access control on admin functions +- [ ] Input validation (zero address, bounds) +- [ ] Reentrancy guards on external calls +- [ ] Event emission for state changes +- [ ] Custom errors for gas efficiency + +## CLI Commands + +```bash +# Development workflow +forge init # New project +forge build # Compile +forge test -vvv # Run tests +forge coverage # Coverage report +forge fmt # Format code +forge snapshot # Gas snapshot +``` + +## Cross-References + +- **Bonded Agent**: `03-solidity-expert` +- **Related Skills**: `ethereum-development`, `smart-contract-security` + +## Version History + +| Version | Date | Changes | +|---------|------|---------| +| 2.0.0 | 2025-01 | Production-grade with Foundry, patterns | +| 1.0.0 | 2024-12 | Initial release | diff --git a/.cursor/skills/solidity-development/assets/config.yaml b/.cursor/skills/solidity-development/assets/config.yaml new file mode 100644 index 0000000..e01e3a3 --- /dev/null +++ b/.cursor/skills/solidity-development/assets/config.yaml @@ -0,0 +1,41 @@ +# solidity-development Configuration +# Category: general +# Generated: 2025-12-30 + +skill: + name: solidity-development + version: "1.0.0" + category: general + +settings: + # Default settings for solidity-development + enabled: true + log_level: info + + # Category-specific defaults + validation: + strict_mode: false + auto_fix: false + + output: + format: markdown + include_examples: true + +# Environment-specific overrides +environments: + development: + log_level: debug + validation: + strict_mode: false + + production: + log_level: warn + validation: + strict_mode: true + +# Integration settings +integrations: + # Enable/disable integrations + git: true + linter: true + formatter: true diff --git a/.cursor/skills/solidity-development/assets/schema.json b/.cursor/skills/solidity-development/assets/schema.json new file mode 100644 index 0000000..d63d4f8 --- /dev/null +++ b/.cursor/skills/solidity-development/assets/schema.json @@ -0,0 +1,60 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "solidity-development Configuration Schema", + "type": "object", + "properties": { + "skill": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "pattern": "^\\d+\\.\\d+\\.\\d+$" + }, + "category": { + "type": "string", + "enum": [ + "api", + "testing", + "devops", + "security", + "database", + "frontend", + "algorithms", + "machine-learning", + "cloud", + "containers", + "general" + ] + } + }, + "required": [ + "name", + "version" + ] + }, + "settings": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "default": true + }, + "log_level": { + "type": "string", + "enum": [ + "debug", + "info", + "warn", + "error" + ] + } + } + } + }, + "required": [ + "skill" + ] +} \ No newline at end of file diff --git a/.cursor/skills/solidity-development/references/GUIDE.md b/.cursor/skills/solidity-development/references/GUIDE.md new file mode 100644 index 0000000..00d452e --- /dev/null +++ b/.cursor/skills/solidity-development/references/GUIDE.md @@ -0,0 +1,95 @@ +# Solidity Development Guide + +## Overview + +This guide provides comprehensive documentation for the **solidity-development** skill in the custom-plugin-blockchain plugin. 
+ +## Category: General + +## Quick Start + +### Prerequisites + +- Familiarity with general concepts +- Development environment set up +- Plugin installed and configured + +### Basic Usage + +```bash +# Invoke the skill +claude "solidity-development - [your task description]" + +# Example +claude "solidity-development - analyze the current implementation" +``` + +## Core Concepts + +### Key Principles + +1. **Consistency** - Follow established patterns +2. **Clarity** - Write readable, maintainable code +3. **Quality** - Validate before deployment + +### Best Practices + +- Always validate input data +- Handle edge cases explicitly +- Document your decisions +- Write tests for critical paths + +## Common Tasks + +### Task 1: Basic Implementation + +```python +# Example implementation pattern +def implement_solidity_development(input_data): + """ + Implement solidity-development functionality. + + Args: + input_data: Input to process + + Returns: + Processed result + """ + # Validate input + if not input_data: + raise ValueError("Input required") + + # Process + result = process(input_data) + + # Return + return result +``` + +### Task 2: Advanced Usage + +For advanced scenarios, consider: + +- Configuration customization via `assets/config.yaml` +- Validation using `scripts/validate.py` +- Integration with other skills + +## Troubleshooting + +### Common Issues + +| Issue | Cause | Solution | +|-------|-------|----------| +| Skill not found | Not installed | Run plugin sync | +| Validation fails | Invalid config | Check config.yaml | +| Unexpected output | Missing context | Provide more details | + +## Related Resources + +- SKILL.md - Skill specification +- config.yaml - Configuration options +- validate.py - Validation script + +--- + +*Last updated: 2025-12-30* diff --git a/.cursor/skills/solidity-development/references/PATTERNS.md b/.cursor/skills/solidity-development/references/PATTERNS.md new file mode 100644 index 0000000..9e505d5 --- /dev/null +++ b/.cursor/skills/solidity-development/references/PATTERNS.md @@ -0,0 +1,87 @@ +# Solidity Development Patterns + +## Design Patterns + +### Pattern 1: Input Validation + +Always validate input before processing: + +```python +def validate_input(data): + if data is None: + raise ValueError("Data cannot be None") + if not isinstance(data, dict): + raise TypeError("Data must be a dictionary") + return True +``` + +### Pattern 2: Error Handling + +Use consistent error handling: + +```python +try: + result = risky_operation() +except SpecificError as e: + logger.error(f"Operation failed: {e}") + handle_error(e) +except Exception as e: + logger.exception("Unexpected error") + raise +``` + +### Pattern 3: Configuration Loading + +Load and validate configuration: + +```python +import yaml + +def load_config(config_path): + with open(config_path) as f: + config = yaml.safe_load(f) + validate_config(config) + return config +``` + +## Anti-Patterns to Avoid + +### ❌ Don't: Swallow Exceptions + +```python +# BAD +try: + do_something() +except: + pass +``` + +### ✅ Do: Handle Explicitly + +```python +# GOOD +try: + do_something() +except SpecificError as e: + logger.warning(f"Expected error: {e}") + return default_value +``` + +## Category-Specific Patterns: General + +### Recommended Approach + +1. Start with the simplest implementation +2. Add complexity only when needed +3. Test each addition +4. 
Document decisions + +### Common Integration Points + +- Configuration: `assets/config.yaml` +- Validation: `scripts/validate.py` +- Documentation: `references/GUIDE.md` + +--- + +*Pattern library for solidity-development skill* diff --git a/.cursor/skills/solidity-development/scripts/validate.py b/.cursor/skills/solidity-development/scripts/validate.py new file mode 100644 index 0000000..800fb5e --- /dev/null +++ b/.cursor/skills/solidity-development/scripts/validate.py @@ -0,0 +1,131 @@ +#!/usr/bin/env python3 +""" +Validation script for solidity-development skill. +Category: general +""" + +import os +import sys +import yaml +import json +from pathlib import Path + + +def validate_config(config_path: str) -> dict: + """ + Validate skill configuration file. + + Args: + config_path: Path to config.yaml + + Returns: + dict: Validation result with 'valid' and 'errors' keys + """ + errors = [] + + if not os.path.exists(config_path): + return {"valid": False, "errors": ["Config file not found"]} + + try: + with open(config_path, 'r') as f: + config = yaml.safe_load(f) + except yaml.YAMLError as e: + return {"valid": False, "errors": [f"YAML parse error: {e}"]} + + # Validate required fields + if 'skill' not in config: + errors.append("Missing 'skill' section") + else: + if 'name' not in config['skill']: + errors.append("Missing skill.name") + if 'version' not in config['skill']: + errors.append("Missing skill.version") + + # Validate settings + if 'settings' in config: + settings = config['settings'] + if 'log_level' in settings: + valid_levels = ['debug', 'info', 'warn', 'error'] + if settings['log_level'] not in valid_levels: + errors.append(f"Invalid log_level: {settings['log_level']}") + + return { + "valid": len(errors) == 0, + "errors": errors, + "config": config if not errors else None + } + + +def validate_skill_structure(skill_path: str) -> dict: + """ + Validate skill directory structure. 
+ + Args: + skill_path: Path to skill directory + + Returns: + dict: Structure validation result + """ + required_dirs = ['assets', 'scripts', 'references'] + required_files = ['SKILL.md'] + + errors = [] + + # Check required files + for file in required_files: + if not os.path.exists(os.path.join(skill_path, file)): + errors.append(f"Missing required file: {file}") + + # Check required directories + for dir in required_dirs: + dir_path = os.path.join(skill_path, dir) + if not os.path.isdir(dir_path): + errors.append(f"Missing required directory: {dir}/") + else: + # Check for real content (not just .gitkeep) + files = [f for f in os.listdir(dir_path) if f != '.gitkeep'] + if not files: + errors.append(f"Directory {dir}/ has no real content") + + return { + "valid": len(errors) == 0, + "errors": errors, + "skill_name": os.path.basename(skill_path) + } + + +def main(): + """Main validation entry point.""" + skill_path = Path(__file__).parent.parent + + print(f"Validating solidity-development skill...") + print(f"Path: {skill_path}") + + # Validate structure + structure_result = validate_skill_structure(str(skill_path)) + print(f"\nStructure validation: {'PASS' if structure_result['valid'] else 'FAIL'}") + if structure_result['errors']: + for error in structure_result['errors']: + print(f" - {error}") + + # Validate config + config_path = skill_path / 'assets' / 'config.yaml' + if config_path.exists(): + config_result = validate_config(str(config_path)) + print(f"\nConfig validation: {'PASS' if config_result['valid'] else 'FAIL'}") + if config_result['errors']: + for error in config_result['errors']: + print(f" - {error}") + else: + print("\nConfig validation: SKIPPED (no config.yaml)") + + # Summary + all_valid = structure_result['valid'] + print(f"\n==================================================") + print(f"Overall: {'VALID' if all_valid else 'INVALID'}") + + return 0 if all_valid else 1 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/.cursor/skills/supabase-postgres-best-practices/AGENTS.md b/.cursor/skills/supabase-postgres-best-practices/AGENTS.md new file mode 100644 index 0000000..cb45e6b --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/AGENTS.md @@ -0,0 +1,90 @@ +# supabase-postgres-best-practices + +> **Note:** `CLAUDE.md` is a symlink to this file. + +## Overview + +Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, or database configurations. + +## Structure + +``` +supabase-postgres-best-practices/ + SKILL.md # Main skill file - read this first + AGENTS.md # This navigation guide + CLAUDE.md # Symlink to AGENTS.md + references/ # Detailed reference files +``` + +## Usage + +1. Read `SKILL.md` for the main skill instructions +2. Browse `references/` for detailed documentation on specific topics +3. 
Reference files are loaded on-demand - read only what you need + +## Reference Categories + +| Priority | Category | Impact | Prefix | +|----------|----------|--------|--------| +| 1 | Query Performance | CRITICAL | `query-` | +| 2 | Connection Management | CRITICAL | `conn-` | +| 3 | Security & RLS | CRITICAL | `security-` | +| 4 | Schema Design | HIGH | `schema-` | +| 5 | Concurrency & Locking | MEDIUM-HIGH | `lock-` | +| 6 | Data Access Patterns | MEDIUM | `data-` | +| 7 | Monitoring & Diagnostics | LOW-MEDIUM | `monitor-` | +| 8 | Advanced Features | LOW | `advanced-` | + +Reference files are named `{prefix}-{topic}.md` (e.g., `query-missing-indexes.md`). + +## Available References + +**Advanced Features** (`advanced-`): +- `references/advanced-full-text-search.md` +- `references/advanced-jsonb-indexing.md` + +**Connection Management** (`conn-`): +- `references/conn-idle-timeout.md` +- `references/conn-limits.md` +- `references/conn-pooling.md` +- `references/conn-prepared-statements.md` + +**Data Access Patterns** (`data-`): +- `references/data-batch-inserts.md` +- `references/data-n-plus-one.md` +- `references/data-pagination.md` +- `references/data-upsert.md` + +**Concurrency & Locking** (`lock-`): +- `references/lock-advisory.md` +- `references/lock-deadlock-prevention.md` +- `references/lock-short-transactions.md` +- `references/lock-skip-locked.md` + +**Monitoring & Diagnostics** (`monitor-`): +- `references/monitor-explain-analyze.md` +- `references/monitor-pg-stat-statements.md` +- `references/monitor-vacuum-analyze.md` + +**Query Performance** (`query-`): +- `references/query-composite-indexes.md` +- `references/query-covering-indexes.md` +- `references/query-index-types.md` +- `references/query-missing-indexes.md` +- `references/query-partial-indexes.md` + +**Schema Design** (`schema-`): +- `references/schema-data-types.md` +- `references/schema-foreign-key-indexes.md` +- `references/schema-lowercase-identifiers.md` +- `references/schema-partitioning.md` +- `references/schema-primary-keys.md` + +**Security & RLS** (`security-`): +- `references/security-privileges.md` +- `references/security-rls-basics.md` +- `references/security-rls-performance.md` + +--- + +*30 reference files across 8 categories* \ No newline at end of file diff --git a/.cursor/skills/supabase-postgres-best-practices/CLAUDE.md b/.cursor/skills/supabase-postgres-best-practices/CLAUDE.md new file mode 100644 index 0000000..47dc3e3 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/CLAUDE.md @@ -0,0 +1 @@ +AGENTS.md \ No newline at end of file diff --git a/.cursor/skills/supabase-postgres-best-practices/SKILL.md b/.cursor/skills/supabase-postgres-best-practices/SKILL.md new file mode 100644 index 0000000..f80be15 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/SKILL.md @@ -0,0 +1,64 @@ +--- +name: supabase-postgres-best-practices +description: Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, or database configurations. +license: MIT +metadata: + author: supabase + version: "1.1.0" + organization: Supabase + date: January 2026 + abstract: Comprehensive Postgres performance optimization guide for developers using Supabase and Postgres. Contains performance rules across 8 categories, prioritized by impact from critical (query performance, connection management) to incremental (advanced features). Each rule includes detailed explanations, incorrect vs. 
correct SQL examples, query plan analysis, and specific performance metrics to guide automated optimization and code generation. +--- + +# Supabase Postgres Best Practices + +Comprehensive performance optimization guide for Postgres, maintained by Supabase. Contains rules across 8 categories, prioritized by impact to guide automated query optimization and schema design. + +## When to Apply + +Reference these guidelines when: +- Writing SQL queries or designing schemas +- Implementing indexes or query optimization +- Reviewing database performance issues +- Configuring connection pooling or scaling +- Optimizing for Postgres-specific features +- Working with Row-Level Security (RLS) + +## Rule Categories by Priority + +| Priority | Category | Impact | Prefix | +|----------|----------|--------|--------| +| 1 | Query Performance | CRITICAL | `query-` | +| 2 | Connection Management | CRITICAL | `conn-` | +| 3 | Security & RLS | CRITICAL | `security-` | +| 4 | Schema Design | HIGH | `schema-` | +| 5 | Concurrency & Locking | MEDIUM-HIGH | `lock-` | +| 6 | Data Access Patterns | MEDIUM | `data-` | +| 7 | Monitoring & Diagnostics | LOW-MEDIUM | `monitor-` | +| 8 | Advanced Features | LOW | `advanced-` | + +## How to Use + +Read individual rule files for detailed explanations and SQL examples: + +``` +references/query-missing-indexes.md +references/schema-partial-indexes.md +references/_sections.md +``` + +Each rule file contains: +- Brief explanation of why it matters +- Incorrect SQL example with explanation +- Correct SQL example with explanation +- Optional EXPLAIN output or metrics +- Additional context and references +- Supabase-specific notes (when applicable) + +## References + +- https://www.postgresql.org/docs/current/ +- https://supabase.com/docs +- https://wiki.postgresql.org/wiki/Performance_Optimization +- https://supabase.com/docs/guides/database/overview +- https://supabase.com/docs/guides/auth/row-level-security diff --git a/.cursor/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md b/.cursor/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md new file mode 100644 index 0000000..582cbea --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md @@ -0,0 +1,55 @@ +--- +title: Use tsvector for Full-Text Search +impact: MEDIUM +impactDescription: 100x faster than LIKE, with ranking support +tags: full-text-search, tsvector, gin, search +--- + +## Use tsvector for Full-Text Search + +LIKE with wildcards can't use indexes. Full-text search with tsvector is orders of magnitude faster. 
+ +**Incorrect (LIKE pattern matching):** + +```sql +-- Cannot use index, scans all rows +select * from articles where content like '%postgresql%'; + +-- Case-insensitive makes it worse +select * from articles where lower(content) like '%postgresql%'; +``` + +**Correct (full-text search with tsvector):** + +```sql +-- Add tsvector column and index +alter table articles add column search_vector tsvector + generated always as (to_tsvector('english', coalesce(title,'') || ' ' || coalesce(content,''))) stored; + +create index articles_search_idx on articles using gin (search_vector); + +-- Fast full-text search +select * from articles +where search_vector @@ to_tsquery('english', 'postgresql & performance'); + +-- With ranking +select *, ts_rank(search_vector, query) as rank +from articles, to_tsquery('english', 'postgresql') query +where search_vector @@ query +order by rank desc; +``` + +Search multiple terms: + +```sql +-- AND: both terms required +to_tsquery('postgresql & performance') + +-- OR: either term +to_tsquery('postgresql | mysql') + +-- Prefix matching +to_tsquery('post:*') +``` + +Reference: [Full Text Search](https://supabase.com/docs/guides/database/full-text-search) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md b/.cursor/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md new file mode 100644 index 0000000..e3d261e --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md @@ -0,0 +1,49 @@ +--- +title: Index JSONB Columns for Efficient Querying +impact: MEDIUM +impactDescription: 10-100x faster JSONB queries with proper indexing +tags: jsonb, gin, indexes, json +--- + +## Index JSONB Columns for Efficient Querying + +JSONB queries without indexes scan the entire table. Use GIN indexes for containment queries. 
+ +**Incorrect (no index on JSONB):** + +```sql +create table products ( + id bigint primary key, + attributes jsonb +); + +-- Full table scan for every query +select * from products where attributes @> '{"color": "red"}'; +select * from products where attributes->>'brand' = 'Nike'; +``` + +**Correct (GIN index for JSONB):** + +```sql +-- GIN index for containment operators (@>, ?, ?&, ?|) +create index products_attrs_gin on products using gin (attributes); + +-- Now containment queries use the index +select * from products where attributes @> '{"color": "red"}'; + +-- For specific key lookups, use expression index +create index products_brand_idx on products ((attributes->>'brand')); +select * from products where attributes->>'brand' = 'Nike'; +``` + +Choose the right operator class: + +```sql +-- jsonb_ops (default): supports all operators, larger index +create index idx1 on products using gin (attributes); + +-- jsonb_path_ops: only @> operator, but 2-3x smaller index +create index idx2 on products using gin (attributes jsonb_path_ops); +``` + +Reference: [JSONB Indexes](https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md b/.cursor/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md new file mode 100644 index 0000000..40b9cc5 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md @@ -0,0 +1,46 @@ +--- +title: Configure Idle Connection Timeouts +impact: HIGH +impactDescription: Reclaim 30-50% of connection slots from idle clients +tags: connections, timeout, idle, resource-management +--- + +## Configure Idle Connection Timeouts + +Idle connections waste resources. Configure timeouts to automatically reclaim them. + +**Incorrect (connections held indefinitely):** + +```sql +-- No timeout configured +show idle_in_transaction_session_timeout; -- 0 (disabled) + +-- Connections stay open forever, even when idle +select pid, state, state_change, query +from pg_stat_activity +where state = 'idle in transaction'; +-- Shows transactions idle for hours, holding locks +``` + +**Correct (automatic cleanup of idle connections):** + +```sql +-- Terminate connections idle in transaction after 30 seconds +alter system set idle_in_transaction_session_timeout = '30s'; + +-- Terminate completely idle connections after 10 minutes +alter system set idle_session_timeout = '10min'; + +-- Reload configuration +select pg_reload_conf(); +``` + +For pooled connections, configure at the pooler level: + +```ini +# pgbouncer.ini +server_idle_timeout = 60 +client_idle_timeout = 300 +``` + +Reference: [Connection Timeouts](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/conn-limits.md b/.cursor/skills/supabase-postgres-best-practices/references/conn-limits.md new file mode 100644 index 0000000..cb3e400 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/conn-limits.md @@ -0,0 +1,44 @@ +--- +title: Set Appropriate Connection Limits +impact: CRITICAL +impactDescription: Prevent database crashes and memory exhaustion +tags: connections, max-connections, limits, stability +--- + +## Set Appropriate Connection Limits + +Too many connections exhaust memory and degrade performance. Set limits based on available resources. 
+ +**Incorrect (unlimited or excessive connections):** + +```sql +-- Default max_connections = 100, but often increased blindly +show max_connections; -- 500 (way too high for 4GB RAM) + +-- Each connection uses 1-3MB RAM +-- 500 connections * 2MB = 1GB just for connections! +-- Out of memory errors under load +``` + +**Correct (calculate based on resources):** + +```sql +-- Formula: max_connections = (RAM in MB / 5MB per connection) - reserved +-- For 4GB RAM: (4096 / 5) - 10 = ~800 theoretical max +-- But practically, 100-200 is better for query performance + +-- Recommended settings for 4GB RAM +alter system set max_connections = 100; + +-- Also set work_mem appropriately +-- work_mem * max_connections should not exceed 25% of RAM +alter system set work_mem = '8MB'; -- 8MB * 100 = 800MB max +``` + +Monitor connection usage: + +```sql +select count(*), state from pg_stat_activity group by state; +``` + +Reference: [Database Connections](https://supabase.com/docs/guides/platform/performance#connection-management) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/conn-pooling.md b/.cursor/skills/supabase-postgres-best-practices/references/conn-pooling.md new file mode 100644 index 0000000..e2ebd58 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/conn-pooling.md @@ -0,0 +1,41 @@ +--- +title: Use Connection Pooling for All Applications +impact: CRITICAL +impactDescription: Handle 10-100x more concurrent users +tags: connection-pooling, pgbouncer, performance, scalability +--- + +## Use Connection Pooling for All Applications + +Postgres connections are expensive (1-3MB RAM each). Without pooling, applications exhaust connections under load. + +**Incorrect (new connection per request):** + +```sql +-- Each request creates a new connection +-- Application code: db.connect() per request +-- Result: 500 concurrent users = 500 connections = crashed database + +-- Check current connections +select count(*) from pg_stat_activity; -- 487 connections! +``` + +**Correct (connection pooling):** + +```sql +-- Use a pooler like PgBouncer between app and database +-- Application connects to pooler, pooler reuses a small pool to Postgres + +-- Configure pool_size based on: (CPU cores * 2) + spindle_count +-- Example for 4 cores: pool_size = 10 + +-- Result: 500 concurrent users share 10 actual connections +select count(*) from pg_stat_activity; -- 10 connections +``` + +Pool modes: + +- **Transaction mode**: connection returned after each transaction (best for most apps) +- **Session mode**: connection held for entire session (needed for prepared statements, temp tables) + +Reference: [Connection Pooling](https://supabase.com/docs/guides/database/connecting-to-postgres#connection-pooler) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md b/.cursor/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md new file mode 100644 index 0000000..555547d --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md @@ -0,0 +1,46 @@ +--- +title: Use Prepared Statements Correctly with Pooling +impact: HIGH +impactDescription: Avoid prepared statement conflicts in pooled environments +tags: prepared-statements, connection-pooling, transaction-mode +--- + +## Use Prepared Statements Correctly with Pooling + +Prepared statements are tied to individual database connections. In transaction-mode pooling, connections are shared, causing conflicts. 
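+
+To see which statements the connection you are currently on has prepared (useful when debugging pooling conflicts), Postgres exposes the `pg_prepared_statements` view:
+
+```sql
+-- Prepared statements visible to the current session only
+select name, statement, prepare_time
+from pg_prepared_statements;
+```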
+ +**Incorrect (named prepared statements with transaction pooling):** + +```sql +-- Named prepared statement +prepare get_user as select * from users where id = $1; + +-- In transaction mode pooling, next request may get different connection +execute get_user(123); +-- ERROR: prepared statement "get_user" does not exist +``` + +**Correct (use unnamed statements or session mode):** + +```sql +-- Option 1: Use unnamed prepared statements (most ORMs do this automatically) +-- The query is prepared and executed in a single protocol message + +-- Option 2: Deallocate after use in transaction mode +prepare get_user as select * from users where id = $1; +execute get_user(123); +deallocate get_user; + +-- Option 3: Use session mode pooling (port 5432 vs 6543) +-- Connection is held for entire session, prepared statements persist +``` + +Check your driver settings: + +```sql +-- Many drivers use prepared statements by default +-- Node.js pg: { prepare: false } to disable +-- JDBC: prepareThreshold=0 to disable +``` + +Reference: [Prepared Statements with Pooling](https://supabase.com/docs/guides/database/connecting-to-postgres#connection-pool-modes) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/data-batch-inserts.md b/.cursor/skills/supabase-postgres-best-practices/references/data-batch-inserts.md new file mode 100644 index 0000000..997947c --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/data-batch-inserts.md @@ -0,0 +1,54 @@ +--- +title: Batch INSERT Statements for Bulk Data +impact: MEDIUM +impactDescription: 10-50x faster bulk inserts +tags: batch, insert, bulk, performance, copy +--- + +## Batch INSERT Statements for Bulk Data + +Individual INSERT statements have high overhead. Batch multiple rows in single statements or use COPY. + +**Incorrect (individual inserts):** + +```sql +-- Each insert is a separate transaction and round trip +insert into events (user_id, action) values (1, 'click'); +insert into events (user_id, action) values (1, 'view'); +insert into events (user_id, action) values (2, 'click'); +-- ... 1000 more individual inserts + +-- 1000 inserts = 1000 round trips = slow +``` + +**Correct (batch insert):** + +```sql +-- Multiple rows in single statement +insert into events (user_id, action) values + (1, 'click'), + (1, 'view'), + (2, 'click'), + -- ... up to ~1000 rows per batch + (999, 'view'); + +-- One round trip for 1000 rows +``` + +For large imports, use COPY: + +```sql +-- COPY is fastest for bulk loading +copy events (user_id, action, created_at) +from '/path/to/data.csv' +with (format csv, header true); + +-- Or from stdin in application +copy events (user_id, action) from stdin with (format csv); +1,click +1,view +2,click +\. +``` + +Reference: [COPY](https://www.postgresql.org/docs/current/sql-copy.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/data-n-plus-one.md b/.cursor/skills/supabase-postgres-best-practices/references/data-n-plus-one.md new file mode 100644 index 0000000..2109186 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/data-n-plus-one.md @@ -0,0 +1,53 @@ +--- +title: Eliminate N+1 Queries with Batch Loading +impact: MEDIUM-HIGH +impactDescription: 10-100x fewer database round trips +tags: n-plus-one, batch, performance, queries +--- + +## Eliminate N+1 Queries with Batch Loading + +N+1 queries execute one query per item in a loop. Batch them into a single query using arrays or JOINs. 
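+
+The pattern is often easiest to spot from the database side: the same normalized statement shows up with a very high call count and a tiny mean time in `pg_stat_statements` (assuming that extension is enabled, as covered in the monitoring references; thresholds below are illustrative):
+
+```sql
+-- Cheap queries executed thousands of times are a typical N+1 signature
+select calls, round(mean_exec_time::numeric, 2) as mean_time_ms, query
+from pg_stat_statements
+where calls > 1000 and mean_exec_time < 5
+order by calls desc
+limit 10;
+```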
+ +**Incorrect (N+1 queries):** + +```sql +-- First query: get all users +select id from users where active = true; -- Returns 100 IDs + +-- Then N queries, one per user +select * from orders where user_id = 1; +select * from orders where user_id = 2; +select * from orders where user_id = 3; +-- ... 97 more queries! + +-- Total: 101 round trips to database +``` + +**Correct (single batch query):** + +```sql +-- Collect IDs and query once with ANY +select * from orders where user_id = any(array[1, 2, 3, ...]); + +-- Or use JOIN instead of loop +select u.id, u.name, o.* +from users u +left join orders o on o.user_id = u.id +where u.active = true; + +-- Total: 1 round trip +``` + +Application pattern: + +```sql +-- Instead of looping in application code: +-- for user in users: db.query("SELECT * FROM orders WHERE user_id = $1", user.id) + +-- Pass array parameter: +select * from orders where user_id = any($1::bigint[]); +-- Application passes: [1, 2, 3, 4, 5, ...] +``` + +Reference: [N+1 Query Problem](https://supabase.com/docs/guides/database/query-optimization) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/data-pagination.md b/.cursor/skills/supabase-postgres-best-practices/references/data-pagination.md new file mode 100644 index 0000000..633d839 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/data-pagination.md @@ -0,0 +1,50 @@ +--- +title: Use Cursor-Based Pagination Instead of OFFSET +impact: MEDIUM-HIGH +impactDescription: Consistent O(1) performance regardless of page depth +tags: pagination, cursor, keyset, offset, performance +--- + +## Use Cursor-Based Pagination Instead of OFFSET + +OFFSET-based pagination scans all skipped rows, getting slower on deeper pages. Cursor pagination is O(1). + +**Incorrect (OFFSET pagination):** + +```sql +-- Page 1: scans 20 rows +select * from products order by id limit 20 offset 0; + +-- Page 100: scans 2000 rows to skip 1980 +select * from products order by id limit 20 offset 1980; + +-- Page 10000: scans 200,000 rows! +select * from products order by id limit 20 offset 199980; +``` + +**Correct (cursor/keyset pagination):** + +```sql +-- Page 1: get first 20 +select * from products order by id limit 20; +-- Application stores last_id = 20 + +-- Page 2: start after last ID +select * from products where id > 20 order by id limit 20; +-- Uses index, always fast regardless of page depth + +-- Page 10000: same speed as page 1 +select * from products where id > 199980 order by id limit 20; +``` + +For multi-column sorting: + +```sql +-- Cursor must include all sort columns +select * from products +where (created_at, id) > ('2024-01-15 10:00:00', 12345) +order by created_at, id +limit 20; +``` + +Reference: [Pagination](https://supabase.com/docs/guides/database/pagination) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/data-upsert.md b/.cursor/skills/supabase-postgres-best-practices/references/data-upsert.md new file mode 100644 index 0000000..bc95e23 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/data-upsert.md @@ -0,0 +1,50 @@ +--- +title: Use UPSERT for Insert-or-Update Operations +impact: MEDIUM +impactDescription: Atomic operation, eliminates race conditions +tags: upsert, on-conflict, insert, update +--- + +## Use UPSERT for Insert-or-Update Operations + +Using separate SELECT-then-INSERT/UPDATE creates race conditions. Use INSERT ... ON CONFLICT for atomic upserts. 
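+
+Note that `ON CONFLICT (...)` only works against a matching unique constraint or unique index; a minimal sketch of the table the examples below assume:
+
+```sql
+-- The unique constraint on (user_id, key) is what ON CONFLICT targets
+create table settings (
+  user_id bigint not null,
+  key text not null,
+  value text,
+  updated_at timestamptz default now(),
+  unique (user_id, key)
+);
+```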
+ +**Incorrect (check-then-insert race condition):** + +```sql +-- Race condition: two requests check simultaneously +select * from settings where user_id = 123 and key = 'theme'; +-- Both find nothing + +-- Both try to insert +insert into settings (user_id, key, value) values (123, 'theme', 'dark'); +-- One succeeds, one fails with duplicate key error! +``` + +**Correct (atomic UPSERT):** + +```sql +-- Single atomic operation +insert into settings (user_id, key, value) +values (123, 'theme', 'dark') +on conflict (user_id, key) +do update set value = excluded.value, updated_at = now(); + +-- Returns the inserted/updated row +insert into settings (user_id, key, value) +values (123, 'theme', 'dark') +on conflict (user_id, key) +do update set value = excluded.value +returning *; +``` + +Insert-or-ignore pattern: + +```sql +-- Insert only if not exists (no update) +insert into page_views (page_id, user_id) +values (1, 123) +on conflict (page_id, user_id) do nothing; +``` + +Reference: [INSERT ON CONFLICT](https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/lock-advisory.md b/.cursor/skills/supabase-postgres-best-practices/references/lock-advisory.md new file mode 100644 index 0000000..572eaf0 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/lock-advisory.md @@ -0,0 +1,56 @@ +--- +title: Use Advisory Locks for Application-Level Locking +impact: MEDIUM +impactDescription: Efficient coordination without row-level lock overhead +tags: advisory-locks, coordination, application-locks +--- + +## Use Advisory Locks for Application-Level Locking + +Advisory locks provide application-level coordination without requiring database rows to lock. + +**Incorrect (creating rows just for locking):** + +```sql +-- Creating dummy rows to lock on +create table resource_locks ( + resource_name text primary key +); + +insert into resource_locks values ('report_generator'); + +-- Lock by selecting the row +select * from resource_locks where resource_name = 'report_generator' for update; +``` + +**Correct (advisory locks):** + +```sql +-- Session-level advisory lock (released on disconnect or unlock) +select pg_advisory_lock(hashtext('report_generator')); +-- ... do exclusive work ... +select pg_advisory_unlock(hashtext('report_generator')); + +-- Transaction-level lock (released on commit/rollback) +begin; +select pg_advisory_xact_lock(hashtext('daily_report')); +-- ... do work ... 
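+-- (pg_advisory_xact_lock has no matching unlock call; the lock is released
+--  automatically when the transaction commits or rolls back)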
+commit; -- Lock automatically released +``` + +Try-lock for non-blocking operations: + +```sql +-- Returns immediately with true/false instead of waiting +select pg_try_advisory_lock(hashtext('resource_name')); + +-- Use in application +if (acquired) { + -- Do work + select pg_advisory_unlock(hashtext('resource_name')); +} else { + -- Skip or retry later +} +``` + +Reference: [Advisory Locks](https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md b/.cursor/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md new file mode 100644 index 0000000..974da5e --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md @@ -0,0 +1,68 @@ +--- +title: Prevent Deadlocks with Consistent Lock Ordering +impact: MEDIUM-HIGH +impactDescription: Eliminate deadlock errors, improve reliability +tags: deadlocks, locking, transactions, ordering +--- + +## Prevent Deadlocks with Consistent Lock Ordering + +Deadlocks occur when transactions lock resources in different orders. Always +acquire locks in a consistent order. + +**Incorrect (inconsistent lock ordering):** + +```sql +-- Transaction A -- Transaction B +begin; begin; +update accounts update accounts +set balance = balance - 100 set balance = balance - 50 +where id = 1; where id = 2; -- B locks row 2 + +update accounts update accounts +set balance = balance + 100 set balance = balance + 50 +where id = 2; -- A waits for B where id = 1; -- B waits for A + +-- DEADLOCK! Both waiting for each other +``` + +**Correct (lock rows in consistent order first):** + +```sql +-- Explicitly acquire locks in ID order before updating +begin; +select * from accounts where id in (1, 2) order by id for update; + +-- Now perform updates in any order - locks already held +update accounts set balance = balance - 100 where id = 1; +update accounts set balance = balance + 100 where id = 2; +commit; +``` + +Alternative: use a single statement to update atomically: + +```sql +-- Single statement acquires all locks atomically +begin; +update accounts +set balance = balance + case id + when 1 then -100 + when 2 then 100 +end +where id in (1, 2); +commit; +``` + +Detect deadlocks in logs: + +```sql +-- Check for recent deadlocks +select * from pg_stat_database where deadlocks > 0; + +-- Enable deadlock logging +set log_lock_waits = on; +set deadlock_timeout = '1s'; +``` + +Reference: +[Deadlocks](https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-DEADLOCKS) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/lock-short-transactions.md b/.cursor/skills/supabase-postgres-best-practices/references/lock-short-transactions.md new file mode 100644 index 0000000..e6b8ef2 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/lock-short-transactions.md @@ -0,0 +1,50 @@ +--- +title: Keep Transactions Short to Reduce Lock Contention +impact: MEDIUM-HIGH +impactDescription: 3-5x throughput improvement, fewer deadlocks +tags: transactions, locking, contention, performance +--- + +## Keep Transactions Short to Reduce Lock Contention + +Long-running transactions hold locks that block other queries. Keep transactions as short as possible. 
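+
+To check whether long-open transactions are already a problem, `pg_stat_activity` shows how long each one has been running:
+
+```sql
+-- Transactions open the longest, with what they are doing
+select pid, now() - xact_start as xact_age, state, query
+from pg_stat_activity
+where xact_start is not null
+order by xact_age desc;
+```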
+ +**Incorrect (long transaction with external calls):** + +```sql +begin; +select * from orders where id = 1 for update; -- Lock acquired + +-- Application makes HTTP call to payment API (2-5 seconds) +-- Other queries on this row are blocked! + +update orders set status = 'paid' where id = 1; +commit; -- Lock held for entire duration +``` + +**Correct (minimal transaction scope):** + +```sql +-- Validate data and call APIs outside transaction +-- Application: response = await paymentAPI.charge(...) + +-- Only hold lock for the actual update +begin; +update orders +set status = 'paid', payment_id = $1 +where id = $2 and status = 'pending' +returning *; +commit; -- Lock held for milliseconds +``` + +Use `statement_timeout` to prevent runaway transactions: + +```sql +-- Abort queries running longer than 30 seconds +set statement_timeout = '30s'; + +-- Or per-session +set local statement_timeout = '5s'; +``` + +Reference: [Transaction Management](https://www.postgresql.org/docs/current/tutorial-transactions.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/lock-skip-locked.md b/.cursor/skills/supabase-postgres-best-practices/references/lock-skip-locked.md new file mode 100644 index 0000000..77bdbb9 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/lock-skip-locked.md @@ -0,0 +1,54 @@ +--- +title: Use SKIP LOCKED for Non-Blocking Queue Processing +impact: MEDIUM-HIGH +impactDescription: 10x throughput for worker queues +tags: skip-locked, queue, workers, concurrency +--- + +## Use SKIP LOCKED for Non-Blocking Queue Processing + +When multiple workers process a queue, SKIP LOCKED allows workers to process different rows without waiting. + +**Incorrect (workers block each other):** + +```sql +-- Worker 1 and Worker 2 both try to get next job +begin; +select * from jobs where status = 'pending' order by created_at limit 1 for update; +-- Worker 2 waits for Worker 1's lock to release! +``` + +**Correct (SKIP LOCKED for parallel processing):** + +```sql +-- Each worker skips locked rows and gets the next available +begin; +select * from jobs +where status = 'pending' +order by created_at +limit 1 +for update skip locked; + +-- Worker 1 gets job 1, Worker 2 gets job 2 (no waiting) + +update jobs set status = 'processing' where id = $1; +commit; +``` + +Complete queue pattern: + +```sql +-- Atomic claim-and-update in one statement +update jobs +set status = 'processing', worker_id = $1, started_at = now() +where id = ( + select id from jobs + where status = 'pending' + order by created_at + limit 1 + for update skip locked +) +returning *; +``` + +Reference: [SELECT FOR UPDATE SKIP LOCKED](https://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md b/.cursor/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md new file mode 100644 index 0000000..542978c --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md @@ -0,0 +1,45 @@ +--- +title: Use EXPLAIN ANALYZE to Diagnose Slow Queries +impact: LOW-MEDIUM +impactDescription: Identify exact bottlenecks in query execution +tags: explain, analyze, diagnostics, query-plan +--- + +## Use EXPLAIN ANALYZE to Diagnose Slow Queries + +EXPLAIN ANALYZE executes the query and shows actual timings, revealing the true performance bottlenecks. 
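+
+The queue queries below filter and sort on `status = 'pending'` and `created_at`; a partial index matching that predicate keeps the claim step fast as the table grows (a suggested addition, not part of the pattern itself):
+
+```sql
+-- Small index covering only claimable jobs
+create index jobs_pending_created_idx on jobs (created_at)
+where status = 'pending';
+```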
+ +**Incorrect (guessing at performance issues):** + +```sql +-- Query is slow, but why? +select * from orders where customer_id = 123 and status = 'pending'; +-- "It must be missing an index" - but which one? +``` + +**Correct (use EXPLAIN ANALYZE):** + +```sql +explain (analyze, buffers, format text) +select * from orders where customer_id = 123 and status = 'pending'; + +-- Output reveals the issue: +-- Seq Scan on orders (cost=0.00..25000.00 rows=50 width=100) (actual time=0.015..450.123 rows=50 loops=1) +-- Filter: ((customer_id = 123) AND (status = 'pending'::text)) +-- Rows Removed by Filter: 999950 +-- Buffers: shared hit=5000 read=15000 +-- Planning Time: 0.150 ms +-- Execution Time: 450.500 ms +``` + +Key things to look for: + +```sql +-- Seq Scan on large tables = missing index +-- Rows Removed by Filter = poor selectivity or missing index +-- Buffers: read >> hit = data not cached, needs more memory +-- Nested Loop with high loops = consider different join strategy +-- Sort Method: external merge = work_mem too low +``` + +Reference: [EXPLAIN](https://supabase.com/docs/guides/database/inspect) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md b/.cursor/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md new file mode 100644 index 0000000..d7e82f1 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md @@ -0,0 +1,55 @@ +--- +title: Enable pg_stat_statements for Query Analysis +impact: LOW-MEDIUM +impactDescription: Identify top resource-consuming queries +tags: pg-stat-statements, monitoring, statistics, performance +--- + +## Enable pg_stat_statements for Query Analysis + +pg_stat_statements tracks execution statistics for all queries, helping identify slow and frequent queries. + +**Incorrect (no visibility into query patterns):** + +```sql +-- Database is slow, but which queries are the problem? 
+-- No way to know without pg_stat_statements +``` + +**Correct (enable and query pg_stat_statements):** + +```sql +-- Enable the extension +create extension if not exists pg_stat_statements; + +-- Find slowest queries by total time +select + calls, + round(total_exec_time::numeric, 2) as total_time_ms, + round(mean_exec_time::numeric, 2) as mean_time_ms, + query +from pg_stat_statements +order by total_exec_time desc +limit 10; + +-- Find most frequent queries +select calls, query +from pg_stat_statements +order by calls desc +limit 10; + +-- Reset statistics after optimization +select pg_stat_statements_reset(); +``` + +Key metrics to monitor: + +```sql +-- Queries with high mean time (candidates for optimization) +select query, mean_exec_time, calls +from pg_stat_statements +where mean_exec_time > 100 -- > 100ms average +order by mean_exec_time desc; +``` + +Reference: [pg_stat_statements](https://supabase.com/docs/guides/database/extensions/pg_stat_statements) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md b/.cursor/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md new file mode 100644 index 0000000..e0e8ea0 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md @@ -0,0 +1,55 @@ +--- +title: Maintain Table Statistics with VACUUM and ANALYZE +impact: MEDIUM +impactDescription: 2-10x better query plans with accurate statistics +tags: vacuum, analyze, statistics, maintenance, autovacuum +--- + +## Maintain Table Statistics with VACUUM and ANALYZE + +Outdated statistics cause the query planner to make poor decisions. VACUUM reclaims space, ANALYZE updates statistics. + +**Incorrect (stale statistics):** + +```sql +-- Table has 1M rows but stats say 1000 +-- Query planner chooses wrong strategy +explain select * from orders where status = 'pending'; +-- Shows: Seq Scan (because stats show small table) +-- Actually: Index Scan would be much faster +``` + +**Correct (maintain fresh statistics):** + +```sql +-- Manually analyze after large data changes +analyze orders; + +-- Analyze specific columns used in WHERE clauses +analyze orders (status, created_at); + +-- Check when tables were last analyzed +select + relname, + last_vacuum, + last_autovacuum, + last_analyze, + last_autoanalyze +from pg_stat_user_tables +order by last_analyze nulls first; +``` + +Autovacuum tuning for busy tables: + +```sql +-- Increase frequency for high-churn tables +alter table orders set ( + autovacuum_vacuum_scale_factor = 0.05, -- Vacuum at 5% dead tuples (default 20%) + autovacuum_analyze_scale_factor = 0.02 -- Analyze at 2% changes (default 10%) +); + +-- Check autovacuum status +select * from pg_stat_progress_vacuum; +``` + +Reference: [VACUUM](https://supabase.com/docs/guides/database/database-size#vacuum-operations) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/query-composite-indexes.md b/.cursor/skills/supabase-postgres-best-practices/references/query-composite-indexes.md new file mode 100644 index 0000000..fea6452 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/query-composite-indexes.md @@ -0,0 +1,44 @@ +--- +title: Create Composite Indexes for Multi-Column Queries +impact: HIGH +impactDescription: 5-10x faster multi-column queries +tags: indexes, composite-index, multi-column, query-optimization +--- + +## Create Composite Indexes for Multi-Column Queries + +When queries filter on multiple columns, a composite index is 
more efficient than separate single-column indexes. + +**Incorrect (separate indexes require bitmap scan):** + +```sql +-- Two separate indexes +create index orders_status_idx on orders (status); +create index orders_created_idx on orders (created_at); + +-- Query must combine both indexes (slower) +select * from orders where status = 'pending' and created_at > '2024-01-01'; +``` + +**Correct (composite index):** + +```sql +-- Single composite index (leftmost column first for equality checks) +create index orders_status_created_idx on orders (status, created_at); + +-- Query uses one efficient index scan +select * from orders where status = 'pending' and created_at > '2024-01-01'; +``` + +**Column order matters** - place equality columns first, range columns last: + +```sql +-- Good: status (=) before created_at (>) +create index idx on orders (status, created_at); + +-- Works for: WHERE status = 'pending' +-- Works for: WHERE status = 'pending' AND created_at > '2024-01-01' +-- Does NOT work for: WHERE created_at > '2024-01-01' (leftmost prefix rule) +``` + +Reference: [Multicolumn Indexes](https://www.postgresql.org/docs/current/indexes-multicolumn.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/query-covering-indexes.md b/.cursor/skills/supabase-postgres-best-practices/references/query-covering-indexes.md new file mode 100644 index 0000000..9d2a494 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/query-covering-indexes.md @@ -0,0 +1,40 @@ +--- +title: Use Covering Indexes to Avoid Table Lookups +impact: MEDIUM-HIGH +impactDescription: 2-5x faster queries by eliminating heap fetches +tags: indexes, covering-index, include, index-only-scan +--- + +## Use Covering Indexes to Avoid Table Lookups + +Covering indexes include all columns needed by a query, enabling index-only scans that skip the table entirely. + +**Incorrect (index scan + heap fetch):** + +```sql +create index users_email_idx on users (email); + +-- Must fetch name and created_at from table heap +select email, name, created_at from users where email = 'user@example.com'; +``` + +**Correct (index-only scan with INCLUDE):** + +```sql +-- Include non-searchable columns in the index +create index users_email_idx on users (email) include (name, created_at); + +-- All columns served from index, no table access needed +select email, name, created_at from users where email = 'user@example.com'; +``` + +Use INCLUDE for columns you SELECT but don't filter on: + +```sql +-- Searching by status, but also need customer_id and total +create index orders_status_idx on orders (status) include (customer_id, total); + +select status, customer_id, total from orders where status = 'shipped'; +``` + +Reference: [Index-Only Scans](https://www.postgresql.org/docs/current/indexes-index-only-scans.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/query-index-types.md b/.cursor/skills/supabase-postgres-best-practices/references/query-index-types.md new file mode 100644 index 0000000..0d7651a --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/query-index-types.md @@ -0,0 +1,45 @@ +--- +title: Choose the Right Index Type for Your Data +impact: HIGH +impactDescription: 10-100x improvement with correct index type +tags: indexes, btree, gin, brin, hash, index-types +--- + +## Choose the Right Index Type for Your Data + +Different index types excel at different query patterns. The default B-tree isn't always optimal. 
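+
+A quick way to audit which access method each existing index uses is the `pg_indexes` view:
+
+```sql
+-- Index definitions include the USING clause (btree, gin, brin, hash, ...)
+select tablename, indexname, indexdef
+from pg_indexes
+where schemaname = 'public'
+order by tablename;
+```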
+ +**Incorrect (B-tree for JSONB containment):** + +```sql +-- B-tree cannot optimize containment operators +create index products_attrs_idx on products (attributes); +select * from products where attributes @> '{"color": "red"}'; +-- Full table scan - B-tree doesn't support @> operator +``` + +**Correct (GIN for JSONB):** + +```sql +-- GIN supports @>, ?, ?&, ?| operators +create index products_attrs_idx on products using gin (attributes); +select * from products where attributes @> '{"color": "red"}'; +``` + +Index type guide: + +```sql +-- B-tree (default): =, <, >, BETWEEN, IN, IS NULL +create index users_created_idx on users (created_at); + +-- GIN: arrays, JSONB, full-text search +create index posts_tags_idx on posts using gin (tags); + +-- BRIN: large time-series tables (10-100x smaller) +create index events_time_idx on events using brin (created_at); + +-- Hash: equality-only (slightly faster than B-tree for =) +create index sessions_token_idx on sessions using hash (token); +``` + +Reference: [Index Types](https://www.postgresql.org/docs/current/indexes-types.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/query-missing-indexes.md b/.cursor/skills/supabase-postgres-best-practices/references/query-missing-indexes.md new file mode 100644 index 0000000..e6daace --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/query-missing-indexes.md @@ -0,0 +1,43 @@ +--- +title: Add Indexes on WHERE and JOIN Columns +impact: CRITICAL +impactDescription: 100-1000x faster queries on large tables +tags: indexes, performance, sequential-scan, query-optimization +--- + +## Add Indexes on WHERE and JOIN Columns + +Queries filtering or joining on unindexed columns cause full table scans, which become exponentially slower as tables grow. + +**Incorrect (sequential scan on large table):** + +```sql +-- No index on customer_id causes full table scan +select * from orders where customer_id = 123; + +-- EXPLAIN shows: Seq Scan on orders (cost=0.00..25000.00 rows=100 width=85) +``` + +**Correct (index scan):** + +```sql +-- Create index on frequently filtered column +create index orders_customer_id_idx on orders (customer_id); + +select * from orders where customer_id = 123; + +-- EXPLAIN shows: Index Scan using orders_customer_id_idx (cost=0.42..8.44 rows=100 width=85) +``` + +For JOIN columns, always index the foreign key side: + +```sql +-- Index the referencing column +create index orders_customer_id_idx on orders (customer_id); + +select c.name, o.total +from customers c +join orders o on o.customer_id = c.id; +``` + +Reference: [Query Optimization](https://supabase.com/docs/guides/database/query-optimization) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/query-partial-indexes.md b/.cursor/skills/supabase-postgres-best-practices/references/query-partial-indexes.md new file mode 100644 index 0000000..3e61a34 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/query-partial-indexes.md @@ -0,0 +1,45 @@ +--- +title: Use Partial Indexes for Filtered Queries +impact: HIGH +impactDescription: 5-20x smaller indexes, faster writes and queries +tags: indexes, partial-index, query-optimization, storage +--- + +## Use Partial Indexes for Filtered Queries + +Partial indexes only include rows matching a WHERE condition, making them smaller and faster when queries consistently filter on the same condition. 
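+
+The size difference is easy to verify with `pg_relation_size` once both variants exist (the index names here refer to the examples below):
+
+```sql
+-- Compare full vs. partial index size
+select pg_size_pretty(pg_relation_size('users_email_idx'))        as full_index,
+       pg_size_pretty(pg_relation_size('users_active_email_idx')) as partial_index;
+```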
+ +**Incorrect (full index includes irrelevant rows):** + +```sql +-- Index includes all rows, even soft-deleted ones +create index users_email_idx on users (email); + +-- Query always filters active users +select * from users where email = 'user@example.com' and deleted_at is null; +``` + +**Correct (partial index matches query filter):** + +```sql +-- Index only includes active users +create index users_active_email_idx on users (email) +where deleted_at is null; + +-- Query uses the smaller, faster index +select * from users where email = 'user@example.com' and deleted_at is null; +``` + +Common use cases for partial indexes: + +```sql +-- Only pending orders (status rarely changes once completed) +create index orders_pending_idx on orders (created_at) +where status = 'pending'; + +-- Only non-null values +create index products_sku_idx on products (sku) +where sku is not null; +``` + +Reference: [Partial Indexes](https://www.postgresql.org/docs/current/indexes-partial.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/schema-data-types.md b/.cursor/skills/supabase-postgres-best-practices/references/schema-data-types.md new file mode 100644 index 0000000..f253a58 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/schema-data-types.md @@ -0,0 +1,46 @@ +--- +title: Choose Appropriate Data Types +impact: HIGH +impactDescription: 50% storage reduction, faster comparisons +tags: data-types, schema, storage, performance +--- + +## Choose Appropriate Data Types + +Using the right data types reduces storage, improves query performance, and prevents bugs. + +**Incorrect (wrong data types):** + +```sql +create table users ( + id int, -- Will overflow at 2.1 billion + email varchar(255), -- Unnecessary length limit + created_at timestamp, -- Missing timezone info + is_active varchar(5), -- String for boolean + price varchar(20) -- String for numeric +); +``` + +**Correct (appropriate data types):** + +```sql +create table users ( + id bigint generated always as identity primary key, -- 9 quintillion max + email text, -- No artificial limit, same performance as varchar + created_at timestamptz, -- Always store timezone-aware timestamps + is_active boolean default true, -- 1 byte vs variable string length + price numeric(10,2) -- Exact decimal arithmetic +); +``` + +Key guidelines: + +```sql +-- IDs: use bigint, not int (future-proofing) +-- Strings: use text, not varchar(n) unless constraint needed +-- Time: use timestamptz, not timestamp +-- Money: use numeric, not float (precision matters) +-- Enums: use text with check constraint or create enum type +``` + +Reference: [Data Types](https://www.postgresql.org/docs/current/datatype.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md b/.cursor/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md new file mode 100644 index 0000000..6c3d6ff --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md @@ -0,0 +1,59 @@ +--- +title: Index Foreign Key Columns +impact: HIGH +impactDescription: 10-100x faster JOINs and CASCADE operations +tags: foreign-key, indexes, joins, schema +--- + +## Index Foreign Key Columns + +Postgres does not automatically index foreign key columns. Missing indexes cause slow JOINs and CASCADE operations. 
+ +**Incorrect (unindexed foreign key):** + +```sql +create table orders ( + id bigint generated always as identity primary key, + customer_id bigint references customers(id) on delete cascade, + total numeric(10,2) +); + +-- No index on customer_id! +-- JOINs and ON DELETE CASCADE both require full table scan +select * from orders where customer_id = 123; -- Seq Scan +delete from customers where id = 123; -- Locks table, scans all orders +``` + +**Correct (indexed foreign key):** + +```sql +create table orders ( + id bigint generated always as identity primary key, + customer_id bigint references customers(id) on delete cascade, + total numeric(10,2) +); + +-- Always index the FK column +create index orders_customer_id_idx on orders (customer_id); + +-- Now JOINs and cascades are fast +select * from orders where customer_id = 123; -- Index Scan +delete from customers where id = 123; -- Uses index, fast cascade +``` + +Find missing FK indexes: + +```sql +select + conrelid::regclass as table_name, + a.attname as fk_column +from pg_constraint c +join pg_attribute a on a.attrelid = c.conrelid and a.attnum = any(c.conkey) +where c.contype = 'f' + and not exists ( + select 1 from pg_index i + where i.indrelid = c.conrelid and a.attnum = any(i.indkey) + ); +``` + +Reference: [Foreign Keys](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-FK) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md b/.cursor/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md new file mode 100644 index 0000000..f007294 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md @@ -0,0 +1,55 @@ +--- +title: Use Lowercase Identifiers for Compatibility +impact: MEDIUM +impactDescription: Avoid case-sensitivity bugs with tools, ORMs, and AI assistants +tags: naming, identifiers, case-sensitivity, schema, conventions +--- + +## Use Lowercase Identifiers for Compatibility + +PostgreSQL folds unquoted identifiers to lowercase. Quoted mixed-case identifiers require quotes forever and cause issues with tools, ORMs, and AI assistants that may not recognize them. 
+ +**Incorrect (mixed-case identifiers):** + +```sql +-- Quoted identifiers preserve case but require quotes everywhere +CREATE TABLE "Users" ( + "userId" bigint PRIMARY KEY, + "firstName" text, + "lastName" text +); + +-- Must always quote or queries fail +SELECT "firstName" FROM "Users" WHERE "userId" = 1; + +-- This fails - Users becomes users without quotes +SELECT firstName FROM Users; +-- ERROR: relation "users" does not exist +``` + +**Correct (lowercase snake_case):** + +```sql +-- Unquoted lowercase identifiers are portable and tool-friendly +CREATE TABLE users ( + user_id bigint PRIMARY KEY, + first_name text, + last_name text +); + +-- Works without quotes, recognized by all tools +SELECT first_name FROM users WHERE user_id = 1; +``` + +Common sources of mixed-case identifiers: + +```sql +-- ORMs often generate quoted camelCase - configure them to use snake_case +-- Migrations from other databases may preserve original casing +-- Some GUI tools quote identifiers by default - disable this + +-- If stuck with mixed-case, create views as a compatibility layer +CREATE VIEW users AS SELECT "userId" AS user_id, "firstName" AS first_name FROM "Users"; +``` + +Reference: [Identifiers and Key Words](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/schema-partitioning.md b/.cursor/skills/supabase-postgres-best-practices/references/schema-partitioning.md new file mode 100644 index 0000000..13137a0 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/schema-partitioning.md @@ -0,0 +1,55 @@ +--- +title: Partition Large Tables for Better Performance +impact: MEDIUM-HIGH +impactDescription: 5-20x faster queries and maintenance on large tables +tags: partitioning, large-tables, time-series, performance +--- + +## Partition Large Tables for Better Performance + +Partitioning splits a large table into smaller pieces, improving query performance and maintenance operations. 
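+
+Before reaching for partitioning, check how large the candidate table actually is (the `events` name matches the example below):
+
+```sql
+-- Total size including indexes and TOAST, plus the planner's row estimate
+select pg_size_pretty(pg_total_relation_size('events')) as total_size,
+       (select reltuples::bigint from pg_class where oid = 'events'::regclass) as approx_rows;
+```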
+ +**Incorrect (single large table):** + +```sql +create table events ( + id bigint generated always as identity, + created_at timestamptz, + data jsonb +); + +-- 500M rows, queries scan everything +select * from events where created_at > '2024-01-01'; -- Slow +vacuum events; -- Takes hours, locks table +``` + +**Correct (partitioned by time range):** + +```sql +create table events ( + id bigint generated always as identity, + created_at timestamptz not null, + data jsonb +) partition by range (created_at); + +-- Create partitions for each month +create table events_2024_01 partition of events + for values from ('2024-01-01') to ('2024-02-01'); + +create table events_2024_02 partition of events + for values from ('2024-02-01') to ('2024-03-01'); + +-- Queries only scan relevant partitions +select * from events where created_at > '2024-01-15'; -- Only scans events_2024_01+ + +-- Drop old data instantly +drop table events_2023_01; -- Instant vs DELETE taking hours +``` + +When to partition: + +- Tables > 100M rows +- Time-series data with date-based queries +- Need to efficiently drop old data + +Reference: [Table Partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/schema-primary-keys.md b/.cursor/skills/supabase-postgres-best-practices/references/schema-primary-keys.md new file mode 100644 index 0000000..fb0fbb1 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/schema-primary-keys.md @@ -0,0 +1,61 @@ +--- +title: Select Optimal Primary Key Strategy +impact: HIGH +impactDescription: Better index locality, reduced fragmentation +tags: primary-key, identity, uuid, serial, schema +--- + +## Select Optimal Primary Key Strategy + +Primary key choice affects insert performance, index size, and replication +efficiency. 
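+
+To audit which primary key types are already in use across the schema (useful when deciding whether to migrate `int` or random-UUID keys), a catalog query such as:
+
+```sql
+-- Primary key column and data type per table in the public schema
+select c.relname as table_name,
+       a.attname as pk_column,
+       format_type(a.atttypid, a.atttypmod) as data_type
+from pg_index i
+join pg_class c on c.oid = i.indrelid
+join pg_attribute a on a.attrelid = c.oid and a.attnum = any(i.indkey)
+where i.indisprimary
+  and c.relnamespace = 'public'::regnamespace
+order by c.relname;
+```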
+ +**Incorrect (problematic PK choices):** + +```sql +-- identity is the SQL-standard approach +create table users ( + id serial primary key -- Works, but IDENTITY is recommended +); + +-- Random UUIDs (v4) cause index fragmentation +create table orders ( + id uuid default gen_random_uuid() primary key -- UUIDv4 = random = scattered inserts +); +``` + +**Correct (optimal PK strategies):** + +```sql +-- Use IDENTITY for sequential IDs (SQL-standard, best for most cases) +create table users ( + id bigint generated always as identity primary key +); + +-- For distributed systems needing UUIDs, use UUIDv7 (time-ordered) +-- Requires pg_uuidv7 extension: create extension pg_uuidv7; +create table orders ( + id uuid default uuid_generate_v7() primary key -- Time-ordered, no fragmentation +); + +-- Alternative: time-prefixed IDs for sortable, distributed IDs (no extension needed) +create table events ( + id text default concat( + to_char(now() at time zone 'utc', 'YYYYMMDDHH24MISSMS'), + gen_random_uuid()::text + ) primary key +); +``` + +Guidelines: + +- Single database: `bigint identity` (sequential, 8 bytes, SQL-standard) +- Distributed/exposed IDs: UUIDv7 (requires pg_uuidv7) or ULID (time-ordered, no + fragmentation) +- `serial` works but `identity` is SQL-standard and preferred for new + applications +- Avoid random UUIDs (v4) as primary keys on large tables (causes index + fragmentation) + +Reference: +[Identity Columns](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-PARMS-GENERATED-IDENTITY) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/security-privileges.md b/.cursor/skills/supabase-postgres-best-practices/references/security-privileges.md new file mode 100644 index 0000000..448ec34 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/security-privileges.md @@ -0,0 +1,54 @@ +--- +title: Apply Principle of Least Privilege +impact: MEDIUM +impactDescription: Reduced attack surface, better audit trail +tags: privileges, security, roles, permissions +--- + +## Apply Principle of Least Privilege + +Grant only the minimum permissions required. Never use superuser for application queries. 
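+
+Auditing what a role can already do is a good starting point; the standard `information_schema` grant views cover table-level privileges (the `app_user` role name matches the example below):
+
+```sql
+-- Table privileges currently granted to a role
+select grantee, table_schema, table_name, privilege_type
+from information_schema.role_table_grants
+where grantee = 'app_user'
+order by table_name, privilege_type;
+```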
+ +**Incorrect (overly broad permissions):** + +```sql +-- Application uses superuser connection +-- Or grants ALL to application role +grant all privileges on all tables in schema public to app_user; +grant all privileges on all sequences in schema public to app_user; + +-- Any SQL injection becomes catastrophic +-- drop table users; cascades to everything +``` + +**Correct (minimal, specific grants):** + +```sql +-- Create role with no default privileges +create role app_readonly nologin; + +-- Grant only SELECT on specific tables +grant usage on schema public to app_readonly; +grant select on public.products, public.categories to app_readonly; + +-- Create role for writes with limited scope +create role app_writer nologin; +grant usage on schema public to app_writer; +grant select, insert, update on public.orders to app_writer; +grant usage on sequence orders_id_seq to app_writer; +-- No DELETE permission + +-- Login role inherits from these +create role app_user login password 'xxx'; +grant app_writer to app_user; +``` + +Revoke public defaults: + +```sql +-- Revoke default public access +revoke all on schema public from public; +revoke all on all tables in schema public from public; +``` + +Reference: [Roles and Privileges](https://supabase.com/blog/postgres-roles-and-privileges) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/security-rls-basics.md b/.cursor/skills/supabase-postgres-best-practices/references/security-rls-basics.md new file mode 100644 index 0000000..c61e1a8 --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/security-rls-basics.md @@ -0,0 +1,50 @@ +--- +title: Enable Row Level Security for Multi-Tenant Data +impact: CRITICAL +impactDescription: Database-enforced tenant isolation, prevent data leaks +tags: rls, row-level-security, multi-tenant, security +--- + +## Enable Row Level Security for Multi-Tenant Data + +Row Level Security (RLS) enforces data access at the database level, ensuring users only see their own data. + +**Incorrect (application-level filtering only):** + +```sql +-- Relying only on application to filter +select * from orders where user_id = $current_user_id; + +-- Bug or bypass means all data is exposed! 
+select * from orders; -- Returns ALL orders +``` + +**Correct (database-enforced RLS):** + +```sql +-- Enable RLS on the table +alter table orders enable row level security; + +-- Create policy for users to see only their orders +create policy orders_user_policy on orders + for all + using (user_id = current_setting('app.current_user_id')::bigint); + +-- Force RLS even for table owners +alter table orders force row level security; + +-- Set user context and query +set app.current_user_id = '123'; +select * from orders; -- Only returns orders for user 123 +``` + +Policy for authenticated role: + +```sql +create policy orders_user_policy on orders + for all + to authenticated + using (user_id = auth.uid()); +``` + +Reference: [Row Level Security](https://supabase.com/docs/guides/database/postgres/row-level-security) diff --git a/.cursor/skills/supabase-postgres-best-practices/references/security-rls-performance.md b/.cursor/skills/supabase-postgres-best-practices/references/security-rls-performance.md new file mode 100644 index 0000000..b32d92f --- /dev/null +++ b/.cursor/skills/supabase-postgres-best-practices/references/security-rls-performance.md @@ -0,0 +1,57 @@ +--- +title: Optimize RLS Policies for Performance +impact: HIGH +impactDescription: 5-10x faster RLS queries with proper patterns +tags: rls, performance, security, optimization +--- + +## Optimize RLS Policies for Performance + +Poorly written RLS policies can cause severe performance issues. Use subqueries and indexes strategically. + +**Incorrect (function called for every row):** + +```sql +create policy orders_policy on orders + using (auth.uid() = user_id); -- auth.uid() called per row! + +-- With 1M rows, auth.uid() is called 1M times +``` + +**Correct (wrap functions in SELECT):** + +```sql +create policy orders_policy on orders + using ((select auth.uid()) = user_id); -- Called once, cached + +-- 100x+ faster on large tables +``` + +Use security definer functions for complex checks: + +```sql +-- Create helper function (runs as definer, bypasses RLS) +create or replace function is_team_member(team_id bigint) +returns boolean +language sql +security definer +set search_path = '' +as $$ + select exists ( + select 1 from public.team_members + where team_id = $1 and user_id = (select auth.uid()) + ); +$$; + +-- Use in policy (indexed lookup, not per-row check) +create policy team_orders_policy on orders + using ((select is_team_member(team_id))); +``` + +Always add indexes on columns used in RLS policies: + +```sql +create index orders_user_id_idx on orders (user_id); +``` + +Reference: [RLS Performance](https://supabase.com/docs/guides/database/postgres/row-level-security#rls-performance-recommendations) diff --git a/.cursor/wiki/robots.txt b/.cursor/wiki/robots.txt new file mode 100644 index 0000000..aa82afb --- /dev/null +++ b/.cursor/wiki/robots.txt @@ -0,0 +1,34 @@ +# HyperAgent AI Crawler Rules +# This file defines what AI agents can and cannot access + +# Allow access to documentation and code +Allow: /docs/ +Allow: /scripts/ +Allow: /hyperagent/ +Allow: /frontend/ +Allow: /packages/ +Allow: /contracts/ +Allow: /.cursor/skills/ +Allow: /.cursor/llm/ +Allow: /.cursor/rules/ + +# Block access to sensitive files +Disallow: /.env +Disallow: /.env.* +Disallow: /secrets/ +Disallow: /node_modules/ +Disallow: /.git/ +Disallow: /dist/ +Disallow: /build/ + +# Block access to temporary files +Disallow: /tmp/ +Disallow: /temp/ +Disallow: /__pycache__/ +Disallow: /.pytest_cache/ + +# Allow root level project files +Allow: 
/README.md +Allow: /CLAUDE.md +Allow: /.cursorrules +Allow: /llms.txt diff --git a/.cursorrules b/.cursorrules new file mode 100644 index 0000000..0ae46eb --- /dev/null +++ b/.cursorrules @@ -0,0 +1,191 @@ +# HyperAgent Cursor Editor Rules + +## Project Context + +HyperAgent is an AI-powered smart contract development platform that transforms natural language specifications into production-ready, audited contracts deployed across multiple EVM chains. + +## Code Style + +### Python +- Use Python 3.11+ features +- Follow PEP 8 style guide +- Use type hints for all function parameters and return types +- Use Black for code formatting +- Use isort for import sorting +- Maximum line length: 100 characters + +### TypeScript/JavaScript +- Use TypeScript 5.x with strict mode +- Follow ESLint and Prettier configurations +- Use async/await instead of promises +- Prefer const over let, avoid var +- Use meaningful variable and function names + +### Solidity +- Use Solidity 0.8.24+ +- Follow OpenZeppelin patterns +- Use NatSpec comments for all public functions +- Implement access control patterns +- Use events for important state changes + +## File Organization + +### Directory Structure +- Follow the monorepo structure defined in the spec +- Keep related files together +- Use clear, descriptive directory names +- Separate concerns (API, agents, contracts, etc.) + +### Naming Conventions +- Files: kebab-case for Python, camelCase for TypeScript +- Classes: PascalCase +- Functions: snake_case for Python, camelCase for TypeScript +- Constants: UPPER_SNAKE_CASE +- Private members: prefix with underscore + +## Development Workflow + +### Before Making Changes +1. Check `.cursor/skills/` for relevant skills +2. Check `.cursor/llm/` for LLM resources +3. Review existing code patterns +4. Check related documentation + +### When Writing Code +- Write self-documenting code +- Add comments for complex logic +- Include type hints/annotations +- Follow existing patterns in the codebase +- Keep functions focused and small + +### When Creating Files +- Include appropriate imports +- Add docstrings/header comments +- Follow project structure conventions +- Update relevant documentation + +## Testing Requirements + +### Unit Tests +- Write tests for all business logic +- Use descriptive test names +- Test edge cases and error conditions +- Aim for 80%+ code coverage + +### Integration Tests +- Test API endpoints +- Test database interactions +- Test external service integrations +- Use test fixtures and mocks + +## Documentation + +### Code Comments +- Use docstrings for all public functions +- Explain "why" not "what" +- Include parameter and return type descriptions +- Add examples for complex functions + +### README Files +- Include project overview +- Provide quick start guide +- Document dependencies +- Include usage examples + +## Git Practices + +### Branch Naming +- `feature/description` - New features +- `fix/description` - Bug fixes +- `docs/description` - Documentation updates +- `chore/description` - Maintenance tasks + +### Commit Messages +- Use conventional commits format +- Start with type: `feat:`, `fix:`, `docs:`, `chore:`, etc. 
+- Include issue number if applicable +- Keep first line under 72 characters + +## Security + +### Never Commit +- API keys or tokens +- Private keys +- Passwords +- `.env` files (except `.env.example`) +- Secrets of any kind + +### Always Validate +- User inputs +- API responses +- File uploads +- External data sources + +## Performance + +### Best Practices +- Use async/await for I/O operations +- Implement caching where appropriate +- Optimize database queries +- Use connection pooling +- Monitor resource usage + +## Error Handling + +### Patterns +- Use specific exception types +- Include context in error messages +- Log errors with appropriate levels +- Return meaningful error responses +- Handle edge cases gracefully + +## AI Agent Guidelines + +### Resource Compliance +- Always check `.cursor/skills/` before implementing +- Review `.cursor/llm/` for LLM-related tasks +- Follow patterns from existing code +- Reference documentation when available + +### Code Generation +- Generate code that follows project patterns +- Include proper error handling +- Add appropriate type hints +- Include docstrings for public APIs +- Write tests for new functionality + +## Project-Specific Rules + +### Smart Contracts +- Always use OpenZeppelin contracts when possible +- Implement proper access control +- Add events for important state changes +- Write comprehensive tests +- Run security audits before deployment + +### API Endpoints +- Use FastAPI dependency injection +- Implement rate limiting +- Add authentication/authorization +- Validate all inputs +- Return consistent error formats + +### Frontend Components +- Use TypeScript with strict mode +- Follow shadcn/ui patterns +- Implement proper error boundaries +- Add loading states +- Optimize for performance + +## Quality Checklist + +Before submitting code: +- [ ] Code follows style guidelines +- [ ] All tests pass +- [ ] Documentation updated +- [ ] No security issues +- [ ] Performance acceptable +- [ ] Error handling implemented +- [ ] Type hints/annotations added +- [ ] Reviewed for best practices + diff --git a/.env.issue.example b/.env.issue.example new file mode 100644 index 0000000..365a486 --- /dev/null +++ b/.env.issue.example @@ -0,0 +1,45 @@ +# GitHub Projects Issue Automation Configuration +# Copy this file to .env.issue and fill in your values +# DO NOT commit .env.issue to the repository + +# GitHub Authentication +# Generate a fine-grained PAT from: https://github.com/settings/tokens?type=beta +# Required scopes: repo (read/write), project (read/write) +GITHUB_TOKEN=your_fine_grained_pat_token_here +GITHUB_OWNER=hyperkit-labs +GITHUB_REPO=hyperagent + +# Project 9 Configuration +# Find Project ID using: gh api graphql -f query='query { organization(login: "hyperkit-labs") { projectsV2(first: 20) { nodes { number id title } } } }' +PROJECT_ID=PVT_kwDO... +PROJECT_NUMBER=9 + +# Custom Field IDs (from GitHub Project 9) +# Find field IDs using: gh project field-list 9 --owner hyperkit-labs --format json +SPRINT_FIELD_ID=... +TYPE_FIELD_ID=... +AREA_FIELD_ID=... +CHAIN_FIELD_ID=... +PRESET_FIELD_ID=... 
+ +# Milestone Configuration +MILESTONE_SPRINT_1="Phase 1 – Sprint 1 (Feb 5–17)" +MILESTONE_SPRINT_2="Phase 1 – Sprint 2 (Feb 18–Mar 2)" +MILESTONE_SPRINT_3="Phase 1 – Sprint 3 (Mar 3–16)" + +# Assignment Configuration +DEFAULT_ASSIGNEE_JUSTINE="JustineDevs" +DEFAULT_ASSIGNEE_ARHON="ArhonJay" +DEFAULT_ASSIGNEE_TRISTAN="Tristan-T-Dev" + +# Issue Creation Settings +DRY_RUN=true +BATCH_SIZE=10 +RATE_LIMIT_DELAY=1 + +# CSV Configuration +ISSUES_CSV_PATH=scripts/data/issues.csv + +# Logging +LOG_LEVEL=INFO +LOG_FILE=scripts/github/issue_creation.log diff --git a/.gitattributes b/.gitattributes deleted file mode 100644 index 2219158..0000000 --- a/.gitattributes +++ /dev/null @@ -1,31 +0,0 @@ -# Git attributes for branch-specific file handling - -# Files that should only be in development branch (not in main/production) -tests/ export-ignore -scripts/ export-ignore -docs/ export-ignore -GUIDE/ export-ignore -examples/ export-ignore -pytest.ini export-ignore -tests/README.md export-ignore -*.plan.md export-ignore -.cursor/ export-ignore - -# Text files should use LF line endings -*.py text eol=lf -*.md text eol=lf -*.txt text eol=lf -*.json text eol=lf -*.yml text eol=lf -*.yaml text eol=lf -*.sh text eol=lf -*.sql text eol=lf - -# Binary files -*.png binary -*.jpg binary -*.jpeg binary -*.gif binary -*.ico binary -*.pdf binary - diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS deleted file mode 100644 index 7e5f7d8..0000000 --- a/.github/CODEOWNERS +++ /dev/null @@ -1,49 +0,0 @@ -# CODEOWNERS - Auto-assign reviewers for code changes -# See: https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners - -# Default owners for all files -* @JustineDevs - -# Backend Python code -/hyperagent/ @JustineDevs -/hyperagent/**/*.py @JustineDevs - -# Frontend TypeScript/React code -/frontend/ @JustineDevs -/frontend/**/*.ts @JustineDevs -/frontend/**/*.tsx @JustineDevs - -# Tests -/tests/ @JustineDevs -/tests/**/*.py @JustineDevs - -# Documentation -/docs/ @JustineDevs -/GUIDE/ @JustineDevs -*.md @JustineDevs - -# CI/CD and DevOps -/.github/ @JustineDevs -/docker-compose.yml @JustineDevs -/Dockerfile @JustineDevs -/Makefile @JustineDevs - -# Configuration files -/requirements.txt @JustineDevs -/pyproject.toml @JustineDevs -/setup.py @JustineDevs -/alembic.ini @JustineDevs -/env.example @JustineDevs - -# Database migrations -/alembic/ @JustineDevs -/alembic/**/*.py @JustineDevs - -# Services (x402-verifier, mantle-bridge) -/services/ @JustineDevs - -# Scripts -/scripts/ @JustineDevs -/scripts/**/*.py @JustineDevs -/scripts/**/*.sh @JustineDevs - diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 0000000..8f2625a --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,155 @@ +## 📋 Description + + + +## 🔗 Related Issues + +Closes #ISSUE_NUMBER + + + +## 🏷️ Type of Change + + + +- [ ] 🐛 Bug fix (non-breaking change which fixes an issue) +- [ ] ✨ New feature (non-breaking change which adds functionality) +- [ ] 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected) +- [ ] 📚 Documentation update +- [ ] 🔧 Chore (dependency update, refactoring, etc.) 
+- [ ] 🎨 UI/UX improvement +- [ ] ⚡ Performance improvement +- [ ] 🔒 Security fix +- [ ] 🚀 CI/CD improvement + +## ✅ Checklist + + + +### Code Quality + +- [ ] My code follows the style guidelines of this project (ruff, black, isort for Python; ESLint for TypeScript) +- [ ] I have performed a self-review of my code +- [ ] I have commented my code, particularly in hard-to-understand areas +- [ ] I have made corresponding changes to the documentation +- [ ] My changes generate no new warnings or errors + +### Testing + +- [ ] I have added tests that prove my fix is effective or that my feature works +- [ ] New and existing unit tests pass locally with my changes (`pytest`, `npm test`) +- [ ] I have tested this on my local environment + +### Documentation + +- [ ] I have updated relevant documentation (README, API docs, comments) +- [ ] I have updated the CHANGELOG.md (if applicable) +- [ ] I have added or updated type hints/annotations + +### Dependencies + +- [ ] I have reviewed and updated dependencies if needed +- [ ] No new security vulnerabilities introduced (checked with `pip-audit` or `npm audit`) + +## 🧪 Testing + + + +### Test Coverage + +```bash +# Add test coverage output here +pytest --cov=hyperagent tests/ +# or +npm test -- --coverage +``` + +### Manual Testing Steps + +1. Step 1 +2. Step 2 +3. Step 3 + +### Test Environment + +- OS: [e.g., Ubuntu 22.04, macOS 14, Windows 11] +- Python version: [e.g., 3.12] +- Node version: [e.g., 20.x] +- Browser (if applicable): [e.g., Chrome 120, Firefox 121] + +## 📸 Screenshots (if applicable) + + + +| Before | After | +|--------|-------| +| ![Before]() | ![After]() | + +## 🚀 Deployment Notes + + + +- [ ] Requires database migration +- [ ] Requires new environment variables (documented in `.env.example`) +- [ ] Requires external service configuration +- [ ] Backward compatible +- [ ] Requires coordination with other services + +### Environment Variables Added/Changed + +```bash +# Add any new environment variables here +NEW_VAR=value +``` + +## 📝 Reviewer Notes + + + +### Areas of Focus + +- [ ] Logic in `file.py:100-150` +- [ ] Performance of database queries +- [ ] Security implications of auth changes +- [ ] API contract changes + +### Questions for Reviewers + +1. Question 1? +2. Question 2? + +## 🔄 Migration Guide (if breaking change) + + + +```typescript +// Before +const result = oldAPI(); + +// After +const result = newAPI({ newParam: true }); +``` + +## 🎯 Related Documentation + + + +- [HyperAgent Spec](../docs/HyperAgent%20Spec.md) +- [Architecture Decision Record (ADR)](../docs/adr/XXX-title.md) +- [Figma Design](https://figma.com/file/...) 
+ +--- + + diff --git a/.github/labeler.yml b/.github/labeler.yml new file mode 100644 index 0000000..ebc2f47 --- /dev/null +++ b/.github/labeler.yml @@ -0,0 +1,110 @@ +# Auto-label PRs based on changed files +# See: https://github.com/actions/labeler + +# Areas +'area:frontend': + - changed-files: + - any-glob-to-any-file: 'apps/web/**/*' + - any-glob-to-any-file: '**/*.tsx' + - any-glob-to-any-file: '**/*.jsx' + - any-glob-to-any-file: '**/*.css' + +'area:backend': + - changed-files: + - any-glob-to-any-file: 'hyperagent/api/**/*' + - any-glob-to-any-file: 'hyperagent/core/**/*' + - any-glob-to-any-file: '**/*.py' + +'area:contracts': + - changed-files: + - any-glob-to-any-file: 'contracts/**/*' + - any-glob-to-any-file: '**/*.sol' + +'area:infra': + - changed-files: + - any-glob-to-any-file: 'k8s/**/*' + - any-glob-to-any-file: 'docker/**/*' + - any-glob-to-any-file: 'Dockerfile*' + - any-glob-to-any-file: 'docker-compose*.yml' + - any-glob-to-any-file: '.github/workflows/**/*' + +'area:database': + - changed-files: + - any-glob-to-any-file: 'hyperagent/db/**/*' + - any-glob-to-any-file: '**/migrations/**/*' + - any-glob-to-any-file: '**/alembic/**/*' + +'area:docs': + - changed-files: + - any-glob-to-any-file: 'docs/**/*' + - any-glob-to-any-file: '**/*.md' + +# Types +'type:feature': + - changed-files: + - any-glob-to-any-file: 'hyperagent/**/*.py' + - any-glob-to-any-file: 'apps/web/src/**/*' + - head-branch: ['^feature/', '^feat/'] + +'type:bugfix': + - head-branch: ['^bugfix/', '^fix/'] + +'type:hotfix': + - head-branch: ['^hotfix/'] + +'type:chore': + - head-branch: ['^chore/'] + - changed-files: + - any-glob-to-any-file: 'package*.json' + - any-glob-to-any-file: 'requirements*.txt' + - any-glob-to-any-file: 'pyproject.toml' + +'type:docs': + - changed-files: + - all-globs-to-all-files: '**/*.md' + +# Dependencies +'dependencies': + - changed-files: + - any-glob-to-any-file: 'package*.json' + - any-glob-to-any-file: 'requirements*.txt' + - any-glob-to-any-file: 'pyproject.toml' + - any-glob-to-any-file: 'Cargo.toml' + +# Size labels +'size:small': + - changed-files: + - 1-10 + +'size:medium': + - changed-files: + - 11-50 + +'size:large': + - changed-files: + - 51-100 + +'size:xl': + - changed-files: + - 101+ + +# Priority (based on file patterns) +'priority:critical': + - changed-files: + - any-glob-to-any-file: 'hyperagent/security/**/*' + - any-glob-to-any-file: 'contracts/security/**/*' + +# Testing +'needs-tests': + - changed-files: + - any-glob-to-any-file: 'hyperagent/**/*.py' + - any-glob-to-any-file: 'apps/web/src/**/*.{ts,tsx}' + - all-globs-to-all-files: '!**/*.test.{py,ts,tsx}' + - all-globs-to-all-files: '!**/tests/**' + +# Security +'security': + - changed-files: + - any-glob-to-any-file: 'hyperagent/auth/**/*' + - any-glob-to-any-file: 'hyperagent/security/**/*' + - any-glob-to-any-file: 'contracts/security/**/*' diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md deleted file mode 100644 index c2e48ff..0000000 --- a/.github/pull_request_template.md +++ /dev/null @@ -1,82 +0,0 @@ -# Pull Request - -## Description - - - -## Type of Change - - - -- [ ] Bug fix (non-breaking change which fixes an issue) -- [ ] New feature (non-breaking change which adds functionality) -- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) -- [ ] Documentation update -- [ ] Code refactoring (no functional changes) -- [ ] Performance improvement -- [ ] Test addition or update -- [ ] CI/CD or DevOps changes - -## 
Related Issues - - - - -## Changes Made - - - -- -- -- - -## Testing - - - -- [ ] Unit tests added/updated -- [ ] Integration tests added/updated -- [ ] Manual testing completed -- [ ] All existing tests pass - -### Test Coverage - - -- Coverage: ___% (target: 80%+) - -## Checklist - - - -- [ ] My code follows the project's style guidelines (PEP 8, type hints, async/await) -- [ ] I have performed a self-review of my own code -- [ ] I have commented my code, particularly in hard-to-understand areas -- [ ] I have made corresponding changes to the documentation -- [ ] My changes generate no new warnings or errors -- [ ] I have added tests that prove my fix is effective or that my feature works -- [ ] New and existing unit tests pass locally with my changes -- [ ] Any dependent changes have been merged and published -- [ ] I have updated the CHANGELOG.md if applicable -- [ ] I have checked for security vulnerabilities (no hardcoded secrets, proper input validation) -- [ ] I have verified the changes work in both development and production environments - -## Screenshots/Demo - - - -## Additional Notes - - - -## Reviewers - - - - ---- - -**By submitting this pull request, I confirm that:** -- [ ] I have read and followed the [Contributing Guide](../CONTRIBUTING.md) -- [ ] I have read and followed the [Code of Conduct](../CODE_OF_CONDUCT.md) -- [ ] My code is ready for review and follows Microsoft Engineering Fundamentals standards - diff --git a/.github/skills/find-skills/SKILL.md b/.github/skills/find-skills/SKILL.md new file mode 100644 index 0000000..c797184 --- /dev/null +++ b/.github/skills/find-skills/SKILL.md @@ -0,0 +1,133 @@ +--- +name: find-skills +description: Helps users discover and install agent skills when they ask questions like "how do I do X", "find a skill for X", "is there a skill that can...", or express interest in extending capabilities. This skill should be used when the user is looking for functionality that might exist as an installable skill. +--- + +# Find Skills + +This skill helps you discover and install skills from the open agent skills ecosystem. + +## When to Use This Skill + +Use this skill when the user: + +- Asks "how do I do X" where X might be a common task with an existing skill +- Says "find a skill for X" or "is there a skill for X" +- Asks "can you do X" where X is a specialized capability +- Expresses interest in extending agent capabilities +- Wants to search for tools, templates, or workflows +- Mentions they wish they had help with a specific domain (design, testing, deployment, etc.) + +## What is the Skills CLI? + +The Skills CLI (`npx skills`) is the package manager for the open agent skills ecosystem. Skills are modular packages that extend agent capabilities with specialized knowledge, workflows, and tools. + +**Key commands:** + +- `npx skills find [query]` - Search for skills interactively or by keyword +- `npx skills add ` - Install a skill from GitHub or other sources +- `npx skills check` - Check for skill updates +- `npx skills update` - Update all installed skills + +**Browse skills at:** https://skills.sh/ + +## How to Help Users Find Skills + +### Step 1: Understand What They Need + +When a user asks for help with something, identify: + +1. The domain (e.g., React, testing, design, deployment) +2. The specific task (e.g., writing tests, creating animations, reviewing PRs) +3. 
Whether this is a common enough task that a skill likely exists + +### Step 2: Search for Skills + +Run the find command with a relevant query: + +```bash +npx skills find [query] +``` + +For example: + +- User asks "how do I make my React app faster?" → `npx skills find react performance` +- User asks "can you help me with PR reviews?" → `npx skills find pr review` +- User asks "I need to create a changelog" → `npx skills find changelog` + +The command will return results like: + +``` +Install with npx skills add + +vercel-labs/agent-skills@vercel-react-best-practices +└ https://skills.sh/vercel-labs/agent-skills/vercel-react-best-practices +``` + +### Step 3: Present Options to the User + +When you find relevant skills, present them to the user with: + +1. The skill name and what it does +2. The install command they can run +3. A link to learn more at skills.sh + +Example response: + +``` +I found a skill that might help! The "vercel-react-best-practices" skill provides +React and Next.js performance optimization guidelines from Vercel Engineering. + +To install it: +npx skills add vercel-labs/agent-skills@vercel-react-best-practices + +Learn more: https://skills.sh/vercel-labs/agent-skills/vercel-react-best-practices +``` + +### Step 4: Offer to Install + +If the user wants to proceed, you can install the skill for them: + +```bash +npx skills add -g -y +``` + +The `-g` flag installs globally (user-level) and `-y` skips confirmation prompts. + +## Common Skill Categories + +When searching, consider these common categories: + +| Category | Example Queries | +| --------------- | ---------------------------------------- | +| Web Development | react, nextjs, typescript, css, tailwind | +| Testing | testing, jest, playwright, e2e | +| DevOps | deploy, docker, kubernetes, ci-cd | +| Documentation | docs, readme, changelog, api-docs | +| Code Quality | review, lint, refactor, best-practices | +| Design | ui, ux, design-system, accessibility | +| Productivity | workflow, automation, git | + +## Tips for Effective Searches + +1. **Use specific keywords**: "react testing" is better than just "testing" +2. **Try alternative terms**: If "deploy" doesn't work, try "deployment" or "ci-cd" +3. **Check popular sources**: Many skills come from `vercel-labs/agent-skills` or `ComposioHQ/awesome-claude-skills` + +## When No Skills Are Found + +If no relevant skills exist: + +1. Acknowledge that no existing skill was found +2. Offer to help with the task directly using your general capabilities +3. Suggest the user could create their own skill with `npx skills init` + +Example: + +``` +I searched for skills related to "xyz" but didn't find any matches. +I can still help you with this task directly! Would you like me to proceed? + +If this is something you do often, you could create your own skill: +npx skills init my-xyz-skill +``` diff --git a/.github/skills/ui-ux-pro-max/SKILL.md b/.github/skills/ui-ux-pro-max/SKILL.md new file mode 100644 index 0000000..e58d618 --- /dev/null +++ b/.github/skills/ui-ux-pro-max/SKILL.md @@ -0,0 +1,386 @@ +--- +name: ui-ux-pro-max +description: "UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 9 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwind, shadcn/ui). Actions: plan, build, create, design, implement, review, fix, improve, optimize, enhance, refactor, check UI/UX code. Projects: website, landing page, dashboard, admin panel, e-commerce, SaaS, portfolio, blog, mobile app, .html, .tsx, .vue, .svelte. 
Elements: button, modal, navbar, sidebar, card, table, form, chart. Styles: glassmorphism, claymorphism, minimalism, brutalism, neumorphism, bento grid, dark mode, responsive, skeuomorphism, flat design. Topics: color palette, accessibility, animation, layout, typography, font pairing, spacing, hover, shadow, gradient. Integrations: shadcn/ui MCP for component search and examples." +--- + +# UI/UX Pro Max - Design Intelligence + +Comprehensive design guide for web and mobile applications. Contains 50+ styles, 97 color palettes, 57 font pairings, 99 UX guidelines, and 25 chart types across 9 technology stacks. Searchable database with priority-based recommendations. + +## When to Apply + +Reference these guidelines when: +- Designing new UI components or pages +- Choosing color palettes and typography +- Reviewing code for UX issues +- Building landing pages or dashboards +- Implementing accessibility requirements + +## Rule Categories by Priority + +| Priority | Category | Impact | Domain | +|----------|----------|--------|--------| +| 1 | Accessibility | CRITICAL | `ux` | +| 2 | Touch & Interaction | CRITICAL | `ux` | +| 3 | Performance | HIGH | `ux` | +| 4 | Layout & Responsive | HIGH | `ux` | +| 5 | Typography & Color | MEDIUM | `typography`, `color` | +| 6 | Animation | MEDIUM | `ux` | +| 7 | Style Selection | MEDIUM | `style`, `product` | +| 8 | Charts & Data | LOW | `chart` | + +## Quick Reference + +### 1. Accessibility (CRITICAL) + +- `color-contrast` - Minimum 4.5:1 ratio for normal text +- `focus-states` - Visible focus rings on interactive elements +- `alt-text` - Descriptive alt text for meaningful images +- `aria-labels` - aria-label for icon-only buttons +- `keyboard-nav` - Tab order matches visual order +- `form-labels` - Use label with for attribute + +### 2. Touch & Interaction (CRITICAL) + +- `touch-target-size` - Minimum 44x44px touch targets +- `hover-vs-tap` - Use click/tap for primary interactions +- `loading-buttons` - Disable button during async operations +- `error-feedback` - Clear error messages near problem +- `cursor-pointer` - Add cursor-pointer to clickable elements + +### 3. Performance (HIGH) + +- `image-optimization` - Use WebP, srcset, lazy loading +- `reduced-motion` - Check prefers-reduced-motion +- `content-jumping` - Reserve space for async content + +### 4. Layout & Responsive (HIGH) + +- `viewport-meta` - width=device-width initial-scale=1 +- `readable-font-size` - Minimum 16px body text on mobile +- `horizontal-scroll` - Ensure content fits viewport width +- `z-index-management` - Define z-index scale (10, 20, 30, 50) + +### 5. Typography & Color (MEDIUM) + +- `line-height` - Use 1.5-1.75 for body text +- `line-length` - Limit to 65-75 characters per line +- `font-pairing` - Match heading/body font personalities + +### 6. Animation (MEDIUM) + +- `duration-timing` - Use 150-300ms for micro-interactions +- `transform-performance` - Use transform/opacity, not width/height +- `loading-states` - Skeleton screens or spinners + +### 7. Style Selection (MEDIUM) + +- `style-match` - Match style to product type +- `consistency` - Use same style across all pages +- `no-emoji-icons` - Use SVG icons, not emojis + +### 8. Charts & Data (LOW) + +- `chart-type` - Match chart type to data type +- `color-guidance` - Use accessible color palettes +- `data-table` - Provide table alternative for accessibility + +## How to Use + +Search specific domains using the CLI tool below. 
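+
+For example, a single style-domain query looks like this (the script path and flags are the ones documented in the usage sections below; the keywords are only an illustration):
+
+```bash
+python3 skills/ui-ux-pro-max/scripts/search.py "glassmorphism dark" --domain style
+```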
+ +--- + +## Prerequisites + +Check if Python is installed: + +```bash +python3 --version || python --version +``` + +If Python is not installed, install it based on user's OS: + +**macOS:** +```bash +brew install python3 +``` + +**Ubuntu/Debian:** +```bash +sudo apt update && sudo apt install python3 +``` + +**Windows:** +```powershell +winget install Python.Python.3.12 +``` + +--- + +## How to Use This Skill + +When user requests UI/UX work (design, build, create, implement, review, fix, improve), follow this workflow: + +### Step 1: Analyze User Requirements + +Extract key information from user request: +- **Product type**: SaaS, e-commerce, portfolio, dashboard, landing page, etc. +- **Style keywords**: minimal, playful, professional, elegant, dark mode, etc. +- **Industry**: healthcare, fintech, gaming, education, etc. +- **Stack**: React, Vue, Next.js, or default to `html-tailwind` + +### Step 2: Generate Design System (REQUIRED) + +**Always start with `--design-system`** to get comprehensive recommendations with reasoning: + +```bash +python3 skills/ui-ux-pro-max/scripts/search.py " " --design-system [-p "Project Name"] +``` + +This command: +1. Searches 5 domains in parallel (product, style, color, landing, typography) +2. Applies reasoning rules from `ui-reasoning.csv` to select best matches +3. Returns complete design system: pattern, style, colors, typography, effects +4. Includes anti-patterns to avoid + +**Example:** +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "beauty spa wellness service" --design-system -p "Serenity Spa" +``` + +### Step 2b: Persist Design System (Master + Overrides Pattern) + +To save the design system for **hierarchical retrieval across sessions**, add `--persist`: + +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "" --design-system --persist -p "Project Name" +``` + +This creates: +- `design-system/MASTER.md` — Global Source of Truth with all design rules +- `design-system/pages/` — Folder for page-specific overrides + +**With page-specific override:** +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "" --design-system --persist -p "Project Name" --page "dashboard" +``` + +This also creates: +- `design-system/pages/dashboard.md` — Page-specific deviations from Master + +**How hierarchical retrieval works:** +1. When building a specific page (e.g., "Checkout"), first check `design-system/pages/checkout.md` +2. If the page file exists, its rules **override** the Master file +3. If not, use `design-system/MASTER.md` exclusively + +**Context-aware retrieval prompt:** +``` +I am building the [Page Name] page. Please read design-system/MASTER.md. +Also check if design-system/pages/[page-name].md exists. +If the page file exists, prioritize its rules. +If not, use the Master rules exclusively. +Now, generate the code... 
+``` + +### Step 3: Supplement with Detailed Searches (as needed) + +After getting the design system, use domain searches to get additional details: + +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "" --domain [-n ] +``` + +**When to use detailed searches:** + +| Need | Domain | Example | +|------|--------|---------| +| More style options | `style` | `--domain style "glassmorphism dark"` | +| Chart recommendations | `chart` | `--domain chart "real-time dashboard"` | +| UX best practices | `ux` | `--domain ux "animation accessibility"` | +| Alternative fonts | `typography` | `--domain typography "elegant luxury"` | +| Landing structure | `landing` | `--domain landing "hero social-proof"` | + +### Step 4: Stack Guidelines (Default: html-tailwind) + +Get implementation-specific best practices. If user doesn't specify a stack, **default to `html-tailwind`**. + +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "" --stack html-tailwind +``` + +Available stacks: `html-tailwind`, `react`, `nextjs`, `vue`, `svelte`, `swiftui`, `react-native`, `flutter`, `shadcn`, `jetpack-compose` + +--- + +## Search Reference + +### Available Domains + +| Domain | Use For | Example Keywords | +|--------|---------|------------------| +| `product` | Product type recommendations | SaaS, e-commerce, portfolio, healthcare, beauty, service | +| `style` | UI styles, colors, effects | glassmorphism, minimalism, dark mode, brutalism | +| `typography` | Font pairings, Google Fonts | elegant, playful, professional, modern | +| `color` | Color palettes by product type | saas, ecommerce, healthcare, beauty, fintech, service | +| `landing` | Page structure, CTA strategies | hero, hero-centric, testimonial, pricing, social-proof | +| `chart` | Chart types, library recommendations | trend, comparison, timeline, funnel, pie | +| `ux` | Best practices, anti-patterns | animation, accessibility, z-index, loading | +| `react` | React/Next.js performance | waterfall, bundle, suspense, memo, rerender, cache | +| `web` | Web interface guidelines | aria, focus, keyboard, semantic, virtualize | +| `prompt` | AI prompts, CSS keywords | (style name) | + +### Available Stacks + +| Stack | Focus | +|-------|-------| +| `html-tailwind` | Tailwind utilities, responsive, a11y (DEFAULT) | +| `react` | State, hooks, performance, patterns | +| `nextjs` | SSR, routing, images, API routes | +| `vue` | Composition API, Pinia, Vue Router | +| `svelte` | Runes, stores, SvelteKit | +| `swiftui` | Views, State, Navigation, Animation | +| `react-native` | Components, Navigation, Lists | +| `flutter` | Widgets, State, Layout, Theming | +| `shadcn` | shadcn/ui components, theming, forms, patterns | +| `jetpack-compose` | Composables, Modifiers, State Hoisting, Recomposition | + +--- + +## Example Workflow + +**User request:** "Làm landing page cho dịch vụ chăm sóc da chuyên nghiệp" + +### Step 1: Analyze Requirements +- Product type: Beauty/Spa service +- Style keywords: elegant, professional, soft +- Industry: Beauty/Wellness +- Stack: html-tailwind (default) + +### Step 2: Generate Design System (REQUIRED) + +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "beauty spa wellness service elegant" --design-system -p "Serenity Spa" +``` + +**Output:** Complete design system with pattern, style, colors, typography, effects, and anti-patterns. 
+ +### Step 3: Supplement with Detailed Searches (as needed) + +```bash +# Get UX guidelines for animation and accessibility +python3 skills/ui-ux-pro-max/scripts/search.py "animation accessibility" --domain ux + +# Get alternative typography options if needed +python3 skills/ui-ux-pro-max/scripts/search.py "elegant luxury serif" --domain typography +``` + +### Step 4: Stack Guidelines + +```bash +python3 skills/ui-ux-pro-max/scripts/search.py "layout responsive form" --stack html-tailwind +``` + +**Then:** Synthesize design system + detailed searches and implement the design. + +--- + +## Output Formats + +The `--design-system` flag supports two output formats: + +```bash +# ASCII box (default) - best for terminal display +python3 skills/ui-ux-pro-max/scripts/search.py "fintech crypto" --design-system + +# Markdown - best for documentation +python3 skills/ui-ux-pro-max/scripts/search.py "fintech crypto" --design-system -f markdown +``` + +--- + +## Tips for Better Results + +1. **Be specific with keywords** - "healthcare SaaS dashboard" > "app" +2. **Search multiple times** - Different keywords reveal different insights +3. **Combine domains** - Style + Typography + Color = Complete design system +4. **Always check UX** - Search "animation", "z-index", "accessibility" for common issues +5. **Use stack flag** - Get implementation-specific best practices +6. **Iterate** - If first search doesn't match, try different keywords + +--- + +## Common Rules for Professional UI + +These are frequently overlooked issues that make UI look unprofessional: + +### Icons & Visual Elements + +| Rule | Do | Don't | +|------|----|----- | +| **No emoji icons** | Use SVG icons (Heroicons, Lucide, Simple Icons) | Use emojis like 🎨 🚀 ⚙️ as UI icons | +| **Stable hover states** | Use color/opacity transitions on hover | Use scale transforms that shift layout | +| **Correct brand logos** | Research official SVG from Simple Icons | Guess or use incorrect logo paths | +| **Consistent icon sizing** | Use fixed viewBox (24x24) with w-6 h-6 | Mix different icon sizes randomly | + +### Interaction & Cursor + +| Rule | Do | Don't | +|------|----|----- | +| **Cursor pointer** | Add `cursor-pointer` to all clickable/hoverable cards | Leave default cursor on interactive elements | +| **Hover feedback** | Provide visual feedback (color, shadow, border) | No indication element is interactive | +| **Smooth transitions** | Use `transition-colors duration-200` | Instant state changes or too slow (>500ms) | + +### Light/Dark Mode Contrast + +| Rule | Do | Don't | +|------|----|----- | +| **Glass card light mode** | Use `bg-white/80` or higher opacity | Use `bg-white/10` (too transparent) | +| **Text contrast light** | Use `#0F172A` (slate-900) for text | Use `#94A3B8` (slate-400) for body text | +| **Muted text light** | Use `#475569` (slate-600) minimum | Use gray-400 or lighter | +| **Border visibility** | Use `border-gray-200` in light mode | Use `border-white/10` (invisible) | + +### Layout & Spacing + +| Rule | Do | Don't | +|------|----|----- | +| **Floating navbar** | Add `top-4 left-4 right-4` spacing | Stick navbar to `top-0 left-0 right-0` | +| **Content padding** | Account for fixed navbar height | Let content hide behind fixed elements | +| **Consistent max-width** | Use same `max-w-6xl` or `max-w-7xl` | Mix different container widths | + +--- + +## Pre-Delivery Checklist + +Before delivering UI code, verify these items: + +### Visual Quality +- [ ] No emojis used as icons (use SVG instead) +- [ ] All icons from 
consistent icon set (Heroicons/Lucide) +- [ ] Brand logos are correct (verified from Simple Icons) +- [ ] Hover states don't cause layout shift +- [ ] Use theme colors directly (bg-primary) not var() wrapper + +### Interaction +- [ ] All clickable elements have `cursor-pointer` +- [ ] Hover states provide clear visual feedback +- [ ] Transitions are smooth (150-300ms) +- [ ] Focus states visible for keyboard navigation + +### Light/Dark Mode +- [ ] Light mode text has sufficient contrast (4.5:1 minimum) +- [ ] Glass/transparent elements visible in light mode +- [ ] Borders visible in both modes +- [ ] Test both modes before delivery + +### Layout +- [ ] Floating elements have proper spacing from edges +- [ ] No content hidden behind fixed navbars +- [ ] Responsive at 375px, 768px, 1024px, 1440px +- [ ] No horizontal scroll on mobile + +### Accessibility +- [ ] All images have alt text +- [ ] Form inputs have labels +- [ ] Color is not the only indicator +- [ ] `prefers-reduced-motion` respected diff --git a/.github/skills/ui-ux-pro-max/data b/.github/skills/ui-ux-pro-max/data new file mode 100644 index 0000000..e5b9469 --- /dev/null +++ b/.github/skills/ui-ux-pro-max/data @@ -0,0 +1 @@ +../../../src/ui-ux-pro-max/data \ No newline at end of file diff --git a/.github/skills/ui-ux-pro-max/scripts b/.github/skills/ui-ux-pro-max/scripts new file mode 100644 index 0000000..ccb93f7 --- /dev/null +++ b/.github/skills/ui-ux-pro-max/scripts @@ -0,0 +1 @@ +../../../src/ui-ux-pro-max/scripts \ No newline at end of file diff --git a/.github/skills/vercel-react-best-practices/AGENTS.md b/.github/skills/vercel-react-best-practices/AGENTS.md new file mode 100644 index 0000000..db951ab --- /dev/null +++ b/.github/skills/vercel-react-best-practices/AGENTS.md @@ -0,0 +1,2934 @@ +# React Best Practices + +**Version 1.0.0** +Vercel Engineering +January 2026 + +> **Note:** +> This document is mainly for agents and LLMs to follow when maintaining, +> generating, or refactoring React and Next.js codebases. Humans +> may also find it useful, but guidance here is optimized for automation +> and consistency by AI-assisted workflows. + +--- + +## Abstract + +Comprehensive performance optimization guide for React and Next.js applications, designed for AI agents and LLMs. Contains 40+ rules across 8 categories, prioritized by impact from critical (eliminating waterfalls, reducing bundle size) to incremental (advanced patterns). Each rule includes detailed explanations, real-world examples comparing incorrect vs. correct implementations, and specific impact metrics to guide automated refactoring and code generation. + +--- + +## Table of Contents + +1. [Eliminating Waterfalls](#1-eliminating-waterfalls) — **CRITICAL** + - 1.1 [Defer Await Until Needed](#11-defer-await-until-needed) + - 1.2 [Dependency-Based Parallelization](#12-dependency-based-parallelization) + - 1.3 [Prevent Waterfall Chains in API Routes](#13-prevent-waterfall-chains-in-api-routes) + - 1.4 [Promise.all() for Independent Operations](#14-promiseall-for-independent-operations) + - 1.5 [Strategic Suspense Boundaries](#15-strategic-suspense-boundaries) +2. 
[Bundle Size Optimization](#2-bundle-size-optimization) — **CRITICAL** + - 2.1 [Avoid Barrel File Imports](#21-avoid-barrel-file-imports) + - 2.2 [Conditional Module Loading](#22-conditional-module-loading) + - 2.3 [Defer Non-Critical Third-Party Libraries](#23-defer-non-critical-third-party-libraries) + - 2.4 [Dynamic Imports for Heavy Components](#24-dynamic-imports-for-heavy-components) + - 2.5 [Preload Based on User Intent](#25-preload-based-on-user-intent) +3. [Server-Side Performance](#3-server-side-performance) — **HIGH** + - 3.1 [Authenticate Server Actions Like API Routes](#31-authenticate-server-actions-like-api-routes) + - 3.2 [Avoid Duplicate Serialization in RSC Props](#32-avoid-duplicate-serialization-in-rsc-props) + - 3.3 [Cross-Request LRU Caching](#33-cross-request-lru-caching) + - 3.4 [Minimize Serialization at RSC Boundaries](#34-minimize-serialization-at-rsc-boundaries) + - 3.5 [Parallel Data Fetching with Component Composition](#35-parallel-data-fetching-with-component-composition) + - 3.6 [Per-Request Deduplication with React.cache()](#36-per-request-deduplication-with-reactcache) + - 3.7 [Use after() for Non-Blocking Operations](#37-use-after-for-non-blocking-operations) +4. [Client-Side Data Fetching](#4-client-side-data-fetching) — **MEDIUM-HIGH** + - 4.1 [Deduplicate Global Event Listeners](#41-deduplicate-global-event-listeners) + - 4.2 [Use Passive Event Listeners for Scrolling Performance](#42-use-passive-event-listeners-for-scrolling-performance) + - 4.3 [Use SWR for Automatic Deduplication](#43-use-swr-for-automatic-deduplication) + - 4.4 [Version and Minimize localStorage Data](#44-version-and-minimize-localstorage-data) +5. [Re-render Optimization](#5-re-render-optimization) — **MEDIUM** + - 5.1 [Calculate Derived State During Rendering](#51-calculate-derived-state-during-rendering) + - 5.2 [Defer State Reads to Usage Point](#52-defer-state-reads-to-usage-point) + - 5.3 [Do not wrap a simple expression with a primitive result type in useMemo](#53-do-not-wrap-a-simple-expression-with-a-primitive-result-type-in-usememo) + - 5.4 [Extract Default Non-primitive Parameter Value from Memoized Component to Constant](#54-extract-default-non-primitive-parameter-value-from-memoized-component-to-constant) + - 5.5 [Extract to Memoized Components](#55-extract-to-memoized-components) + - 5.6 [Narrow Effect Dependencies](#56-narrow-effect-dependencies) + - 5.7 [Put Interaction Logic in Event Handlers](#57-put-interaction-logic-in-event-handlers) + - 5.8 [Subscribe to Derived State](#58-subscribe-to-derived-state) + - 5.9 [Use Functional setState Updates](#59-use-functional-setstate-updates) + - 5.10 [Use Lazy State Initialization](#510-use-lazy-state-initialization) + - 5.11 [Use Transitions for Non-Urgent Updates](#511-use-transitions-for-non-urgent-updates) + - 5.12 [Use useRef for Transient Values](#512-use-useref-for-transient-values) +6. 
[Rendering Performance](#6-rendering-performance) — **MEDIUM** + - 6.1 [Animate SVG Wrapper Instead of SVG Element](#61-animate-svg-wrapper-instead-of-svg-element) + - 6.2 [CSS content-visibility for Long Lists](#62-css-content-visibility-for-long-lists) + - 6.3 [Hoist Static JSX Elements](#63-hoist-static-jsx-elements) + - 6.4 [Optimize SVG Precision](#64-optimize-svg-precision) + - 6.5 [Prevent Hydration Mismatch Without Flickering](#65-prevent-hydration-mismatch-without-flickering) + - 6.6 [Suppress Expected Hydration Mismatches](#66-suppress-expected-hydration-mismatches) + - 6.7 [Use Activity Component for Show/Hide](#67-use-activity-component-for-showhide) + - 6.8 [Use Explicit Conditional Rendering](#68-use-explicit-conditional-rendering) + - 6.9 [Use useTransition Over Manual Loading States](#69-use-usetransition-over-manual-loading-states) +7. [JavaScript Performance](#7-javascript-performance) — **LOW-MEDIUM** + - 7.1 [Avoid Layout Thrashing](#71-avoid-layout-thrashing) + - 7.2 [Build Index Maps for Repeated Lookups](#72-build-index-maps-for-repeated-lookups) + - 7.3 [Cache Property Access in Loops](#73-cache-property-access-in-loops) + - 7.4 [Cache Repeated Function Calls](#74-cache-repeated-function-calls) + - 7.5 [Cache Storage API Calls](#75-cache-storage-api-calls) + - 7.6 [Combine Multiple Array Iterations](#76-combine-multiple-array-iterations) + - 7.7 [Early Length Check for Array Comparisons](#77-early-length-check-for-array-comparisons) + - 7.8 [Early Return from Functions](#78-early-return-from-functions) + - 7.9 [Hoist RegExp Creation](#79-hoist-regexp-creation) + - 7.10 [Use Loop for Min/Max Instead of Sort](#710-use-loop-for-minmax-instead-of-sort) + - 7.11 [Use Set/Map for O(1) Lookups](#711-use-setmap-for-o1-lookups) + - 7.12 [Use toSorted() Instead of sort() for Immutability](#712-use-tosorted-instead-of-sort-for-immutability) +8. [Advanced Patterns](#8-advanced-patterns) — **LOW** + - 8.1 [Initialize App Once, Not Per Mount](#81-initialize-app-once-not-per-mount) + - 8.2 [Store Event Handlers in Refs](#82-store-event-handlers-in-refs) + - 8.3 [useEffectEvent for Stable Callback Refs](#83-useeffectevent-for-stable-callback-refs) + +--- + +## 1. Eliminating Waterfalls + +**Impact: CRITICAL** + +Waterfalls are the #1 performance killer. Each sequential await adds full network latency. Eliminating them yields the largest gains. + +### 1.1 Defer Await Until Needed + +**Impact: HIGH (avoids blocking unused code paths)** + +Move `await` operations into the branches where they're actually used to avoid blocking code paths that don't need them. 
+ +**Incorrect: blocks both branches** + +```typescript +async function handleRequest(userId: string, skipProcessing: boolean) { + const userData = await fetchUserData(userId) + + if (skipProcessing) { + // Returns immediately but still waited for userData + return { skipped: true } + } + + // Only this branch uses userData + return processUserData(userData) +} +``` + +**Correct: only blocks when needed** + +```typescript +async function handleRequest(userId: string, skipProcessing: boolean) { + if (skipProcessing) { + // Returns immediately without waiting + return { skipped: true } + } + + // Fetch only when needed + const userData = await fetchUserData(userId) + return processUserData(userData) +} +``` + +**Another example: early return optimization** + +```typescript +// Incorrect: always fetches permissions +async function updateResource(resourceId: string, userId: string) { + const permissions = await fetchPermissions(userId) + const resource = await getResource(resourceId) + + if (!resource) { + return { error: 'Not found' } + } + + if (!permissions.canEdit) { + return { error: 'Forbidden' } + } + + return await updateResourceData(resource, permissions) +} + +// Correct: fetches only when needed +async function updateResource(resourceId: string, userId: string) { + const resource = await getResource(resourceId) + + if (!resource) { + return { error: 'Not found' } + } + + const permissions = await fetchPermissions(userId) + + if (!permissions.canEdit) { + return { error: 'Forbidden' } + } + + return await updateResourceData(resource, permissions) +} +``` + +This optimization is especially valuable when the skipped branch is frequently taken, or when the deferred operation is expensive. + +### 1.2 Dependency-Based Parallelization + +**Impact: CRITICAL (2-10× improvement)** + +For operations with partial dependencies, use `better-all` to maximize parallelism. It automatically starts each task at the earliest possible moment. + +**Incorrect: profile waits for config unnecessarily** + +```typescript +const [user, config] = await Promise.all([ + fetchUser(), + fetchConfig() +]) +const profile = await fetchProfile(user.id) +``` + +**Correct: config and profile run in parallel** + +```typescript +import { all } from 'better-all' + +const { user, config, profile } = await all({ + async user() { return fetchUser() }, + async config() { return fetchConfig() }, + async profile() { + return fetchProfile((await this.$.user).id) + } +}) +``` + +**Alternative without extra dependencies:** + +```typescript +const userPromise = fetchUser() +const profilePromise = userPromise.then(user => fetchProfile(user.id)) + +const [user, config, profile] = await Promise.all([ + userPromise, + fetchConfig(), + profilePromise +]) +``` + +We can also create all the promises first, and do `Promise.all()` at the end. + +Reference: [https://github.com/shuding/better-all](https://github.com/shuding/better-all) + +### 1.3 Prevent Waterfall Chains in API Routes + +**Impact: CRITICAL (2-10× improvement)** + +In API routes and Server Actions, start independent operations immediately, even if you don't await them yet. 
+ +**Incorrect: config waits for auth, data waits for both** + +```typescript +export async function GET(request: Request) { + const session = await auth() + const config = await fetchConfig() + const data = await fetchData(session.user.id) + return Response.json({ data, config }) +} +``` + +**Correct: auth and config start immediately** + +```typescript +export async function GET(request: Request) { + const sessionPromise = auth() + const configPromise = fetchConfig() + const session = await sessionPromise + const [config, data] = await Promise.all([ + configPromise, + fetchData(session.user.id) + ]) + return Response.json({ data, config }) +} +``` + +For operations with more complex dependency chains, use `better-all` to automatically maximize parallelism (see Dependency-Based Parallelization). + +### 1.4 Promise.all() for Independent Operations + +**Impact: CRITICAL (2-10× improvement)** + +When async operations have no interdependencies, execute them concurrently using `Promise.all()`. + +**Incorrect: sequential execution, 3 round trips** + +```typescript +const user = await fetchUser() +const posts = await fetchPosts() +const comments = await fetchComments() +``` + +**Correct: parallel execution, 1 round trip** + +```typescript +const [user, posts, comments] = await Promise.all([ + fetchUser(), + fetchPosts(), + fetchComments() +]) +``` + +### 1.5 Strategic Suspense Boundaries + +**Impact: HIGH (faster initial paint)** + +Instead of awaiting data in async components before returning JSX, use Suspense boundaries to show the wrapper UI faster while data loads. + +**Incorrect: wrapper blocked by data fetching** + +```tsx +async function Page() { + const data = await fetchData() // Blocks entire page + + return ( +
+    <div>
+      <aside>Sidebar</aside>
+      <header>Header</header>
+      <main>
+        <DataDisplay data={data} />
+      </main>
+      <footer>Footer</footer>
+    </div>
+  )
+}
+```
+
+The entire layout waits for data even though only the middle section needs it.
+
+**Correct: wrapper shows immediately, data streams in**
+
+```tsx
+function Page() {
+  return (
+    <div>
+      <aside>Sidebar</aside>
+      <header>Header</header>
+      <main>
+        <Suspense fallback={<Spinner />}>
+          <DataDisplay />
+        </Suspense>
+      </main>
+      <footer>Footer</footer>
+    </div>
+  )
+}
+
+async function DataDisplay() {
+  const data = await fetchData() // Only blocks this component
+  return <div>{data.content}</div>
+}
+```
+
+Sidebar, Header, and Footer render immediately. Only DataDisplay waits for data.
+
+**Alternative: share promise across components**
+
+```tsx
+function Page() {
+  // Start fetch immediately, but don't await
+  const dataPromise = fetchData()
+
+  return (
+    <div>
+      <aside>Sidebar</aside>
+      <header>Header</header>
+      <main>
+        <Suspense fallback={<Spinner />}>
+          <DataDisplay dataPromise={dataPromise} />
+          <DataSummary dataPromise={dataPromise} />
+        </Suspense>
+      </main>
+      <footer>Footer</footer>
+    </div>
+  )
+}
+
+function DataDisplay({ dataPromise }: { dataPromise: Promise<Data> }) {
+  const data = use(dataPromise) // Unwraps the promise
+  return <div>{data.content}</div>
+}
+
+function DataSummary({ dataPromise }: { dataPromise: Promise<Data> }) {
+  const data = use(dataPromise) // Reuses the same promise
+  return <div>{data.summary}</div>
+} +``` + +Both components share the same promise, so only one fetch occurs. Layout renders immediately while both components wait together. + +**When NOT to use this pattern:** + +- Critical data needed for layout decisions (affects positioning) + +- SEO-critical content above the fold + +- Small, fast queries where suspense overhead isn't worth it + +- When you want to avoid layout shift (loading → content jump) + +**Trade-off:** Faster initial paint vs potential layout shift. Choose based on your UX priorities. + +--- + +## 2. Bundle Size Optimization + +**Impact: CRITICAL** + +Reducing initial bundle size improves Time to Interactive and Largest Contentful Paint. + +### 2.1 Avoid Barrel File Imports + +**Impact: CRITICAL (200-800ms import cost, slow builds)** + +Import directly from source files instead of barrel files to avoid loading thousands of unused modules. **Barrel files** are entry points that re-export multiple modules (e.g., `index.js` that does `export * from './module'`). + +Popular icon and component libraries can have **up to 10,000 re-exports** in their entry file. For many React packages, **it takes 200-800ms just to import them**, affecting both development speed and production cold starts. + +**Why tree-shaking doesn't help:** When a library is marked as external (not bundled), the bundler can't optimize it. If you bundle it to enable tree-shaking, builds become substantially slower analyzing the entire module graph. + +**Incorrect: imports entire library** + +```tsx +import { Check, X, Menu } from 'lucide-react' +// Loads 1,583 modules, takes ~2.8s extra in dev +// Runtime cost: 200-800ms on every cold start + +import { Button, TextField } from '@mui/material' +// Loads 2,225 modules, takes ~4.2s extra in dev +``` + +**Correct: imports only what you need** + +```tsx +import Check from 'lucide-react/dist/esm/icons/check' +import X from 'lucide-react/dist/esm/icons/x' +import Menu from 'lucide-react/dist/esm/icons/menu' +// Loads only 3 modules (~2KB vs ~1MB) + +import Button from '@mui/material/Button' +import TextField from '@mui/material/TextField' +// Loads only what you use +``` + +**Alternative: Next.js 13.5+** + +```js +// next.config.js - use optimizePackageImports +module.exports = { + experimental: { + optimizePackageImports: ['lucide-react', '@mui/material'] + } +} + +// Then you can keep the ergonomic barrel imports: +import { Check, X, Menu } from 'lucide-react' +// Automatically transformed to direct imports at build time +``` + +Direct imports provide 15-70% faster dev boot, 28% faster builds, 40% faster cold starts, and significantly faster HMR. + +Libraries commonly affected: `lucide-react`, `@mui/material`, `@mui/icons-material`, `@tabler/icons-react`, `react-icons`, `@headlessui/react`, `@radix-ui/react-*`, `lodash`, `ramda`, `date-fns`, `rxjs`, `react-use`. + +Reference: [https://vercel.com/blog/how-we-optimized-package-imports-in-next-js](https://vercel.com/blog/how-we-optimized-package-imports-in-next-js) + +### 2.2 Conditional Module Loading + +**Impact: HIGH (loads large data only when needed)** + +Load large data or modules only when a feature is activated. 
+
+**Example: lazy-load animation frames**
+
+```tsx
+function AnimationPlayer({ enabled, setEnabled }: { enabled: boolean; setEnabled: React.Dispatch<React.SetStateAction<boolean>> }) {
+  const [frames, setFrames] = useState(null)
+
+  useEffect(() => {
+    if (enabled && !frames && typeof window !== 'undefined') {
+      import('./animation-frames.js')
+        .then(mod => setFrames(mod.frames))
+        .catch(() => setEnabled(false))
+    }
+  }, [enabled, frames, setEnabled])
+
+  if (!frames) return <Placeholder />
+  return <Animation frames={frames} />
+}
+```
+
+The `typeof window !== 'undefined'` check prevents bundling this module for SSR, optimizing server bundle size and build speed.
+
+### 2.3 Defer Non-Critical Third-Party Libraries
+
+**Impact: MEDIUM (loads after hydration)**
+
+Analytics, logging, and error tracking don't block user interaction. Load them after hydration.
+
+**Incorrect: blocks initial bundle**
+
+```tsx
+import { Analytics } from '@vercel/analytics/react'
+
+export default function RootLayout({ children }) {
+  return (
+    <html lang="en">
+      <body>
+        {children}
+        <Analytics />
+      </body>
+    </html>
+  )
+}
+```
+
+**Correct: loads after hydration**
+
+```tsx
+import dynamic from 'next/dynamic'
+
+const Analytics = dynamic(
+  () => import('@vercel/analytics/react').then(m => m.Analytics),
+  { ssr: false }
+)
+
+export default function RootLayout({ children }) {
+  return (
+    <html lang="en">
+      <body>
+        {children}
+        <Analytics />
+      </body>
+    </html>
+  )
+}
+```
+
+### 2.4 Dynamic Imports for Heavy Components
+
+**Impact: CRITICAL (directly affects TTI and LCP)**
+
+Use `next/dynamic` to lazy-load large components not needed on initial render.
+
+**Incorrect: Monaco bundles with main chunk ~300KB**
+
+```tsx
+import { MonacoEditor } from './monaco-editor'
+
+function CodePanel({ code }: { code: string }) {
+  return <MonacoEditor value={code} />
+}
+```
+
+**Correct: Monaco loads on demand**
+
+```tsx
+import dynamic from 'next/dynamic'
+
+const MonacoEditor = dynamic(
+  () => import('./monaco-editor').then(m => m.MonacoEditor),
+  { ssr: false }
+)
+
+function CodePanel({ code }: { code: string }) {
+  return <MonacoEditor value={code} />
+}
+```
+
+### 2.5 Preload Based on User Intent
+
+**Impact: MEDIUM (reduces perceived latency)**
+
+Preload heavy bundles before they're needed to reduce perceived latency.
+
+**Example: preload on hover/focus**
+
+```tsx
+function EditorButton({ onClick }: { onClick: () => void }) {
+  const preload = () => {
+    if (typeof window !== 'undefined') {
+      void import('./monaco-editor')
+    }
+  }
+
+  return (
+    <button onMouseEnter={preload} onFocus={preload} onClick={onClick}>
+      Open editor
+    </button>
+  )
+}
+```
+
+**Example: preload when feature flag is enabled**
+
+```tsx
+function FlagsProvider({ children, flags }: Props) {
+  useEffect(() => {
+    if (flags.editorEnabled && typeof window !== 'undefined') {
+      void import('./monaco-editor').then(mod => mod.init())
+    }
+  }, [flags.editorEnabled])
+
+  return (
+    <FlagsContext.Provider value={flags}>
+      {children}
+    </FlagsContext.Provider>
+  )
+}
+```
+
+The `typeof window !== 'undefined'` check prevents bundling preloaded modules for SSR, optimizing server bundle size and build speed.
+
+---
+
+## 3. Server-Side Performance
+
+**Impact: HIGH**
+
+Optimizing server-side rendering and data fetching eliminates server-side waterfalls and reduces response times.
+
+### 3.1 Authenticate Server Actions Like API Routes
+
+**Impact: CRITICAL (prevents unauthorized access to server mutations)**
+
+Server Actions (functions with `"use server"`) are exposed as public endpoints, just like API routes. Always verify authentication and authorization **inside** each Server Action—do not rely solely on middleware, layout guards, or page-level checks, as Server Actions can be invoked directly.
+ +Next.js documentation explicitly states: "Treat Server Actions with the same security considerations as public-facing API endpoints, and verify if the user is allowed to perform a mutation." + +**Incorrect: no authentication check** + +```typescript +'use server' + +export async function deleteUser(userId: string) { + // Anyone can call this! No auth check + await db.user.delete({ where: { id: userId } }) + return { success: true } +} +``` + +**Correct: authentication inside the action** + +```typescript +'use server' + +import { verifySession } from '@/lib/auth' +import { unauthorized } from '@/lib/errors' + +export async function deleteUser(userId: string) { + // Always check auth inside the action + const session = await verifySession() + + if (!session) { + throw unauthorized('Must be logged in') + } + + // Check authorization too + if (session.user.role !== 'admin' && session.user.id !== userId) { + throw unauthorized('Cannot delete other users') + } + + await db.user.delete({ where: { id: userId } }) + return { success: true } +} +``` + +**With input validation:** + +```typescript +'use server' + +import { verifySession } from '@/lib/auth' +import { z } from 'zod' + +const updateProfileSchema = z.object({ + userId: z.string().uuid(), + name: z.string().min(1).max(100), + email: z.string().email() +}) + +export async function updateProfile(data: unknown) { + // Validate input first + const validated = updateProfileSchema.parse(data) + + // Then authenticate + const session = await verifySession() + if (!session) { + throw new Error('Unauthorized') + } + + // Then authorize + if (session.user.id !== validated.userId) { + throw new Error('Can only update own profile') + } + + // Finally perform the mutation + await db.user.update({ + where: { id: validated.userId }, + data: { + name: validated.name, + email: validated.email + } + }) + + return { success: true } +} +``` + +Reference: [https://nextjs.org/docs/app/guides/authentication](https://nextjs.org/docs/app/guides/authentication) + +### 3.2 Avoid Duplicate Serialization in RSC Props + +**Impact: LOW (reduces network payload by avoiding duplicate serialization)** + +RSC→client serialization deduplicates by object reference, not value. Same reference = serialized once; new reference = serialized again. Do transformations (`.toSorted()`, `.filter()`, `.map()`) in client, not server. + +**Incorrect: duplicates array** + +```tsx +// RSC: sends 6 strings (2 arrays × 3 items) + +``` + +**Correct: sends 3 strings** + +```tsx +// RSC: send once + + +// Client: transform there +'use client' +const sorted = useMemo(() => [...usernames].sort(), [usernames]) +``` + +**Nested deduplication behavior:** + +```tsx +// string[] - duplicates everything +usernames={['a','b']} sorted={usernames.toSorted()} // sends 4 strings + +// object[] - duplicates array structure only +users={[{id:1},{id:2}]} sorted={users.toSorted()} // sends 2 arrays + 2 unique objects (not 4) +``` + +Deduplication works recursively. 
Impact varies by data type:
+
+- `string[]`, `number[]`, `boolean[]`: **HIGH impact** - array + all primitives fully duplicated
+
+- `object[]`: **LOW impact** - array duplicated, but nested objects deduplicated by reference
+
+**Operations breaking deduplication: create new references**
+
+- Arrays: `.toSorted()`, `.filter()`, `.map()`, `.slice()`, `[...arr]`
+
+- Objects: `{...obj}`, `Object.assign()`, `structuredClone()`, `JSON.parse(JSON.stringify())`
+
+**More examples:**
+
+```tsx
+// ❌ Bad
+<Component users={users} filtered={users.filter(u => u.active)} />
+<Component user={user} copy={{ ...user }} />
+
+// ✅ Good
+<Component users={users} />
+<Component user={user} />
+// Do filtering/destructuring in client
+```
+
+**Exception:** Pass derived data when transformation is expensive or client doesn't need original.
+
+### 3.3 Cross-Request LRU Caching
+
+**Impact: HIGH (caches across requests)**
+
+`React.cache()` only works within one request. For data shared across sequential requests (user clicks button A then button B), use an LRU cache.
+
+**Implementation:**
+
+```typescript
+import { LRUCache } from 'lru-cache'
+
+const cache = new LRUCache({
+  max: 1000,
+  ttl: 5 * 60 * 1000 // 5 minutes
+})
+
+export async function getUser(id: string) {
+  const cached = cache.get(id)
+  if (cached) return cached
+
+  const user = await db.user.findUnique({ where: { id } })
+  cache.set(id, user)
+  return user
+}
+
+// Request 1: DB query, result cached
+// Request 2: cache hit, no DB query
+```
+
+Use when sequential user actions hit multiple endpoints needing the same data within seconds.
+
+**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.
+
+**In traditional serverless:** Each invocation runs in isolation, so consider Redis for cross-process caching.
+
+Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)
+
+### 3.4 Minimize Serialization at RSC Boundaries
+
+**Impact: HIGH (reduces data transfer size)**
+
+The React Server/Client boundary serializes all object properties into strings and embeds them in the HTML response and subsequent RSC requests. This serialized data directly impacts page weight and load time, so **size matters a lot**. Only pass fields that the client actually uses.
+
+**Incorrect: serializes all 50 fields**
+
+```tsx
+async function Page() {
+  const user = await fetchUser() // 50 fields
+  return <Profile user={user} />
+}
+
+'use client'
+function Profile({ user }: { user: User }) {
+  return <div>{user.name}</div> // uses 1 field
+}
+```
+
+**Correct: serializes only 1 field**
+
+```tsx
+async function Page() {
+  const user = await fetchUser()
+  return <Profile name={user.name} />
+}
+
+'use client'
+function Profile({ name }: { name: string }) {
+  return <div>{name}</div>
+}
+```
+
+### 3.5 Parallel Data Fetching with Component Composition
+
+**Impact: CRITICAL (eliminates server-side waterfalls)**
+
+React Server Components execute sequentially within a tree. Restructure with composition to parallelize data fetching.
+
+**Incorrect: Sidebar waits for Page's fetch to complete**
+
+```tsx
+export default async function Page() {
+  const header = await fetchHeader()
+  return (
+    <div className="layout">
+      <header>{header}</header>
+      <Sidebar />
+    </div>
+  )
+}
+
+async function Sidebar() {
+  const items = await fetchSidebarItems()
+  return <Nav items={items} />
+}
+```
+
+**Correct: both fetch simultaneously**
+
+```tsx
+async function Header() {
+  const data = await fetchHeader()
+  return <header>{data}</header>
+}
+
+async function Sidebar() {
+  const items = await fetchSidebarItems()
+  return <Nav items={items} />
+}
+
+export default function Page() {
+  return (
+    <div className="layout">
+      <Header />
+      <Sidebar />
+    </div>
+  )
+}
+```
+
+**Alternative with children prop:**
+
+```tsx
+async function Header() {
+  const data = await fetchHeader()
+  return <header>{data}</header>
+}
+
+async function Sidebar() {
+  const items = await fetchSidebarItems()
+  return <Nav items={items} />
+}
+
+function Layout({ children }: { children: ReactNode }) {
+  return (
+    <div className="layout">
+      <Sidebar />
+      {children}
+    </div>
+ ) +} + +export default function Page() { + return ( + + + + ) +} +``` + +### 3.6 Per-Request Deduplication with React.cache() + +**Impact: MEDIUM (deduplicates within request)** + +Use `React.cache()` for server-side request deduplication. Authentication and database queries benefit most. + +**Usage:** + +```typescript +import { cache } from 'react' + +export const getCurrentUser = cache(async () => { + const session = await auth() + if (!session?.user?.id) return null + return await db.user.findUnique({ + where: { id: session.user.id } + }) +}) +``` + +Within a single request, multiple calls to `getCurrentUser()` execute the query only once. + +**Avoid inline objects as arguments:** + +`React.cache()` uses shallow equality (`Object.is`) to determine cache hits. Inline objects create new references each call, preventing cache hits. + +**Incorrect: always cache miss** + +```typescript +const getUser = cache(async (params: { uid: number }) => { + return await db.user.findUnique({ where: { id: params.uid } }) +}) + +// Each call creates new object, never hits cache +getUser({ uid: 1 }) +getUser({ uid: 1 }) // Cache miss, runs query again +``` + +**Correct: cache hit** + +```typescript +const params = { uid: 1 } +getUser(params) // Query runs +getUser(params) // Cache hit (same reference) +``` + +If you must pass objects, pass the same reference: + +**Next.js-Specific Note:** + +In Next.js, the `fetch` API is automatically extended with request memoization. Requests with the same URL and options are automatically deduplicated within a single request, so you don't need `React.cache()` for `fetch` calls. However, `React.cache()` is still essential for other async tasks: + +- Database queries (Prisma, Drizzle, etc.) + +- Heavy computations + +- Authentication checks + +- File system operations + +- Any non-fetch async work + +Use `React.cache()` to deduplicate these operations across your component tree. + +Reference: [https://react.dev/reference/react/cache](https://react.dev/reference/react/cache) + +### 3.7 Use after() for Non-Blocking Operations + +**Impact: MEDIUM (faster response times)** + +Use Next.js's `after()` to schedule work that should execute after a response is sent. This prevents logging, analytics, and other side effects from blocking the response. + +**Incorrect: blocks response** + +```tsx +import { logUserAction } from '@/app/utils' + +export async function POST(request: Request) { + // Perform mutation + await updateDatabase(request) + + // Logging blocks the response + const userAgent = request.headers.get('user-agent') || 'unknown' + await logUserAction({ userAgent }) + + return new Response(JSON.stringify({ status: 'success' }), { + status: 200, + headers: { 'Content-Type': 'application/json' } + }) +} +``` + +**Correct: non-blocking** + +```tsx +import { after } from 'next/server' +import { headers, cookies } from 'next/headers' +import { logUserAction } from '@/app/utils' + +export async function POST(request: Request) { + // Perform mutation + await updateDatabase(request) + + // Log after response is sent + after(async () => { + const userAgent = (await headers()).get('user-agent') || 'unknown' + const sessionCookie = (await cookies()).get('session-id')?.value || 'anonymous' + + logUserAction({ sessionCookie, userAgent }) + }) + + return new Response(JSON.stringify({ status: 'success' }), { + status: 200, + headers: { 'Content-Type': 'application/json' } + }) +} +``` + +The response is sent immediately while logging happens in the background. 
+Reference: [https://react.dev/reference/react/cache](https://react.dev/reference/react/cache)
+
+### 3.7 Use after() for Non-Blocking Operations
+
+**Impact: MEDIUM (faster response times)**
+
+Use Next.js's `after()` to schedule work that should execute after a response is sent. This prevents logging, analytics, and other side effects from blocking the response.
+
+**Incorrect: blocks response**
+
+```tsx
+import { logUserAction } from '@/app/utils'
+
+export async function POST(request: Request) {
+  // Perform mutation
+  await updateDatabase(request)
+
+  // Logging blocks the response
+  const userAgent = request.headers.get('user-agent') || 'unknown'
+  await logUserAction({ userAgent })
+
+  return new Response(JSON.stringify({ status: 'success' }), {
+    status: 200,
+    headers: { 'Content-Type': 'application/json' }
+  })
+}
+```
+
+**Correct: non-blocking**
+
+```tsx
+import { after } from 'next/server'
+import { headers, cookies } from 'next/headers'
+import { logUserAction } from '@/app/utils'
+
+export async function POST(request: Request) {
+  // Perform mutation
+  await updateDatabase(request)
+
+  // Log after the response is sent
+  after(async () => {
+    const userAgent = (await headers()).get('user-agent') || 'unknown'
+    const sessionCookie = (await cookies()).get('session-id')?.value || 'anonymous'
+
+    logUserAction({ sessionCookie, userAgent })
+  })
+
+  return new Response(JSON.stringify({ status: 'success' }), {
+    status: 200,
+    headers: { 'Content-Type': 'application/json' }
+  })
+}
+```
+
+The response is sent immediately while logging happens in the background.
+
+**Common use cases:**
+
+- Analytics tracking
+- Audit logging
+- Sending notifications
+- Cache invalidation
+- Cleanup tasks
+
+**Important notes:**
+
+- `after()` runs even if the response fails or redirects
+- Works in Server Actions, Route Handlers, and Server Components (sketched below)
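+The same pattern applies inside a Server Action. A minimal sketch, assuming hypothetical `saveDraft` and `trackEvent` helpers:
+
+```tsx
+'use server'
+
+import { after } from 'next/server'
+
+export async function saveDraftAction(formData: FormData) {
+  // The caller gets a result as soon as the draft is persisted
+  const draft = await saveDraft(formData)
+
+  // Analytics is deferred until after the response has been sent
+  after(() => {
+    trackEvent('draft_saved', { draftId: draft.id })
+  })
+
+  return { ok: true }
+}
+```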
+Reference: [https://nextjs.org/docs/app/api-reference/functions/after](https://nextjs.org/docs/app/api-reference/functions/after)
+
+---
+
+## 4. Client-Side Data Fetching
+
+**Impact: MEDIUM-HIGH**
+
+Automatic deduplication and efficient data fetching patterns reduce redundant network requests.
+
+### 4.1 Deduplicate Global Event Listeners
+
+**Impact: LOW (single listener for N components)**
+
+Use `useSWRSubscription()` to share global event listeners across component instances.
+
+**Incorrect: N instances = N listeners**
+
+```tsx
+function useKeyboardShortcut(key: string, callback: () => void) {
+  useEffect(() => {
+    const handler = (e: KeyboardEvent) => {
+      if (e.metaKey && e.key === key) {
+        callback()
+      }
+    }
+    window.addEventListener('keydown', handler)
+    return () => window.removeEventListener('keydown', handler)
+  }, [key, callback])
+}
+```
+
+When the `useKeyboardShortcut` hook is used multiple times, each instance registers its own listener.
+
+**Correct: N instances = 1 listener**
+
+```tsx
+import useSWRSubscription from 'swr/subscription'
+
+// Module-level Map to track callbacks per key
+const keyCallbacks = new Map<string, Set<() => void>>()
+
+function useKeyboardShortcut(key: string, callback: () => void) {
+  // Register this callback in the Map
+  useEffect(() => {
+    if (!keyCallbacks.has(key)) {
+      keyCallbacks.set(key, new Set())
+    }
+    keyCallbacks.get(key)!.add(callback)
+
+    return () => {
+      const set = keyCallbacks.get(key)
+      if (set) {
+        set.delete(callback)
+        if (set.size === 0) {
+          keyCallbacks.delete(key)
+        }
+      }
+    }
+  }, [key, callback])
+
+  useSWRSubscription('global-keydown', () => {
+    const handler = (e: KeyboardEvent) => {
+      if (e.metaKey && keyCallbacks.has(e.key)) {
+        keyCallbacks.get(e.key)!.forEach(cb => cb())
+      }
+    }
+    window.addEventListener('keydown', handler)
+    return () => window.removeEventListener('keydown', handler)
+  })
+}
+
+function Profile() {
+  // Multiple shortcuts share the same listener
+  useKeyboardShortcut('p', () => { /* ... */ })
+  useKeyboardShortcut('k', () => { /* ... */ })
+  // ...
+}
+```
+
+### 4.2 Use Passive Event Listeners for Scrolling Performance
+
+**Impact: MEDIUM (eliminates scroll delay caused by event listeners)**
+
+Add `{ passive: true }` to touch and wheel event listeners to enable immediate scrolling. Browsers otherwise wait for the listener to finish before scrolling, in case it calls `preventDefault()`, which causes scroll delay.
+
+**Incorrect:**
+
+```typescript
+useEffect(() => {
+  const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
+  const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
+
+  document.addEventListener('touchstart', handleTouch)
+  document.addEventListener('wheel', handleWheel)
+
+  return () => {
+    document.removeEventListener('touchstart', handleTouch)
+    document.removeEventListener('wheel', handleWheel)
+  }
+}, [])
+```
+
+**Correct:**
+
+```typescript
+useEffect(() => {
+  const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
+  const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
+
+  document.addEventListener('touchstart', handleTouch, { passive: true })
+  document.addEventListener('wheel', handleWheel, { passive: true })
+
+  return () => {
+    document.removeEventListener('touchstart', handleTouch)
+    document.removeEventListener('wheel', handleWheel)
+  }
+}, [])
+```
+
+**Use passive when:** tracking/analytics, logging, any listener that doesn't call `preventDefault()`.
+
+**Don't use passive when:** implementing custom swipe gestures, custom zoom controls, or any listener that needs `preventDefault()`.
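+For the opposite case, a sketch of a listener that must stay non-passive because it calls `preventDefault()` (the `trackSwipe` helper is hypothetical):
+
+```typescript
+useEffect(() => {
+  const handleTouchMove = (e: TouchEvent) => {
+    // Custom horizontal swipe: block native scrolling for this gesture
+    e.preventDefault()
+    trackSwipe(e.touches[0].clientX)
+  }
+
+  // Explicitly non-passive; marking this passive would turn preventDefault() into a no-op
+  document.addEventListener('touchmove', handleTouchMove, { passive: false })
+  return () => document.removeEventListener('touchmove', handleTouchMove)
+}, [])
+```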
+### 4.3 Use SWR for Automatic Deduplication
+
+**Impact: MEDIUM-HIGH (automatic deduplication)**
+
+SWR enables request deduplication, caching, and revalidation across component instances.
+
+**Incorrect: no deduplication, each instance fetches**
+
+```tsx
+function UserList() {
+  const [users, setUsers] = useState([])
+  useEffect(() => {
+    fetch('/api/users')
+      .then(r => r.json())
+      .then(setUsers)
+  }, [])
+}
+```
+
+**Correct: multiple instances share one request**
+
+```tsx
+import useSWR from 'swr'
+
+const fetcher = (url: string) => fetch(url).then(r => r.json())
+
+function UserList() {
+  const { data: users } = useSWR('/api/users', fetcher)
+}
+```
+
+**For immutable data:**
+
+```tsx
+import { useImmutableSWR } from '@/lib/swr'
+
+function StaticContent() {
+  const { data } = useImmutableSWR('/api/config', fetcher)
+}
+```
+
+**For mutations:**
+
+```tsx
+import useSWRMutation from 'swr/mutation'
+
+function UpdateButton() {
+  const { trigger } = useSWRMutation('/api/user', updateUser)
+  return <button onClick={() => trigger()}>Update</button>
+}
+```
+
+Reference: [https://swr.vercel.app](https://swr.vercel.app)
+
+### 4.4 Version and Minimize localStorage Data
+
+**Impact: MEDIUM (prevents schema conflicts, reduces storage size)**
+
+Add a version prefix to keys and store only the fields you need. This prevents schema conflicts and accidental storage of sensitive data.
+
+**Incorrect:**
+
+```typescript
+// No version, stores everything, no error handling
+localStorage.setItem('userConfig', JSON.stringify(fullUserObject))
+const data = localStorage.getItem('userConfig')
+```
+
+**Correct:**
+
+```typescript
+const VERSION = 'v2'
+
+function saveConfig(config: { theme: string; language: string }) {
+  try {
+    localStorage.setItem(`userConfig:${VERSION}`, JSON.stringify(config))
+  } catch {
+    // Throws in incognito/private browsing, when quota is exceeded, or when disabled
+  }
+}
+
+function loadConfig() {
+  try {
+    const data = localStorage.getItem(`userConfig:${VERSION}`)
+    return data ? JSON.parse(data) : null
+  } catch {
+    return null
+  }
+}
+
+// Migration from v1 to v2
+function migrate() {
+  try {
+    const v1 = localStorage.getItem('userConfig:v1')
+    if (v1) {
+      const old = JSON.parse(v1)
+      saveConfig({ theme: old.darkMode ? 'dark' : 'light', language: old.lang })
+      localStorage.removeItem('userConfig:v1')
+    }
+  } catch {}
+}
+```
+
+**Store minimal fields from server responses:**
+
+```typescript
+// The user object has 20+ fields; only store what the UI needs
+function cachePrefs(user: FullUser) {
+  try {
+    localStorage.setItem('prefs:v1', JSON.stringify({
+      theme: user.preferences.theme,
+      notifications: user.preferences.notifications
+    }))
+  } catch {}
+}
+```
+
+**Always wrap in try-catch:** `getItem()` and `setItem()` throw in incognito/private browsing (Safari, Firefox), when the quota is exceeded, or when localStorage is disabled.
+
+**Benefits:** Schema evolution via versioning, reduced storage size, prevents storing tokens/PII/internal flags.
+---
+
+## 5. Re-render Optimization
+
+**Impact: MEDIUM**
+
+Reducing unnecessary re-renders minimizes wasted computation and improves UI responsiveness.
+
+### 5.1 Calculate Derived State During Rendering
+
+**Impact: MEDIUM (avoids redundant renders and state drift)**
+
+If a value can be computed from current props/state, do not store it in state or update it in an effect. Derive it during render to avoid extra renders and state drift. Do not set state in effects solely in response to prop changes; prefer derived values or a keyed reset (remounting the component with a different `key`) instead.
+
+**Incorrect: redundant state and effect**
+
+```tsx
+function Form() {
+  const [firstName, setFirstName] = useState('First')
+  const [lastName, setLastName] = useState('Last')
+  const [fullName, setFullName] = useState('')
+
+  useEffect(() => {
+    setFullName(firstName + ' ' + lastName)
+  }, [firstName, lastName])
+
+  return <p>{fullName}</p>
+}
+```
+
+**Correct: derive during render**
+
+```tsx
+function Form() {
+  const [firstName, setFirstName] = useState('First')
+  const [lastName, setLastName] = useState('Last')
+  const fullName = firstName + ' ' + lastName
+
+  return <p>{fullName}</p>
+}
+```
+
+Reference: [https://react.dev/learn/you-might-not-need-an-effect](https://react.dev/learn/you-might-not-need-an-effect)
+
+### 5.2 Defer State Reads to Usage Point
+
+**Impact: MEDIUM (avoids unnecessary subscriptions)**
+
+Don't subscribe to dynamic state (searchParams, localStorage) if you only read it inside callbacks.
+
+**Incorrect: subscribes to all searchParams changes**
+
+```tsx
+function ShareButton({ chatId }: { chatId: string }) {
+  const searchParams = useSearchParams()
+
+  const handleShare = () => {
+    const ref = searchParams.get('ref')
+    shareChat(chatId, { ref })
+  }
+
+  return <button onClick={handleShare}>Share</button>
+}
+```
+
+**Correct: reads on demand, no subscription**
+
+```tsx
+function ShareButton({ chatId }: { chatId: string }) {
+  const handleShare = () => {
+    const params = new URLSearchParams(window.location.search)
+    const ref = params.get('ref')
+    shareChat(chatId, { ref })
+  }
+
+  return <button onClick={handleShare}>Share</button>
+}
+```
+### 5.3 Do not wrap a simple expression with a primitive result type in useMemo
+
+**Impact: LOW-MEDIUM (wasted computation on every render)**
+
+When an expression is simple (a few logical or arithmetic operators) and has a primitive result type (boolean, number, string), do not wrap it in `useMemo`. Calling `useMemo` and comparing the hook's dependencies can cost more than evaluating the expression itself.
+
+**Incorrect:**
+
+```tsx
+function Header({ user, notifications }: Props) {
+  const isLoading = useMemo(() => {
+    return user.isLoading || notifications.isLoading
+  }, [user.isLoading, notifications.isLoading])
+
+  if (isLoading) return <Spinner />
+  // return some markup
+}
+```
+
+**Correct:**
+
+```tsx
+function Header({ user, notifications }: Props) {
+  const isLoading = user.isLoading || notifications.isLoading
+
+  if (isLoading) return <Spinner />
+  // return some markup
+}
+```
+### 5.4 Extract Default Non-primitive Parameter Value from Memoized Component to Constant
+
+**Impact: MEDIUM (restores memoization by using a constant for default value)**
+
+When a memoized component has a default value for a non-primitive optional parameter (an array, function, or object), calling the component without that parameter breaks memoization: a new default instance is created on every rerender, and it fails the strict equality comparison in `memo()`.
+
+To address this, extract the default value into a constant.
+
+**Incorrect: `onClick` has a different value on every rerender**
+
+```tsx
+const UserAvatar = memo(function UserAvatar({ onClick = () => {} }: { onClick?: () => void }) {
+  // ...
+})
+
+// Used without the optional onClick
+<UserAvatar />
+```
+
+**Correct: stable default value**
+
+```tsx
+const NOOP = () => {}
+
+const UserAvatar = memo(function UserAvatar({ onClick = NOOP }: { onClick?: () => void }) {
+  // ...
+})
+
+// Used without the optional onClick
+<UserAvatar />
+```
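+The same technique applies to array and object defaults. A brief sketch with a hypothetical `items` prop:
+
+```tsx
+// Hoisted constant keeps the default reference stable across rerenders
+const EMPTY_ITEMS: string[] = []
+
+const ItemList = memo(function ItemList({ items = EMPTY_ITEMS }: { items?: string[] }) {
+  // ...
+})
+
+// Used without the optional items prop; memoization still holds
+<ItemList />
+```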
+### 5.5 Extract to Memoized Components
+
+**Impact: MEDIUM (enables early returns)**
+
+Extract expensive work into memoized components to enable early returns before the computation runs.
+
+**Incorrect: computes avatar even when loading**
+
+```tsx
+function Profile({ user, loading }: Props) {
+  const avatar = useMemo(() => {
+    const id = computeAvatarId(user)
+    return <Avatar id={id} />
+  }, [user])
+
+  if (loading) return <Skeleton />
+  return <div className="profile">{avatar}</div>
+}
+```
+
+**Correct: skips computation when loading**
+
+```tsx
+const UserAvatar = memo(function UserAvatar({ user }: { user: User }) {
+  const id = useMemo(() => computeAvatarId(user), [user])
+  return <Avatar id={id} />
+})
+
+function Profile({ user, loading }: Props) {
+  if (loading) return <Skeleton />
+  return (
+    <div className="profile">
+      <UserAvatar user={user} />
+    </div>
+  )
+}
+```
+
+**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, manual memoization with `memo()` and `useMemo()` is not necessary. The compiler automatically optimizes re-renders.
+
+### 5.6 Narrow Effect Dependencies
+
+**Impact: LOW (minimizes effect re-runs)**
+
+Specify primitive dependencies instead of objects to minimize effect re-runs.
+
+**Incorrect: re-runs on any user field change**
+
+```tsx
+useEffect(() => {
+  console.log(user.id)
+}, [user])
+```
+
+**Correct: re-runs only when id changes**
+
+```tsx
+useEffect(() => {
+  console.log(user.id)
+}, [user.id])
+```
+
+**For derived state, compute outside the effect:**
+
+```tsx
+// Incorrect: runs on width=767, 766, 765...
+useEffect(() => {
+  if (width < 768) {
+    enableMobileMode()
+  }
+}, [width])
+
+// Correct: runs only on the boolean transition
+const isMobile = width < 768
+useEffect(() => {
+  if (isMobile) {
+    enableMobileMode()
+  }
+}, [isMobile])
+```
+
+### 5.7 Put Interaction Logic in Event Handlers
+
+**Impact: MEDIUM (avoids effect re-runs and duplicate side effects)**
+
+If a side effect is triggered by a specific user action (submit, click, drag), run it in that event handler. Do not model the action as state plus an effect; that makes the effect re-run on unrelated changes and can duplicate the action.
+
+**Incorrect: event modeled as state + effect**
+
+```tsx
+function Form() {
+  const [submitted, setSubmitted] = useState(false)
+  const theme = useContext(ThemeContext)
+
+  useEffect(() => {
+    if (submitted) {
+      post('/api/register')
+      showToast('Registered', theme)
+    }
+  }, [submitted, theme])
+
+  return <button onClick={() => setSubmitted(true)}>Register</button>
+}
+```
+
+**Correct: do it in the handler**
+
+```tsx
+function Form() {
+  const theme = useContext(ThemeContext)
+
+  function handleSubmit() {
+    post('/api/register')
+    showToast('Registered', theme)
+  }
+
+  return <button onClick={handleSubmit}>Register</button>
+}
+```
+
+Reference: [https://react.dev/learn/removing-effect-dependencies#should-this-code-move-to-an-event-handler](https://react.dev/learn/removing-effect-dependencies#should-this-code-move-to-an-event-handler)
+
+### 5.8 Subscribe to Derived State
+
+**Impact: MEDIUM (reduces re-render frequency)**
+
+Subscribe to derived boolean state instead of continuous values to reduce re-render frequency.
+
+**Incorrect: re-renders on every pixel change**
+
+```tsx
+function Sidebar() {
+  const width = useWindowWidth() // updates continuously
+  const isMobile = width < 768
+  return