A sophisticated multi-agent system built with CrewAI that automatically generates comprehensive README.md files for any codebase. This system uses four specialized AI agents working in sequence to analyze, document, and format professional documentation. The system has been enhanced with improved task context management and native CrewAI LLM integration for better accuracy and reliability.
The Multiagent README Generator employs a team of four specialized AI agents:
- Code Analyst Agent - Analyzes codebase structure and identifies key components
- Documentation Writer Agent - Creates clear, user-friendly documentation
- Example Generator Agent - Develops practical code examples and usage scenarios
- README Formatter Agent - Combines everything into a professional, well-structured README
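The hand-off between the four agents can be sketched in plain Python. This is a simplified stand-in for the actual CrewAI crew: the `Stage` class and the stage outputs below are illustrative, not the project's real API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    """One agent in the sequential pipeline: reads the shared context, returns its output."""
    name: str
    run: Callable[[dict], dict]

def run_pipeline(codebase_path: str) -> str:
    # Shared context that each stage reads from and extends,
    # mirroring how each agent builds on the previous agents' output.
    context = {"path": codebase_path}
    stages = [
        Stage("analyst", lambda c: {"analysis": f"components of {c['path']}"}),
        Stage("writer", lambda c: {"docs": f"Docs based on {c['analysis']}"}),
        Stage("examples", lambda c: {"examples": f"Examples for {c['analysis']}"}),
        Stage("formatter", lambda c: {"readme": c["docs"] + "\n" + c["examples"]}),
    ]
    for stage in stages:
        context.update(stage.run(context))
    return context["readme"]
```

Each stage only sees what earlier stages produced, which is why the formatter can combine documentation and examples without re-reading the codebase.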
- Automatic Code Analysis: Intelligently analyzes project structure, dependencies, and functionality
- Multi-Language Support: Works with Python, JavaScript, TypeScript, Rust, Go, and more
- Professional Documentation: Generates comprehensive READMEs with proper formatting
- Code Examples: Creates practical, runnable code examples
- Customizable Output: Adapts to different project types and structures
- CrewAI Integration: Leverages advanced multi-agent orchestration
- Native LLM Support: Uses native CrewAI LLM integration for better compatibility
- Enhanced Context Management: Improved task context passing between agents for accurate analysis
- Flexible Sample Generation: Separate script for creating sample codebases
- Python 3.7 or higher
- Google Gemini API key (or alternative LLM provider)
- Internet connection for AI model access
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd multiagent_doc_generator
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up environment variables:

   ```bash
   cp env_template.txt .env
   # Edit .env and add your Google Gemini API key
   ```

4. Get your Google Gemini API key:
   - Visit Google AI Studio
   - Create a new API key
   - Add it to your `.env` file
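To catch a missing key early, the lookup can be wrapped in a small helper. This is a hedged sketch: `load_api_key` is not part of the project's code.

```python
import os

def load_api_key(env_var: str = "GOOGLE_API_KEY") -> str:
    """Read the API key from the environment and fail early with a clear message."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set. Copy env_template.txt to .env and add your key."
        )
    return key
```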
```python
from readme_generator import ReadmeGenerator

# Create the generator
generator = ReadmeGenerator()

# Generate README for a codebase
readme_content = generator.generate("./path/to/your/codebase")

# Save the result
with open("README.md", "w") as f:
    f.write(readme_content)
```

To create a sample codebase for testing, run:

```bash
python sample_code_gen.py
```

This will create a sample calculator codebase in the `sample_codebase/` directory.
```bash
python readme_generator.py
```

This will:

- Analyze the sample codebase (or any specified codebase)
- Generate a comprehensive README for it
- Save the result to `sample_codebase/generated_README.md`
- Display the content in the terminal
```bash
# First, create the sample codebase
python sample_code_gen.py

# Then generate the README
python readme_generator.py
```

**Code Analyst Agent**

- Role: Senior Code Analyst
- Goal: Thoroughly analyze codebase structure and identify key components
- Output: Structured JSON analysis of the project
**Documentation Writer Agent**

- Role: Technical Writer
- Goal: Create clear, user-friendly documentation
- Output: Well-structured Markdown documentation sections
**Example Generator Agent**

- Role: Code Example Specialist
- Goal: Create practical, easy-to-understand code examples
- Output: Runnable code snippets with explanations
**README Formatter Agent**

- Role: Markdown Formatting Expert
- Goal: Combine all content into a professional README
- Output: Complete, formatted README.md content
```
Codebase Input → Code Analysis → Documentation → Examples → Final README
      ↓               ↓              ↓              ↓             ↓
 Path String     JSON Analysis    Markdown    Code Snippets  Complete README
```
The system now uses proper task context passing between agents:
- Analysis Task: Analyzes the actual codebase content
- Documentation Task: Uses analysis context to create project-specific documentation
- Example Task: Uses analysis context to generate relevant code examples
- Formatting Task: Combines all previous task outputs into the final README
This ensures that each agent has access to the actual project analysis, preventing generic content generation.
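The context wiring can be illustrated with a minimal stand-in for task objects. The `Task` class below is illustrative, not CrewAI's; it only shows how downstream tasks receive upstream outputs instead of starting from scratch.

```python
class Task:
    """Minimal stand-in for a task with explicit context dependencies (not the CrewAI class)."""
    def __init__(self, name, func, context=()):
        self.name, self.func, self.context = name, func, list(context)
        self.output = None

    def execute(self):
        # Pass the outputs of the context tasks in, so this task
        # works from the real analysis rather than generic assumptions.
        self.output = self.func([t.output for t in self.context])
        return self.output

analysis = Task("analysis", lambda ctx: "classes: AdvancedCalculator")
docs = Task("docs", lambda ctx: f"Docs using {ctx[0]}", context=[analysis])
examples = Task("examples", lambda ctx: f"Examples using {ctx[0]}", context=[analysis])
final = Task("format", lambda ctx: " | ".join(ctx), context=[docs, examples])

for t in (analysis, docs, examples, final):
    t.execute()
```

Because `docs` and `examples` both list `analysis` as context, their outputs are grounded in the same project analysis, and `final` merges the two.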
- Enhanced Task Context Management: Fixed the issue where agents were generating generic content instead of analyzing the actual codebase
- Native CrewAI LLM Integration: Replaced custom LLM wrapper with native CrewAI LLM for better compatibility and reliability
- Separated Sample Generation: Split the main script into `sample_code_gen.py` for creating sample codebases and `readme_generator.py` for generating READMEs
- Improved File Organization: README files are now saved inside the target codebase directory
- Better Error Handling: Enhanced debugging and error reporting throughout the system
- Accurate Project Analysis: The system now properly analyzes and documents the actual project structure and functionality
- Context Passing: Agents now properly receive and use context from previous tasks
- Code Analysis: Enhanced codebase analysis to extract actual classes, functions, and project structure
- Output Location: Generated READMEs are saved in the correct location within the project directory
- LLM Compatibility: Improved LLM integration to avoid Vertex AI issues
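The class and function extraction described above can be approximated with Python's standard `ast` module. This is a sketch under the assumption that the target files are Python; the project's actual analysis code may differ.

```python
import ast

def extract_structure(source: str) -> dict:
    """Return top-level class and function names from Python source."""
    tree = ast.parse(source)
    return {
        "classes": [n.name for n in tree.body if isinstance(n, ast.ClassDef)],
        "functions": [n.name for n in tree.body if isinstance(n, ast.FunctionDef)],
    }

sample = """
class AdvancedCalculator:
    def power(self, a, b):
        return a ** b

def main():
    pass
"""
```

Running `extract_structure(sample)` yields the top-level names only; methods stay inside their class node, which keeps the summary compact for the LLM prompt.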
```
multiagent_doc_generator/
├── readme_generator.py     # Main ReadmeGenerator class with enhanced context management
├── sample_code_gen.py      # Script to create sample codebases
├── custom_gemini_llm.py    # Custom LLM wrapper (legacy, now using native CrewAI LLM)
├── requirements.txt        # Python dependencies
├── env_template.txt        # Environment variables template
├── README.md               # This file
└── sample_codebase/        # Generated sample project for testing
    ├── main.py             # Simple calculator with basic operations
    ├── src/
    │   └── calculator.py   # Advanced calculator with power, sqrt, factorial, memory
    ├── tests/
    │   └── test_calculator.py  # Unit tests for AdvancedCalculator
    ├── requirements.txt
    ├── setup.py
    └── generated_README.md # Generated README (created by readme_generator.py)
```
| Variable | Description | Required | Default |
|---|---|---|---|
| `GOOGLE_API_KEY` | Google Gemini API key | Yes | - |
| `CREWAI_VERBOSE` | Enable verbose logging | No | `true` |
| `CREWAI_MAX_ITERATIONS` | Maximum agent iterations | No | `10` |
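These variables could be read with `os.getenv` and the defaults from the table above. The helper below is illustrative, not the project's actual code.

```python
import os

def read_crew_config() -> dict:
    """Read optional CrewAI settings, falling back to the documented defaults."""
    return {
        "verbose": os.getenv("CREWAI_VERBOSE", "true").lower() == "true",
        "max_iterations": int(os.getenv("CREWAI_MAX_ITERATIONS", "10")),
    }
```

Coercing the strings up front (`"true"` to `bool`, `"10"` to `int`) keeps type handling out of the agent code.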
The system now uses native CrewAI LLM integration with Google Gemini by default:
```python
# In readme_generator.py, the LLM is configured as:
self.llm = LLM(
    model="gemini/gemini-2.0-flash-exp",
    temperature=0.1,
    api_key=os.getenv("GOOGLE_API_KEY"),
)
```

You can configure it to use other LLM providers:
```python
# For OpenAI
self.llm = LLM(
    model="openai/gpt-4",
    api_key=os.getenv("OPENAI_API_KEY"),
    temperature=0.1,
)

# For Anthropic Claude
self.llm = LLM(
    model="anthropic/claude-3-sonnet",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    temperature=0.1,
)
```

- Python Projects: Full support with dependency analysis
- Node.js Projects: Package.json analysis and npm scripts
- Rust Projects: Cargo.toml analysis
- Go Projects: go.mod analysis
- Generic Projects: Basic file structure analysis
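A simple way to implement this kind of detection is to look for each ecosystem's marker file. This is a sketch; the real analyzer may use richer heuristics.

```python
from pathlib import Path

# Marker files mapped to project types, roughly matching the list above.
MARKERS = {
    "requirements.txt": "python",
    "package.json": "node",
    "Cargo.toml": "rust",
    "go.mod": "go",
}

def detect_project_type(root: str) -> str:
    """Return the project type for a directory, or 'generic' if no marker is found."""
    path = Path(root)
    for marker, kind in MARKERS.items():
        if (path / marker).exists():
            return kind
    return "generic"
```

Falling back to `"generic"` means any directory still gets at least a basic file-structure analysis.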
The generated READMEs include:
- Project Title and Description
- Features List
- Installation Instructions
- Usage Examples
- Project Structure Overview
- Configuration Options
- Contributing Guidelines
- License Information
- Code Examples with Syntax Highlighting
- Professional Formatting
1. Create a sample codebase:

   ```bash
   python sample_code_gen.py
   ```

2. Generate README for the sample:

   ```bash
   python readme_generator.py
   ```

3. Check the generated README:

   ```bash
   cat sample_codebase/generated_README.md
   ```
```python
from readme_generator import ReadmeGenerator

generator = ReadmeGenerator()
readme_content = generator.generate("./path/to/your/project")
```

The system will analyze your project and generate a comprehensive README that accurately describes your codebase structure, functionality, and usage.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- CrewAI for the multi-agent framework
- Google Gemini for the AI capabilities
- The open-source community for inspiration and tools
- API Key Error: Ensure your Google Gemini API key is correctly set in the `.env` file
- Import Errors: Make sure all dependencies are installed with `pip install -r requirements.txt`
- Permission Errors: Ensure the script has write permissions in the target directory
- Generic Content Generation: If the system generates generic content instead of analyzing your specific project, ensure the task context is properly configured (this has been fixed in the latest version)
- LLM Connection Issues: Verify your API key and internet connection for LLM access
- Check the CrewAI documentation
- Review the Google Gemini API documentation
- Open an issue in this repository
- Support for more programming languages
- Custom README templates
- Integration with popular code hosting platforms
- Batch processing for multiple projects
- Advanced code analysis with AST parsing
- Integration with documentation generators
- Interactive README generation with user prompts
- Support for multiple output formats (HTML, PDF, etc.)
- Integration with CI/CD pipelines
- Real-time codebase monitoring and auto-documentation updates
Built with ❤️ using CrewAI and Google Gemini AI