A fast local semantic search tool that helps you find code using natural language queries. No internet connection is required; everything runs locally using the embeddinggemma-300m model.
Install Odino directly from PyPI:

```bash
pip install odino
```

Or install from source:

```bash
git clone https://github.com/cesp99/odino.git
cd odino
pip install -e .
```

For detailed installation instructions, including uninstallation and troubleshooting, see INSTALL.md.
```bash
# Index current directory
odino index .

# Index a specific directory
odino index /path/to/project

# Index with a custom model (optional)
odino index /path/to/project --model <your-own-model>
```

```bash
# Basic search (returns 2 results by default)
odino -q "function that handles user authentication"

# Search with a custom number of results
odino -q "database connection" -r 10

# Search specific file types
odino -q "error handling" --include "*.py"
```

Check the index status:

```bash
odino status
```

Find authentication code:

```bash
odino -q "user login function"
```

Search for database queries:

```bash
odino -q "sql select statement" --include "*.sql"
```

Find error handling patterns:
```bash
odino -q "try catch exception handling"
```

```
odino/
├── odino/
│   ├── __init__.py
│   ├── cli.py          # CLI entry point
│   ├── indexer.py      # File indexing logic
│   ├── searcher.py     # Semantic search implementation
│   └── utils.py        # Utility functions
├── pyproject.toml      # Project configuration
├── README.md           # This file
└── .odinoignore        # Default ignore patterns
```
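The `--include` option shown earlier filters results by file pattern. Glob-style matching of this kind can be sketched with Python's `fnmatch` module (an illustration of the concept, not Odino's internals):

```python
from fnmatch import fnmatch

def matches_include(path: str, patterns: list[str]) -> bool:
    """Return True if the file's name matches any glob-style pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pattern) for pattern in patterns)

print(matches_include("odino/cli.py", ["*.py"]))        # True
print(matches_include("README.md", ["*.py", "*.sql"]))  # False
```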
Odino creates a .odino/ directory in your project root with:
- `config.json` - Configuration settings
- `chroma_db/` - Vector database storage
- `indexed_files.json` - File tracking metadata
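Loading such a per-project configuration typically means merging the file's contents over built-in defaults. A hypothetical sketch (not Odino's actual loader) might look like:

```python
import json
from pathlib import Path

# Defaults mirroring the documented configuration values.
DEFAULTS = {
    "model_name": "EmmanuelEA/eea-embedding-gemma",
    "chunk_size": 512,
    "chunk_overlap": 50,
    "max_results": 2,
    "embedding_batch_size": 32,
    "device_preference": "auto",
}

def load_config(project_root: str) -> dict:
    """Merge .odino/config.json over the defaults, if the file exists."""
    path = Path(project_root) / ".odino" / "config.json"
    config = dict(DEFAULTS)
    if path.exists():
        config.update(json.loads(path.read_text()))
    return config
```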
Default configuration:
```json
{
  "model_name": "EmmanuelEA/eea-embedding-gemma",
  "chunk_size": 512,
  "chunk_overlap": 50,
  "max_results": 2,
  "embedding_batch_size": 32,
  "device_preference": "auto"
}
```

- Indexing: Scans your codebase, chunks files, and generates embeddings using the embeddinggemma-300m model
- Storage: Saves embeddings locally in ChromaDB vector database
- Search: Converts your natural language query to embeddings and finds semantically similar code
- Results: Displays file paths, similarity scores, and code snippets
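The pipeline above (chunk, embed, store, search) can be illustrated end to end with a toy bag-of-words "embedding"; the real system uses embeddinggemma-300m and ChromaDB, so this is purely a conceptual sketch:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 512, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, as the indexer does conceptually."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy word-count 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" two snippets, then search with a natural language query.
docs = {
    "auth.py": "def login(user, password): check user credentials",
    "db.py": "def connect(): open a database connection",
}
index = {path: embed(src) for path, src in docs.items()}
query = embed("user authentication login")
best = max(index, key=lambda p: cosine(query, index[p]))
print(best)  # auth.py
```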
- Local Processing: No internet required, everything runs offline
- Fast Indexing: embeddinggemma-300m model optimized for speed
- Smart Chunking: Handles large files by splitting into manageable chunks
- Beautiful Output: Rich console formatting with syntax highlighting
- Incremental Updates: Only reindexes changed files
- Flexible Filtering: Search by file type, limit results, custom patterns
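The incremental-update behavior can be pictured as modification-time tracking against a manifest such as `indexed_files.json`. A hedged sketch (Odino's real bookkeeping may differ):

```python
import os

def files_to_reindex(paths: list[str], manifest: dict[str, float]) -> list[str]:
    """Return paths whose modification time changed since the last indexing run.

    The manifest maps path -> last-seen mtime and is updated in place.
    """
    changed = []
    for path in paths:
        mtime = os.path.getmtime(path)
        if manifest.get(path) != mtime:
            changed.append(path)
            manifest[path] = mtime  # record for the next run
    return changed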
Create a .odinoignore file in your project root:
```
# Ignore specific directories
build/
dist/
node_modules/

# Ignore file patterns
*.log
*.tmp
*.cache
```
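Ignore rules like these are usually honored by pruning during directory traversal, so ignored trees are never even visited. A minimal sketch using `os.walk` (illustrative, assuming simple name-based matching rather than full gitignore semantics):

```python
import os
from fnmatch import fnmatch

IGNORED_DIRS = {"build", "dist", "node_modules"}
IGNORED_PATTERNS = ["*.log", "*.tmp", "*.cache"]

def walk_indexable(root: str):
    """Yield file paths under root, skipping ignored directories and patterns."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune in place so os.walk never descends into ignored directories.
        dirnames[:] = [d for d in dirnames if d not in IGNORED_DIRS]
        for name in filenames:
            if not any(fnmatch(name, pat) for pat in IGNORED_PATTERNS):
                yield os.path.join(dirpath, name)
```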
Force a full reindex:

```bash
odino index . --force
```

Check the index status:

```bash
odino status
```

The embeddinggemma-300m model downloads automatically on first use. Ensure you have:
- Stable internet connection for initial download
- Sufficient disk space (~300MB for model)
Make sure you have read permissions for files you want to index and write permissions for the .odino/ directory.
For very large codebases, consider:
- Reducing chunk size in configuration
- Excluding large directories with .odinoignore
- Indexing in batches
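"Indexing in batches" amounts to feeding chunks to the embedder in groups of `embedding_batch_size` rather than all at once. A simple sketch of the batching step:

```python
def batched(items: list, batch_size: int) -> list[list]:
    """Split a list of chunks into embedding batches of at most batch_size."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

chunks = [f"chunk-{i}" for i in range(70)]
batches = batched(chunks, 32)
print([len(b) for b in batches])  # [32, 32, 6]
```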
If you encounter MPS backend out of memory errors on Apple Silicon:
- Reduce the batch size in your .odino/config.json:

```json
{
  "embedding_batch_size": 16,
  "device_preference": "auto"
}
```

- Force CPU usage for stable processing:

```json
{
  "device_preference": "cpu"
}
```

- Use smaller batch sizes if memory issues persist:

```json
{
  "embedding_batch_size": 8
}
```

The system automatically handles MPS memory management with:
- Automatic batch processing in configurable sizes
- MPS memory clearing after each batch
- Automatic CPU fallback when MPS runs out of memory
- Smart device selection based on availability
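The device-selection and CPU-fallback behavior described above can be pictured like this. This is a pure-Python sketch with hypothetical helper names; Odino's actual torch-based implementation will differ:

```python
def choose_device(preference: str, mps_available: bool, cuda_available: bool) -> str:
    """Pick a compute device from an explicit preference plus availability flags."""
    if preference in ("cpu", "mps", "cuda"):
        return preference  # an explicit preference wins
    # "auto": prefer accelerator backends when available, else fall back to CPU.
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

def embed_with_fallback(batch, device: str, embed_fn):
    """Retry a batch on CPU if the accelerator runs out of memory."""
    try:
        return embed_fn(batch, device)
    except MemoryError:
        return embed_fn(batch, "cpu")  # automatic CPU fallback
```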
For advanced memory management configuration and more detailed troubleshooting, see MEMORY_MANAGEMENT.md.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
AI agents working with this codebase should refer to the ODINO.md file for detailed usage instructions and best practices. This file contains comprehensive documentation on:
- Basic Commands: Indexing and searching operations
- Advanced Search Options: Filtering, path targeting, and result limiting
- Semantic Search Capabilities: How to find files by meaning rather than exact keywords
- Best Practices: When to use Odino vs traditional grep, filtering strategies, and query optimization
- Workflow Examples: Real-world usage patterns for code discovery
The ODINO.md file is specifically designed to help AI agents understand how to effectively use Odino's semantic search capabilities to navigate and understand codebases during development tasks.
This project is licensed under the GNU General Public License v3.0; see the LICENSE file for details.
- Built with Typer for the CLI
- Uses Sentence Transformers for embeddings
- Powered by ChromaDB for vector storage
- Formatted with Rich for beautiful output