A professional, self-hosted AI service for generating cooking recipes based on available ingredients. Built with Python, Ollama, and modern AI models. Perfect for developers, home cooks, and anyone who wants to create recipes from whatever ingredients they have on hand.
- Privacy First: Your data stays on your machine - no cloud dependencies
- Easy Setup: Simple installation with Ollama integration
- Smart Recipes: AI generates recipes that actually use your ingredients
- Fast & Reliable: Local processing means no network delays
- Developer Friendly: Clean API, comprehensive tests, professional structure
- Open Source: MIT licensed, community-driven development
- AI-Powered Recipe Generation: Create recipes from any ingredient list with intelligent ingredient utilization
- Self-Hosted: Complete privacy and control over your data - no cloud dependencies
- Smart Ingredient Usage: AI ensures all provided ingredients are used in the recipe
- Multi-Cuisine Support: Generate recipes for various cuisines and dietary preferences
- Professional Structure: Follows Python conventions and best practices
- Easy Integration: Simple API for building applications and services
- Quality Analysis: Built-in testing and quality assessment for recipe generation
- Configurable AI: Adjust temperature, creativity, and other AI parameters
- Structured Output: Consistent JSON-formatted recipe responses (see the example below)
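A representative structured response, using the recipe fields shown in the usage examples below (the values themselves are illustrative):

```json
{
  "title": "Chicken Fried Rice",
  "cooking_time": "30 minutes",
  "difficulty": "medium",
  "servings": 4,
  "ingredients": [
    {"item": "chicken breast", "amount": "500 g"},
    {"item": "rice", "amount": "2 cups"},
    {"item": "mixed vegetables", "amount": "1 cup"}
  ],
  "instructions": [
    "Cook the rice according to package directions.",
    "Stir-fry the chicken, add the vegetables, then combine with the rice."
  ]
}
```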
- Python 3.10+ - Modern Python with type hints and async support
- Ollama - Local AI model serving platform
- llama2:7b model - Downloaded and ready in Ollama
- Git - For cloning and version control
- Make - For build automation (optional, but recommended)
```bash
# Clone the repository
git clone https://github.com/jonbarlo/recipe-ai-engine.git
cd recipe-ai-engine

# Install all requirements and package
make install

# Setup the fine-tuned recipe model (optional)
make setup-model

# Test the installation
make test
```

```bash
# Clone the repository
git clone https://github.com/jonbarlo/recipe-ai-engine.git
cd recipe-ai-engine

# Install all requirements
pip install -r requirements.txt

# Install package in development mode
pip install -e .

# Setup the recipe model (optional)
python scripts/setup_fine_tuned_model.py
```

```bash
# Install Ollama (if not already installed)
# Visit: https://ollama.ai/download

# Download the base model
ollama pull llama2:7b

# Verify installation
ollama list
```

If you have a locally fine-tuned model directory, point the setup script to it:

```bash
set FINE_TUNED_MODEL_PATH=./recipe-model
python scripts/setup_fine_tuned_model.py
```

If `FINE_TUNED_MODEL_PATH` (or `./recipe-model`) is not found, the script will fall back to `llama2:7b`.
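For reference, the fallback amounts to logic like this (an illustrative sketch, not the script's actual code):

```python
import os
from pathlib import Path

# Prefer an explicitly configured fine-tuned model, then the default local path;
# otherwise fall back to the stock Ollama base model.
model_path = os.environ.get("FINE_TUNED_MODEL_PATH", "./recipe-model")
model_ref = model_path if Path(model_path).exists() else "llama2:7b"
print(f"Using model: {model_ref}")
```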
```bash
# Run all tests
make test

# Test recipe generation
make test-generation

# Test recipe quality
make test-quality

# Test environment variables
python scripts/test_env.py
```

For on-device fine-tuning using Hugging Face + QLoRA:
```bash
# Install training requirements (GPU recommended)
make install-train-gpu

# Check GPU availability
make check-gpu

# Login to Hugging Face (for gated models like Llama 2)
make hf-login
```

GPT-2:

```bash
# Test training (1000 recipes)
make train-gpt2-gpu

# Full training (complete dataset)
make train-gpt2-full-gpu
```

Mistral:

```bash
# Test training (1000 recipes)
make train-mistral-gpu

# Full training (complete dataset)
make train-mistral-full-gpu
```

Llama 2:

```bash
# Test training (1000 recipes)
make train-llama-gpu

# Full training (complete dataset)
make train-llama-full-gpu
```

CPU-only training:

```bash
# Install CPU requirements
make install-train-cpu

# Train on CPU
make train-gpt2-cpu
make train-mistral-cpu
make train-llama-cpu
```

Manual invocation:

```bash
# Install training requirements
pip install -r requirements-train.txt --index-url https://download.pytorch.org/whl/cu121

# Run training
python scripts/train.py \
    --dataset datasets/recipe_dataset_1000.json \
    --model meta-llama/Llama-2-7b-hf \
    --output ./recipe-model \
    --epochs 1 \
    --batch-size 2 \
    --test
```
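The internals of scripts/train.py aren't shown in this README; as a rough sketch, a Hugging Face QLoRA setup typically pairs a 4-bit quantized base model with small trainable LoRA adapters (the hyperparameters and target modules below are illustrative assumptions, not the script's actual values):

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"

# Load the base model with 4-bit NF4 quantization so a 7B model fits on one GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Keep the quantized weights frozen and train only the LoRA adapter matrices
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections (illustrative)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

tokenizer = AutoTokenizer.from_pretrained(model_id)
```

The test suite includes: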
- Generation Tests: Verify package imports and basic recipe generation functionality
- Quality Tests: Evaluate AI model performance with edge cases and recipe quality
- Quality Analysis: Assess recipe generation quality and consistency
- Environment Tests: Verify configuration system
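The Make targets wrap the suite; assuming the tests under tests/ are pytest-based (the runner is not confirmed by this README), they can also be run directly:

```bash
pytest tests/ -v
```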
```bash
# Generate a recipe with sample ingredients
make recipe-quick

# Start interactive mode
make recipe
# Enter ingredients when prompted
```

```bash
# Generate a recipe with specific ingredients
python -c "
from recipe_ai_engine import RecipeRequest, RecipeGenerator
request = RecipeRequest(ingredients=['chicken', 'rice', 'vegetables'])
generator = RecipeGenerator()
recipe = generator.generate_recipe(request)
print(f'Recipe: {recipe.title}')
"
```

```bash
# List available models
ollama list

# Pull a model
ollama pull llama2:7b

# Run a model with a custom prompt
ollama run llama2:7b "Generate a simple recipe with chicken, rice, and vegetables in JSON format"
# Run a model with a custom prompt and JSON output format
ollama run llama2:7b "Generate a simple recipe with chicken, rice, and vegetables in JSON format" --format json
```
Ollama custom model:
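Custom models are built from a Modelfile; a minimal sketch (only the FROM line is required, and the SYSTEM prompt here is illustrative):

```
FROM llama2:7b
PARAMETER temperature 0.3
SYSTEM You are a recipe generator that returns recipes as structured JSON.
```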
```bash
# Create a custom model from the Modelfile above
ollama create recipe-ai:latest -f Modelfile

# Remove the custom model
ollama rm recipe-ai
```

```python
from recipe_ai_engine import RecipeRequest, RecipeGenerator
# Create a recipe request
request = RecipeRequest(
    ingredients=["chicken breast", "rice", "vegetables"],
    cuisine="Asian",
    servings=4,
    difficulty="medium"
)
# Generate the recipe
generator = RecipeGenerator()
recipe = generator.generate_recipe(request)
# Access recipe details
print(f"Recipe: {recipe.title}")
print(f"Cooking Time: {recipe.cooking_time}")
print(f"Difficulty: {recipe.difficulty}")
print(f"Servings: {recipe.servings}")
# Print ingredients
for ingredient in recipe.ingredients:
    print(f"- {ingredient['item']}: {ingredient['amount']}")

# Print instructions
for i, step in enumerate(recipe.instructions, 1):
    print(f"{i}. {step}")
```

Start the API server:

```bash
# Production-like (no reload)
make api

# Development (auto-reload on code changes)
make api-dev
```

Interactive docs will be available at http://localhost:8000/docs.
Endpoints:
- `GET /health`
- `POST /recipes/generate`
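The health endpoint can be checked with a plain GET (its response body is not documented here):

```bash
curl http://localhost:8000/health
```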
Using curl:
```bash
curl -X POST http://localhost:8000/recipes/generate \
  -H "Content-Type: application/json" \
  -d '{
    "ingredients": ["chicken", "rice", "vegetables"],
    "cuisine_type": "Asian",
    "difficulty_level": "medium",
    "serving_size": 2
  }'
```

Using Python:
```python
import requests

payload = {
    "ingredients": ["chicken", "rice", "vegetables"],
    "cuisine_type": "Asian",
    "difficulty_level": "medium",
    "serving_size": 2,
}

resp = requests.post("http://localhost:8000/recipes/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```
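For orientation, here is a minimal sketch of how the generate endpoint could be wired with FastAPI (the GenerateRequest schema and its field mapping are assumptions based on the curl payload above, not the engine's actual server code):

```python
from fastapi import FastAPI
from pydantic import BaseModel

from recipe_ai_engine import RecipeGenerator, RecipeRequest

app = FastAPI()
generator = RecipeGenerator()

class GenerateRequest(BaseModel):
    # Hypothetical API schema mirroring the curl example's field names
    ingredients: list[str]
    cuisine_type: str | None = None
    difficulty_level: str | None = None
    serving_size: int = 2

@app.post("/recipes/generate")
def generate_recipe(payload: GenerateRequest):
    # Translate the API payload into the engine's RecipeRequest model
    request = RecipeRequest(
        ingredients=payload.ingredients,
        cuisine=payload.cuisine_type,
        servings=payload.serving_size,
        difficulty=payload.difficulty_level,
    )
    return generator.generate_recipe(request)
```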
Advanced usage with a custom AI generator:

```python
from recipe_ai_engine import RecipeRequest, RecipeGenerator
from recipe_ai_engine.ai import OllamaRecipeGenerator
# Custom AI generator with specific model
custom_generator = OllamaRecipeGenerator(model_name="recipe-ai:latest")
generator = RecipeGenerator(ai_generator=custom_generator)
# Generate multiple recipe variations (reusing the request from the basic example)
recipes = generator.generate_multiple_recipes(request, count=3)
for i, recipe in enumerate(recipes, 1):
    print(f"\nRecipe Variation {i}:")
    print(f"Title: {recipe.title}")
    print(f"Difficulty: {recipe.difficulty}")
```

```
recipe-ai-engine/
├── recipe_ai_engine/        # Main package
│   ├── core/                # Core functionality
│   │   ├── models.py        # Pydantic models
│   │   ├── exceptions.py    # Custom exceptions
│   │   └── config.py        # Configuration management
│   ├── ai/                  # AI model components
│   │   ├── base.py          # Base AI interface
│   │   ├── ollama_client.py # Ollama implementation
│   │   └── prompts.py       # Prompt generation
│   └── recipes/             # Recipe-specific logic
│       ├── generator.py     # Main orchestrator
│       └── validator.py     # Recipe validation
├── docs/                    # Documentation
├── tests/                   # Test suite
├── scripts/                 # Utility scripts
├── datasets/                # Recipe datasets
├── requirements.txt         # Main dependencies
├── Makefile                 # Build and test commands
└── README.md                # This file
```
The engine uses Pydantic Settings for configuration. Key settings:
```python
# Default configuration (can be overridden with environment variables)
ollama_base_url: str = "http://localhost:11434"
ollama_model_name: str = "llama2:7b"
temperature: float = 0.3
top_p: float = 0.9
max_tokens: int = 800
```
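For context, a minimal sketch of how such a settings class might look with pydantic-settings (the class name and alias wiring are assumptions, not the package's actual code):

```python
from pydantic import Field
from pydantic_settings import BaseSettings

class EngineSettings(BaseSettings):
    # Each field uses its default unless the named environment variable is set
    ollama_base_url: str = Field("http://localhost:11434", validation_alias="OLLAMA_BASE_URL")
    ollama_model_name: str = Field("llama2:7b", validation_alias="OLLAMA_MODEL_NAME")
    temperature: float = Field(0.3, validation_alias="AI_TEMPERATURE")
    top_p: float = Field(0.9, validation_alias="AI_TOP_P")
    max_tokens: int = Field(800, validation_alias="AI_MAX_TOKENS")

settings = EngineSettings()  # environment overrides are applied at instantiation
```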
You can override settings using environment variables:

```bash
# Set the model to use
set OLLAMA_MODEL_NAME=recipe-ai:latest

# Adjust AI parameters
set AI_TEMPERATURE=0.7
set AI_TOP_P=0.8
set AI_MAX_TOKENS=1000

# Change timeouts
set REQUEST_TIMEOUT=120
set CONNECTION_TIMEOUT=10
```

| Variable | Default | Description |
|---|---|---|
| `OLLAMA_MODEL_NAME` | `llama2:7b` | Ollama model to use |
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama API base URL |
| `AI_TEMPERATURE` | `0.3` | AI model temperature |
| `AI_TOP_P` | `0.9` | AI model top-p parameter |
| `AI_MAX_TOKENS` | `800` | Maximum tokens to generate |
| `REQUEST_TIMEOUT` | `60` | Request timeout in seconds |
| `CONNECTION_TIMEOUT` | `5` | Connection timeout in seconds |
```bash
# Installation
make install             # Install all requirements and package

# Testing
make test                # Run all tests
make test-generation     # Test recipe generation
make test-quality        # Test recipe quality

# Recipe Generation
make recipe              # Interactive recipe generation
make recipe-quick        # Quick recipe with sample ingredients

# Training Installation
make install-train-cpu   # Install training requirements (CPU only)
make install-train-gpu   # Install training requirements (GPU/CUDA)
make check-gpu           # Check GPU availability

# Training Commands
make train-gpt2-cpu/gpu       # Train GPT-2 (CPU/GPU)
make train-mistral-cpu/gpu    # Train Mistral (CPU/GPU)
make train-llama-cpu/gpu      # Train Llama 2 (CPU/GPU)
make train-gpt2/mistral/llama # Default GPU training

# Development
make setup-model   # Setup fine-tuned recipe model
make hf-login      # Login to Hugging Face
make clean         # Clean up temporary files
make clean-cache   # Clean Hugging Face cache
make clean-models  # Clean trained models
make help          # Show all available commands
```

The fine-tuned recipe-ai model provides:
- Better Recipe Quality: More detailed and accurate instructions
- Consistent Formatting: Proper ingredient and instruction structure
- Cuisine Awareness: Better understanding of different cooking styles
- Ingredient Utilization: More efficient use of provided ingredients
- Ollama not running

  ```bash
  # Start Ollama
  ollama serve
  ```

- Model not found

  ```bash
  # Download the base model
  ollama pull llama2:7b

  # Setup the recipe model
  make setup-model
  ```

- Import errors

  ```bash
  # Reinstall requirements
  make install-all
  ```
- Follow the existing code structure
- Add tests for new features
- Update documentation
- Follow Python conventions
This project is licensed under the MIT License.
Python 3, Ollama, LLMs (GPT-2, Llama 2, Mistral), Pydantic, FastAPI, Hugging Face, QLoRA, PyTorch, Transformers, CUDA, GPU, CPU, Linux, Windows, macOS, Docker, Git, Makefile, VSCode, PyCharm, Cursor