
Recipe AI Engine 🍳

Python 3.10+ · MIT License · Ollama

A professional, self-hosted AI service for generating cooking recipes based on available ingredients. Built with Python, Ollama, and modern AI models. Perfect for developers, home cooks, and anyone who wants to create recipes from whatever ingredients they have on hand.

🌟 Why Recipe AI Engine?

  • 🔒 Privacy First: Your data stays on your machine - no cloud dependencies
  • 🚀 Easy Setup: Simple installation with Ollama integration
  • 🎯 Smart Recipes: AI generates recipes that actually use your ingredients
  • ⚡ Fast & Reliable: Local processing means no network delays
  • 🔧 Developer Friendly: Clean API, comprehensive tests, professional structure
  • 🌍 Open Source: MIT licensed, community-driven development

🚀 Features

  • 🤖 AI-Powered Recipe Generation: Create recipes from any ingredient list with intelligent ingredient utilization
  • 🔒 Self-Hosted: Complete privacy and control over your data - no cloud dependencies
  • 🎯 Smart Ingredient Usage: AI ensures all provided ingredients are used in the recipe
  • 🌍 Multi-Cuisine Support: Generate recipes for various cuisines and dietary preferences
  • ⚙️ Professional Structure: Follows Python conventions and best practices
  • 🔧 Easy Integration: Simple API for building applications and services
  • 📊 Quality Analysis: Built-in testing and quality assessment for recipe generation
  • 🌡️ Configurable AI: Adjust temperature, creativity, and other AI parameters
  • 📝 Structured Output: Consistent JSON-formatted recipe responses

📋 Prerequisites

  • Python 3.10+ - Modern Python with type hints and async support
  • Ollama - Local AI model serving platform
  • llama2:7b model - Downloaded and ready in Ollama
  • Git - For cloning and version control
  • Make - For build automation (optional, but recommended)

🛠️ Installation

Quick Start (Recommended)

# Clone the repository
git clone https://github.com/jonbarlo/recipe-ai-engine.git
cd recipe-ai-engine

# Install all requirements and package
make install

# Setup the fine-tuned recipe model (optional)
make setup-model

# Test the installation
make test

Manual Installation

# Clone the repository
git clone https://github.com/jonbarlo/recipe-ai-engine.git
cd recipe-ai-engine

# Install all requirements
pip install -r requirements.txt

# Install package in development mode
pip install -e .

# Setup the recipe model (optional)
python scripts/setup_fine_tuned_model.py

Ollama Setup

# Install Ollama (if not already installed)
# Visit: https://ollama.ai/download

# Download the base model
ollama pull llama2:7b

# Verify installation
ollama list

Using a Fine-Tuned Model

If you have a locally fine-tuned model directory, point the setup script to it:

set FINE_TUNED_MODEL_PATH=./recipe-model
python scripts/setup_fine_tuned_model.py

If FINE_TUNED_MODEL_PATH (or ./recipe-model) is not found, the script will fall back to llama2:7b.
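
The fallback behaves roughly like this sketch (variable names are illustrative, not the script's actual code):

# Illustrative sketch of the model-path fallback described above.
import os
from pathlib import Path

model_path = os.environ.get("FINE_TUNED_MODEL_PATH", "./recipe-model")
if Path(model_path).exists():
    print(f"Using fine-tuned model from {model_path}")
else:
    print("Fine-tuned model not found; falling back to llama2:7b")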

🧪 Testing

# Run all tests
make test

# Test recipe generation
make test-generation

# Test recipe quality
make test-quality

# Test environment variables
python scripts/test_env.py

🧩 Training / Fine-tuning (Optional)

For on-device fine-tuning using Hugging Face + QLoRA:

Quick Setup

# Install training requirements (GPU recommended)
make install-train-gpu

# Check GPU availability
make check-gpu

# Login to Hugging Face (for gated models like Llama 2)
make hf-login

Training Commands

GPT2 Training (Fast, Low VRAM)

# Test training (1000 recipes)
make train-gpt2-gpu

# Full training (complete dataset)
make train-gpt2-full-gpu

Mistral Training (Good Balance)

# Test training (1000 recipes)
make train-mistral-gpu

# Full training (complete dataset)
make train-mistral-full-gpu

Llama 2 Training (High Quality)

# Test training (1000 recipes)
make train-llama-gpu

# Full training (complete dataset)
make train-llama-full-gpu

CPU Training (Slower but Works Everywhere)

# Install CPU requirements
make install-train-cpu

# Train on CPU
make train-gpt2-cpu
make train-mistral-cpu
make train-llama-cpu

Manual Training

# Install training requirements
pip install -r requirements-train.txt --index-url https://download.pytorch.org/whl/cu121

# Run training
python scripts/train.py \
  --dataset datasets/recipe_dataset_1000.json \
  --model meta-llama/Llama-2-7b-hf \
  --output ./recipe-model \
  --epochs 1 \
  --batch-size 2 \
  --test
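
scripts/train.py handles the details; for orientation, a typical QLoRA setup with Hugging Face transformers + peft looks roughly like this (hyperparameters and target modules are illustrative, not necessarily the script's actual values):

# Typical QLoRA configuration sketch - illustrative, not the project's exact code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",              # gated model; requires make hf-login
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # low-rank adapter hyperparameters
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # only the adapter weights are trainable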

Test Results

The test suite includes:

  • Generation Tests: Verify package imports and basic recipe generation functionality
  • Quality Tests: Evaluate AI model performance with edge cases and recipe quality
  • Quality Analysis: Assess recipe generation quality and consistency
  • Environment Tests: Verify configuration system

🍽️ Usage Examples

Quick Recipe Generation

# Generate a recipe with sample ingredients
make recipe-quick

Interactive Recipe Generation

# Start interactive mode
make recipe
# Enter ingredients when prompted

Command Line Usage

# Generate a recipe with specific ingredients
python -c "
from recipe_ai_engine import RecipeRequest, RecipeGenerator
request = RecipeRequest(ingredients=['chicken', 'rice', 'vegetables'])
generator = RecipeGenerator()
recipe = generator.generate_recipe(request)
print(f'Recipe: {recipe.title}')
"

Ollama Usage

# List available models
ollama list

# Pull a model
ollama pull llama2:7b

# Run a model with a custom prompt
ollama run llama2:7b "Generate a simple recipe with chicken, rice, and vegetables in JSON format"

# Run a model with a custom prompt and JSON output format
ollama run llama2:7b "Generate a simple recipe with chicken, rice, and vegetables in JSON format" --format json

Ollama custom model

# Create a custom model from a Modelfile (see the example below)
ollama create recipe-ai:latest -f Modelfile

# Remove the custom model
ollama rm recipe-ai
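
ollama create builds the model from a Modelfile that names the base model and sets default parameters. A minimal illustrative Modelfile (the system prompt and parameter value are assumptions, not the project's actual file):

# Modelfile (illustrative)
FROM llama2:7b
PARAMETER temperature 0.3
SYSTEM You are a cooking assistant that returns recipes as JSON.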

Python API Usage

from recipe_ai_engine import RecipeRequest, RecipeGenerator

# Create a recipe request
request = RecipeRequest(
    ingredients=["chicken breast", "rice", "vegetables"],
    cuisine="Asian",
    servings=4,
    difficulty="medium"
)

# Generate the recipe
generator = RecipeGenerator()
recipe = generator.generate_recipe(request)

# Access recipe details
print(f"Recipe: {recipe.title}")
print(f"Cooking Time: {recipe.cooking_time}")
print(f"Difficulty: {recipe.difficulty}")
print(f"Servings: {recipe.servings}")

# Print ingredients
for ingredient in recipe.ingredients:
    print(f"- {ingredient['item']}: {ingredient['amount']}")

# Print instructions
for i, step in enumerate(recipe.instructions, 1):
    print(f"{i}. {step}")

Web API (FastAPI)

Start the API server:

# Production-like (no reload)
make api

# Development (auto-reload on code changes)
make api-dev

Interactive docs will be available at http://localhost:8000/docs.

Endpoints:

  • GET /health
  • POST /recipes/generate

Sample request

Using curl:

curl -X POST http://localhost:8000/recipes/generate \
  -H "Content-Type: application/json" \
  -d '{
        "ingredients": ["chicken", "rice", "vegetables"],
        "cuisine_type": "Asian",
        "difficulty_level": "medium",
        "serving_size": 2
      }'

Using Python:

import requests

payload = {
  "ingredients": ["chicken", "rice", "vegetables"],
  "cuisine_type": "Asian",
  "difficulty_level": "medium",
  "serving_size": 2
}

resp = requests.post("http://localhost:8000/recipes/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
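
The response body mirrors the recipe model used by the Python API; an illustrative (not verbatim) example of the shape:

{
  "title": "Asian Chicken and Rice Stir-Fry",
  "ingredients": [
    {"item": "chicken", "amount": "250 g"},
    {"item": "rice", "amount": "1 cup"},
    {"item": "mixed vegetables", "amount": "2 cups"}
  ],
  "instructions": [
    "Cook the rice according to package directions.",
    "Stir-fry the chicken, then add the vegetables.",
    "Serve the stir-fry over the rice."
  ],
  "cooking_time": "30 minutes",
  "difficulty": "medium",
  "servings": 2
}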

Advanced Usage

from recipe_ai_engine import RecipeRequest, RecipeGenerator
from recipe_ai_engine.ai import OllamaRecipeGenerator

# Custom AI generator with specific model
custom_generator = OllamaRecipeGenerator(model_name="recipe-ai:latest")
generator = RecipeGenerator(ai_generator=custom_generator)

# Generate multiple recipe variations
recipes = generator.generate_multiple_recipes(request, count=3)

for i, recipe in enumerate(recipes, 1):
    print(f"\nRecipe Variation {i}:")
    print(f"Title: {recipe.title}")
    print(f"Difficulty: {recipe.difficulty}")

πŸ“ Project Structure

recipe-ai-engine/
├── recipe_ai_engine/          # Main package
│   ├── core/                  # Core functionality
│   │   ├── models.py          # Pydantic models
│   │   ├── exceptions.py      # Custom exceptions
│   │   └── config.py          # Configuration management
│   ├── ai/                    # AI model components
│   │   ├── base.py            # Base AI interface
│   │   ├── ollama_client.py   # Ollama implementation
│   │   └── prompts.py         # Prompt generation
│   └── recipes/               # Recipe-specific logic
│       ├── generator.py       # Main orchestrator
│       └── validator.py       # Recipe validation
├── docs/                      # Documentation
├── tests/                     # Test suite
├── scripts/                   # Utility scripts
├── datasets/                  # Recipe datasets
├── requirements.txt           # Main dependencies
├── Makefile                   # Build and test commands
└── README.md                  # This file

🔧 Configuration

The engine uses Pydantic Settings for configuration. Key settings:

# Default configuration (can be overridden with environment variables)
ollama_base_url: str = "http://localhost:11434"
ollama_model_name: str = "llama2:7b"
temperature: float = 0.3
top_p: float = 0.9
max_tokens: int = 800
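
As a rough sketch, the settings class likely resembles the following (the class name is illustrative, not the engine's actual core/config.py):

# Illustrative sketch only - not the engine's actual config module.
from pydantic_settings import BaseSettings

class EngineSettings(BaseSettings):
    ollama_base_url: str = "http://localhost:11434"
    ollama_model_name: str = "llama2:7b"
    temperature: float = 0.3
    top_p: float = 0.9
    max_tokens: int = 800

# pydantic-settings matches environment variables to fields case-insensitively,
# so e.g. OLLAMA_MODEL_NAME overrides ollama_model_name; the AI_*-prefixed names
# in the table below would need explicit aliases (assumed here, not shown).
settings = EngineSettings()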

Environment Variables

You can override settings using environment variables:

# Set the model to use
set OLLAMA_MODEL_NAME=recipe-ai:latest

# Adjust AI parameters
set AI_TEMPERATURE=0.7
set AI_TOP_P=0.8
set AI_MAX_TOKENS=1000

# Change timeouts
set REQUEST_TIMEOUT=120
set CONNECTION_TIMEOUT=10
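
The set syntax above is for Windows; on Linux/macOS, use export instead:

# Linux/macOS equivalent
export OLLAMA_MODEL_NAME=recipe-ai:latest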

Available Environment Variables

Variable            Default                  Description
OLLAMA_MODEL_NAME   llama2:7b                Ollama model to use
OLLAMA_BASE_URL     http://localhost:11434   Ollama API base URL
AI_TEMPERATURE      0.3                      AI model temperature
AI_TOP_P            0.9                      AI model top-p parameter
AI_MAX_TOKENS       800                      Maximum tokens to generate
REQUEST_TIMEOUT     60                       Request timeout in seconds
CONNECTION_TIMEOUT  5                        Connection timeout in seconds

🧪 Available Commands

# Installation
make install        # Install all requirements and package

# Testing
make test           # Run all tests
make test-generation # Test recipe generation
make test-quality   # Test recipe quality

# Recipe Generation
make recipe         # Interactive recipe generation
make recipe-quick   # Quick recipe with sample ingredients

# Training Installation
make install-train-cpu # Install training requirements (CPU only)
make install-train-gpu # Install training requirements (GPU/CUDA)
make check-gpu      # Check GPU availability

# Training Commands
make train-gpt2-cpu/gpu     # Train GPT2 (CPU/GPU)
make train-mistral-cpu/gpu  # Train Mistral (CPU/GPU)
make train-llama-cpu/gpu    # Train Llama 2 (CPU/GPU)
make train-gpt2/mistral/llama # Default GPU training

# Development
make setup-model    # Setup fine-tuned recipe model
make hf-login       # Login to Hugging Face
make clean          # Clean up temporary files
make clean-cache    # Clean Hugging Face cache
make clean-models   # Clean trained models
make help           # Show all available commands

📊 Model Performance

The fine-tuned recipe-ai model provides:

  • Better Recipe Quality: More detailed and accurate instructions
  • Consistent Formatting: Proper ingredient and instruction structure
  • Cuisine Awareness: Better understanding of different cooking styles
  • Ingredient Utilization: More efficient use of provided ingredients

πŸ” Troubleshooting

Common Issues

  1. Ollama not running

    # Start Ollama
    ollama serve
  2. Model not found

    # Download the base model
    ollama pull llama2:7b
    
    # Setup the recipe model
    make setup-model
  3. Import errors

    # Reinstall requirements
    make install
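
To confirm the Ollama server is reachable before digging deeper, a quick check against its /api/tags endpoint (an illustrative helper, not part of the engine's codebase):

# Quick reachability check for a local Ollama server.
import requests

try:
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    print("Ollama is running; models:", [m["name"] for m in resp.json()["models"]])
except requests.RequestException as exc:
    print(f"Ollama not reachable: {exc}")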

🤝 Contributing

  1. Follow the existing code structure
  2. Add tests for new features
  3. Update documentation
  4. Follow Python conventions

📄 License

This project is licensed under the MIT License.

πŸ™ Acknowledgments

  • Built with Ollama
  • Uses Llama 2 as the base model
  • Follows Python packaging best practices

CV

Python 3, OLLAMA, LLMs (GPT2, Llama2, Mistral), Pydantic, FastAPI, Hugging Face, QLoRA, PyTorch, Transformers, CUDA, GPU, CPU, Linux, Windows, MacOS, Docker, Git, Makefile, VSCode, PyCharm, Cursor
