avilaplana/ai-learning-project


AI Learning Project: Local LLM Integration

🎯 Project Overview

This project is designed to help you learn AI concepts through hands-on experience with:

  • Local LLMs: Integration with Ollama and Llama 2 (no API costs!)
  • Conversation Management: Understanding how LLMs maintain context
  • Parameter Tuning: Learning how temperature, top_p, top_k affect responses
  • Async Programming: Using async/await for better performance

🏗️ Project Structure

ai-learning-project/
├── Makefile                    # Simple commands for project management
├── requirements.txt            # Just one dependency: aiohttp
├── .gitignore                  # Security and cleanup
├── README.md                   # This file
├── PROJECT_JOURNAL.md          # Learning notes and progress
├── src/
│   ├── __init__.py
│   └── llm/
│       ├── __init__.py
│       └── ollama_client.py    # Simple Ollama client
└── examples/
    ├── basic_llm_demo.py       # Basic LLM functionality demo
    └── conversation_test.py    # 3-question conversation demo

Note: This is the current simplified structure. We may add more features in future phases (MCP protocol, AI agents, etc.) but for now, this minimal structure is perfect for learning the core LLM concepts.

🚀 Learning Path

Phase 1: LLM Integration ✅ COMPLETED

  • Set up Ollama with Llama 2 locally
  • Understand message roles (system, user, assistant)
  • Implement basic chat functionality with chat() method
  • Implement simple Q&A with ask() method
  • Learn about different model parameters (temperature, top_p, top_k)
  • Understand conversation context and message objects
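The chat() flow above can be sketched as a standalone async call against Ollama's `/api/chat` endpoint. This is a minimal sketch, not the project's actual `OllamaClient`; it assumes Ollama's default port (11434) and the documented non-streaming JSON response shape.

```python
import asyncio

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint


def build_payload(model, messages, temperature=0.7, top_p=0.9, top_k=40):
    """Assemble a non-streaming /api/chat request body."""
    return {
        "model": model,
        "messages": messages,
        "stream": False,
        "options": {"temperature": temperature, "top_p": top_p, "top_k": top_k},
    }


async def chat(model, messages, **options):
    """POST one chat request and return the assistant's reply text."""
    import aiohttp  # the project's single dependency, imported lazily here

    payload = build_payload(model, messages, **options)
    async with aiohttp.ClientSession() as session:
        async with session.post(OLLAMA_URL, json=payload) as resp:
            data = await resp.json()
            return data["message"]["content"]


# Example (requires `ollama serve` running with llama2 pulled):
#   answer = asyncio.run(chat("llama2", [{"role": "user", "content": "Hi!"}]))
```

Keeping payload construction separate from the HTTP call makes the request shape easy to inspect and test without a running server.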

Phase 2: MCP Protocol (Future)

  • Understand MCP protocol concepts
  • Build MCP server with custom tools
  • Create MCP client for tool calling
  • Implement file operations and code analysis tools

Phase 3: AI Agents (Future)

  • Design agent architecture
  • Implement tool-calling capabilities
  • Build task planning and execution
  • Create code assistant agent

Phase 4: Advanced Features (Future)

  • Add web search capabilities
  • Implement conversation memory
  • Build user interface
  • Add error handling and logging

🛠️ Setup Instructions

  1. Install Ollama:

    # Install Ollama (if not already installed)
    curl -fsSL https://ollama.ai/install.sh | sh
    
    # Start Ollama server
    ollama serve
    
    # Download Llama 2 model
    ollama pull llama2
  2. Setup Project:

    cd /Users/alvarovilaplana/projects/ai-learning-project
    make setup
  3. Run Examples:

    make basic-demo        # Basic LLM functionality
    make conversation-demo # 3-question conversation test

📚 Key Learning Concepts

LLM Concepts (✅ Learned)

  • Message Roles: Understanding system, user, and assistant roles
  • Conversation Context: How LLMs maintain context across multiple exchanges
  • Parameter Effects: How temperature, top_p, top_k affect response quality
  • Token Management: Understanding max_tokens and response length
  • Async Programming: Using async/await for better API performance
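The conversation-context concept is worth seeing concretely: the model has no memory between calls, so the client resends the entire message list on every request. A minimal sketch (the helper names are illustrative, not taken from the project's code):

```python
# The model "remembers" earlier turns only because the full message list,
# using the system/user/assistant roles, is resent with every request.

def start_conversation(system_prompt):
    """Begin a conversation with a system message that sets behaviour."""
    return [{"role": "system", "content": system_prompt}]


def add_turn(messages, user_text, assistant_text):
    """Record one user/assistant exchange so later calls keep the context."""
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
    return messages


messages = start_conversation("You are a concise tutor.")
add_turn(messages, "What is a token?",
         "A token is a chunk of text the model processes.")
messages.append({"role": "user", "content": "Give me an example."})

# The whole history is what gets sent to the model each time:
print([m["role"] for m in messages])
# → ['system', 'user', 'assistant', 'user']
```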

Local LLM Benefits (✅ Experienced)

  • Cost: free, versus roughly $0.01-0.10 per request with cloud APIs
  • Privacy: Data never leaves your machine
  • Offline: Works without internet connection
  • Control: Full control over model parameters and behavior

Future Concepts (🚀 Coming Next)

  • MCP Protocol: Model Context Protocol for structured AI interactions
  • AI Agents: Building intelligent agents with tool-calling capabilities
  • Tool Integration: Connecting LLMs to external tools and APIs

🎯 Project Goals

Phase 1 Goals (✅ ACHIEVED):

  1. ✅ Understand how to integrate local LLMs into applications
  2. ✅ Know how to manage conversation context with message objects
  3. ✅ Be able to tune LLM parameters for different response styles
  4. ✅ Have a working local LLM setup with Ollama and Llama 2
  5. ✅ Understand async programming for LLM interactions

Future Goals (🚀 Coming Next):

  1. Build and use MCP servers for tool integration
  2. Create AI agents with tool-calling capabilities
  3. Build a working code assistant with development tools
  4. Understand best practices for AI application development

📝 Next Steps

  1. ✅ Completed: Run through the examples and understand the code
  2. ✅ Completed: Read the PROJECT_JOURNAL.md for detailed learning notes
  3. 🚀 Next: Experiment with different models (ollama pull <model-name>)
  4. 🚀 Next: Try different parameters and see how they affect responses
  5. 🚀 Next: Build your own simple examples using the OllamaClient
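For step 4, one way to experiment systematically is to group parameter settings into named presets and compare outputs side by side. The preset names and values below are illustrative starting points, not recommendations from the project:

```python
# Hypothetical presets (the names are illustrative) showing how the three
# sampling parameters trade determinism against variety.
PRESETS = {
    # near-deterministic: low temperature, tight nucleus sampling
    "precise":  {"temperature": 0.2, "top_p": 0.5,  "top_k": 20},
    # middle-of-the-road defaults
    "balanced": {"temperature": 0.7, "top_p": 0.9,  "top_k": 40},
    # more adventurous sampling, wider candidate pool
    "creative": {"temperature": 1.2, "top_p": 0.95, "top_k": 100},
}


def options_for(style):
    """Look up a preset, falling back to 'balanced' for unknown names."""
    return PRESETS.get(style, PRESETS["balanced"])


print(options_for("precise")["temperature"])  # → 0.2
```

Running the same prompt once per preset and diffing the answers is a quick way to build intuition for what each parameter does.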

📚 Resources and Documentation

🤝 Getting Help

  • Check the examples in the examples/ folder
  • Review the PROJECT_JOURNAL.md for detailed learning notes
  • Use make help to see available commands
  • Modify the code and experiment - it's designed to be simple!

🎉 What You've Accomplished

Phase 1 Complete! You now have:

  • ✅ A working local LLM setup with Ollama and Llama 2
  • ✅ Understanding of message roles and conversation context
  • ✅ Knowledge of LLM parameters and their effects
  • ✅ Experience with async programming for LLM interactions
  • ✅ A simple, clean codebase to build upon

Remember: This is a learning project. Phase 1 is complete - you've learned the fundamentals! Now you can experiment, try different models, and prepare for the next phases. 🚀

About

AI playground to build knowledge
