This project is designed to help learn AI concepts through hands-on experience with:
- Local LLMs: Integration with Ollama and Llama 2 (no API costs!)
- Conversation Management: Understanding how LLMs maintain context
- Parameter Tuning: Learning how temperature, top_p, top_k affect responses
- Async Programming: Using async/await for better performance
```
ai-learning-project/
├── Makefile                  # Simple commands for project management
├── requirements.txt          # Just one dependency: aiohttp
├── .gitignore                # Security and cleanup
├── README.md                 # This file
├── PROJECT_JOURNAL.md        # Learning notes and progress
├── src/
│   ├── __init__.py
│   └── llm/
│       ├── __init__.py
│       └── ollama_client.py  # Simple Ollama client
└── examples/
    ├── basic_llm_demo.py     # Basic LLM functionality demo
    └── conversation_test.py  # 3-question conversation demo
```
Note: This is the current simplified structure. We may add more features in future phases (MCP protocol, AI agents, etc.) but for now, this minimal structure is perfect for learning the core LLM concepts.
- Set up Ollama with Llama 2 locally
- Understand message roles (system, user, assistant)
- Implement basic chat functionality with a `chat()` method
- Implement simple Q&A with an `ask()` method
- Learn about different model parameters (temperature, top_p, top_k)
- Understand conversation context and message objects
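The roles and context above can be sketched as plain message dicts. Maintaining conversation context simply means carrying the growing message list forward with every request (a minimal sketch for illustration, not the project's actual `OllamaClient` code):

```python
# Conversation context as a list of message dicts. Each message has a
# role (system, user, or assistant) and its text content.

def make_conversation(system_prompt):
    """Start a conversation with a system message that sets behavior."""
    return [{"role": "system", "content": system_prompt}]

def add_user_message(messages, text):
    messages.append({"role": "user", "content": text})
    return messages

def add_assistant_message(messages, text):
    # Appending the model's reply is what preserves context: the full
    # list is re-sent to the model with every new request.
    messages.append({"role": "assistant", "content": text})
    return messages

conv = make_conversation("You are a concise tutor.")
add_user_message(conv, "What is a token?")
add_assistant_message(conv, "A token is a chunk of text the model processes.")
add_user_message(conv, "Give an example.")  # the model sees all prior turns
```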
- Understand MCP protocol concepts
- Build MCP server with custom tools
- Create MCP client for tool calling
- Implement file operations and code analysis tools
- Design agent architecture
- Implement tool-calling capabilities
- Build task planning and execution
- Create code assistant agent
- Add web search capabilities
- Implement conversation memory
- Build user interface
- Add error handling and logging
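These later phases aren't implemented yet, but as a rough preview, tool calling usually boils down to a registry mapping tool names to functions, plus a dispatcher that executes the model's structured output (all names here are hypothetical sketches, not part of the current codebase):

```python
# Hypothetical preview of the tool-calling loop planned for later phases:
# tools register themselves by name, and the agent dispatches the model's
# structured (JSON) output to the matching function.
import json

TOOLS = {}

def tool(fn):
    """Decorator: register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(text: str) -> int:
    """Example tool: count the words in a string."""
    return len(text.split())

def dispatch(tool_call_json: str):
    """Run a call like '{"tool": "word_count", "args": {"text": "..."}}'."""
    call = json.loads(tool_call_json)
    return TOOLS[call["tool"]](**call["args"])

result = dispatch('{"tool": "word_count", "args": {"text": "hello agent world"}}')
```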
- Install Ollama:

  ```bash
  # Install Ollama (if not already installed)
  curl -fsSL https://ollama.ai/install.sh | sh

  # Start Ollama server
  ollama serve

  # Download Llama 2 model
  ollama pull llama2
  ```

- Setup Project:

  ```bash
  cd /Users/alvarovilaplana/projects/ai-learning-project
  make setup
  ```

- Run Examples:

  ```bash
  make basic-demo         # Basic LLM functionality
  make conversation-demo  # 3-question conversation test
  ```
- Message Roles: Understanding system, user, and assistant roles
- Conversation Context: How LLMs maintain context across multiple exchanges
- Parameter Effects: How temperature, top_p, top_k affect response quality
- Token Management: Understanding max_tokens and response length
- Async Programming: Using async/await for better API performance
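For reference, Ollama exposes these knobs through an `options` object in its request payload. A sketch of two contrasting presets (the parameter values are illustrative starting points, not recommendations from this project):

```python
# Two illustrative parameter presets for Ollama's "options" payload field.
# Lower temperature/top_p/top_k -> more deterministic; higher -> more varied.

FACTUAL = {
    "temperature": 0.2,  # near-greedy sampling for consistent answers
    "top_p": 0.5,        # restrict nucleus sampling to the likeliest tokens
    "top_k": 20,         # consider only the 20 most likely tokens
    "num_predict": 128,  # cap response length in tokens (max_tokens analogue)
}

CREATIVE = {
    "temperature": 0.9,
    "top_p": 0.95,
    "top_k": 80,
    "num_predict": 512,
}

def build_request(model, messages, options):
    """Assemble a request body in the shape Ollama's /api/chat expects."""
    return {"model": model, "messages": messages,
            "stream": False, "options": options}
```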
- Cost: $0 vs $0.01-0.10 per request with cloud APIs
- Privacy: Data never leaves your machine
- Offline: Works without internet connection
- Control: Full control over model parameters and behavior
- MCP Protocol: Model Context Protocol for structured AI interactions
- AI Agents: Building intelligent agents with tool-calling capabilities
- Tool Integration: Connecting LLMs to external tools and APIs
Phase 1 Goals (✅ ACHIEVED):
- ✅ Understand how to integrate local LLMs into applications
- ✅ Know how to manage conversation context with message objects
- ✅ Be able to tune LLM parameters for different response styles
- ✅ Have a working local LLM setup with Ollama and Llama 2
- ✅ Understand async programming for LLM interactions
Future Goals (🚀 Coming Next):
- Build and use MCP servers for tool integration
- Create AI agents with tool-calling capabilities
- Build a working code assistant with development tools
- Understand best practices for AI application development
- ✅ Completed: Run through the examples and understand the code
- ✅ Completed: Read the PROJECT_JOURNAL.md for detailed learning notes
- 🚀 Next: Experiment with different models (`ollama pull <model-name>`)
- 🚀 Next: Try different parameters and see how they affect responses
- 🚀 Next: Build your own simple examples using the OllamaClient
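One simple example to try: issue several prompts concurrently. The `ask()` below is just a stand-in for a real client call (the sleep simulates network latency to Ollama), but the `asyncio.gather` pattern is what makes async LLM calls faster than sequential ones:

```python
# Sketch of why async helps: several prompts are awaited concurrently
# instead of one after another. ask() is a placeholder for a real
# OllamaClient call; asyncio.sleep() stands in for network latency.
import asyncio

async def ask(prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulated round-trip to the Ollama server
    return f"answer to: {prompt}"

async def main():
    prompts = ["What is top_k?", "What is top_p?", "What is temperature?"]
    # gather() overlaps the waits, so total time ~ one request, not three
    return await asyncio.gather(*(ask(p) for p in prompts))

answers = asyncio.run(main())
```

Swapping `gather()` for a plain `for` loop with `await` makes the same code run three times slower, which is a quick way to see the benefit for yourself.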
- Check the examples in the `examples/` folder
- Review the PROJECT_JOURNAL.md for detailed learning notes
- Use `make help` to see available commands
- Modify the code and experiment - it's designed to be simple!
Phase 1 Complete! You now have:
- ✅ A working local LLM setup with Ollama and Llama 2
- ✅ Understanding of message roles and conversation context
- ✅ Knowledge of LLM parameters and their effects
- ✅ Experience with async programming for LLM interactions
- ✅ A simple, clean codebase to build upon
Remember: This is a learning project. Phase 1 is complete - you've learned the fundamentals! Now you can experiment, try different models, and prepare for the next phases. 🚀