
OpenAI Chatbot using Python

A Python-based conversational AI chat application that integrates with OpenAI models or OpenAI-compatible local LLM services. It supports interactive, multi-turn conversations while preserving the full conversation history.


Table of Contents

  • Features
  • Prerequisites
  • Installation
  • Configuration
  • Usage
  • Project Structure
  • Advanced Configuration
  • Troubleshooting
  • Contributing
  • License
  • Support & Contact
  • Changelog
  • Acknowledgments

Features

✨ Core Features:

  • 💬 Interactive conversational CLI interface with persistent message history
  • 🔌 Flexible LLM backend support (OpenAI, local models, or compatible endpoints)
  • ⚙️ Highly configurable model parameters (temperature, max tokens, etc.)
  • 🛡️ Secure API key management with environment variable support
  • 📝 Conversation context preservation for multi-turn interactions
  • 🚀 Simple and intuitive command-line interface

Prerequisites

Before you begin, ensure you have the following installed on your system:

  • Python: Version 3.8 or higher
  • pip: Python package manager (comes with Python)
  • Git: For cloning the repository (optional)

Optional Requirements

Depending on your LLM backend choice:

  • Local LLM Service: If using local models, ensure a compatible LLM service is running (e.g., LM Studio, Ollama, or similar)
  • OpenAI API Key: If using OpenAI's API directly instead of a local service

Installation

1. Clone the Repository

git clone https://github.com/sudipta-chaudhari/ai-chat.git
cd ai-chat

2. Create a Virtual Environment (Recommended)

On Windows:

python -m venv venv
venv\Scripts\activate

On macOS/Linux:

python -m venv venv
source venv/bin/activate

3. Install Dependencies

pip install -e .

Or install dependencies manually:

pip install "openai>=1.3.0"

Configuration

The project uses a Settings class to manage all LLM configuration parameters. Configuration is defined in src/settings.py.

Required Parameters

The Settings class requires all 5 parameters to be provided during initialization (no defaults):

Parameter      Description            Type     Constraints
base_url       API endpoint URL       str      Must start with http or https
api_key        Authentication key     str      Any string (local models may use "not needed")
model          Model identifier       str      Model name/ID
temperature    Response creativity    float    0.0 - 1.0
max_tokens     Max response tokens    int      Positive integer
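
For reference, here is a minimal sketch of what the Settings class in src/settings.py might look like. The names and constraints follow the table above, but the validation details are assumptions rather than the project's actual implementation:

# Hypothetical sketch of src/settings.py; the real class may differ.
class Settings:
    def __init__(self, base_url: str, api_key: str, model: str,
                 temperature: float, max_tokens: int):
        # Assigning here routes base_url and temperature through the
        # validating property setters defined below.
        self.base_url = base_url
        self.api_key = api_key
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens

    @property
    def base_url(self) -> str:
        return self._base_url

    @base_url.setter
    def base_url(self, value: str) -> None:
        if not value.startswith(("http://", "https://")):
            raise ValueError("base_url must start with http or https")
        self._base_url = value

    @property
    def temperature(self) -> float:
        return self._temperature

    @temperature.setter
    def temperature(self, value: float) -> None:
        if not 0.0 <= value <= 1.0:
            raise ValueError("temperature must be between 0.0 and 1.0")
        self._temperature = value

    # Setters for api_key, model, and max_tokens omitted for brevity.

Because __init__ assigns through the same properties, invalid values raise immediately, both at construction time and when later using the setters (for example, settings.temperature = 1.5 would raise ValueError).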

Customizing Configuration

Configure settings in main.py when creating the Settings instance:

settings = Settings(
    base_url="http://127.0.0.1:1234/v1",
    api_key="not needed",
    model="liquid/lfm2.5-1.2b",
    temperature=0.7,
    max_tokens=512
)

All configuration values can be modified after creation using property setters:

settings.temperature = 0.9  # Adjust creativity
settings.max_tokens = 1024  # Increase response length

Using Environment Variables (Recommended for Production)

For better security, especially for API keys, use environment variables in main.py:

import os

settings = Settings(
    base_url=os.getenv("LLM_BASE_URL", "http://127.0.0.1:1234/v1"),
    api_key=os.getenv("LLM_API_KEY", "not needed"),
    model=os.getenv("LLM_MODEL", "liquid/lfm2.5-1.2b"),
    temperature=float(os.getenv("LLM_TEMPERATURE", "0.7")),
    max_tokens=int(os.getenv("LLM_MAX_TOKENS", "512"))
)

Then set environment variables:

On Windows (PowerShell):

$env:LLM_API_KEY = "your-api-key-here"
$env:LLM_BASE_URL = "https://api.openai.com/v1"
$env:LLM_MODEL = "gpt-3.5-turbo"

On macOS/Linux (Bash):

export LLM_API_KEY="your-api-key-here"
export LLM_BASE_URL="https://api.openai.com/v1"
export LLM_MODEL="gpt-3.5-turbo"

Usage

Running the Application

python main.py

Example Interaction

🤖 Chat initialized with model: liquid/lfm2.5-1.2b
Type 'exit' to quit, 'clear' to clear conversation history
--------------------------------------------------

You: What is Machine Learning? Explain in one sentence. 
Assistant: Machine learning is a subset of artificial intelligence that enables computers to learn patterns from data without being explicitly programmed.

You: Write one more sentence.
Assistant: Machine learning empowers systems to improve their performance on tasks through experience and data analysis.

You: exit
Goodbye!

Commands

Command              Action
Type any question    Send the message to the chat
clear                Clear the conversation history
exit                 Quit the application
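
These commands correspond to a simple read-eval-print loop. For illustration, here is a self-contained equivalent written directly against the OpenAI SDK; the actual main.py delegates to ChatClient and ChatSession instead, so treat this as a sketch rather than the project's code:

# Standalone sketch of the chat loop (the real main.py uses ChatClient/ChatSession).
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="not needed")
history = []  # the growing message list is what preserves multi-turn context

while True:
    user_input = input("You: ").strip()
    if user_input.lower() == "exit":
        print("Goodbye!")
        break
    if user_input.lower() == "clear":
        history.clear()  # history wiped; the next turn starts fresh
        continue
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="liquid/lfm2.5-1.2b",
        messages=history,
        temperature=0.7,
        max_tokens=512,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(f"Assistant: {reply}")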

Project Structure

ai-chat/
├── main.py                          # Application entry point
├── pyproject.toml                   # Project metadata and dependencies
├── README.md                        # Project documentation (this file)
├── src/
│   ├── __init__.py                 # Package initialization
│   ├── settings.py                 # Configuration settings (Settings class)
│   └── chat/
│       ├── __init__.py             # Package initialization
│       ├── chat_client.py          # OpenAI client wrapper
│       └── chat_session.py         # Conversation history management
└── openai_chat.egg-info/           # Package metadata (generated)

File Descriptions

File                        Purpose
main.py                     Creates the Settings instance and runs the chat loop
pyproject.toml              Project metadata and dependencies
src/settings.py             Settings class holding all LLM configuration parameters
src/chat/chat_client.py     Wrapper around the OpenAI client
src/chat/chat_session.py    Stores and manages the conversation history
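
To make the chat_session.py role concrete, here is a minimal hypothetical sketch of a history manager; the actual class may differ:

# Hypothetical sketch of src/chat/chat_session.py.
class ChatSession:
    """Keeps the running message list that gives the model multi-turn context."""

    def __init__(self) -> None:
        self.messages = []  # list of {"role": ..., "content": ...} dicts

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def clear(self) -> None:
        self.messages.clear()
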
Advanced Configuration

Setting Up with OpenAI's API

To use OpenAI's official API instead of a local service:

  1. Create an OpenAI account at https://openai.com
  2. Generate an API key from your account dashboard
  3. Configure settings in main.py when creating the Settings instance:
settings = Settings(
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here",  # IMPORTANT: For production, use environment variables as shown above!
    model="gpt-3.5-turbo",  # or "gpt-4"
    temperature=0.7,
    max_tokens=512
)
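
Before chatting, you can sanity-check the key and endpoint by listing the models your account can access:

import os
from openai import OpenAI

# Assumes LLM_API_KEY was exported as shown in the Configuration section.
client = OpenAI(api_key=os.environ["LLM_API_KEY"])
for model in client.models.list():
    print(model.id)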

Setting Up with Local LLM Services

Using LM Studio

  1. Download from https://lmstudio.ai
  2. Download a model in the LM Studio interface
  3. Start the local server (usually runs on http://127.0.0.1:1234)
  4. Use the example configuration shown above; the base_url http://127.0.0.1:1234/v1 matches LM Studio's default server.

Using Ollama

  1. Install from https://ollama.ai
  2. Pull a model: ollama pull llama2
  3. Models run on http://127.0.0.1:11434 by default
  4. Point the Settings values at Ollama in main.py:
# Example of overriding in main.py, before initializing ChatClient:
settings.base_url = "http://127.0.0.1:11434/v1"
settings.model = "llama2"
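
To confirm the endpoint is reachable (recent Ollama versions expose this OpenAI-compatible API), query it the same way the troubleshooting section suggests for LM Studio:

curl http://127.0.0.1:11434/v1/models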

Optimizing for Different Use Cases

You can tune the model's behavior by adjusting the temperature and max_tokens settings. Set them when creating the Settings instance in main.py, or change them afterwards through the property setters, before initializing the ChatClient.

For Factual/Precise Responses:

# In main.py, before initializing ChatClient:
settings.temperature = 0.1  # Lower temperature = more deterministic/focused
settings.max_tokens = 256   # Shorter responses for conciseness

For Creative Responses:

# In main.py, before initializing ChatClient:
settings.temperature = 0.9  # Higher temperature = more creative/varied
settings.max_tokens = 1024  # Longer responses for richer content

For Balanced Performance (Default):

# The values used in the example configuration above:
settings.temperature = 0.7  # Moderate creativity and consistency
settings.max_tokens = 512   # Balanced response length

Troubleshooting

Issue: "Connection refused" error

Cause: LLM service is not running or URL is incorrect

Solution:

  1. Verify the LLM service is running on the configured URL
  2. Check the base_url value passed to Settings in main.py
  3. Test the endpoint with: curl http://127.0.0.1:1234/v1/models

Issue: "Authentication failed" error

Cause: Invalid or missing API key

Solution:

  1. For local models: pass api_key="not needed" (any placeholder string works) when creating Settings
  2. For OpenAI: Verify your API key is correct and active
  3. Check for typos or extra whitespace in the key

Issue: Slow responses or timeouts

Cause: The model is compute-intensive, or there are network issues

Solution:

  1. Reduce max_tokens so responses are shorter and finish sooner
  2. Check network connectivity
  3. Try a smaller or faster model

Issue: ImportError for 'openai' module

Cause: Dependencies not installed

Solution:

pip install -e .
# or
pip install "openai>=1.3.0"

Issue: Model not found error

Cause: Specified model is not available

Solution:

  1. Verify the model value passed to Settings matches a model your service actually provides
  2. Ensure the model is downloaded/installed on your system
  3. Check available models: ollama list or your service's model manager

Contributing

Contributions are welcome! Please follow these guidelines:

  1. Fork the repository on GitHub
  2. Create a feature branch: git checkout -b feature/your-feature-name
  3. Make your changes with clear, descriptive commits
  4. Test thoroughly before submitting
  5. Push to your fork: git push origin feature/your-feature-name
  6. Submit a Pull Request with a description of your changes

Code Style

  • Follow PEP 8 guidelines
  • Use meaningful variable and function names
  • Add docstrings to functions
  • Keep functions focused and single-purpose

License

This project is open source and available under the MIT License. See the LICENSE file for details.


Support & Contact


Changelog

Version 0.1.0 (Current)

  • Initial release
  • Basic CLI interface
  • OpenAI API integration
  • Local LLM support
  • Conversation history management

Acknowledgments

  • Built with OpenAI Python SDK
  • Compatible with local LLM services like LM Studio and Ollama
  • Inspired by modern AI chat interfaces

Last Updated: March 2026
