An interactive Tic Tac Toe experience that blends machine learning, web development, and game theory to create an engaging, intelligent opponent. Built as a full-stack AI gaming project, it features a sleek user interface, strategic AI moves, and real-time gameplay for an immersive challenge.

AI-Powered Tic Tac Toe Game

A sophisticated Tic Tac Toe game featuring a reinforcement learning AI opponent and a modern web interface. This project combines machine learning, game theory, and web development to create an engaging gaming experience.


Features

AI Components

  • Deep Q-Network (DQN) trained using Stable Baselines3
  • Multiple AI Strategies: Random, Smart, Minimax, and Adaptive opponents
  • Advanced Game Logic: Fork detection, strategic positioning, and tactical play
  • Reinforcement Learning Environment compatible with Gymnasium (the maintained successor to OpenAI Gym)

Game Modes

  • 1 vs 1: Two players with custom names
  • 1 vs Bot: Play against the trained AI opponent
  • Real-time Turn Display: Shows whose turn it is with player names
  • Personalized Results: Winner announcements by player name

Modern Web Interface

  • Intuitive Design: Clean, modern UI with gradient backgrounds
  • Responsive Layout: Works on desktop and mobile devices
  • Interactive Board: Click-to-play with visual feedback
  • Mode Selection: Easy switching between game modes
  • Custom Player Names: Personalize your gaming experience

Quick Start

Prerequisites

  • Python 3.8 or higher
  • pip package manager
  • Web browser for the frontend

Installation & Running

  1. Clone the repository

    git clone https://github.com/jahnavigbedre/Tic--Tac-Toe.git
    cd Tic--Tac-Toe
  2. Install dependencies

    pip install -r requirements.txt
  3. Train the AI (optional; a pre-trained model is included)

    python train_agent.py
  4. Start the backend (Flask API server)

    python app.py
    # By default, runs on http://localhost:5000
  5. Start the frontend (HTML server)

    • You must serve index.html using a local web server (not by double-clicking the file), otherwise browser security will block API requests.
    • You can use Python's built-in HTTP server:
      # In the project directory (where index.html is located):
      python -m http.server 8000
      # Now open http://localhost:8000 in your browser
    • The frontend is configured to call the backend API at http://localhost:5000/move.
    • You can run the frontend server on any port (e.g., 8000, 3000, etc.), but the backend must be running on port 5000 for the API calls to work (or update the JS code if you change the backend port).

Note:

  • If you open index.html directly as a file (file://...), the browser will block API requests to the backend. Always use a local server for the frontend.
  • CORS is enabled in the backend to allow cross-origin requests from your frontend server.
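
Once both servers are running, you can sanity-check the backend directly from a terminal. The example board below uses the same encoding documented in the API Reference further down (0 = empty, 1 = Player X, 2 = Player O, the bot); the exact move returned depends on the trained model.

curl -X POST http://localhost:5000/move \
     -H "Content-Type: application/json" \
     -d '{"board": [0, 1, 0, 2, 1, 0, 0, 0, 2]}'
# Expected: a JSON object with the bot's chosen cell index, e.g. {"move": 6}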

How to Play

  1. Choose Game Mode

    • Select "1 vs 1" for two-player mode
    • Select "1 vs Bot" to play against AI
  2. Enter Player Names

    • For 1v1: Enter both player names
    • For vs Bot: Enter your name
  3. Start Playing

    • Click "Start Game" to begin
    • Click on board squares to make moves
    • Follow turn indicators
  4. Game Results

    • Winner displayed by name
    • "Restart" to play again with different settings

๐Ÿ—๏ธ Project Structure

Tic--Tac-Toe/
├── app.py                  # Flask web server and API
├── index.html              # Modern web interface
├── Tic--Tac-Toe_env.py     # RL environment with multiple AI strategies
├── train_agent.py          # AI training script
├── test.py                 # Model testing script
├── requirements.txt        # Python dependencies
├── Tic--Tac-Toe_agent.zip  # Pre-trained AI model
└── README.md               # This file

AI Architecture

Reinforcement Learning Environment

  • State Space: 18-dimensional (9 board positions + 9 valid move mask)
  • Action Space: 9 discrete actions (board positions 0-8)
  • Reward System (see the sketch after this list):
    • +10 for winning
    • -5 for losing
    • +1 for draws
    • -10 for invalid moves
    • Small positional rewards for strategic play
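
As a rough illustration of the observation layout and reward scheme described above, here is a simplified, self-contained sketch. It is not the repository's actual environment code; the function names are illustrative only.

import numpy as np

def make_observation(board):
    """board: list of 9 ints (0 = empty, 1 = agent, 2 = opponent).
    The 18-dim observation concatenates the raw board with a valid-move mask."""
    board = np.array(board, dtype=np.float32)
    valid_mask = (board == 0).astype(np.float32)  # 1 where a move is still legal
    return np.concatenate([board, valid_mask])    # shape (18,)

def reward_for(outcome, invalid_move=False):
    """Terminal rewards matching the scheme listed above."""
    if invalid_move:
        return -10.0
    return {"win": 10.0, "loss": -5.0, "draw": 1.0}.get(outcome, 0.0)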

AI Opponent Strategies

Smart Strategy

  1. Win immediately if possible
  2. Block opponent from winning
  3. Create forks (multiple winning paths)
  4. Block opponent forks
  5. Take center for strategic advantage
  6. Take opposite corners
  7. Prefer corners over edges
  8. Strategic positioning based on evaluation (see the sketch below)
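
The priority ordering above can be expressed roughly as follows. This is an illustrative, self-contained sketch rather than the repository's implementation, and step 8's positional evaluation is reduced here to a simple corner/edge preference.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winning_move(board, player):
    """Return a cell that completes a line for `player`, or None."""
    for line in WIN_LINES:
        values = [board[i] for i in line]
        if values.count(player) == 2 and values.count(0) == 1:
            return line[values.index(0)]
    return None

def fork_move(board, player):
    """Return a cell giving `player` two simultaneous winning threats, or None."""
    for cell in range(9):
        if board[cell] != 0:
            continue
        trial = board[:]
        trial[cell] = player
        threats = sum(1 for line in WIN_LINES
                      if [trial[i] for i in line].count(player) == 2
                      and [trial[i] for i in line].count(0) == 1)
        if threats >= 2:
            return cell
    return None

def choose_smart_move(board, me=2, opp=1):
    """Follow the priority list above (0 = empty, 1 = X, 2 = O/bot)."""
    corners, edges = [0, 2, 6, 8], [1, 3, 5, 7]
    for candidate in (
        winning_move(board, me),        # 1. win immediately
        winning_move(board, opp),       # 2. block the opponent's win
        fork_move(board, me),           # 3. create a fork
        fork_move(board, opp),          # 4. block the opponent's fork
        4 if board[4] == 0 else None,   # 5. take the center
        next((8 - c for c in corners if board[c] == opp and board[8 - c] == 0), None),  # 6. opposite corner
        next((c for c in corners if board[c] == 0), None),  # 7. prefer corners
        next((c for c in edges if board[c] == 0), None),    # 8. edges (simplified evaluation)
    ):
        if candidate is not None:
            return candidate
    return None  # board is full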

Minimax Strategy

  • Perfect play using the minimax algorithm with alpha-beta pruning
  • Guaranteed optimal moves (never loses with optimal play); a compact sketch follows below
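
For reference, a compact minimax with alpha-beta pruning for Tic Tac Toe looks roughly like this. It is an illustrative sketch independent of the repository's code: 1 is the maximizing player, 2 the minimizing player, 0 an empty cell.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Return (score, best_move) from the perspective of player 1 (the maximizer)."""
    w = winner(board)
    if w is not None:
        return (1 if w == 1 else -1), None
    if all(cell != 0 for cell in board):
        return 0, None                      # draw
    best_move = None
    for cell in (i for i in range(9) if board[i] == 0):
        board[cell] = player
        score, _ = minimax(board, 3 - player, alpha, beta)
        board[cell] = 0                     # undo the trial move
        if player == 1 and score > alpha:
            alpha, best_move = score, cell
        elif player == 2 and score < beta:
            beta, best_move = score, cell
        if alpha >= beta:                   # prune the remaining branches
            break
    return (alpha if player == 1 else beta), best_move

# Usage: pass a mutable list and the player to move, e.g.
# score, move = minimax(list(board), player=2)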

Adaptive Strategy

  • Learns from player patterns
  • Mixes strategies based on game history
  • Increases difficulty over time

API Reference

Endpoints

POST /move

Returns the AI's move for the current board state; a Python client example follows the board encoding below.

Request Body:

{
  "board": [0, 1, 0, 2, 1, 0, 0, 0, 2]
}

Response:

{
  "move": 6
}

Board Encoding:

  • 0: Empty cell
  • 1: Player X
  • 2: Player O (Bot)
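
A minimal Python client for this endpoint, assuming the backend from app.py is running on its default port:

import requests

# Same encoding as above: 0 = empty, 1 = Player X, 2 = Player O (the bot).
board = [0, 1, 0, 2, 1, 0, 0, 0, 2]

response = requests.post("http://localhost:5000/move", json={"board": board})
print(response.json())  # e.g. {"move": 6}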

Development

Training Your Own Model

from stable_baselines3 import DQN
# Module and class names below are assumptions; adjust them to match the
# environment file in this repository (listed as Tic--Tac-Toe_env.py above).
from tictactoe_env import SmartTicTacToeEnv

# Create environment with desired opponent strategy
env = SmartTicTacToeEnv(opponent_strategy='smart')

# Train model
model = DQN("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# Save model
model.save("your_model_name")

Testing the Model

from stable_baselines3 import DQN
# Module and class names assumed as in the training example above.
from tictactoe_env import SmartTicTacToeEnv

# Load model
model = DQN.load("Tic--Tac-Toe_agent")
env = SmartTicTacToeEnv()

# Test gameplay
obs, _ = env.reset()
done = False

while not done:
    action, _ = model.predict(obs)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    env.render()

Customizing AI Difficulty

Modify the opponent strategy in Tic--Tac-Toe_env.py:

  • 'random': Easy (random moves)
  • 'smart': Medium (strategic but not perfect)
  • 'minimax': Hard (optimal play)
  • 'adaptive': Dynamic (learns and adapts)

Performance Metrics

The trained AI achieves:

  • 95%+ win rate against random opponents
  • 60-70% win/draw rate against smart opponents
  • 50% draw rate against minimax (optimal play)
  • Sub-second response time for move calculations

Deployment

Local Development

python app.py
# Access at http://localhost:5000

Production Deployment

For production deployment, consider:

  • Using Gunicorn or uWSGI to serve Flask (see the example below)
  • Setting up reverse proxy with Nginx
  • Using environment variables for configuration
  • Implementing proper logging and monitoring
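
For example, a basic Gunicorn invocation could look like the following. It assumes the Flask instance defined in app.py is named app; check the file and adjust the app:app target if it is named differently.

pip install gunicorn
# 4 worker processes, bound to port 5000 (the port the frontend expects)
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app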

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Guidelines

  • Follow PEP 8 style guidelines
  • Add tests for new features
  • Update documentation as needed
  • Ensure AI training convergence before committing models

๐Ÿ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Stable Baselines3 for the DQN implementation
  • Gymnasium for the RL environment framework
  • Flask for the web API framework
  • NumPy for numerical computations

๐Ÿ› Known Issues

  • AI model requires retraining if environment parameters change significantly
  • Web interface requires manual refresh after network errors
  • Training time scales with complexity of opponent strategy

Future Enhancements

  • Neural network visualization for AI decision making
  • Online multiplayer with WebSocket support
  • Tournament mode with multiple AI opponents
  • Mobile app version
  • AI vs AI battle mode
  • Advanced statistics and analytics
  • Custom board sizes (4x4, 5x5)
  • AI difficulty slider with real-time adjustment

Contact

For questions, suggestions, or collaboration opportunities, please open an issue on the repository.

Enjoy playing against our AI! Can you beat the machine?
