
NEAT-Federated-Learning

Combine NeuroEvolution of Augmenting Topologies (NEAT) with TensorFlow Federated (TFF) for decentralized reinforcement learning in OpenAI Gym environments.

This repository accompanies the paper: https://www.sciencedirect.com/science/article/pii/S1877050925009986

Features

  • NEAT Evolution
    • Auto-generate and manage your neat_config.txt
    • Evolve networks with neat-python, evaluate with Gym
  • Custom Keras "Markov" Layer
    • Wrap evolved NEAT policies into a Keras layer (NEATMarkovLayer)
    • Internally uses a small MLP (acts like a Gaussian-process proxy)
  • Federated Learning
    • Build tff.learning.algorithms.build_weighted_fed_avg over Keras/NEAT
    • Adjust per-client learning rates dynamically
    • Early stopping, checkpointing, and weighted aggregation by performance
  • Benchmark & Profiling
    • Compare federated vs non-federated NEAT performance
    • cProfile integration for bottleneck analysis
    • Built-in plotting utilities for MSE and reward trends

Requirements

Make sure you have Python 3.8+ installed. For large-scale training, a CUDA-compatible GPU (tested with CUDA 11.x) is recommended.

Install Python dependencies:

pip install -r requirements.txt

requirements.txt should include at least:

neat-python
tensorflow>=2.10
tensorflow-federated
tensorflow-addons
gym
numpy
matplotlib
scikit-learn

Project Structure

.
├── neat_config.txt         # autogenerated NEAT settings
├── best_genome.pkl         # saved "winning" genome
├── demonstrations.pkl      # random/guided Gym demos
├── checkpoints/            # federated training snapshots
├── main.py                 # entry-point script
├── federated_learning.py   # FederatedLearningTest class
└── requirements.txt        # Python dependencies

Usage

The main.py script supports two modes via the --mode flag:

  • --mode evolve : Evolve a NEAT population (non-federated)
  • --mode federated : Run federated training

Additional flags:

  • --generations N : Number of NEAT generations (default: 5)
  • --rounds R : Number of federated rounds (default: 5)
  • --clients C : Number of federated clients (default: 10)

You can view all options with:

python main.py --help
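The flags above map naturally onto Python's argparse. A minimal sketch of how such a CLI could be declared — the flag names and defaults come from the list above, but the parser itself is illustrative, not main.py's actual code:

```python
import argparse

def build_parser():
    # Illustrative reconstruction of the CLI documented above;
    # names and defaults follow the flag list in this README.
    parser = argparse.ArgumentParser(description="NEAT + federated learning runner")
    parser.add_argument("--mode", choices=["evolve", "federated"], required=True,
                        help="evolve a NEAT population or run federated training")
    parser.add_argument("--generations", type=int, default=5,
                        help="number of NEAT generations")
    parser.add_argument("--rounds", type=int, default=5,
                        help="number of federated rounds")
    parser.add_argument("--clients", type=int, default=10,
                        help="number of federated clients")
    return parser

# Parse a literal argument list instead of sys.argv for demonstration.
args = build_parser().parse_args(["--mode", "federated", "--clients", "4"])
print(args.mode, args.clients)  # federated 4
```

Unspecified flags fall back to the documented defaults (5 generations, 5 rounds, 10 clients).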

1. Auto-generate your NEAT config

On first run, create neat_config.txt with default settings:

from main import create_neat_config
create_neat_config('neat_config.txt')
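The generated file follows the standard neat-python INI layout. A trimmed illustration of the kind of sections it contains — the concrete values here are placeholders, not the script's actual defaults:

```ini
; Illustrative excerpt of a neat-python config file.
; The real values are written by create_neat_config and may differ.
[NEAT]
fitness_criterion     = max
fitness_threshold     = 300.0
pop_size              = 50
reset_on_extinction   = False

[DefaultGenome]
num_inputs            = 24   ; BipedalWalker-v3 observation size
num_outputs           = 4    ; BipedalWalker-v3 action size
```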

2. Evolve a NEAT population (non-federated)

python main.py --mode evolve --generations 5

This will:

  • Load/create neat_config.txt
  • Run NEAT for up to 5 generations in BipedalWalker-v3
  • Save the best genome to best_genome.pkl
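Persisting and reloading the winner is a plain pickle round-trip. A minimal sketch — a dict stands in here for the real neat-python genome object, which pickles the same way:

```python
import pickle

# Illustrative round-trip: evolve mode pickles the winning genome to
# best_genome.pkl, and federated mode unpickles it later. A dict stands
# in for the real neat-python genome object.
best_genome = {"fitness": 212.5, "key": 1}

with open("best_genome.pkl", "wb") as f:
    pickle.dump(best_genome, f)

with open("best_genome.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored["fitness"])  # 212.5
```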

3. Run federated training

python main.py --mode federated --rounds 5 --clients 10

This will:

  • Load best_genome.pkl and spawn N Gym+NEAT clients
  • Wrap each NEAT policy in NEATMarkovLayer and build a TFF FedAvg trainer
  • Execute 5 federated rounds with:
    • Per-client adaptive learning rates
    • Weighted aggregation by average reward
    • Checkpointing in ./checkpoints/
    • Early stopping if MSE < 0.01 over 3 rounds
  • Plot round-wise MSE and client reward trends
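The aggregation and stopping rules in this list can be sketched in plain Python. The MSE threshold (0.01) and three-round window come from the description above; the flat parameter vectors and weighting scheme are illustrative stand-ins for the Keras weight tensors TFF actually aggregates:

```python
def aggregate_by_reward(client_weights, client_rewards):
    """Average client parameter vectors, weighting each client by its
    average reward (illustrative; the real code aggregates Keras
    weight tensors through TFF)."""
    total = sum(client_rewards)
    if total == 0:
        coeffs = [1.0 / len(client_rewards)] * len(client_rewards)
    else:
        coeffs = [r / total for r in client_rewards]
    n_params = len(client_weights[0])
    return [sum(c * w[i] for c, w in zip(coeffs, client_weights))
            for i in range(n_params)]

def should_stop(mse_history, threshold=0.01, window=3):
    """Early stopping as described above: MSE below the threshold
    for `window` consecutive rounds."""
    if len(mse_history) < window:
        return False
    return all(m < threshold for m in mse_history[-window:])

# Two clients with 2-parameter models; the better-rewarded client dominates.
agg = aggregate_by_reward([[1.0, 2.0], [3.0, 4.0]], [1.0, 3.0])
print(agg)  # [2.5, 3.5]
print(should_stop([0.02, 0.009, 0.008, 0.007]))  # True
```

Note that raw Gym rewards can be negative (BipedalWalker-v3 penalizes falls), so a real weighting scheme would need to shift or clip rewards before normalizing; this sketch assumes non-negative values.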

4. Benchmark & Visualize

After training, the script automatically compares:

  • Non-federated vs Federated average rewards
  • Round-by-round client performance
  • MSE over federated rounds

All plots use Matplotlib and will display in your default viewer.
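The federated vs non-federated comparison reduces to averaging per-round client rewards before plotting. A small illustration of the metric computation — the reward logs here are hypothetical, and the real script renders the result with Matplotlib:

```python
def mean_reward(rewards_per_round):
    """Flatten per-round client rewards and average them
    (illustrative stand-in for the benchmark step)."""
    flat = [r for round_rewards in rewards_per_round for r in round_rewards]
    return sum(flat) / len(flat)

# Hypothetical logs: one list of client rewards per round.
non_federated = [[50.0, 60.0], [70.0, 80.0]]
federated = [[55.0, 65.0], [75.0, 95.0]]

print(mean_reward(non_federated))  # 65.0
print(mean_reward(federated))      # 72.5
```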

Custom Usage Examples

from federated_learning import FederatedLearningTest, model_fn, collect_client_data
import pickle

import neat
import tensorflow as tf
import tensorflow_federated as tff  # required for the tff.* calls below

# Load config and best genome
config = neat.Config(
    neat.DefaultGenome,
    neat.DefaultReproduction,
    neat.DefaultSpeciesSet,
    neat.DefaultStagnation,
    'neat_config.txt'
)
genome = pickle.load(open('best_genome.pkl', 'rb'))

# Prepare clients
def create_env_net(i):
    from federated_learning import create_environment_and_network
    return create_environment_and_network(i, 1.0 + 0.1 * i, config)
clients = [create_env_net(i) for i in range(5)]

# Build TFF trainer & initial state
trainer = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn=model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.1)
)
state = trainer.initialize()
flt_test = FederatedLearningTest(clients, model_fn, trainer, state, config, [])

# Run and plot
state, metrics = flt_test.run_federated_training(rounds=10)
flt_test.plot_metrics(metrics)

Contributing

  • Fork the repo
  • Create a feature branch (git checkout -b my-feature)
  • Commit your changes (git commit -am 'Add X')
  • Push to the branch (git push origin my-feature)
  • Open a Pull Request

Please follow the existing code style and include tests for new functionality.

License

This project is licensed under MIT. See LICENSE for details.
