Combine NeuroEvolution of Augmenting Topologies (NEAT) with TensorFlow Federated (TFF) for decentralized reinforcement learning in OpenAI Gym environments.
Repository accompanying this paper: https://www.sciencedirect.com/science/article/pii/S1877050925009986
- **NEAT Evolution**
  - Auto-generate and manage your `neat_config.txt`
  - Evolve networks with `neat-python`, evaluate with Gym
- **Custom Keras "Markov" Layer**
  - Wrap evolved NEAT policies into a Keras layer (`NEATMarkovLayer`)
  - Internally uses a small MLP (acts like a Gaussian-process proxy)
- **Federated Learning**
  - Build `tff.learning.algorithms.build_weighted_fed_avg` over Keras/NEAT models
  - Adjust per-client learning rates dynamically
  - Early stopping, checkpointing, and weighted aggregation by performance
- **Benchmark & Profiling**
  - Compare federated vs. non-federated NEAT performance
  - `cProfile` integration for bottleneck analysis
  - Built-in plotting utilities for MSE and reward trends
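To make the "Markov" layer idea concrete: the repository wraps an evolved policy in a small MLP so it can participate in a Keras model. The sketch below is illustrative only (the class name `MarkovPolicyLayer`, hidden size, and activations are assumptions, not the repository's actual `NEATMarkovLayer` implementation):

```python
import tensorflow as tf

class MarkovPolicyLayer(tf.keras.layers.Layer):
    """Illustrative stand-in for NEATMarkovLayer: a small MLP that
    approximates an evolved NEAT policy inside a Keras model."""

    def __init__(self, action_dim, hidden_units=16, **kwargs):
        super().__init__(**kwargs)
        self.hidden = tf.keras.layers.Dense(hidden_units, activation='tanh')
        self.out = tf.keras.layers.Dense(action_dim, activation='tanh')

    def call(self, observations):
        return self.out(self.hidden(observations))

# Example: BipedalWalker-v3 has 24-dim observations and 4-dim actions
inputs = tf.keras.Input(shape=(24,))
actions = MarkovPolicyLayer(action_dim=4)(inputs)
model = tf.keras.Model(inputs, actions)
```

Because the layer is an ordinary `tf.keras.layers.Layer`, its weights are visible to TFF's federated averaging like any other Keras variables.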
Make sure you have Python 3.8+ installed. For large-scale training, a CUDA-compatible GPU (tested with CUDA 11.x) is recommended.
Install Python dependencies:
```bash
pip install -r requirements.txt
```

`requirements.txt` should include at least:

```
neat-python
tensorflow>=2.10
tensorflow-federated
tensorflow-addons
gym
numpy
matplotlib
scikit-learn
```
```
.
├── neat_config.txt        # autogenerated NEAT settings
├── best_genome.pkl        # saved "winning" genome
├── demonstrations.pkl     # random/guided Gym demos
├── checkpoints/           # federated training snapshots
├── main.py                # entry-point script
├── federated_learning.py  # FederatedLearningTest class
└── requirements.txt       # Python dependencies
```
The `main.py` script supports two modes via the `--mode` flag:

- `--mode evolve`: Evolve a NEAT population (non-federated)
- `--mode federated`: Run federated training

Additional flags:

- `--generations N`: Number of NEAT generations (default: 5)
- `--rounds R`: Number of federated rounds (default: 5)
- `--clients C`: Number of federated clients (default: 10)

You can view all options with:

```bash
python main.py --help
```

On first run, create `neat_config.txt` with default settings:

```python
from main import create_neat_config
create_neat_config('neat_config.txt')
```

```bash
python main.py --mode evolve --generations 5
```

This will:

- Load/create `neat_config.txt`
- Run NEAT for up to 5 generations in `BipedalWalker-v3`
- Save the best genome to `best_genome.pkl`
```bash
python main.py --mode federated --rounds 5 --clients 10
```

This will:

- Load `best_genome.pkl` and spawn N Gym+NEAT clients
- Wrap each NEAT policy in `NEATMarkovLayer` and build a TFF FedAvg trainer
- Execute 5 federated rounds with:
  - Per-client adaptive learning rates
  - Weighted aggregation by average reward
  - Checkpointing in `./checkpoints/`
  - Early stopping if MSE < 0.01 over 3 rounds
- Plot round-wise MSE and client reward trends
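The "weighted aggregation by average reward" step can be illustrated independently of TFF: shift client rewards so the worst client contributes roughly zero, normalize them into weights, and average the clients' model updates accordingly. This is a sketch of the idea, not the repository's exact aggregation code:

```python
import numpy as np

def reward_weighted_average(client_updates, client_rewards):
    """Average per-client model updates, weighting each client by its
    average episode reward (shifted to be non-negative)."""
    rewards = np.asarray(client_rewards, dtype=np.float64)
    shifted = rewards - rewards.min() + 1e-8   # worst client gets ~0 weight
    weights = shifted / shifted.sum()
    # Each update is a list of arrays, one per model variable
    return [
        sum(w * update[i] for w, update in zip(weights, client_updates))
        for i in range(len(client_updates[0]))
    ]
```

With equal rewards this reduces to a plain mean; with very unequal rewards the aggregate is dominated by the best-performing client.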
After training, the script automatically compares:
- Non-federated vs Federated average rewards
- Round-by-round client performance
- MSE over federated rounds
All plots use Matplotlib and will display in your default viewer.
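A round-wise MSE/reward plot like the one the script produces can be sketched with plain Matplotlib (this helper is illustrative, not the repository's `plot_metrics`; the `Agg` backend is used here so it also works headlessly, whereas the script opens an interactive window):

```python
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; drop this to show a window
import matplotlib.pyplot as plt

def plot_round_metrics(mse_per_round, rewards_per_round, out_file='metrics.png'):
    """Plot federated-round MSE and mean client reward side by side."""
    rounds = range(1, len(mse_per_round) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(rounds, mse_per_round, marker='o')
    ax1.set(xlabel='Federated round', ylabel='MSE', title='MSE per round')
    ax2.plot(rounds, rewards_per_round, marker='o', color='tab:green')
    ax2.set(xlabel='Federated round', ylabel='Mean client reward',
            title='Reward trend')
    fig.tight_layout()
    fig.savefig(out_file)
    plt.close(fig)
    return out_file
```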
You can also drive training programmatically:

```python
from federated_learning import FederatedLearningTest, model_fn, collect_client_data
import neat, pickle
import tensorflow as tf
import tensorflow_federated as tff

# Load config and best genome
config = neat.Config(
    neat.DefaultGenome,
    neat.DefaultReproduction,
    neat.DefaultSpeciesSet,
    neat.DefaultStagnation,
    'neat_config.txt'
)
with open('best_genome.pkl', 'rb') as f:
    genome = pickle.load(f)

# Prepare clients
def create_env_net(i):
    from federated_learning import create_environment_and_network
    return create_environment_and_network(i, 1.0 + 0.1 * i, config)

clients = [create_env_net(i) for i in range(5)]

# Build TFF trainer & initial state
trainer = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn=model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.1)
)
state = trainer.initialize()

flt_test = FederatedLearningTest(clients, model_fn, trainer, state, config, [])

# Run and plot
state, metrics = flt_test.run_federated_training(rounds=10)
flt_test.plot_metrics(metrics)
```

- Fork the repo
- Create a feature branch (`git checkout -b my-feature`)
- Commit your changes (`git commit -am 'Add X'`)
- Push to the branch (`git push origin my-feature`)
- Open a Pull Request
Please follow the existing code style and include tests for new functionality.
This project is licensed under MIT. See LICENSE for details.