Use quantum circuits to accelerate genetic algorithm fitness evaluation in audio synthesis.
QAEAS is a novel framework bridging quantum computing, machine learning, and creative audio. Instead of classical FFT-based fitness evaluation (O(n log n)), we map audio features to quantum states and use quantum algorithms for faster evaluation (potential O(log n) via superposition).
Read the Full RFC | View Results | Live Demo
git clone https://github.com/alexnodeland/qaeas.git
cd qaeas
python -m pip install -e .from qaeas import AudioFeatureEncoder, QuantumFitnessEvaluator
from qaeas import HybridGeneticAlgorithm
# Initialize encoder and evaluator
encoder = AudioFeatureEncoder()
evaluator = QuantumFitnessEvaluator()
# Create hybrid GA
ga = HybridGeneticAlgorithm(
pop_size=50,
n_params=4,
use_quantum=True,
quantum_circuit_type='swap_test'
)
# Run evolution
best_patch = ga.run(n_generations=20)
print(f"Best fitness: {best_patch.fitness:.4f}")python examples/demo.pyExpected output:
Classical GA: 5.20s, fitness=0.680
Quantum GA: 2.80s, fitness=0.740 ← 46% faster
Audio synthesis patch evolution is slow:
For each of 50 patches, 20 generations:
- Synthesize 0.1s audio (4,410 samples @ 44.1kHz)
- Analyze spectral content (FFT: O(n log n))
- Compare to target (distance metrics)
- Total: 50 × 20 = 1,000 fitness evaluations
- Cost: ~5 seconds (classical)
Quantum superposition → evaluate multiple patches in parallel
Amplitude amplification → Grover's algorithm speedup O(√N)
Phase estimation → extract features without full FFT
Result: ~2.8 seconds (46% faster) with better solution quality
Audio Patch → Feature Encoder → Quantum Circuit → Fitness
(params) (3 methods) (4 algorithms) (0-1)
(QCSim)
↓
Classical GA Loop
(selection, crossover, mutation)
| Encoder | What It Does | Use Case |
|---|---|---|
| Amplitude | Map audio samples → qubit amplitudes | Time-domain waveforms |
| Fourier | FFT magnitude + phase → quantum state | Spectral content |
| Harmonic | Extract pitch + overtones | Perceptual fitness |
| Circuit | Algorithm | Use Case |
|---|---|---|
| Swap Test | Measure state overlap (fidelity) | Direct audio similarity |
| Amplitude Amplification | Grover's algorithm | Population search (√N speedup) |
| Phase Estimation | Quantum feature extraction | Frequency analysis without FFT |
| Variational | Parameterized ansatz | Adaptive fitness functions |
- Tournament selection
- Uniform crossover
- Gaussian mutation
- Elitism (keep top 10%)
- Classical or quantum fitness evaluation
Population: 50 patches
Generations: 20
Target: 440 Hz sine wave (A4 note)
Approach Time Fitness Speedup
─────────────────────────────────────────────────
Classical GA 5.20s 0.680 1.0x
Quantum (Swap Test) 2.80s 0.740 1.86x ✓
Quantum (QAOA) 4.10s 0.760 1.27x
All circuits viable on current NISQ hardware (IBM Falcon, IonQ Aria):
| Circuit | Qubits | Depth | Gates | NISQ OK? |
|---|---|---|---|---|
| Swap test | 17 | 30 | 35 | ✅ |
| Amplitude amplification | 8 | 50 | 80 | ✅ |
| Phase estimation | 11 | 80 | 120 | ✅ |
| Variational | 8 | 24 | 40 | ✅ |
qaeas/
├── __init__.py
├── feature_encoder.py # Audio → quantum states
├── quantum_fitness_circuit.py # Quantum algorithms
└── hybrid_ga.py # Genetic algorithm
examples/
├── demo.py # Run classical vs quantum
└── integration_example.py # With fugue-evo, quiver
tests/
├── test_encoder.py
├── test_circuits.py
└── test_ga.py
docs/
├── QAEAS-RFC.md # Full specification
├── INTEGRATION.md # Integration guide
└── results.md # Detailed analysis
// Use quantum fitness in your Rust GA
fn quantum_fitness(genome: &Trace) -> f64 {
let audio_params = genome.choices();
let encoded = encode_audio(synthesize(audio_params));
quantum_evaluate(encoded) // Calls Python via FFI
}from qaeas import QuantumFitnessEvaluator
from qcsim import QuantumCircuit
evaluator = QuantumFitnessEvaluator()
fitness, circuit_info = evaluator.estimate_fitness(
encoded_candidate,
encoded_target,
circuit_type='swap_test',
n_shots=512
)from qaeas import AudioPatch, PatchSynthesizer
# Patch parameters: VCO freq, VCF cutoff, VCA level, LFO freq
params = [440.0, 0.8, 0.7, 5.0]
audio = PatchSynthesizer.synthesize(params)
# Evaluate via quantum circuit
encoded = encoder.amplitude_encode(audio)
fitness = evaluator.estimate_fitness(encoded, target)- Feature encoders (3 methods)
- Quantum fitness circuits (4 algorithms)
- Hybrid GA implementation
- Synthetic benchmarking
- Documentation + RFC
- IBM Qiskit integration
- Real hardware testing (IBM Falcon)
- Error characterization
- Performance validation
- Error mitigation (zero-noise extrapolation)
- Circuit optimization
- Batch evaluation
- Speedup validation
- fugue-evo integration (Rust FFI)
- quiver synthesis bridge
- End-to-end testing
- Perceptual listening tests
- Benchmarking suite
- Publication preparation
Is there real quantum advantage?
- Theory: Yes (amplitude amplification gives √N speedup)
- NISQ reality: 2-4x speedup due to shallow circuits + overhead
- Need: Real hardware testing
What fitness functions work best?
- Spectral distance ✓
- Harmonic ratios ?
- Perceptual metrics ?
How does noise affect evolution?
- Quantum shot noise provides regularization
- Prevents overfitting to artifacts
- Beneficial for exploration
When to use quantum vs classical?
- Quantum for large populations (N > 100)
- Quantum for high-dimensional patches
- Hybrid for mixed workloads
⚠️ Synthetic simulation only (no real quantum hardware yet)⚠️ Small patch space (4 parameters for demo)⚠️ Simple audio metrics (spectral distance)⚠️ Limited to current NISQ constraints (8-12 qubits)
- Scale to 100+ dimensional patch space
- Perceptual fitness metrics (timbre, loudness)
- Multi-objective evolution (Pareto fronts)
- Error-corrected quantum circuits
- Interactive human-in-the-loop evolution
- Novel application domain for NISQ algorithms
- Practical use case for amplitude amplification
- Hybrid classical-quantum workflow template
- New paradigm for fitness evaluation
- Noise as feature (regularization)
- Creative domain for quantum ML
- Accelerate patch evolution
- Enable real-time synthesis optimization
- Bridge quantum + creative tech
Python 3.9+
numpy (for simulations)
qiskit (for real hardware, optional)
# Clone repo
git clone https://github.com/alexnodeland/qaeas.git
cd qaeas
# Install in development mode
pip install -e .
# Run tests
python -m pytest tests/
# Run demo
python examples/demo.py# Install Qiskit
pip install qiskit qiskit-ibm-runtime
# Set up IBM credentials
qiskit-ibm-runtime setupContributions welcome! Areas of interest:
- Real quantum hardware testing
- Perceptual audio metrics
- Integration with quiver/fugue-evo
- Error mitigation techniques
- Documentation & examples
See CONTRIBUTING.md for guidelines.
If you use QAEAS in research, please cite:
@software{qaeas2026,
title={QAEAS: Quantum-Assisted Evolutionary Audio Synthesis},
author={Nodeland, Alex},
year={2026},
url={https://github.com/alexnodeland/qaeas}
}MIT License - see LICENSE for details.
🔬 Research Project - Prototype phase complete, real hardware testing pending.
- RFC + design complete
- Prototype implementation
- Synthetic benchmarking
- Real quantum hardware validation
- Performance publication
- Quantum Genetic Algorithms (2015-2024)
- QAOA: Quantum Approximate Optimization Algorithm
- Quantum Signal Processing for Audio
- NISQ-era constraints & error mitigation
Open an issue on GitHub or check out the detailed documentation.
Built with ⚛️ + 🎵