diff --git a/RL/.gitignore b/RL/.gitignore new file mode 100644 index 0000000..015566e --- /dev/null +++ b/RL/.gitignore @@ -0,0 +1,220 @@ +# Byte-compiled / optimized / DLL files +__pycache__/ +*.py[codz] +*$py.class +*.pyc + +# C extensions +*.so + +# Distribution / packaging +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +share/python-wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST + +# PyInstaller +# Usually these files are written by a python script from a template +# before PyInstaller builds the exe, so as to inject date/other infos into it. +*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.nox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +*.py.cover +.hypothesis/ +.pytest_cache/ +cover/ + +# Translations +*.mo +*.pot + +# Django stuff: +*.log +local_settings.py +db.sqlite3 +db.sqlite3-journal + +# Flask stuff: +instance/ +.webassets-cache + +# Scrapy stuff: +.scrapy + +# Sphinx documentation +docs/_build/ + +# PyBuilder +.pybuilder/ +target/ + +# Jupyter Notebook +.ipynb_checkpoints + +# IPython +profile_default/ +ipython_config.py + +# pyenv +# For a library or package, you might want to ignore these files since the code is +# intended to run in multiple environments; otherwise, check them in: +# .python-version + +# pipenv +# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. +# However, in case of collaboration, if having platform-specific dependencies or dependencies +# having no cross-platform support, pipenv may install dependencies that don't work, or not +# install all needed dependencies. +# Pipfile.lock + +# UV +# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control. 
+# This is especially recommended for binary packages to ensure reproducibility, and is more +# commonly ignored for libraries. +# uv.lock + +# poetry +# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. +# This is especially recommended for binary packages to ensure reproducibility, and is more +# commonly ignored for libraries. +# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control +# poetry.lock +# poetry.toml + +# pdm +# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. +# pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python. +# https://pdm-project.org/en/latest/usage/project/#working-with-version-control +# pdm.lock +# pdm.toml +.pdm-python +.pdm-build/ + +# pixi +# Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control. +# pixi.lock +# Pixi creates a virtual environment in the .pixi directory, just like venv module creates one +# in the .venv directory. It is recommended not to include this directory in version control. +.pixi + +# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm +__pypackages__/ + +# Celery stuff +celerybeat-schedule +celerybeat.pid + +# Redis +*.rdb +*.aof +*.pid + +# RabbitMQ +mnesia/ +rabbitmq/ +rabbitmq-data/ + +# ActiveMQ +activemq-data/ + +# SageMath parsed files +*.sage.py + +# Environments +.env +.envrc +.venv +env/ +venv/ +ENV/ +env.bak/ +venv.bak/ + +# Spyder project settings +.spyderproject +.spyproject + +# Rope project settings +.ropeproject + +# mkdocs documentation +/site + +# mypy +.mypy_cache/ +.dmypy.json +dmypy.json + +# Pyre type checker +.pyre/ + +# pytype static type analyzer +.pytype/ + +# Cython debug symbols +cython_debug/ + +# PyCharm +# JetBrains specific template is maintained in a separate JetBrains.gitignore that can +# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore +# and can be added to the global gitignore or merged into this file. For a more nuclear +# option (not recommended) you can uncomment the following to ignore the entire idea folder. +# .idea/ + +# Abstra +# Abstra is an AI-powered process automation framework. +# Ignore directories containing user credentials, local state, and settings. +# Learn more at https://abstra.io/docs +.abstra/ + +# Visual Studio Code +# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore +# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore +# and can be added to the global gitignore or merged into this file. 
However, if you prefer, +# you could uncomment the following to ignore the entire vscode folder +# .vscode/ + +# Ruff stuff: +.ruff_cache/ + +# PyPI configuration file +.pypirc + +# Marimo +marimo/_static/ +marimo/_lsp/ +__marimo__/ + +# Streamlit +.streamlit/secrets.toml + + +.env \ No newline at end of file diff --git a/RL/.python-version b/RL/.python-version new file mode 100644 index 0000000..e4fba21 --- /dev/null +++ b/RL/.python-version @@ -0,0 +1 @@ +3.12 diff --git a/RL/__pycache__/cfd_analysis.cpython-310.pyc b/RL/__pycache__/cfd_analysis.cpython-310.pyc deleted file mode 100644 index 995013d..0000000 Binary files a/RL/__pycache__/cfd_analysis.cpython-310.pyc and /dev/null differ diff --git a/RL/__pycache__/cfd_analysis.cpython-312.pyc b/RL/__pycache__/cfd_analysis.cpython-312.pyc deleted file mode 100644 index 443964c..0000000 Binary files a/RL/__pycache__/cfd_analysis.cpython-312.pyc and /dev/null differ diff --git a/RL/__pycache__/early_stopping.cpython-310.pyc b/RL/__pycache__/early_stopping.cpython-310.pyc deleted file mode 100644 index cb233be..0000000 Binary files a/RL/__pycache__/early_stopping.cpython-310.pyc and /dev/null differ diff --git a/RL/__pycache__/f1_endplate_generator.cpython-312.pyc b/RL/__pycache__/f1_endplate_generator.cpython-312.pyc deleted file mode 100644 index 8ae2707..0000000 Binary files a/RL/__pycache__/f1_endplate_generator.cpython-312.pyc and /dev/null differ diff --git a/RL/__pycache__/formula_constraints.cpython-310.pyc b/RL/__pycache__/formula_constraints.cpython-310.pyc deleted file mode 100644 index 4fdaa47..0000000 Binary files a/RL/__pycache__/formula_constraints.cpython-310.pyc and /dev/null differ diff --git a/RL/__pycache__/formula_constraints.cpython-312.pyc b/RL/__pycache__/formula_constraints.cpython-312.pyc deleted file mode 100644 index 2a37499..0000000 Binary files a/RL/__pycache__/formula_constraints.cpython-312.pyc and /dev/null differ diff --git a/RL/__pycache__/main_pipeline.cpython-310.pyc 
b/RL/__pycache__/main_pipeline.cpython-310.pyc deleted file mode 100644 index 7ef066e..0000000 Binary files a/RL/__pycache__/main_pipeline.cpython-310.pyc and /dev/null differ diff --git a/RL/__pycache__/main_pipeline.cpython-312.pyc b/RL/__pycache__/main_pipeline.cpython-312.pyc deleted file mode 100644 index 5eb0e09..0000000 Binary files a/RL/__pycache__/main_pipeline.cpython-312.pyc and /dev/null differ diff --git a/RL/__pycache__/wing_generator.cpython-310.pyc b/RL/__pycache__/wing_generator.cpython-310.pyc deleted file mode 100644 index 126b59e..0000000 Binary files a/RL/__pycache__/wing_generator.cpython-310.pyc and /dev/null differ diff --git a/RL/__pycache__/wing_generator.cpython-312.pyc b/RL/__pycache__/wing_generator.cpython-312.pyc deleted file mode 100644 index 007387d..0000000 Binary files a/RL/__pycache__/wing_generator.cpython-312.pyc and /dev/null differ diff --git a/RL/cfd_results/gen000_ind003_cfd_results.json b/RL/cfd_results/gen000_ind003_cfd_results.json new file mode 100644 index 0000000..57d023e --- /dev/null +++ b/RL/cfd_results/gen000_ind003_cfd_results.json @@ -0,0 +1,104 @@ +{ + "timestamp": "20251022_014725", + "generation": 0, + "individual_idx": 3, + "design_parameters": { + "total_span": 1600.0, + "root_chord": 250.0, + "flap_count": 3, + "endplate_height": 268.9881271806169 + }, + "performance": { + "total_downforce_N": -35933631498.22818, + "total_drag_N": 462260446446.7919, + "total_sideforce_N": 0.0, + "efficiency_ratio": -0.07773460129335183, + "center_of_pressure_m": 0.0, + "pitching_moment_Nm": -3237118437319.3486 + }, + "f1_metrics": { + "downforce_per_drag": -0.07773460129335183, + "downforce_to_weight_ratio": -2441972.918669941, + "drag_coefficient_total": 1158.148697844516, + "downforce_coefficient_total": -90.02822726535804, + "balance_coefficient": 0.0, + "yaw_sensitivity": 0, + "stall_margin": 52.74788911280214, + "performance_consistency": 0 + }, + "flow_characteristics": { + "dynamic_pressure_Pa": 1826.5759116976922, 
+ "corrected_air_density": 1.1836211907801044, + "avg_reynolds_number": 1401938491.934083, + "max_mach_number": 0.15923014110819414, + "flow_attachment": "Excellent attachment", + "ground_effect_utilization": 1.5971795321514364, + "slot_effectiveness": 1.3587871774979856, + "environmental_impact": "Minimal environmental impact" + }, + "elements": [ + { + "element_number": 1, + "chord_length_mm": 421233.0322265625, + "effective_angle_deg": 0.0, + "reynolds_number": 1583825857.8193343, + "mach_number": 0.15923014110819414, + "lift_coefficient": -4.511530785072822, + "drag_coefficient": 20.901114819250584, + "sideforce_coefficient": 0, + "downforce_N": -5797242304.215023, + "drag_N": 26857586218.035564, + "sideforce_N": 0.0, + "moment_Nm": -610497488589.1495, + "ground_effect_factor": 2.199786341542295, + "slot_effect_factor": 1.0, + "slot_efficiency": 1.0, + "slot_velocity_ratio": 1.0, + "camber": 0.15134278858511838, + "thickness_ratio": 0.12894613160622603, + "element_area_m2": 703493.0342038975 + }, + { + "element_number": 2, + "chord_length_mm": 349353.0445098877, + "effective_angle_deg": 0.0, + "reynolds_number": 1313558869.0135434, + "mach_number": 0.15923014110819414, + "lift_coefficient": -13.310432947143163, + "drag_coefficient": 131.35718952482054, + "sideforce_coefficient": 0, + "downforce_N": -14185081047.954899, + "drag_N": 139988863400.65118, + "sideforce_N": 0.0, + "moment_Nm": -1238900312680.638, + "ground_effect_factor": 1.4398626031724806, + "slot_effect_factor": 1.5381807883074037, + "slot_efficiency": 1.1253517471925991e-07, + "slot_velocity_ratio": 1.4000000450140697, + "camber": 0.4032960859172186, + "thickness_ratio": 0.1557197938788581, + "element_area_m2": 583447.6750114952 + }, + { + "element_number": 3, + "chord_length_mm": 347989.17388916016, + "effective_angle_deg": 0.0, + "reynolds_number": 1308430748.9693716, + "mach_number": 0.15923014110819414, + "lift_coefficient": -15.02641795807399, + "drag_coefficient": 278.2852758130172, + 
"sideforce_coefficient": 0, + "downforce_N": -15951308146.058258, + "drag_N": 295413996828.1051, + "sideforce_N": 0.0, + "moment_Nm": -1387720636049.561, + "ground_effect_factor": 1.1518896517395334, + "slot_effect_factor": 1.5381807441865532, + "slot_efficiency": 1.6052280551856116e-09, + "slot_velocity_ratio": 1.4000000006420912, + "camber": 0.5586654125130391, + "thickness_ratio": 0.1563158103709367, + "element_area_m2": 581169.9014091602 + } + ] +} \ No newline at end of file diff --git a/RL/cfd_results/gen000_ind004_cfd_results.json b/RL/cfd_results/gen000_ind004_cfd_results.json new file mode 100644 index 0000000..dcb0f4f --- /dev/null +++ b/RL/cfd_results/gen000_ind004_cfd_results.json @@ -0,0 +1,125 @@ +{ + "timestamp": "20251022_014734", + "generation": 0, + "individual_idx": 4, + "design_parameters": { + "total_span": 1747.9365536896603, + "root_chord": 250.0, + "flap_count": 3, + "endplate_height": 267.2492028444034 + }, + "performance": { + "total_downforce_N": -36590961087.25334, + "total_drag_N": 473440409473.1255, + "total_sideforce_N": 0.0, + "efficiency_ratio": -0.07728736363669102, + "center_of_pressure_m": 0.0, + "pitching_moment_Nm": -3396425578140.3403 + }, + "f1_metrics": { + "downforce_per_drag": -0.07728736363669102, + "downforce_to_weight_ratio": -2486643.6348796017, + "drag_coefficient_total": 1114.9023352482313, + "downforce_coefficient_total": -86.16786220372605, + "balance_coefficient": 0.0, + "yaw_sensitivity": 0, + "stall_margin": 55.26097302624592, + "performance_consistency": 0 + }, + "flow_characteristics": { + "dynamic_pressure_Pa": 1826.5759116976922, + "corrected_air_density": 1.1836211907801044, + "avg_reynolds_number": 1434468489.288918, + "max_mach_number": 0.15923014110819414, + "flow_attachment": "Excellent attachment", + "ground_effect_utilization": 1.4282660724758123, + "slot_effectiveness": 1.40363556902622, + "environmental_impact": "Minimal environmental impact" + }, + "elements": [ + { + "element_number": 1, + 
"chord_length_mm": 403784.0270996094, + "effective_angle_deg": 0.0, + "reynolds_number": 1518218027.001294, + "mach_number": 0.15923014110819414, + "lift_coefficient": -4.254011690578432, + "drag_coefficient": 17.14367968256979, + "sideforce_coefficient": 0, + "downforce_N": -5657636905.142775, + "drag_N": 22800293444.624924, + "sideforce_N": 0.0, + "moment_Nm": -571115853356.4801, + "ground_effect_factor": 2.199777108568047, + "slot_effect_factor": 1.0, + "slot_efficiency": 1.0, + "slot_velocity_ratio": 1.0, + "camber": 0.14303689122562246, + "thickness_ratio": 0.09976891732001532, + "element_area_m2": 728112.8016381338 + }, + { + "element_number": 2, + "chord_length_mm": 428091.4306640625, + "effective_angle_deg": 0.0, + "reynolds_number": 1609613267.5367606, + "mach_number": 0.15923014110819414, + "lift_coefficient": -4.729294470586342, + "drag_coefficient": 19.560429456718612, + "sideforce_coefficient": 0, + "downforce_N": -6668377054.833106, + "drag_N": 27580502712.00228, + "sideforce_N": 0.0, + "moment_Nm": -713668768402.728, + "ground_effect_factor": 1.439887874419898, + "slot_effect_factor": 1.5381807883074037, + "slot_efficiency": 1.1253517471925991e-07, + "slot_velocity_ratio": 1.4000000450140697, + "camber": 0.16267411086635727, + "thickness_ratio": 0.09423218881213599, + "element_area_m2": 771944.4802634418 + }, + { + "element_number": 3, + "chord_length_mm": 330231.8859100342, + "effective_angle_deg": 0.0, + "reynolds_number": 1241663782.1970286, + "mach_number": 0.15923014110819414, + "lift_coefficient": -10.482470974962888, + "drag_coefficient": 123.68689468489498, + "sideforce_coefficient": 0, + "downforce_N": -11401708436.114468, + "drag_N": 134533347522.15344, + "sideforce_N": 0.0, + "moment_Nm": -941301919863.6068, + "ground_effect_factor": 1.151883718073153, + "slot_effect_factor": 1.5381807441865532, + "slot_efficiency": 1.6052280551856116e-09, + "slot_velocity_ratio": 1.4000000006420912, + "camber": 0.40772833637760564, + "thickness_ratio": 
0.12230249505571078, + "element_area_m2": 595481.8603581958 + }, + { + "element_number": 4, + "chord_length_mm": 363932.9299926758, + "effective_angle_deg": 0.0, + "reynolds_number": 1368378880.420589, + "mach_number": 0.15923014110819414, + "lift_coefficient": -10.731037606123397, + "drag_coefficient": 240.70036193299748, + "sideforce_coefficient": 0, + "downforce_N": -12863238691.162996, + "drag_N": 288526265794.34485, + "sideforce_N": 0.0, + "moment_Nm": -1170339036517.5254, + "ground_effect_factor": 0.9215155888421513, + "slot_effect_factor": 1.5381807436109236, + "slot_efficiency": 1.3887943864964021e-11, + "slot_velocity_ratio": 1.400000000005555, + "camber": 0.5241625212097997, + "thickness_ratio": 0.11098022517863568, + "element_area_m2": 656252.3712706771 + } + ] +} \ No newline at end of file diff --git a/RL/cfd_temp_files/individual_3_wing.stl b/RL/cfd_temp_files/individual_3_wing.stl new file mode 100644 index 0000000..687b2d5 Binary files /dev/null and b/RL/cfd_temp_files/individual_3_wing.stl differ diff --git a/RL/cfd_temp_files/individual_4_wing.stl b/RL/cfd_temp_files/individual_4_wing.stl new file mode 100644 index 0000000..8c59851 Binary files /dev/null and b/RL/cfd_temp_files/individual_4_wing.stl differ diff --git a/RL/checkpoints/checkpoint_gen_000.json b/RL/checkpoints/checkpoint_gen_000.json index 7fcc872..f3c9dfe 100644 --- a/RL/checkpoints/checkpoint_gen_000.json +++ b/RL/checkpoints/checkpoint_gen_000.json @@ -1,63 +1,61 @@ { "generation": 0, - "timestamp": "2025-10-10T22:06:27.008495", + "timestamp": "2025-10-22T01:47:34.918330", "config": { - "max_generations": 3, - "population_size": 3, + "max_generations": 1, + "population_size": 5, "neural_network_enabled": true, "save_frequency": 1, - "max_runtime_hours": 1, + "max_runtime_hours": 24, "neural_network": { "learning_rate": 0.001, - "batch_size": 8, - "training_frequency": 10, - "epochs_per_training": 10, + "batch_size": 16, + "training_frequency": 4, + "epochs_per_training": 30, 
"save_best_model": true }, "cfd_analysis": { "enabled": true, - "parallel_processes": 1, + "parallel_processes": 3, "timeout_seconds": 60, - "smart_skipping": true, - "save_results": true, - "results_output_dir": "cfd_results" + "smart_skipping": true }, "early_stopping": { - "patience": 10, + "patience": 30, "min_delta": 0.005, "monitor": "fitness", - "stagnation_threshold": 10, + "stagnation_threshold": 20, "convergence_threshold": 0.001 }, "genetic_algorithm": { "crossover_rate": 0.85, "mutation_rate": 0.8, "elite_ratio": 0.15, - "tournament_size": 3 + "tournament_size": 5 }, "output": { - "save_all_stl": false, + "save_all_stl": true, "save_best_only": true, "generate_reports": true, - "create_visualizations": false + "create_visualizations": true } }, "current_population": [ { - "total_span": 1562.1533373404757, - "root_chord": 273.62357158832845, - "tip_chord": 278.48896342259894, + "total_span": 1600.0, + "root_chord": 250.0, + "tip_chord": 273.28974791521824, "chord_taper_ratio": 0.89, - "sweep_angle": 3.745137198030256, - "dihedral_angle": 2.2760946056221214, + "sweep_angle": 3.7936541922950426, + "dihedral_angle": 2.178665235563215, "twist_distribution_range": [ -1.5, 0.5 ], "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.17793646219085565, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.09676803758655178, + "camber_position": 0.429545483476326, "leading_edge_radius": 2.8, "trailing_edge_thickness": 2.5, "upper_surface_radius": 800, @@ -79,28 +77,28 @@ 120 ], "flap_cambers": [ - 0.15165174117856509, - 0.1183069457274416, - 0.08502357576454594 + 0.11650730925554174, + 0.08779492852048842, + 0.09368265052848934 ], "flap_slot_gaps": [ - 11.997258048354128, - 12.0, - 8.870195544962051 + 12.775178833576852, + 16.42889568935495, + 13.383572526835122 ], "flap_vertical_offsets": [ - 30.68230323781692, - 53.180424513536465, - 63.90613554629242 + 
32.87206503062215, + 42.00449125738019, + 61.698595280765176 ], "flap_horizontal_offsets": [ - 29.540070714602077, - 55.32375420710643, - 99.50073096296768 + 29.491138695433754, + 54.10213591434579, + 105.7687587052077 ], - "endplate_height": 238.8570067579504, - "endplate_max_width": 124.92200103125977, - "endplate_min_width": 32.16327790369894, + "endplate_height": 268.9881271806169, + "endplate_max_width": 100.0, + "endplate_min_width": 47.58740222598861, "endplate_thickness_base": 10, "endplate_forward_lean": 6, "endplate_rearward_sweep": 10, @@ -115,9 +113,9 @@ 35 ], "y250_width": 500, - "y250_step_height": 21.746442059503984, - "y250_transition_length": 86.9019208874379, - "central_slot_width": 30.797237922946643, + "y250_step_height": 21.266846651941208, + "y250_transition_length": 80.0, + "central_slot_width": 30.0, "pylon_count": 2, "pylon_spacing": 320, "pylon_major_axis": 38, @@ -147,23 +145,227 @@ }, { "total_span": 1400.0, - "root_chord": 273.1224753534797, - "tip_chord": 277.97895702874933, - "chord_taper_ratio": 1.0177813329678722, - "sweep_angle": 4.744283375840465, - "dihedral_angle": 2.2760946056221214, + "root_chord": 255.26166209235754, + "tip_chord": 252.01881806247627, + "chord_taper_ratio": 0.9872960004910257, + "sweep_angle": 3.420555644147321, + "dihedral_angle": 2.5923621901128135, "twist_distribution_range": [ -1.5, 0.5 ], "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.08, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, + "max_thickness_ratio": 0.11231791109176714, + "camber_ratio": 0.07524775408938868, + "camber_position": 0.37694417182934187, + "leading_edge_radius": 3.148435979517055, + "trailing_edge_thickness": 2.258706236748169, + "upper_surface_radius": 600.0, + "lower_surface_radius": 1100, + "flap_count": 3, + "flap_spans": [ + 1600, + 1500, + 1400 + ], + "flap_root_chords": [ + 220, + 180, + 140 + ], + "flap_tip_chords": [ + 200, + 160, + 120 + ], + "flap_cambers": [ + 
0.09305113879388308, + 0.12066220177016289, + 0.09368265052848934 + ], + "flap_slot_gaps": [ + 12.775178833576852, + 6.0, + 12.0 + ], + "flap_vertical_offsets": [ + 32.87206503062215, + 42.00449125738019, + 15.0 + ], + "flap_horizontal_offsets": [ + 29.491138695433754, + 54.10213591434579, + 105.7687587052077 + ], + "endplate_height": 330.0, + "endplate_max_width": 109.6140642114693, + "endplate_min_width": 25.51667076279516, + "endplate_thickness_base": 7.878867873986792, + "endplate_forward_lean": 9.83327637391597, + "endplate_rearward_sweep": 8.13564747721745, + "endplate_outboard_wrap": 10.0, + "footplate_extension": 50.0, + "footplate_height": 20.0, + "arch_radius": 130, + "footplate_thickness": 5, + "primary_strake_count": 2, + "strake_heights": [ + 45, + 42.62020661171453 + ], + "y250_width": 500, + "y250_step_height": 15.79859606379967, + "y250_transition_length": 83.10657079873373, + "central_slot_width": 21.445581609267922, + "pylon_count": 2, + "pylon_spacing": 320, + "pylon_major_axis": 38, + "pylon_minor_axis": 25, + "pylon_length": 120, + "cascade_enabled": true, + "primary_cascade_span": 250, + "primary_cascade_chord": 55, + "secondary_cascade_span": 160, + "secondary_cascade_chord": 40, + "wall_thickness_structural": 4.637952357648936, + "wall_thickness_aerodynamic": 2.5, + "wall_thickness_details": 2.327297511883828, + "minimum_radius": 0.4770971100086223, + "mesh_resolution_aero": 0.4, + "mesh_resolution_structural": 0.6, + "resolution_span": 40, + "resolution_chord": 25, + "mesh_density": 1.5, + "surface_smoothing": true, + "material": "Standard Carbon Fiber", + "density": 1600, + "weight_estimate": 4.0, + "target_downforce": 4000, + "target_drag": 40, + "efficiency_factor": 1.0 + }, + { + "total_span": 1600.0, + "root_chord": 250.0, + "tip_chord": 273.28974791521824, + "chord_taper_ratio": 0.89, + "sweep_angle": 2.4035876595100003, + "dihedral_angle": 2.178665235563215, + "twist_distribution_range": [ + -1.5, + 0.5 + ], + "base_profile": 
"NACA_64A010_modified", + "max_thickness_ratio": 0.25, + "camber_ratio": 0.1489322730731213, + "camber_position": 0.429545483476326, + "leading_edge_radius": 1.729552963377614, + "trailing_edge_thickness": 4.0, + "upper_surface_radius": 800, + "lower_surface_radius": 1102.5412334325124, + "flap_count": 3, + "flap_spans": [ + 1600, + 1500, + 1400 + ], + "flap_root_chords": [ + 220, + 180, + 140 + ], + "flap_tip_chords": [ + 200, + 160, + 120 + ], + "flap_cambers": [ + 0.06912889386826897, + 0.14818959339706703, + 0.09368265052848934 + ], + "flap_slot_gaps": [ + 6.0, + 12.0, + 12.0 + ], + "flap_vertical_offsets": [ + 30.17487343362138, + 65.08057832130383, + 99.44184533919058 + ], + "flap_horizontal_offsets": [ + 29.491138695433754, + 54.10213591434579, + 21.27325763455096 + ], + "endplate_height": 268.9881271806169, + "endplate_max_width": 159.24261049582128, + "endplate_min_width": 40.73774221639681, + "endplate_thickness_base": 6.0, + "endplate_forward_lean": 6, + "endplate_rearward_sweep": 5.0, + "endplate_outboard_wrap": 18.650844412112836, + "footplate_extension": 70, + "footplate_height": 38.763267350636156, + "arch_radius": 130, + "footplate_thickness": 3.0, + "primary_strake_count": 2, + "strake_heights": [ + 45, + 46.55734629444335 + ], + "y250_width": 500, + "y250_step_height": 24.60235215991763, + "y250_transition_length": 80.0, + "central_slot_width": 30.0, + "pylon_count": 2, + "pylon_spacing": 320, + "pylon_major_axis": 38, + "pylon_minor_axis": 25, + "pylon_length": 120, + "cascade_enabled": true, + "primary_cascade_span": 250, + "primary_cascade_chord": 55, + "secondary_cascade_span": 160, + "secondary_cascade_chord": 40, + "wall_thickness_structural": 4, + "wall_thickness_aerodynamic": 2.5, + "wall_thickness_details": 2.0, + "minimum_radius": 0.2, + "mesh_resolution_aero": 0.4, + "mesh_resolution_structural": 0.6, + "resolution_span": 40, + "resolution_chord": 25, + "mesh_density": 1.5, + "surface_smoothing": true, + "material": "Standard Carbon 
Fiber", + "density": 1600, + "weight_estimate": 4.0, + "target_downforce": 4000, + "target_drag": 40, + "efficiency_factor": 1.0 + }, + { + "total_span": 1600.0, + "root_chord": 250.0, + "tip_chord": 273.28974791521824, + "chord_taper_ratio": 0.89, + "sweep_angle": 3.7936541922950426, + "dihedral_angle": 2.178665235563215, + "twist_distribution_range": [ + -1.5, + 0.5 + ], + "base_profile": "NACA_64A010_modified", + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.17543127956919433, + "camber_position": 0.429545483476326, "leading_edge_radius": 4.0, - "trailing_edge_thickness": 2.5, - "upper_surface_radius": 965.050412282125, - "lower_surface_radius": 1400.0, + "trailing_edge_thickness": 1.0, + "upper_surface_radius": 800, + "lower_surface_radius": 800.0, "flap_count": 3, "flap_spans": [ 1600, @@ -181,45 +383,45 @@ 120 ], "flap_cambers": [ - 0.07796539597255853, - 0.1183069457274416, - 0.06900798595778183 + 0.18, + 0.08779492852048842, + 0.09368265052848934 ], "flap_slot_gaps": [ - 9.471305987239184, + 12.775178833576852, 12.0, - 8.870195544962051 + 12.0 ], "flap_vertical_offsets": [ - 30.68230323781692, - 49.502546288268945, - 63.90613554629242 + 26.60137545574744, + 42.00449125738019, + 63.870011375826465 ], "flap_horizontal_offsets": [ - 29.540070714602077, - 24.779686364158216, - 85.74443131239319 + 29.491138695433754, + 29.737206707866328, + 38.760642636701625 ], - "endplate_height": 238.8570067579504, - "endplate_max_width": 124.92200103125977, - "endplate_min_width": 32.16327790369894, + "endplate_height": 307.7785657341142, + "endplate_max_width": 101.46239307322571, + "endplate_min_width": 53.18264386673445, "endplate_thickness_base": 10, - "endplate_forward_lean": 6, + "endplate_forward_lean": 7.5018290054689345, "endplate_rearward_sweep": 10, - "endplate_outboard_wrap": 15.6594974456134, - "footplate_extension": 74.42775456090389, - "footplate_height": 20.0, - "arch_radius": 100.0, - "footplate_thickness": 3.0, + "endplate_outboard_wrap": 
25.0, + "footplate_extension": 70, + "footplate_height": 23.543873629375245, + "arch_radius": 130, + "footplate_thickness": 5, "primary_strake_count": 2, "strake_heights": [ - 45.627172833634674, - 42.21135723061897 + 72.65773100055279, + 39.92196977700821 ], "y250_width": 500, - "y250_step_height": 21.746442059503984, - "y250_transition_length": 127.17100671251566, - "central_slot_width": 33.15391975379407, + "y250_step_height": 21.266846651941208, + "y250_transition_length": 98.46047694134059, + "central_slot_width": 30.0, "pylon_count": 2, "pylon_spacing": 320, "pylon_major_axis": 38, @@ -230,10 +432,10 @@ "primary_cascade_chord": 55, "secondary_cascade_span": 160, "secondary_cascade_chord": 40, - "wall_thickness_structural": 3.0, + "wall_thickness_structural": 4.951361509270786, "wall_thickness_aerodynamic": 2.5, - "wall_thickness_details": 1.5, - "minimum_radius": 0.24008962630244005, + "wall_thickness_details": 2.0, + "minimum_radius": 0.8, "mesh_resolution_aero": 0.4, "mesh_resolution_structural": 0.6, "resolution_span": 40, @@ -248,22 +450,22 @@ "efficiency_factor": 1.0 }, { - "total_span": 1490.631704214383, - "root_chord": 202.49135005764532, - "tip_chord": 204.99560134532194, - "chord_taper_ratio": 1.0123672013000244, - "sweep_angle": 3.745137198030256, - "dihedral_angle": 0.4107936053633102, + "total_span": 1400.0, + "root_chord": 350.0, + "tip_chord": 300.0, + "chord_taper_ratio": 1.0931589916608728, + "sweep_angle": 3.7936541922950426, + "dihedral_angle": 2.178665235563215, "twist_distribution_range": [ -1.5, 0.5 ], "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.12237676586232114, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, - "leading_edge_radius": 3.3056446051873483, - "trailing_edge_thickness": 3.412026273210116, + "max_thickness_ratio": 0.20975596282796294, + "camber_ratio": 0.12919750897423762, + "camber_position": 0.25, + "leading_edge_radius": 2.8, + "trailing_edge_thickness": 
2.6706547566099603, "upper_surface_radius": 600.0, "lower_surface_radius": 1100, "flap_count": 3, @@ -283,45 +485,45 @@ 120 ], "flap_cambers": [ - 0.15165174117856509, - 0.06, - 0.0904175138419524 + 0.10131896870872245, + 0.08708306574501162, + 0.09368265052848934 ], "flap_slot_gaps": [ - 11.997258048354128, + 12.775178833576852, 12.0, - 8.870195544962051 + 13.383572526835122 ], "flap_vertical_offsets": [ - 41.296334831139916, - 53.180424513536465, - 93.5754833722736 + 44.3388751300264, + 16.806661721824398, + 61.698595280765176 ], "flap_horizontal_offsets": [ - 38.497596605092326, - 55.32375420710643, - 120.0 - ], - "endplate_height": 205.54602853682118, - "endplate_max_width": 124.92200103125977, - "endplate_min_width": 25.0, - "endplate_thickness_base": 10, - "endplate_forward_lean": 8.93442477801101, - "endplate_rearward_sweep": 12.192425586983738, - "endplate_outboard_wrap": 18, - "footplate_extension": 93.98036937951616, - "footplate_height": 20.460092568940834, + 36.191010460218145, + 32.53279949788835, + 110.531236766067 + ], + "endplate_height": 268.9881271806169, + "endplate_max_width": 80.0, + "endplate_min_width": 32.326903305091065, + "endplate_thickness_base": 6.0, + "endplate_forward_lean": 9.363465545561494, + "endplate_rearward_sweep": 10.817463749167231, + "endplate_outboard_wrap": 17.97554967852302, + "footplate_extension": 50.0, + "footplate_height": 30, "arch_radius": 130, - "footplate_thickness": 5.837105506554803, + "footplate_thickness": 5, "primary_strake_count": 2, "strake_heights": [ - 54.1147961102133, - 35 + 45, + 39.681200855336215 ], "y250_width": 500, - "y250_step_height": 18.163709985453192, - "y250_transition_length": 77.75444598619934, - "central_slot_width": 23.69082314648534, + "y250_step_height": 21.266846651941208, + "y250_transition_length": 80.0, + "central_slot_width": 23.736623964246096, "pylon_count": 2, "pylon_spacing": 320, "pylon_major_axis": 38, @@ -332,8 +534,8 @@ "primary_cascade_chord": 55, 
"secondary_cascade_span": 160, "secondary_cascade_chord": 40, - "wall_thickness_structural": 4, - "wall_thickness_aerodynamic": 2.0, + "wall_thickness_structural": 6.0, + "wall_thickness_aerodynamic": 3.361541050197091, "wall_thickness_details": 1.5, "minimum_radius": 0.4, "mesh_resolution_aero": 0.4, @@ -355,20 +557,20 @@ "generation": 0, "fitness": 70.0, "individual": { - "total_span": 1562.1533373404757, - "root_chord": 273.62357158832845, - "tip_chord": 278.48896342259894, + "total_span": 1600.0, + "root_chord": 250.0, + "tip_chord": 273.28974791521824, "chord_taper_ratio": 0.89, - "sweep_angle": 3.745137198030256, - "dihedral_angle": 2.2760946056221214, + "sweep_angle": 3.7936541922950426, + "dihedral_angle": 2.178665235563215, "twist_distribution_range": [ -1.5, 0.5 ], "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.17793646219085565, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.09676803758655178, + "camber_position": 0.429545483476326, "leading_edge_radius": 2.8, "trailing_edge_thickness": 2.5, "upper_surface_radius": 800, @@ -390,28 +592,28 @@ 120 ], "flap_cambers": [ - 0.15165174117856509, - 0.1183069457274416, - 0.08502357576454594 + 0.11650730925554174, + 0.08779492852048842, + 0.09368265052848934 ], "flap_slot_gaps": [ - 11.997258048354128, - 12.0, - 8.870195544962051 + 12.775178833576852, + 16.42889568935495, + 13.383572526835122 ], "flap_vertical_offsets": [ - 30.68230323781692, - 53.180424513536465, - 63.90613554629242 + 32.87206503062215, + 42.00449125738019, + 61.698595280765176 ], "flap_horizontal_offsets": [ - 29.540070714602077, - 55.32375420710643, - 99.50073096296768 + 29.491138695433754, + 54.10213591434579, + 105.7687587052077 ], - "endplate_height": 238.8570067579504, - "endplate_max_width": 124.92200103125977, - "endplate_min_width": 32.16327790369894, + "endplate_height": 268.9881271806169, + "endplate_max_width": 100.0, 
+ "endplate_min_width": 47.58740222598861, "endplate_thickness_base": 10, "endplate_forward_lean": 6, "endplate_rearward_sweep": 10, @@ -426,9 +628,9 @@ 35 ], "y250_width": 500, - "y250_step_height": 21.746442059503984, - "y250_transition_length": 86.9019208874379, - "central_slot_width": 30.797237922946643, + "y250_step_height": 21.266846651941208, + "y250_transition_length": 80.0, + "central_slot_width": 30.0, "pylon_count": 2, "pylon_spacing": 320, "pylon_major_axis": 38, @@ -463,20 +665,20 @@ "generation": 0, "best_fitness": 70.0, "best_individual": { - "total_span": 1562.1533373404757, - "root_chord": 273.62357158832845, - "tip_chord": 278.48896342259894, + "total_span": 1600.0, + "root_chord": 250.0, + "tip_chord": 273.28974791521824, "chord_taper_ratio": 0.89, - "sweep_angle": 3.745137198030256, - "dihedral_angle": 2.2760946056221214, + "sweep_angle": 3.7936541922950426, + "dihedral_angle": 2.178665235563215, "twist_distribution_range": [ -1.5, 0.5 ], "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.17793646219085565, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.09676803758655178, + "camber_position": 0.429545483476326, "leading_edge_radius": 2.8, "trailing_edge_thickness": 2.5, "upper_surface_radius": 800, @@ -498,28 +700,28 @@ 120 ], "flap_cambers": [ - 0.15165174117856509, - 0.1183069457274416, - 0.08502357576454594 + 0.11650730925554174, + 0.08779492852048842, + 0.09368265052848934 ], "flap_slot_gaps": [ - 11.997258048354128, - 12.0, - 8.870195544962051 + 12.775178833576852, + 16.42889568935495, + 13.383572526835122 ], "flap_vertical_offsets": [ - 30.68230323781692, - 53.180424513536465, - 63.90613554629242 + 32.87206503062215, + 42.00449125738019, + 61.698595280765176 ], "flap_horizontal_offsets": [ - 29.540070714602077, - 55.32375420710643, - 99.50073096296768 + 29.491138695433754, + 54.10213591434579, + 105.7687587052077 ], - 
"endplate_height": 238.8570067579504, - "endplate_max_width": 124.92200103125977, - "endplate_min_width": 32.16327790369894, + "endplate_height": 268.9881271806169, + "endplate_max_width": 100.0, + "endplate_min_width": 47.58740222598861, "endplate_thickness_base": 10, "endplate_forward_lean": 6, "endplate_rearward_sweep": 10, @@ -534,9 +736,9 @@ 35 ], "y250_width": 500, - "y250_step_height": 21.746442059503984, - "y250_transition_length": 86.9019208874379, - "central_slot_width": 30.797237922946643, + "y250_step_height": 21.266846651941208, + "y250_transition_length": 80.0, + "central_slot_width": 30.0, "pylon_count": 2, "pylon_spacing": 320, "pylon_major_axis": 38, @@ -564,14 +766,14 @@ "target_drag": 40, "efficiency_factor": 1.0 }, - "average_fitness": 40.0, - "valid_individuals": 1, - "generation_time": 3.7478139400482178 + "average_fitness": 42.64, + "valid_individuals": 2, + "generation_time": 45.26136302947998 }, "generation_results": [], "pipeline_state": { "current_generation": 0, "total_runtime": 0 }, - "neural_network_checkpoint": "neural_networks\\network_gen_000.pth" + "neural_network_checkpoint": "neural_networks/network_gen_000.pth" } \ No newline at end of file diff --git a/RL/checkpoints/final_summary.json b/RL/checkpoints/final_summary.json index 306d863..1da2375 100644 --- a/RL/checkpoints/final_summary.json +++ b/RL/checkpoints/final_summary.json @@ -1,5 +1,5 @@ { - "total_generations": 2, + "total_generations": 0, "total_designs_generated": 3, "output_directories": { "checkpoints": "checkpoints", @@ -9,10 +9,10 @@ "stl_outputs": "stl_outputs" }, "best_designs_stl": [ - "generation_000_best_design.stl", "generation_001_best_design.stl", + "generation_000_best_design.stl", "generation_002_best_design.stl" ], - "final_population_size": 3, + "final_population_size": 5, "early_stopped": false } \ No newline at end of file diff --git a/RL/checkpoints/summary_gen_000.json b/RL/checkpoints/summary_gen_000.json index df3df10..177dc4c 100644 --- 
a/RL/checkpoints/summary_gen_000.json +++ b/RL/checkpoints/summary_gen_000.json @@ -1,9 +1,9 @@ { "generation": 0, - "timestamp": "2025-10-10T22:06:27.008495", + "timestamp": "2025-10-22T01:47:34.918330", "best_fitness": 70.0, - "average_fitness": 40.0, - "valid_individuals": 1, - "population_size": 3, + "average_fitness": 42.64, + "valid_individuals": 2, + "population_size": 5, "neural_network_enabled": true } \ No newline at end of file diff --git a/RL/config.json b/RL/config.json index 99f0830..d9c5601 100644 --- a/RL/config.json +++ b/RL/config.json @@ -1,5 +1,5 @@ { - "max_generations": 20, + "max_generations": 1, "population_size": 5, "neural_network_enabled": true, "save_frequency": 1, diff --git a/RL/distributed_pipeline.py b/RL/distributed_pipeline.py new file mode 100644 index 0000000..348b483 --- /dev/null +++ b/RL/distributed_pipeline.py @@ -0,0 +1,1348 @@ +import gc +import os +import json +import time +import torch +import torch.distributed as dist +import torch.multiprocessing as mp +from torch.nn.parallel import DistributedDataParallel as DDP +from torch.utils.data import DataLoader, DistributedSampler +import logging +from datetime import datetime +from typing import Dict, List, Any, Optional +from dataclasses import asdict +from tqdm import tqdm + +from alphadesign import load_base_parameters +from formula_constraints import F1FrontWingParams, F1FrontWingAnalyzer +from genetic_algo_components.initialize_population import F1PopulInit +from genetic_algo_components.fitness_evaluation import FitnessEval +from genetic_algo_components.crossover_ops import CrossoverOps +from genetic_algo_components.mutation_strategy import F1MutationOperator +from neural_network_components.forward_pass import NeuralNetworkForwardPass +from neural_network_components.network_initialization import NetworkInitializer +from neural_network_components.optimizer_integration import OptimizerManager +from neural_network_components.loss_calculation import AlphaDesignLoss +from 
neural_network_components.parameter_tweaking import ParamterTweaker +from wing_generator import UltraRealisticF1FrontWingGenerator + +class AlphaDesignPipeline: + def __init__(self, config_path: str = "config.json", rank: int = -1, world_size: int = 1): + self.config = self.load_config(config_path) + + # distributed training parameters + self.rank = rank + self.world_size = world_size + self.is_distributed = rank >= 0 + self.is_main_process = rank <= 0 # rank -1 (non-distributed) or rank 0 (main process) + + self.setup_logging() + self.setup_directories() + + self.current_generation = 0 + self.best_designs_history = [] + self.training_metrics = [] + self.neural_network = None + self.optimizer_manager = None + self.generation_results = [] + + # #as of now not using early stopping but will be used in future + # from early_stopping import EarlyStoppingManager # You need to create this + # self.early_stopping = EarlyStoppingManager( + # patience=self.config['early_stopping']['patience'], + # min_delta=self.config['early_stopping']['min_delta'], + # restore_best_weights=True + # ) + + #so this time took help of perplexity to write better logs and print statements + if self.is_main_process: + print("🚀 AlphaDesign Pipeline Initialized") + print(f"📊 Max Generations: {self.config['max_generations']}") + print(f"🧬 Population Size: {self.config['population_size']}") + print(f"🧠 Neural Network Enabled: {self.config['neural_network_enabled']}") + if self.is_distributed: + print(f"🌐 Distributed Training: {self.world_size} processes") + + @staticmethod + def setup_distributed(rank: int, world_size: int, backend: str = 'nccl'): + """ + initialize distributed process group for multi-gpu training + + args: + rank: unique identifier for each process (0 to world_size-1) + world_size: total number of processes participating in training + backend: communication backend ('nccl' for gpu, 'gloo' for cpu) + """ + os.environ['MASTER_ADDR'] = os.environ.get('MASTER_ADDR', 'localhost') + 
os.environ['MASTER_PORT'] = os.environ.get('MASTER_PORT', '12355') + + # initialize the process group + dist.init_process_group(backend=backend, rank=rank, world_size=world_size) + + # set device for this process + torch.cuda.set_device(rank) + + print(f"✅ Process {rank}/{world_size} initialized on GPU {rank}") + + @staticmethod + def cleanup_distributed(): + """ + cleanup distributed process group after training + """ + if dist.is_initialized(): + dist.destroy_process_group() + print("🧹 Distributed process group destroyed") + + def load_config(self, config_path: str): + default_config = { + "max_generations": 50, + "population_size": 20, + "neural_network_enabled": True, + "save_frequency": 5, + "max_runtime_hours": 24, + "neural_network": { + "learning_rate": 1e-3, + "batch_size": 16, + "training_frequency": 3 + }, + "cfd_analysis": { + "enabled": True, + "parallel_processes": 2 + } + } + + if os.path.exists(config_path): + with open(config_path, 'r') as f: + user_config = json.load(f) + default_config.update(user_config) + + return default_config + + def setup_logging(self): + log_dir = "logs" + os.makedirs(log_dir, exist_ok=True) + + # Create handlers with UTF-8 encoding to support emoji characters + file_handler = logging.FileHandler( + f"{log_dir}/alphadesign_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log", + encoding='utf-8' + ) + stream_handler = logging.StreamHandler() + + # Set UTF-8 encoding for console output on Windows + import sys + if hasattr(sys.stdout, 'reconfigure'): + try: + sys.stdout.reconfigure(encoding='utf-8') + sys.stderr.reconfigure(encoding='utf-8') + except Exception: + # If reconfigure fails, we'll use ASCII-safe logging + pass + + # Configure logging + logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', + handlers=[file_handler, stream_handler] + ) + self.logger = logging.getLogger("AlphaDesign") + + def setup_directories(self): + self.output_dirs = { + 'checkpoints': 
'checkpoints', + 'best_designs': 'best_designs', + 'neural_networks': 'neural_networks', + 'generation_data': 'generation_data', + 'stl_outputs': 'stl_outputs' + } + + for dir_name, dir_path in self.output_dirs.items(): + os.makedirs(dir_path, exist_ok=True) + # Verify directory was created + if not os.path.exists(dir_path): + self.logger.error(f"Failed to create directory: {dir_path}") + raise RuntimeError(f"Cannot create required directory: {dir_path}") + else: + self.logger.info(f"✅ Directory ready: {dir_path}") + + def run_complete_pipeline(self, base_params: F1FrontWingParams): + + self.logger.info("🏁 Starting AlphaDesign Complete Pipeline") + start_time = time.time() + + try: + # phase 1: Initialize components + results = self.initialize_pipeline_components(base_params) + + # phase 2: Main optimization loop + results.update(self.run_optimization_loop(base_params)) + + # phase 3: Final analysis and cleanup + results.update(self.finalize_pipeline()) + + total_time = time.time() - start_time + self.logger.info(f"✅ Pipeline completed in {total_time/3600:.2f} hours") + + return results + + except Exception as e: + self.logger.error(f"💥 Pipeline failed: {str(e)}") + raise + + def initialize_pipeline_components(self, base_params: F1FrontWingParams): + + self.logger.info("🔧 Initializing Pipeline Components") + + # 1. genetic algo components + self.population_init = F1PopulInit(base_params, self.config['population_size']) + + # Initialize fitness evaluator with CFD results directory from config + cfd_config = self.config.get('cfd_analysis', {}) + cfd_results_dir = cfd_config.get('results_output_dir', 'cfd_results') + self.fitness_eval = FitnessEval(cfd_results_dir=cfd_results_dir) + + self.crossover_ops = CrossoverOps() + self.mutation_ops = F1MutationOperator() + + # 2. neural network components (if enabled) + if self.config['neural_network_enabled']: + self.setup_neural_network(base_params) + + # 3. 
initialize population + self.current_population = self.population_init.create_initial_population() + + self.logger.info(f"✅ Components initialized. Population size: {len(self.current_population)}") + + return {"initialization": "success", "population_size": len(self.current_population)} + + def setup_neural_network(self, base_params: F1FrontWingParams): + param_dict = asdict(base_params) + scalar_params = sum(1 for v in param_dict.values() if isinstance(v, (int, float))) + list_params = sum(len(v) for v in param_dict.values() if isinstance(v, list)) + param_count = scalar_params + list_params + + # determine device based on distributed setup + if self.is_distributed: + device = torch.device(f'cuda:{self.rank}') + else: + device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') + + self.neural_network, total_params = NetworkInitializer.setup_network( + param_count, + device=device, + hidden_dim=min(512, max(256, param_count * 4)), + depth=3 + ) + + # wrap model with ddp for distributed training + if self.is_distributed: + self.neural_network = DDP( + self.neural_network, + device_ids=[self.rank], + output_device=self.rank, + find_unused_parameters=True # useful for complex models(need to research on this more) + ) + if self.is_main_process: + self.logger.info(f"🌐 Model wrapped with DistributedDataParallel") + + # initialize optimizer with the wrapped model + model_for_optimizer = self.neural_network.module if self.is_distributed else self.neural_network + self.optimizer_manager = OptimizerManager( + model_for_optimizer, + learning_rate=self.config['neural_network']['learning_rate'] + ) + + self.optimizer_manager.use_adamw_cosine( + t0=10, + t_mult=2, + lr=2e-4, #lower learning rate for stability + weight_decay=1e-3 + ) + + self.loss_calculator = AlphaDesignLoss() + self.param_tweaker = ParamterTweaker() + + if self.is_main_process: + self.logger.info(f"🧠 Neural Network initialized: {total_params} parameters") + if self.is_distributed: + 
self.logger.info(f"🌐 Distributed training on {self.world_size} GPUs") + + def run_optimization_loop(self, base_params: F1FrontWingParams): + self.logger.info("🔄 Starting Optimization Loop") + + generation_results = [] + max_runtime = self.config['max_runtime_hours'] * 3600 + start_time = time.time() + + # Main progress bar for generations + generation_pbar = tqdm( + total=self.config['max_generations'], + desc="🧬 Generations", + unit="gen", + position=0, + leave=True, + bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}]' + ) + + for generation in range(self.config['max_generations']): + self.current_generation = generation + + if time.time() - start_time > max_runtime: + self.logger.info(f"⏰ Runtime limit reached ({self.config['max_runtime_hours']} hours)") + break + + generation_pbar.set_description(f"🧬 Gen {generation + 1}") + + gen_result = self.run_single_generation(base_params) + generation_results.append(gen_result) + + generation_pbar.set_postfix({ + 'Best': f"{gen_result['best_fitness']:.2f}", + 'Avg': f"{gen_result['average_fitness']:.2f}", + 'Valid': f"{gen_result['valid_individuals']}" + }) + generation_pbar.update(1) + + if generation % self.config['save_frequency'] == 0: + self.save_checkpoint(generation, gen_result) + self.cleanup_generation() + + # Use extended training instead of basic training + if self.config['neural_network_enabled'] and generation % self.config['neural_network']['training_frequency'] == 0: + # Use extended training with curriculum learning + self.train_neural_network_extended(generation_results[-5:], generation) # Use more recent generations + + generation_pbar.close() + + # Final cleanup after optimization loop + self.cleanup_generation() + + return { + "total_generations": len(generation_results), + "generation_results": generation_results, + } + + def run_single_generation(self, base_params: F1FrontWingParams): + generation_start = time.time() + + # Set generation number for CFD result tracking 
+ self.fitness_eval.set_generation(self.current_generation) + + # only show progress bar on main process + show_progress = self.is_main_process + + # Progress bar for population evaluation + eval_pbar = None + if show_progress: + eval_pbar = tqdm( + total=len(self.current_population), + desc="📊 Evaluating Population", + unit="individual", + position=1, + leave=False + ) + + # Modified fitness evaluation with progress + if show_progress: + fitness_scores = self.fitness_eval.evaluate_pop_with_progress( + self.current_population, eval_pbar + ) + eval_pbar.close() + else: + fitness_scores = self.fitness_eval.evaluate_pop(self.current_population) + + # synchronize fitness scores across all processes if distributed + if self.is_distributed: + # gather fitness scores from all processes to main process + gathered_scores = [None] * self.world_size + dist.all_gather_object(gathered_scores, fitness_scores) + if self.is_main_process: + # flatten the gathered scores + fitness_scores = [score for scores in gathered_scores for score in scores] + + # 2. 
find the best individual, so we can save and see how the design evolves + valid_scores = [score for score in fitness_scores if isinstance(score, dict) and score.get('valid', False)] + + # Fix: Handle case when no valid individuals exist + if not valid_scores: + if self.is_main_process: + self.logger.warning("⚠️ No valid individuals in population - creating recovery population") + + # Create recovery individuals with looser constraints + recovery_population = [] + from genetic_algo_components.initialize_population import F1PopulInit + + pop_init = F1PopulInit(base_params, min(5, len(self.current_population))) + recovery_individuals = pop_init.create_initial_population() + + # Add some variation to recovery individuals + for individual in recovery_individuals: + # Apply conservative mutations to ensure validity + individual['max_thickness_ratio'] = max(0.08, min(0.25, individual.get('max_thickness_ratio', 0.15))) + individual['camber_ratio'] = max(0.04, min(0.18, individual.get('camber_ratio', 0.08))) + individual['total_span'] = max(1400, min(1800, individual.get('total_span', 1600))) + recovery_population.append(individual) + + # Replace worst individuals with recovery individuals + num_to_replace = min(len(recovery_population), len(self.current_population)) + self.current_population[-num_to_replace:] = recovery_population[:num_to_replace] + + # Re-evaluate with recovery population + if show_progress: + eval_pbar = tqdm( + total=len(self.current_population), + desc="📊 Re-evaluating Population", + unit="individual", + position=1, + leave=False + ) + fitness_scores = self.fitness_eval.evaluate_pop_with_progress( + self.current_population, eval_pbar + ) + eval_pbar.close() + else: + fitness_scores = self.fitness_eval.evaluate_pop(self.current_population) + + valid_scores = [score for score in fitness_scores if isinstance(score, dict) and score.get('valid', False)] + + if not valid_scores: + # If still no valid individuals, use constraint compliance as fallback + 
valid_scores = [score for score in fitness_scores if isinstance(score, dict) and score.get('constraint_compliance', 0) > 0.3] + + if valid_scores: + best_score = max(valid_scores, key=lambda x: x.get('total_fitness', -1000)) + best_fitness = best_score['total_fitness'] + best_individual = self.current_population[fitness_scores.index(best_score)] + + # Save best design (only on main process) + if self.is_main_process and self.current_generation % self.config['save_frequency'] == 0: + self.save_best_design_stl(best_individual, self.current_generation) + + self.best_designs_history.append({ + 'generation': self.current_generation, + 'fitness': best_fitness, + 'individual': best_individual.copy() + }) + else: + # Absolute fallback - use best constraint compliance + best_score = max(fitness_scores, key=lambda x: x.get('constraint_compliance', 0) if isinstance(x, dict) else 0) + best_fitness = best_score.get('constraint_compliance', 0) * 50 # Scale up constraint compliance + best_individual = self.current_population[fitness_scores.index(best_score)] + + if self.is_main_process: + self.logger.warning(f"⚠️ Using constraint compliance fallback: {best_fitness:.2f}") + + gen_pbar = None + if show_progress: + gen_pbar = tqdm( + total=len(self.current_population), + desc="🔄 Creating Next Generation", + unit="individual", + position=1, + leave=False + ) + + new_population = self.generate_next_population_with_progress(gen_pbar) + if show_progress: + gen_pbar.close() + + # 5. 
neural network guidance, to be specific by the policy network and its output + if self.config['neural_network_enabled']: + nn_pbar = None + if show_progress: + nn_pbar = tqdm( + total=len(new_population), + desc="🧠 Neural Network Guidance", + unit="individual", + position=1, + leave=False + ) + new_population = self.apply_neural_guidance_with_progress(new_population, nn_pbar) + if show_progress: + nn_pbar.close() + + # synchronize new population across all processes + if self.is_distributed: + # broadcast new population from main process to all other processes + new_population = self.broadcast_population(new_population) + + self.current_population = new_population + + generation_time = time.time() - generation_start + + result = { + "generation": self.current_generation, + "best_fitness": best_fitness, + "best_individual": best_individual, + "average_fitness": sum(score.get('total_fitness', -1000) if isinstance(score, dict) else -1000 for score in fitness_scores) / len(fitness_scores), + "valid_individuals": len(valid_scores), + "generation_time": generation_time + } + + if self.is_main_process: + self.logger.info(f"✅ Generation {self.current_generation}: Best={best_fitness:.2f}, Avg={result['average_fitness']:.2f}, Time={generation_time:.1f}s") + + # Memory cleanup at end of generation (gc is already imported at module level) + # Clear GPU memory if using CUDA + if torch.cuda.is_available(): + torch.cuda.empty_cache() + + # Force garbage collection + gc.collect() + + return result + + def broadcast_population(self, population: List[Dict[str, Any]]) -> List[Dict[str, Any]]: + """ + broadcast population from main process to all other processes + """ + if not self.is_distributed: + return population + + # broadcast the full population list from the main process (rank 0) to all ranks + pop_list = [population] if self.is_main_process else [None] + dist.broadcast_object_list(pop_list, src=0) + + return pop_list[0] + + def save_best_design_stl(self, individual: Dict[str, Any], generation: int): + try: + required_params = ['total_span', 'root_chord', 'tip_chord', 'flap_count']
+ for param in required_params: + if param not in individual: + self.logger.error(f"❌ Missing required parameter: {param}") + return + + params = F1FrontWingParams(**individual) + + wing_generator = UltraRealisticF1FrontWingGenerator(**individual) + stl_filename = f"generation_{generation:03d}_best_design.stl" + stl_path = os.path.join(self.output_dirs['stl_outputs'], stl_filename) + + wing_mesh = wing_generator.generate_complete_wing(stl_filename) + + import shutil + generated_path = os.path.join("f1_wing_output", stl_filename) + if os.path.exists(generated_path): + shutil.copy2(generated_path, stl_path) + self.logger.info(f"💾 Best design saved: {stl_path}") + + json_path = stl_path.replace('.stl', '_params.json') + with open(json_path, 'w') as f: + json.dump(individual, f, indent=2) + + except Exception as e: + self.logger.error(f"❌ Failed to save STL: {str(e)}") + + def generate_next_population_with_progress(self, pbar: tqdm) -> List[Dict[str, Any]]: + new_population = [] + + # Elite selection + elite_count = max(1, len(self.current_population) // 5) + fitness_scores = self.fitness_eval.evaluate_pop(self.current_population) + + population_with_fitness = list(zip(self.current_population, fitness_scores)) + population_with_fitness.sort(key=lambda x: x[1].get('total_fitness', -1000) if isinstance(x[1], dict) else -1000, reverse=True) + + # Add elites + for i in range(elite_count): + new_population.append(population_with_fitness[i][0]) + pbar.update(1) + + # Generate offspring + while len(new_population) < len(self.current_population): + parent1 = self.tournament_selection(population_with_fitness) + parent2 = self.tournament_selection(population_with_fitness) + + child1, child2 = self.crossover_ops.f1_aero_crossover(parent1, parent2) + child1 = self.mutation_ops.f1_wing_mutation(child1) + child2 = self.mutation_ops.f1_wing_mutation(child2) + + new_population.extend([child1, child2]) + pbar.update(min(2, len(self.current_population) - len(new_population) + 2)) + + 
return new_population[:len(self.current_population)] + + def tournament_selection(self, population_with_fitness: List, tournament_size: int = 3): + import random + + tournament = random.sample(population_with_fitness, min(tournament_size, len(population_with_fitness))) + winner = max(tournament, key=lambda x: x[1].get('total_fitness', -1000) if isinstance(x[1], dict) else -1000) + return winner[0] + + def individual_to_tensor(self, individual: Dict[str, Any]): + values = [] + + scalar_params = [ + 'total_span', 'root_chord', 'tip_chord', 'chord_taper_ratio', + 'sweep_angle', 'dihedral_angle', 'max_thickness_ratio', + 'camber_ratio', 'camber_position', 'leading_edge_radius', + 'trailing_edge_thickness', 'upper_surface_radius', 'lower_surface_radius', + 'endplate_height', 'endplate_max_width', 'endplate_min_width', + 'endplate_thickness_base', 'endplate_forward_lean', 'endplate_rearward_sweep', + 'endplate_outboard_wrap', 'footplate_extension', 'footplate_height', + 'arch_radius', 'footplate_thickness', 'primary_strake_count', + 'y250_width', 'y250_step_height', 'y250_transition_length', + 'central_slot_width', 'pylon_count', 'pylon_spacing', + 'pylon_major_axis', 'pylon_minor_axis', 'pylon_length', + 'primary_cascade_span', 'primary_cascade_chord', + 'secondary_cascade_span', 'secondary_cascade_chord', + 'wall_thickness_structural', 'wall_thickness_aerodynamic', + 'wall_thickness_details', 'minimum_radius', 'mesh_resolution_aero', + 'mesh_resolution_structural', 'resolution_span', 'resolution_chord', + 'mesh_density', 'density', 'weight_estimate', + 'target_downforce', 'target_drag', 'efficiency_factor' + ] + + for param in scalar_params: + if param in individual: + values.append(float(individual[param])) + + list_params = [ + 'twist_distribution_range', 'flap_spans', 'flap_root_chords', + 'flap_tip_chords', 'flap_cambers', 'flap_slot_gaps', + 'flap_vertical_offsets', 'flap_horizontal_offsets', 'strake_heights' + ] + + for param in list_params: + if param in 
individual and isinstance(individual[param], list): + values.extend([float(x) for x in individual[param]]) + + return torch.tensor(values, dtype=torch.float32) + + def tensor_to_individual(self, tensor: torch.Tensor, template: Dict[str, Any]): + individual = template.copy() + tensor_values = tensor.cpu().numpy().tolist() + + idx = 0 + + scalar_params = [ + 'total_span', 'root_chord', 'tip_chord', 'chord_taper_ratio', + 'sweep_angle', 'dihedral_angle', 'max_thickness_ratio', + 'camber_ratio', 'camber_position', 'leading_edge_radius', + 'trailing_edge_thickness', 'upper_surface_radius', 'lower_surface_radius', + 'endplate_height', 'endplate_max_width', 'endplate_min_width', + 'endplate_thickness_base', 'endplate_forward_lean', 'endplate_rearward_sweep', + 'endplate_outboard_wrap', 'footplate_extension', 'footplate_height', + 'arch_radius', 'footplate_thickness', 'primary_strake_count', + 'y250_width', 'y250_step_height', 'y250_transition_length', + 'central_slot_width', 'pylon_count', 'pylon_spacing', + 'pylon_major_axis', 'pylon_minor_axis', 'pylon_length', + 'primary_cascade_span', 'primary_cascade_chord', + 'secondary_cascade_span', 'secondary_cascade_chord', + 'wall_thickness_structural', 'wall_thickness_aerodynamic', + 'wall_thickness_details', 'minimum_radius', 'mesh_resolution_aero', + 'mesh_resolution_structural', 'resolution_span', 'resolution_chord', + 'mesh_density', 'density', 'weight_estimate', + 'target_downforce', 'target_drag', 'efficiency_factor' + ] + + for param in scalar_params: + if param in individual and idx < len(tensor_values): + individual[param] = tensor_values[idx] + idx += 1 + + list_params = [ + 'twist_distribution_range', 'flap_spans', 'flap_root_chords', + 'flap_tip_chords', 'flap_cambers', 'flap_slot_gaps', + 'flap_vertical_offsets', 'flap_horizontal_offsets', 'strake_heights' + ] + + for param in list_params: + if param in individual and isinstance(individual[param], list): + param_length = len(individual[param]) + if idx + 
param_length <= len(tensor_values): + individual[param] = tensor_values[idx:idx + param_length] + idx += param_length + + return individual + + def apply_neural_guidance_with_progress(self, population: List[Dict[str, Any]], pbar: Optional[tqdm]): + if self.neural_network is None: + if pbar: + pbar.update(len(population)) + return population + + model = self.neural_network.module if self.is_distributed else self.neural_network + device = next(model.parameters()).device + + try: + guided_population = [] + + for individual in population: + param_tensor = self.individual_to_tensor(individual) + param_tensor = param_tensor.to(device) + + if param_tensor.shape[0] != model.param_count: + if pbar: + pbar.set_postfix({'Status': f'Shape mismatch: {param_tensor.shape[0]} vs {model.param_count}'}) + guided_population.append(individual) + if pbar: + pbar.update(1) + continue + + if param_tensor.dim() == 1: + param_tensor = param_tensor.unsqueeze(0) + + # Get neural network predictions (use the unwrapped module for inference, consistent with the param_count/device access above) + with torch.no_grad(): + policy_output, value_output = model(param_tensor) + + # Apply parameter tweaks + guided_tensor = self.param_tweaker.apply_neural_tweaks( + param_tensor, policy_output, exploration=True + ) + + # Convert back to individual dict with proper mapping + guided_individual = self.tensor_to_individual(guided_tensor.squeeze(), individual) + + guided_population.append(guided_individual) + if pbar: + pbar.update(1) + + return guided_population + + except Exception as e: + if self.is_main_process: + self.logger.warning(f"⚠️ Neural guidance failed: {str(e)}") + if pbar: + pbar.update(len(population)) + return population + + def cleanup_generation(self): + if torch.cuda.is_available(): + torch.cuda.empty_cache() + gc.collect() + + import matplotlib.pyplot as plt + plt.close('all') + + import psutil + memory_percent = psutil.virtual_memory().percent + if memory_percent > 80: + self.logger.warning(f"⚠️ Memory usage high: {memory_percent}%") + + def 
cleanup_old_checkpoints(self, keep_count: int = 10): + + try: + checkpoint_dir = self.output_dirs['checkpoints'] + + checkpoint_files = [ + f for f in os.listdir(checkpoint_dir) + if f.startswith('checkpoint_gen_') and f.endswith('.json') + ] + + if len(checkpoint_files) <= keep_count: + return + + checkpoint_files.sort(key=lambda x: int(x.split('_')[2].split('.')[0])) + + files_to_remove = checkpoint_files[:-keep_count] + + for filename in files_to_remove: + file_path = os.path.join(checkpoint_dir, filename) + os.remove(file_path) + + gen_num = filename.split('_')[2].split('.')[0] + + summary_path = os.path.join(checkpoint_dir, f'summary_gen_{gen_num}.json') + if os.path.exists(summary_path): + os.remove(summary_path) + + nn_path = os.path.join(self.output_dirs['neural_networks'], f'network_gen_{gen_num}.pth') + if os.path.exists(nn_path): + os.remove(nn_path) + + if files_to_remove: + self.logger.info(f"🧹 Cleaned up {len(files_to_remove)} old checkpoints") + + except Exception as e: + self.logger.warning(f"⚠️ Checkpoint cleanup failed: {str(e)}") + + def save_checkpoint(self, generation: int, gen_result: Dict[str, Any]): + # only save on main process + if not self.is_main_process: + return + + try: + self.logger.info(f"💾 Saving checkpoint for generation {generation}...") + + checkpoint = { + 'generation': generation, + 'timestamp': datetime.now().isoformat(), + 'config': self.config, + 'current_population': self.current_population, + 'best_designs_history': self.best_designs_history, + 'training_metrics': self.training_metrics, + 'generation_result': gen_result, + 'generation_results' : self.generation_results, + 'pipeline_state': { + 'current_generation': self.current_generation, + 'total_runtime': time.time() - self.pipeline_start_time if hasattr(self, 'pipeline_start_time') else 0 + }, + 'distributed_config': { + 'world_size': self.world_size, + 'is_distributed': self.is_distributed + } + } + + if self.neural_network is not None: + nn_checkpoint_path = 
os.path.join( + self.output_dirs['neural_networks'], + f'network_gen_{generation:03d}.pth' + ) + + # save the unwrapped model state dict + model_to_save = self.neural_network.module if self.is_distributed else self.neural_network + + torch.save({ + 'model_state_dict': model_to_save.state_dict(), + 'optimizer_state_dict': self.optimizer_manager.get_optimizer().state_dict() if self.optimizer_manager else None, + 'generation': generation, + 'total_params': sum(p.numel() for p in model_to_save.parameters()) + }, nn_checkpoint_path) + + checkpoint['neural_network_checkpoint'] = nn_checkpoint_path + self.logger.info(f"🧠 Neural network saved: {nn_checkpoint_path}") + + checkpoint_filename = f'checkpoint_gen_{generation:03d}.json' + checkpoint_path = os.path.join(self.output_dirs['checkpoints'], checkpoint_filename) + + def json_serializer(obj): + if isinstance(obj, torch.Tensor): + return obj.tolist() + elif hasattr(obj, '__dict__'): + return str(obj) + return str(obj) + + with open(checkpoint_path, 'w') as f: + json.dump(checkpoint, f, indent=2, default=json_serializer) + + summary_checkpoint = { + 'generation': generation, + 'timestamp': checkpoint['timestamp'], + 'best_fitness': gen_result.get('best_fitness', -1000), + 'average_fitness': gen_result.get('average_fitness', -1000), + 'valid_individuals': gen_result.get('valid_individuals', 0), + 'population_size': len(self.current_population), + 'neural_network_enabled': self.config['neural_network_enabled'] + } + + summary_path = os.path.join(self.output_dirs['checkpoints'], f'summary_gen_{generation:03d}.json') + with open(summary_path, 'w') as f: + json.dump(summary_checkpoint, f, indent=2) + + self.cleanup_old_checkpoints() + + if gen_result.get('best_fitness', -1000) > -1000: + self.best_designs_history.append({ + 'generation': generation, + 'fitness': gen_result['best_fitness'], + 'individual': gen_result.get('best_individual', {}), + 'timestamp': datetime.now().isoformat() + }) + + self.logger.info(f"✅ Checkpoint 
saved successfully: {checkpoint_path}") + + checkpoint_size = os.path.getsize(checkpoint_path) / (1024 * 1024) # MB + self.logger.info(f"📊 Checkpoint size: {checkpoint_size:.2f} MB") + + except Exception as e: + self.logger.error(f"❌ Failed to save checkpoint: {str(e)}") + + def train_neural_network_extended(self, recent_generations: List[Dict[str, Any]], generation: int): + + if not recent_generations or self.neural_network is None: + return + + try: + # get the actual model for parameter access + model = self.neural_network.module if self.is_distributed else self.neural_network + + # Adaptive epochs based on generation + if generation < 30: + epochs = 40 # More training early on + elif generation < 80: + epochs = 25 # Standard training + else: + epochs = 15 # Fine-tuning later + + # Curriculum learning - focus on different aspects over time + if generation < 40: + # Early: Focus on constraint compliance + weight_constraints = 0.6 + weight_performance = 0.4 + training_phase = "Constraint Focus" + elif generation < 80: + # Middle: Balance constraints and performance + weight_constraints = 0.4 + weight_performance = 0.6 + training_phase = "Balanced Training" + else: + # Late: Focus on optimization + weight_constraints = 0.2 + weight_performance = 0.8 + training_phase = "Performance Focus" + + if self.is_main_process: + self.logger.info(f"📚 Curriculum Phase: {training_phase} (Epochs: {epochs})") + + # Progress bar for extended training (only on main process) + train_pbar = None + if self.is_main_process: + train_pbar = tqdm( + total=epochs, + desc=f"🧠 Extended Training ({training_phase})", + unit="epoch", + position=1, + leave=False + ) + + # Training loop with curriculum weights + optimizer = self.optimizer_manager.get_optimizer() + device = next(model.parameters()).device + + for epoch in range(epochs): + epoch_loss = 0 + batch_count = 0 + + # set model to training mode + self.neural_network.train() + + for gen_data in recent_generations: + param_tensor = 
self.param_tweaker.genetic_to_neural_params( + [list(gen_data['best_individual'].values())] + ) + + param_tensor = param_tensor.to(device) + + policy_output, value_output = self.neural_network(param_tensor) + + constraint_loss = self.calculate_constraint_loss(gen_data, value_output) + performance_loss = self.calculate_performance_loss(gen_data, value_output) + + total_loss = (weight_constraints * constraint_loss + + weight_performance * performance_loss) + + optimizer.zero_grad() + total_loss.backward() + + # gradient clipping + torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) + + optimizer.step() + + epoch_loss += total_loss.item() + batch_count += 1 + + # synchronize loss across all processes if distributed + if self.is_distributed: + epoch_loss_tensor = torch.tensor(epoch_loss, device=device) + dist.all_reduce(epoch_loss_tensor, op=dist.ReduceOp.SUM) + epoch_loss = epoch_loss_tensor.item() / self.world_size + + avg_epoch_loss = epoch_loss / max(batch_count, 1) + + if train_pbar: + train_pbar.set_postfix({ + 'Loss': f"{avg_epoch_loss:.4f}", + 'Phase': training_phase[:8] + }) + train_pbar.update(1) + + if epoch % 10 == 0 and self.is_main_process: + self.logger.info(f"🧠 Epoch {epoch}/{epochs}, Avg Loss: {avg_epoch_loss:.4f}") + + if avg_epoch_loss < 1e-6: + if self.is_main_process: + self.logger.info(f"🛑 Training converged early at epoch {epoch}") + break + + if train_pbar: + train_pbar.close() + + if self.is_main_process: + self.logger.info(f"✅ Extended neural network training completed - {epochs} epochs") + self.save_training_metrics(generation, epoch_loss, training_phase) + + # synchronize all processes after training + if self.is_distributed: + dist.barrier() + + self.cleanup_generation() + + except Exception as e: + if self.is_main_process: + self.logger.error(f"❌ Extended neural network training failed: {str(e)}") + + def calculate_constraint_loss(self, gen_data: Dict[str, Any], value_output: torch.Tensor): + try: + best_individual = 
gen_data['best_individual'] + + if isinstance(best_individual, list): + best_individual = best_individual[0] if len(best_individual) > 0 else {} + + params = F1FrontWingParams(**best_individual) + analyzer = F1FrontWingAnalyzer(params) + constraint_results = analyzer.run_complete_analysis() + + compliance_value = float(constraint_results['overall_compliance']) + target_compliance = torch.tensor([compliance_value], + dtype=torch.float32, + device=value_output.device) + + constraint_loss = torch.nn.functional.mse_loss(value_output, target_compliance) + + return constraint_loss + + except Exception as e: + self.logger.warning(f"⚠️ Constraint loss calculation failed: {e}") + return torch.tensor(0.01, dtype=torch.float32, requires_grad=True, device=value_output.device) + + def calculate_performance_loss(self, gen_data: Dict[str, Any], value_output: torch.Tensor): + try: + fitness_score = gen_data['best_fitness'] + + if isinstance(fitness_score, list): + fitness_score = fitness_score[0] if len(fitness_score) > 0 else 0.0 + + fitness_score = float(fitness_score) + normalized_fitness = min(1.0, max(0.0, fitness_score / 100.0)) + + target_performance = torch.tensor([normalized_fitness], + dtype=torch.float32, + device=value_output.device) + + performance_loss = torch.nn.functional.mse_loss(value_output, target_performance) + + return performance_loss + + except Exception as e: + self.logger.warning(f"⚠️ Performance loss calculation failed: {e}") + return torch.tensor(0.01, dtype=torch.float32, requires_grad=True, device=value_output.device) + + def save_training_metrics(self, generation: int, final_loss: float, training_phase: str): + try: + metrics = { + 'generation': generation, + 'final_loss': final_loss, + 'training_phase': training_phase, + 'timestamp': datetime.now().isoformat() + } + + metrics_path = os.path.join(self.output_dirs['neural_networks'], f'training_metrics_gen_{generation}.json') + with open(metrics_path, 'w') as f: + json.dump(metrics, f, indent=2) + + 
except Exception as e: + self.logger.warning(f"⚠️ Failed to save training metrics: {e}") + + def train_neural_network(self, recent_generations: List[Dict[str, Any]]): + if not recent_generations or self.neural_network is None: + return + + try: + self.logger.info("🧠 Training neural network...") + + training_data = [] + for gen_result in recent_generations: + training_data.append({ + 'individual': gen_result['best_individual'], + 'fitness': gen_result['best_fitness'] + }) + + optimizer = self.optimizer_manager.get_optimizer() + + # run training on the device the model lives on (matches extended training) + model = self.neural_network.module if self.is_distributed else self.neural_network + device = next(model.parameters()).device + + train_pbar = tqdm( + total=10 * len(training_data), + desc="🧠 Neural Network Training", + unit="batch", + position=1, + leave=False + ) + + for epoch in range(10): + for data in training_data: + param_tensor = self.param_tweaker.genetic_to_neural_params([list(data['individual'].values())]) + param_tensor = param_tensor.to(device) + + # forward pass + policy_output, value_output = self.neural_network(param_tensor) + + # calculate the loss + target_value = torch.tensor([data['fitness']], dtype=torch.float32, device=device) + loss = self.loss_calculator.cfd_reward_loss(value_output, target_value) + + # backpropagation + optimizer.zero_grad() + loss.backward() + optimizer.step() + + train_pbar.set_postfix({'Loss': f"{loss.item():.4f}"}) + train_pbar.update(1) + + train_pbar.close() + self.logger.info("✅ Neural network training completed") + + self.cleanup_generation() + + except Exception as e: + self.logger.error(f"❌ Neural network training failed: {str(e)}") + + def run_optimization_loop(self, base_params: F1FrontWingParams): + self.logger.info("🔄 Starting Optimization Loop") + + generation_results = [] + max_runtime = self.config['max_runtime_hours'] * 3600 + start_time = time.time() + + # Main progress bar for generations + generation_pbar = tqdm( + total=self.config['max_generations'], + desc="🧬 Generations", + unit="gen", + position=0, + leave=True, + bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}]' + ) + + for generation in range(self.config['max_generations']): + 
self.current_generation = generation + + if time.time() - start_time > max_runtime: + self.logger.info(f"⏰ Runtime limit reached ({self.config['max_runtime_hours']} hours)") + break + + generation_pbar.set_description(f"🧬 Gen {generation + 1}") + + gen_result = self.run_single_generation(base_params) + generation_results.append(gen_result) + + generation_pbar.set_postfix({ + 'Best': f"{gen_result['best_fitness']:.2f}", + 'Avg': f"{gen_result['average_fitness']:.2f}", + 'Valid': f"{gen_result['valid_individuals']}" + }) + generation_pbar.update(1) + + if generation % self.config['save_frequency'] == 0: + self.save_checkpoint(generation, gen_result) + self.cleanup_generation() + + # Extended training with curriculum learning on the most recent generations + if self.config['neural_network_enabled'] and generation % self.config['neural_network']['training_frequency'] == 0: + self.train_neural_network_extended(generation_results[-5:], generation) + + generation_pbar.close() + + # Final cleanup after optimization loop + self.cleanup_generation() + + return { + "total_generations": len(generation_results), + "generation_results": generation_results, + } + + def finalize_pipeline(self) -> Dict[str, Any]: + + self.logger.info("🏁 Finalizing pipeline...") + + if self.neural_network is not None: + final_nn_path = os.path.join(self.output_dirs['neural_networks'], 'final_network.pth') + # save the unwrapped model state dict, consistent with checkpoint saving + model_to_save = self.neural_network.module if self.is_distributed else self.neural_network + torch.save(model_to_save.state_dict(), final_nn_path) + + summary = self.generate_summary_report() + + return { + "finalization": "success", + "summary": summary + } + + def generate_summary_report(self) -> Dict[str, Any]: + + stl_files = [f for f in os.listdir(self.output_dirs['stl_outputs']) if f.endswith('.stl')] + + summary = { + "total_generations": self.current_generation, + "total_designs_generated": len(stl_files), + "output_directories": self.output_dirs, + "best_designs_stl": stl_files, + "final_population_size": 
len(self.current_population) if hasattr(self, 'current_population') else 0, + "early_stopped": False, + # "generated_results" : self.generation_results if hasattr(self, 'generation_results') else [], + } + + summary_path = os.path.join(self.output_dirs['checkpoints'], 'final_summary.json') + with open(summary_path, 'w') as f: + json.dump(summary, f, indent=2) + + self.logger.info(f"📊 Summary report saved: {summary_path}") + + return summary + +def create_default_wing_params() -> F1FrontWingParams: + """ + create default f1 front wing parameters with regulatory compliant values + + returns: + f1frontwingparams: default wing configuration + """ + return F1FrontWingParams( + # basic wing dimensions + total_span=1600.0, + root_chord=400.0, + tip_chord=200.0, + chord_taper_ratio=0.5, + sweep_angle=5.0, + dihedral_angle=3.0, + twist_distribution_range=[0.0, -2.0, -4.0], + base_profile="NACA2412", + + # airfoil parameters + max_thickness_ratio=0.15, + camber_ratio=0.08, + camber_position=0.4, + leading_edge_radius=0.015, + trailing_edge_thickness=0.002, + upper_surface_radius=0.8, + lower_surface_radius=0.6, + + # flap configuration + flap_count=5, + flap_spans=[200.0, 250.0, 300.0, 350.0, 500.0], + flap_root_chords=[350.0, 320.0, 290.0, 260.0, 230.0], + flap_tip_chords=[180.0, 170.0, 160.0, 150.0, 140.0], + flap_cambers=[0.10, 0.09, 0.08, 0.07, 0.06], + flap_slot_gaps=[8.0, 8.0, 8.0, 8.0, 8.0], + flap_vertical_offsets=[0.0, -5.0, -10.0, -15.0, -20.0], + flap_horizontal_offsets=[0.0, 10.0, 20.0, 30.0, 40.0], + + # endplate configuration + endplate_height=300.0, + endplate_max_width=80.0, + endplate_min_width=40.0, + endplate_thickness_base=4.0, + endplate_forward_lean=15.0, + endplate_rearward_sweep=10.0, + endplate_outboard_wrap=5.0, + + # footplate configuration + footplate_extension=50.0, + footplate_height=20.0, + arch_radius=15.0, + footplate_thickness=3.0, + + # strakes and vortex generators + primary_strake_count=3, + strake_heights=[8.0, 10.0, 12.0], + + # y250 
vortex region + y250_width=30.0, + y250_step_height=10.0, + y250_transition_length=100.0, + central_slot_width=50.0, + + # pylon configuration + pylon_count=2, + pylon_spacing=400.0, + pylon_major_axis=25.0, + pylon_minor_axis=15.0, + pylon_length=150.0, + + # cascade elements + cascade_enabled=True, + primary_cascade_span=100.0, + primary_cascade_chord=50.0, + secondary_cascade_span=80.0, + secondary_cascade_chord=40.0, + + # manufacturing parameters + wall_thickness_structural=3.0, + wall_thickness_aerodynamic=1.5, + wall_thickness_details=1.0, + minimum_radius=2.0, + + # mesh and resolution + mesh_resolution_aero=0.5, + mesh_resolution_structural=1.0, + resolution_span=50, + resolution_chord=30, + mesh_density=1.0, + surface_smoothing=True, + + # material properties + material="carbon_fiber", + density=1600.0, + weight_estimate=15.0, + + # performance targets + target_downforce=1500.0, + target_drag=200.0, + efficiency_factor=7.5 + ) + +def run_distributed_training(rank: int, world_size: int, config_path: str, base_params: F1FrontWingParams): + """ + main function for each distributed process + + args: + rank: process rank (0 to world_size-1) + world_size: total number of processes + config_path: path to configuration file + base_params: base wing parameters + """ + # setup distributed environment + AlphaDesignPipeline.setup_distributed(rank, world_size) + + try: + # create pipeline instance for this process + pipeline = AlphaDesignPipeline(config_path, rank=rank, world_size=world_size) + + # run the complete pipeline + results = pipeline.run_complete_pipeline(base_params) + + # only main process prints final results + if rank == 0: + print("✅ Distributed training completed successfully") + print(f"📊 Total generations: {results.get('total_generations', 0)}") + + finally: + # cleanup distributed environment + AlphaDesignPipeline.cleanup_distributed() + +def main_distributed(config_path: str = "config.json", world_size: int = None): + """ + launcher for 
distributed training + + args: + config_path: path to configuration file + world_size: number of gpus to use (defaults to all available) + """ + # determine world size + if world_size is None: + world_size = torch.cuda.device_count() + + if world_size < 1: + print("❌ No GPUs available for distributed training") + return + + print(f"🚀 Starting distributed training with {world_size} GPUs") + + # create default base parameters (not from config file) + base_params = create_default_wing_params() + + # spawn processes for distributed training + mp.spawn( + run_distributed_training, + args=(world_size, config_path, base_params), + nprocs=world_size, + join=True + ) + +if __name__ == "__main__": + # for distributed multi-gpu training: + main_distributed("config.json", world_size=2) + pass + diff --git a/RL/f1_wing_output/generation_000_best_design.stl b/RL/f1_wing_output/generation_000_best_design.stl index 2341c02..4199bee 100644 Binary files a/RL/f1_wing_output/generation_000_best_design.stl and b/RL/f1_wing_output/generation_000_best_design.stl differ diff --git a/RL/f1_wing_output/generation_000_best_design_cfd_params.json b/RL/f1_wing_output/generation_000_best_design_cfd_params.json index c447019..66cdfa6 100644 --- a/RL/f1_wing_output/generation_000_best_design_cfd_params.json +++ b/RL/f1_wing_output/generation_000_best_design_cfd_params.json @@ -1,6 +1,6 @@ { "metadata": { - "generated_date": "2025-10-10T22:06:25.347303", + "generated_date": "2025-10-22T01:47:16.267647", "generator_version": "UltraRealisticF1FrontWingGenerator v2.0", "description": "CFD analysis parameters for multi-element F1 front wing", "units": { @@ -13,17 +13,17 @@ }, "geometry": { "main_element": { - "span_mm": 1562.1533373404757, - "root_chord_mm": 273.62357158832845, - "tip_chord_mm": 278.48896342259894, + "span_mm": 1600.0, + "root_chord_mm": 250.0, + "tip_chord_mm": 273.28974791521824, "taper_ratio": 0.89, - "sweep_angle_deg": 3.745137198030256, - "dihedral_angle_deg": 2.2760946056221214, 
+ "sweep_angle_deg": 3.7936541922950426, + "dihedral_angle_deg": 2.178665235563215, "twist_range_deg": [ -1.5, 0.5 ], - "reference_area_m2": 0.42744197553172786 + "reference_area_m2": 0.4 }, "flaps": [ { @@ -33,10 +33,10 @@ "tip_chord_mm": 200, "reference_area_m2": 0.352, "geometric_angle_deg": 8, - "slot_gap_mm": 11.997258048354128, - "vertical_offset_mm": 30.68230323781692, - "horizontal_offset_mm": 29.540070714602077, - "camber_ratio": 0.15165174117856509 + "slot_gap_mm": 12.775178833576852, + "vertical_offset_mm": 32.87206503062215, + "horizontal_offset_mm": 29.491138695433754, + "camber_ratio": 0.11650730925554174 }, { "flap_index": 2, @@ -45,10 +45,10 @@ "tip_chord_mm": 160, "reference_area_m2": 0.27, "geometric_angle_deg": 11, - "slot_gap_mm": 12.0, - "vertical_offset_mm": 53.180424513536465, - "horizontal_offset_mm": 55.32375420710643, - "camber_ratio": 0.1183069457274416 + "slot_gap_mm": 16.42889568935495, + "vertical_offset_mm": 42.00449125738019, + "horizontal_offset_mm": 54.10213591434579, + "camber_ratio": 0.08779492852048842 }, { "flap_index": 3, @@ -57,21 +57,21 @@ "tip_chord_mm": 120, "reference_area_m2": 0.196, "geometric_angle_deg": 14, - "slot_gap_mm": 8.870195544962051, - "vertical_offset_mm": 63.90613554629242, - "horizontal_offset_mm": 99.50073096296768, - "camber_ratio": 0.08502357576454594 + "slot_gap_mm": 13.383572526835122, + "vertical_offset_mm": 61.698595280765176, + "horizontal_offset_mm": 105.7687587052077, + "camber_ratio": 0.09368265052848934 } ], "total_elements": 4, - "total_reference_area_m2": 1.2454419755317279 + "total_reference_area_m2": 1.218 }, "airfoil_properties": { "main_element": { "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.17793646219085565, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.09676803758655178, + "camber_position": 0.429545483476326, "leading_edge_radius_mm": 2.8, 
"trailing_edge_thickness_mm": 2.5, "upper_surface_radius_mm": 800, @@ -80,19 +80,19 @@ "flaps": [ { "flap_index": 1, - "camber_ratio": 0.15165174117856509, + "camber_ratio": 0.11650730925554174, "thickness_ratio": 0.1, "trailing_edge_thickness_mm": 2.5 }, { "flap_index": 2, - "camber_ratio": 0.1183069457274416, + "camber_ratio": 0.08779492852048842, "thickness_ratio": 0.115, "trailing_edge_thickness_mm": 2.2 }, { "flap_index": 3, - "camber_ratio": 0.08502357576454594, + "camber_ratio": 0.09368265052848934, "thickness_ratio": 0.13, "trailing_edge_thickness_mm": 1.9 } @@ -100,31 +100,31 @@ }, "multi_element_interactions": { "slot_gaps_mm": [ - 11.997258048354128, - 12.0, - 8.870195544962051 + 12.775178833576852, + 16.42889568935495, + 13.383572526835122 ], "slot_gap_to_chord_ratios": [ - 0.0545329911288824, - 0.06666666666666667, - 0.0633585396068718 + 0.0580689946980766, + 0.09127164271863861, + 0.09559694662025087 ], "overlap_ratios": [ - 0.1342730487027367, - 0.30735419003948017, - 0.7107195068783406 + 0.13405063043378979, + 0.30056742174636547, + 0.7554911336086264 ], "vertical_separation_ratios": [ - 0.11213326052179085, - 0.19435615215763416, - 0.23355493525404436 + 0.1314882601224886, + 0.16801796502952077, + 0.2467943811230607 ], "progressive_angles_enabled": true }, "endplate_features": { - "height_mm": 238.8570067579504, - "max_width_mm": 124.92200103125977, - "min_width_mm": 32.16327790369894, + "height_mm": 268.9881271806169, + "max_width_mm": 100.0, + "min_width_mm": 47.58740222598861, "thickness_base_mm": 10, "forward_lean_deg": 6, "rearward_sweep_deg": 10, @@ -132,9 +132,9 @@ }, "y250_vortex_region": { "width_mm": 500, - "step_height_mm": 21.746442059503984, - "transition_length_mm": 86.9019208874379, - "central_slot_width_mm": 30.797237922946643 + "step_height_mm": 21.266846651941208, + "transition_length_mm": 80.0, + "central_slot_width_mm": 30.0 }, "footplate_features": { "extension_mm": 70, @@ -192,12 +192,12 @@ "design_speed_kmh": 300 }, 
"cfd_recommended_settings": { - "reference_length_m": 0.27362357158832845, - "reference_area_m2": 1.2454419755317279, + "reference_length_m": 0.25, + "reference_area_m2": 1.218, "reference_point_mm": [ 0, 0, - 119.4285033789752 + 134.49406359030846 ], "recommended_test_speeds_kmh": [ 50, @@ -229,7 +229,7 @@ 150, 200 ], - "reynolds_number_at_300kmh": 1520130.9532684912, + "reynolds_number_at_300kmh": 1388888.8888888888, "expected_downforce_coefficient_range": [ -2.5, -4.5 diff --git a/RL/f1_wing_output/individual_3_wing.stl b/RL/f1_wing_output/individual_3_wing.stl new file mode 100644 index 0000000..687b2d5 Binary files /dev/null and b/RL/f1_wing_output/individual_3_wing.stl differ diff --git a/RL/f1_wing_output/individual_3_wing_cfd_params.json b/RL/f1_wing_output/individual_3_wing_cfd_params.json new file mode 100644 index 0000000..30deff5 --- /dev/null +++ b/RL/f1_wing_output/individual_3_wing_cfd_params.json @@ -0,0 +1,252 @@ +{ + "metadata": { + "generated_date": "2025-10-22T01:47:21.351086", + "generator_version": "UltraRealisticF1FrontWingGenerator v2.0", + "description": "CFD analysis parameters for multi-element F1 front wing", + "units": { + "length": "mm", + "area": "m\u00b2", + "angle": "degrees", + "force": "N", + "weight": "kg" + } + }, + "geometry": { + "main_element": { + "span_mm": 1600.0, + "root_chord_mm": 250.0, + "tip_chord_mm": 273.28974791521824, + "taper_ratio": 0.89, + "sweep_angle_deg": 3.7936541922950426, + "dihedral_angle_deg": 2.178665235563215, + "twist_range_deg": [ + -1.5, + 0.5 + ], + "reference_area_m2": 0.4 + }, + "flaps": [ + { + "flap_index": 1, + "span_mm": 1600, + "root_chord_mm": 220, + "tip_chord_mm": 200, + "reference_area_m2": 0.352, + "geometric_angle_deg": 8, + "slot_gap_mm": 12.775178833576852, + "vertical_offset_mm": 32.87206503062215, + "horizontal_offset_mm": 29.491138695433754, + "camber_ratio": 0.11650730925554174 + }, + { + "flap_index": 2, + "span_mm": 1500, + "root_chord_mm": 180, + "tip_chord_mm": 160, + 
"reference_area_m2": 0.27, + "geometric_angle_deg": 11, + "slot_gap_mm": 16.42889568935495, + "vertical_offset_mm": 42.00449125738019, + "horizontal_offset_mm": 54.10213591434579, + "camber_ratio": 0.08779492852048842 + }, + { + "flap_index": 3, + "span_mm": 1400, + "root_chord_mm": 140, + "tip_chord_mm": 120, + "reference_area_m2": 0.196, + "geometric_angle_deg": 14, + "slot_gap_mm": 13.383572526835122, + "vertical_offset_mm": 61.698595280765176, + "horizontal_offset_mm": 105.7687587052077, + "camber_ratio": 0.09368265052848934 + } + ], + "total_elements": 4, + "total_reference_area_m2": 1.218 + }, + "airfoil_properties": { + "main_element": { + "base_profile": "NACA_64A010_modified", + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.09676803758655178, + "camber_position": 0.429545483476326, + "leading_edge_radius_mm": 2.8, + "trailing_edge_thickness_mm": 2.5, + "upper_surface_radius_mm": 800, + "lower_surface_radius_mm": 1100 + }, + "flaps": [ + { + "flap_index": 1, + "camber_ratio": 0.11650730925554174, + "thickness_ratio": 0.1, + "trailing_edge_thickness_mm": 2.5 + }, + { + "flap_index": 2, + "camber_ratio": 0.08779492852048842, + "thickness_ratio": 0.115, + "trailing_edge_thickness_mm": 2.2 + }, + { + "flap_index": 3, + "camber_ratio": 0.09368265052848934, + "thickness_ratio": 0.13, + "trailing_edge_thickness_mm": 1.9 + } + ] + }, + "multi_element_interactions": { + "slot_gaps_mm": [ + 12.775178833576852, + 16.42889568935495, + 13.383572526835122 + ], + "slot_gap_to_chord_ratios": [ + 0.0580689946980766, + 0.09127164271863861, + 0.09559694662025087 + ], + "overlap_ratios": [ + 0.13405063043378979, + 0.30056742174636547, + 0.7554911336086264 + ], + "vertical_separation_ratios": [ + 0.1314882601224886, + 0.16801796502952077, + 0.2467943811230607 + ], + "progressive_angles_enabled": true + }, + "endplate_features": { + "height_mm": 268.9881271806169, + "max_width_mm": 100.0, + "min_width_mm": 47.58740222598861, + "thickness_base_mm": 10, + 
"forward_lean_deg": 6, + "rearward_sweep_deg": 10, + "outboard_wrap_deg": 18 + }, + "y250_vortex_region": { + "width_mm": 500, + "step_height_mm": 21.266846651941208, + "transition_length_mm": 80.0, + "central_slot_width_mm": 30.0 + }, + "footplate_features": { + "extension_mm": 70, + "height_mm": 30, + "arch_radius_mm": 130, + "thickness_mm": 5 + }, + "strakes": { + "count": 2, + "heights_mm": [ + 45, + 35 + ] + }, + "mounting_system": { + "pylon_count": 2, + "pylon_spacing_mm": 320, + "pylon_major_axis_mm": 38, + "pylon_minor_axis_mm": 25, + "pylon_length_mm": 120 + }, + "cascade_elements": { + "enabled": true, + "primary_cascade": { + "span_mm": 250, + "chord_mm": 55 + }, + "secondary_cascade": { + "span_mm": 160, + "chord_mm": 40 + } + }, + "aerodynamic_features": { + "gurney_flaps_enabled": true, + "aerodynamic_slots_enabled": true, + "realistic_surface_curvature": true, + "enhanced_endplate_detail": true, + "wing_flex_simulation": false + }, + "manufacturing_parameters": { + "wall_thickness_structural_mm": 4, + "wall_thickness_aerodynamic_mm": 2.5, + "wall_thickness_details_mm": 2.0, + "minimum_radius_mm": 0.4 + }, + "material_properties": { + "material": "Standard Carbon Fiber", + "density_kg_m3": 1600, + "estimated_weight_kg": 4.0 + }, + "performance_targets": { + "target_downforce_N": 4000, + "target_drag_N": 40, + "efficiency_factor": 1.0, + "design_speed_kmh": 300 + }, + "cfd_recommended_settings": { + "reference_length_m": 0.25, + "reference_area_m2": 1.218, + "reference_point_mm": [ + 0, + 0, + 134.49406359030846 + ], + "recommended_test_speeds_kmh": [ + 50, + 100, + 150, + 200, + 250, + 300, + 350 + ], + "recommended_aoa_range_deg": [ + -8, + -5, + -2, + 0, + 2, + 5, + 8, + 12, + 15, + 20 + ], + "recommended_ground_clearances_mm": [ + 25, + 50, + 75, + 100, + 125, + 150, + 200 + ], + "reynolds_number_at_300kmh": 1388888.8888888888, + "expected_downforce_coefficient_range": [ + -2.5, + -4.5 + ], + "expected_drag_coefficient_range": [ + 0.4, + 0.8 + ], 
+ "expected_efficiency_ld_ratio": [ + 3.0, + 6.0 + ] + }, + "mesh_quality_targets": { + "resolution_span": 40, + "resolution_chord": 25, + "mesh_density": 1.5, + "surface_smoothing_enabled": true + } +} \ No newline at end of file diff --git a/RL/f1_wing_output/individual_4_wing.stl b/RL/f1_wing_output/individual_4_wing.stl new file mode 100644 index 0000000..8c59851 Binary files /dev/null and b/RL/f1_wing_output/individual_4_wing.stl differ diff --git a/RL/f1_wing_output/individual_4_wing_cfd_params.json b/RL/f1_wing_output/individual_4_wing_cfd_params.json new file mode 100644 index 0000000..5e1409c --- /dev/null +++ b/RL/f1_wing_output/individual_4_wing_cfd_params.json @@ -0,0 +1,252 @@ +{ + "metadata": { + "generated_date": "2025-10-22T01:47:30.575447", + "generator_version": "UltraRealisticF1FrontWingGenerator v2.0", + "description": "CFD analysis parameters for multi-element F1 front wing", + "units": { + "length": "mm", + "area": "m\u00b2", + "angle": "degrees", + "force": "N", + "weight": "kg" + } + }, + "geometry": { + "main_element": { + "span_mm": 1747.9365536896603, + "root_chord_mm": 250.0, + "tip_chord_mm": 248.97567336097669, + "taper_ratio": 0.89, + "sweep_angle_deg": 4.239145447757582, + "dihedral_angle_deg": 2.7018685735908234, + "twist_range_deg": [ + -1.5, + 0.5 + ], + "reference_area_m2": 0.43698413842241507 + }, + "flaps": [ + { + "flap_index": 1, + "span_mm": 1600, + "root_chord_mm": 220, + "tip_chord_mm": 200, + "reference_area_m2": 0.352, + "geometric_angle_deg": 8, + "slot_gap_mm": 12.775178833576852, + "vertical_offset_mm": 32.87206503062215, + "horizontal_offset_mm": 29.491138695433754, + "camber_ratio": 0.11650730925554174 + }, + { + "flap_index": 2, + "span_mm": 1500, + "root_chord_mm": 180, + "tip_chord_mm": 160, + "reference_area_m2": 0.27, + "geometric_angle_deg": 11, + "slot_gap_mm": 16.42889568935495, + "vertical_offset_mm": 42.00449125738019, + "horizontal_offset_mm": 54.10213591434579, + "camber_ratio": 0.08779492852048842 + }, 
+ { + "flap_index": 3, + "span_mm": 1400, + "root_chord_mm": 140, + "tip_chord_mm": 120, + "reference_area_m2": 0.196, + "geometric_angle_deg": 14, + "slot_gap_mm": 13.383572526835122, + "vertical_offset_mm": 61.698595280765176, + "horizontal_offset_mm": 105.7687587052077, + "camber_ratio": 0.09368265052848934 + } + ], + "total_elements": 4, + "total_reference_area_m2": 1.2549841384224152 + }, + "airfoil_properties": { + "main_element": { + "base_profile": "NACA_64A010_modified", + "max_thickness_ratio": 0.13012117856302347, + "camber_ratio": 0.09072176816921307, + "camber_position": 0.42430384511724367, + "leading_edge_radius_mm": 2.8, + "trailing_edge_thickness_mm": 2.5, + "upper_surface_radius_mm": 800, + "lower_surface_radius_mm": 1100 + }, + "flaps": [ + { + "flap_index": 1, + "camber_ratio": 0.11650730925554174, + "thickness_ratio": 0.1, + "trailing_edge_thickness_mm": 2.5 + }, + { + "flap_index": 2, + "camber_ratio": 0.08779492852048842, + "thickness_ratio": 0.115, + "trailing_edge_thickness_mm": 2.2 + }, + { + "flap_index": 3, + "camber_ratio": 0.09368265052848934, + "thickness_ratio": 0.13, + "trailing_edge_thickness_mm": 1.9 + } + ] + }, + "multi_element_interactions": { + "slot_gaps_mm": [ + 12.775178833576852, + 16.42889568935495, + 13.383572526835122 + ], + "slot_gap_to_chord_ratios": [ + 0.0580689946980766, + 0.09127164271863861, + 0.09559694662025087 + ], + "overlap_ratios": [ + 0.13405063043378979, + 0.30056742174636547, + 0.7554911336086264 + ], + "vertical_separation_ratios": [ + 0.1314882601224886, + 0.16801796502952077, + 0.2467943811230607 + ], + "progressive_angles_enabled": true + }, + "endplate_features": { + "height_mm": 267.2492028444034, + "max_width_mm": 140.76000663210232, + "min_width_mm": 39.62926578578964, + "thickness_base_mm": 10, + "forward_lean_deg": 6, + "rearward_sweep_deg": 10, + "outboard_wrap_deg": 18 + }, + "y250_vortex_region": { + "width_mm": 500, + "step_height_mm": 15.0, + "transition_length_mm": 94.32874682354503, + 
"central_slot_width_mm": 30.0 + }, + "footplate_features": { + "extension_mm": 70, + "height_mm": 30, + "arch_radius_mm": 130, + "thickness_mm": 5 + }, + "strakes": { + "count": 2, + "heights_mm": [ + 45, + 35 + ] + }, + "mounting_system": { + "pylon_count": 2, + "pylon_spacing_mm": 320, + "pylon_major_axis_mm": 38, + "pylon_minor_axis_mm": 25, + "pylon_length_mm": 120 + }, + "cascade_elements": { + "enabled": true, + "primary_cascade": { + "span_mm": 250, + "chord_mm": 55 + }, + "secondary_cascade": { + "span_mm": 160, + "chord_mm": 40 + } + }, + "aerodynamic_features": { + "gurney_flaps_enabled": true, + "aerodynamic_slots_enabled": true, + "realistic_surface_curvature": true, + "enhanced_endplate_detail": true, + "wing_flex_simulation": false + }, + "manufacturing_parameters": { + "wall_thickness_structural_mm": 4, + "wall_thickness_aerodynamic_mm": 2.5, + "wall_thickness_details_mm": 2.0, + "minimum_radius_mm": 0.4 + }, + "material_properties": { + "material": "Standard Carbon Fiber", + "density_kg_m3": 1600, + "estimated_weight_kg": 4.0 + }, + "performance_targets": { + "target_downforce_N": 4000, + "target_drag_N": 40, + "efficiency_factor": 1.0, + "design_speed_kmh": 300 + }, + "cfd_recommended_settings": { + "reference_length_m": 0.25, + "reference_area_m2": 1.2549841384224152, + "reference_point_mm": [ + 0, + 0, + 133.6246014222017 + ], + "recommended_test_speeds_kmh": [ + 50, + 100, + 150, + 200, + 250, + 300, + 350 + ], + "recommended_aoa_range_deg": [ + -8, + -5, + -2, + 0, + 2, + 5, + 8, + 12, + 15, + 20 + ], + "recommended_ground_clearances_mm": [ + 25, + 50, + 75, + 100, + 125, + 150, + 200 + ], + "reynolds_number_at_300kmh": 1388888.8888888888, + "expected_downforce_coefficient_range": [ + -2.5, + -4.5 + ], + "expected_drag_coefficient_range": [ + 0.4, + 0.8 + ], + "expected_efficiency_ld_ratio": [ + 3.0, + 6.0 + ] + }, + "mesh_quality_targets": { + "resolution_span": 40, + "resolution_chord": 25, + "mesh_density": 1.5, + 
"surface_smoothing_enabled": true + } +} \ No newline at end of file diff --git a/RL/genetic_algo_components/__pycache__/__init__.cpython-310.pyc b/RL/genetic_algo_components/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index ff2eecf..0000000 Binary files a/RL/genetic_algo_components/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/__init__.cpython-312.pyc b/RL/genetic_algo_components/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 5c12035..0000000 Binary files a/RL/genetic_algo_components/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/crossover_ops.cpython-310.pyc b/RL/genetic_algo_components/__pycache__/crossover_ops.cpython-310.pyc deleted file mode 100644 index 98fd27d..0000000 Binary files a/RL/genetic_algo_components/__pycache__/crossover_ops.cpython-310.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/crossover_ops.cpython-312.pyc b/RL/genetic_algo_components/__pycache__/crossover_ops.cpython-312.pyc deleted file mode 100644 index 1cb96d9..0000000 Binary files a/RL/genetic_algo_components/__pycache__/crossover_ops.cpython-312.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/fitness_evaluation.cpython-310.pyc b/RL/genetic_algo_components/__pycache__/fitness_evaluation.cpython-310.pyc deleted file mode 100644 index 9ef5be1..0000000 Binary files a/RL/genetic_algo_components/__pycache__/fitness_evaluation.cpython-310.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/fitness_evaluation.cpython-312.pyc b/RL/genetic_algo_components/__pycache__/fitness_evaluation.cpython-312.pyc deleted file mode 100644 index 0baac98..0000000 Binary files a/RL/genetic_algo_components/__pycache__/fitness_evaluation.cpython-312.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/initialize_population.cpython-310.pyc 
b/RL/genetic_algo_components/__pycache__/initialize_population.cpython-310.pyc deleted file mode 100644 index 122d321..0000000 Binary files a/RL/genetic_algo_components/__pycache__/initialize_population.cpython-310.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/initialize_population.cpython-312.pyc b/RL/genetic_algo_components/__pycache__/initialize_population.cpython-312.pyc deleted file mode 100644 index 8774096..0000000 Binary files a/RL/genetic_algo_components/__pycache__/initialize_population.cpython-312.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/mutation_strategy.cpython-310.pyc b/RL/genetic_algo_components/__pycache__/mutation_strategy.cpython-310.pyc deleted file mode 100644 index dcd7630..0000000 Binary files a/RL/genetic_algo_components/__pycache__/mutation_strategy.cpython-310.pyc and /dev/null differ diff --git a/RL/genetic_algo_components/__pycache__/mutation_strategy.cpython-312.pyc b/RL/genetic_algo_components/__pycache__/mutation_strategy.cpython-312.pyc deleted file mode 100644 index e38ee0e..0000000 Binary files a/RL/genetic_algo_components/__pycache__/mutation_strategy.cpython-312.pyc and /dev/null differ diff --git a/RL/main.py b/RL/main.py new file mode 100644 index 0000000..f118498 --- /dev/null +++ b/RL/main.py @@ -0,0 +1,6 @@ +def main(): + print("Hello from rl!") + + +if __name__ == "__main__": + main() diff --git a/RL/neural_network_components/__pycache__/__init__.cpython-310.pyc b/RL/neural_network_components/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 591f98d..0000000 Binary files a/RL/neural_network_components/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/__init__.cpython-312.pyc b/RL/neural_network_components/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 64e7029..0000000 Binary files a/RL/neural_network_components/__pycache__/__init__.cpython-312.pyc and 
/dev/null differ diff --git a/RL/neural_network_components/__pycache__/forward_pass.cpython-310.pyc b/RL/neural_network_components/__pycache__/forward_pass.cpython-310.pyc deleted file mode 100644 index 323ec2e..0000000 Binary files a/RL/neural_network_components/__pycache__/forward_pass.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/forward_pass.cpython-312.pyc b/RL/neural_network_components/__pycache__/forward_pass.cpython-312.pyc deleted file mode 100644 index b61fcfa..0000000 Binary files a/RL/neural_network_components/__pycache__/forward_pass.cpython-312.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/loss_calculation.cpython-310.pyc b/RL/neural_network_components/__pycache__/loss_calculation.cpython-310.pyc deleted file mode 100644 index cd713c5..0000000 Binary files a/RL/neural_network_components/__pycache__/loss_calculation.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/loss_calculation.cpython-312.pyc b/RL/neural_network_components/__pycache__/loss_calculation.cpython-312.pyc deleted file mode 100644 index 356487e..0000000 Binary files a/RL/neural_network_components/__pycache__/loss_calculation.cpython-312.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/network_initialization.cpython-310.pyc b/RL/neural_network_components/__pycache__/network_initialization.cpython-310.pyc deleted file mode 100644 index 24920a4..0000000 Binary files a/RL/neural_network_components/__pycache__/network_initialization.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/network_initialization.cpython-312.pyc b/RL/neural_network_components/__pycache__/network_initialization.cpython-312.pyc deleted file mode 100644 index 3d600b2..0000000 Binary files a/RL/neural_network_components/__pycache__/network_initialization.cpython-312.pyc and /dev/null differ diff --git 
a/RL/neural_network_components/__pycache__/optimizer_integration.cpython-310.pyc b/RL/neural_network_components/__pycache__/optimizer_integration.cpython-310.pyc deleted file mode 100644 index a1c60be..0000000 Binary files a/RL/neural_network_components/__pycache__/optimizer_integration.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/optimizer_integration.cpython-312.pyc b/RL/neural_network_components/__pycache__/optimizer_integration.cpython-312.pyc deleted file mode 100644 index 39010cd..0000000 Binary files a/RL/neural_network_components/__pycache__/optimizer_integration.cpython-312.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/parameter_tweaking.cpython-310.pyc b/RL/neural_network_components/__pycache__/parameter_tweaking.cpython-310.pyc deleted file mode 100644 index 352c665..0000000 Binary files a/RL/neural_network_components/__pycache__/parameter_tweaking.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/parameter_tweaking.cpython-312.pyc b/RL/neural_network_components/__pycache__/parameter_tweaking.cpython-312.pyc deleted file mode 100644 index e9df85f..0000000 Binary files a/RL/neural_network_components/__pycache__/parameter_tweaking.cpython-312.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/policy_head.cpython-310.pyc b/RL/neural_network_components/__pycache__/policy_head.cpython-310.pyc deleted file mode 100644 index d4d0794..0000000 Binary files a/RL/neural_network_components/__pycache__/policy_head.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/policy_head.cpython-312.pyc b/RL/neural_network_components/__pycache__/policy_head.cpython-312.pyc deleted file mode 100644 index b58aeb3..0000000 Binary files a/RL/neural_network_components/__pycache__/policy_head.cpython-312.pyc and /dev/null differ diff --git 
a/RL/neural_network_components/__pycache__/value_head.cpython-310.pyc b/RL/neural_network_components/__pycache__/value_head.cpython-310.pyc deleted file mode 100644 index e3ebc77..0000000 Binary files a/RL/neural_network_components/__pycache__/value_head.cpython-310.pyc and /dev/null differ diff --git a/RL/neural_network_components/__pycache__/value_head.cpython-312.pyc b/RL/neural_network_components/__pycache__/value_head.cpython-312.pyc deleted file mode 100644 index d8f6f31..0000000 Binary files a/RL/neural_network_components/__pycache__/value_head.cpython-312.pyc and /dev/null differ diff --git a/RL/neural_networks/final_network.pth b/RL/neural_networks/final_network.pth index 5be84b1..f42f7b3 100644 Binary files a/RL/neural_networks/final_network.pth and b/RL/neural_networks/final_network.pth differ diff --git a/RL/neural_networks/network_gen_000.pth b/RL/neural_networks/network_gen_000.pth index 9fbd1f7..01db9a4 100644 Binary files a/RL/neural_networks/network_gen_000.pth and b/RL/neural_networks/network_gen_000.pth differ diff --git a/RL/neural_networks/training_metrics_gen_0.json b/RL/neural_networks/training_metrics_gen_0.json index 7c2a44e..15070c2 100644 --- a/RL/neural_networks/training_metrics_gen_0.json +++ b/RL/neural_networks/training_metrics_gen_0.json @@ -1,6 +1,6 @@ { "generation": 0, - "final_loss": 3.220587730407715, + "final_loss": 1.0808384418487549, "training_phase": "Constraint Focus", - "timestamp": "2025-10-10T22:06:27.714547" + "timestamp": "2025-10-22T01:47:35.856115" } \ No newline at end of file diff --git a/RL/pyproject.toml b/RL/pyproject.toml new file mode 100644 index 0000000..428369e --- /dev/null +++ b/RL/pyproject.toml @@ -0,0 +1,7 @@ +[project] +name = "rl" +version = "0.1.0" +description = "Add your description here" +readme = "README.md" +requires-python = ">=3.12" +dependencies = [] diff --git a/RL/readme_distributed.md b/RL/readme_distributed.md new file mode 100644 index 0000000..bf8adba --- /dev/null +++ 
b/RL/readme_distributed.md @@ -0,0 +1,203 @@ +# Distributed Pipeline - Multi-GPU Training + +## What's Different + +single gpu? nah. this version splits work across multiple gpus. + +main differences from `main_pipeline.py`: +- **parallel training**: multiple processes, one per gpu +- **synchronized gradients**: all gpus share learning +- **distributed data**: population split across workers +- **collective ops**: all processes coordinate together + +## Why Distributed + +- **faster training**: linear speedup with more gpus (ideally) +- **bigger populations**: more memory = larger population sizes +- **parallel eval**: fitness evaluation across multiple devices +- **scale up**: same code, just add more gpus + +## How It Works + +### Setup Phase + +```python +AlphaDesignPipeline.setup_distributed(rank, world_size) +``` + +- each gpu gets a unique **rank** (0 to N-1) +- **world_size** = total number of gpus +- uses NCCL backend for gpu communication +- sets `MASTER_ADDR` and `MASTER_PORT` for coordination + +### Process Groups + +- **rank 0** = main process. handles logging, checkpoints, etc. +- **other ranks** = worker processes. do computation, sync results. 
+- `dist.barrier()` keeps everyone in sync + +### Model Wrapping + +```python +self.neural_network = DDP( + self.neural_network, + device_ids=[self.rank], + output_device=self.rank +) +``` + +- wraps neural network with `DistributedDataParallel` (DDP) +- each gpu gets its own copy +- gradients averaged across all copies during backprop +- `find_unused_parameters=True` for complex models (need to research a bit more) + +### Data Distribution + +fitness evaluation: +- each process evaluates subset of population +- `all_gather_object()` collects results to main process +- main process aggregates and broadcasts back + +training: +- all processes compute gradients on their data +- `all_reduce()` averages gradients +- synchronized optimizer step + +### Synchronization Points + +key sync operations: +- `dist.broadcast_object_list()` - send data from rank 0 to all +- `dist.all_gather_object()` - collect data from all processes +- `dist.all_reduce()` - sum/average values across processes +- `dist.barrier()` - wait for all processes to reach this point + +## Training Flow + +1. **initialize**: rank 0 creates initial population +2. **broadcast**: population sent to all workers +3. **evaluate**: each worker evaluates subset +4. **gather**: fitness scores collected to rank 0 +5. **evolve**: rank 0 generates next generation +6. **broadcast**: new population distributed +7. **nn training**: all workers train neural network with synchronized gradients +8.
**repeat**: back to step 3 + +## Key Differences + +### Population Management + +**main_pipeline**: +```python +new_population = self.generate_next_population() +``` + +**distributed_pipeline**: +```python +new_population = self.generate_next_population() +new_population = self.broadcast_population(new_population) +``` + +### Neural Network Training + +**main_pipeline**: +```python +loss.backward() +optimizer.step() +``` + +**distributed_pipeline**: +```python +loss.backward() # DDP handles gradient synchronization +optimizer.step() # all processes step together +``` + +### Logging + +**main_pipeline**: +```python +self.logger.info("training...") +``` + +**distributed_pipeline**: +```python +if self.is_main_process: + self.logger.info("training...") +``` + +only rank 0 logs. this avoids spam from all processes. + +## Usage + +### Single GPU (fallback) +```python +pipeline = AlphaDesignPipeline("config.json", rank=-1, world_size=1) +results = pipeline.run_complete_pipeline(base_params) +``` + +### Multi-GPU +```python +main_distributed("config.json", world_size=2) +``` + +uses `torch.multiprocessing.spawn()` to launch processes: +```python +mp.spawn( + run_distributed_training, + args=(world_size, config_path, base_params), + nprocs=world_size, + join=True +) +``` + +### Environment Variables + +```bash +export MASTER_ADDR='localhost' +export MASTER_PORT='12355' +``` + +for multi-node (not implemented yet): +```bash +export MASTER_ADDR='192.168.1.1' +export MASTER_PORT='12355' +export WORLD_SIZE=8 +export RANK=0 +``` + +## Memory Management + +each gpu has its own: +- copy of neural network +- subset of population for evaluation +- local gradients + +shared across gpus: +- synchronized model weights +- aggregated fitness scores +- generation results (on rank 0) + + +## Limitations + +current implementation: +- only supports single-node multi-gpu +- fitness evaluation not fully parallelized (todo) +- population size should be divisible by world_size +- requires all
gpus to have same memory + +## Cleanup + +```python +AlphaDesignPipeline.cleanup_distributed() +``` + +destroys process group when done. call this before exit. + + +## References + +- PyTorch DDP docs: https://pytorch.org/docs/stable/notes/ddp.html +- torch.distributed: https://pytorch.org/docs/stable/distributed.html +- NCCL backend: https://docs.nvidia.com/deeplearning/nccl/ + + diff --git a/RL/stl_outputs/generation_000_best_design.stl b/RL/stl_outputs/generation_000_best_design.stl index 2341c02..4199bee 100644 Binary files a/RL/stl_outputs/generation_000_best_design.stl and b/RL/stl_outputs/generation_000_best_design.stl differ diff --git a/RL/stl_outputs/generation_000_best_design_params.json b/RL/stl_outputs/generation_000_best_design_params.json index 2c550a9..ddec4ed 100644 --- a/RL/stl_outputs/generation_000_best_design_params.json +++ b/RL/stl_outputs/generation_000_best_design_params.json @@ -1,18 +1,18 @@ { - "total_span": 1562.1533373404757, - "root_chord": 273.62357158832845, - "tip_chord": 278.48896342259894, + "total_span": 1600.0, + "root_chord": 250.0, + "tip_chord": 273.28974791521824, "chord_taper_ratio": 0.89, - "sweep_angle": 3.745137198030256, - "dihedral_angle": 2.2760946056221214, + "sweep_angle": 3.7936541922950426, + "dihedral_angle": 2.178665235563215, "twist_distribution_range": [ -1.5, 0.5 ], "base_profile": "NACA_64A010_modified", - "max_thickness_ratio": 0.17793646219085565, - "camber_ratio": 0.09576187755552051, - "camber_position": 0.3989658971624493, + "max_thickness_ratio": 0.15900130894449482, + "camber_ratio": 0.09676803758655178, + "camber_position": 0.429545483476326, "leading_edge_radius": 2.8, "trailing_edge_thickness": 2.5, "upper_surface_radius": 800, @@ -34,28 +34,28 @@ 120 ], "flap_cambers": [ - 0.15165174117856509, - 0.1183069457274416, - 0.08502357576454594 + 0.11650730925554174, + 0.08779492852048842, + 0.09368265052848934 ], "flap_slot_gaps": [ - 11.997258048354128, - 12.0, - 8.870195544962051 + 12.775178833576852, 
+ 16.42889568935495, + 13.383572526835122 ], "flap_vertical_offsets": [ - 30.68230323781692, - 53.180424513536465, - 63.90613554629242 + 32.87206503062215, + 42.00449125738019, + 61.698595280765176 ], "flap_horizontal_offsets": [ - 29.540070714602077, - 55.32375420710643, - 99.50073096296768 + 29.491138695433754, + 54.10213591434579, + 105.7687587052077 ], - "endplate_height": 238.8570067579504, - "endplate_max_width": 124.92200103125977, - "endplate_min_width": 32.16327790369894, + "endplate_height": 268.9881271806169, + "endplate_max_width": 100.0, + "endplate_min_width": 47.58740222598861, "endplate_thickness_base": 10, "endplate_forward_lean": 6, "endplate_rearward_sweep": 10, @@ -70,9 +70,9 @@ 35 ], "y250_width": 500, - "y250_step_height": 21.746442059503984, - "y250_transition_length": 86.9019208874379, - "central_slot_width": 30.797237922946643, + "y250_step_height": 21.266846651941208, + "y250_transition_length": 80.0, + "central_slot_width": 30.0, "pylon_count": 2, "pylon_spacing": 320, "pylon_major_axis": 38, diff --git a/RL/uv.lock b/RL/uv.lock new file mode 100644 index 0000000..895dd7f --- /dev/null +++ b/RL/uv.lock @@ -0,0 +1,8 @@ +version = 1 +revision = 3 +requires-python = ">=3.12" + +[[package]] +name = "rl" +version = "0.1.0" +source = { virtual = "." 
} diff --git a/learning/images/decentralized_training_issues.png b/learning/images/decentralized_training_issues.png new file mode 100644 index 0000000..063215e Binary files /dev/null and b/learning/images/decentralized_training_issues.png differ diff --git a/learning/images/diloco.png b/learning/images/diloco.png new file mode 100644 index 0000000..a023b2e Binary files /dev/null and b/learning/images/diloco.png differ diff --git a/learning/images/distributed_arch.png b/learning/images/distributed_arch.png new file mode 100644 index 0000000..1f1477c Binary files /dev/null and b/learning/images/distributed_arch.png differ diff --git a/learning/images/swarm_parallelism.png b/learning/images/swarm_parallelism.png new file mode 100644 index 0000000..71b432d Binary files /dev/null and b/learning/images/swarm_parallelism.png differ diff --git a/learning/pccl.md b/learning/pccl.md new file mode 100644 index 0000000..fc8f7f8 --- /dev/null +++ b/learning/pccl.md @@ -0,0 +1,111 @@ +# PCCL: Notes + +Paper link: https://arxiv.org/pdf/2505.14065 + +Read the Prime Intellect paper on PCCL and dug in. This is one of those rare “we actually built it to survive the real internet” comms stacks. Not a facelift on HPC assumptions. A rethink around churn, WAN latency, and training loops that must stay bit-aligned without babying the system. + +## Why this exists + +Classic collectives (MPI/NCCL) assume the world is kind: fixed membership, stable links, low jitter, symmetric throughput. Over the public internet, none of that holds. Peers arrive late, drop mid-iteration, and routes behave asymmetrically. PCCL is built for that world: dynamic peers, interruptible collectives, and state that remains bit-identical despite chaos. + +## Master–client model (and why “authoritarian” is good) + +There’s a master, and there are peers. The master doesn’t move bytes for your tensors; it coordinates. It tracks who’s in, who’s applying, what the ring is, and whether anyone is out of sync. 
That tight control isn’t overhead for the sake of it—it’s how you shrink the legal state space so failure handling is tractable instead of combinatorial. + +Peer lifecycle is strict for a reason: +- Request → Registered → Accepted (Active group) +- Only accepted peers participate in collectives and vote to admit newcomers +- No free-for-all joining in the middle of a collective + +That discipline lets you unwind errors deterministically and avoid “zombie” states that stall the run. + +## Topology: ring order by bandwidth, not vibes + +All-reduce is ring-based here. Order matters a lot when your links are WAN. PCCL measures pairwise bandwidth and treats ring construction like an asymmetric TSP. Asymmetric is key: A→B may not equal B→A thanks to routing reality. + +Two passes: +- Fast heuristic to get you moving now +- Background “moonshot” to improve the ring later + +When a better order appears, peers rewire their p2p links. Colocated machines naturally cluster; cross-DC hops get minimized. Practical, not precious. + +## Shared state: bit parity or it didn’t happen + +Every accepted peer is expected to have the exact same weights and optimizer state. PCCL enforces this by hashing the shared state and letting the master spot stragglers. Out-of-date peers pull the missing bytes p2p from an up-to-date one. The hashing kernel is deterministic across GPU generations and mirrored on CPU with the same reduction shape so the identity holds. Result: no silent drift, no accidental forked worlds. + +## One major thing at a time + +PCCL’s golden rule: a group does only one major operation at a time. +- Accept/remove peers (topology update) +- Sync shared state +- Run a collective + +Everything is gated via micro-consensus. That’s how you guarantee lockstep semantics and make abort-and-retry safe. If a peer dies mid-collective, the operation is canceled, the peer is marked gone, and you retry without it. Buffers roll back to pre-op contents—no half-applied garbage. 
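As a toy illustration of the bit-parity check (my own sketch, not PCCL's actual hashing kernel): digest the raw bytes of every shared buffer in a fixed traversal order, and let the master compare digests to spot drifted peers. Stdlib `array` buffers stand in for GPU tensors here.

```python
import hashlib
from array import array

def shared_state_digest(state):
    # deterministic digest over the raw bytes of every shared buffer,
    # visited in a fixed (sorted-name) order -- a stand-in for PCCL's
    # GPU/CPU-mirrored hashing of weights + optimizer state
    h = hashlib.sha256()
    for name in sorted(state):
        h.update(name.encode())
        h.update(state[name].tobytes())
    return h.hexdigest()

peer_a = {"weights": array("f", [0.5] * 16), "step": array("q", [10])}
peer_b = {"weights": array("f", [0.5] * 16), "step": array("q", [10])}
assert shared_state_digest(peer_a) == shared_state_digest(peer_b)  # bit parity holds

peer_b["weights"][0] += 1e-7                                       # tiny drift...
assert shared_state_digest(peer_a) != shared_state_digest(peer_b)  # ...is still drift
```

The point of hashing bytes rather than comparing values with a tolerance: "close enough" is exactly the silent-fork failure mode PCCL is designed to rule out.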
+ +## The all-reduce, built for aborts + +The implementation is the N−1 ring you expect, pipelined and chunked: +- Reduce-scatter: accumulate your slice over world_size−1 steps +- All-gather: circulate the completed slices until everyone has the full tensor +- Finalize (AVG divides by world_size, etc.) + +Crucially, the send/recv loop periodically checks for master abort signals without adding IO overhead. On abort, local state restores immediately to a snapshot taken at start-of-op. Zero-copy still applies: user buffers are used directly, but the system remains interruptible. + +## Make WAN your ally: concurrency and multi-connection + +Single TCP flow over a long fat pipe rarely hits line rate thanks to window growth and per-flow fairness. PCCL exploits multiple concurrent TCP connections and concurrent collectives: +- Multiple parameter tensors can reduce concurrently +- Multiple TCP flows per peer-to-peer link lift aggregate throughput on WAN +- Effective bandwidth is the sum over those overlapping ops + +This is the Internet version of “don’t starve the pipe.” + +## Algorithms this actually enables + +- DDP: straightforward with PCCL primitives, but chatty over WAN. Works, not ideal. +- DiLoCo: inner steps train locally for H steps, outer step reduces parameter deltas and applies with an outer optimizer. Turn H up to reduce comms; at H=1 with SGD(1.0) it approximates DDP. +- Async DiLoCo: overlap comms with compute. Compute the delta, launch reduce in the background, keep training. Apply the reduced update one outer step later. If compute time roughly matches comms, you effectively hide the comm cost. 
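The inner-outer scheme is easier to see in a dependency-free toy (my simplifications: plain SGD inner steps instead of AdamW, a simple momentum outer step instead of Nesterov, identical workers, and a 1-D quadratic loss):

```python
def inner_steps(w, h, lr=0.05):
    # h local gradient steps on L(w) = (w - 3)^2; no communication here
    for _ in range(h):
        w -= lr * 2.0 * (w - 3.0)
    return w

def diloco(world_size=4, outer_rounds=20, h=10, outer_lr=1.0, momentum=0.5):
    w_global, velocity = 0.0, 0.0
    for _ in range(outer_rounds):
        # each worker starts from the shared weights and trains locally for h steps
        deltas = [w_global - inner_steps(w_global, h) for _ in range(world_size)]
        pseudo_grad = sum(deltas) / world_size   # the only all-reduce per outer round
        velocity = momentum * velocity + pseudo_grad
        w_global -= outer_lr * velocity          # outer (momentum) update
    return w_global

w = diloco()
assert abs(w - 3.0) < 0.1   # converges to the minimum at w = 3
```

With `h=1` and no outer momentum this collapses toward the DDP-like regime the notes mention; raising `h` trades communication rounds for staler pseudo-gradients.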
+ +Peer churn with Async DiLoCo needs choreography: +- Don’t overlap major ops with collectives +- Drain in-flight collective, then accept newcomers and sync once +- New peers “eavesdrop” the just-finished outer state via an extra shared-state sync to land in exact bit-parity +- Resume the one-step-behind pipeline + +## Fault tolerance is a design constraint, not an afterthought + +Previous attempts failed because “join whenever” makes the state machine explode. PCCL narrows legal transitions, demands consensus, and tests hard: +- Long-running stress tests kill and respawn peers every ~0.5–1s +- Mixed OS socket behavior is handled by design and CI, not wishful thinking +- Threading primitives are tuned to saturate full-duplex links without introducing wakeup latency bottlenecks + +The result isn’t magic; it’s engineering guardrails. + +## What this unlocks in practice + +- Training across DCs and clouds without a VPN +- Spot-heavy fleets with a small reliable master keeping the run coherent +- Exact state recovery: peers restore to the last verified shared-state hash, avoiding “mystery divergence” after restart +- Research room for comm-lean optimizers and dynamic membership strategies without re-implementing fault tolerance from scratch + +## Mental model to keep in your head + +- Master is control plane only. Data planes are peer-to-peer. +- Groups advance via small unanimous votes. If anything’s off, abort quickly and retry from a clean, known state. +- Topology is a living thing. Measure, solve, rewire, repeat. +- Determinism beats heroics. Hash, verify, sync. Then go. 
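The N−1-step ring (reduce-scatter, then all-gather) can be simulated sequentially in plain Python: one list per rank standing in for a GPU buffer, with network sends replaced by in-place adds and copies. A correctness-only sketch, with none of the pipelining, abort checks, or snapshots described above:

```python
def ring_all_reduce(data):
    # data[r][c] = rank r's local value for chunk c; world_size == number of chunks
    n = len(data)
    # phase 1 -- reduce-scatter: after n-1 steps, rank r owns the full
    # sum of chunk (r+1) % n
    for step in range(n - 1):
        for r in range(n):
            c = (r - step) % n
            data[(r + 1) % n][c] += data[r][c]
    # phase 2 -- all-gather: circulate each finished chunk around the ring
    # until every rank holds every sum
    for step in range(n - 1):
        for r in range(n):
            c = (r + 1 - step) % n
            data[(r + 1) % n][c] = data[r][c]
    return data

world = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]   # 3 ranks, 3 chunks
assert ring_all_reduce(world) == [[12.0, 15.0, 18.0]] * 3      # every rank has the sums
```

Each rank only ever talks to its ring neighbor, which is why the bandwidth-aware ring ordering from the topology section matters so much over WAN links.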
+ +![pccl_mental_model](./pccl.png) + +## Quick checklist if you’re integrating + +- Wire up the communicator per trainer process; connect to master early +- Admit peers only through the accept phase; never side-door them +- Call topology update at sensible cadence; let the solver improve the ring +- Enforce shared-state sync before first collective after membership change +- Batch parameter tensors to enable concurrent reduces +- For Async DiLoCo, build the one-step-behind application and the double-sync join path + +## Closing take + +PCCL treats the internet like the hostile substrate it is and still keeps model state bit-aligned across churn. The strict FSM and micro-consensus aren’t ceremony—they’re how you make abortable, zero-copy collectives safe at scale. If you need WAN-grade collectives with dynamic membership and you care about exactness, this is the right set of trade-offs. diff --git a/learning/pccl.png b/learning/pccl.png new file mode 100644 index 0000000..91f4713 Binary files /dev/null and b/learning/pccl.png differ diff --git a/learning/readme.md b/learning/readme.md new file mode 100644 index 0000000..f9f58ab --- /dev/null +++ b/learning/readme.md @@ -0,0 +1,46 @@ +# Understand distributed training + +So our goal is to cluster 4 of our laptops, train them on 4 different things, pool them for a value network evaluation, and then use the reward to optimize the policy in each one of them. + +Detailed diagram: +![Distributed Training Architecture](images/distributed_arch.png) + +The first blog I'm going through is from [primeintellect.ai](https://primeintellect.ai/). The link: https://www.primeintellect.ai/blog/our-approach-to-decentralized-training + +I think they solve the same problem: pooling resources from around the world and using them to train models. + +--- +## Important takeaways: + +> In the decentralized training paradigm, we have access to relatively cheap compute power, but communication between instances is costly.
For instance, we could harness unused compute (e.g., cheap spot instances) from various sources, but with the drawback that these instances can be located all around the world. +![Decentralized Training Issues](images/decentralized_training_issues.png) + +> **Distributed Low-Communication Training (DiLoCo)**: what if, instead of syncing the gradient at every step, you synced only every 500 steps? This would spread the 16 seconds of communication over minutes of computation, maximizing GPU utilization and reducing training time. DiLoCo introduces an inner-outer optimization algorithm that allows both local and global updates. Each worker independently updates its weights multiple times using a local AdamW optimizer (inner optimization). Every ~500 updates, the algorithm performs an outer optimization using the Nesterov momentum optimizer, which synchronizes all workers' pseudo gradients (the sum of all local gradients). +![DiLoCo Algorithm](https://cdn.prod.website-files.com/66239f0441b09824acb92c7e/66fd9e0d4fee5ccaa3a6b727_6626a65822c5cb4f9f252ecd_image%2520(5).png) +![DiLoCo](images/diloco.png) + +> **DiPaCo (Distributed Path Composition)**: DiPaCo uses a coarse routing mechanism. Routing happens at the sequence level (in contrast to the token level), greatly reducing the amount of communication needed at each step. Additionally, the routing decisions are made offline before training, allowing data to be pre-sharded. Each worker then processes data specific to one path only. In their experiments, they trained a sparse model with 256 possible paths, each with 150M active parameters, which outperforms a 1B dense-parameter baseline. These paths are not totally independent; otherwise the approach would be equivalent to training 256 different models. The paths share common blocks that are kept in sync using DiLoCo.
+![DiPaCo](https://cdn.prod.website-files.com/66239f0441b09824acb92c7e/66fd9e0d4fee5ccaa3a6b73d_6626a66e6c6884f3d56c76dc_image%2520(6).png) + +> **SWARM Parallelism**: presents a more flexible form of pipeline parallelism. The path for completing a full forward and backward pass is not fixed and may vary over time. Each worker may send its output to any other worker in the subsequent stage. Faster devices receive more tasks to prevent GPU idle time, also enabling the use of non-homogeneous hardware. If a worker dies, the tasks assigned to it are redirected to others, making the algorithm fault tolerant. Paths are determined stochastically and on the fly. Each worker in a stage is placed in a priority queue based on its recorded performance in the pipeline over time. Workers that consistently perform tasks faster—due to better hardware or co-location with preceding workers—are more likely to be picked. If a worker fails to respond to a request, it is temporarily excluded from the queue until it re-announces itself. This dynamic allows the system to operate on preemptible instances or adaptively rebalance nodes within the swarms. +![SWARM PARALLELISM](images/swarm_parallelism.png) + +--- + +The blog goes on with a few more methods like **Varuna**. Why I thought I needed this base: 1. I want to understand how to pool resources from different machines and use them for training. 2. I want to understand how to reduce communication overhead. + +But this blog doesn't map directly onto my setup, since my training runs different models on each laptop, not sharded or distributed training of the same model. The only thing I need to figure out now is how to establish a central connection just for the queue. + +--- +W*t**, I found the idea + +So I need to **host only the queueing server in a central location**; each node just connects to it over a **websocket** and then sends or receives messages from it.
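A minimal sketch of that idea, using plain asyncio TCP streams as a stand-in for websockets (the class and protocol here are made up for illustration): a central broker fans every line one laptop sends out to all other connected laptops.

```python
import asyncio

class CentralQueue:
    """Central queueing server: every message a node sends is fanned out
    to all other connected nodes (one text line per message)."""

    def __init__(self):
        self.outboxes = set()          # one asyncio.Queue per connected node

    async def handle(self, reader, writer):
        outbox = asyncio.Queue()
        self.outboxes.add(outbox)
        pump = asyncio.create_task(self._pump(outbox, writer))
        try:
            while line := await reader.readline():   # b"" at EOF ends the loop
                for q in self.outboxes:
                    if q is not outbox:              # don't echo to the sender
                        q.put_nowait(line)
        finally:
            self.outboxes.discard(outbox)
            pump.cancel()
            writer.close()

    async def _pump(self, outbox, writer):
        while True:                                  # forward queued messages to this node
            writer.write(await outbox.get())
            await writer.drain()

async def demo():
    server = await asyncio.start_server(CentralQueue().handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    r1, w1 = await asyncio.open_connection("127.0.0.1", port)
    r2, w2 = await asyncio.open_connection("127.0.0.1", port)
    w1.write(b"metrics from laptop 1\n")             # laptop 1 publishes
    await w1.drain()
    msg = await r2.readline()                        # laptop 2 receives it
    w1.close(); w2.close()
    server.close()
    await server.wait_closed()
    return msg

print(asyncio.run(demo()))
```

Swapping the transport for an actual websocket library changes the framing, not the shape: a central broker, one persistent connection per laptop, messages relayed through per-client queues.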
+ +--- + +But still, I need to centrally monitor the training. For that I need to collect the resources first, deploy separate docker containers on each of them, and then start the training. **Fault-tolerance** is not really needed here, because if one training run fails, only that laptop is affected; the other 3 can continue. Not needed as of now, but worth thinking about. I also need to establish a **p2p connection** between the central server and each of the training nodes.