- Build: `cargo build`
- Run default simulation: `cargo run`
- Fontana generator demo: `cargo run -- --config-file /path/to/fontana_config.json`
- Dump default config: `cargo run -- --dump-config`
- Generate samples from a config via the CLI: `cargo run -- --config-file /tmp/fontana_cfg.json --generate 3`
- Run experiments: `cargo run -- --experiment <name>` (exact invocation still unverified)
- `src/generators.rs`: `FontanaGen::from_config` now uses `min_depth`, `max_depth`, and `free_variable_probability`, clamps probability ranges, checks depth bounds, and removed the unused local RNG binding.
- Added defensive helpers to avoid divide-by-zero (`max_depth` guard), empty ranges when depth is zero, and runaway probabilities (clamped per depth). Generation can now use the configured `n_max_free_vars` safely.
- CLI `--generate` now obeys the active generator in the loaded config, so Fontana samples can be printed without editing source.
- `src/generators.rs:202-217`: `min_depth` and `free_prob` are hard-coded to `0`, ignoring `config::FontanaGen.min_depth` and `.free_variable_probability`. (But `config.rs` seems to define the same values; do we hard-code or read from config?)
- `src/generators.rs:197-200`: the local `rng` binding is never used; we instantiate the struct with `ChaCha8Rng::from_seed(seed)` directly, so this variable only triggers warnings.
- `src/generators.rs:197-200`: division by `cfg.max_depth - 1` won't work if `max_depth <= 1`.
- `src/generators.rs:236-279`: `p_abs`/`p_app` can exceed `1.0`, meaning leaf nodes might never be produced before hitting maximum depth; when `depth == 0`, `rng.gen_range(1..=depth)` is an empty range.
- `src/main.rs:156`: `--generate` always uses `BTreeGen`; it needs to handle `FontanaGen` too.
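The depth and probability guards above can be sketched as small helpers. This is an illustrative sketch, not the crate's actual code: the names `clamp_probability` and `depth_fraction` are hypothetical, chosen to mirror the `max_depth` and `free_variable_probability` config fields mentioned in the notes.

```rust
// Hypothetical defensive helpers for the generator; names are illustrative.
fn clamp_probability(p: f64) -> f64 {
    // Keep per-depth probabilities in [0, 1] so leaf nodes stay reachable.
    p.clamp(0.0, 1.0)
}

fn depth_fraction(depth: usize, max_depth: usize) -> f64 {
    // Guard the `max_depth - 1` divisor: with max_depth <= 1 the
    // subtraction underflows (usize) or the division degenerates.
    if max_depth <= 1 {
        0.0
    } else {
        depth as f64 / (max_depth - 1) as f64
    }
}

fn main() {
    assert_eq!(clamp_probability(1.7), 1.0);
    assert_eq!(clamp_probability(-0.3), 0.0);
    assert_eq!(depth_fraction(3, 1), 0.0); // guarded, no underflow
    assert!((depth_fraction(2, 5) - 0.5).abs() < 1e-12);
}
```

The same guard applies to the `rng.gen_range(1..=depth)` call: `gen_range` panics on an empty range, so a `depth == 0` check is needed before sampling.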
- Removed unused helper functions and tightened imports in `src/experiments/magic_test_function.rs`; wrapped tests in `#[cfg(test)]` to avoid dev-build noise.
- Added documentation for `config::FontanaGen`, so the `#[warn(missing_docs)]` lint is satisfied. `cargo build` and `cargo test` now run warning-free.
- `src/generators.rs:134-136`: `postfix_standardize` calls `unimplemented!`. Any config selecting `Standardization::Postfix` crashes. Need to add the transformation or disable the option in configs/CLI until it's ready.
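One way to disable the option without a panic is to return an error from the dispatch instead of calling `unimplemented!`. A minimal sketch, assuming names like the `Standardization` enum from the notes; the function signature and error type here are illustrative, not the crate's real API:

```rust
// Illustrative sketch: reject Postfix gracefully instead of panicking.
#[derive(Debug)]
enum Standardization {
    Prefix,
    Postfix,
}

fn standardize(expr: &str, mode: &Standardization) -> Result<String, String> {
    match mode {
        // Placeholder for the real prefix pass.
        Standardization::Prefix => Ok(expr.to_string()),
        // Surface a config error rather than crashing the run.
        Standardization::Postfix => {
            Err("Standardization::Postfix is not implemented yet".to_string())
        }
    }
}

fn main() {
    assert!(standardize("\\x.x", &Standardization::Postfix).is_err());
    assert_eq!(standardize("\\x.x", &Standardization::Prefix).unwrap(), "\\x.x");
}
```

This keeps the config option visible while making the failure mode recoverable at the CLI level.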
- `src/lambda/recursive.rs:227-232`: tuples are pushed as `(expr, size, reductions)` but collected as reductions → `t.1`, sizes → `t.2`, swapping the data. **Does this impact anything?** It would affect the analytics: experiment outputs would report reduction counts and sizes swapped.
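A minimal reproduction of the swap, with illustrative data (not the crate's actual types): tuples pushed as `(expr, size, reductions)` must be read back with the same indices.

```rust
fn main() {
    // Pushed order: (expr, size, reductions).
    let results: Vec<(&str, usize, usize)> = vec![("\\x.x", 3, 7), ("\\x.\\y.x", 5, 2)];

    // Buggy read: reductions <- t.1, sizes <- t.2 (indices swapped).
    // Correct read keeps positions aligned with the push order:
    let sizes: Vec<usize> = results.iter().map(|t| t.1).collect();
    let reductions: Vec<usize> = results.iter().map(|t| t.2).collect();

    assert_eq!(sizes, vec![3, 5]);
    assert_eq!(reductions, vec![7, 2]);
}
```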
- `src/analysis.rs:57-68`: calculates `intersection / (|A| + |B|)`; it should be divided by `|A ∪ B|` instead.
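The corrected similarity is the standard Jaccard index, |A ∩ B| / |A ∪ B|. A self-contained sketch (the function name and set element type are illustrative):

```rust
use std::collections::HashSet;

// Jaccard similarity: |A ∩ B| / |A ∪ B|, not |A ∩ B| / (|A| + |B|).
fn jaccard(a: &HashSet<u32>, b: &HashSet<u32>) -> f64 {
    let inter = a.intersection(b).count() as f64;
    let union = a.union(b).count() as f64;
    // Two empty sets: define similarity as 0 to avoid 0/0.
    if union == 0.0 { 0.0 } else { inter / union }
}

fn main() {
    let a: HashSet<u32> = [1, 2, 3].into_iter().collect();
    let b: HashSet<u32> = [2, 3, 4].into_iter().collect();
    // |A ∩ B| = 2, |A ∪ B| = 4.
    assert!((jaccard(&a, &b) - 0.5).abs() < 1e-12);
}
```

Note that dividing by |A| + |B| double-counts the intersection; the related Dice coefficient uses 2·|A ∩ B| / (|A| + |B|), which may be what the original code intended to halve.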
- `scripts/discovery.sh:15` (and similar files): `cd ~/cwd/functional-supercollider`, which doesn't exist here. The repo can be found at https://github.com/mathis-group/Alchemy-Dashboard
```sh
pyenv shell 3.11.9
python -c 'import sys,platform; print(sys.executable); print(platform.python_version(), platform.machine())'
cargo clean
python -m pip uninstall -y alchemy || true
rm -rf target
python -m pip install -e .
python - <<'PY'
import alchemy, sys
print("OK :", alchemy.__file__)
print("Py :", sys.executable)
PY
```