Improve TUD W post-processing and try fixing the PR templates #471
Conversation
Walkthrough
The PR refactors benchmark configurations and updates the plotting infrastructure. It renames benchmark entries in the run configuration (e.g., Oktavian to ASPIS-PCA-Replica), introduces new FNG/IPPE/RCR benchmarks, updates benchmark post-processing YAML files with flux-to-spectra naming and new comparative evaluation sections, refactors the plotter for index-aware alignment and data-driven ratio computation, adds an index comparison utility, and includes OpenMC availability checking.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
@mcampos16 FYI
Actionable comments posted: 3
♻️ Duplicate comments (2)
.github/PULL_REQUEST_TEMPLATE/new_feature_template.md (1)
1-8: Remove YAML front matter from PR template. Same issue as in `bug_fix_template.md`: GitHub PR templates don't support YAML front matter. Remove lines 1-8 to make this template work correctly.

.github/PULL_REQUEST_TEMPLATE/new_benchmark.md (1)
1-8: Remove YAML front matter from PR template. Same issue as the other PR templates: YAML front matter is not supported. Remove lines 1-8.
🧹 Nitpick comments (2)
src/jade/post/plotter.py (1)
619-629: Suppressing warnings is reasonable but consider adding a comment explaining why. The `warnings.catch_warnings()` context manager suppresses UserWarnings during scatter plotting. While this cleans up benign warnings from matplotlib, a brief comment explaining the reason would improve maintainability.

```diff
         else:
+            # Suppress benign UserWarnings from matplotlib when plotting
+            # with categorical x-axis data
             with warnings.catch_warnings():
                 warnings.simplefilter("ignore", category=UserWarning)
                 axes[i].scatter(
```

tests/post/test_excel_processor.py (1)
136-147: Tests now aligned with MCNP-only TUD-W workflow. Updating `codelibs` to just `("exp", "exp")` and `("mcnp", "FENDL 3.1d")` plus the single-file length assertion matches the revised data setup and removal of the openmc case. If you want to harden this later, you could additionally assert on expected columns (e.g., presence of the new C/E and error information, as sketched below), but it's not required for this PR.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (40)
All of the following are excluded by `!**/*.csv`:
- tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/: TUD-W_pos1 through pos4, each with "Neutron spectra", "Neutron spectra coarse", "Photon spectra" and "Photon spectra coarse" (16 files)
- tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/: TUD-W_pos1 through pos4, each with "Neutron spectra", "Neutron spectra coarse", "Photon spectra" and "Photon spectra coarse" (16 files)
- tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/: TUD-W_pos1 through pos4, each with "Neutron flux" and "Photon flux" (8 files)
📒 Files selected for processing (14)
.github/PULL_REQUEST_TEMPLATE/bug_fix_template.md (1 hunks)
.github/PULL_REQUEST_TEMPLATE/new_benchmark.md (2 hunks)
.github/PULL_REQUEST_TEMPLATE/new_feature_template.md (1 hunks)
docs/source/dev/insertbenchmarks.rst (1 hunks)
src/jade/post/plotter.py (8 hunks)
src/jade/resources/default_cfg/benchmarks_pp/atlas/TUD-W.yaml (2 hunks)
src/jade/resources/default_cfg/benchmarks_pp/excel/TUD-W.yaml (2 hunks)
src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/Oktavian.yaml (1 hunks)
src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/TUD-W.yaml (1 hunks)
src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (1 hunks)
src/jade/resources/default_cfg/run_cfg.yml (3 hunks)
tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/metadata.json (1 hunks)
tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/metadata.json (0 hunks)
tests/post/test_excel_processor.py (1 hunks)
💤 Files with no reviewable changes (1)
- tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/metadata.json
🧰 Additional context used
🧬 Code graph analysis (1)
src/jade/post/plotter.py (2)
src/jade/post/manipulate_tally.py (1)
compare_data (491-550)
src/jade/config/excel_config.py (1)
ComparisonType (22-28)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (windows-latest, 3.10, false)
🔇 Additional comments (15)
docs/source/dev/insertbenchmarks.rst (1)
248-251: LGTM! Minor text reflow for the `fwhm_frac` parameter documentation. The content is unchanged and formatting is appropriate.

src/jade/post/plotter.py (7)
6-6: LGTM! The `warnings` import is appropriately added to support suppressing benign UserWarnings during scatter plot operations.
21-23: LGTM! Good use of the existing `compare_data` function and `ComparisonType` enum for computing ratios with proper error propagation in CEPlot.
530-548: Good refactor: ratio computation now uses proper error propagation. The refactored flow correctly uses `compare_data` with `ComparisonType.RATIO` to compute C/E values with properly propagated errors. This is a solid improvement over manual ratio computation.
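For reference, assuming uncorrelated relative uncertainties on computation and experiment, the relative error of the ratio combines in quadrature; a standalone sketch of that rule (not the actual `compare_data` implementation):

```python
import numpy as np


def ratio_with_relative_error(c, c_rel_err, e, e_rel_err):
    """Return C/E and its relative error for uncorrelated inputs."""
    ratio = np.asarray(c) / np.asarray(e)
    rel_err = np.sqrt(np.asarray(c_rel_err) ** 2 + np.asarray(e_rel_err) ** 2)
    return ratio, rel_err


print(ratio_with_relative_error(1.1, 0.03, 1.0, 0.04))  # -> ratio 1.1, relative error ~0.05
```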
643-645: Good addition: categorical X-axis handling. Correctly detects categorical/string indices and manually sets xticks to ensure proper rendering when matplotlib doesn't auto-detect them.
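A rough illustration of the kind of handling described (standalone matplotlib, not the actual plotter code):

```python
import matplotlib.pyplot as plt


def scatter_with_categorical_x(ax: plt.Axes, labels: list[str], values: list[float]) -> None:
    # Plot against integer positions, then pin tick positions and labels
    # explicitly so every categorical entry (e.g. an energy-range string) renders.
    positions = range(len(labels))
    ax.scatter(positions, values)
    ax.set_xticks(list(positions))
    ax.set_xticklabels(labels, rotation=45, ha="right")


fig, ax = plt.subplots()
scatter_with_categorical_x(ax, ["1.15-2 MeV", "2-5 MeV", "5-10 MeV"], [0.95, 1.02, 1.1])
```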
551-563: LGTM! The refactored data flow cleanly handles the new `(codelib, values, errors)` tuples and correctly manages subcase extraction using the multi-level index.
1027-1045: LGTM! Consistent use of `warnings.catch_warnings()` to suppress benign matplotlib UserWarnings during scatter operations.
1073-1073: LGTM! Legend positioning changed to `bbox_to_anchor=(1, 1)` for consistent placement across CE-related plots.

src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/Oktavian.yaml (1)
10-12: Coarse photon flux now consistent with fine photon flux binning. Switching the first operator to `by_energy` in the coarse photon flux pipeline aligns it with the fine Photon flux block and keeps `condense_groups` operating on the same energy variable; this looks correct and self-consistent.

tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/metadata.json (1)
4-7: Confirm intended semantics between `jade_run_version` and `jade_version`. Here `jade_run_version` stays at "3.1.0" while `jade_version` is bumped to "4.2.0". If `jade_run_version` denotes the producer of the raw data and `jade_version` the consumer/tooling version, this is fine; otherwise, please double-check that you don't also want to update `jade_run_version`.

src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (1)
1-26: OpenMC TUD-W spectra and coarse sections are consistent with downstream configs. The new `Neutron/Photon spectra` and corresponding `*_coarse` sections are well-structured: they use `by_energy` plus formatting for normal spectra and `condense_groups` with bins that match the CE plot ranges configured in atlas/excel. Naming is consistent across raw, excel, and atlas files.

src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/TUD-W.yaml (1)
1-22: MCNP TUD-W spectra configuration mirrors OpenMC and Excel/Atlas expectations. The `Neutron/Photon spectra` and `*_coarse` blocks are consistent with the OpenMC TUD-W raw config and with the names referenced in the Excel and atlas configurations. Tally IDs (4 and 44) are reused coherently for both fine and coarse processing, and the bin edges align with the CE plot energy intervals.

src/jade/resources/default_cfg/benchmarks_pp/excel/TUD-W.yaml (1)
1-33: Excel TUD-W now cleanly exposes fine spectra percentages and coarse C/E with errors. Linking Neutron(%)/Photon(%) to `Neutron spectra`/`Photon spectra` keeps naming consistent, and re-adding conditional formatting on the photon sheet restores the expected heatmap. The new `CE table` section correctly pulls from `Neutron/Photon spectra coarse`, uses `comparison_type: ratio`, and renames `Value` to `C/E` while also surfacing the associated `Error` column, which matches the intent of the new CE error-bar support.

src/jade/resources/default_cfg/benchmarks_pp/atlas/TUD-W.yaml (1)
1-74: Atlas TUD-W plots correctly wired to spectra and coarse C/E data. The binned `Neutron/Photon spectra` plots now use consistent terminology and explicitly enable error display, aligning with the CEPlot error-bar changes. The three new C/E plots reference `Neutron/Photon spectra coarse` and their subcase energy ranges match the condense_groups bins, so the plotted intervals should correspond cleanly to the coarse groups.

src/jade/resources/default_cfg/run_cfg.yml (1)
90-308: Experimental benchmark catalog looks consistent; please confirm FNG-BKT `nps`. The new/updated experimental entries (ASPIS-PCA-Replica, ASPIS-Fe88, FNG*, IPPE*, RCR-Fe+Ni, Tiara*, TUD-*) follow the existing schema and their names/descriptions match the corresponding Excel configs and tests, so they should integrate smoothly into the run configuration.
One detail to double-check: `FNG-BKT` is configured with `nps: 1e9` while most other new FNG and TUD entries use `1e8`. If that higher particle count is intentional for this particular benchmark, then this block is fine as-is; otherwise, you may want to align it with the others.
@dodu94 Thanks Davide!
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/jade/post/plotter.py (1)
562-607: Subcase axis mapping can desynchronize when some libraries miss subcases. Because `dfs` is rebuilt per library and you enumerate it to pick `axes[i]`, if a given library lacks some of the configured subcases (KeyError branch), the `i` indices for that library will no longer align with the same logical subcase/axis as for the first library. Axes created from the first library's subcase count will remain partially unfilled for other libraries, causing incomplete or misaligned plots.

It would be safer to:
- Derive the list of actually-present subcases once (e.g., from the reference / first target), build axes from that list, and
- For each library, iterate over that same ordered subcase list, plotting only when data exists but always using the same `i` → subcase mapping.

This avoids mislabeling plots and ensures all axes are consistently populated across libraries.
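A sketch of that safer pattern; the names used here (index level "Case", column "Value") are illustrative assumptions, not taken from the plotter:

```python
import pandas as pd
from matplotlib.axes import Axes


def plot_by_subcase(axes: list[Axes], dfs: dict[str, pd.DataFrame], level: str = "Case") -> None:
    """Plot every library onto a fixed subcase -> axis mapping.

    The subcase order is taken once from the first (reference) dataframe, so the
    same axis always corresponds to the same logical subcase, even when a given
    library is missing some subcases.
    """
    reference = next(iter(dfs.values()))
    subcases = list(reference.index.get_level_values(level).unique())
    axis_for = dict(zip(subcases, axes))
    for codelib, df in dfs.items():
        for subcase in subcases:
            try:
                data = df.xs(subcase, level=level)
            except KeyError:
                continue  # this library has no data for the subcase; leave the axis untouched
            axis_for[subcase].plot(data.index, data["Value"], label=codelib)
```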
♻️ Duplicate comments (1)
src/jade/post/plotter.py (1)
620-651: Convert relative errors from `compare_data` to absolute errors before passing to `errorbar`. Here:

```python
axes[i].errorbar(
    dfv.index,
    dfv.values,
    yerr=dfe.values,
    ecolor=COLORS[idx],
)
```

`dfv` holds the C/E ratios and `dfe` comes from `compare_data(..., ComparisonType.RATIO)`, which returns relative errors (per the implementation in `compare_data`). `matplotlib.errorbar` expects absolute errors in the same units as `y`, so this is under-scaling all error bars.

Scale the relative error by the ratio values (mirroring how `BinnedPlot` converts fractional errors to absolute):

```diff
-            axes[i].errorbar(
-                dfv.index,
-                dfv.values,
-                yerr=dfe.values,
+            axes[i].errorbar(
+                dfv.index,
+                dfv.values,
+                yerr=np.abs(dfv.values * dfe.values),
                 # fmt="none",
                 ecolor=COLORS[idx],
```

This preserves `dfv` while putting the error bars in correct absolute units.
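As a quick unit check (numbers made up): a C/E of 1.2 with a 5% relative error should draw an error bar of 0.06 in absolute terms, which is exactly what the `np.abs(dfv.values * dfe.values)` form produces:

```python
import numpy as np

ce = np.array([1.2])        # C/E ratios
rel_err = np.array([0.05])  # relative errors, as returned by a ratio comparison
abs_err = np.abs(ce * rel_err)
print(abs_err)  # [0.06] -> the absolute yerr that errorbar expects
```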
🧹 Nitpick comments (2)
src/jade/post/plotter.py (2)
531-559: Strengthen index alignment handling before forcing target indices to match reference. The new `ref`/`target` setup with `compare_data(..., ComparisonType.RATIO)` looks conceptually right for C/E, but this block:

```python
if not val1.index.equals(val2.index):
    logging.debug("Indices do not match, substituting with experiment")
    val2.index = val1.index
    err2.index = err1.index
```

can silently misalign data if the two series differ in order or length (length mismatch will even raise at assignment). Consider:
- Asserting `len(val1) == len(val2) == len(err2)` before reindexing, and
- Either sorting both by x, or at least logging a warning (not debug) including shapes and maybe a max index delta when indices are numeric.
That will surface genuine data/energy-bin issues instead of quietly reshuffling values.
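A possible shape for that guard, written as a standalone helper; the variable names mirror the snippet above and the exact checks are only a suggestion:

```python
import logging

import numpy as np
import pandas as pd


def align_target_to_reference(val1: pd.Series, val2: pd.Series,
                              err1: pd.Series, err2: pd.Series) -> None:
    """Force the target index onto the reference, but only after sanity checks."""
    if val1.index.equals(val2.index):
        return
    if not (len(val1) == len(val2) == len(err2)):
        raise ValueError("Reference and target lengths differ; cannot align indices")
    if pd.api.types.is_numeric_dtype(val1.index) and pd.api.types.is_numeric_dtype(val2.index):
        delta = np.max(np.abs(val1.index.to_numpy() - val2.index.to_numpy()))
        logging.warning("Indices differ (max delta %.3g); substituting with reference", delta)
    else:
        logging.warning("Non-numeric indices differ; substituting with reference")
    val2.index = val1.index
    err2.index = err1.index
```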
630-641: Scoped suppression of matplotlib UserWarnings looks fine; consider a module logger for consistency. Wrapping the scatter calls in `warnings.catch_warnings()` with UserWarning suppression in both the CE point style and `_apply_CE_limits` is a reasonable way to hide benign matplotlib categoricals/marker warnings without affecting the rest of the code.

Two minor follow-ups you might consider:
- Use a module logger (`logger = logging.getLogger(__name__)`) instead of the root `logging.debug(...)` to keep logging consistent and configurable per module.
- If specific warning messages are known, you might filter on message text or warning subclass to avoid hiding unrelated UserWarnings, though the current approach is acceptable for a plotting-only context.

Also applies to: 1038-1056, 1084-1084
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (7)
All of the following are excluded by `!**/*.csv`:
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Al Reaction rate.csv
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Fe Reaction rate.csv
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_In Reaction rate.csv
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Mn Reaction rate.csv
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Ni-n2n Reaction rate.csv
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-W/FNG-W_In Activation Foils.csv
- tests/dummy_structure/raw_data/_exp_-_exp_/FNG-W/FNG-W_Ni-n2n Activation Foils.csv
📒 Files selected for processing (2)
JADE-RAW-RESULTS (1 hunks)
src/jade/post/plotter.py (8 hunks)
✅ Files skipped from review due to trivial changes (1)
- JADE-RAW-RESULTS
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-22T09:51:20.734Z
Learnt from: mcampos16
Repo: JADE-V-V/JADE PR: 421
File: src/jade/post/manipulate_tally.py:152-158
Timestamp: 2025-09-22T09:51:20.734Z
Learning: In the JADE codebase, when dealing with tally data in src/jade/post/manipulate_tally.py, avoid silently handling divide-by-zero cases in error propagation calculations. If tally["Value"].sum() == 0, the resulting warning/error serves as a useful diagnostic that the tally recorded no results, which users should be made aware of rather than having it masked.
Applied to files:
src/jade/post/plotter.py
🧬 Code graph analysis (1)
src/jade/post/plotter.py (2)
src/jade/post/manipulate_tally.py (1)
compare_data (491-550)
src/jade/config/excel_config.py (1)
ComparisonType (22-28)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.11, false)
Codecov Report: ❌ Patch coverage is …
… and 1 file with indirect coverage changes
Yes, now I changed the coarse binning in order to ignore the first bin as you suggested offline, much better:
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/jade/app/app.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/jade/app/app.py (2)
src/jade/run/benchmark.py (5)
code (75-76), code (271-272), code (300-301), code (322-323), code (336-337)
src/jade/helper/constants.py (1)
CODE (4-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
🔇 Additional comments (1)
src/jade/app/app.py (1)
21-21: LGTM: Import addition is correct. The import of `OMC_AVAIL` follows the existing pattern for optional dependency checks and is used appropriately in the conditional logic below.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/app/test_app.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/app/test_app.py (1)
src/jade/app/app.py (1)
raw_process (194-263)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.11, false)
🔇 Additional comments (1)
tests/app/test_app.py (1)
79-81: LGTM! Comment clarification improves readability. The comment update makes the test's intent clearer.
```python
# try case where openmc is not available on windows
app.raw_process(subset=["TUD-W"])
```
Test is incomplete and doesn't verify the OpenMC availability guard.
This test invocation lacks both setup and verification:
- No mocking: `OMC_AVAIL` is not mocked to `False`, so the test doesn't actually simulate OpenMC being unavailable.
- No assertions: Without any assertions, this test will always pass regardless of whether the availability guard works correctly.
- No verification: The expected behavior (skipping processing with a warning) is not validated.
Apply this diff to properly test the OpenMC availability guard:
```diff
-    # try case where openmc is not available on windows
-    app.raw_process(subset=["TUD-W"])
+    # Test case where OpenMC is not available
+    import jade.app.app as app_module
+    original_omc_avail = app_module.OMC_AVAIL
+    try:
+        app_module.OMC_AVAIL = False
+        # TUD-W should be skipped if it's an OpenMC benchmark
+        with pytest.raises(SystemExit) as exc_info:
+            # Or use caplog to verify warning message
+            app.raw_process(subset=["TUD-W"])
+        # Verify that TUD-W was not processed
+        # (adjust assertion based on expected behavior)
+    finally:
+        app_module.OMC_AVAIL = original_omc_avail
```

Alternatively, use pytest.caplog to capture and verify the warning message:
```diff
-    # try case where openmc is not available on windows
-    app.raw_process(subset=["TUD-W"])
+    # Test case where OpenMC is not available
+    import jade.app.app as app_module
+    original_omc_avail = app_module.OMC_AVAIL
+    try:
+        app_module.OMC_AVAIL = False
+        app.raw_process(subset=["TUD-W"])
+        # Verify the benchmark was skipped (check status or caplog)
+        # Example: assert ("TUD-W", ...) not in app.status.raw_data
+    finally:
+        app_module.OMC_AVAIL = original_omc_avail
```

Committable suggestion skipped: line range outside the PR's diff.
No need to change as it is universal, but the d1s option is redundant for many benchmarks, as is serpent currently!
I know, but a table is built by the GUI using this. The GUI is clearly not the best piece of code in JADE; when we have some effort to spend on improving it we could also tackle this issue.
Can't spot what's happening with the PR templates?
Did you perform the rebinning for TUD-W? I can only see experimental data for the fine group structure with neutrons up to 15 MeV
What I understood is that we cannot have multiple automatic templates like for issues. If we want templates to be used automatically we can only have one; otherwise users should load the template manually in a way I did not fully understand. Maybe we should do some trials on a test repository
No no, I did not change the bins, I just added further post-processing to also create a raw result that collapses the fine structure into a coarser energy-bin one.
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/jade/helper/aux_functions.py (1)
142-162: Consider narrowing the exception handling scope. The function logic is sound, but catching a broad `Exception` on line 160 may mask unexpected errors. Consider catching specific exceptions related to numeric conversion.

Apply this diff to narrow the exception handling:

```diff
         try:
             idx1 = np.array(subindex1, dtype=float)
             idx2 = np.array(subindex2, dtype=float)
             if np.allclose(idx1, idx2, rtol=1e-3):
                 continue
             else:
                 return False
-        except Exception:
+        except (ValueError, TypeError):
             return False
```

This catches the specific exceptions that occur when converting non-numeric data to float, while allowing other unexpected exceptions to propagate.
src/jade/post/plotter.py (1)
548-553: Consider extracting the error message to a constant. While the error message is concise, extracting it to a module-level constant would align with static analysis recommendations and improve maintainability.

Apply this diff if you prefer:

```diff
+# Near the top of the file with other constants
+INDEX_MISMATCH_ERROR = "Indices do not match between reference and target data."
+
 # In the function...
         if same_index(val1.index, val2.index) is False:
             logging.error(
-                f"Indices do not match between reference and {codelib}: "
+                f"{INDEX_MISMATCH_ERROR} Reference vs {codelib}: "
                 f"{val1.index}, {val2.index}"
             )
-            raise RuntimeError("Indices do not match.")
+            raise RuntimeError(INDEX_MISMATCH_ERROR)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (20)
All of the following, under tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/, are excluded by `!**/*.csv`:
- TUD-W_pos1 and pos2: "Neutron flux", "Neutron spectra", "Neutron spectra coarse", "Photon flux", "Photon spectra" and "Photon spectra coarse" (12 files)
- TUD-W_pos3 and pos4: "Neutron spectra", "Neutron spectra coarse", "Photon spectra" and "Photon spectra coarse" (8 files)
📒 Files selected for processing (5)
src/jade/helper/aux_functions.py (2 hunks)
src/jade/post/plotter.py (8 hunks)
src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (1 hunks)
tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/metadata.json (1 hunks)
tests/helper/test_auxiliary_func.py (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-22T09:51:20.734Z
Learnt from: mcampos16
Repo: JADE-V-V/JADE PR: 421
File: src/jade/post/manipulate_tally.py:152-158
Timestamp: 2025-09-22T09:51:20.734Z
Learning: In the JADE codebase, when dealing with tally data in src/jade/post/manipulate_tally.py, avoid silently handling divide-by-zero cases in error propagation calculations. If tally["Value"].sum() == 0, the resulting warning/error serves as a useful diagnostic that the tally recorded no results, which users should be made aware of rather than having it masked.
Applied to files:
src/jade/post/plotter.py
📚 Learning: 2025-09-22T09:52:06.987Z
Learnt from: mcampos16
Repo: JADE-V-V/JADE PR: 421
File: src/jade/post/manipulate_tally.py:265-271
Timestamp: 2025-09-22T09:52:06.987Z
Learning: In the JADE codebase, for the cumulative_sum function in src/jade/post/manipulate_tally.py, do not add divide-by-zero guards during normalization. If the last value of a cumulative sum is 0, the resulting divide-by-zero error serves as a useful diagnostic that the tally recorded no results in any bins, which users should be made aware of rather than having it masked by silent error handling.
Applied to files:
src/jade/post/plotter.py
🧬 Code graph analysis (2)
src/jade/post/plotter.py (3)
src/jade/helper/aux_functions.py (1)
same_index (142-162)
src/jade/config/excel_config.py (1)
ComparisonType (22-28)
src/jade/post/manipulate_tally.py (1)
compare_data (491-550)
tests/helper/test_auxiliary_func.py (1)
src/jade/helper/aux_functions.py (3)
get_code_lib (24-39), print_code_lib (42-68), same_index (142-162)
🪛 Ruff (0.14.8)
src/jade/post/plotter.py
553-553: Avoid specifying long messages outside the exception class
(TRY003)
src/jade/helper/aux_functions.py
160-160: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.12, false)
- GitHub Check: test (ubuntu-latest, 3.11, false)
- GitHub Check: test (windows-latest, 3.12, false)
- GitHub Check: test (windows-latest, 3.10, false)
- GitHub Check: test (windows-latest, 3.11, false)
- GitHub Check: test (ubuntu-latest, 3.10, false)
🔇 Additional comments (9)
src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (4)
1-8: LGTM: Neutron spectra post-processing looks good. The renaming from "Neutron flux" to "Neutron spectra" aligns with the PR objectives, and the addition of scale factor and column cleanup are appropriate post-processing steps.
Please verify the scale factor `0.02284` is correct for this benchmark. This appears to be a normalization constant, but it should be double-checked against the benchmark specifications.
9-15: LGTM: Photon spectra changes are consistent. The changes mirror the neutron spectra updates and maintain consistency. The increase in decimal precision for Energy (2→3) provides better resolution for photon energies.
25-32: Same preprocessing verification needed as neutron coarse. The "Photon spectra coarse" section has the same structure as the neutron coarse section (missing `by_energy`, `format_decimals`, `delete_cols` operations). The same verification concerns apply.

Confirm that the preprocessing requirements are consistent between neutron and photon coarse sections, and that the photon bins starting at 0.4 MeV (not 0) are intentional and handle lower-energy photons appropriately.
17-24: No issues found with the "Neutron spectra coarse" configuration. The absence of `by_energy`, `format_decimals`, and `delete_cols` operations is intentional. The `condense_groups` operation works independently and doesn't require `by_energy` preprocessing; it directly aggregates values from binned tallies into coarser energy groups. The bin structure starting at 1.15 MeV aligns with the PR's discussed intent to exclude the problematic lower energy range, and the `pd.cut` binning operation with `right=False` will naturally exclude any data below this threshold. This configuration is consistent with how `condense_groups` is used elsewhere in the codebase (e.g., MCNP TUD-W, IPPE-DT benchmarks).
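A small standalone illustration of that `pd.cut` behaviour (plain pandas, not the JADE `condense_groups` code): with left-closed bins, values below the first edge fall outside every interval and come back as NaN, so they drop out of the coarse groups.

```python
import pandas as pd

energies = pd.Series([0.5, 1.0, 1.2, 3.0, 7.5])  # MeV, made-up values
bins = [1.15, 2.0, 5.0, 10.0]                     # coarse edges starting above the excluded range
groups = pd.cut(energies, bins=bins, right=False)
print(groups)
# 0.5 and 1.0 -> NaN (below 1.15); 1.2, 3.0 and 7.5 land in [1.15, 2.0), [2.0, 5.0), [5.0, 10.0)
```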
22-37: LGTM! Comprehensive test coverage for index comparison. The test function covers a good range of scenarios including identical indices, numerical differences, type mismatches, and length differences. This provides solid validation for the new `same_index` utility function.

src/jade/post/plotter.py (3)
530-565: Excellent refactoring of ratio computation logic. The data-driven approach with index alignment is a significant improvement:
- Index comparison with tolerance handles floating-point differences (see the small example after this list)
- Clear error logging when indices don't match
- Use of `compare_data` centralizes error propagation logic
- Structured data storage in `to_plot` improves maintainability
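To make the tolerance point concrete, a standalone example of comparing two numeric indices with a relative tolerance (mirroring the `np.allclose(..., rtol=1e-3)` check quoted elsewhere in this review, not the actual `same_index` code):

```python
import numpy as np
import pandas as pd

idx_exp = pd.Index([0.1, 0.5, 1.0, 2.0])     # experimental energy grid
idx_sim = pd.Index([0.1001, 0.5, 1.0, 2.0])  # simulated grid with tiny floating-point drift

# A relative tolerance of 1e-3 treats these as the same grid.
same = np.allclose(idx_exp.to_numpy(dtype=float), idx_sim.to_numpy(dtype=float), rtol=1e-3)
print(same)  # True
```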
636-657: Clarify the rationale for suppressing UserWarnings. The error bar calculation is correct (converting relative errors to absolute by multiplying with the ratio values). However, suppressing UserWarning around the scatter and errorbar plots may hide legitimate issues such as:
- Empty data sequences
- Invalid data values (NaN, inf)
- Axis limit problems
Could you clarify:
- What specific warnings are you encountering that require suppression?
- Have you verified these warnings are benign?
Consider using `warnings.filterwarnings` with a specific warning message pattern instead of suppressing all UserWarnings:

```python
# More targeted suppression
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="specific warning text", category=UserWarning)
    # ... plotting code
```
1044-1062: Same concern: blanket UserWarning suppression may hide important issues. The warnings context around the scatter plots in `_apply_CE_limits` has the same concern as the earlier suppression. Consider whether all UserWarnings should be suppressed or if a more targeted approach is appropriate.

Based on learnings from the codebase, diagnostic warnings (like divide-by-zero) serve important purposes. Ensure that suppressing UserWarnings here doesn't mask meaningful diagnostics that users should be aware of.
tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/metadata.json (1)
6-8: No action needed. This metadata.json is test fixture data that documents the code version used to generate the test data, not a production dependency specification. OpenMC 0.15.0 is a valid released version, and the `code_version` field serves as informational metadata only; the codebase does not enforce version compatibility checks against this field. Different test fixtures use various versions without issue.

Likely an incorrect or invalid review comment.
@dodu94 and I spotted an issue related to how an index mismatch between the dataframes used to compute a ratio was handled for the CE plots.
@alexvalentine94 as you can see there was a lot going on here, but we should be ready for merging; codecov is very close to target anyway.
alexvalentine94 left a comment:
Good job both, nice improvements!

A new Excel table and plot have been added for the TUD-W grouped spectra tallies C/E.

The CEPlot object was modified to introduce error bars.
While I was at it, I updated the run cfg file, which was missing many of the new benchmarks, and I tried to make changes to the PR templates, which are currently not appearing when opening a new PR.
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation