
Conversation

@dodu94 (Member) commented Dec 5, 2025

New excel table and plot have been added for the TUD-W grouped spectra tallies C/E:
[screenshots: new Excel C/E table and plot for the TUD-W grouped spectra]

The CEPlot object was modified to introduce error bars.

While I was at it, I updated the run cfg file, which was missing many of the new benchmarks, and I tried to fix the PR templates, which currently do not appear when opening a new PR.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added 15+ new benchmark experiments (ASPIS-PCA-Replica, FNG series, IPPE series, RCR series, Tiara series, ISIS-800MeV-C)
    • Enhanced Calculation/Experiment (C/E) plotting with improved data alignment and error handling
  • Bug Fixes

    • Improved handling when OpenMC is unavailable
    • Fixed ratio computation and index alignment in plots
  • Documentation

    • Updated benchmark terminology ("flux" → "spectra")


@coderabbitai (Contributor) bot commented Dec 5, 2025

Walkthrough

The PR refactors benchmark configurations and updates the plotting infrastructure. It renames benchmark entries in the run configuration (e.g., Oktavian to ASPIS-PCA-Replica) and introduces new FNG/IPPE/RCR benchmarks, updates benchmark post-processing YAML files with flux-to-spectra renaming and new comparative evaluation sections, refactors the plotter for index-aware alignment and data-driven ratio computation, adds an index comparison utility, and includes OpenMC availability checking.

Changes

  • PR Template & Documentation (.github/PULL_REQUEST_TEMPLATE/new_benchmark.md, docs/source/dev/insertbenchmarks.rst): Updated PR template checklist for YAML file requirements; minor reflow and indentation adjustments in benchmark documentation.
  • Plotting Refactoring (src/jade/post/plotter.py): Reworked CE plotting with index alignment: added same_index import, introduced subcase-aware target indexing, replaced inline ratio calculations with compare_data(comparison_type=ComparisonType.RATIO), added warnings suppression contexts, and restructured the plotting loop to handle aligned (values, errors) tuples.
  • Benchmark Post-Processing Configurations (src/jade/resources/default_cfg/benchmarks_pp/(atlas, excel, raw/mcnp, raw/openmc)/TUD-W.yaml, src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/Oktavian.yaml): Renamed flux sections to spectra; added new C/E plot sections with subcase grouping (Energy bins); updated Excel tables with ratio comparison type and composite axes; changed mcnp aggregation from lethargy to by_energy; added coarse-binned spectra sections with condense_groups.
  • Benchmark Run Configuration (src/jade/resources/default_cfg/run_cfg.yml): Major restructuring: renamed Oktavian → ASPIS-PCA-Replica; reorganized benchmarks into FNG (SDDR, BKT, W, SiC, SS, HCPB), IPPE (DT, CF), RCR (Fe+Ni, SS), and Tiara families; updated nps values and detector descriptions; added ISIS-800MeV-C entry.
  • Helper Utilities (src/jade/helper/aux_functions.py): Added new public function same_index() to compare pandas Index/MultiIndex objects with numeric tolerance support (rtol=1e-3).
  • App & OpenMC Handling (src/jade/app/app.py): Added import of OMC_AVAIL; skip raw processing for OPENMC benchmarks when OMC_AVAIL is False (non-forced mode).
  • Tests (tests/app/test_app.py, tests/helper/test_auxiliary_func.py, tests/post/test_excel_processor.py, tests/dummy_structure/raw_data/_*/TUD-W/metadata.json): Added test invocation for the OpenMC unavailability scenario; added test_same_index() covering Index/MultiIndex equality checks; removed the openmc codelib from the excel processor test; updated metadata jade_version to "4.2.0" and openmc metadata code/version fields.
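As a rough illustration of the same_index() helper listed above, a minimal sketch could look like the following. This is a hypothetical implementation; the actual code lives in src/jade/helper/aux_functions.py and may differ in signature and edge-case handling.

```python
import numpy as np
import pandas as pd


def same_index(idx1: pd.Index, idx2: pd.Index, rtol: float = 1e-3) -> bool:
    """Compare two pandas Index objects, allowing numeric tolerance.

    Sketch of the helper described in the walkthrough: MultiIndex
    objects are compared level by level, numeric levels with a
    relative tolerance, everything else with exact equality.
    """
    if len(idx1) != len(idx2):
        return False
    if isinstance(idx1, pd.MultiIndex) or isinstance(idx2, pd.MultiIndex):
        if idx1.nlevels != idx2.nlevels:
            return False
        return all(
            same_index(idx1.get_level_values(i), idx2.get_level_values(i), rtol)
            for i in range(idx1.nlevels)
        )
    if pd.api.types.is_numeric_dtype(idx1) and pd.api.types.is_numeric_dtype(idx2):
        return bool(np.allclose(idx1.to_numpy(), idx2.to_numpy(), rtol=rtol))
    return idx1.equals(idx2)
```

Such a tolerance is useful here because energy-bin edges written by different codes can differ in the last digits while being physically identical.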

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • Index alignment logic in src/jade/post/plotter.py: Verify the error-raising on index mismatch and the subcase-aware indexing realignment correctly handle both simple and MultiIndex cases
  • Ratio computation via compare_data(): Confirm the use of ComparisonType.RATIO produces expected results and error bars are properly propagated through aligned data
  • Run configuration restructuring in src/jade/resources/default_cfg/run_cfg.yml: Validate benchmark mappings, nps values, and detector renaming/reclassification don't introduce inconsistencies
  • Benchmark YAML structural changes: Ensure new C/E sections with Energy subcase grouping align with plotting expectations and post-processing pipeline
  • Test expectations alignment: Verify test metadata updates and removed openmc entry in excel processor test reflect intended benchmark configuration changes
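For reviewers checking the ratio computation, the standard error propagation for a ratio of two uncorrelated quantities can be sketched as follows. This is an illustration of the expected behavior, not the actual compare_data implementation in src/jade/post/manipulate_tally.py.

```python
import numpy as np


def ratio_with_error(c, c_rel_err, e, e_rel_err):
    """Return C/E values and their propagated relative errors.

    For r = c / e with uncorrelated relative errors, the standard
    rule is (dr/r)^2 = (dc/c)^2 + (de/e)^2, i.e. relative errors
    add in quadrature.
    """
    c = np.asarray(c, dtype=float)
    e = np.asarray(e, dtype=float)
    ratio = c / e
    rel_err = np.sqrt(np.asarray(c_rel_err) ** 2 + np.asarray(e_rel_err) ** 2)
    return ratio, rel_err
```

Note that the result is a relative error; converting it to an absolute error for plotting is exactly the point raised later in this review.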

Possibly related PRs

  • Add isis bench #449: Adds ISIS-800MeV-C benchmark configuration and documentation that complements the run_cfg restructuring in this PR
  • v4.3.0 additions #462: Modifies src/jade/post/plotter.py and related post-processing helpers, including index alignment utilities that overlap with this PR's plotter refactoring
  • Update PR and issue templates #436: Updates .github/PULL_REQUEST_TEMPLATE/new_benchmark.md template in parallel with this PR's template changes

Suggested reviewers

  • alexvalentine94
  • Radiation-Transport

Poem

🐰 Benchmarks renamed with scientific care,
Spectra shine where fluxes were rare,
Indices align in perfect tune,
C/E ratios plotted beneath the moon,
New families grouped—FNG, IPPE soon! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 16.67%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve it.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Improve TUD W post-processing and try fixing the PR templates' directly relates to the primary changes: enhanced TUD-W post-processing (new C/E plots and Excel tables) and attempted PR template updates.


@dodu94 (Member, Author) commented Dec 5, 2025

@mcampos16 FYI

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (2)
.github/PULL_REQUEST_TEMPLATE/new_feature_template.md (1)

1-8: Remove YAML front matter from PR template.

Same issue as in bug_fix_template.md - GitHub PR templates don't support YAML front matter. Remove lines 1-8 to make this template work correctly.

.github/PULL_REQUEST_TEMPLATE/new_benchmark.md (1)

1-8: Remove YAML front matter from PR template.

Same issue as the other PR templates - YAML front matter is not supported. Remove lines 1-8.

🧹 Nitpick comments (2)
src/jade/post/plotter.py (1)

619-629: Suppressing warnings is reasonable but consider adding a comment explaining why.

The warnings.catch_warnings() context manager suppresses UserWarnings during scatter plotting. While this cleans up benign warnings from matplotlib, a brief comment explaining the reason would improve maintainability.

                     else:
+                        # Suppress benign UserWarnings from matplotlib when plotting
+                        # with categorical x-axis data
                         with warnings.catch_warnings():
                             warnings.simplefilter("ignore", category=UserWarning)
                             axes[i].scatter(
tests/post/test_excel_processor.py (1)

136-147: Tests now aligned with MCNP‑only TUD‑W workflow

Updating codelibs to just ("exp", "exp") and ("mcnp", "FENDL 3.1d") plus the single‑file length assertion matches the revised data setup and removal of the openmc case. If you want to harden this later, you could additionally assert on expected columns (e.g., presence of the new C/E and error information), but it’s not required for this PR.
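The optional hardening mentioned here could look like the sketch below. The column names "C/E" and "Error" are taken from the Excel configuration discussion elsewhere in this review; the exact schema of the produced tables may differ.

```python
import pandas as pd


def assert_ce_columns(df: pd.DataFrame) -> None:
    """Hypothetical extra assertion for the excel processor test:
    check that the post-processed table exposes the new C/E value
    and error information."""
    expected = {"C/E", "Error"}
    missing = expected - set(df.columns)
    assert not missing, f"missing expected columns: {missing}"
```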

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba828bf and fb4262c.

⛔ Files ignored due to path filters (40)
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos1 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos1 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos1 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos1 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos2 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos2 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos2 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos2 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos3 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos3 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos3 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos3 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos4 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos4 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos4 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/TUD-W/TUD-W_pos4 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos1 Neutron flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos1 Photon flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos2 Neutron flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos2 Photon flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos3 Neutron flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos3 Photon flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos4 Neutron flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/TUD-W_pos4 Photon flux.csv is excluded by !**/*.csv
📒 Files selected for processing (14)
  • .github/PULL_REQUEST_TEMPLATE/bug_fix_template.md (1 hunks)
  • .github/PULL_REQUEST_TEMPLATE/new_benchmark.md (2 hunks)
  • .github/PULL_REQUEST_TEMPLATE/new_feature_template.md (1 hunks)
  • docs/source/dev/insertbenchmarks.rst (1 hunks)
  • src/jade/post/plotter.py (8 hunks)
  • src/jade/resources/default_cfg/benchmarks_pp/atlas/TUD-W.yaml (2 hunks)
  • src/jade/resources/default_cfg/benchmarks_pp/excel/TUD-W.yaml (2 hunks)
  • src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/Oktavian.yaml (1 hunks)
  • src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/TUD-W.yaml (1 hunks)
  • src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (1 hunks)
  • src/jade/resources/default_cfg/run_cfg.yml (3 hunks)
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/metadata.json (1 hunks)
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/metadata.json (0 hunks)
  • tests/post/test_excel_processor.py (1 hunks)
💤 Files with no reviewable changes (1)
  • tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.2c_/TUD-W/metadata.json
🧰 Additional context used
🧬 Code graph analysis (1)
src/jade/post/plotter.py (2)
src/jade/post/manipulate_tally.py (1)
  • compare_data (491-550)
src/jade/config/excel_config.py (1)
  • ComparisonType (22-28)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: test (windows-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (windows-latest, 3.10, false)
🔇 Additional comments (15)
docs/source/dev/insertbenchmarks.rst (1)

248-251: LGTM!

Minor text reflow for the fwhm_frac parameter documentation. The content is unchanged and formatting is appropriate.

src/jade/post/plotter.py (7)

6-6: LGTM!

The warnings import is appropriately added to support suppressing benign UserWarnings during scatter plot operations.


21-23: LGTM!

Good use of the existing compare_data function and ComparisonType enum for computing ratios with proper error propagation in CEPlot.


530-548: Good refactor: ratio computation now uses proper error propagation.

The refactored flow correctly uses compare_data with ComparisonType.RATIO to compute C/E values with properly propagated errors. This is a solid improvement over manual ratio computation.


643-645: Good addition: categorical X-axis handling.

Correctly detects categorical/string indices and manually sets xticks to ensure proper rendering when matplotlib doesn't auto-detect them.


551-563: LGTM!

The refactored data flow cleanly handles the new (codelib, values, errors) tuples and correctly manages subcase extraction using the multi-level index.


1027-1045: LGTM!

Consistent use of warnings.catch_warnings() to suppress benign matplotlib UserWarnings during scatter operations.


1073-1073: LGTM!

Legend positioning changed to bbox_to_anchor=(1, 1) for consistent placement across CE-related plots.

src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/Oktavian.yaml (1)

10-12: Coarse photon flux now consistent with fine photon flux binning

Switching the first operator to by_energy in the coarse photon flux pipeline aligns it with the fine Photon flux block and keeps condense_groups operating on the same energy variable; this looks correct and self‑consistent.

tests/dummy_structure/raw_data/_mcnp_-_FENDL 3.1d_/TUD-W/metadata.json (1)

4-7: Confirm intended semantics between jade_run_version and jade_version

Here jade_run_version stays at "3.1.0" while jade_version is bumped to "4.2.0". If jade_run_version denotes the producer of the raw data and jade_version the consumer/tooling version, this is fine; otherwise, please double‑check that you don’t also want to update jade_run_version.

src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (1)

1-26: OpenMC TUD‑W spectra and coarse sections are consistent with downstream configs

The new Neutron/Photon spectra and corresponding *_coarse sections are well‑structured: they use by_energy + formatting for normal spectra and condense_groups with bins that match the CE plot ranges configured in atlas/excel. Naming is consistent across raw, excel, and atlas files.

src/jade/resources/default_cfg/benchmarks_pp/raw/mcnp/TUD-W.yaml (1)

1-22: MCNP TUD‑W spectra configuration mirrors OpenMC and Excel/Atlas expectations

The Neutron/Photon spectra and *_coarse blocks are consistent with the OpenMC TUD‑W raw config and with the names referenced in the Excel and atlas configurations. Tally IDs (4 and 44) are reused coherently for both fine and coarse processing, and the bin edges align with the CE plot energy intervals.

src/jade/resources/default_cfg/benchmarks_pp/excel/TUD-W.yaml (1)

1-33: Excel TUD‑W now cleanly exposes fine spectra percentages and coarse C/E with errors

Linking Neutron(%)/Photon(%) to Neutron spectra/Photon spectra keeps naming consistent, and re‑adding conditional formatting on the photon sheet restores the expected heatmap. The new CE table section correctly pulls from Neutron/Photon spectra coarse, uses comparison_type: ratio, and renames Value to C/E while also surfacing the associated Error column, which matches the intent of the new CE error‑bar support.

src/jade/resources/default_cfg/benchmarks_pp/atlas/TUD-W.yaml (1)

1-74: Atlas TUD‑W plots correctly wired to spectra and coarse C/E data

The binned Neutron/Photon spectra plots now use consistent terminology and explicitly enable error display, aligning with the CEPlot error‑bar changes. The three new C/E plots reference Neutron/Photon spectra coarse and their subcase energy ranges match the condense_groups bins, so the plotted intervals should correspond cleanly to the coarse groups.

src/jade/resources/default_cfg/run_cfg.yml (1)

90-308: Experimental benchmark catalog looks consistent; please confirm FNG‑BKT nps

The new/updated experimental entries (ASPIS‑PCA‑Replica, ASPIS‑Fe88, FNG*, IPPE*, RCR‑Fe+Ni, Tiara*, TUD‑*) follow the existing schema and their names/descriptions match the corresponding Excel configs and tests, so they should integrate smoothly into the run configuration.

One detail to double‑check: FNG-BKT is configured with nps: 1e9 while most other new FNG and TUD entries use 1e8. If that higher particle count is intentional for this particular benchmark, then this block is fine as‑is; otherwise, you may want to align it with the others.

@mcampos16 (Contributor) commented:

@dodu94 Thanks Davide!
Just one question: have you also plotted the C/E for the neutron grouped spectra?
Looking at the Excel screenshot you attached, the values for the first energy range (1-3 MeV) seem off.
I can double-check whether something happened while grouping the experimental results for that range specifically.

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/jade/post/plotter.py (1)

562-607: Subcase axis mapping can desynchronize when some libraries miss subcases.

Because dfs is rebuilt per library and you enumerate it to pick axes[i], if a given library lacks some of the configured subcases (KeyError branch), the i indices for that library will no longer align with the same logical subcase/axis as for the first library. Axes created from the first library's subcase count will remain partially unfilled for other libraries, causing incomplete or misaligned plots.

It would be safer to:

  • Derive the list of actually-present subcases once (e.g., from the reference / first target), build axes from that list, and
  • For each library, iterate over that same ordered subcase list, plotting only when data exists but always using the same i → subcase mapping.

This avoids mislabeling plots and ensures all axes are consistently populated across libraries.
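The suggested fix can be sketched as below. The names align_subcases and data_by_lib are illustrative, not from the JADE codebase; the point is that the subcase-to-axis mapping is derived once from a reference and reused for every library.

```python
def align_subcases(data_by_lib, reference_lib):
    """Build a fixed subcase -> axis-index mapping from the reference
    library, then collect (lib, axis_index, series) plot jobs so that
    every library lands on the same axis for the same subcase, even
    when some libraries lack data for a subcase."""
    subcases = list(data_by_lib[reference_lib].keys())  # fixed order
    axis_of = {sc: i for i, sc in enumerate(subcases)}
    jobs = []
    for lib, by_subcase in data_by_lib.items():
        for sc, series in by_subcase.items():
            if sc in axis_of:  # ignore subcases absent from the reference
                jobs.append((lib, axis_of[sc], series))
    return subcases, jobs
```

Plotting then iterates over jobs, so a missing subcase simply leaves a gap on its axis instead of shifting every subsequent plot.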

♻️ Duplicate comments (1)
src/jade/post/plotter.py (1)

620-651: Convert relative errors from compare_data to absolute errors before passing to errorbar.

Here:

axes[i].errorbar(
    dfv.index,
    dfv.values,
    yerr=dfe.values,
    ecolor=COLORS[idx],
)

dfv holds the C/E ratios and dfe comes from compare_data(..., ComparisonType.RATIO), which returns relative errors (per the implementation in compare_data). matplotlib.errorbar expects absolute errors in the same units as y, so this is under‑scaling all error bars.

Scale the relative error by the ratio values (mirroring how BinnedPlot converts fractional errors to absolute):

-                    axes[i].errorbar(
-                        dfv.index,
-                        dfv.values,
-                        yerr=dfe.values,
+                    axes[i].errorbar(
+                        dfv.index,
+                        dfv.values,
+                        yerr=np.abs(dfv.values * dfe.values),
                         # fmt="none",
                         ecolor=COLORS[idx],

This preserves dfv while putting the error bars in correct absolute units.
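A minimal numeric illustration of the relative-to-absolute conversion (the values here are made up):

```python
import numpy as np

# C/E ratios and the relative errors returned by a compare_data-style
# ratio propagation (illustrative values only)
ratios = np.array([0.95, 1.10, 1.02])
rel_err = np.array([0.05, 0.10, 0.02])

# matplotlib's errorbar expects absolute errors in the units of y,
# so scale the relative error by the ratio itself
abs_err = np.abs(ratios * rel_err)
```

Passing rel_err directly as yerr would draw bars roughly an order of magnitude too small whenever the ratio is near 1 and the relative error is a few percent.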

🧹 Nitpick comments (2)
src/jade/post/plotter.py (2)

531-559: Strengthen index alignment handling before forcing target indices to match reference.

The new ref/target setup with compare_data(..., ComparisonType.RATIO) looks conceptually right for C/E, but this block:

if not val1.index.equals(val2.index):
    logging.debug("Indices do not match, substituting with experiment")
    val2.index = val1.index
    err2.index = err1.index

can silently misalign data if the two series differ in order or length (length mismatch will even raise at assignment). Consider:

  • Asserting len(val1) == len(val2) == len(err2) before reindexing, and
  • Either sorting both by x, or at least logging a warning (not debug) including shapes and maybe a max index delta when indices are numeric.

That will surface genuine data/energy-bin issues instead of quietly reshuffling values.
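A sketch of the suggested hardening follows; realign_to_reference is a hypothetical helper name, not JADE code.

```python
import logging

import numpy as np
import pandas as pd


def realign_to_reference(ref: pd.Series, target: pd.Series) -> pd.Series:
    """Force target onto the reference index, but fail loudly on a
    length mismatch and warn (not debug-log) when numeric indices
    drift, reporting the maximum index delta."""
    if len(ref) != len(target):
        raise ValueError(
            f"Cannot realign: reference has {len(ref)} rows, "
            f"target has {len(target)}"
        )
    if not ref.index.equals(target.index):
        if pd.api.types.is_numeric_dtype(ref.index) and pd.api.types.is_numeric_dtype(
            target.index
        ):
            delta = np.max(np.abs(ref.index.to_numpy() - target.index.to_numpy()))
            logging.warning("Indices differ (max delta %.3g); substituting", delta)
        else:
            logging.warning("Indices differ; substituting with reference")
        target = target.copy()
        target.index = ref.index
    return target
```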


630-641: Scoped suppression of matplotlib UserWarnings looks fine; consider a module logger for consistency.

Wrapping the scatter calls in warnings.catch_warnings() with UserWarning suppression in both the CE point style and _apply_CE_limits is a reasonable way to hide benign matplotlib categoricals/marker warnings without affecting the rest of the code.

Two minor follow‑ups you might consider:

  • Use a module logger (logger = logging.getLogger(__name__)) instead of the root logging.debug(...) to keep logging consistent and configurable per module.
  • If specific warning messages are known, you might filter on message text or warning subclass to avoid hiding unrelated UserWarnings, though the current approach is acceptable for a plotting-only context.

Also applies to: 1038-1056, 1084-1084
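Both follow-ups can be sketched together as below. The message filter pattern is illustrative; the actual matplotlib warning text may differ.

```python
import logging
import warnings

# Module-level logger instead of the root logger, so output can be
# configured per module (e.g. for jade.post.plotter)
logger = logging.getLogger(__name__)


def quiet_scatter(ax, x, y, **kwargs):
    """Scatter while suppressing only UserWarnings whose message
    matches a known benign pattern, instead of all UserWarnings."""
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=".*categorical.*",  # hypothetical benign message
            category=UserWarning,
        )
        return ax.scatter(x, y, **kwargs)
```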

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fb4262c and fb6bc45.

⛔ Files ignored due to path filters (7)
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Al Reaction rate.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Fe Reaction rate.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_In Reaction rate.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Mn Reaction rate.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-BKT/FNG-BKT_Ni-n2n Reaction rate.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-W/FNG-W_In Activation Foils.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_exp_-_exp_/FNG-W/FNG-W_Ni-n2n Activation Foils.csv is excluded by !**/*.csv
📒 Files selected for processing (2)
  • JADE-RAW-RESULTS (1 hunks)
  • src/jade/post/plotter.py (8 hunks)
✅ Files skipped from review due to trivial changes (1)
  • JADE-RAW-RESULTS
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-22T09:51:20.734Z
Learnt from: mcampos16
Repo: JADE-V-V/JADE PR: 421
File: src/jade/post/manipulate_tally.py:152-158
Timestamp: 2025-09-22T09:51:20.734Z
Learning: In the JADE codebase, when dealing with tally data in src/jade/post/manipulate_tally.py, avoid silently handling divide-by-zero cases in error propagation calculations. If tally["Value"].sum() == 0, the resulting warning/error serves as a useful diagnostic that the tally recorded no results, which users should be made aware of rather than having it masked.

Applied to files:

  • src/jade/post/plotter.py
🧬 Code graph analysis (1)
src/jade/post/plotter.py (2)
src/jade/post/manipulate_tally.py (1)
  • compare_data (491-550)
src/jade/config/excel_config.py (1)
  • ComparisonType (22-28)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (windows-latest, 3.11, false)

@codecov bot commented Dec 5, 2025

Codecov Report

❌ Patch coverage is 87.50000% with 8 lines in your changes missing coverage. Please review.

Files with missing lines (patch coverage / missing lines):
  • src/jade/post/plotter.py: 88.09%, 5 missing ⚠️
  • src/jade/app/app.py: 50.00%, 2 missing ⚠️
  • src/jade/helper/aux_functions.py: 94.44%, 1 missing ⚠️

Overall coverage of files with missing lines (coverage <patch> (Δ)):
  • src/jade/helper/aux_functions.py: 95.12% <94.44%> (-0.20%) ⬇️
  • src/jade/app/app.py: 82.75% <50.00%> (-0.25%) ⬇️
  • src/jade/post/plotter.py: 94.19% <88.09%> (-0.45%) ⬇️

... and 1 file with indirect coverage changes


@dodu94 (Member, Author) commented Dec 5, 2025

> @dodu94 Thanks Davide! Just one question, have you also plotted the C/E for the neutron grouped spectra? By looking at the excel screenshot that you have attached, the values for the first energy range (1-3 MeV) seem off. I can double check if something happened while grouping the experimental results for that range specifically.

Yes. I have now changed the coarse binning to ignore the first bin, as you suggested offline, and it looks much better:
[screenshot: updated Excel C/E table]

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ce86f4 and 9e144e7.

📒 Files selected for processing (1)
  • src/jade/app/app.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/jade/app/app.py (2)
src/jade/run/benchmark.py (5)
  • code (75-76)
  • code (271-272)
  • code (300-301)
  • code (322-323)
  • code (336-337)
src/jade/helper/constants.py (1)
  • CODE (4-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (windows-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
🔇 Additional comments (1)
src/jade/app/app.py (1)

21-21: LGTM: Import addition is correct.

The import of OMC_AVAIL follows the existing pattern for optional dependency checks and is used appropriately in the conditional logic below.
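The optional-dependency pattern referenced here can be sketched as follows. raw_process below is a simplified stand-in for the real method in src/jade/app/app.py; the try/except import is the usual way flags like OMC_AVAIL are defined.

```python
import logging

# Availability flag for the optional openmc dependency; JADE defines
# OMC_AVAIL where openmc is imported, following this pattern
try:
    import openmc  # noqa: F401

    OMC_AVAIL = True
except ImportError:
    OMC_AVAIL = False


def raw_process(benchmarks, force=False):
    """Skip OpenMC benchmarks when openmc cannot be imported.

    benchmarks is a list of (name, code) pairs; in non-forced mode,
    openmc entries are skipped with a warning instead of crashing.
    """
    processed = []
    for name, code in benchmarks:
        if code == "openmc" and not OMC_AVAIL and not force:
            logging.warning("openmc unavailable, skipping %s", name)
            continue
        processed.append(name)  # real code would run the raw processing
    return processed
```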

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e144e7 and 67efea2.

📒 Files selected for processing (1)
  • tests/app/test_app.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/app/test_app.py (1)
src/jade/app/app.py (1)
  • raw_process (194-263)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.11, false)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (ubuntu-latest, 3.12, false)
  • GitHub Check: test (windows-latest, 3.10, false)
  • GitHub Check: test (ubuntu-latest, 3.11, false)
  • GitHub Check: test (windows-latest, 3.12, false)
  • GitHub Check: test (ubuntu-latest, 3.10, false)
  • GitHub Check: test (windows-latest, 3.11, false)
🔇 Additional comments (1)
tests/app/test_app.py (1)

79-81: LGTM! Comment clarification improves readability.

The comment update makes the test's intent clearer.

Comment on lines +83 to +84
# try case where openmc is not available on windows
app.raw_process(subset=["TUD-W"])
Contributor

⚠️ Potential issue | 🟠 Major

Test is incomplete and doesn't verify the OpenMC availability guard.

This test invocation lacks both setup and verification:

  1. No mocking: OMC_AVAIL is not mocked to False, so the test doesn't actually simulate OpenMC being unavailable.
  2. No assertions: Without any assertions, this test will always pass regardless of whether the availability guard works correctly.
  3. No verification: The expected behavior (skipping processing with a warning) is not validated.

Apply this diff to properly test the OpenMC availability guard:

-    # try case where openmc is not available on windows
-    app.raw_process(subset=["TUD-W"])
+    # Test case where OpenMC is not available
+    import jade.app.app as app_module
+    original_omc_avail = app_module.OMC_AVAIL
+    try:
+        app_module.OMC_AVAIL = False
+        # TUD-W should be skipped if it's an OpenMC benchmark
+        with pytest.raises(SystemExit) as exc_info:
+            # Or use caplog to verify warning message
+            app.raw_process(subset=["TUD-W"])
+        # Verify that TUD-W was not processed
+        # (adjust assertion based on expected behavior)
+    finally:
+        app_module.OMC_AVAIL = original_omc_avail

Alternatively, use pytest.caplog to capture and verify the warning message:

-    # try case where openmc is not available on windows
-    app.raw_process(subset=["TUD-W"])
+    # Test case where OpenMC is not available
+    import jade.app.app as app_module
+    original_omc_avail = app_module.OMC_AVAIL
+    try:
+        app_module.OMC_AVAIL = False
+        app.raw_process(subset=["TUD-W"])
+        # Verify the benchmark was skipped (check status or caplog)
+        # Example: assert ("TUD-W", ...) not in app.status.raw_data
+    finally:
+        app_module.OMC_AVAIL = original_omc_avail

Committable suggestion skipped: line range outside the PR's diff.

Collaborator

No need to change it as it is universal, but the d1s option is redundant for many benchmarks. As is serpent currently!

Member Author

I know, but a table is built by the GUI using this. The GUI is clearly not the best piece of code in JADE; when we can put some effort into improving it, we could tackle this issue as well.

@alexvalentine94
Collaborator

Can't spot what's happening with the PR templates?

@alexvalentine94
Collaborator

alexvalentine94 commented Dec 13, 2025

Did you perform the rebinning for TUD-W? I can only see experimental data for the fine group structure, which covers neutrons up to 15 MeV

@dodu94
Member Author

dodu94 commented Dec 15, 2025

Can't spot what's happening with the PR templates?

What I understood is that we cannot have multiple automatic templates as we do for issues. If we want a template to be used automatically, we can only have one; otherwise, users have to load the template manually in a way I did not fully understand. Maybe we should run some trials on a test repository

@dodu94
Member Author

dodu94 commented Dec 15, 2025

Did you perform the rebinning for TUD-W? I can only see experimental data for the fine group structure, which covers neutrons up to 15 MeV

No, I did not change the bins; I just added further post-processing to also create a raw result that collapses the fine structure into coarser energy bins.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/jade/helper/aux_functions.py (1)

142-162: Consider narrowing the exception handling scope.

The function logic is sound, but catching a broad Exception on line 160 may mask unexpected errors. Consider catching specific exceptions related to numeric conversion.

Apply this diff to narrow the exception handling:

             try:
                 idx1 = np.array(subindex1, dtype=float)
                 idx2 = np.array(subindex2, dtype=float)
                 if np.allclose(idx1, idx2, rtol=1e-3):
                     continue
                 else:
                     return False
-            except Exception:
+            except (ValueError, TypeError):
                 return False

This catches the specific exceptions that occur when converting non-numeric data to float, while allowing other unexpected exceptions to propagate.

src/jade/post/plotter.py (1)

548-553: Consider extracting the error message to a constant.

While the error message is concise, extracting it to a module-level constant would align with static analysis recommendations and improve maintainability.

Apply this diff if you prefer:

+# Near the top of the file with other constants
+INDEX_MISMATCH_ERROR = "Indices do not match between reference and target data."
+
 # In the function...
             if same_index(val1.index, val2.index) is False:
                 logging.error(
-                    f"Indices do not match between reference and {codelib}: "
+                    f"{INDEX_MISMATCH_ERROR} Reference vs {codelib}: "
                     f"{val1.index}, {val2.index}"
                 )
-                raise RuntimeError("Indices do not match.")
+                raise RuntimeError(INDEX_MISMATCH_ERROR)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ead1a0d and 77a1c9c.

⛔ Files ignored due to path filters (20)
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Neutron flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Photon flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos1 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Neutron flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Photon flux.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos2 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos3 Photon spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Neutron spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Neutron spectra.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Photon spectra coarse.csv is excluded by !**/*.csv
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/TUD-W_pos4 Photon spectra.csv is excluded by !**/*.csv
📒 Files selected for processing (5)
  • src/jade/helper/aux_functions.py (2 hunks)
  • src/jade/post/plotter.py (8 hunks)
  • src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (1 hunks)
  • tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/metadata.json (1 hunks)
  • tests/helper/test_auxiliary_func.py (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-22T09:51:20.734Z
Learnt from: mcampos16
Repo: JADE-V-V/JADE PR: 421
File: src/jade/post/manipulate_tally.py:152-158
Timestamp: 2025-09-22T09:51:20.734Z
Learning: In the JADE codebase, when dealing with tally data in src/jade/post/manipulate_tally.py, avoid silently handling divide-by-zero cases in error propagation calculations. If tally["Value"].sum() == 0, the resulting warning/error serves as a useful diagnostic that the tally recorded no results, which users should be made aware of rather than having it masked.

Applied to files:

  • src/jade/post/plotter.py
📚 Learning: 2025-09-22T09:52:06.987Z
Learnt from: mcampos16
Repo: JADE-V-V/JADE PR: 421
File: src/jade/post/manipulate_tally.py:265-271
Timestamp: 2025-09-22T09:52:06.987Z
Learning: In the JADE codebase, for the cumulative_sum function in src/jade/post/manipulate_tally.py, do not add divide-by-zero guards during normalization. If the last value of a cumulative sum is 0, the resulting divide-by-zero error serves as a useful diagnostic that the tally recorded no results in any bins, which users should be made aware of rather than having it masked by silent error handling.

Applied to files:

  • src/jade/post/plotter.py
🧬 Code graph analysis (2)
src/jade/post/plotter.py (3)
src/jade/helper/aux_functions.py (1)
  • same_index (142-162)
src/jade/config/excel_config.py (1)
  • ComparisonType (22-28)
src/jade/post/manipulate_tally.py (1)
  • compare_data (491-550)
tests/helper/test_auxiliary_func.py (1)
src/jade/helper/aux_functions.py (3)
  • get_code_lib (24-39)
  • print_code_lib (42-68)
  • same_index (142-162)
🪛 Ruff (0.14.8)
src/jade/post/plotter.py

553-553: Avoid specifying long messages outside the exception class

(TRY003)

src/jade/helper/aux_functions.py

160-160: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (9)
src/jade/resources/default_cfg/benchmarks_pp/raw/openmc/TUD-W.yaml (4)

1-8: LGTM: Neutron spectra post-processing looks good.

The renaming from "Neutron flux" to "Neutron spectra" aligns with the PR objectives, and the addition of scale factor and column cleanup are appropriate post-processing steps.

Please verify the scale factor 0.02284 is correct for this benchmark. This appears to be a normalization constant, but it should be double-checked against the benchmark specifications.


9-15: LGTM: Photon spectra changes are consistent.

The changes mirror the neutron spectra updates and maintain consistency. The increase in decimal precision for Energy (2→3) provides better resolution for photon energies.


25-32: Same preprocessing verification needed as neutron coarse.

The "Photon spectra coarse" section has the same structure as the neutron coarse section (missing by_energy, format_decimals, delete_cols operations). The same verification concerns apply.

Confirm that the preprocessing requirements are consistent between neutron and photon coarse sections, and that the photon bins starting at 0.4 MeV (not 0) is intentional and handles lower-energy photons appropriately.


17-24: No issues found with the "Neutron spectra coarse" configuration.

The absence of by_energy, format_decimals, and delete_cols operations is intentional. The condense_groups operation works independently and doesn't require by_energy preprocessing—it directly aggregates values from binned tallies into coarser energy groups. The bin structure starting at 1.15 MeV aligns with the PR's discussed intent to exclude the problematic lower energy range, and the pd.cut binning operation with right=False will naturally exclude any data below this threshold. This configuration is consistent with how condense_groups is used elsewhere in the codebase (e.g., MCNP TUD-W, IPPE-DT benchmarks).
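The group-condensation behavior described above can be sketched with pandas. This is an illustrative reconstruction, not JADE's actual implementation: the function signature and the column names (`Energy`, `Value`) are assumptions, and the bin edges are chosen only to mirror the 1.15 MeV cutoff discussed here.

```python
import pandas as pd


def condense_groups(tally: pd.DataFrame, bins: list[float]) -> pd.DataFrame:
    """Collapse a finely binned tally into coarser energy groups.

    Entries whose Energy falls outside the provided bin edges are
    assigned NaN by pd.cut and therefore dropped by the groupby,
    which is how data below the lowest edge gets excluded.
    """
    groups = pd.cut(tally["Energy"], bins=bins, right=False)
    coarse = tally.groupby(groups, observed=True)["Value"].sum().reset_index()
    return coarse


# Fine-structure tally: the 0.5 MeV entry is below the lowest edge
fine = pd.DataFrame(
    {"Energy": [0.5, 1.2, 2.0, 5.0, 14.0], "Value": [1.0, 2.0, 3.0, 4.0, 5.0]}
)
coarse = condense_groups(fine, bins=[1.15, 5.0, 15.0])
print(coarse["Value"].tolist())  # [5.0, 9.0]
```

With `right=False` the intervals are left-closed, so the 5.0 MeV entry lands in the upper group and anything below 1.15 MeV is silently excluded, matching the behavior described for the coarse sections.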

tests/helper/test_auxiliary_func.py (1)

22-37: LGTM! Comprehensive test coverage for index comparison.

The test function covers a good range of scenarios including identical indices, numerical differences, type mismatches, and length differences. This provides solid validation for the new same_index utility function.

src/jade/post/plotter.py (3)

530-565: Excellent refactoring of ratio computation logic.

The data-driven approach with index alignment is a significant improvement:

  • Index comparison with tolerance handles floating-point differences
  • Clear error logging when indices don't match
  • Use of compare_data centralizes error propagation logic
  • Structured data storage in to_plot improves maintainability
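The centralized ratio computation typically propagates relative errors in quadrature. The following is a minimal stand-in sketch, not JADE's actual `compare_data` signature, and it assumes the errors are relative and uncorrelated:

```python
import numpy as np


def ratio_with_error(
    val_ref: np.ndarray,
    err_ref: np.ndarray,
    val_tgt: np.ndarray,
    err_tgt: np.ndarray,
) -> tuple[np.ndarray, np.ndarray]:
    """Return the target/reference ratio and its relative error.

    Relative errors are combined in quadrature, assuming the two
    datasets are statistically independent.
    """
    ratio = val_tgt / val_ref
    rel_err = np.sqrt(err_ref**2 + err_tgt**2)
    return ratio, rel_err


r, e = ratio_with_error(
    np.array([2.0, 4.0]), np.array([0.03, 0.04]),
    np.array([1.0, 8.0]), np.array([0.04, 0.03]),
)
# C/E ratios: 0.5 and 2.0; combined relative errors: ~0.05 each
```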

636-657: Clarify the rationale for suppressing UserWarnings.

The error bar calculation is correct (converting relative errors to absolute by multiplying with the ratio values). However, suppressing UserWarning around the scatter and errorbar plots may hide legitimate issues such as:

  • Empty data sequences
  • Invalid data values (NaN, inf)
  • Axis limit problems

Could you clarify:

  1. What specific warnings are you encountering that require suppression?
  2. Have you verified these warnings are benign?

Consider using warnings.filterwarnings with a specific warning message pattern instead of suppressing all UserWarnings:

# More targeted suppression
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="specific warning text", category=UserWarning)
    # ... plotting code
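The relative-to-absolute error conversion mentioned in this comment can be sketched as follows; the data and variable names are illustrative assumptions, only the matplotlib calls are real API:

```python
import numpy as np
import matplotlib

matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 3.0])
ce_ratio = np.array([0.95, 1.02, 1.10])  # C/E values
rel_err = np.array([0.05, 0.03, 0.08])   # relative errors

# matplotlib's errorbar expects absolute errors, so scale by the values
abs_err = rel_err * ce_ratio

fig, ax = plt.subplots()
ax.errorbar(x, ce_ratio, yerr=abs_err, fmt="o", capsize=3)
ax.axhline(1.0, color="black", lw=0.8)  # perfect-agreement line
```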

1044-1062: Same concern: blanket UserWarning suppression may hide important issues.

The warnings context around the scatter plots in _apply_CE_limits has the same concern as the earlier suppression. Consider whether all UserWarnings should be suppressed or if a more targeted approach is appropriate.

Based on learnings from the codebase, diagnostic warnings (like divide-by-zero) serve important purposes. Ensure that suppressing UserWarnings here doesn't mask meaningful diagnostics that users should be aware of.

tests/dummy_structure/raw_data/_openmc_-_FENDL 3.1d_/TUD-W/metadata.json (1)

6-8: No action needed. This metadata.json is test fixture data that documents the code version used to generate the test data, not a production dependency specification. OpenMC 0.15.0 is a valid released version, and the code_version field serves as informational metadata only—the codebase does not enforce version compatibility checks against this field. Different test fixtures use various versions without issue.

Likely an incorrect or invalid review comment.

@mcampos16
Contributor

@dodu94 and I spotted an issue in how index mismatches between the dataframes used to compute a ratio were handled for the CE plots.
Previously, the approach taken could produce incorrect ratios.
Now, the indexes of the dataframes are sorted, and a same_index function was added to detect any index mismatch, for both single-level indexes and multi-indexes (mismatches can be caused, for example, by slight differences in numerical indexes due to rounding). After that, re-indexing is applied to ensure the dataframes used to compute the ratio match.
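The alignment steps can be illustrated with a small pandas sketch. The data is hypothetical, and this same_index is a simplified single-level stand-in for the helper added in src/jade/helper/aux_functions.py (the real one also handles multi-indexes):

```python
import numpy as np
import pandas as pd


def same_index(idx1: pd.Index, idx2: pd.Index, rtol: float = 1e-3) -> bool:
    """Compare two numeric indices within a relative tolerance."""
    if len(idx1) != len(idx2):
        return False
    try:
        return bool(
            np.allclose(
                np.asarray(idx1, dtype=float),
                np.asarray(idx2, dtype=float),
                rtol=rtol,
            )
        )
    except (ValueError, TypeError):
        return False


# Two tallies whose energy indexes differ only by rounding
ref = pd.Series([2.0, 4.0, 8.0], index=[0.1, 1.0, 10.0])
tgt = pd.Series([1.0, 2.0, 4.0], index=[0.1001, 1.0001, 10.001])

ref, tgt = ref.sort_index(), tgt.sort_index()
if same_index(ref.index, tgt.index):
    tgt.index = ref.index  # re-index so the division aligns row by row
    ratio = tgt / ref
    print(ratio.tolist())  # [0.5, 0.5, 0.5]
else:
    raise RuntimeError("Indices do not match.")
```

Without the re-indexing step, pandas would align the two series by exact index labels and produce NaN everywhere, which is the class of silent error this change guards against.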

@dodu94
Member Author

dodu94 commented Dec 18, 2025

@alexvalentine94 as you can see there was a lot going on here, but we should be ready to merge; codecov is very close to the target anyway

Collaborator

@alexvalentine94 alexvalentine94 left a comment


Good job both, nice improvements!

@dodu94 dodu94 merged commit da6851f into developing Dec 18, 2025
18 of 32 checks passed
@dodu94 dodu94 deleted the TUD-W-plots-improvement branch December 18, 2025 16:26