Refactor build configuration #1204
Conversation
nuclearkevin
left a comment
Just a few notes about changed default behaviour. April will need to add you to the neams-th-coe organization for tests to run.
@ahnaf-tahmid-chowdhury, would you please re-push to this PR? This will trigger the test suite.
Force-pushed from b4a9b0c to 1892f46.
Not sure how to start the workflows.
@ahnaf-tahmid-chowdhury are the recent commits trying to get around the test failure? If so, I can take a closer look and see if I can figure it out.
Force-pushed from b9ee08e to 937781e.
Yes, but I have undone this. Still not sure why the test is failing with the same version of NekRS. Maybe tweaking the iteration number will fix it. I have merged the upstream branch; let's see what the difference is.
Sometimes NekRS is mysterious - but it looks like the tests passed now? Is this PR ready for review, or are there more changes coming?
Yes, it's ready for review. I'll be happy to open another PR if I find any new tweaks later.
Thanks @ahnaf-tahmid-chowdhury! Starting to review and test this across some different systems.
Apologies for the wait - I've been slammed over the last few weeks and I've only just started clawing through my backlog. I'm testing this on a few different machines at Argonne to ensure our build instructions still function on their systems, and building with these changes on my personal laptop. Will drop a review once that's done!
nuclearkevin
left a comment
Only one minor comment from me. I've been able to build on Improv @ ANL (all dependencies) and my personal machine (everything other than NekRS) with no issues.
nuclearkevin
left a comment
On a surface level this looks good to me now - thanks for your patience over the last month!
Someone in our research group was running into issues building this branch on WindRiver @ INL HPC. I'll give it a shot over there and report back on whether it has anything to do with this PR (INL has a habit of getting rid of modules without letting people know, so it could be that).
@nuclearkevin any issues on INL HPC? This looks good to me to merge if you can confirm this test was done successfully.
@ahnaf-tahmid-chowdhury actually, would you please add something to the
doc/content/with_conda.md
Outdated
```diff
  that follow.

+ !alert! tip title=Using OpenMC from your Conda Environment
+ You can install OpenMC directly into your conda environment. See the "Using OpenMC from your Conda Environment" section for more details.
```
@ahnaf-tahmid-chowdhury are you using the MOOSE conda environment with OpenMC installed into it?
After looking into it more deeply, I realized that the conda version of the Cardinal build isn't actually being used yet. It only pulls in HDF5 from conda, and Cardinal builds all the MOOSE packages itself anyway, so there's no need for a separate moose environment with MOOSE preinstalled. We can still use OpenMC from the conda environment, as long as we choose a version that's compatible with Cardinal. In short, I haven't found any real benefit to having a moose environment, since the current build script doesn't use any MOOSE dependencies from it.
I guess my request would be to clarify the difference between conda environments in the Cardinal documentation. The "with/without" conda is referring to MOOSE's conda environment, and then building OpenMC from source as part of that. But I think you've written the docs also assuming someone is using OpenMC's conda. I'd suggest we leave it as just MOOSE's conda environment, then perhaps have a hyperlink to another page which clarifies that you can also use OpenMC's conda environment on top of that?
The MOOSE conda environment allows us to skip building libMesh, PETSc, and wasp, which take a significant amount of time to compile.
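For context, a minimal sketch of what that saves - pointing Cardinal's build at the prebuilt packages from an activated MOOSE conda environment. The variable names below are assumptions based on common MOOSE conda setups, not taken from this PR; verify them in your own environment:

```shell
# Hypothetical sketch: with `conda activate moose` done beforehand, reuse the
# prebuilt libMesh/PETSc/wasp instead of compiling them from source.
# A placeholder prefix is used so the snippet runs outside an active conda env.
CONDA_PREFIX="${CONDA_PREFIX:-$HOME/miniconda3/envs/moose}"
export PETSC_DIR="${PETSC_DIR:-$CONDA_PREFIX}"             # prebuilt PETSc
export LIBMESH_DIR="${LIBMESH_DIR:-$CONDA_PREFIX/libmesh}" # prebuilt libMesh
export WASP_DIR="${WASP_DIR:-$CONDA_PREFIX/wasp}"          # prebuilt wasp
echo "PETSC_DIR=$PETSC_DIR"
```

With these set, the build skips the longest compile steps entirely, which is the benefit being weighed against OpenMC's conda route here.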
nuclearkevin
left a comment
Everything works as expected on INL HPC (WindRiver/Bitterroot). The 67 Pebble CHT case runs with no issues, and the pincell multiphysics problem also runs fine. Tested with 2 nodes (224 MPI ranks).
doc/content/with_conda.md
Outdated
```diff
- conda activate moose
+ conda activate cardinal
+ conda install -c conda-forge openmc  # Install OpenMC from conda-forge
```
Can you please update this documentation so that it does not rely on the OpenMC conda environment? Let's stick with the original MOOSE conda environment, since it includes libMesh, wasp, and PETSc, and simultaneously has all the OpenMC dependencies already.
doc/content/with_conda.md
Outdated
```
./scripts/get-dependencies.sh
export ENABLE_NEK=false
export HDF5_ROOT=$CONDA_PREFIX
export OPENMC_DIR=$CONDA_PREFIX
```
I suggest moving this to a "tip" box (`!alert tip`), for anyone who wants to use the OpenMC conda and MOOSE conda environments simultaneously.
doc/content/with_conda.md
Outdated
```diff
  !include access.md

- To get MOOSE's conda environment, follow the instructions [here](https://mooseframework.inl.gov/getting_started/installation/index.html).
+ To get started, you'll need to create a dedicated conda environment for Cardinal. This ensures that all the necessary dependencies are isolated and managed correctly.
```
I'm not a conda expert, but I don't think this is truly necessary? I'd prefer to avoid any customized instructions beyond just what MOOSE requires (but please educate me if there is an advantage to doing this).
Sorry for the delay. I've been busy. I should be able to start working on this next week.
I have undone my changes to
Update the build configuration to include additional directories and streamline the inclusion of dependencies. Remove outdated checks and ensure proper use of the NEKRS_HOME environment variable. Introduce new build options for related components such as MOAB, DAGMC, and OpenMC. This improves the overall build process and dependency management.
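As a rough illustration of the options the description refers to, here is a hedged sketch of the build environment. `ENABLE_NEK`, `HDF5_ROOT`, and `OPENMC_DIR` appear in the with_conda.md changes in this PR; the `ENABLE_OPENMC`/`ENABLE_DAGMC` toggle names are assumptions by analogy, and `NEKRS_HOME`'s value is a hypothetical prefix - check the Makefile for the real names and defaults:

```shell
# Hedged sketch of the build knobs this PR's description mentions.
# Only ENABLE_NEK, HDF5_ROOT, and OPENMC_DIR are taken from the docs diff;
# the other names and values are illustrative assumptions.
export NEKRS_HOME="$HOME/cardinal/install"  # hypothetical install prefix
export ENABLE_NEK=false                     # skip the NekRS wrapping
export ENABLE_OPENMC=true                   # build the OpenMC wrapping
export ENABLE_DAGMC=true                    # DAGMC support (pulls in MOAB)
export HDF5_ROOT="${CONDA_PREFIX:-/usr}"    # where to find HDF5
echo "ENABLE_NEK=$ENABLE_NEK ENABLE_DAGMC=$ENABLE_DAGMC"
```

The point of exposing these as environment variables is that each optional component can be toggled without editing the build files themselves.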