
Random ray in a SCONEy way #125

@ChasingNeutrons

Description

The current PR on random ray is dead for good reason - it doesn't look like SCONEy code and it isn't easy to extend to the many variations on RR we want to make.

I'd like to have some discussion on how a generic RR code that can handle all of these variations might look. These variations include eigenvalue/fixed-source/noise, flat source/linear source, and isotropic/anisotropic.

The structure of a given RR solver is as follows (a rough code sketch of this loop comes after the list):

  1. Initialise all of the necessary structures (fluxes, sources)
  2. Generate a source (RHS of the transport equation)
  3. Perform the transport sweep (sample rays, move rays, attenuate fluxes, accumulate to the flux vector)
  4. Accumulate quantities (fluxes, k)
  5. Check for some measure of convergence (iterative or statistical)
  6. Finalise statistics
  7. Output quantities of interest
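
To make the shape of that loop concrete, here is a minimal, self-contained sketch of how these steps might fit together in a physics package. All procedure names are placeholders for illustration, not existing SCONE code:

```Fortran
program rrLoopSketch
  ! Minimal, self-contained sketch of how steps 1-7 might compose into a
  ! driver loop. Every name here is a placeholder, not existing SCONE code.
  implicit none
  integer, parameter :: maxIt = 500
  integer :: it
  logical :: converged

  call initialise()                      ! 1. allocate fluxes and sources

  do it = 1, maxIt
    call updateSource()                  ! 2. build the RHS from the previous flux
    call transportSweep()                ! 3. sample, move and attenuate rays
    call accumulate()                    ! 4. accumulate fluxes and k
    converged = checkConvergence(it)     ! 5. iterative or statistical check
    if (converged) exit
  end do

  call finaliseStatistics()              ! 6. finalise statistics
  call output()                          ! 7. write quantities of interest

contains

  ! Empty placeholder bodies, just to make the outline compile
  subroutine initialise()
  end subroutine initialise

  subroutine updateSource()
  end subroutine updateSource

  subroutine transportSweep()
  end subroutine transportSweep

  subroutine accumulate()
  end subroutine accumulate

  logical function checkConvergence(it)
    integer, intent(in) :: it
    checkConvergence = (it == maxIt)     ! trivial placeholder criterion
  end function checkConvergence

  subroutine finaliseStatistics()
  end subroutine finaliseStatistics

  subroutine output()
  end subroutine output

end program rrLoopSketch
```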

I have a handful of ideas for how to make this a bit more general. I would appreciate any feedback on this.

First, we make the source and fluxes into objects. By default, the flux object would contain the current flux, the previous flux, and the accumulated flux, and the source object would contain the current source. However, these could allocate additional components as necessary, such as linear components, fixed sources, etc. A given physics package would tell these objects to allocate the necessary components, and they could have procedures for accumulation, finalisation, and output. We should probably do something similar for geometry estimates: such an object would contain the estimated volume (which is common to everything) but could optionally allocate spatial moment matrices as well.

Data should also be packaged more neatly... However, it will essentially need to be a flat array of each XS in the end. The question is whether to do this internally in nuclear data or whether to make a separate object that is constructed from nuclear data.
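
As a rough sketch of what the flux object could look like (all names and the init signature are hypothetical; real code would use SCONE's kind parameters and dictionary-based initialisation, and the source object would follow the same pattern):

```Fortran
module fluxContainer_sketch
  implicit none
  private
  integer, parameter :: defReal = kind(1.0d0)

  !! Flux object: always carries current/previous/accumulated fluxes as flat
  !! arrays; optional components are allocated on request by the physics package
  type, public :: fluxContainer
    real(defReal), dimension(:), allocatable :: current          ! current iteration flux
    real(defReal), dimension(:), allocatable :: previous         ! previous iteration flux
    real(defReal), dimension(:), allocatable :: scores           ! accumulated flux
    real(defReal), dimension(:,:), allocatable :: linearMoments  ! optional linear source moments
  contains
    procedure :: init
    procedure :: accumulate
    procedure :: finalise
  end type fluxContainer

contains

  !! Allocate flat flux vectors (size = nCells * nGroups) and, if requested,
  !! the linear moments
  subroutine init(self, nFlat, hasLinear)
    class(fluxContainer), intent(inout) :: self
    integer, intent(in)                 :: nFlat
    logical, intent(in)                 :: hasLinear
    allocate(self % current(nFlat), self % previous(nFlat), self % scores(nFlat))
    self % current  = 0.0_defReal
    self % previous = 0.0_defReal
    self % scores   = 0.0_defReal
    if (hasLinear) then
      allocate(self % linearMoments(3, nFlat))
      self % linearMoments = 0.0_defReal
    end if
  end subroutine init

  !! Score the current flux into the accumulated estimate (active cycles only)
  subroutine accumulate(self)
    class(fluxContainer), intent(inout) :: self
    self % scores = self % scores + self % current
  end subroutine accumulate

  !! Convert accumulated scores into a mean over the active iterations
  subroutine finalise(self, nActive)
    class(fluxContainer), intent(inout) :: self
    integer, intent(in)                 :: nActive
    self % scores = self % scores / real(nActive, defReal)
  end subroutine finalise

end module fluxContainer_sketch
```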

Second, we make the transport sweep and source update both into objects. Each of these varies dramatically depending on the calculation performed, but there will be some overlap (the flat isotropic versions will be identical for fixed source and eigenvalue). Additionally, the call to movement and its settings will be the same across all physics packages. Each of these objects would be fed pointers to the relevant flux/source/data objects, and likewise they can be constructed depending on the exact scheme being used. For initialising rays, could we reuse the standard fixed-source particle sampling with uniformity in space and angle? I think that's what OpenMC have gone for.
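
A possible shape for the sweep abstraction, as a sketch only: an abstract base type holding pointers to the data it needs, with one concrete extension per scheme. Here the pointers target bare arrays to keep the example self-contained; in practice they would point at the flux/source/data objects discussed above, and all names are invented:

```Fortran
module transportSweep_sketch
  implicit none
  private
  integer, parameter :: defReal = kind(1.0d0)

  !! Abstract sweep: holds pointers to data owned by the physics package
  type, public, abstract :: transportSweep
    real(defReal), dimension(:), pointer :: scalarFlux => null()
    real(defReal), dimension(:), pointer :: source     => null()
    real(defReal), dimension(:), pointer :: sigmaT     => null()  ! flattened XS data
  contains
    procedure(sweep_iface), deferred :: sweep
  end type transportSweep

  abstract interface
    !! Sample rays, move them, attenuate the angular flux along each segment
    !! and accumulate contributions into the scalar flux
    subroutine sweep_iface(self, nRays)
      import :: transportSweep
      class(transportSweep), intent(inout) :: self
      integer, intent(in)                  :: nRays
    end subroutine sweep_iface
  end interface

  !! Flat-source, isotropic sweep: one concrete extension, shareable between
  !! the fixed-source and eigenvalue packages
  type, public, extends(transportSweep) :: flatIsoSweep
  contains
    procedure :: sweep => sweep_flatIso
  end type flatIsoSweep

contains

  subroutine sweep_flatIso(self, nRays)
    class(flatIsoSweep), intent(inout) :: self
    integer, intent(in)                :: nRays
    ! Placeholder body: loop over nRays, call the common movement routine,
    ! and score attenuated fluxes into self % scalarFlux
  end subroutine sweep_flatIso

end module transportSweep_sketch
```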

Third, we can define a convergence metric object. This could be more or less sophisticated, e.g., Shannon entropy or simply the number of iterations. This is only to allow for code reuse across all the physics packages.
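
For example (names hypothetical), a base criterion type with a trivial fixed-iteration-count extension; a Shannon entropy version would extend the same interface:

```Fortran
module convergence_sketch
  implicit none
  private
  integer, parameter :: defReal = kind(1.0d0)

  !! Base criterion: all physics packages call isConverged() the same way
  type, public, abstract :: convergenceCriterion
  contains
    procedure(isConverged_iface), deferred :: isConverged
  end type convergenceCriterion

  abstract interface
    logical function isConverged_iface(self, it, flux)
      import :: convergenceCriterion, defReal
      class(convergenceCriterion), intent(inout) :: self
      integer, intent(in)                        :: it
      real(defReal), dimension(:), intent(in)    :: flux
    end function isConverged_iface
  end interface

  !! Simplest concrete criterion: a fixed number of iterations
  type, public, extends(convergenceCriterion) :: fixedIterations
    integer :: nIter = 200
  contains
    procedure :: isConverged => isConverged_fixed
  end type fixedIterations

contains

  logical function isConverged_fixed(self, it, flux)
    class(fixedIterations), intent(inout)   :: self
    integer, intent(in)                     :: it
    real(defReal), dimension(:), intent(in) :: flux
    isConverged_fixed = (it >= self % nIter)   ! flux is unused here but an
                                               ! entropy criterion would use it
  end function isConverged_fixed

end module convergence_sketch
```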

Fourth, it might also make sense to define a k object, since it will be common across eigenvalue and noise. I don't have strong feelings about this - it has relatively little to do other than accumulate flux estimates, which is only a few lines.
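
If we did make it an object, it might be as small as this (names hypothetical; the update uses the usual ratio-of-fission-rates estimate):

```Fortran
module kEffAccumulator_sketch
  implicit none
  private
  integer, parameter :: defReal = kind(1.0d0)

  type, public :: kEffAccumulator
    real(defReal) :: k      = 1.0_defReal  ! current estimate
    real(defReal) :: kSum   = 0.0_defReal  ! accumulated estimates
    real(defReal) :: kSumSq = 0.0_defReal  ! accumulated squares, for the std
    integer       :: n      = 0
  contains
    procedure :: update
    procedure :: mean
  end type kEffAccumulator

contains

  !! Update k from the ratio of fission rates of two successive flux iterates
  !! and score it
  subroutine update(self, fissionNew, fissionOld)
    class(kEffAccumulator), intent(inout) :: self
    real(defReal), intent(in)             :: fissionNew, fissionOld
    self % k      = self % k * fissionNew / fissionOld
    self % kSum   = self % kSum + self % k
    self % kSumSq = self % kSumSq + self % k * self % k
    self % n      = self % n + 1
  end subroutine update

  !! Mean of the accumulated estimates
  function mean(self) result(kMean)
    class(kEffAccumulator), intent(in) :: self
    real(defReal) :: kMean
    kMean = self % kSum / real(self % n, defReal)
  end function mean

end module kEffAccumulator_sketch
```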

Thoughts? I don't think this should be too hard to do, and I suspect it will preserve performance while making things tidier and more easily modifiable in the future.
