
Rework div flux operators #283

Open
janvdijk wants to merge 20 commits into main from rework_div_flux_operators

Conversation


@janvdijk janvdijk commented Feb 19, 2026

Firstly, this changes the matrix-less overloads of the operators' evaluate members to take the same parameters as the versions that accept a matrix reference. While not all of these parameters are used in every case, this simplifies the implementation of new discretization strategies that need the actual convection/diffusion coefficients rather than scaled ones (e.g. assuming E/N=1). Comments have been added at the locations where the scaled field operator is calculated and where it is assumed that the factor (E/N)^2 is applied afterwards.

Secondly, this adds test_cd_disc.cpp.

This declares a class ConvectionDiffusionDiscretizer, to which ConvectionDiffusionOperator objects can be registered. The class provides members for aggregating the convection and diffusion coefficients, calculating the grid Peclet numbers, and discretizing either one of the registered contributions or their combination. Both the central difference scheme and the Scharfetter-Gummel (exponential) scheme are supported. Note that for the latter scheme, the individual contributions are coupled via the grid Peclet number, which is calculated for the combination of all registered operators (see the cpp_notes document).

The test discretizes the same elastic+field problem for all available discretization strategies and writes the results to files f_xxx.dat. These files contain three columns: the energy, the EEDF, and the Druyvesteyn EEDF (calculated for the mean energy obtained from the EEDF). Since Tgas=0 in the examples (this can be changed in the .cpp file), the Druyvesteyn EEDF is the expected solution.

For the present settings, the exponential scheme reproduces the analytical solution much better than the CD scheme (and the 'old' scheme, the CD scheme presently available in LoKI-B). This can be shown by running the example, starting gnuplot, and then doing:

      > set logscale y
      > plot 'f_old.dat' u 1:2 w l, '' u 1:3 w l, 'f_cd.dat' u 1:2 w l, '' u 1:3 w l, 'f_sg.dat' u 1:2 w l, '' u 1:3 w l, 'f_sg_sep.dat' u 1:2 w l, '' u 1:3 w l

The two overlapping 'low' curves are the calculated EEDFs for the CD scheme (old and new implementation). The others (all overlapping) are the results for the SG scheme, the sum-of-contributions version of the SG scheme, and
the analytical (Druyvesteyn) results.

@janvdijk janvdijk requested a review from daanboer February 19, 2026 21:24
@janvdijk janvdijk self-assigned this Feb 19, 2026
@janvdijk janvdijk added the enhancement New feature or request label Feb 19, 2026
@janvdijk janvdijk marked this pull request as draft February 19, 2026 21:24
@janvdijk janvdijk marked this pull request as ready for review February 25, 2026 21:18
Jan van Dijk added 12 commits February 25, 2026 22:24
 * Pass parameter Tg to the matrix-less overloads of member 'evaluate'
of the CAR and elastic operators. At present it is not needed there
(it is only needed when g is used to assemble the convection and
diffusion coefficients); this prepares for a new discretization
strategy where the flux terms are considered as an ensemble, which is
needed to apply a single flux boundary condition.

 * Pass parameter E/N to the field operator's 'evaluate' member. At present,
the operator always assumes E/N=1 [Vm2], and the factor (E/N)^2 needs to
be applied when the result is used (power evaluation, adding the matrix
to the Boltzmann matrix). This change makes this practice more explicit
(passing 1 instead of EoN where appropriate), and allows a more straightforward
implementation of code that wants to apply the factor (E/N)^2 right from the
start. As an example of the latter, test_nonuniform_field.cpp and
test_nonuniform_druyvesteyn.cpp looked like:

    fieldOperator1.evaluate(grid1, fieldCrossSection, won, CIEff, M1);
    M1 *= eon*eon;
    fieldOperator2.evaluate(grid2, fieldCrossSection, won, CIEff, M2);
    M2 *= eon*eon;

and are now simply implemented as

    fieldOperator1.evaluate(grid1, fieldCrossSection, eon, won, CIEff, M1);
    fieldOperator2.evaluate(grid2, fieldCrossSection, eon, won, CIEff, M2);

Comments have been added at call sites where E/N=1 is passed.
 * This manages the convection and diffusion coefficients of an operator.
 * Derive ElasticOperator and FieldOperator from this class
 * Equip those classes with member updateCD, which updates the convection
   and diffusion coefficients. (These are related to the pre-existing members
   evaluate.)
 * Note that the coefficients have their actual (non-zero) values also at the
   upper grid boundary (unlike member g). At u=0, the values are really zero.
 * add functions for evaluating the partial and complete flux
 * implement the usage of a ghost cell to obtain the true (partial) fluxes
   at the upper grid boundary. See #define LOKI_DISCRETIZE_BOUNDARY_FLUX
   in the code
 * add more tests
Move these functions outside the Schemes namespace.
And call that for Schemes::CD and Schemes::SG.
Note: There is no test for this function yet.
@janvdijk janvdijk force-pushed the rework_div_flux_operators branch from b7c5462 to 346e855 Compare February 25, 2026 21:26
Jan van Dijk added 8 commits February 26, 2026 20:41
Fixed assert(coefs.A==std::numeric_limits<double>::quiet_NaN()). NaN
comparisons always evaluate to false, even NaN == NaN. Use std::isnan(coefs.A)
instead. This fixes the debug builds.
This includes the classes ConvectionDiffusionOperator, ConvectionDiffusionTerms,
the CD and SG schemes and function templates discretize_dflux_du and
evaluate_flux_density.
Add explicit instantiations of function templates for Schemes::CD and Schemes::SG.
Consistently handle calc_coefs for boundary faces and use its results there
as well (the contribution may simply be zero). This results in a more
symmetric implementation.
* In the Operators' updateCD members, do *not* apply the scaling factor 1/gamma.
* In discretize_dflux_du: do just that, and do not multiply by -1. This means
  that the resulting discretization matrices are positive semi-definite instead
  of negative semi-definite.
* Comments have been added that make this choice clear.
Use camelCase for function names.

janvdijk commented Mar 4, 2026

@daanboer, as discussed this morning, IMHO this is now ready to be merged. This change does not change the existing behavior of the code. A review would be appreciated.

