```
    __                 __               __
   / /   ____ _____ _/ /_  ____  _____/ /_
  / /   / __ `/ __ `/ __ \/ __ \/ ___/ __/
 / /___/ /_/ / /_/ / / / / /_/ (__  ) /_
/_____/\__,_/\__, /_/ /_/\____/____/\__/
            /____/
```
Lagrangian High-order Solver for Tectonics
Laghost (LAGrangian High-Order Solver for Tectonics) solves the time-dependent momentum balance of geological media in a moving Lagrangian frame, using unstructured high-order finite element spatial discretization and explicit high-order time-stepping.
Laghost extends the capabilities of Laghos (Lagrangian High-Order Solver), one of the mini-apps of MFEM, a modular parallel C++ library for high-performance, scalable finite element discretizations. Laghos solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using high-order finite element spatial discretization and explicit time-stepping based on Runge-Kutta methods. Laghost inherits most of these features. For the numerical background, see:
Veselin A. Dobrev, Tzanio V. Kolev, and Robert N. Rieben
High-order curvilinear finite element methods for Lagrangian hydrodynamics
SIAM Journal on Scientific Computing, (34) 2012, pp. B606–B641.
Robert W. Anderson, Veselin A. Dobrev, Tzanio V. Kolev, Robert N. Rieben, and Vladimir Z. Tomov
High-Order Multi-Material ALE Hydrodynamics
SIAM Journal on Scientific Computing, (40) 2018, pp. B32–B58.
The problem that Laghost solves is formulated as a large (block) system of ordinary differential equations (ODEs) for the unknown (high-order) velocity, internal energy, stress, and mesh nodes (positions). The left-hand side of this ODE system is governed by mass matrices (one for velocity and one for energy and stress), while the right-hand side is constructed from a force matrix.
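Schematically, the semi-discrete system inherited from Laghos can be written as below; this is a sketch of the block structure only, since Laghost augments it with the stress and constitutive updates described later in this document.

```latex
% Semi-discrete Lagrangian system, following the Laghos formulation (sketch):
%   M_v : velocity mass matrix,   M_e : energy/stress mass matrix,
%   F   : time-dependent force matrix,   1 : unit vector of the discontinuous space.
\mathbf{M}_v \,\frac{d\mathbf{v}}{dt} = -\,\mathbf{F}\cdot\mathbf{1}, \qquad
\mathbf{M}_e \,\frac{d\mathbf{e}}{dt} = \mathbf{F}^{T}\cdot\mathbf{v}, \qquad
\frac{d\mathbf{x}}{dt} = \mathbf{v}
```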
Laghost supports two options for deriving and solving the ODE system, namely the full assembly and the partial assembly methods. Partial assembly is the main algorithm of interest for high orders. For low orders (e.g. 2nd order in 3D), both algorithms are of interest.
The full assembly option relies on constructing and utilizing global mass and force matrices stored in compressed sparse row (CSR) format. In contrast, the partial assembly option defines only the local action of those matrices, which is then used to perform all necessary operations. Because the local action is defined by exploiting the tensor structure of the finite element spaces, the amounts of data storage, memory transfer, and FLOPs are lower (especially for higher orders).
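As a generic illustration of the distinction (deliberately simplified, not the actual MFEM/Laghost kernels), the full-assembly path multiplies by a stored global CSR matrix, while the partial-assembly path loops over elements and applies only element-local data:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Full assembly: y = M x with M stored globally in CSR format.
struct CsrMatrix {
    std::vector<int> row_ptr, col;   // CSR index arrays
    std::vector<double> val;         // nonzero values
    void Mult(const std::vector<double> &x, std::vector<double> &y) const {
        for (std::size_t i = 0; i + 1 < row_ptr.size(); ++i) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                sum += val[k] * x[col[k]];
            y[i] = sum;
        }
    }
};

// Partial assembly: no global matrix; each element keeps only its local
// matrix and a map from local to global degrees of freedom.
struct ElementOperator {
    std::vector<int> dofs;           // local-to-global DOF map
    std::vector<double> local_mat;   // dense ndof x ndof element matrix
};

void PartialAssemblyMult(const std::vector<ElementOperator> &elems,
                         const std::vector<double> &x, std::vector<double> &y) {
    std::fill(y.begin(), y.end(), 0.0);
    for (const auto &e : elems) {
        const std::size_t n = e.dofs.size();
        for (std::size_t i = 0; i < n; ++i) {        // gather, apply, scatter
            double sum = 0.0;
            for (std::size_t j = 0; j < n; ++j)
                sum += e.local_mat[i * n + j] * x[e.dofs[j]];
            y[e.dofs[i]] += sum;
        }
    }
}
```

In the real partial-assembly kernels, the element action is further factorized using the tensor-product structure of the basis, so even the dense element matrices shown above are never stored.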
Like its parent code Laghos, Laghost can in principle support hardware devices, such as GPUs, and programming models, such as CUDA, OCCA, RAJA and OpenMP, based on MFEM version 4.1 or later. These device backends are selectable at runtime via the -d/--device command-line option. Laghost inherits this capability, but it has not yet been thoroughly tested.
Other computational motives in Laghost include the following:
- Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag.
- Explicit time-stepping loop with a specialized Runge-Kutta method of order 2 that ensures exact energy conservation on fully discrete level (RK2Avg).
- Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order.
- Moving (high-order) meshes.
- Separation between the assembly and the quadrature point-based computations.
- Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient.
- Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator.
- Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per "assembly". Both the preparation and the application costs are important for this operator.
- Domain-decomposed MPI parallelism.
- Data output for visualization and data analysis with VisIt and ParaView.
- Rock rheologies: compressible elastic medium; Mohr-Coulomb rate-independent plasticity; plastic softening of cohesion, friction coefficient, and dilation coefficient based on accumulated plastic strain.
- Mass scaling for mass matrices to achieve year-length time step size.
- Dynamic relaxation (a.k.a. Cundall's damping).
- Winkler foundation (spring) boundary condition for the bottom boundary.
- Multi-material tracking based on a composition field.
- Remeshing and quality improvement of high-order finite element meshes based on TMOP (Target-Matrix Optimization Paradigm).
- Remapping of high-order continuous (velocity and mesh nodes) and discontinuous (energy, stress, composition, plastic strain) variables from the source mesh (before remeshing) to the new mesh (after remeshing) using GSLIB and Remhos.
- Input file system (default.cfg) based on Boost.Program_options (Boost 1.42 or newer); a parsing sketch follows this list.
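The input file is read with Boost.Program_options. Below is a minimal, self-contained sketch of that pattern; the option names are hypothetical and are not Laghost's actual parameters (see defaults.cfg for those).

```cpp
#include <boost/program_options.hpp>
#include <fstream>
#include <iostream>
#include <string>

namespace po = boost::program_options;

int main(int argc, char *argv[])
{
    // Hypothetical option names for illustration only.
    std::string mesh_file;
    double max_time = 0.0;

    po::options_description desc("Options");
    desc.add_options()
        ("mesh.file", po::value<std::string>(&mesh_file), "mesh file name")
        ("control.max_time", po::value<double>(&max_time), "simulation end time");

    // Read the configuration file (first command-line argument, if given).
    std::ifstream cfg(argc > 1 ? argv[1] : "defaults.cfg");
    po::variables_map vm;
    po::store(po::parse_config_file(cfg, desc), vm);
    po::notify(vm);    // writes the parsed values into the bound variables

    std::cout << "mesh.file = " << mesh_file
              << ", control.max_time = " << max_time << "\n";
    return 0;
}
```

With this pattern, each `section.name` entry of the .cfg file is bound to a typed C++ variable, and unknown or malformed entries raise an exception at startup.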
- The file `laghost.cpp` contains the main driver with the time integration loop.
- In each time step, the ODE system of interest is constructed and solved by the class `LagrangianGeoOperator`, defined in `laghost.cpp` and implemented in the files `laghost_solver.hpp` and `laghost_solver.cpp`.
- In `LagrangianGeoOperator::RK2AvgSolver::Step`, the functions `UpdateMesh`, `SolveVelocity`, `SolveEnergy`, and `SolveStress` are called sequentially.
- All quadrature-based computations are performed in the function `LagrangianGeoOperator::UpdateQuadratureData` in `laghost_solver.cpp`.
- In `UpdateQuadratureData`, the total stress and the stress increment based on an objective stress rate (the Jaumann stress rate, written out after this list) are calculated to construct the work matrix `F_ij` (force x length; i and j index the continuous and discontinuous spaces).
- In `SolveVelocity`, a vector `rhs` is assembled by multiplying the work matrix `F_ij` with the unity vector of the discontinuous space. The `rhs` vector is then negated, and a damping force, stored in a new vector computed from the current force vector, is added to it.
- Depending on the chosen option (`-pa` for partial assembly or `-fa` for full assembly), the function `LagrangianGeoOperator::Mult` uses the corresponding method to construct and solve the final ODE system.
- The full assembly computations for all mass matrices are performed by the MFEM library, e.g., the classes `MassIntegrator` and `VectorMassIntegrator`. Full assembly of the ODE's right-hand side is performed by utilizing the class `ForceIntegrator` defined in `laghost_assembly.hpp`.
- The partial assembly computations are performed by the classes `ForcePAOperator` and `MassPAOperator` defined in `laghost_assembly.hpp`.
- When partial assembly is used, the main computational kernels are the `Mult*` functions of the classes `MassPAOperator` and `ForcePAOperator` implemented in the file `laghost_assembly.cpp`. These functions have specific versions for quadrilateral and hexahedral elements.
- The orders of the velocity and position (continuous kinematic space) and of the internal energy, stress, composition and plastic strain (discontinuous thermodynamic space) are given by the `-ok` and `-ot` input parameters, respectively (see the space-construction sketch after this list).
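For reference, the Jaumann (co-rotational) stress rate mentioned in the `UpdateQuadratureData` item has the standard form below; how it is discretized in time is defined in that function.

```latex
% Jaumann objective rate of the Cauchy stress sigma;
% W is the spin tensor, the antisymmetric part of the velocity gradient.
\overset{\triangle}{\boldsymbol{\sigma}}
  = \dot{\boldsymbol{\sigma}} - \mathbf{W}\,\boldsymbol{\sigma} + \boldsymbol{\sigma}\,\mathbf{W},
\qquad
\mathbf{W} = \tfrac{1}{2}\bigl(\nabla\mathbf{v} - (\nabla\mathbf{v})^{T}\bigr)
```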
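The two discretization spaces controlled by `-ok` and `-ot` correspond to MFEM's H1 (continuous) and L2 (discontinuous) finite element collections. A minimal sketch of how such spaces are typically set up is shown below; the variable names are illustrative and not necessarily those used in `laghost.cpp`.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch: build the continuous (kinematic) and discontinuous (thermodynamic)
// spaces on a parallel mesh, with orders taken from -ok and -ot.
void BuildSpaces(ParMesh &pmesh, int order_kin, int order_thermo)
{
   const int dim = pmesh.Dimension();

   // H1 (continuous) space for velocity and mesh positions: vector-valued.
   H1_FECollection h1_fec(order_kin, dim);
   ParFiniteElementSpace h1_fes(&pmesh, &h1_fec, dim);

   // L2 (discontinuous) space for energy, stress components, composition
   // and plastic strain: scalar-valued, one grid function per field.
   L2_FECollection l2_fec(order_thermo, dim);
   ParFiniteElementSpace l2_fes(&pmesh, &l2_fec);

   ParGridFunction velocity(&h1_fes);   // lives in the kinematic space
   ParGridFunction energy(&l2_fes);     // lives in the thermodynamic space
   velocity = 0.0;
   energy = 0.0;
}
```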
The parallel build of MFEM has the following external dependencies:
- MPI compiler
- hypre: https://github.com/hypre-space/hypre
- METIS: See below
Laghost has these additional dependencies:
- GSLIB, used for remeshing (see below)
- MFEM, the core library for arbitrary-order finite elements: https://github.com/mfem/mfem
- boost-program-options, used for the input file system: https://www.boost.org/
The MFEM library has a serial and an MPI-based parallel version, which largely share the same code base. The only prerequisite for building the serial version of MFEM is a (modern) C++ compiler, such as g++. The parallel version of MFEM requires an MPI C++ compiler, hypre and METIS.
```
$ git clone https://github.com/mfem/mfem.git
```

hypre is expected to be at the same level as the Laghost directory, e.g.:

Option 1: From Git Repository

```
$ ls
mfem
$ git clone https://github.com/hypre-space/hypre
$ ls
hypre mfem
$ cd hypre/src
$ ./configure --disable-fortran
$ make -j
```

Option 2: From Tarball

```
$ ls
mfem
$ wget https://github.com/hypre-space/hypre/archive/v2.28.0.tar.gz
$ tar -xzf v2.28.0.tar.gz
$ ls
hypre-2.28.0 mfem v2.28.0.tar.gz
$ cd hypre-2.28.0/src
$ ./configure --disable-fortran
$ make -j
$ cd ../..
$ ln -sf hypre-2.28.0 hypre
$ ls
hypre hypre-2.28.0 mfem v2.28.0.tar.gz
```

After building, ensure the hypre symbolic link points to the hypre source directory so that MFEM can find it.
From the MFEM INSTALL document:

- METIS (a family of multilevel partitioning algorithms): https://github.com/mfem/tpls

  Note: We recommend using the MFEM third-party libraries mirror at https://github.com/mfem/tpls/raw/gh-pages/metis-4.0.3.tar.gz because the original METIS webpage, http://glaros.dtc.umn.edu/gkhome/metis/metis/overview, is often down and we don't yet support the new repo https://github.com/KarypisLab/METIS.

Follow https://mfem.org/building/#parallel-mpi-version-of-mfem:

```
$ ls
hypre mfem
$ git clone https://github.com/mfem/tpls.git mfem-tpls
$ ls
hypre mfem mfem-tpls
$ cd mfem-tpls
$ tar -zxvf metis-4.0.3.tar.gz
$ cd metis-4.0.3
$ make OPTFLAGS=-Wno-error=implicit-function-declaration
$ cd ../..
$ ln -s mfem-tpls/metis-4.0.3 metis-4.0
```
Building GSLIB is optional but recommended.

From the MFEM INSTALL document:

GSLIB (optional), used when MFEM_USE_GSLIB = YES. The gslib library must be built prior to the MFEM build, as follows: download gslib-1.0.9, untar it at the same level as MFEM and create a symbolic link: "ln -s gslib-1.0.9 gslib". Build gslib in parallel or in serial based on the desired MFEM build: "make clean; make CC=mpicc" or "make clean; make CC=gcc MPI=0". Build MFEM with MFEM_USE_GSLIB=YES.

Follow the above instructions. The whole process might be as follows:

```
$ ls
hypre metis-4.0 mfem mfem-tpls
$ git clone https://github.com/Nek5000/gslib.git
$ cd gslib
$ make CC=mpicc
$ cd ..
```

Build the parallel version of MFEM:

```
$ ls
gslib-1.0.9 gslib hypre metis-4.0 mfem mfem-tpls
$ cd mfem
$ make parallel -j MFEM_USE_GSLIB=YES MFEM_USE_METIS_5=NO
```

To build the CUDA version of MFEM:

```
$ make pcuda -j MFEM_USE_GSLIB=YES MFEM_USE_METIS_5=NO
```

The above uses the master branch of MFEM.
See the MFEM building page for additional details.
```
$ apt install libboost-program-options-dev
```

Or download a release package and install it locally, e.g.:

```
$ tar xzvf boost_1_88_0.tar.gz
$ cd boost_1_88_0
$ ./bootstrap.sh
$ ./b2 --with-program_options -q
```

Clone Laghost at the same level as MFEM and build it:

```
$ git clone https://github.com/GeoFLAC/Laghost.git
$ ls
Laghost gslib-1.0.9 gslib hypre metis-4.0 mfem mfem-tpls
$ cd Laghost/
$ make -j
```

If libboost-program-options.so is locally installed, specify its location as follows:

```
$ make -j PROGRAMOPTIONS_LIBDIR=../boost_1_88_0/stage/lib
```

To run Laghost:

```
$ ./laghost
```

Parameters in defaults.cfg will be used. To use a user-provided input file, e.g. input_parameters.cfg, and run Laghost on 8 cores:

```
$ mpirun -np 8 ./laghost -i ./input_parameters.cfg
```

For other available command-line options, run:

```
$ ./laghost -h
```

Use ParaView to load results/Laghost/Laghost.pvd.
Leave a comment or ask a question in the issue tracker.
The following copyright applies to each file in the CEED software suite, unless otherwise stated in the file:
Copyright (c) 2017, Lawrence Livermore National Security, LLC. Produced at the Lawrence Livermore National Laboratory. LLNL-CODE-734707. All Rights reserved.
See files LICENSE and NOTICE for details.