IGAnets is a novel approach that combines deep operator learning with the mathematical framework of isogeometric analysis.
IGAnets require a C++20 compiler, CMake and LibTorch (the C++ API of PyTorch).
Supported CMake flags:
- `-DIGANET_BUILD_CPUONLY=ON` builds IGAnets in CPU-only mode even if CUDA, ROCm, etc. are found (default `OFF`).
- `-DIGANET_BUILD_DOCS=ON` builds the documentation (default `OFF`). Building the documentation requires Doxygen and Sphinx to be installed on your system.
- `-DIGANET_BUILD_PCH=ON` builds IGAnets with precompiled headers (default `ON`).
- `-DIGANET_OPTIONAL="module1[branch];module2[branch];..."` builds optional modules (default `NONE`). Optional modules are downloaded into the directory `optional`. If the current IGAnets checkout is a git repository (i.e., if CMake finds the directory `.git`), optional modules are also checked out as git repositories; otherwise, CMake downloads the ZIP archive of the optional module. The following optional modules are available:
  - `examples[main]`: Examples
  - `unittests[main]`: Unit tests
  - `perftests[main]`: Performance tests
  - `python[main]`: Python bindings
  - `matlab[main]`: MATLAB bindings

  If `[branch]` is not given, `[main]` is assumed by default. Further optional modules may exist that are not publicly visible.
- `-DIGANET_WITH_GISMO=ON` compiles IGAnets with support for the open-source Geometry plus Simulation Modules library G+Smo enabled (default `OFF`).
- `-DIGANET_WITH_MATPLOT=ON` compiles IGAnets with support for the open-source library Matplot-cpp enabled (default `OFF`). Note that this option can cause compilation errors with GCC.
- `-DIGANET_WITH_MPI=ON` compiles IGAnets with MPI support enabled (default `OFF`).
- `-DIGANET_WITH_OPENMP=ON` compiles IGAnets with OpenMP support enabled (default `ON`). Note that this option can cause compilation errors with Clang.
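To illustrate how these flags compose, here is a sketch of a configure call that enables unit tests and G+Smo support; the build directory layout and the LibTorch location under `$HOME/libtorch` are assumptions, adjust them to your system:

```shell
# Out-of-source configure with two optional modules and G+Smo enabled.
# Paths are examples only; point Torch_DIR at your own LibTorch unpack dir.
mkdir -p build && cd build
cmake .. \
  -DTorch_DIR=${HOME}/libtorch/share/cmake/Torch \
  -DIGANET_OPTIONAL="unittests[main];examples[main]" \
  -DIGANET_WITH_GISMO=ON
```

Any combination of the flags listed above can be appended to this call in the same way.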
**Linux**

- Install prerequisites (CMake and LibTorch)

  On Debian/Ubuntu:

  ```shell
  apt-get install build-essential cmake unzip wget
  ```

  On RHEL/CentOS:

  ```shell
  yum install make cmake gcc gcc-c++ unzip wget
  ```

  Pre-compiled versions of LibTorch are available at PyTorch.org. Depending on your compiler toolchain you need to choose between the pre-cxx11 and the cxx11 ABI, i.e.

  ```shell
  wget https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.7.1%2Bcpu.zip -O libtorch.zip
  unzip libtorch.zip -d $HOME/
  rm -f libtorch.zip
  ```

  or

  ```shell
  wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Bcpu.zip -O libtorch.zip
  unzip libtorch.zip -d $HOME/
  rm -f libtorch.zip
  ```

  Note that there might be a newer LibTorch version available than indicated in the above code snippets.
- Configure

  ```shell
  cmake .. -DTorch_DIR=${HOME}/libtorch/share/cmake/Torch
  ```
- Compile

  ```shell
  make -j 8
  ```

  Depending on the number of cores of your CPU you may want to change 8 to a different number.
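The two LibTorch download variants above differ only in the ABI tag in the file name. As a small illustration of the naming scheme, the URL can be assembled from a version number and an optional ABI tag (the version shown is simply the one used in the snippets above; check PyTorch.org for newer releases):

```shell
# Assemble the CPU LibTorch download URL.
# Set ABI to "cxx11-abi" for the cxx11 ABI, or leave it empty for pre-cxx11.
VERSION=2.7.1
ABI=cxx11-abi
URL="https://download.pytorch.org/libtorch/cpu/libtorch-${ABI:+${ABI}-}shared-with-deps-${VERSION}%2Bcpu.zip"
echo "$URL"
```

With `ABI` left empty the same expression yields the pre-cxx11 URL.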
**macOS**

- Install prerequisites (CMake and LibTorch)

  ```shell
  brew install cmake pytorch
  ```

  Note that since version 2.2.0, official builds of the LibTorch library for ARM64 and x86_64 can be downloaded from PyTorch.org:

  - https://download.pytorch.org/libtorch/cpu/libtorch-macos-x86_64-2.7.1.zip
  - https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.7.1.zip

  If you decide to use these versions, download and unzip them as shown for the Linux installation. It is, however, recommended to install LibTorch through `brew` as described above, since this method is tested regularly by the IGAnets authors. Note that there might be a newer LibTorch version available than indicated in the above links.
- Configure

  ```shell
  cmake .. -DTorch_DIR=/opt/homebrew/Cellar/pytorch/2.7.1/share/cmake/Torch
  ```

  Note that the specific version of PyTorch and/or protobuf might be different on your system.
- Compile

  ```shell
  make -j 8
  ```

  Depending on the number of cores of your CPU you may want to change 8 to a different number.
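Since the Homebrew Cellar path contains the installed PyTorch version, it has to be updated whenever `brew` upgrades the package. One way around this, sketched below under the assumption that PyTorch was installed via `brew` as shown above, is to query the install prefix from `brew` itself:

```shell
# brew --prefix resolves to a stable symlink into the versioned Cellar
# directory, so the configure line keeps working across upgrades.
TORCH_DIR="$(brew --prefix pytorch)/share/cmake/Torch"
cmake .. -DTorch_DIR="$TORCH_DIR"
```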
**CUDA**

- Install the CUDA-enabled version of LibTorch

  - https://download.pytorch.org/libtorch/cu128/libtorch-shared-with-deps-2.7.1%2Bcu128.zip
  - https://download.pytorch.org/libtorch/cu128/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Bcu128.zip

  Note that the LibTorch version must be compatible with the CUDA version installed on your system.

- Configure and compile

  All further steps are the same as described above (Linux).
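To pick a compatible LibTorch build, it helps to first check which CUDA version is installed. A sketch using the standard NVIDIA tools (their availability depends on your driver and toolkit installation):

```shell
# Driver-side CUDA version (requires the NVIDIA driver to be installed).
nvidia-smi
# Toolkit-side CUDA version (requires the CUDA toolkit to be installed).
nvcc --version
```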
**ROCm**

- Install the ROCm-enabled version of LibTorch

  - https://download.pytorch.org/libtorch/rocm6.3/libtorch-shared-with-deps-2.7.1%2Brocm6.3.zip
  - https://download.pytorch.org/libtorch/rocm6.3/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Brocm6.3.zip

  Note that the LibTorch version must be compatible with the ROCm version installed on your system.

- Configure and compile

  All further steps are the same as described above (Linux).
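Analogously to CUDA, the installed ROCm version can be checked before choosing a LibTorch build. A sketch assuming a default ROCm installation under `/opt/rocm`:

```shell
# Version file shipped with a default ROCm installation.
cat /opt/rocm/.info/version
# Alternatively, query the HIP toolchain if it is installed.
hipconfig --version
```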
**Intel XPU**

- Install the Intel GPU drivers and PyTorch version as described here. If you do not own an Intel GPU, you can create a free account at the Intel Tiber AI Cloud, which provides free access to Intel datacenter GPUs for testing purposes.

- Install the XPU-enabled version of PyTorch in a virtual Python environment

  ```shell
  python3 -m venv $HOME/.venv/torch-xpu
  source $HOME/.venv/torch-xpu/bin/activate
  pip install torch --index-url https://download.pytorch.org/whl/xpu
  ```
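After the install, the XPU backend can be sanity-checked from within the activated virtual environment; `torch.xpu.is_available()` is part of the PyTorch 2.4+ Python API and reports whether an Intel GPU is visible:

```shell
# Run inside the activated torch-xpu environment.
python3 -c "import torch; print(torch.__version__); print(torch.xpu.is_available())"
```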
- Configure

  ```shell
  cmake .. -DTorch_DIR=$HOME/.venv/torch-xpu/lib/python3.11/site-packages/torch/share/cmake/Torch/
  ```

  Note that on the latest Intel Tiber AI Cloud installation, ZLib is not found by default. This can be corrected by calling CMake with the additional parameters

  ```shell
  -DZLIB_LIBRARY=/usr/lib/x86_64-linux-gnu/libz.so.1 -DZLIB_INCLUDE_DIR=/usr/include
  ```
- Install the Intel Extension for PyTorch as described here.

- Add the CMake option

  ```shell
  -DIPEX_DIR=<path/to/IPEX/installation>
  ```
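Taken together, a configure call for the XPU build might look as follows. This is a sketch only: the Python version in the site-packages path and the ZLib paths are assumptions from the steps above, and `<path/to/IPEX/installation>` is a placeholder you must fill in with your own IPEX location:

```shell
cmake .. \
  -DTorch_DIR=$HOME/.venv/torch-xpu/lib/python3.11/site-packages/torch/share/cmake/Torch/ \
  -DIPEX_DIR=<path/to/IPEX/installation> \
  -DZLIB_LIBRARY=/usr/lib/x86_64-linux-gnu/libz.so.1 \
  -DZLIB_INCLUDE_DIR=/usr/include
```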