
IGAnets: Isogeometric analysis networks


IGAnets is a novel approach that combines the concept of deep operator learning with the mathematical framework of isogeometric analysis.

Installation instructions

IGAnets requires a C++20 compiler, CMake, and LibTorch (the C++ API of PyTorch).

Supported CMake flags:

  • -DIGANET_BUILD_CPUONLY=ON builds IGAnets in CPU mode even if CUDA, ROCm, etc. is found (default OFF).

  • -DIGANET_BUILD_DOCS=ON builds the documentation (default OFF). To build the documentation you need Doxygen and Sphinx installed on your system.

  • -DIGANET_BUILD_PCH=ON builds IGAnets with precompiled headers (default ON).

  • -DIGANET_OPTIONAL="module1[branch];module2[branch];..." builds optional modules (default NONE).

    Optional modules are downloaded into the directory optional. If the current IGAnets checkout is a git repository (i.e., if CMake finds the directory .git), optional modules are also checked out as git repositories. Otherwise, CMake downloads the ZIP archive of the optional module.

    The following optional modules are available:

    If [branch] is not given, [main] is assumed by default. There may be further optional modules that are not publicly visible.

  • -DIGANET_WITH_GISMO=ON compiles IGAnets with support for the open-source Geometry plus Simulation Modules library G+Smo enabled (default OFF).

  • -DIGANET_WITH_MATPLOT=ON compiles IGAnets with support for the open-source library Matplot-cpp enabled (default OFF). Note that this option can cause compilation errors with GCC.

  • -DIGANET_WITH_MPI=ON compiles IGAnets with MPI support enabled (default OFF).

  • -DIGANET_WITH_OPENMP=ON compiles IGAnets with OpenMP support enabled (default ON). Note that this option can cause compilation errors with Clang.
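Several of these flags can be combined in a single configure call. The following is an illustrative sketch, run from a separate build directory inside the IGAnets checkout; the Torch_DIR path depends on where LibTorch is installed on your system:

```shell
# Illustrative: CPU-only build with G+Smo support and precompiled headers
cmake .. \
  -DTorch_DIR=$HOME/libtorch/share/cmake/Torch \
  -DIGANET_BUILD_CPUONLY=ON \
  -DIGANET_WITH_GISMO=ON \
  -DIGANET_BUILD_PCH=ON
```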

Linux

  1. Install prerequisites (CMake and LibTorch)

    Ubuntu

    apt-get install build-essential cmake unzip wget

    RedHat

    yum install make cmake gcc gcc-c++ unzip wget

    Install LibTorch

    Pre-compiled versions of LibTorch are available at PyTorch.org. Depending on your compiler toolchain, you need to choose between the pre-cxx11 and the cxx11 ABI, i.e.

    wget https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.7.1%2Bcpu.zip -O libtorch.zip
    unzip libtorch.zip -d $HOME/
    rm -f libtorch.zip

    or

    wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Bcpu.zip -O libtorch.zip
    unzip libtorch.zip -d $HOME/
    rm -f libtorch.zip

    Note that there might be a newer LibTorch version available than indicated in the above code snippet.

  2. Configure

    mkdir -p build && cd build
    cmake .. -DTorch_DIR=${HOME}/libtorch/share/cmake/Torch
  3. Compile

    make -j 8

    Depending on the number of cores of your CPU, you may want to change 8 to a different number.
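Instead of hard-coding the job count, you can query the number of available cores (a sketch; nproc is provided by GNU coreutils, and sysctl -n hw.ncpu is the macOS fallback):

```shell
# Determine the number of CPU cores to use as the make job count
JOBS=$(nproc 2>/dev/null || sysctl -n hw.ncpu)
echo "building with $JOBS parallel jobs"
```

and then compile with make -j "$JOBS".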

macOS

  1. Install prerequisites (CMake and LibTorch)

    brew install cmake pytorch

    Note that since version 2.2.0, official builds of the LibTorch library for ARM64 and X86_64 can be downloaded from PyTorch.org. If you decide to use these versions, download and unzip them as shown for the Linux installation. It is, however, recommended to install LibTorch through brew as described above, since this method is regularly tested by the IGAnets authors. Note that a newer LibTorch version than the one shown below might be available.

  2. Configure

    mkdir -p build && cd build
    cmake .. -DTorch_DIR=/opt/homebrew/Cellar/pytorch/2.7.1/share/cmake/Torch

    Note that the specific version of PyTorch and/or protobuf might be different on your system.

  3. Compile

    make -j 8

    Depending on the number of cores of your CPU, you may want to change 8 to a different number.

Compilation with CUDA support (Linux only)

  1. Install the CUDA-enabled version of LibTorch

    Note that the version must be compatible with the CUDA version installed on your system.
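This mirrors the CPU-only download shown for Linux above. For instance, for LibTorch 2.7.1 built against CUDA 12.6 the archive would be fetched as follows; this is a sketch only, and the cu126 suffix and the available versions must be checked against what PyTorch.org actually offers:

```shell
# Illustrative download of a CUDA-enabled LibTorch build (adjust versions)
wget https://download.pytorch.org/libtorch/cu126/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Bcu126.zip -O libtorch.zip
unzip libtorch.zip -d $HOME/
rm -f libtorch.zip
```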

  2. Configure and compile

    All further steps are the same as described above (Linux)

Compilation with ROCm support (Linux only)

  1. Install the ROCm-enabled version of LibTorch

    Note that the version must be compatible with the ROCm version installed on your system.
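As in the CUDA case, this amounts to downloading the matching archive from PyTorch.org. The snippet below is a sketch for LibTorch 2.7.1 built against ROCm 6.3; check PyTorch.org for the ROCm versions actually offered:

```shell
# Illustrative download of a ROCm-enabled LibTorch build (adjust versions)
wget https://download.pytorch.org/libtorch/rocm6.3/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Brocm6.3.zip -O libtorch.zip
unzip libtorch.zip -d $HOME/
rm -f libtorch.zip
```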

  2. Configure and compile

    All further steps are the same as described above (Linux)

Compilation with Intel GPU support (Linux only)

  1. Install the Intel GPU drivers and the matching PyTorch version as described here. If you do not own an Intel GPU, you can create a free account at the Intel Tiber AI Cloud, which provides free access to Intel data center GPUs for testing purposes.

  2. Install the XPU-enabled version of PyTorch in a virtual python environment

    python3 -m venv $HOME/.venv/torch-xpu
    source $HOME/.venv/torch-xpu/bin/activate
    pip install torch --index-url https://download.pytorch.org/whl/xpu
  3. Configure

    mkdir -p build && cd build
    cmake .. -DTorch_DIR=$HOME/.venv/torch-xpu/lib/python3.11/site-packages/torch/share/cmake/Torch/

    Note that on the latest Intel Tiber AI Cloud installation, ZLib is not found by default. This can be corrected by calling CMake with the additional parameters

    -DZLIB_LIBRARY=/usr/lib/x86_64-linux-gnu/libz.so.1 -DZLIB_INCLUDE_DIR=/usr/include
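Put together with the Torch_DIR from the configure step, the workaround amounts to a call of the following shape (a sketch; the Python version in the site-packages path may differ on your system):

```shell
# Configure with the ZLib workaround for Intel Tiber AI Cloud installations
cmake .. \
  -DTorch_DIR=$HOME/.venv/torch-xpu/lib/python3.11/site-packages/torch/share/cmake/Torch/ \
  -DZLIB_LIBRARY=/usr/lib/x86_64-linux-gnu/libz.so.1 \
  -DZLIB_INCLUDE_DIR=/usr/include
```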

Compilation with Intel Extension for PyTorch support (Linux only)

  1. Install the Intel Extension for PyTorch as described here.

  2. Add the CMake option -DIPEX_DIR=<path/to/IPEX/installation>
