AMRaCut is a high-performance, MPI-based graph partitioning library designed for mesh partitioning on distributed-memory parallel systems. It is built to scale to very large problems while allowing a maximum load imbalance of 1.5x-2x relative to the optimal partitioning. AMRaCut is ideal for use cases where a crude initial partitioning already exists (from a space-filling-curve (SFC) method or a previous refinement iteration) and a better partitioning is needed. It is optimized for partitioning meshes into large numbers (1,000+ to 10,000+) of partitions for distributed-memory numerical solvers and simulations; in such cases a slight partition imbalance is acceptable as long as AMRaCut produces partitions with minimized boundaries. AMRaCut provides a C API for distributed graph partitioning similar to those of ParMETIS and PT-Scotch.

Visual comparison of partitioning a small (100 x 100) tetrahedral mesh.
To build AMRaCut, you need:
- C++ compiler with C++20 support
- MPI implementation (e.g., OpenMPI, MPICH, IntelMPI)
- CMake (version 3.20 or higher)
Build and install with CMake:

mkdir build
cd build
cmake ..
make
make install

By default, AMRaCut is installed in build/install:
build/
└── install/
    ├── include/
    │   └── amracut.h
    └── lib/
        └── libamracut.so
Optionally, AMRaCut can be configured with a different integer width (default: 32-bit):
cmake -DAMRACUT_INTEGER_WIDTH=64 .. # 64-bit integers
cmake -DAMRACUT_INTEGER_WIDTH=32 .. # 32-bit integers (default)
cmake -DAMRACUT_INTEGER_WIDTH=16 .. # 16-bit integers
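
The configured width presumably determines the size of the amracut_uint_t type used throughout the API. As a minimal, hypothetical sketch (check_amracut_integer_width is not part of AMRaCut), an application can assert that the library build matches its expectation:

#include "amracut.h"
#include <assert.h>

// Hypothetical sanity check, assuming AMRACUT_INTEGER_WIDTH controls the
// width of amracut_uint_t (4 bytes for the default 32-bit build).
static void check_amracut_integer_width(void)
{
  assert(sizeof(amracut_uint_t) == 4); // use 2 for 16-bit or 8 for 64-bit builds
}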
A typical graph partitioning workflow looks like this:

#include "amracut.h"
#include <mpi.h>      // MPI_Init, MPI_Comm
#include <stdlib.h>   // malloc, free
#include <stdbool.h>  // bool argument of amracut_partgraph

// Initialize MPI
MPI_Init(&argc, &argv);
MPI_Comm comm = MPI_COMM_WORLD;
// Create control structure
amracut_ctrl ctrl;
// Set up distributed graph data structures
amracut_uint_t *vtx_dist = /* ... */;
amracut_uint_t *xadj = /* ... */;
amracut_uint_t *adjncy = /* ... */;
amracut_uint_t *vwgt = /* ... */; // NULL for unweighted
amracut_uint_t *adjwgt = /* ... */; // NULL for unweighted
amracut_uint_t wgtflag = AMRACUT_UNWEIGHTED; // or one of AMRACUT_VTX_WEIGHTED | AMRACUT_EDGE_WEIGHTED | AMRACUT_VTX_EDGE_WEIGHTED
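// For weighted graphs, provide the corresponding vwgt and/or adjwgt arrays and set
// wgtflag to AMRACUT_VTX_WEIGHTED, AMRACUT_EDGE_WEIGHTED, or AMRACUT_VTX_EDGE_WEIGHTED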
// Setup AMRaCut
amracut_setup(&ctrl, vtx_dist, xadj, adjncy, vwgt, adjwgt, wgtflag, &comm);
// Allocate memory for partition labels
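// local_vertex_count is the number of locally owned vertices
// (with ParMETIS-style vtx_dist, this equals vtx_dist[rank + 1] - vtx_dist[rank])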
amracut_uint_t *parts = malloc(local_vertex_count * sizeof(amracut_uint_t));
// Compute partitioning
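// Arguments after ctrl: output partition labels, use_diffusion flag, verbosity level
// (see the amracut_partgraph reference below)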
amracut_partgraph(&ctrl, parts, true, 1);
// Clean up
amracut_destroy(&ctrl);
free(parts);
// Finalize MPI
MPI_Finalize();
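
In the example above, the /* ... */ placeholders stand for the application's distributed CSR graph. As an illustration only, the hypothetical helper below (build_chain_graph is not part of the AMRaCut API) fills vtx_dist, xadj, and adjncy for a simple 1D chain graph, assuming the ParMETIS-style conventions the API follows: vtx_dist holds the per-rank global vertex ID offsets, and adjncy stores global neighbor IDs.

#include "amracut.h"
#include <stdlib.h>

// Builds the distributed CSR arrays for a 1D chain graph in which global
// vertex v is connected to v - 1 and v + 1. Each rank owns local_n vertices.
void build_chain_graph(int rank, int nprocs, amracut_uint_t local_n,
                       amracut_uint_t **vtx_dist_out,
                       amracut_uint_t **xadj_out,
                       amracut_uint_t **adjncy_out)
{
  amracut_uint_t global_n = (amracut_uint_t)nprocs * local_n;

  // vtx_dist[p] .. vtx_dist[p + 1] is the global vertex ID range owned by rank p
  amracut_uint_t *vtx_dist = malloc((nprocs + 1) * sizeof(amracut_uint_t));
  for (int p = 0; p <= nprocs; p++)
    vtx_dist[p] = (amracut_uint_t)p * local_n;

  // CSR adjacency; 2 * local_n is an upper bound on the local edge count
  amracut_uint_t *xadj   = malloc((local_n + 1) * sizeof(amracut_uint_t));
  amracut_uint_t *adjncy = malloc(2 * local_n * sizeof(amracut_uint_t));
  amracut_uint_t edge = 0;
  xadj[0] = 0;
  for (amracut_uint_t i = 0; i < local_n; i++)
  {
    amracut_uint_t v = vtx_dist[rank] + i; // global ID of local vertex i
    if (v > 0)
      adjncy[edge++] = v - 1;
    if (v + 1 < global_n)
      adjncy[edge++] = v + 1;
    xadj[i + 1] = edge;
  }

  *vtx_dist_out = vtx_dist;
  *xadj_out     = xadj;
  *adjncy_out   = adjncy;
}

Each rank would then pass these arrays, with local_vertex_count = local_n, to amracut_setup as in the example above.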
For octree meshes, AMRaCut provides a convenience interface that performs setup, partitioning, and cleanup in a single call:

#include "amracut.h"
#include <mpi.h>
#include <stdlib.h>

// Initialize MPI
MPI_Init(&argc, &argv);
MPI_Comm comm = MPI_COMM_WORLD;
// Set up octree data structures
amracut_uint_t *vtx_dist = /* ... */;
oct_element *local_elements = /* ... */;
// Allocate memory for partition labels
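// local_element_count is the number of locally owned octree elements
// (presumably vtx_dist[rank + 1] - vtx_dist[rank], as in the graph example above)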
amracut_uint_t *parts = malloc(local_element_count * sizeof(amracut_uint_t));
// Compute octree partitioning (combined setup, partitioning, and cleanup)
amracut_partgraph_octree(vtx_dist, local_elements, parts, &comm);
// Free memory
free(parts);
// Finalize MPI
MPI_Finalize();

AMRaCut exposes the following C API.

amracut_uint_t amracut_setup(amracut_ctrl *ctrl,
                             const amracut_uint_t *vtx_dist,
                             const amracut_uint_t *xadj,
                             const amracut_uint_t *adjncy,
                             const amracut_uint_t *vwgt,
                             const amracut_uint_t *adjwgt,
                             const amracut_uint_t wgtflag,
                             MPI_Comm *comm);

Initializes the local graph structure and temporary buffers. Must be called before amracut_partgraph and amracut_destroy.
amracut_uint_t amracut_partgraph(amracut_ctrl *ctrl,
                                 amracut_uint_t *parts,
                                 bool use_diffusion,
                                 int verbose);

Computes the graph partitioning after amracut_setup has been called.
amracut_uint_t amracut_destroy(amracut_ctrl *ctrl);

Destroys and frees all internal data structures.
amracut_uint_t amracut_partgraph_octree(const amracut_uint_t *vtx_dist,
                                        const oct_element *local_elements,
                                        amracut_uint_t *parts,
                                        MPI_Comm *comm);

Computes graph partitioning for octree element graphs. This is a convenience wrapper that handles setup, partitioning, and cleanup in one call.
amracut_ctrl: An opaque structure that holds all AMRaCut internal data structures.
oct_element: A structure representing an octree element with connectivity information.


