mlflow-backend brings MLflow-managed models to NVIDIA Triton Inference Server without re-implementing serving code.
The backend inspects the MLflow model metadata at load time, selects the right execution strategy, and exposes the model behind Triton's Python backend interface.
This repository also ships helper tooling that automates creation of the artifacts Triton expects (Python backend stub, conda execution environment archive, and model config.pbtxt).
- Auto-detects MLflow model flavors, including generic `pyfunc`, Hugging Face `transformers`, and `sentence_transformers`, parsing their input/output schemas for use in Triton.
- Produces Triton-ready configuration files, `conda-pack` execution environments, and Python backend stubs through the `mlflow-backend-utils` CLI.
- Tested with Triton `25.08` (although the library should support a wide range of Tritonserver versions) and Python `3.12+`, with tests covering unit, integration, and end-to-end packaging flows.
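To make the schema parsing concrete, the translation from MLflow signature dtypes to Triton `config.pbtxt` data types can be sketched as below. The `MLFLOW_TO_TRITON` table and `triton_dtype` helper are hypothetical names for illustration, not this library's actual API:

```python
# Hypothetical sketch: map MLflow signature dtype names to Triton config
# data types. Not the library's real API; the table is an assumption based
# on the standard MLflow and Triton type names.
MLFLOW_TO_TRITON = {
    "string": "TYPE_STRING",
    "boolean": "TYPE_BOOL",
    "integer": "TYPE_INT32",
    "long": "TYPE_INT64",
    "float": "TYPE_FP32",
    "double": "TYPE_FP64",
}

def triton_dtype(mlflow_dtype: str) -> str:
    """Translate an MLflow signature dtype name into a Triton data type."""
    try:
        return MLFLOW_TO_TRITON[mlflow_dtype]
    except KeyError:
        raise ValueError(f"unsupported MLflow dtype: {mlflow_dtype!r}")

print(triton_dtype("double"))  # TYPE_FP64
```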
```shell
pip install mlflow_backend
```

The CLI is exposed as `mlflow-backend-utils` once the package is installed.
To use the backend, it must be installed into the Triton container image at `/opt/tritonserver/backends/mlflow`, where Tritonserver's Python backend can find it. This can be done in a number of ways:
- Build a custom Docker image that includes the backend:

  ```dockerfile
  FROM nvcr.io/nvidia/tritonserver:25.08-py3
  RUN git clone https://github.com/wwg-internal/mlflow_backend.git && \
      mv ./mlflow_backend/src/mlflow_backend /opt/tritonserver/backends/mlflow && \
      rm -rf ./mlflow_backend
  ```
- Mount the backend code into the container at runtime:

  ```shell
  mlflow_backend_path=$(python -c "import mlflow_backend; from pathlib import Path; print(Path(mlflow_backend.__file__).parent.absolute())")
  docker run --rm -p8000:8000 -p8001:8001 -p8002:8002 \
    -v ./model-repo:/models \
    -v ${mlflow_backend_path}:/opt/tritonserver/backends/mlflow \
    nvcr.io/nvidia/tritonserver:25.08-py3 \
    tritonserver --model-repository=/models
  ```
- Install the backend at container startup, for example using git:

  ```shell
  git clone https://github.com/wwg-internal/mlflow_backend.git
  mv ./mlflow_backend/src/mlflow_backend /opt/tritonserver/backends/mlflow
  rm -rf ./mlflow_backend
  tritonserver --model-repository=/models
  ```
- Export your MLflow model – have a local copy of the MLflow model directory (it contains `MLmodel`, artifacts, and optional Python environment files).
- Build the Python backend stub – required when your model specifies a different Python version than the tritonserver default (3.12). By default the stub is built with Docker (`nvcr.io/nvidia/tritonserver:<version>-py3`); a Kubernetes-based builder is also available for environments where Docker is not an option.

  ```shell
  mlflow-backend-utils build-stub \
    --python-version 3.13 \
    --triton-version r25.08 \
    --output-path triton_python_backend_stub
  ```
- Create a conda-pack execution environment (optional but recommended for non-system dependencies). The tool attempts a Docker-based build first, falling back to a local conda build when possible.

  ```shell
  mlflow-backend-utils build-env \
    --python-version 3.13 \
    --requirements path/to/requirements.txt \
    --output-path conda-pack.tar.gz
  ```
- Generate the Triton model configuration:

  ```shell
  mlflow-backend-utils build-config \
    --model-path path/to/mlflow_model \
    --conda-pack-path conda-pack.tar.gz \
    --default-max-batch-size 1024 > config.pbtxt
  ```
- Assemble the Triton model repository following the layout below. Multiple versions can be added as folders (`2/`, `3/`, …) as usual for Triton.

  ```
  triton-repo/
    my_model/
      config.pbtxt
      triton_python_backend_stub
      conda-pack.tar.gz
      1/
        MLmodel
        python_env.yaml
        model artifacts ...
  ```
- Start Triton pointing at the repository:

  ```shell
  backend_path=$(python -c "import mlflow_backend; from pathlib import Path; print(Path(mlflow_backend.__file__).parent.absolute())")
  docker run --rm -p8000:8000 -p8001:8001 -p8002:8002 \
    -v ./triton-repo:/models \
    -v ${backend_path}:/opt/tritonserver/backends/mlflow \
    nvcr.io/nvidia/tritonserver:25.08-py3 \
    tritonserver --model-repository=/models
  ```
Once Triton loads your model, send requests using the standard Triton HTTP/gRPC clients. The backend converts Triton tensors (strings, numpy arrays, Pandas data frames, Torch tensors) into what your MLflow model expects and returns results in Triton's format.
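As a rough illustration of the string-tensor side of that conversion: on the wire, Triton serializes a BYTES tensor as repeated length-prefixed elements (a 4-byte little-endian length followed by the payload). The `deserialize_bytes_tensor` helper below is a sketch of decoding that format into Python strings, not the backend's real API:

```python
# Sketch of the kind of conversion involved (illustrative, not the backend's
# actual code): Triton's BYTES tensor serialization is a sequence of
# <4-byte little-endian length><payload> elements.
import struct

def deserialize_bytes_tensor(buf: bytes) -> list[str]:
    strings, offset = [], 0
    while offset < len(buf):
        (length,) = struct.unpack_from("<I", buf, offset)
        offset += 4
        strings.append(buf[offset:offset + length].decode("utf-8"))
        offset += length
    return strings

payload = struct.pack("<I", 5) + b"hello" + struct.pack("<I", 5) + b"world"
print(deserialize_bytes_tensor(payload))  # ['hello', 'world']
```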
- `build-env`: Creates a `conda-pack.tar.gz` suitable for uploading alongside the model. Supports Linux x86/ARM targets, Docker-based or local builds, and custom package indexes.
- `build-stub`: Compiles the Triton Python backend stub for the requested Python/Triton version pair. Supports Docker and Kubernetes builders and can publish the artifact to S3 from cluster jobs.
- `build-config`: Reads the MLflow `MLmodel` metadata, infers signatures, and emits an editable `config.pbtxt`. Special handling is included for transformer models whose schemas are dynamic.

Run `mlflow-backend-utils <command> --help` for full option lists.
- Generic `pyfunc` models (numpy arrays, Pandas DataFrames/Series, dictionaries, and lists).
- Hugging Face `transformers` models saved through `mlflow.transformers`, including sequence classification, token classification, QA, text generation, and translation pipelines.
- `sentence_transformers` models with GPU/CPU auto-detection.
- PyTorch models logged via MLflow's `pytorch` flavor (loaded through `pyfunc`).

If your flavor is not listed, the backend defaults to the general `pyfunc` adapter and raises clear errors when an unsupported return type is encountered.
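The flavor-selection logic with its `pyfunc` fallback can be sketched as follows. The adapter names and `select_adapter` function are hypothetical, chosen only to illustrate the behavior described above:

```python
# Hedged sketch of flavor dispatch (hypothetical names, not the real API):
# specialized flavors win; anything exposing python_function falls back to
# the generic pyfunc adapter; otherwise a clear error is raised.
ADAPTERS = {
    "transformers": "TransformersAdapter",
    "sentence_transformers": "SentenceTransformersAdapter",
}

def select_adapter(flavors: set[str]) -> str:
    for flavor, adapter in ADAPTERS.items():
        if flavor in flavors:
            return adapter
    if "python_function" in flavors:
        return "PyFuncAdapter"  # generic fallback
    raise ValueError(f"no usable flavor among {sorted(flavors)}")

print(select_adapter({"python_function", "sklearn"}))  # PyFuncAdapter
```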
All tests can be run through nox (install with `pip install nox`):

```shell
# List all available nox sessions
nox -l

# Run unit tests
nox -s unit_tests

# Run integration tests
# Requires Docker, kind (https://kind.sigs.k8s.io/), and kubectl
kind create cluster --config ./tests/integration/manifests/kind.yaml
kubectl apply -f ./tests/integration/manifests/localstack.yaml
nox -s integration_tests

# Run e2e python tests
# These run the test models inside a Python environment with a specified mlflow version installed
nox -s "e2e_python_tests(mlflow='2.22.1')"

# Run e2e triton tests
# These run the test models inside a tritonserver container with a specified Python version
nox -s "e2e_triton_tests(python_version='3.13')"
```

We are actively growing mlflow_backend and would love your help. Please read the contributing guide for details on workflows, Conventional Commit requirements, and the expectation that every feature or bug fix ships with tests covering the relevant edge cases. Issues and pull requests that outline Triton or MLflow version constraints are especially helpful while the project is young.