Official repository for WACV 2026 paper: PoseAdapt: Sustainable Human Pose Estimation via Continual Learning Benchmarks and Toolkit by Muhammad Saif Ullah Khan and Didier Stricker.
Abstract: Human pose estimators are typically retrained from scratch or naively fine-tuned whenever keypoint sets, sensing modalities, or deployment domains change—an inefficient, compute-intensive practice that rarely matches field constraints. We present PoseAdapt, an open-source framework and benchmark suite for continual pose model adaptation. PoseAdapt defines domain-incremental and class-incremental tracks that simulate realistic changes in density, lighting, and sensing modality, as well as skeleton growth. The toolkit supports two workflows: (i) Strategy Benchmarking, which lets researchers implement continual learning (CL) methods as plugins and evaluate them under standardized protocols; and (ii) Model Adaptation, which allows practitioners to adapt strong pretrained models to new tasks with minimal supervision. We evaluate representative regularization-based methods in single-step and sequential settings. Benchmarks enforce a fixed lightweight backbone, no access to past data, and tight per-step budgets. This isolates adaptation strategy effects, highlighting the difficulty of maintaining accuracy under strict resource limits. PoseAdapt connects modern CL techniques with practical pose estimation needs, enabling adaptable models that improve over time without repeated full retraining.
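The abstract mentions evaluating representative regularization-based continual learning methods. As a rough illustration of what such a method does, the sketch below implements an Elastic-Weight-Consolidation-style quadratic penalty that anchors new-task parameters to their old-task values, weighted by per-parameter importance. This is a hypothetical minimal example for exposition only; the function name and structure are assumptions and do not reflect PoseAdapt's actual API.

```python
# Illustrative sketch of a regularization-based continual learning penalty
# (EWC-style). Hypothetical example; not PoseAdapt's actual interface.

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Quadratic penalty anchoring parameters to their old-task values,
    weighted by (diagonal) Fisher-information importance estimates."""
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# Toy usage: two of three parameters have drifted from their old-task values.
params     = [1.0, 2.0, 3.0]
old_params = [1.0, 1.5, 2.0]
fisher     = [0.1, 0.5, 1.0]   # importance of each parameter to the old task
print(ewc_penalty(params, old_params, fisher, lam=2.0))
```

In training, this penalty would be added to the new-task loss, discouraging changes to parameters deemed important for previously learned tasks.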
PoseAdapt is an open-source framework for benchmarking continual learning strategies in human pose estimation and adapting pretrained models to new skeletons and domains with minimal supervision.
PyTorch 2.1.0 and MMPose 1.3.0 are required to run this project. Please follow the MMPose installation guide to set up the environment, and then install PoseAdapt:
```shell
pip install .      # standard install
pip install -e .   # editable install
```

Use the `-e` flag to install the package in editable mode when contributing to the codebase (e.g., implementing new continual learning strategies). We recommend using a virtual environment such as venv or conda to avoid dependency conflicts.
Please refer to the User Guide for detailed instructions on using PoseAdapt for both adapting pretrained models and benchmarking continual learning strategies.
If you use PoseAdapt in your research, please cite the following paper:
@inproceedings{khan2026poseadapt,
title={PoseAdapt: Sustainable Human Pose Estimation via Continual Learning Benchmarks and Toolkit},
author={Muhammad Saif Ullah Khan and Didier Stricker},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year={2026}
}
- Code: Licensed under the PolyForm Noncommercial License 1.0.0. Non-commercial scientific and educational use is permitted; commercial use requires separate permission.
- Third-Party Code:
- Datasets:

See the NOTICE file for a summary of licensing and attribution.
