Welcome to our repository implementing COIL, as presented in:
Vats*, S., Zhao*, M., Callaghan, P., Jia, M., Likhachev, M., Kroemer, O., & Konidaris, G.D. (2025). Optimal Interactive Learning on the Job via Facility Location Planning. In Robotics: Science and Systems (RSS).
We have tested our code on Ubuntu 20.04.
- Clone this repository and change directory into it:

  ```bash
  git clone git@github.com:shivamvats/coil.git
  cd coil
  ```

- Create a conda environment and activate it:

  ```bash
  conda env create -f environment.yaml
  conda activate coil-env
  ```
- Install the local packages `robosuite` and `mimicgen`:

  ```bash
  pip install -e deps/robosuite
  pip install -e deps/mimicgen
  ```

- Install the `adaptive_teaming` (COIL) package:

  ```bash
  pip install -e .
  ```
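Optionally, you can verify that the installation succeeded. The one-liner below is only a sanity check and assumes the packages are importable under the names `robosuite`, `mimicgen`, and `adaptive_teaming` (the names used in the install steps above):

```bash
# Optional sanity check: confirm the editable installs are visible in the active environment.
python -c "import robosuite, mimicgen, adaptive_teaming; print('COIL install OK')"
```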
Activate the conda environment:

```bash
conda activate coil-env
```

The simplest way to run COIL is via the `run_interaction_planner.py` script:

```bash
python scripts/run_interaction_planner.py env=pick_place planner=fc_pref_planner task_seq.num_tasks=10 render=True
```

This script simulates human-robot collaboration in our MuJoCo- and robosuite-based `pick_place` environment using the specified interaction planner. Results are displayed on the terminal and automatically logged in a sub-directory of the `outputs` directory.
Parameters can be modified from the command line. For example, use `render=False` to disable rendering. All parameters and options are specified in `cfg/run_interaction_planner.yaml`. See the `make_planner` and `make_env` functions in `scripts/run_interaction_planner.py` for the list of supported planners and environments.
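As a concrete illustration, the invocation below combines a few such overrides (the specific values are arbitrary; every key shown here already appears in the command above or in `cfg/run_interaction_planner.yaml`):

```bash
# Run headless on a longer task sequence by overriding config keys with key=value syntax.
python scripts/run_interaction_planner.py env=pick_place planner=fc_pref_planner \
    task_seq.num_tasks=20 render=False
```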
If you use our work or code in your research, please cite our paper:
```bibtex
@inproceedings{vats2025optimal,
  title={Optimal Interactive Learning on the Job via Facility Location Planning},
  author={Vats, Shivam and Zhao, Michelle and Callaghan, Patrick and Jia, Mingxi and Likhachev, Maxim and Kroemer, Oliver and Konidaris, George},
  booktitle={Robotics: Science and Systems (RSS)},
  year={2025},
}
```