I use poetry. It has several advantages over pip and conda; some of the important ones:
- You can specify the Python version
- You get a reproducible environment, because it pins transitive dependencies and writes them to a poetry.lock file
- It supports uninstalling all libraries not listed in the current poetry.lock file
- This one actually comes before doing any ops, but the library is so useful that I have to throw it in for model/data testing during development: https://deepchecks.com/
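As a rough illustration (my own sketch, not taken from the deepchecks site), this is roughly how I'd wire it into development. It assumes a recent deepchecks release with the `deepchecks.tabular` API; the synthetic data and RandomForest model are stand-ins for your own DataFrames and model:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

# Synthetic stand-in data; replace with your own train/test DataFrames.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])
df["target"] = y
train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

# Stand-in model; any scikit-learn-compatible estimator works here.
model = RandomForestClassifier(random_state=0).fit(
    train_df.drop(columns="target"), train_df["target"]
)

train_ds = Dataset(train_df, label="target")
test_ds = Dataset(test_df, label="target")

# Run the built-in suite of data-integrity, drift and model-evaluation checks
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("deepchecks_report.html")  # browse passed/failed checks
```

The nice part is that one suite call covers data integrity, train/test drift and model evaluation, so it slots easily into a notebook or a CI job.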
Here is a PyData talk on which monitoring metrics to implement: https://youtu.be/wWxqnZb-LSk. If you just want to scan the slides, check this repo.
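One class of metrics talks like this usually cover is input/feature drift. As a rough illustration (my own sketch, not taken from the talk or the slides), here is a minimal population stability index (PSI) check for a single numeric feature:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: a common drift metric comparing a production distribution (`actual`)
    against a reference distribution (`expected`), e.g. the training data."""
    # Bin edges come from the reference distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Small epsilon avoids division by zero / log of zero in empty bins
    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Toy example: flag drift above the commonly cited 0.2 threshold
rng = np.random.default_rng(0)
reference = rng.normal(0, 1, 10_000)    # stand-in for training feature values
production = rng.normal(0.3, 1, 5_000)  # stand-in for live feature values
psi = population_stability_index(reference, production)
print(f"PSI = {psi:.3f}, drift suspected: {psi > 0.2}")
```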
Here are some of my favourite resources:
Eric Breck, Shanqing Cai, Eric Nielsen, Michael Salib, D. Sculley. "What's your ML test score? A rubric for ML production systems." Reliable Machine Learning in the Wild, NIPS 2016 Workshop (2016)
Abstract: Using machine learning in real-world production systems is complicated by a host of issues not found in small toy examples or even large offline research experiments. Testing and monitoring are key considerations for assessing the production-readiness of an ML system. But how much testing and monitoring is enough? We present an ML Test Score rubric based on a set of actionable tests to help quantify these issues.