The systematic review and technology community has a large appetite for machine-learning-assisted screening, and it has been built into several systematic review platforms.
Despite arguments that reliable stopping criteria are vital to the responsible use of machine learning for this task, there is resistance to using or implementing such criteria.
This stopping app provides an interface that lets users decide for themselves when to stop on ML-prioritised screening runs, while being shown only the information that would have been available to them at the time.
This framework is intended to test whether humans can judge an appropriate stopping point (given a recall target and a tolerance for missing that target over multiple runs).
Further, it can be extended to elicit preferences and risk tolerances, e.g. by designing choice experiments that allocate users a budget that can be spent on screening, with rewards for meeting targets and penalties for missing them.
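To make the setup above concrete, here is a minimal TypeScript sketch of how a stopping decision and a budget-based payoff could be scored. The names (`ScreeningRun`, `recallAtStop`, `payoff`) and the payoff scheme are illustrative assumptions, not the app's actual implementation:

```typescript
// Hypothetical scoring of a stopping decision on an ML-prioritised run.
interface ScreeningRun {
  labels: boolean[]; // true = relevant record, in ML-prioritised screening order
}

// Recall achieved if the user stops after screening the first `stopIndex` records.
function recallAtStop(run: ScreeningRun, stopIndex: number): number {
  const totalRelevant = run.labels.filter(Boolean).length;
  if (totalRelevant === 0) return 1; // nothing to find, target trivially met
  const found = run.labels.slice(0, stopIndex).filter(Boolean).length;
  return found / totalRelevant;
}

// Assumed budget scheme: each screened record costs `costPerRecord`, and
// missing the recall target incurs a flat `penalty`; the reward is what
// remains of the budget.
function payoff(
  run: ScreeningRun,
  stopIndex: number,
  target: number,
  budget: number,
  costPerRecord: number,
  penalty: number
): number {
  const remaining = budget - stopIndex * costPerRecord;
  return recallAtStop(run, stopIndex) >= target ? remaining : remaining - penalty;
}
```

For example, stopping early keeps more budget but risks the penalty if recall falls short of the target, which is exactly the tradeoff a choice experiment could elicit.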
- Backend work (saving results)
- Study design
  - Can we judge stopping accurately?
  - How do users approach tradeoffs (choice experiment)?
- Frontend work (interface)
Make sure [nvm](https://github.com/nvm-sh/nvm) is installed.

Then install and use the Node version for this project:

```shell
nvm install v22.16.0
nvm use v22.16.0
```

Now install the npm packages we need for this app:

```shell
npm install
```

Download the database of screening runs from mycloud and put it in `src/lib/data`.
You can run the stopping app locally with:

```shell
npm run dev

# or start the server and open the app in a new browser tab
npm run dev -- --open
```

To create a production version of the app:

```shell
npm run build
```

You can preview the production build with `npm run preview`.
To deploy your app, you may need to install an adapter for your target environment.