This repo contains code used to
- perform real-time optical flow tracking of muscle deformation (i.e., contour motion) from time series ultrasound frames; and
- record and visualize this data.
If you use this code for academic purposes, please cite the following publication: Laura A. Hallock, Bhavna Sud, Chris Mitchell, Eric Hu, Fayyaz Ahamed, Akash Velu, Amanda Schwartz, and Ruzena Bajcsy, "Toward Real-Time Muscle Force Inference and Device Control via Optical-Flow-Tracked Muscle Deformation," in IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE), IEEE, 2021.
This README primarily describes the methods needed to replicate the data collection procedure used in the publication above. The code and documentation are provided as-is; however, we invite anyone who wishes to adapt and use it under a Creative Commons Attribution 4.0 International License.
To download all modules and scripts, clone this repository via
```
git clone https://github.com/lhallock/us-streaming.git
```
and navigate to this branch via
```
git checkout tnsre-2021
```
This code is designed for use with an eZono 4000 ultrasound machine with SSH access, though it can likely be adapted to other platforms.
This code is designed for use alongside our time series visualization and recording code (the `amg_emg_force_control` repository), which enables simultaneous collection of ultrasound, surface electromyography (sEMG), force, and other data. Both repositories must be downloaded into the same directory for this code to run without modification.
To run the code, the following Python modules are required, all of which can be installed via pip: numpy, opencv-python, future, iso8601, PyQt5, PyQt5-sip, pyqtgraph, pyserial, PyYAML, and serial.
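As a quick sanity check (not part of the repository), a short snippet like the following can verify that the required modules are importable before running the pipeline. Note that import names differ from pip package names for some packages (e.g., `opencv-python` is imported as `cv2`, `pyserial` as `serial`, and `PyYAML` as `yaml`):

```python
import importlib.util

def missing_modules(import_names):
    """Return the subset of import_names that cannot be found."""
    return [name for name in import_names
            if importlib.util.find_spec(name) is None]

# Top-level import names corresponding to the pip packages listed above.
required = ["numpy", "cv2", "iso8601", "PyQt5", "pyqtgraph", "serial", "yaml"]

if __name__ == "__main__":
    missing = missing_modules(required)
    if missing:
        print("Missing modules:", ", ".join(missing))
    else:
        print("All required modules found.")
```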
This section describes the file structure and code necessary to run ultrasound tracking. Two main scripts are included. The first, `start_process.py`, launches the graphing code from the streaming repository as a separate process and runs the ultrasound tracking code in its own process. The second, `ultrasound_tracker.py`, starts one thread that receives ultrasound images from the eZono and another that runs optical flow tracking on two user-selected areas of the muscle to determine muscle thickness; it then saves these thickness values and the received ultrasound images and sends the thickness values to the graphing process.
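To illustrate the thickness computation described above (a simplified sketch, not the repository's exact implementation): if optical flow yields two sets of tracked (x, y) points, one on the superficial muscle boundary and one on the deep boundary, thickness can be estimated per frame as the vertical distance between the mean y-coordinates of the two point sets:

```python
def estimate_thickness(top_points, bottom_points):
    """Estimate muscle thickness (in pixels) as the vertical distance
    between the mean y-coordinate of points tracked on the top
    (superficial) boundary and points tracked on the bottom (deep)
    boundary. Each argument is a list of (x, y) tuples, e.g. as
    returned per frame by an optical flow tracker such as
    cv2.calcOpticalFlowPyrLK.
    """
    mean_top_y = sum(y for _, y in top_points) / len(top_points)
    mean_bottom_y = sum(y for _, y in bottom_points) / len(bottom_points)
    return abs(mean_bottom_y - mean_top_y)

# Hypothetical tracked points: top boundary near y=100, bottom near y=160.
top = [(x, 100.0) for x in range(10)]
bottom = [(x, 160.0) for x in range(10)]
print(estimate_thickness(top, bottom))  # → 60.0
```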
The us-streaming and amg_emg_force_control repositories should be arranged as follows, with empty files and folders manually created as listed:
```
├── amg_emg_force_control   # visualization/recording repository linked above
├── us-streaming            # this repository
│   ├── images_filtered     # empty folder
│   ├── images_raw          # empty folder
│   ├── thickness.txt       # empty file
│   ├── ...
```
Note that the names `images_filtered`, `images_raw`, and `thickness.txt` are arbitrary and can be modified, as long as they're also changed in the commands below.
Next, set up a Python virtual environment in a new directory by running
```
python -m venv DIR_NAME
```
then navigate to the `amg_emg_force_control` repo and run
```
pip install -r requirements.txt
pip install -e .
```
Steps:
- Go to the us-streaming folder
- Inside us-streaming, create two folders called images_raw and images_filtered
- In a terminal, go to us-streaming and type
```
python start_process.py <trial_num> <thickness_file> <images_folders_prefix>
```
specifying the above command line arguments as follows:
- `trial_num`: which trial you want to run
  - 0: Graphs/records Ultrasound, EMG, and Force
  - 1: Records Ultrasound, EMG, and Force, but only graphs Force
  - 2: Records Ultrasound, EMG, and Force, but only graphs Ultrasound
  - 3: Records Ultrasound, EMG, and Force, but only graphs EMG
  - 4: Records and graphs only Ultrasound
- `thickness_file`: filename to save ultrasound-tracked muscle thickness to
- `images_folders_prefix`: the prefix for the names of the folders to save the ultrasound images to (images will be saved to the two folders `[images_folders_prefix]_raw` and `[images_folders_prefix]_filtered`). Make sure these folders have been created before running this command.
For example, for trial 4, you can run:
```
python start_process.py 4 thickness.txt images
```
- SSH into the ultrasound machine and type `b-mode-compounded-data-out <YOUR_COMPUTER_IP> 19001`, replacing `<YOUR_COMPUTER_IP>` with the IP address of the computer you are running this code on.
- On the displayed image, select 10 dots along the top of the muscle and 10 dots along the bottom
- Enter a filename you want to save the recording to. Wait until the green line in the graph reaches the end, then press record.
- Perform min and max calibration
- Click start trajectory 0, and run the trial
- Click stop recording
- Press Ctrl+C in each terminal to exit
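The workflow above involves the tracking process streaming thickness values to the graphing process. One common pattern for this kind of producer/consumer exchange is a multiprocessing queue; the sketch below is a generic, hedged illustration of that pattern (the `tracker` and `grapher` functions here are hypothetical stand-ins, and the repository's actual inter-process mechanism may differ):

```python
import multiprocessing as mp

def tracker(queue, values):
    """Stand-in for the tracking process: pushes thickness values."""
    for v in values:
        queue.put(v)
    queue.put(None)  # sentinel: no more data

def grapher(queue, out):
    """Stand-in for the graphing process: consumes thickness values."""
    while True:
        v = queue.get()
        if v is None:
            break
        out.append(v)

if __name__ == "__main__":
    q = mp.Queue()
    received = []
    p = mp.Process(target=tracker, args=(q, [10.5, 10.7, 10.6]))
    p.start()
    grapher(q, received)
    p.join()
    print(received)  # → [10.5, 10.7, 10.6]
```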
The images will be saved in `us-streaming/images_raw` and `us-streaming/images_filtered`. The thickness values will be saved in `us-streaming/thickness.txt`. The recorded graph will be saved in /data/.
The procedure above generates JPEG images and corresponding time series text files that are accessible via any standard image viewer and text editor, respectively. The generated Python *.p archive files, which contain time series data for all streams and associated metadata, can be accessed via our corresponding analysis repository.
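Assuming the `*.p` archives are standard Python pickle files (a reasonable guess given the extension, though their exact structure is defined by the recording and analysis code), they can be opened with a generic loader like the following; this is a sketch, not the analysis repository's own loader:

```python
import pickle

def load_archive(path):
    """Load a pickled time series archive (*.p) and return its contents.

    The structure of the returned object (e.g., a dict of data streams
    and metadata) is determined by the recording code; inspect it
    interactively after loading.
    """
    with open(path, "rb") as f:
        return pickle.load(f)

# Usage (hypothetical filename):
#   data = load_archive("trial0.p")
#   print(type(data))
```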
If you're interested in contributing or collaborating, please reach out to lhallock [at] eecs.berkeley.edu.
