ROS workspace for the UM-Driverless kart software stack.
Assumes Ubuntu 22.04 with ROS 2 Humble already installed and sourced.
```bash
./scripts/install_deps.sh
source /opt/ros/humble/setup.bash
colcon build
source install/setup.bash
```

This runs just the teleop processing node; you can publish fake /joy input:
```bash
source /opt/ros/humble/setup.bash
source install/setup.bash
ros2 run joy_to_cmd_vel joy_to_cmd_vel --ros-args --params-file src/kart_bringup/config/teleop_params.yaml
```

In a second terminal, watch the output and publish fake joystick input:

```bash
ros2 topic echo /actuation_cmd
ros2 topic pub /joy sensor_msgs/msg/Joy "{axes: [0.0, 0.0, 0.0, 1.0, -1.0], buttons: [0,0,0,0,0,1]}"
```
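For orientation, here is a minimal sketch of what a joy_to_cmd_vel-style node could look like. This is not the package's actual implementation: the message type on /actuation_cmd (geometry_msgs/Twist here), the axis indices, and the parameter names are all assumptions standing in for whatever teleop_params.yaml really configures.

```python
# Minimal sketch, NOT the actual joy_to_cmd_vel implementation.
# ASSUMPTIONS: /actuation_cmd carries geometry_msgs/Twist, and the
# axis indices/scales below mirror hypothetical teleop_params.yaml keys.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist


class JoyToCmdVel(Node):
    def __init__(self):
        super().__init__('joy_to_cmd_vel')
        # Parameter names are hypothetical; the real ones live in teleop_params.yaml.
        self.declare_parameter('throttle_axis', 3)
        self.declare_parameter('steering_axis', 0)
        self.declare_parameter('throttle_scale', 1.0)
        self.declare_parameter('steering_scale', 1.0)
        self.pub = self.create_publisher(Twist, '/actuation_cmd', 10)
        self.sub = self.create_subscription(Joy, '/joy', self.on_joy, 10)

    def on_joy(self, msg: Joy):
        cmd = Twist()
        t_axis = self.get_parameter('throttle_axis').value
        s_axis = self.get_parameter('steering_axis').value
        if len(msg.axes) > max(t_axis, s_axis):
            cmd.linear.x = msg.axes[t_axis] * self.get_parameter('throttle_scale').value
            cmd.angular.z = msg.axes[s_axis] * self.get_parameter('steering_scale').value
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(JoyToCmdVel())


if __name__ == '__main__':
    main()
```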
Requires a joystick at /dev/input/js0 and the kart microcontroller on /dev/ttyTHS1:

```bash
ros2 launch kart_bringup teleop_launch.py
```

- Kart Docs: https://github.com/UM-Driverless/kart_docs
- Kart Docs site: https://um-driverless.github.io/kart_docs/
- ROS 2 installation: https://docs.ros.org/en/rolling/Installation.html
- ROS 2 Humble docs: https://docs.ros.org/en/humble/
Test media is pulled from https://github.com/UM-Driverless/driverless and stored locally at test_data/driverless_test_media.
The YOLO weights are pulled from https://github.com/UM-Driverless/driverless and stored locally at models/perception/yolo/best_adri.pt.
Run YOLO on a test image or video and save annotated output:

```bash
python3 scripts/run_yolo_on_media.py \
  --source test_data/driverless_test_media/cones_test.png \
  --weights models/perception/yolo/best_adri.pt \
  --output outputs/yolo
```

The first run downloads the YOLOv5 code via Torch Hub and caches it locally.
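For reference, loading custom YOLOv5 weights through Torch Hub looks roughly like the sketch below; this is not necessarily scripts/run_yolo_on_media.py line-for-line, but the paths match the command above.

```python
# Sketch of the Torch Hub loading path, not the script's exact code.
import torch

# The first call downloads and caches the ultralytics/yolov5 repo,
# then loads the custom weights on top of it.
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='models/perception/yolo/best_adri.pt')

results = model('test_data/driverless_test_media/cones_test.png')
results.print()                        # detection summary to stdout
results.save(save_dir='outputs/yolo')  # writes annotated image(s)
```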
Launch the image source and YOLO detector nodes on test media:

```bash
source /opt/ros/humble/setup.bash
colcon build --packages-select kart_perception
source install/setup.bash
ros2 launch kart_perception perception_test.launch.py \
  source:=test_data/driverless_test_media/cones_test.png \
  weights:=models/perception/yolo/best_adri.pt
```
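A launch file that accepts these source:= and weights:= arguments could be sketched as below; the executable names image_source_node and yolo_detector_node are hypothetical placeholders, not the package's actual entry points.

```python
# Sketch of a launch file accepting source:= and weights:= arguments.
# Executable names are hypothetical placeholders, not kart_perception's
# actual entry points.
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        DeclareLaunchArgument('source', description='Image or video file to publish'),
        DeclareLaunchArgument('weights', description='YOLO weights file'),
        # Publishes frames from the test media file.
        Node(
            package='kart_perception',
            executable='image_source_node',  # hypothetical name
            parameters=[{'source': LaunchConfiguration('source')}],
        ),
        # Runs YOLO detection on the incoming frames.
        Node(
            package='kart_perception',
            executable='yolo_detector_node',  # hypothetical name
            parameters=[{'weights': LaunchConfiguration('weights')}],
        ),
    ])
```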