Any copter frame can be used. In this case, we use a Tarot 650 frame equipped with a Pixhawk Cube and an NVIDIA Jetson TX2 companion computer. The TX2 is mounted on an Auvidea J121 carrier board.
The ZED 3D stereo camera is mounted on a simple 1-axis [gimbal](https://github.com/mtbsteve/redtail/tree/master/tools/platforms/ZED_Gimbal); the print files for the 3D-printed parts can be found under that link. One axis is sufficient to level the camera around the pitch axis. We do not need roll or yaw stabilization since we won't create cinema-grade footage anyway, and we want to keep the weight as low as possible.

The NVIDIA Jetson platform runs most of the components, such as YOLO, the ZED inference, the controller, and video streaming. Install JetPack 4.4.x using the NVIDIA SDK Manager. If you are using the Auvidea J120 breakout board, you need to apply some kernel patches as described on the Auvidea website. Install the ZED SDK and Python API from the ZED homepage, then install ROS Melodic (full installation), Python 3, cv_bridge, and Darknet/YOLO (see the README for the full list) as described in the respective installation instructions.
Important: you need to have apsync installed in order to have all drone communication set up correctly. Code and instructions can be found here
The Arducopter wiki describes how to connect the Jetson with the Pixhawk here and how to configure Arducopter to communicate with the TX2.
Do not install the TX2 image file referenced in the Arducopter wiki, since it is based on an outdated JetPack version and will not work with this code.
apsync creates a WiFi access point on the TX2 with both SSID and password set to ardupilot. Instead of using the bulky WiFi antennas supplied with the TX2 development board, you may attach a small WiFi antenna to your Jetson board, such as a Taoglas FXP522.A.07.A.001 dual-channel 3 dBi antenna. It is very small, can lie flat on part of your drone's surface, and is still powerful enough to connect to the Jetson module over WiFi within line of sight.
Connect to this WiFi access point from a computer you can SSH into the Jetson module from. apsync creates an apsync user on the Jetson and sets it as the default login, asking you to set the password for this account during installation. It also assigns the module the static IP address 10.0.1.128. You can either use the IP address to SSH into the module or add apsync 10.0.1.128 to /etc/hosts and use the hostname for SSH:
$ ssh apsync@10.0.1.128
Finally, set the TX2 to max power mode with sudo nvpmodel -m 0 and then run jetson_clocks.
In order to communicate with Mavlink through mavros, make sure you have opened a UDP port in mavlink-router. Edit ~/start_mavlink-router/mavlink-router.conf and add the following lines at the end:
[UdpEndpoint to_ros]
Mode = Normal
Address = 127.0.0.1
Port = 14855
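If you prefer to script this configuration step, a minimal sketch could look like the following. The section contents and default port match the snippet above; the helper function name is my own and not part of mavlink-router.

```python
import os

def append_udp_endpoint(conf_path, name="to_ros",
                        address="127.0.0.1", port=14855):
    """Append a [UdpEndpoint] section to a mavlink-router config file,
    unless a section with the same name is already present.
    Returns True if the section was added, False if it already existed."""
    section = (f"[UdpEndpoint {name}]\n"
               f"Mode = Normal\n"
               f"Address = {address}\n"
               f"Port = {port}\n")
    if os.path.exists(conf_path):
        with open(conf_path) as f:
            if f"[UdpEndpoint {name}]" in f.read():
                return False  # endpoint already configured
    with open(conf_path, "a") as f:
        f.write("\n" + section)
    return True
```

Run it once against `~/start_mavlink-router/mavlink-router.conf`; a second run is a no-op, so it is safe to call from a setup script.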
For the ROS installation, make sure you have set it up for Python 3. See how-to-setup-ros-with-python-3 for how to do this for ROS Melodic and cv_bridge.
Next, create a catkin workspace and navigate to the src directory:
cd $CATKIN_WS/src
git clone --recursive https://github.com/mtbsteve/pegasus.git
mv pegasus zed_yolo
cd ..
catkin build zed_yolo --cmake-args -DCMAKE_BUILD_TYPE=Release
This project provides two ROS nodes in the zed_yolo package: zed_yolo_node.py and zed_ros_to_mavlink.py. The first node generates a point cloud and the left camera image from the ZED camera's stereo images. It runs the Darknet-YOLO detection and creates the following ROS topics:
- /darknet_ros/color_image — the left camera image with the distance to the nearest obstacle shown in the center of the image
- /darknet_ros/detection_image — the YOLO view with the detected objects marked with bounding boxes and average distance information from the camera
- /darknet_ros/distance_array — the information needed by the Mavlink OBSTACLE_DISTANCE function, packaged into a ROS LaserScan message
- /darknet_ros/nine_sector_image — the ZED depth view divided into 9 sectors, including the median depth of each sector
- /darknet_ros/9sectorarray — the information needed by the Mavlink OBSTACLE_DISTANCE_3D function, packaged into a ROS PointCloud message
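As a rough illustration of how a distance array like /darknet_ros/distance_array is organized (the exact sector count and binning in the node may differ; the numbers here are assumptions): the camera's horizontal field of view is split into equal angular sectors, and the closest obstacle in each sector is reported, which is the shape the Mavlink OBSTACLE_DISTANCE message expects.

```python
def min_depth_per_sector(depths, num_sectors=9):
    """Split a row of depth readings (meters, left to right across the
    field of view) into num_sectors equal bins and keep the minimum
    (closest obstacle) in each bin."""
    n = len(depths)
    sectors = []
    for s in range(num_sectors):
        lo = s * n // num_sectors
        hi = (s + 1) * n // num_sectors
        sectors.append(min(depths[lo:hi]))
    return sectors
```

For example, `min_depth_per_sector([5.0, 4.0, 9.0, 2.5, 8.0, 8.0, 7.0, 1.0, 6.0], 3)` reduces nine readings to three sector minima, `[4.0, 2.5, 1.0]`.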
The image topics can be streamed with the ROS to RTSP node to any ground station capable of playing an RTSP video stream, e.g. QGroundControl, Mission Planner, or Solex.
The zed_ros_to_mavlink.py node reads from the /darknet_ros/distance_array and /darknet_ros/9sectorarray topics and passes the information on to the flight controller.
Adapt the following Darknet/YOLO variables in the zed_yolo_node.py file according to your Darknet installation:
lib = CDLL("/home/apsync/GitHub/darknet/libdarknet.so", RTLD_GLOBAL)
darknet_path="/home/apsync/GitHub/darknet/"
config_path = darknet_path + "cfg/yolov4-tiny.cfg"
weight_path = darknet_path + "yolov4-tiny.weights"
meta_path = darknet_path + "cfg/coco.data"
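Before launching the node, it can be useful to verify that these paths actually exist; a missing weights file is a common cause of a silent startup failure. The file names below follow the snippet above; the helper function itself is a convenience sketch, not part of the project.

```python
import os

def check_darknet_paths(darknet_path):
    """Verify the files referenced in zed_yolo_node.py exist before
    launching the node; returns a list of missing paths (empty = OK)."""
    required = [
        os.path.join(darknet_path, "libdarknet.so"),
        os.path.join(darknet_path, "cfg/yolov4-tiny.cfg"),
        os.path.join(darknet_path, "yolov4-tiny.weights"),
        os.path.join(darknet_path, "cfg/coco.data"),
    ]
    return [p for p in required if not os.path.isfile(p)]
```

Call `check_darknet_paths("/home/apsync/GitHub/darknet/")` and fix anything it reports before starting the nodes.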
To start:
# start the ROS master
roscore &
# start the ROS to RTSP streaming node
roslaunch ros_rtsp rtsp_streams.launch
# launch the YOLO detection and image node
rosrun zed_yolo zed_yolo_node.py
# launch the Mavlink communication node
rosrun zed_yolo zed_ros_to_mavlink.py
In order to stream video to your GCS, you may install a ROS image node streamer. I am using the ROS to RTSP streamer from here:
cd $CATKIN_WS/src
git clone https://github.com/CircusMonkey/ros_rtsp.git
catkin build ros_rtsp
Before you build the project, replace the image2rtsp.cpp source file with the version included in the redtail/tools/ros2rtsp_patches directory. This patch contains the required changes to the GStreamer pipeline so that the video stream displays correctly on all RTSP players.
Edit the configuration file located at $CATKIN_WS/src/ros_rtsp/config/stream_setup.yaml and enter the following mountpoints at the end of the file:
stream-zed_camera:
  type: topic
  source: /darknet_ros/color_image
  mountpoint: /zedimage
  caps: video/x-raw,framerate=10/1,width=1280,height=720
  bitrate: 300

stream-zed_ros_depth:
  type: topic
  source: /darknet_ros/nine_sector_image
  mountpoint: /zed9sectors
  caps: video/x-raw,framerate=10/1,width=1280,height=720
  bitrate: 300

stream-zed_ros_yolo:
  type: topic
  source: /darknet_ros/detection_image
  mountpoint: /zedyolo
  caps: video/x-raw,framerate=10/1,width=1280,height=720
  bitrate: 300
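Since the three stanzas differ only in name, source topic, and mountpoint, they can also be generated. The field names and values below come from the stream_setup.yaml entries above; the helper function is a sketch of my own, not part of ros_rtsp.

```python
def make_stream_entry(name, source, mountpoint,
                      width=1280, height=720, fps=10, bitrate=300):
    """Render one ros_rtsp stream stanza in the stream_setup.yaml layout."""
    return (
        f"stream-{name}:\n"
        f"  type: topic\n"
        f"  source: {source}\n"
        f"  mountpoint: {mountpoint}\n"
        f"  caps: video/x-raw,framerate={fps}/1,width={width},height={height}\n"
        f"  bitrate: {bitrate}\n"
    )

# Generate all three stanzas used in this project:
streams = [
    ("zed_camera",    "/darknet_ros/color_image",       "/zedimage"),
    ("zed_ros_depth", "/darknet_ros/nine_sector_image", "/zed9sectors"),
    ("zed_ros_yolo",  "/darknet_ros/detection_image",   "/zedyolo"),
]
yaml_text = "\n".join(make_stream_entry(*s) for s in streams)
```

Append `yaml_text` to the end of stream_setup.yaml, or keep editing the file by hand; the result is the same.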
Set the following RTSP pipeline in your ground control station (e.g. QGC, Solex, or Mission Planner) on your PC or tablet:
rtsp://10.0.1.128:8554/<your mountpoint>
For the mountpoint, enter the name you have defined in the stream_setup.yaml file above.
The Pegasus code supports both Simple Object Avoidance in Loiter and AltHold flight modes, and BendyRuler obstacle avoidance in Auto, RTL, and Guided modes. Depending on the selected flight mode, either Simple Avoidance or BendyRuler is triggered. See also the Arducopter documentation: https://ardupilot.org/copter/docs/common-object-avoidance-landing-page.html
Connect to the autopilot with a ground station (e.g. Mission Planner) and check that the following parameters are set, assuming the Mavlink connectivity between the TX2 and the Pixhawk is operational via apsync:
- PRX_TYPE = 2: for MAVLink
- AVOID_ENABLE = 7: “All” to use all sources of barrier information including “Proximity” sensors
Example of specifics for Loiter and AltHold mode:
- AVOID_MARGIN = 1.5: How many meters from the barrier the vehicle will attempt to stop or try to slide along it
- AVOID_BEHAVE = 0: Whether the vehicle should simply Stop (1) in front of the barrier or Slide (0) around it
- AVOID_DIST_MAX = 1.5: How far from a barrier the vehicle starts leaning away from the barrier in AltHold
- AVOID_ANGLE_MAX = 3000: How far the vehicle will try to lean away from the barrier
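If you want to check these values in a script rather than by hand, the parameter list above can be expressed as a dict and compared against what the flight controller reports (the values come from the list above; reading them from the autopilot, e.g. via pymavlink, is left out of this sketch).

```python
# Expected values from the Simple Avoidance parameter list above.
SIMPLE_AVOIDANCE_PARAMS = {
    "PRX_TYPE": 2,         # proximity data over MAVLink
    "AVOID_ENABLE": 7,     # use all sources of barrier information
    "AVOID_MARGIN": 1.5,   # meters from the barrier to stop/slide
    "AVOID_BEHAVE": 0,     # 0 = Slide, 1 = Stop
    "AVOID_DIST_MAX": 1.5, # meters at which leaning away starts (AltHold)
    "AVOID_ANGLE_MAX": 3000,
}

def diff_params(current, expected=SIMPLE_AVOIDANCE_PARAMS):
    """Return {name: (current, expected)} for every parameter that
    deviates from the expected configuration. `current` would be the
    parameter values read back from the flight controller."""
    return {k: (current.get(k), v) for k, v in expected.items()
            if current.get(k) != v}
```

An empty dict from `diff_params` means the autopilot matches the configuration described above.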
Simple Object Avoidance in AltHold mode:

In Auto, Guided, and RTL modes, Copter 4.0 (and higher) support “BendyRuler” for path planning around obstacles and fences. The BendyRuler algorithm probes around the vehicle in many directions looking for open spaces and then tries to pick the direction that is sufficiently open while also moving the vehicle towards the final destination. Details see here: https://ardupilot.org/copter/docs/common-oa-bendyruler.html
Example configuration for Bendy Ruler:
- OA_TYPE = 1 Enable BendyRuler
- OA_LOOKAHEAD = 15 : the distance (in meters) ahead of the vehicle that should be probed. Obstacles farther away than this will be ignored. 15 m is the maximum reliable distance with the ZED.
- OA_MARGIN_MAX = 2 : the distance (in meters) that the vehicle should stay away from obstacles. 2m is a typical value.
Arducopter 4.1 onwards supports a 3D Object Database. To enable, edit the file zed_ros_to_mavlink.py and set the value for ac_version_41 to True.
# enable only if you are on Arducopter 4.1 or higher
ac_version_41 = True
In Loiter, Guided, RTL, and Auto flight modes, the new 3D object avoidance is used; in AltHold, the old obstacle-distance functionality is supported. For the 3D case, the depth array retrieved from the camera is divided into a 3x3 grid. The smallest depth in each grid cell is stored and converted into a body-frame XYZ vector. This vector is packed into a Mavlink OBSTACLE_DISTANCE_3D message and sent to the flight controller.
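The 3x3-grid step described above can be sketched as follows. This is an illustration of the principle, not the project's actual code: the grid reduction takes the closest point in each cell, and a simple spherical-to-Cartesian conversion turns depth plus viewing angles into a body-frame vector (the azimuth/elevation angles per cell depend on the camera's field of view and are assumptions here).

```python
import math

def grid_min_depths(depth, rows=3, cols=3):
    """Divide a 2D depth image (list of rows, meters) into a rows x cols
    grid and return the minimum depth of each cell, row-major."""
    h, w = len(depth), len(depth[0])
    mins = []
    for r in range(rows):
        for c in range(cols):
            cell = [depth[y][x]
                    for y in range(r * h // rows, (r + 1) * h // rows)
                    for x in range(c * w // cols, (c + 1) * w // cols)]
            mins.append(min(cell))
    return mins

def to_body_frame(depth_m, az_deg, el_deg):
    """Convert a depth at a given azimuth/elevation (degrees, relative to
    the camera axis) into a body-frame XYZ vector (x forward, y right,
    z down), the shape OBSTACLE_DISTANCE_3D carries."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    x = depth_m * math.cos(el) * math.cos(az)
    y = depth_m * math.cos(el) * math.sin(az)
    z = -depth_m * math.sin(el)
    return (x, y, z)
```

For example, an obstacle 10 m straight ahead (`to_body_frame(10.0, 0.0, 0.0)`) maps to roughly (10, 0, 0) in the body frame.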

The corresponding proximity view in MissionPlanner:

This implementation includes YOLO-based object detection and identification using darknet-yolov4 and the point cloud generated by the ZED stereo camera. The YOLO detection view can be streamed to any RTSP-capable GCS such as Solex or Mission Planner via the zedyolo RTSP streaming mountpoint as explained above.
If a person is detected, the bounding box(es) of the detected person(s) are also displayed in the 9-sector depth image. The 3D coordinate of the center of each detected person is sent to the Arducopter OBSTACLE_DISTANCE_3D database.
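The center-of-person step can be illustrated as below. This sketch assumes a bounding box in (x, y, w, h) pixel form with a top-left-corner origin and reads the depth at the box center; the project's node may use a different box convention or average over several pixels.

```python
def bbox_center_depth(depth, bbox):
    """Given a 2D depth image (list of rows, meters) and a bounding box
    (x, y, w, h in pixels, x/y = top-left corner), return the pixel
    coordinates of the box center and the depth there -- the point whose
    3D position would be reported to the OBSTACLE_DISTANCE_3D database."""
    x, y, w, h = bbox
    cx, cy = x + w // 2, y + h // 2
    return cx, cy, depth[cy][cx]
```

Combined with a depth-to-body-frame conversion, this yields the single 3D point per detected person that the flight controller receives.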
The YOLO Detection view as seen from the drone:

The detection view displayed on the GCS (Solex):

As ground control software, you may use QGroundControl or Mission Planner.
Once you have set up and tested everything, you may use an Android device as GCS; the benefit is that you do not need to carry a laptop around. As GCS apps, you may use QGroundControl or Solex; both can be found on Google Play. While QGroundControl can be downloaded for free, I strongly recommend Solex, since it provides all the means to embed the various ROS controls to start/stop nodes, control cameras, and handle joystick input. See the Redtail project chapter Usage of SolexCC on how Solex can also be used in this project. The workers are similar, but adapted to the functionality of this project.
You may use Solex as your ground station along with SolexCC to start/stop the different nodes. As a prerequisite, you need to install rosnodejs via npm on the TX2.
Note: there is a known incompatibility between npm and ROS caused by libssl1.0, which is used by both packages: installing ROS automatically uninstalls npm, and vice versa.
Workaround:
sudo apt-get install nodejs-dev node-gyp libssl1.0-dev
sudo apt-get install npm
npm install rosnodejs
# then reinstall ROS (melodic):
sudo apt install ros-melodic-desktop # you need to reinstall all needed ROS packages!
You can find the adapted workers for this project in the solexcc_workers directory: https://github.com/mtbsteve/pegasus/tree/master/solexcc_workers
- Start Panel: start/stop of the ZED wrapper and ros2rtsp video streaming
- Start Panel: start/stop of the mavlink distance messaging
- Control Screen: Switch between video streams for distance measuring and yolo object detection
- Control Screen: Camera controls for video recording and for taking pictures