TeamSOBITS/sam3_ros

EN | JA


SAM3 ROS

Table of Contents
  1. Overview
  2. Setup
  3. Usage
  4. Parameters
  5. Demo
  6. References

Overview

sam3_ros is a wrapper package that enables the use of Meta’s Segment Anything Model 3 (SAM3) in ROS 2.

This package performs promptable segmentation using text prompts, providing the following features in a ROS 2 environment.

Main Features:

  • Object segmentation via text instructions
  • Multi-class and multi-instance support
  • Bounding box output (Detection2D)
  • Detection with segmentation masks (Detection2DWithMask)
  • Visualization image output

(back to top)

Setup

This section explains how to set up this repository.

(back to top)

Environment

| System | Version |
| --- | --- |
| Ubuntu | 22.04 (Jammy Jellyfish) |
| ROS | Humble Hawksbill |
| Python | 3.10~ |

Installation

  1. Move to your ROS 2 src directory.
    cd ~/colcon_ws/src/
  2. Clone this repository.
    git clone -b humble-devel https://github.com/TeamSOBITS/sam3_ros.git
  3. Navigate to this repository.
    cd sam3_ros
  4. Install dependencies.
    bash install.sh
  5. Build the package.
    cd ~/colcon_ws/
    colcon build --symlink-install

(back to top)

Download SAM 3 Weight File

Due to licensing restrictions, the SAM 3 weight file (sam3.pt) is not downloaded automatically. Therefore, you must download it manually in advance.

  1. Visit the SAM 3 model page on Hugging Face and request access to the model weights.

  2. After approval, download sam3.pt.

  3. Place the downloaded sam3.pt file into the weights directory of this package.

(back to top)

Usage

  1. Launch your camera and modify image_topic_name in sam3.launch.py to match your camera topic.

    Example:

    default_value="/camera/color/image_raw"              # orbbec_series
  2. If using an RGB-D camera, also modify point_cloud_topic in sam3.launch.py. Example:

    default_value="/camera/depth_registered/points"     # orbbec_series
  3. Place your prepared weight file into the weights directory.

  4. Update weight_file in sam3.launch.py.

    default_value=os.path.join(get_package_share_directory("sam3_ros"), "weights", "sam3.pt")
  5. Rebuild the package.

    cd ~/colcon_ws/
    colcon build --symlink-install
  6. Launch SAM 3.

    ros2 launch sam3_ros sam3.launch.py
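
    The launch arguments edited in the steps above might be declared roughly as follows. This is a sketch, not the package's actual sam3.launch.py; the argument names and default values are taken from the snippets above, while the surrounding structure is an assumption based on standard ROS 2 launch files.

    ```python
    import os

    from ament_index_python.packages import get_package_share_directory
    from launch import LaunchDescription
    from launch.actions import DeclareLaunchArgument


    def generate_launch_description():
        return LaunchDescription([
            # Camera topics -- change the defaults to match your driver
            DeclareLaunchArgument(
                "image_topic_name",
                default_value="/camera/color/image_raw"),          # orbbec_series
            DeclareLaunchArgument(
                "point_cloud_topic",
                default_value="/camera/depth_registered/points"),  # orbbec_series
            # Weight file placed in the package's weights directory
            DeclareLaunchArgument(
                "weight_file",
                default_value=os.path.join(
                    get_package_share_directory("sam3_ros"),
                    "weights", "sam3.pt")),
        ])
    ```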

(back to top)

Parameters

The following parameters can be configured in sam3.launch.py.

| Parameter | Description | Default |
| --- | --- | --- |
| weight_file | SAM3 weight file | sam3.pt |
| prompt_text | Target segmentation classes (string array) | ["object"] |
| threshold | Mask confidence threshold | 0.75 |
| half | Enable FP16 inference | True |
| image_show | Enable Ultralytics visualization | False |
| execute_default | Enable inference on startup | True |
| use_3d | Enable 3D detection | False |
| cluster_tolerance | Distance threshold for grouping points into a single object; larger values widen the search range and slow processing | 0.01 |
| min_clusterSize | Minimum number of points for a valid object (smaller clusters are treated as noise) | 100 |
| max_clusterSize | Maximum number of points allowed for one object (larger clusters are rejected as background, e.g., floor) | 20000 |
| noise_point_cloud_range | Amount of point cloud trimming in x/y/z to remove background surfaces (floor/walls); excessive values may remove object points | 0.01 |
| fast_shot | Enable fast_shot | true |
| enable_id | Append IDs to detected labels (e.g., apple_01) | false |
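The package's actual 3D clustering code is not shown here, but the interplay of `cluster_tolerance`, `min_clusterSize`, and `max_clusterSize` can be illustrated with a plain-Python sketch of PCL-style Euclidean cluster extraction (function name and structure are hypothetical):

```python
import math


def euclidean_cluster(points, tolerance=0.01, min_size=100, max_size=20000):
    """Greedy single-link clustering sketch: points closer than
    `tolerance` are grouped; clusters outside [min_size, max_size]
    are discarded as noise or background."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            # Neighbours within the tolerance radius join the cluster
            neighbours = [j for j in unvisited
                          if math.dist(points[i], points[j]) <= tolerance]
            for j in neighbours:
                unvisited.discard(j)
            cluster.extend(neighbours)
            frontier.extend(neighbours)
        if min_size <= len(cluster) <= max_size:
            clusters.append(cluster)   # valid object
        # smaller clusters -> noise, larger -> background; both dropped
    return clusters
```

With `tolerance=0.01`, two points 5 mm apart fall into the same cluster, while a point 1 m away starts a new one; raising `tolerance` merges more points per object at the cost of more neighbour searches.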

(back to top)

Publications

| Topic | Type | Description |
| --- | --- | --- |
| /sam3_ros/object_boxes | Detection2DArray | Bounding boxes only |
| /sam3_ros/object_detections_with_mask | Detection2DWithMaskArray | Detection results with masks |
| /sam3_ros/segmented_image | sensor_msgs/Image | Visualization image |
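A minimal rclpy subscriber for these topics might look like the sketch below. Only the standard-typed topics are shown, since Detection2DWithMaskArray is a package-specific message; the node name is hypothetical, and the topic names come from the table above.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray


class Sam3Listener(Node):
    def __init__(self):
        super().__init__("sam3_listener")
        # Bounding boxes published by sam3_ros
        self.create_subscription(
            Detection2DArray, "/sam3_ros/object_boxes", self.on_boxes, 10)
        # Visualization image published by sam3_ros
        self.create_subscription(
            Image, "/sam3_ros/segmented_image", self.on_image, 10)

    def on_boxes(self, msg):
        self.get_logger().info(f"{len(msg.detections)} detections")

    def on_image(self, msg):
        self.get_logger().info(f"visualization image {msg.width}x{msg.height}")


def main():
    rclpy.init()
    rclpy.spin(Sam3Listener())


if __name__ == "__main__":
    main()
```

Running this node alongside `ros2 launch sam3_ros sam3.launch.py` logs the detection count and visualization image size as results arrive.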

(back to top)

Demo

Demo animations (see the repository page): Object Detection | Object Recognition | Instance Segmentation

(back to top)

References
