Introducing the Spacecraft With Masks (SWiM) Dataset for Segmentation of Unknown Spacecraft for In-Space Inspection
This collaborative project between NASA and the Rice University D2K Lab aims to create a large spacecraft image dataset with segmentation masks, along with benchmark performance for in-space inspection of spacecraft. The focus is to build a general-purpose instance segmentation model that can accurately segment spacecraft. We provide benchmark performance using YOLOv8-nano and YOLOv11-nano (You Only Look Once) models, optimized to run on resource-constrained hardware. The dataset and performance benchmark are expected to enhance NASA's capability for autonomous inspections, improving spacecraft navigation, pose estimation, and structural analysis under the various visual distortions found in space imagery.
If you use our dataset or code, please cite our paper:
Jeffrey Joan Sam, Janhavi Sathe, Nikhil Chigali, Naman Gupta, Radhey Ruparel, Yicheng Jiang, Janmajay Singh, James W. Berck, and Arko Barman, "A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers", arXiv preprint arXiv:2507.10775, 2025.
All the trained models can be found here. Download the appropriate .pt file and save it under the models/ directory.
All the dataset versions can be found here. Download the version you need and save it under the data/ directory.
A README attached to the dataset describes each dataset version, detailing the makeup of the data and the splits.
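
As a quick sanity check that the downloads are in place, a checkpoint can be loaded and run on a single image with the ultralytics API (a minimal sketch; the checkpoint and image file names below are placeholders for whatever you downloaded):

```python
import cv2
from ultralytics import YOLO

# Placeholder file names -- substitute the .pt checkpoint and image you downloaded.
model = YOLO("models/yolov8n-seg-swim.pt")

# Run segmentation; results[0].masks holds the instance masks and
# results[0].boxes the predicted bounding boxes.
results = model.predict("data/example.png")

# Render the predictions onto the image and save an annotated copy.
cv2.imwrite("example_segmented.png", results[0].plot())
```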
Dependencies for training and validation:

- numpy==1.23.5
- opencv-python==4.10.0.84
- pillow==10.4.0
- matplotlib==3.7.2
- torch==2.4.1
- torchvision==0.19.1
- ml-collections==0.1.1
- pybboxes==0.1.6
- ultralytics==8.0.238
- transformers==4.45.1
- loguru==0.4.6
- wandb==0.18.5
- python-dotenv==1.0.1

Dependencies for ONNX inference:

- numpy==1.23.5
- opencv-python==4.10.0.84
- pillow==10.4.0
- torch==2.4.1
- torchvision==0.19.1
- ultralytics==8.0.238
- onnx==1.17.0
- onnxruntime==1.19.2
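
The ONNX dependencies above are for running the exported model. If you need to produce the .onnx file yourself from a downloaded .pt checkpoint, the standard ultralytics export can be used (a sketch; the checkpoint name is a placeholder, and the export settings used for the released ONNX model may differ):

```python
from ultralytics import YOLO

# Placeholder checkpoint name -- use the .pt file saved under models/.
model = YOLO("models/yolov8n-seg-swim.pt")

# Writes an .onnx file next to the .pt checkpoint (e.g. models/yolov8n-seg-swim.onnx).
model.export(format="onnx")
```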
- Create a Python venv environment: `python -m venv .venv`
- Activate the environment:
  - On Windows: `.\.venv\Scripts\activate`
  - On macOS/Linux: `source .venv/bin/activate`
- Install the requirements: `pip install -r requirements.txt`

To use onnx_pipeline.py, run the following command:

```
python src/onnx_pipeline.py --model best.onnx --input input_image.png --output output_segmented_image.jpg --num_threads 3 --num_streams 1
```
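onnx_pipeline.py is part of this repository, so its internals are authoritative; as a rough illustration only, the `--num_threads` and `--num_streams` flags typically map onto ONNX Runtime session options along these lines (a hedged sketch, assuming that mapping; the script itself may configure the session differently):

```python
import onnxruntime as ort

# Values mirroring --num_threads 3 --num_streams 1 in the command above.
num_threads, num_streams = 3, 1

opts = ort.SessionOptions()
# Threads used to parallelize work inside a single operator.
opts.intra_op_num_threads = num_threads
# Number of operator "streams" run concurrently; >1 requires parallel execution mode.
opts.inter_op_num_threads = num_streams
opts.execution_mode = (
    ort.ExecutionMode.ORT_PARALLEL if num_streams > 1 else ort.ExecutionMode.ORT_SEQUENTIAL
)

session = ort.InferenceSession("best.onnx", sess_options=opts)
print([inp.name for inp in session.get_inputs()])
```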
The baseline model was trained on the following configuration:

| Model | YOLOv8-nano |
|---|---|
| GPU | NVIDIA GeForce RTX 4060 Laptop GPU |
| GPU Memory | 8GB |
| CPU | 13th Gen Intel(R) Core(TM) i9-13900H (2.60 GHz) |
| RAM | 16.0 GB |
Other versions of the model were trained on the cloud GPU provider modal.com.
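
The training entry point is modeling/train.py (see the project structure below). For orientation, a minimal ultralytics training call equivalent in spirit looks roughly like this (a sketch; the dataset YAML path, epochs, image size, and batch size are illustrative placeholders, not the exact benchmark settings):

```python
from ultralytics import YOLO

# Start from the pretrained YOLOv8-nano segmentation weights.
model = YOLO("yolov8n-seg.pt")

# Dataset YAML and hyperparameters below are illustrative placeholders only.
model.train(
    data="data/dataset-v2/data.yaml",  # assumed path to a YAML like the one create_yaml.py produces
    epochs=100,
    imgsz=640,
    batch=16,
)
```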
Project structure:

```
/NASA_segmentation_F24/
├── data_wrangling/
│ ├── generate_posebowl_masks.py
│ ├── binary_masks_to_yolo_polys.py
│ ├── resize_and_merge_classes_spacecrafts.py
│ └── create_yaml.py
├── modeling/
│ └── train.py
├── testing/
│ ├── benchmark.py
│ └── validate.py
├── utils/
│ └── config.py
├── data/
│ ├── dataset-v1/
│ └── dataset-v2/
├── configs/
│ └── config.yaml
├── requirements.txt
├── LICENSE
├── CONTRIBUTING
├── .env.example
└── README.md
```

After running the YOLOv8 segmentation model, you can expect to receive segmentation masks along with various logs and performance metrics that demonstrate the model's efficiency in detecting and masking spacecraft components in real time.
Example Input
Example Output
- Segmentation Masks: These masks outline the spacecraft components identified in the images.
- Bounding Boxes: Predicted bounding boxes for detected spacecraft objects.
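
For example, with the ultralytics API the masks and boxes listed above can be read off a prediction result roughly as follows (a sketch; the checkpoint name is a placeholder and the shapes assume a single input image):

```python
from ultralytics import YOLO

model = YOLO("models/best.pt")  # placeholder checkpoint name
result = model.predict("input_image.png")[0]

# Segmentation masks: one binary mask per detected instance, shaped (N, H, W).
if result.masks is not None:
    print("masks:", tuple(result.masks.data.shape))

# Bounding boxes in (x1, y1, x2, y2) pixel coordinates, plus confidences.
print("boxes:", tuple(result.boxes.xyxy.shape))
print("confidences:", tuple(result.boxes.conf.shape))
```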
To enable logging of validation results to Weights & Biases (WandB), you'll need to set your WandB API key in a .env file. Follow these steps to configure it:
- Create a `.env` file in the root directory of your project based on the provided example: `cp .env.example .env`
- Open the `.env` file and add your WandB API key: `WAND_API_KEY = "YOUR_API_KEY"`. Replace `YOUR_API_KEY` with your actual WandB API key, which you can obtain from your Weights & Biases account.
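
Since both python-dotenv and wandb are in the requirements, the key is presumably picked up at runtime along these lines (a sketch, assuming the variable name shown above; utils/config.py may handle this differently, and the project name here is a placeholder):

```python
import os

import wandb
from dotenv import load_dotenv

# Load variables from the .env file into the process environment.
load_dotenv()

# Variable name assumed to match the .env entry above.
wandb.login(key=os.getenv("WAND_API_KEY"))

run = wandb.init(project="spacecraft-segmentation")  # placeholder project name
```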
This project is licensed under the Apache License. See the LICENSE file for details.