
Introducing the Spacecraft With Masks (SWiM) Dataset for Segmentation of Unknown Spacecraft for In-space Inspection

Project Description

This collaborative project between NASA and the Rice University D2K Lab aims to create a large spacecraft image dataset with segmentation masks, along with benchmark performance for in-space inspection of spacecraft. The focus is on building a general-purpose instance segmentation model that can accurately segment spacecraft. We provide benchmark performance using You Only Look Once (YOLO) models, specifically YOLOv8 nano and YOLOv11 nano, optimized to run on resource-constrained hardware. The dataset and performance benchmark are expected to enhance NASA's capability for autonomous inspection, improving spacecraft navigation, pose estimation, and structural analysis under the various visual distortions found in space imagery.

If you use our dataset or code, please cite our paper:

Jeffrey Joan Sam, Janhavi Sathe, Nikhil Chigali, Naman Gupta, Radhey Ruparel, Yicheng Jiang, Janmajay Singh, James W. Berck, and Arko Barman, "A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers", arXiv preprint arXiv:2507.10775, 2025.

Model files and Datasets

All the trained models can be found here. Download the appropriate .pt file and save it under the models/ directory.

All the dataset versions can be found here. Download the desired version and save it under the data/ directory.

Information about each dataset version, including the makeup of the data and the splits, is given in the README included with the dataset.

Software dependencies

  • Python 3.11+
  • CUDA v12.1

Training Dependencies

  • numpy==1.23.5
  • opencv-python==4.10.0.84
  • pillow==10.4.0
  • matplotlib==3.7.2
  • torch==2.4.1
  • torchvision==0.19.1
  • ml-collections==0.1.1
  • pybboxes==0.1.6
  • ultralytics==8.0.238
  • transformers==4.45.1
  • loguru==0.4.6
  • wandb==0.18.5
  • python-dotenv==1.0.1
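
The repository's own training entry point is modeling/train.py (see the directory structure below). For orientation only, the following is a minimal sketch of fine-tuning a YOLOv8-nano segmentation model with the pinned ultralytics version; the dataset YAML path, epoch count, and image size are illustrative assumptions, not the project's actual settings.

# Minimal sketch of YOLOv8-nano segmentation fine-tuning with ultralytics.
# The dataset YAML path, epochs, and image size are illustrative assumptions;
# the project's actual configuration lives in configs/config.yaml and
# modeling/train.py.
from ultralytics import YOLO

# Start from the official YOLOv8-nano segmentation checkpoint.
model = YOLO("yolov8n-seg.pt")

# Train on a YOLO-format dataset described by a data YAML file.
model.train(
    data="data/dataset-v2/data.yaml",  # assumed path to the dataset YAML
    epochs=100,                        # illustrative value
    imgsz=640,                         # illustrative value
    device=0,                          # first CUDA GPU
)

# Evaluate on the validation split defined in the data YAML.
metrics = model.val()
print(metrics.seg.map)  # mask mAP50-95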

Inference Dependencies

  • numpy==1.23.5
  • opencv-python==4.10.0.84
  • pillow==10.4.0
  • torch==2.4.1
  • torchvision==0.19.1
  • ultralytics==8.0.238
  • onnx==1.17.0
  • onnxruntime==1.19.2
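
The inference pipeline below consumes an ONNX model (best.onnx). If only a .pt checkpoint is available, one way to produce the ONNX file is the ultralytics export API; this is a sketch under the assumption that the repository does not already ship a pre-exported model or its own export script.

# Sketch: export a trained .pt checkpoint to ONNX with ultralytics.
# The checkpoint path and image size are assumptions.
from ultralytics import YOLO

model = YOLO("models/best.pt")          # assumed location of the trained weights
model.export(format="onnx", imgsz=640)  # writes the .onnx file next to the .pt file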

Setup

  1. Create a Python venv environment

    python -m venv .venv
  2. Activate the environment

    • On Windows:

      .\.venv\Scripts\activate
    • On macOS/Linux:

      source .venv/bin/activate
  3. Install requirements

    pip install -r requirements.txt

Usage

To use onnx_pipeline.py, run the following command:

python src/onnx_pipeline.py --model best.onnx --input input_image.png --output output_segmented_image.jpg --num_threads 3 --num_streams 1
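
Internally, a pipeline like this typically builds an onnxruntime session whose thread count matches --num_threads. The sketch below illustrates that general pattern; it is not the exact contents of src/onnx_pipeline.py, and the input size, normalization, and output handling are assumptions.

# Sketch of the onnxruntime side of an ONNX segmentation pipeline.
# Preprocessing details (640x640 input, RGB, 0-1 scaling) are assumptions,
# not necessarily what src/onnx_pipeline.py does.
import cv2
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 3  # analogous to --num_threads
session = ort.InferenceSession("best.onnx", sess_options=opts,
                               providers=["CPUExecutionProvider"])

# Load and preprocess the input image.
img = cv2.imread("input_image.png")
blob = cv2.resize(img, (640, 640))
blob = cv2.cvtColor(blob, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
blob = np.transpose(blob, (2, 0, 1))[None, ...]  # HWC -> NCHW with batch dim

# Run the model; YOLO segmentation models emit detections plus mask prototypes.
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: blob})
print([o.shape for o in outputs])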

Hardware for Training

The baseline model was trained on the following configuration:

  • Model: YOLOv8 nano
  • GPU: NVIDIA GeForce RTX 4060 Laptop GPU
  • GPU memory: 8 GB
  • CPU: 13th Gen Intel(R) Core(TM) i9-13900H (2.60 GHz)
  • RAM: 16.0 GB

Other versions of the model were trained using the cloud GPU provider modal.com.

Directory Structure

/NASA_segmentation_F24/
├── data_wrangling/
│   ├── generate_posebowl_masks.py
│   ├── binary_masks_to_yolo_polys.py
│   ├── resize_and_merge_classes_spacecrafts.py
│   └── create_yaml.py
├── modeling/
│   └── train.py
├── testing/
│   ├── benchmark.py
│   └── validate.py
├── utils/
│   └── config.py
├── data/
│   ├── dataset-v1/ 
│   └── dataset-v2/ 
├── configs/
│   └── config.yaml
├── requirements.txt
├── LICENSE
├── CONTRIBUTING
├── .env.example
└── README.md 
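
Among the data_wrangling/ scripts, binary_masks_to_yolo_polys.py converts binary masks into the polygon label format that YOLO segmentation training expects. The following is a minimal sketch of that kind of conversion under simplifying assumptions (a single class with ID 0, external contours only); the repository's actual script may handle more cases.

# Sketch: convert a binary mask image into YOLO segmentation label lines
# ("class x1 y1 x2 y2 ..." with coordinates normalized to [0, 1]).
# Single class (0) and external contours only; the repository script may differ.
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
h, w = mask.shape
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

lines = []
for contour in contours:
    if len(contour) < 3:  # skip degenerate contours
        continue
    coords = []
    for x, y in contour.reshape(-1, 2):
        coords.append(f"{x / w:.6f}")
        coords.append(f"{y / h:.6f}")
    lines.append("0 " + " ".join(coords))

with open("mask.txt", "w") as f:
    f.write("\n".join(lines))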

Sample Output of the YOLOv8 Model

After running the YOLOv8 segmentation model, you can expect segmentation masks along with various logs and performance metrics that demonstrate the model's efficiency in detecting and masking spacecraft components in real time; a minimal sketch for reproducing such output follows the example below.

Example Input

[example input image]

Example Output

[example segmented output image]
  • Segmentation Masks: These masks outline the spacecraft components identified in the images.
  • Bounding Boxes: Predicted bounding boxes for detected spacecraft objects.
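
Predictions like the example above can be reproduced with the pinned ultralytics version. A minimal sketch, assuming a downloaded checkpoint saved as models/best.pt:

# Sketch: run the trained segmentation model on a single image and save the
# rendered result. Paths are assumptions based on the directory structure above.
import cv2
from ultralytics import YOLO

model = YOLO("models/best.pt")
results = model.predict("input_image.png", imgsz=640, conf=0.25)

for result in results:
    print(result.boxes.xyxy)        # predicted bounding boxes
    if result.masks is not None:
        print(result.masks.xy)      # polygon outlines of the predicted masks
    annotated = result.plot()       # image with boxes and masks drawn
    cv2.imwrite("output_segmented_image.jpg", annotated)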

Weights & Biases Logging

To enable logging of validation results to Weights & Biases (WandB), you'll need to set your WandB API key in a .env file. Follow these steps to configure it:

  1. Create a .env file in the root directory of your project based on the provided example:

    cp .env.example .env
  2. Open the .env file and add your WandB API key:

    WAND_API_KEY = "YOUR_API_KEY"

    Replace YOUR_API_KEY with your actual WandB API key, which you can obtain from your Weights & Biases account.
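
For reference, validation code can pick the key up with python-dotenv before authenticating. A minimal sketch, assuming the WAND_API_KEY variable name used in the example above (check .env.example for the name the repository actually expects):

# Sketch: load the WandB API key from .env and authenticate before logging.
# Uses the WAND_API_KEY variable name shown above; adjust if .env.example differs.
import os

import wandb
from dotenv import load_dotenv

load_dotenv()                               # reads key=value pairs from .env
wandb.login(key=os.getenv("WAND_API_KEY"))  # authenticate with the stored key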

License

This project is licensed under the Apache License. See the LICENSE file for details.
