
TWIST: Tension from Wire-image Strand Tracking

Vision-based Automated Cable Tension Monitoring Using Pixel Tracking

Dongyoung Ko, Minsoo Park∗, Sujin Jin, Pa Pa Win Aung, Seunghee Park∗ (∗: Corresponding Authors)

Automation in Construction

Vol. 179, Article 106488, pp. 1-19, Nov 2025, DOI: https://doi.org/10.1016/j.autcon.2025.106488

Abstract

Traditional methods for monitoring cable tension rely on indirect measurements such as cable vibrations and often require specialized calibration. These approaches limit the efficiency and non-contact capability of tension monitoring across various structures. This paper presents a vision-based framework for automated cable tension monitoring that directly captures image data of internal steel strands. By leveraging advanced computer vision techniques, such as zero-shot segmentation, depth estimation, edge detection, and dense pixel tracking, critical geometric parameters are extracted and integrated into a kinematic-based model for tension estimation. A calibration-free method for estimating real-world pixel size, derived from the helical geometry of the strands, enables field deployment without the need for camera setup information. Experimental results show strong correlation with reference data, achieving a mean absolute error of 4.94% under elastic conditions. These findings establish a promising alternative for vision-based structural health monitoring of prestressed structures.

System Requirements

All experiments were conducted under the following hardware and software configuration:

Hardware Environment

  • Workstation: Dell Precision 7920 Rack
  • CPU: Intel Xeon Silver 4210R (10 cores, 20 threads, 2.4–3.2 GHz)
  • Memory: 128 GB DDR4-3200 ECC
  • GPU: NVIDIA RTX A6000 (48 GB GDDR6 ECC)
  • Storage: 1 TB NVMe SSD
  • Operating System: Ubuntu 20.04 LTS

Software Environment

  • Python: 3.10
  • PyTorch: 2.1.0 (with CUDA 12.1)
  • TensorFlow: 2.15 (with CUDA 11.7)
  • scikit-learn: 1.4

1. Preparation

  • Clone or download the code package from this repository.
  • Set the working directory:
MainFolder = "/your/custom/path"

Experimental video of the tensioning strand should be placed in the Input_Video/ folder.
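The preparation steps above can be sketched as a small path check. This is an illustrative helper, not part of the released code; `MainFolder` and the `Input_Video/` folder name come from the instructions above, while `check_layout` is a hypothetical function name.

```python
import os

# Expected layout, per the preparation steps:
# MainFolder/
# └── Input_Video/   <- place the experimental tensioning-strand video here
MainFolder = "/your/custom/path"

def check_layout(main_folder):
    """Return the Input_Video directory, creating it if it does not exist."""
    video_dir = os.path.join(main_folder, "Input_Video")
    os.makedirs(video_dir, exist_ok=True)
    return video_dir
```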

The pipeline depends on three pretrained models: Depth Anything, SAM (Segment Anything Model), and CoTracker:

| # | Module | GitHub | Version / Commit | License |
|---|--------|--------|------------------|---------|
| 1 | SAM 2 (Segment Anything Model V2) | facebookresearch/sam2 | sam2.1 | Apache 2.0 |
| 2 | Depth Anything V2 | DepthAnything/Depth-Anything-V2 | vitl | MIT |
| 3 | CoTracker | facebookresearch/co-tracker | CoTracker V3 | Apache 2.0 |

LDC (Lightweight Dense CNN for Edge Detection)

  • [main.py] Set input image directory:
    --input_val_dir=/your/image/folder

  • [main.py] Set output directory for results:
    --output_dir=/your/output/folder

  • [main.py] Match image resolution to your data:
    --img_width=YOUR_WIDTH and --img_height=YOUR_HEIGHT
    (e.g., 1080×710)

  • [img_processing.py] Adjust edge map threshold:

    tensor = np.where(tensor >= 0.70, tensor, 0)  # threshold for edge map
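The thresholding line above can be wrapped and tried on a small synthetic edge map; this is an illustrative sketch (the function name `threshold_edge_map` and the sample values are not from the released code):

```python
import numpy as np

def threshold_edge_map(tensor, thr=0.70):
    """Suppress weak LDC edge responses below thr, as in img_processing.py."""
    return np.where(tensor >= thr, tensor, 0)

# Synthetic 2x2 edge map: only responses >= 0.70 survive.
edge_map = np.array([[0.95, 0.10],
                     [0.70, 0.69]])
filtered = threshold_edge_map(edge_map)
```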

2. Run the Main Pipeline

Open and execute the notebook:

Run.ipynb

This notebook will guide you through the full processing pipeline.

Processing Steps

Step 1: Semantic Segmentation (Segment Anything Model V2)

  • Segment the strands using the SAM2 framework.
  • Output: Preliminary masks of individual strands.
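A common post-processing step for such preliminary masks is to keep the largest candidate per strand. The sketch below is illustrative only and does not use the SAM2 API; `largest_mask` is a hypothetical helper operating on boolean mask arrays.

```python
import numpy as np

def largest_mask(masks):
    """Given a list of boolean candidate masks, return the one with the
    largest pixel area (a simple heuristic for the main strand region)."""
    areas = [int(m.sum()) for m in masks]
    return masks[int(np.argmax(areas))]
```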

Step 2: Depth Estimation (Depth Anything V2)

  • Use a monocular depth estimator to generate depth maps.
  • Compute the ROI by intersecting:
    • SAM2 segmentation mask
    • Valid depth region
  • This ensures a more precise ROI for subsequent wire tracking and analysis.
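The ROI intersection in Step 2 can be sketched with boolean masks; this is a minimal illustration, not the released code, and the depth bounds `d_min`/`d_max` are assumed placeholder values:

```python
import numpy as np

def compute_roi(seg_mask, depth_map, d_min=0.5, d_max=3.0):
    """Intersect the SAM2 segmentation mask with the valid depth region
    (pixels whose estimated depth falls in an assumed working range)."""
    valid_depth = (depth_map >= d_min) & (depth_map <= d_max)
    return seg_mask & valid_depth
```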

Step 3: Feature Tracking (CoTracker V3)

  • Track the displacement of strands across frames using the CoTracker algorithm.
  • Output: Time-series of strand displacements.
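The time-series output of Step 3 can be illustrated as follows. CoTracker returns per-frame pixel coordinates for the tracked points; the reduction to a mean displacement magnitude shown here is a sketch, not the paper's exact post-processing:

```python
import numpy as np

def displacement_series(tracks):
    """tracks: (T, N, 2) array of pixel coordinates for N tracked points
    over T frames. Returns the per-frame mean displacement magnitude
    relative to the first frame."""
    disp = tracks - tracks[0]            # (T, N, 2) offsets from frame 0
    mag = np.linalg.norm(disp, axis=-1)  # (T, N) per-point magnitudes
    return mag.mean(axis=1)              # (T,) mean over points
```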

Lay Angle Estimation

  • The lay angle is estimated by applying LDC edge detection to the first frame of the video; the resulting estimate is then used for the remaining frames.
  • Run Edge(LDC).ipynb to perform the lay angle estimation.
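One simple way to turn a binary edge map into an orientation estimate is to take the principal direction of the edge pixels. This is an illustrative sketch, not the paper's lay-angle procedure; `dominant_edge_angle` is a hypothetical helper:

```python
import numpy as np

def dominant_edge_angle(edge_map):
    """Return the dominant orientation (degrees, 0-180) of a binary edge
    map, from the principal axis of the centred edge-pixel coordinates."""
    ys, xs = np.nonzero(edge_map)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Principal direction via SVD of the centred coordinates.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    vx, vy = vt[0]
    return np.degrees(np.arctan2(vy, vx)) % 180.0
```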

Citation

@article{ko2025vision,
  title={Vision-based automated cable tension monitoring using pixel tracking},
  author={Ko, Dongyoung and Park, Minsoo and Jin, Sujin and Aung, Pa Pa Win and Park, Seunghee},
  journal={Automation in Construction},
  volume={179},
  pages={106488},
  year={2025},
  publisher={Elsevier}
}

Contributors

Dongyoung Ko∗ (SKKU)

Minsoo Park∗ (GWNU)

Taebum Lee (SmartInside)

Sujin Jin (SKKU)

Pa Pa Win Aung (SKKU)

Seunghee Park∗ (SKKU)
