Using camera parameters for depth estimation with a fine-tuned YOLO model trained on the KITTI dataset


Spenc3rB/APDde


Stereo Vision Depth Estimation with YOLO - Part of APDde (Autonomous Pedestrian Detection with Depth Estimation)

Example video

This repository contains a Python script for stereo vision depth estimation using the YOLO (You Only Look Once) object detection model. The system uses two cameras, one left and one right, to capture stereo image pairs and estimates the depth of detected objects in real time. Depth is estimated based on triangulation principles.
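The triangulation principle behind the estimate can be sketched as follows: an object closer to the rig shifts more between the two views, so depth is inversely proportional to that horizontal pixel shift (the disparity). This is a generic sketch, not the repository's exact code; `triangulate_depth` is a hypothetical helper name.

```python
def triangulate_depth(disparity_px, focal_px, baseline):
    """Classic stereo relation: depth = f * B / d.

    disparity_px -- horizontal pixel shift of the object between views
    focal_px     -- focal length expressed in pixels
    baseline     -- distance between camera centers; the result is in
                    the same units as the baseline
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline / disparity_px

# e.g. a 20-pixel shift with f = 227 px and B = 15 cm:
# triangulate_depth(20, 227.0, 15.0) ≈ 170 cm
```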

Requirements

Make sure to install the following Python libraries before running the script:

  • OpenCV
  • NumPy
  • Ultralytics (YOLO)
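These can typically be installed with pip; the package names below are the usual PyPI names (note that OpenCV ships as `opencv-python`), not names confirmed by the repository.

```shell
pip install opencv-python numpy ultralytics
```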

Setup

Adjust the camera parameters in the script to match your setup:

right_camera_index = 1
left_camera_index = 0
resolution = (320, 320)

Set the stereo vision parameters according to your camera setup:

stereo_parameters = {
    'FRAME_RATE': 30,
    'B': 15,       # Baseline (distance between the two camera centers)
    'F': 3.67,     # Focal length of the cameras
    'ALPHA': 70.42 # Stereo vision alpha parameter
}
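The triangulation formula needs the focal length in pixels, while `F` above is in millimetres. A common convention in stereo-vision scripts is to derive the pixel focal length from `ALPHA`, treating it as the camera's horizontal field of view in degrees; that interpretation is an assumption here, so check it against your camera's datasheet.

```python
import math

def focal_length_px(frame_width, fov_deg):
    """Pixel focal length from the horizontal field of view.

    Assumes fov_deg (ALPHA above) is the horizontal FOV in degrees,
    a common convention in stereo-vision code.
    """
    return (frame_width * 0.5) / math.tan(math.radians(fov_deg) * 0.5)

f_px = focal_length_px(320, 70.42)  # roughly 227 px for the values above
```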

Configure the YOLO model with the desired weights file (e.g., "best.pt"):

model = YOLO("best.pt")

Main Program Loop

The main program loop captures frames from the right and left cameras, performs calibration, runs YOLO predictions, and calculates depth using triangulation. The depth information is then visualized on the left camera frame, including bounding boxes and depth values.
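One plausible shape for the per-frame depth step is sketched below: pair same-class YOLO detections across the two frames, take the horizontal offset of their box centers as the disparity, and triangulate. The matching heuristic (nearest vertical position) and the function name are illustrative assumptions; the repository's logic may differ.

```python
def match_and_depth(left_dets, right_dets, focal_px=227.0, baseline=15.0):
    """Pair same-class detections across frames, then triangulate depth.

    Each detection is (class_name, center_x, center_y) in pixels.
    Returns a list of (class_name, depth) in the baseline's units.
    """
    depths = []
    for cls, lx, ly in left_dets:
        # Match by nearest vertical position among same-class detections.
        candidates = [(abs(ly - ry), rx) for c, rx, ry in right_dets if c == cls]
        if not candidates:
            continue  # no counterpart in the right frame
        _, rx = min(candidates)
        disparity = lx - rx  # left x minus right x for a left-of-right rig
        if disparity > 0:
            depths.append((cls, focal_px * baseline / disparity))
    return depths
```

For example, a pedestrian whose box center sits at x = 160 in the left frame and x = 140 in the right frame has a 20-pixel disparity, giving a depth of about 170 cm with the parameters above.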

Usage

Run the script (in progress):

python reference.py

Press 'q' to exit the application.

Note

Inference was performed on an Intel i7-13700K processor.

Acknowledgments

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
