System for cube pose estimation using ArUco markers, integrated with an ESP32 for embedded control and robotic applications. Includes computer vision pipeline, calibration tools, and firmware for real-time robotized interaction.


Veesion Cube Tool

A compact hardware and software stack for estimating the pose of a 3D‑printed pen tip.
An ESP32 inside the pen sends a BLE trigger; the PC listens for the event, grabs a frame from a camera and computes the pen pose from ArUco markers.


Idea

(Images: device, cube, and result; the 3D-printed pen)

Prerequisites

Hardware

  • ESP32 DevKit (BLE + two buttons) to trigger the acquisition
  • Physical assembly: 3D-printed pen/cube with 3–4 ArUco markers attached
  • Camera: Basler camera with Pylon drivers installed or a compatible UVC webcam
  • Calibration target: ChArUco board or checkerboard pattern

Software

  • Python ≥3.10 and pip to install dependencies
  • ESP32 firmware toolchain: Arduino IDE/CLI or PlatformIO (optional but recommended)
  • Basler Pylon drivers (optional) for industrial camera integration
  • Python libraries from pc/app/requirements.txt

Repository structure

Workflow

  1. Press a button on the pen. The ESP32 publishes a BLE notification.
  2. The PC application receives the trigger and acquires an image from the selected camera.
  3. ArUco markers on the cube are detected and the cube pose is estimated.
  4. A fixed transform gives the pen‑tip pose, which can be logged or streamed.
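The fixed transform in step 4 can be sketched with plain NumPy: given the cube pose as an axis-angle rotation `rvec` and a translation `tvec` (the usual `solvePnP`-style output), the tip position is a constant cube-frame offset rotated into camera coordinates and added to the translation. The offset value and function names below are illustrative, not the project's API.

```python
import numpy as np

def rodrigues(rvec):
    """Convert an axis-angle vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def tip_position(rvec, tvec, tip_offset_cube):
    """Map a fixed tip offset (cube frame, metres) into camera coordinates."""
    R = rodrigues(rvec)
    return R @ np.asarray(tip_offset_cube, dtype=float) + np.asarray(tvec, dtype=float)

# Illustrative values: cube rotated 90 deg about Z, tip 6 cm along the cube X axis.
rvec = np.array([0.0, 0.0, np.pi / 2])
tvec = np.array([0.10, 0.00, 0.50])
print(tip_position(rvec, tvec, [0.06, 0.0, 0.0]))  # the offset ends up along camera Y
```

The same composition, applied with the project's real calibrated offset, yields the `tvec_tip` values described under "Pose result artifacts" below.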

Pose result artifacts

Each capture session produces both a JSON summary and a CSV table beside the session folder (e.g. session_<start>__<end>_pose.json / .csv). The JSON file stores a list of frame objects containing:

  • tvec_tip: Cartesian tip position expressed in metres relative to the camera.
  • wand_direction: Normalised tip direction vector in camera coordinates.
  • euler_tip: Tip orientation expressed as Z‑Y‑Z intrinsic Euler angles (rz1, ry, rz2) in radians.
  • tip_pose: Convenience tuple (x, y, z, rz1, ry, rz2) merging position and orientation. Downstream consumers should prefer this when available.

The accompanying CSV mirrors the JSON content with one row per frame and the columns: frame_index, timestamp, ok, tip_x, tip_y, tip_z, tip_rz1, tip_ry, tip_rz2. Numerical values are emitted in metres/radians to ease data analysis in spreadsheets or plotting tools.
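The Z-Y-Z intrinsic angles stored in `euler_tip` (and in the `tip_rz1`, `tip_ry`, `tip_rz2` CSV columns) can be recovered from a rotation matrix as below. This is a minimal sketch of the decomposition convention only; the function names are illustrative and not part of the project's code.

```python
import numpy as np

def zyz_from_matrix(R):
    """Decompose R = Rz(rz1) @ Ry(ry) @ Rz(rz2) (intrinsic Z-Y-Z), radians."""
    ry = np.arccos(np.clip(R[2, 2], -1.0, 1.0))
    if np.sin(ry) < 1e-9:  # gimbal lock: rz1 and rz2 are not separable
        return np.arctan2(R[1, 0], R[0, 0]), ry, 0.0
    rz1 = np.arctan2(R[1, 2], R[0, 2])
    rz2 = np.arctan2(R[2, 1], -R[2, 0])
    return rz1, ry, rz2

def matrix_from_zyz(rz1, ry, rz2):
    """Compose the same convention; useful for round-trip checks."""
    def rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    def ry_m(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return rz(rz1) @ ry_m(ry) @ rz(rz2)

angles = (0.3, 0.5, -0.2)
print(zyz_from_matrix(matrix_from_zyz(*angles)))  # recovers the angles up to float precision
```

Note the singularity at ry = 0 or ry = π, where only the sum rz1 + rz2 is observable; consumers of the CSV should be aware of this when differentiating angle columns over time.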

Quick start

Set up the PC environment

Linux/macOS:

cd pc
python -m venv .venv
source .venv/bin/activate
pip install -r app/requirements.txt

Windows (PowerShell):

cd pc
py -m venv .venv
. .\.venv\Scripts\Activate.ps1
pip install -r app\requirements.txt

Tip: keep the virtual environment active for the next steps (pytest, CLI, PC application).

Build & flash the ESP32 firmware

Follow the Getting Started section of the firmware:

  1. Install the NimBLE-Arduino, Adafruit SSD1306, and Adafruit GFX libraries via the Arduino Library Manager.
  2. Wire the components as described in esp32/README.md.
  3. Build and upload the project with the ESP32 Dev Module board selected (Arduino IDE/CLI), or set up an equivalent PlatformIO project.

Calibration and sample dataset

  • Use the scripts in pc/calib/ to produce calib_data.npz (camera intrinsics) from a ChArUco/checkerboard target.
  • For quick trials, point capture.test_source_dir to cube_minimal/data/sample_dataset/, which contains the sample images described in its README.
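The exact contents of calib_data.npz depend on the calibration script; the sketch below shows a plausible layout, assuming the intrinsic matrix and distortion coefficients are stored under the keys `camera_matrix` and `dist_coeffs` (the key names and values are assumptions, check pc/calib/ for the real ones).

```python
import numpy as np

# Hypothetical intrinsics for a 1280x720 sensor (fx, fy, cx, cy in pixels).
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # k1, k2, p1, p2, k3 (plumb-bob model)

np.savez("calib_data.npz", camera_matrix=camera_matrix, dist_coeffs=dist_coeffs)

# Loading it back, as the pose estimator would:
calib = np.load("calib_data.npz")
print(calib["camera_matrix"].shape, calib["dist_coeffs"].shape)
```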

Run the pipeline quickly

cd pc/app
python app.py

Configure config.yaml before launching the application: set ble.name_prefix or ble.addr, choose the camera (camera_type, camera_id/camera_serial), and provide the calibration path (pose.camera_calibration_npz).
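A minimal config.yaml sketch built from the keys listed above; the values and the exact nesting of the camera keys are placeholders, not defaults shipped with the project:

```yaml
ble:
  name_prefix: "VeesionPen"    # or pin one device via ble.addr
  # addr: "AA:BB:CC:DD:EE:FF"
camera_type: "basler"          # or a UVC webcam
camera_serial: "12345678"      # use camera_id for UVC webcams
pose:
  camera_calibration_npz: "../calib/calib_data.npz"
```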

Simulated mode

For tests without physical hardware, set in config.yaml:

capture:
  simulate_camera: true
  test_source_dir: "../cube_minimal/data/sample_dataset"

This mode replays sessions by reading the sample images through the same processing pipeline.

Rapid validation

  • Python tests (cube_minimal):
    cd pc
    source .venv/bin/activate        # Linux/macOS
    # PowerShell: . .\.venv\Scripts\Activate.ps1
    pytest ../cube_minimal/tests
  • Single-image estimation CLI:
    python -m cube_minimal.cli.estimate_one --image cube_minimal/data/sample_dataset/example_1.png --camera cube_minimal/config/calib_data.npz --aruco_dict 4X4_50 --marker_size 0.055 --cube_size 0.060 --out overlay.png
  • Simulated PC session: enable capture.simulate_camera: true as above and launch the application from pc/app/:
    python app.py
  • Static quality checks (run from the repository root after activating the virtualenv):
    ruff check pc/app
    mypy
    Both tools read their configuration from pyproject.toml and warn about style or typing regressions.

Further reading

Next steps

  • Mirror the incorrect marker on the pen tip; review the pen tip
  • Extrinsic calibration of Dexter
  • Develop a robot program that reads the CSV and moves the arm to the desired position

Work close-out report

License

All rights reserved.

This software and all associated files are the exclusive property of Angelo Milella - COMAU. Unauthorized copying, modification, distribution, or use of this software, via any medium, is strictly prohibited.

For inquiries about licensing, please contact: angelo_milella_dev@yahoo.com.
