This repository provides a vision-based control implementation for Task 2 of the 2025 National Undergraduate Electronics Design Contest (NUEDC), Question E. The system utilizes an OpenMV4 module as the primary vision processor to coordinate a 2D Pan-Tilt gimbal. The design objective is to identify target centers on A4 UV-sensitive paper and direct a 405nm laser pointer to the target within a 2-second interval.
- Vision Processor: OpenMV4 (STM32H7-based), utilized for image acquisition and real-time feature processing.
- Actuators: A 2D Pan-Tilt gimbal driven by two PWM-controlled servos. Servo movements are constrained to ±5° increments per control cycle.
- Laser Module: 405nm blue-violet laser (optical power ≤ 10mW). A red laser was employed for calibration during the development phase.
The system implements a closed-loop control architecture integrating feature recognition, signal filtering, and positional adjustment.
The implementation employs the Normalized Cross-Correlation (NCC) algorithm within the LAB color space to identify rectangular frames. The matching degree is defined by the following mathematical model:
(Where $I_1$ represents the source image pixel values and $I_2$ represents the target template values after coordinate offset $d$.)
To mitigate high-frequency jitter resulting from mechanical backlash or rapid transitions, a 5-frame moving average filter is applied to the calculated pixel errors (
Decoupled proportional control is applied to the horizontal and vertical axes:
-
Pan (Horizontal):
$K_p = 0.07$ -
Tilt (Vertical):
$K_p = 0.08$ -
Output Scaling: A damping factor of
$0.9$ is applied to the final output to reduce potential overshoot and oscillation during convergence.
The following table summarizes the measured performance across different test scenarios.
| Scenario | Initial Condition | Target Status | Max Deviation ( |
Observation |
|---|---|---|---|---|
| Test 1 | Fixed position | Stationary | 1.6 cm | Successful tracking |
| Test 2 | Fixed position | Stationary | 1.8 cm | Successful tracking |
| Test 3 | Arbitrary offset | Stationary | 3.8 cm | Target locked |
| Test 4 | Arbitrary offset | Stationary | 5.3 cm | Target locked |
Summary: Experimental data indicates that the steady-state error is maintained within 1.8 cm at fixed positions. The system demonstrates reliable convergence from arbitrary starting coordinates, meeting the requirement of ≤ 2.0 cm deviation within the specified 2-second timeframe.
- Clone the repository to the local environment.
- Transfer
main.pyandpid.pyfrom thesrc/directory to the root directory of the OpenMV module. - Adjust the
LAB_THRESHOLDparameters inmain.pyto account for specific ambient lighting conditions. - Upon power-up, the system undergoes a centering routine before entering the active tracking state.
本项目为 2025 年全国大学生电子设计竞赛(NUEDC)E 题任务 2 提供了基于视觉的控制实现。系统利用 OpenMV4 模块作为核心视觉处理器,协调二维云台运动。设计目标是识别 A4 紫外感光纸上的目标中心,并在 2 秒内驱动 405nm 激光笔指向目标。
- 视觉处理器:OpenMV4 (基于 STM32H7),用于图像采集和实时特征处理。
- 执行器:由两个 PWM 控制的舵机驱动的二维云台。舵机运动被限制在每个控制周期 ±5° 以内。
- 激光模块:405nm 蓝紫色激光器(光功率 ≤ 10mW)。开发阶段使用红色激光器进行校准。
系统实现了一个集成了特征识别、信号滤波和位置调整的闭环控制架构。
在 LAB 颜色空间中采用归一化相关系数 (NCC) 算法来识别矩形框。匹配程度由以下数学模型定义:
(其中 $I_1$ 代表源图像像素值,$I_2$ 代表坐标偏移 $d$ 后的目标模板值。)
为了减轻由于机械回程或快速切换导致的离散抖动,在控制阶段前对计算出的像素误差 (
对水平和垂直轴应用解耦比例控制:
- Pan (水平):$K_p = 0.07$
- Tilt (垂直):$K_p = 0.08$
- 输出缩放:对最终输出应用 0.9 的阻尼因子,以减少收敛过程中的潜在超调和震荡。
下表总结了不同测试场景下的实测性能。
| 场景 | 初始条件 | 目标状态 | 最大偏差 ( |
观察结果 |
|---|---|---|---|---|
| 测试 1 | 固定位置 | 静止 | 1.6 cm | 成功追踪 |
| 测试 2 | 固定位置 | 静止 | 1.8 cm | 成功追踪 |
| 测试 3 | 任意偏移 | 静止 | 3.8 cm | 目标锁定 |
| 测试 4 | 任意偏移 | 静止 | 5.3 cm | 目标锁定 |
总结:实验数据表明,固定位置的稳态误差保持在 1.8 cm 以内。系统展示了从任意起始坐标可靠收敛的能力,满足了在指定的 2 秒时限内偏差 ≤ 2.0 cm 的要求。
- 克隆仓库至本地环境。
- 将
src/目录下的main.py和pid.py传输至 OpenMV 模块的根目录。 - 根据具体的环境光照条件调整
main.py中的LAB_THRESHOLD参数。 - 上电后,系统将执行归中程序,随后进入活跃追踪状态。