Summary
I've successfully set up the full hardware pipeline with an ESP32-S3 sending real CSI data
via UDP, and the sensing server correctly detects presence/motion from the WiFi signals.
However, I cannot get actual human pose estimation working because the trained model weights
are not included in the repository.
What I've verified is working
- ESP32-S3 → UDP broadcast (10 fps, 64 subcarriers, ADR-018 format) ✅
- `v1/src/sensing/ws_server.py` — real-time presence & motion detection from CSI ✅
- `MOCK_HARDWARE=true` / `MOCK_POSE_DATA=true` — mock skeleton renders in UI ✅
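For context, this is roughly how I'm sanity-checking the incoming frames on the receiver side. The header layout here is my own guess (magic, sequence, timestamp, 4 reserved bytes, then 64 int8 I/Q pairs = 148 bytes); only the magic value, frame size, and subcarrier count are confirmed from my captures:

```python
import struct

MAGIC = 0xC5110001       # confirmed from captured packets
FRAME_LEN = 148          # confirmed: 148 B/frame
N_SUBCARRIERS = 64       # confirmed: 64 subcarriers

def parse_csi_frame(payload: bytes):
    """Parse one UDP CSI frame.

    Header layout is an assumption for illustration: 4-byte magic,
    4-byte sequence, 8-byte timestamp, 4 reserved bytes, then
    64 (I, Q) int8 pairs — 148 bytes total.
    """
    if len(payload) != FRAME_LEN:
        raise ValueError(f"expected {FRAME_LEN} bytes, got {len(payload)}")
    magic, seq, ts = struct.unpack_from("<IIq", payload, 0)
    if magic != MAGIC:
        raise ValueError(f"bad magic: {magic:#010x}")
    iq = struct.unpack_from(f"<{2 * N_SUBCARRIERS}b", payload, 20)
    csi = [complex(iq[2 * k], iq[2 * k + 1]) for k in range(N_SUBCARRIERS)]
    return seq, ts, csi
```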
The problem
In v1/src/services/pose_service.py (line 112–119), DensePoseHead() is instantiated
with randomly initialized weights — the actual torch.load() call is commented out:
```python
self.densepose_model = DensePoseHead()
# Load model weights if path is provided
# model_state = torch.load(self.settings.pose_model_path)
```
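As a stopgap I've been guarding my local copy so a random-init model can't silently masquerade as a real one. This is my own sketch, not repo code; `POSE_MODEL_PATH` is the config key mentioned above, and the helper name is hypothetical:

```python
import warnings
from pathlib import Path

def resolve_pose_weights(env: dict):
    """Return a weights Path if configured and present, else None.

    If POSE_MODEL_PATH is unset, warn loudly that DensePoseHead will run
    with random initialization; if it's set but the file is missing,
    fail fast instead of rendering meaningless skeletons.
    """
    raw = env.get("POSE_MODEL_PATH")
    if not raw:
        warnings.warn("POSE_MODEL_PATH unset: DensePoseHead will use random init")
        return None
    path = Path(raw)
    if not path.is_file():
        raise FileNotFoundError(f"POSE_MODEL_PATH points to missing file: {path}")
    return path
```

With a path resolved, the commented-out line would become the usual `state = torch.load(path, map_location="cpu")` followed by `self.densepose_model.load_state_dict(state)` — assuming the checkpoint is a plain state dict, which I can't verify without the weights.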
Without pretrained weights, the model produces random/meaningless pose output, so the
skeleton shown in the Live Demo is not derived from WiFi signals at all.
Questions
1. Are trained model weights available? If yes, where can I download them, and which config key (`POSE_MODEL_PATH`) should point to them?
2. If weights are not available, what dataset and training procedure does this project expect? Is there a training script (I couldn't find one)? What paired CSI+pose data format is required?
3. Is there a roadmap for releasing weights or a pre-collected dataset for training?
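To make question 2 concrete, here is the kind of paired sample schema I would expect — purely my guess, not something from the repo. The window length, joint count, and camera-teacher labeling are all assumptions (e.g. a 17-keypoint COCO-style skeleton from a synchronized camera):

```python
from dataclasses import dataclass

@dataclass
class PoseSample:
    """One paired CSI+pose training sample (hypothetical schema).

    csi: a window of CSI frames, each 64 complex subcarriers —
         e.g. 30 frames (~3 s at 10 fps).
    keypoints: (x, y, confidence) per joint, labeled by a camera-based
         teacher model; 17 joints assumes a COCO-style skeleton.
    """
    csi: list                 # shape: (frames, 64) of complex
    keypoints: list           # shape: (17, 3) of (x, y, conf)
    timestamp_us: int         # sync point between CSI and camera

sample = PoseSample(
    csi=[[complex(0, 0)] * 64 for _ in range(30)],
    keypoints=[(0.0, 0.0, 0.0)] * 17,
    timestamp_us=0,
)
```

If the project expects something different (amplitude-only features, longer windows, 3D joints), documenting that format would be enough for the community to start collecting data.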
Environment
- ESP32-S3-DevKitC-1, ESP-IDF v5.x
- macOS, Python 3.13, PyTorch 2.10
- Confirmed UDP packets: magic=0xC5110001, 148 B/frame, 10 fps
Any guidance appreciated — the sensing pipeline itself is impressive, I just want to close
the gap to actual pose estimation.