Merged
10 changes: 6 additions & 4 deletions .github/workflows/main.yml
@@ -42,9 +42,10 @@ jobs:
run: |
python -m pip install pip-audit
python -m pip_audit
- - name: Set development API_URL for non-main branches
+ - name: Set development api urls for non-main branches
run: |
- echo "API_URL=https://api-dev.rouast.com/vitallens-dev/file" >> $GITHUB_ENV
+ echo "API_FILE_URL=https://api-dev.rouast.com/vitallens-dev/file" >> $GITHUB_ENV
+ echo "API_STREAM_URL=https://api-dev.rouast.com/vitallens-dev/stream" >> $GITHUB_ENV
+ echo "API_RESOLVE_URL=https://api-dev.rouast.com/vitallens-dev/resolve-model" >> $GITHUB_ENV
- name: Lint with flake8
run: |
@@ -88,9 +89,10 @@ jobs:
run: |
python -m pip install pip-audit
python -m pip_audit
- - name: Set development API_URL for non-main branches
+ - name: Set development api urls for non-main branches
run: |
- echo "API_URL=https://api-dev.rouast.com/vitallens-dev/file" | Out-File -FilePath $env:GITHUB_ENV -Append
+ echo "API_FILE_URL=https://api-dev.rouast.com/vitallens-dev/file" | Out-File -FilePath $env:GITHUB_ENV -Append
+ echo "API_STREAM_URL=https://api-dev.rouast.com/vitallens-dev/stream" | Out-File -FilePath $env:GITHUB_ENV -Append
+ echo "API_RESOLVE_URL=https://api-dev.rouast.com/vitallens-dev/resolve-model" | Out-File -FilePath $env:GITHUB_ENV -Append
- name: Lint with flake8
run: |
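The workflow steps above append `API_FILE_URL`, `API_STREAM_URL`, and `API_RESOLVE_URL` to `$GITHUB_ENV` so later steps see them as environment variables. A minimal sketch of how a client might consume such overrides — the default URLs and the `resolve_endpoint` helper here are hypothetical illustrations, not taken from the vitallens codebase:

```python
import os

# Hypothetical production defaults; the dev workflow steps above
# override them by exporting environment variables of the same name.
DEFAULTS = {
    "API_FILE_URL": "https://api.example.com/file",
    "API_STREAM_URL": "https://api.example.com/stream",
    "API_RESOLVE_URL": "https://api.example.com/resolve-model",
}

def resolve_endpoint(name: str) -> str:
    """Return the endpoint URL, preferring an environment override."""
    return os.environ.get(name, DEFAULTS[name])

# Simulate what the workflow step does for one variable:
os.environ["API_FILE_URL"] = "https://api-dev.rouast.com/vitallens-dev/file"
print(resolve_endpoint("API_FILE_URL"))    # dev override wins
print(resolve_endpoint("API_STREAM_URL"))  # falls back to the default
```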
44 changes: 44 additions & 0 deletions .github/workflows/release.yml
@@ -0,0 +1,44 @@
name: Publish and release

on:
  push:
    tags:
      - 'v*'

jobs:
  build-publish-release:
    name: Build, publish to PyPI, and create GitHub Release
    runs-on: ubuntu-latest
    environment:
      name: pypi
      url: https://pypi.org/p/vitallens
    permissions:
      id-token: write
      contents: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"

      - name: Install build tool
        run: python -m pip install --upgrade build

      - name: Build source and wheel distributions
        run: python -m build

      - name: Publish package to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
        with:
          files: dist/*
          generate_release_notes: true
          draft: false
          prerelease: ${{ contains(github.ref_name, 'beta') }}
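This workflow fires on any pushed tag matching `v*` and marks tags containing `beta` as prereleases. The sketch below shows how a release would be cut and mirrors the `contains(github.ref_name, 'beta')` check in plain shell — tag names are illustrative:

```shell
# A release is cut by pushing a matching tag, e.g.:
#   git tag v1.3.0 && git push origin v1.3.0

# Mirror the workflow's prerelease condition for a candidate tag name:
TAG="v1.3.0-beta.1"
case "$TAG" in
  *beta*) PRERELEASE=true ;;
  *)      PRERELEASE=false ;;
esac
echo "prerelease=$PRERELEASE"   # prints "prerelease=true"
```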
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

- Copyright (c) 2024 Rouast Labs
+ Copyright (c) 2026 Rouast Labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
24 changes: 21 additions & 3 deletions README.md
@@ -22,6 +22,7 @@ The library provides:
- **High-Fidelity Accuracy:** A simple interface to the VitalLens API for state-of-the-art estimation (heart rate, respiratory rate, HRV).
- **Local Fallbacks:** Implementations of classic rPPG algorithms (`pos`, `chrom`, `g`) for local, API-free processing.
- **Flexible Input:** Support for video files and in-memory `np.ndarray`.
- **Real-time Streaming:** `stream()` context manager for low-latency live inference.
- **Face Detection:** Integrated fast face detection and ROI management.

Using a different language? Check out our [JavaScript client](https://github.com/Rouast-Labs/vitallens.js) and [iOS SDK](https://github.com/Rouast-Labs/vitallens-ios).
@@ -45,21 +46,21 @@ from vitallens import VitalLens
vl = VitalLens(method="pos")

results = vl("path/to/video.mp4")
- print("Heart Rate:", results[0]['vital_signs']['heart_rate']['value'])
+ print("Heart Rate:", results[0]['vitals']['heart_rate']['value'])
```

### Get High-Fidelity Accuracy (with API Key)

To get improved accuracy and advanced metrics like **Respiratory Rate** and **HRV**, use the `vitallens` method. You can get a free key from the [API Dashboard](https://www.rouast.com/api).

```python
- from vitallens import VitalLens, Method
+ from vitallens import VitalLens

# Automatically selects the best model for your plan
vl = VitalLens(method="vitallens", api_key="YOUR_API_KEY")

results = vl("path/to/video.mp4")
- vitals = results[0]['vital_signs']
+ vitals = results[0]['vitals']

print(f"Heart Rate: {vitals['heart_rate']['value']:.1f} bpm")
print(f"Respiratory Rate: {vitals['respiratory_rate']['value']:.1f} rpm")
@@ -68,6 +69,23 @@ print(f"Respiratory Rate: {vitals['respiratory_rate']['value']:.1f} rpm")
if 'hrv_sdnn' in vitals:
print(f"HRV (SDNN): {vitals['hrv_sdnn']['value']:.1f} ms")
```

### Real-time Streaming

Process live frames from a webcam or stream.

```python
import time
from vitallens import VitalLens

# Process live frames
vl = VitalLens(method="vitallens", api_key="YOUR_API_KEY")

with vl.stream() as session:
# In your capture loop (e.g., OpenCV)
session.push(frame, timestamp=time.time())
results = session.get_result(block=False)
```
<!-- mkdocs-end -->

## Documentation
10 changes: 9 additions & 1 deletion docs/results.md
@@ -23,7 +23,7 @@ Each face entry follows this structure. Note that strict types (like `np.ndarray`
"confidence": [0.6115, 0.9207, 0.9183, ...],
"note": "Face detection coordinates..."
},
- "vital_signs": {
+ "vitals": {
"heart_rate": {
"value": 60.5,
"unit": "bpm",
@@ -43,6 +43,14 @@
"note": "Global estimate of Heart Rate Variability (SDNN)..."
}
},
"waveforms": {
"ppg_waveform": {
"data": [0.1, -0.2, ...],
"unit": "unitless",
"confidence": [0.9, 0.9, ...],
"note": "..."
}
},
"message": "The provided values are estimates..."
}
]
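Given the schema above, one way to pull values out of a single face entry — using a hand-built dict that mimics the documented shape, not real API output:

```python
# A mock entry shaped like the documented result (values are illustrative).
entry = {
    "vitals": {
        "heart_rate": {"value": 60.5, "unit": "bpm"},
    },
    "waveforms": {
        "ppg_waveform": {
            "data": [0.1, -0.2, 0.15, -0.05],
            "unit": "unitless",
            "confidence": [0.9, 0.9, 0.9, 0.9],
        },
    },
}

hr = entry["vitals"]["heart_rate"]
print(f"Heart Rate: {hr['value']:.1f} {hr['unit']}")  # Heart Rate: 60.5 bpm

# The waveform arrays run in parallel: one confidence value per sample.
ppg = entry["waveforms"]["ppg_waveform"]
assert len(ppg["data"]) == len(ppg["confidence"])
```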
67 changes: 55 additions & 12 deletions examples/README.md
@@ -3,6 +3,15 @@
<!-- mkdocs-start -->
The `examples/` folder contains sample scripts and video data to help you evaluate `vitallens` against ground truth data, run it in Docker, or integrate it into your own pipeline.

## Real-time Webcam Demo (`live.py`)

This script opens your webcam and streams frames to the API in chunks.

```bash
pip install opencv-python
python examples/live.py --method=vitallens --api_key=YOUR_API_KEY
```

## Evaluation Script (`test.py`)

This directory contains sample scripts and video data to help you evaluate `vitallens` against ground truth data, run it in Docker, or integrate it into your own pipeline.
@@ -54,7 +63,7 @@ vl = VitalLens(method="vitallens", api_key="YOUR_API_KEY")
results = vl("path/to/video.mp4")

# Access results
- print("Heart Rate:", results[0]['vital_signs']['heart_rate']['value'])
+ print("Heart Rate:", results[0]['vitals']['heart_rate']['value'])
```

### Processing Raw Frames (Numpy/OpenCV)
@@ -88,6 +97,51 @@ video_arr = np.array(frames)
results = vl(video_arr, fps=fps)
```
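OpenCV's `VideoCapture` delivers frames in BGR channel order. Assuming the model expects RGB (an assumption to verify against the library's docs), the channel axis can be flipped in NumPy before stacking — synthetic frames stand in for captures here:

```python
import numpy as np

# Simulate a few BGR uint8 frames as OpenCV would deliver them.
frames_bgr = [
    np.random.default_rng(i).integers(0, 256, (480, 640, 3), dtype=np.uint8)
    for i in range(3)
]

# Reverse the channel axis: BGR -> RGB, then stack into (T, H, W, 3).
video_arr = np.stack([f[..., ::-1] for f in frames_bgr])
print(video_arr.shape, video_arr.dtype)  # (3, 480, 640, 3) uint8
```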

### Real-time Streaming

For live feeds or webcams, there are two ways to handle results:

#### Polling (Non-blocking)

Best for applications with their own main loop (e.g., OpenCV display).

```python
import time
from vitallens import VitalLens

vl = VitalLens(method="vitallens", api_key="YOUR_API_KEY")

with vl.stream() as session:
while True:
frame, ts = get_frame() # Your capture logic
session.push(frame, timestamp=ts)

# Check for results whenever you want
results = session.get_result(block=False)
if results:
print(results[0]['vitals']['heart_rate']['value'])
```
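`get_frame()` above stands in for your own capture logic. A self-contained stand-in honoring the same `(frame, timestamp)` contract — synthetic frames here, where a real version would wrap something like `cv2.VideoCapture(0).read()`:

```python
import time
import numpy as np

def make_frame_source(width=640, height=480):
    """Build a get_frame() callable yielding (frame, timestamp) pairs."""
    rng = np.random.default_rng(0)

    def get_frame():
        # Random RGB uint8 pixels stand in for a camera capture.
        frame = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)
        return frame, time.time()

    return get_frame

get_frame = make_frame_source()
frame, ts = get_frame()
print(frame.shape, frame.dtype)  # (480, 640, 3) uint8
```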

#### Callback

Best for event-driven applications. The callback is triggered automatically as soon as inference finishes.

```python
import time
from vitallens import VitalLens

def my_callback(results):
print(f"Callback received HR: {results[0]['vitals']['heart_rate']['value']}")

vl = VitalLens(method="vitallens", api_key="YOUR_API_KEY")

with vl.stream(on_result=my_callback) as session:
while True:
frame, ts = get_frame()
session.push(frame, timestamp=ts)
# No need to call get_result()
```

## Running with Docker

If you encounter dependency issues (e.g., with `onnxruntime` or `ffmpeg`), you can run the example scripts inside our Docker container.
@@ -114,14 +168,3 @@
Expand All @@ -114,14 +168,3 @@ Since the plot cannot display inside the container, copy it out after running:
```bash
docker cp <container_id>:/app/results.png .
```
-
-## Real-time Webcam Demo (`live.py`)
-
-> **Note:** This script is experimental and optimized for testing the `Mode.BURST` functionality.
-
-This script opens your webcam and streams frames to the API in chunks.
-
-```bash
-pip install opencv-python
-python examples/live.py --method=vitallens --api_key=YOUR_API_KEY
-```