2 changes: 1 addition & 1 deletion apps/conference-demos/rgb-depth-connections/README.md
@@ -25,7 +25,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/)
2 changes: 1 addition & 1 deletion apps/default-app/README.md
@@ -23,7 +23,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
2 changes: 1 addition & 1 deletion camera-controls/depth-driven-focus/README.md
@@ -23,7 +23,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
5 changes: 4 additions & 1 deletion camera-controls/depth-driven-focus/main.py
@@ -1,9 +1,12 @@
+from pathlib import Path
+
 import depthai as dai
 from depthai_nodes.node import ParsingNeuralNetwork, ApplyColormap, DepthMerger
 from utils.arguments import initialize_argparser
 from utils.depth_driven_focus import DepthDrivenFocus

 _, args = initialize_argparser()
+MODEL_DIR = Path(__file__).resolve().parent / "depthai_models"

 visualizer = dai.RemoteConnection(httpPort=8082)
 device = dai.Device(dai.DeviceInfo(args.device)) if args.device else dai.Device()
@@ -15,7 +18,7 @@
 platform = device.getPlatform()

 model_description = dai.NNModelDescription.fromYamlFile(
-    f"yunet.{platform.name}.yaml"
+    str(MODEL_DIR / f"yunet.{platform.name}.yaml")
 )
 nn_archive = dai.NNArchive(dai.getModelFromZoo(model_description))
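The change above swaps a working-directory-relative YAML lookup for one anchored at the script's own location, so the example also runs when launched from another directory. The pattern in isolation (directory and file names here are illustrative, not part of the PR):

```python
from pathlib import Path

# Anchor resources at the script's own directory rather than the process's
# current working directory, so lookups survive a different launch CWD.
MODEL_DIR = Path(__file__).resolve().parent / "depthai_models"

def model_yaml_path(model: str, platform_name: str) -> str:
    """Build the path to a per-platform model YAML, e.g. 'yunet.RVC2.yaml'."""
    return str(MODEL_DIR / f"{model}.{platform_name}.yaml")

print(model_yaml_path("yunet", "RVC2"))  # absolute path ending in yunet.RVC2.yaml
```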
4 changes: 2 additions & 2 deletions camera-controls/depth-driven-focus/requirements.txt
@@ -1,4 +1,4 @@
-depthai==3.0.0
-depthai-nodes==0.3.4
+depthai==3.4.0
+depthai-nodes @ git+https://github.com/luxonis/depthai-nodes.git@changes_for_oak_examples_update
 opencv-python-headless~=4.10.0
 numpy>=1.22
2 changes: 1 addition & 1 deletion camera-controls/lossless-zooming/README.md
@@ -27,7 +27,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
8 changes: 7 additions & 1 deletion camera-controls/lossless-zooming/main.py
@@ -25,7 +25,13 @@
     else dai.ImgFrame.Type.BGR888p
 )

-model_description = dai.NNModelDescription.fromYamlFile(f"yunet.{platform.name}.yaml")
+model_description = dai.NNModelDescription.fromYamlFile(
+    str(
+        Path(__file__).resolve().parent
+        / "depthai_models"
+        / f"yunet.{platform.name}.yaml"
+    )
+)
 nn_archive = dai.NNArchive(dai.getModelFromZoo(model_description))
 model_width = nn_archive.getInputWidth()
 model_height = nn_archive.getInputHeight()
4 changes: 2 additions & 2 deletions camera-controls/lossless-zooming/requirements.txt
@@ -1,2 +1,2 @@
-depthai==3.0.0
-depthai-nodes==0.3.4
+depthai==3.4.0
+depthai-nodes @ git+https://github.com/luxonis/depthai-nodes.git@changes_for_oak_examples_update
11 changes: 5 additions & 6 deletions camera-controls/lossless-zooming/utils/crop_face.py
@@ -1,6 +1,5 @@
 import depthai as dai
 from typing import Tuple
-from depthai_nodes import ImgDetectionsExtended

 AVG_MAX_NUM = 10

@@ -15,7 +14,7 @@ class CropFace(dai.node.HostNode):
     Attributes
     ----------
     detections_input : dai.Input
-        The input link for the ImageDetectionsExtended message.
+        The input link for the dai.ImgDetections message.
     config_output : dai.Output
         The output link for the ImageManipConfig messages.
     source_size : Tuple[int, int]
@@ -49,7 +48,7 @@ def build(
         Parameters
         ----------
         detections_input : dai.Node.Output
-            The input link for the ImgDetectionsExtended message
+            The input link for the dai.ImgDetections message
         source_size : Tuple[int, int]
             The size of the source image (width, height).
         target_size : Optional[Tuple[int, int]]
@@ -67,11 +66,11 @@ def build(

     def process(self, detection_message: dai.Buffer):
         """Process the input detections and create a crop config. This function is
-        ran every time a new ImgDetectionsExtended message is received.
+        run every time a new dai.ImgDetections message is received.

         Sends one crop configuration to the config_output link.
         """
-        assert isinstance(detection_message, ImgDetectionsExtended)
+        assert isinstance(detection_message, dai.ImgDetections)
         timestamp = detection_message.getTimestamp()
         sequence_num = detection_message.getSequenceNum()

@@ -85,7 +84,7 @@ def process(self, detection_message: dai.Buffer):
         if len(dets) > 0:
             cfg.setSkipCurrentImage(False)
             coords = dets[0]
-            rect = coords.rotated_rect
+            rect = coords.getBoundingBox()

             x = rect.center.x
             y = rect.center.y
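The crop logic this node drives — take the detection's center, place a fixed-size crop around it, and keep the crop inside the frame — can be sketched standalone. The helper name and the fixed crop size below are assumptions for illustration, not the node's actual API:

```python
from typing import Tuple

def centered_crop(
    center_norm: Tuple[float, float],
    source_size: Tuple[int, int],
    target_size: Tuple[int, int],
) -> Tuple[int, int, int, int]:
    """Given a detection center in normalized [0, 1] coordinates, return an
    (x, y, w, h) crop of target_size in source pixels, shifted so it never
    leaves the source image. Purely illustrative of the CropFace idea."""
    src_w, src_h = source_size
    tgt_w, tgt_h = target_size
    # Top-left corner that centers the crop on the detection
    x = int(center_norm[0] * src_w - tgt_w / 2)
    y = int(center_norm[1] * src_h - tgt_h / 2)
    # Clamp so the crop stays fully inside the frame
    x = max(0, min(x, src_w - tgt_w))
    y = max(0, min(y, src_h - tgt_h))
    return x, y, tgt_w, tgt_h

# A face centered at (0.5, 0.5) in a 1920x1080 frame, cropped to 1080x1080:
print(centered_crop((0.5, 0.5), (1920, 1080), (1080, 1080)))  # (420, 0, 1080, 1080)
```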
2 changes: 1 addition & 1 deletion camera-controls/manual-camera-control/README.md
@@ -56,7 +56,7 @@ The following controls can be selected and modified with `+` and `-` keys:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/)
2 changes: 1 addition & 1 deletion camera-controls/manual-camera-control/requirements.txt
@@ -1,2 +1,2 @@
-depthai==3.0.0
+depthai==3.4.0
 opencv-python-headless~=4.10.0
@@ -1,5 +1,5 @@
-depthai==3.2.1
-depthai-nodes @ git+https://github.com/luxonis/depthai-nodes.git@f40211e5665473b5db48457640bed18fd1f2cc8d #InstanceToSemanticMask
+depthai==3.4.0
+depthai-nodes @ git+https://github.com/luxonis/depthai-nodes.git@changes_for_oak_examples_update
 opencv-python-headless~=4.10.0
 numpy>=1.22
 tokenizers~=0.21.0
@@ -3,7 +3,6 @@

 import depthai as dai

-from depthai_nodes import ImgDetectionsExtended

 logger = logging.getLogger(__name__)

@@ -13,7 +12,7 @@ class DetectionsLabelMapper(dai.node.HostNode):
     Adds label names to detections and aligns detections to a reference frame.

     Inputs:
-    - input_detections: dai.ImgDetections or ImgDetectionsExtended
+    - input_detections: dai.ImgDetections
    - input_frame: dai.ImgFrame (reference coordinate space)

     Output:
@@ -50,17 +49,8 @@ def build(
     def process(
         self, detections_message: dai.Buffer, frame_message: dai.ImgFrame
     ) -> None:
-        if isinstance(detections_message, ImgDetectionsExtended):
-            # Align detections to frame coordinate space
-            detections_message.setTransformation(frame_message.getTransformation())
-            for detection in detections_message.detections:
-                detection.label_name = self._label_encoding.get(
-                    detection.label, "unknown"
-                )
-        elif isinstance(detections_message, dai.ImgDetections):
-            detections_message.setTransformation(frame_message.getTransformation())
-            for detection in detections_message.detections:
-                detection.labelName = self._label_encoding.get(
-                    detection.label, "unknown"
-                )
+        assert isinstance(detections_message, dai.ImgDetections)
+        detections_message.setTransformation(frame_message.getTransformation())
+        for detection in detections_message.detections:
+            detection.labelName = self._label_encoding.get(detection.label, "unknown")
         self.out.send(detections_message)
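The simplified mapper boils down to a dictionary lookup with a fallback label. A standalone sketch of that loop (the label IDs and the `FakeDetection` stand-in are made up for illustration):

```python
label_encoding = {0: "person", 1: "car"}  # made-up id -> name table

def map_labels(detections, encoding, default="unknown"):
    """Attach a human-readable labelName to each detection-like object,
    mirroring the host node's loop over detections_message.detections."""
    for det in detections:
        det.labelName = encoding.get(det.label, default)
    return detections

class FakeDetection:  # stand-in for a dai.ImgDetection-like object
    def __init__(self, label):
        self.label = label

dets = map_labels([FakeDetection(0), FakeDetection(7)], label_encoding)
# dets[0].labelName == "person"; the unmapped id 7 falls back to "unknown"
```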
@@ -27,7 +27,7 @@ class NNDetectionNode(dai.node.ThreadedHostNode):
         -> LabelMapperNode (add label names for visualization)

     Exposes:
-    - detections_extended: ImgDetectionsExtended with label names (for visualizer)
+    - detections_extended: dai.ImgDetections with label names (for visualizer)
     - detections: dai.ImgDetections with label names (for snapping)
     - controller: PromptController for dynamic prompt updates (classes, confidence threshold)
     """
@@ -80,7 +80,7 @@ def build(
         # Detection filter
         self._det_filter.build(self._nn.out)

-        # Add label for visualization (ImgDetectionsExtended)
+        # Add label for visualization
         self._det_label_mapper.build(
             input_detections=self._det_filter.out, input_frame=input_frame
         )
2 changes: 1 addition & 1 deletion custom-frontend/raw-stream/README.md
@@ -29,7 +29,7 @@ Here is a list of all available parameters:

 #### BackEnd

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
2 changes: 1 addition & 1 deletion custom-frontend/raw-stream/requirements.txt
@@ -1,2 +1,2 @@
-depthai==3.0.0
+depthai==3.4.0
 numpy>=1.22
2 changes: 1 addition & 1 deletion depth-measurement/3d-measurement/box-measurement/README.md
@@ -25,7 +25,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
8 changes: 6 additions & 2 deletions depth-measurement/3d-measurement/box-measurement/main.py
@@ -1,3 +1,5 @@
+from pathlib import Path
+
 import depthai as dai
 from depthai_nodes.node import ParsingNeuralNetwork
 from utils.box_processing_node import BoxProcessingNode
@@ -6,6 +8,8 @@


 _, args = initialize_argparser()
+EXAMPLE_DIR = Path(__file__).resolve().parent
+MODEL_DIR = EXAMPLE_DIR / "depthai_models"

 NN_WIDTH, NN_HEIGHT = 512, 320
 INPUT_SHAPE = (NN_WIDTH, NN_HEIGHT)
@@ -21,7 +25,7 @@
 platform = device.getPlatform()

 model_description = dai.NNModelDescription.fromYamlFile(
-    f"box_instance_segmentation.{platform.name}.yaml"
+    str(MODEL_DIR / f"box_instance_segmentation.{platform.name}.yaml")
 )
 nn_archive = dai.NNArchive(
     dai.getModelFromZoo(
@@ -76,7 +80,7 @@

 color_output.link(manip.inputImage)

-nn = p.create(ParsingNeuralNetwork).build(nn_source=nn_archive, input=manip.out)
+nn = p.create(ParsingNeuralNetwork).build(nnSource=nn_archive, input=manip.out)

 if platform == dai.Platform.RVC2:
     nn.setNNArchive(nn_archive, numShaves=7)
@@ -1,5 +1,5 @@
-depthai==3.0.0
-depthai-nodes==0.3.4
+depthai==3.4.0
+depthai-nodes @ git+https://github.com/luxonis/depthai-nodes.git@changes_for_oak_examples_update
 numpy>=1.22
 open3d~=0.18
 opencv-python-headless==4.10.0.84
@@ -1,10 +1,6 @@
 import depthai as dai
 import numpy as np
 import cv2
-from depthai_nodes.message.img_detections import (
-    ImgDetectionExtended,
-    ImgDetectionsExtended,
-)
 from .helper_functions import reverse_resize_and_pad
 import time

@@ -146,11 +142,11 @@ def _fit_cuboid(
         corners3d = np.asarray(outline.points)
         self._draw_cuboid_outline(corners3d)

-    def _draw_box_and_label(self, det: ImgDetectionExtended) -> None:
+    def _draw_box_and_label(self, det: dai.ImgDetection) -> None:
         """Draws rotated rect and label"""

         # All annotation coordinates are normalized to the NN input size (512×320)
-        rr = det._rotated_rect
+        rr = det.getBoundingBox()
         cx, cy = rr.center.x, rr.center.y
         w, h = rr.size.width, rr.size.height
         angle = rr.angle
@@ -173,18 +169,18 @@ def _draw_box_and_label(self, det: ImgDetectionExtended) -> None:

         if self.fit:
             label = (
-                f"Box ({det._confidence:.2f}) "
+                f"Box ({det.confidence:.2f}) "
                 f"{self.dimensions[0]:.1f} x {self.dimensions[1]:.1f} x {self.dimensions[2]:.1f} cm"
             )
         elif self.dimensions_cache is not None and (
             time.time() - self.last_successful_fit < self.cache_duration
         ):
             label = (
-                f"Box ({det._confidence:.2f}) "
+                f"Box ({det.confidence:.2f}) "
                 f"{self.dimensions_cache[0]:.1f} x {self.dimensions_cache[1]:.1f} x {self.dimensions_cache[2]:.1f} cm"
             )
         else:
-            label = f"{'Box'} {det._confidence:.2f}"
+            label = f"Box {det.confidence:.2f}"

         self.helper_det.draw_text(
             label,
@@ -195,7 +191,7 @@ def _draw_box_and_label(self, det: ImgDetectionExtended) -> None:
         )

     def _annotate_detection(
-        self, det: ImgDetectionExtended, idx: int, mask: np.ndarray, pcl, pcl_colors
+        self, det: dai.ImgDetection, idx: int, mask: np.ndarray, pcl, pcl_colors
     ):
         """Draw all annotations (mask, 3D box fit, bounding box + label) for a single detection."""
         self._draw_mask(mask, idx)
@@ -217,10 +213,10 @@ def run(self):

             assert isinstance(pcl_msg, dai.PointCloudData)
             assert isinstance(rgb_msg, dai.ImgFrame)
-            assert isinstance(det_msg, ImgDetectionsExtended)
+            assert isinstance(det_msg, dai.ImgDetections)
             inPointCloud: dai.PointCloudData = pcl_msg
             inRGB: dai.ImgFrame = rgb_msg
-            parser_output: ImgDetectionsExtended = det_msg
+            parser_output: dai.ImgDetections = det_msg

             try:
                 points, colors = inPointCloud.getPointsRGB()
@@ -230,7 +226,7 @@ def run(self):

             rgba_img = colors.reshape(IMG_HEIGHT, IMG_WIDTH, 4)
             bgr_img = cv2.cvtColor(rgba_img, cv2.COLOR_BGRA2BGR)
-            mask = parser_output._masks._mask
+            mask = parser_output.getCvSegmentationMask()
             detections = parser_output.detections
             mask_full = reverse_resize_and_pad(
                 mask, (IMG_WIDTH, IMG_HEIGHT), INPUT_SHAPE
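The helper `reverse_resize_and_pad` used above is defined elsewhere in the example and not shown in this diff. A hypothetical sketch of what such an inverse letterbox mapping typically does — dropping the padding added for the NN input and scaling the mask back to frame resolution — where the argument order and the aspect-ratio-preserving resize are assumptions:

```python
import numpy as np

def reverse_resize_and_pad(mask, out_size, nn_input_size):
    """Map a segmentation mask from the (letterboxed) NN input resolution
    back onto the full frame. Illustrative sketch, not the project's helper.

    mask          : (H_in, W_in) array at NN input resolution
    out_size      : (W_out, H_out) of the original frame
    nn_input_size : (W_in, H_in) the NN ran at
    """
    w_in, h_in = nn_input_size
    w_out, h_out = out_size
    # Scale used when the frame was resized (keeping aspect ratio) into the NN input
    scale = min(w_in / w_out, h_in / h_out)
    used_w = int(round(w_out * scale))
    used_h = int(round(h_out * scale))
    # Padding added on each side to reach the NN input size
    pad_x = (w_in - used_w) // 2
    pad_y = (h_in - used_h) // 2
    # Drop the padded border, then nearest-neighbour upsample to the frame size
    cropped = mask[pad_y:pad_y + used_h, pad_x:pad_x + used_w]
    ys = (np.arange(h_out) * used_h // h_out).clip(0, used_h - 1)
    xs = (np.arange(w_out) * used_w // w_out).clip(0, used_w - 1)
    return cropped[np.ix_(ys, xs)]
```

With the example's 512x320 NN input and a 1920x1080 frame, the mask's top and bottom 16 rows are padding and get discarded before upsampling.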
2 changes: 1 addition & 1 deletion depth-measurement/3d-measurement/rgbd-pointcloud/README.md
@@ -27,7 +27,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/)
@@ -1,2 +1,2 @@
-depthai==3.0.0
+depthai==3.4.0
 numpy>=1.22
2 changes: 1 addition & 1 deletion depth-measurement/3d-measurement/tof-pointcloud/README.md
@@ -16,7 +16,7 @@ Running this example requires a **Luxonis device** connected to your computer. R

 ### Installation

-You need to first prepare a **Python 3.10** environment (python versions 3.8 - 3.13 should work too) with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment (python versions 3.8 - 3.13 should work too) with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [Open3D](https://pypi.org/project/open3d/)
2 changes: 1 addition & 1 deletion depth-measurement/calc-spatial-on-host/README.md
@@ -29,7 +29,7 @@ Here is a list of all available parameters:

 ### Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
2 changes: 1 addition & 1 deletion depth-measurement/dynamic-calibration/README.md
@@ -87,7 +87,7 @@ Use these keys while the app is running (focus the browser visualizer window):

 ## Installation

-You need to first prepare a **Python 3.10** environment with the following packages installed:
+You need to first prepare a **Python >= 3.10** environment with the following packages installed:

 - [DepthAI](https://pypi.org/project/depthai/),
 - [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).