Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Script's `set -e` flag silently ignored due to invocation
  - Added explicit `set -e` in the script body so it takes effect even when invoked via `bash /opt/scripts/prepare-user.sh`, which ignores the shebang's `-e` flag.
Or push these changes by commenting:
@cursor push cfc4192c11
Preview (cfc4192c11)
diff --git a/.devcontainer/prepare-user.sh b/.devcontainer/prepare-user.sh
--- a/.devcontainer/prepare-user.sh
+++ b/.devcontainer/prepare-user.sh
@@ -1,4 +1,5 @@
#!/bin/sh -e
+set -e
# x86
if [ "$(uname -m)" = "x86_64" ]; then
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: No surface dimension validation before unsafe GPU write
- Added a dimension check in upload_to_surface() that returns an error if the source width or height exceeds the destination surface's dimensions, preventing out-of-bounds memory writes on both aarch64 and x86.
Or push these changes by commenting:
@cursor push a896b0de90
Preview (a896b0de90)
diff --git a/savant_deepstream/nvbufsurface/src/surface_ops.rs b/savant_deepstream/nvbufsurface/src/surface_ops.rs
--- a/savant_deepstream/nvbufsurface/src/surface_ops.rs
+++ b/savant_deepstream/nvbufsurface/src/surface_ops.rs
@@ -61,6 +61,13 @@
let surf_ptr = extract_surf(buf)?;
let params = &*(*surf_ptr).surfaceList;
+ if width > params.width || height > params.height {
+ return Err(NvBufSurfaceError::InvalidInput(format!(
+ "source dimensions {}x{} exceed surface dimensions {}x{}",
+ width, height, params.width, params.height
+ )));
+ }
+
let bpp = color_format_channels(params.colorFormat).ok_or_else(|| {
NvBufSurfaceError::InvalidInput(format!(
        "unsupported color format {} (multi-plane not supported)",
No surface dimension validation before unsafe GPU write
Medium Severity
The upload and upload_slot methods extract height and width from the numpy array shape and pass them directly to upload_to_surface without validating they match the surface's actual dimensions. If the array is larger than the surface, upload_to_surface writes beyond the allocated surface memory — on aarch64 this is a CPU-side buffer overflow via std::ptr::copy_nonoverlapping on the mapped surface, and on x86 it's an out-of-bounds cudaMemcpy2D. The unsafe boundary is hidden from the Python caller, making this a realistic misuse path.
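The guard in question is Rust, but the shape of the check is language-independent. A minimal Python sketch (function name hypothetical, mirroring the error message in the preview diff above):

```python
def validate_upload_dims(src_w: int, src_h: int,
                         surf_w: int, surf_h: int) -> None:
    """Reject source dimensions that exceed the destination surface.

    Hypothetical helper mirroring the guard Bugbot adds: without such a
    check, a too-large source array drives the copy loop past the end
    of the allocated surface memory.
    """
    if src_w > surf_w or src_h > surf_h:
        raise ValueError(
            f"source dimensions {src_w}x{src_h} exceed "
            f"surface dimensions {surf_w}x{surf_h}"
        )


validate_upload_dims(640, 480, 1920, 1080)   # fits: no error
try:
    validate_upload_dims(1920, 1080, 640, 480)
except ValueError as exc:
    print(exc)
```

The check must run before any pointer arithmetic, which is why the diff places it immediately after `extract_surf`.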
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Script's `-e` flag lost due to `bash` invocation
  - Added explicit `set -e` in the script body so the error-exit flag is active regardless of whether the script is invoked via `bash` (which ignores the shebang) or directly.
Or push these changes by commenting:
@cursor push ea87b50fb0
Preview (ea87b50fb0)
diff --git a/.devcontainer/prepare-user.sh b/.devcontainer/prepare-user.sh
--- a/.devcontainer/prepare-user.sh
+++ b/.devcontainer/prepare-user.sh
@@ -1,4 +1,5 @@
#!/bin/sh -e
+set -e
# x86
if [ "$(uname -m)" = "x86_64" ]; then

RUN userdel -r ubuntu && \
    groupadd -g ${REMOTE_GID} ${REMOTE_USER} && \
    useradd -m -u ${REMOTE_UID} -g ${REMOTE_GID} -s /bin/bash ${REMOTE_USER} && \

RUN --mount=type=bind,source=.devcontainer,target=/opt/scripts bash /opt/scripts/prepare-user.sh && \
Script's -e flag lost due to bash invocation
Medium Severity
The prepare-user.sh script has shebang #!/bin/sh -e to exit on any command failure, but the Dockerfile invokes it as bash /opt/scripts/prepare-user.sh, which ignores the shebang entirely. The -e flag is never applied, so intermediate command failures (e.g., groupadd) are silently swallowed. The old inline &&-chained commands guaranteed every step succeeded; this refactoring loses that guarantee.
… via bash

The Dockerfile invokes both prepare-user.sh and install-dev-env.sh as `bash /opt/scripts/<script>.sh`, which ignores the shebang line entirely. The `-e` flag in `#!/bin/sh -e` was never applied, so intermediate command failures (e.g. groupadd, useradd, curl) were silently swallowed. Move errexit into the script body with an explicit `set -e` and update the shebang to `#!/usr/bin/env bash` for consistency with the bash invocation in the Dockerfile.

Co-authored-by: Ivan Kud <bwsw@users.noreply.github.com>
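The shebang-ignored behavior is easy to verify outside Docker. A sketch (assumes a Linux host with /bin/sh and bash on PATH; the script content is illustrative):

```python
import os
import stat
import subprocess
import tempfile

# A script whose shebang requests errexit, but whose body does not.
script = "#!/bin/sh -e\nfalse\necho survived\n"

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(script)
    path = f.name
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

# Direct execution: the kernel honors the shebang, so -e applies and
# `false` aborts the script before the echo runs.
direct = subprocess.run([path], capture_output=True, text=True)

# `bash <path>`: bash reads the file itself and ignores the shebang
# line entirely, so -e is never set and the script "succeeds".
via_bash = subprocess.run(["bash", path], capture_output=True, text=True)

print("direct:", direct.returncode, "via bash:", via_bash.returncode)
os.unlink(path)
```

This is exactly the failure mode the review describes: the `bash /opt/scripts/prepare-user.sh` invocation ran every command to completion regardless of intermediate failures.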
…nt-rs into jetson-integration
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Architecture check is a fragile proxy for user existence
- Replaced the architecture check (uname -m == x86_64) with a direct user existence check (id ubuntu) so the userdel runs on any architecture where the ubuntu user exists.
Or push these changes by commenting:
@cursor push ba65293620
Preview (ba65293620)
diff --git a/.devcontainer/prepare-user.sh b/.devcontainer/prepare-user.sh
--- a/.devcontainer/prepare-user.sh
+++ b/.devcontainer/prepare-user.sh
@@ -1,8 +1,7 @@
#!/usr/bin/env bash
set -e
-# x86
-if [ "$(uname -m)" = "x86_64" ]; then
+if id ubuntu &>/dev/null; then
userdel -r ubuntu
fi

The userdel -r ubuntu was guarded by an architecture check (uname -m == x86_64) instead of checking whether the ubuntu user actually exists. On aarch64 base images that ship an ubuntu user (UID/GID 1000), the script skipped the deletion, causing the subsequent groupadd/useradd to fail due to UID/GID conflicts. Replace the architecture check with `id ubuntu` to properly detect whether the user exists regardless of platform.

Co-authored-by: Ivan Kud <bwsw@users.noreply.github.com>
Instead of checking for the 'ubuntu' user by name, look up any user owning REMOTE_UID and any group owning REMOTE_GID via getent, and remove them before creating the new user. This handles any base image regardless of what the conflicting user/group is named.

Co-authored-by: Ivan Kud <bwsw@users.noreply.github.com>
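In Python terms the getent-style lookup is the `pwd`/`grp` API. A sketch of the UID/GID-first check (function names are hypothetical; only the lookup is shown, not the removal, which needs root):

```python
import grp
import pwd


def owner_of_uid(uid: int):
    """Return the login name owning `uid`, or None if the UID is free
    (the Python equivalent of `getent passwd <uid>`)."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return None


def owner_of_gid(gid: int):
    """Return the group name owning `gid`, or None if the GID is free
    (the Python equivalent of `getent group <gid>`)."""
    try:
        return grp.getgrgid(gid).gr_name
    except KeyError:
        return None


# UID 0 is owned by root on a standard Linux system; a free UID
# returns None, meaning no userdel is needed before useradd.
print(owner_of_uid(0), owner_of_uid(987654321))
```

The point of the commit above is exactly this inversion: ask "who owns REMOTE_UID/REMOTE_GID?" rather than "does a user named ubuntu exist?".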
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Upload ignores numpy channels causing silent data corruption
  - Added a `channels` parameter to `upload_to_surface` that is validated against the surface's expected bytes-per-pixel, and updated `upload`/`upload_slot` to pass `shape[2]` from the numpy array, preventing channel-mismatch corruption.
Or push these changes by commenting:
@cursor push 5f9a242257
Preview (5f9a242257)
diff --git a/savant_core/savant_core_py/src/deepstream.rs b/savant_core/savant_core_py/src/deepstream.rs
--- a/savant_core/savant_core_py/src/deepstream.rs
+++ b/savant_core/savant_core_py/src/deepstream.rs
@@ -723,13 +723,14 @@
let shape = arr.shape();
let height = shape[0] as u32;
let width = shape[1] as u32;
+ let channels = shape[2] as u32;
let slice = data.as_slice().map_err(|e| {
pyo3::exceptions::PyValueError::new_err(format!(
"array must be contiguous in memory: {e}"
))
})?;
py.detach(|| unsafe {
- deepstream_nvbufsurface::upload_to_surface(buf, slice, width, height)
+ deepstream_nvbufsurface::upload_to_surface(buf, slice, width, height, channels)
.map_err(|e| pyo3::exceptions::PyRuntimeError::new_err(e.to_string()))
})
}
@@ -1639,13 +1640,14 @@
let shape = arr.shape();
let height = shape[0] as u32;
let width = shape[1] as u32;
+ let channels = shape[2] as u32;
let slice = data.as_slice().map_err(|e| {
pyo3::exceptions::PyValueError::new_err(format!(
"array must be contiguous in memory: {e}"
))
})?;
py.detach(|| unsafe {
- deepstream_nvbufsurface::upload_to_surface(&slot_buf, slice, width, height)
+ deepstream_nvbufsurface::upload_to_surface(&slot_buf, slice, width, height, channels)
.map_err(|e| pyo3::exceptions::PyRuntimeError::new_err(e.to_string()))
})
}
diff --git a/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/api.md b/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/api.md
--- a/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/api.md
+++ b/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/api.md
@@ -14,7 +14,7 @@
| `destroy_cuda_stream` | `unsafe (stream: *mut c_void) → Result<(), NvBufSurfaceError>` | Null is no-op |
| `bridge_savant_id_meta` | `(element: &gst::Element)` | PTS-keyed meta bridge for encoders |
| `memset_surface` | `unsafe (buf: &gst::Buffer, value: u8) → Result<(), NvBufSurfaceError>` | Fill the first surface in buf with a constant byte value. Platform-aware: uses CUDA driver API on dGPU, NvBufSurfaceMap on Jetson. |
-| `upload_to_surface` | `unsafe (buf: &gst::Buffer, data: &[u8], width: u32, height: u32) → Result<(), NvBufSurfaceError>` | Upload CPU pixel data to the first surface in buf. Row-by-row copy respecting GPU pitch. Platform-aware. |
+| `upload_to_surface` | `unsafe (buf: &gst::Buffer, data: &[u8], width: u32, height: u32, channels: u32) → Result<(), NvBufSurfaceError>` | Upload CPU pixel data to the first surface in buf. Validates that `channels` matches the surface color format. Row-by-row copy respecting GPU pitch. Platform-aware. |
### Enums
| Enum | Variants |
diff --git a/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/patterns.md b/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/patterns.md
--- a/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/patterns.md
+++ b/savant_deepstream/nvbufsurface/assets/nvbufsurface_kb/patterns.md
@@ -63,7 +63,7 @@
// Upload RGBA pixel data
let pixels: Vec<u8> = vec![0xFF; 640 * 480 * 4]; // white RGBA
-unsafe { deepstream_nvbufsurface::upload_to_surface(&buf, &pixels, 640, 480).unwrap(); }
unsafe { deepstream_nvbufsurface::upload_to_surface(&buf, &pixels, 640, 480, 4).unwrap(); }

diff --git a/savant_deepstream/nvbufsurface/src/surface_ops.rs b/savant_deepstream/nvbufsurface/src/surface_ops.rs
--- a/savant_deepstream/nvbufsurface/src/surface_ops.rs
+++ b/savant_deepstream/nvbufsurface/src/surface_ops.rs
@@ -44,8 +44,10 @@
/// Upload CPU pixel data to the first surface in buf.
///
/// data is a tightly-packed row-major pixel buffer of dimensions
-/// width × height in the surface's color format (e.g. 4 bytes/pixel for
-/// RGBA). Row-by-row copies respect the destination's GPU pitch.
+/// width × height × channels in the surface's color format (e.g. 4
+/// bytes/pixel for RGBA). channels must match the surface's
+/// bytes-per-pixel; a mismatch is rejected with an error. Row-by-row
+/// copies respect the destination's GPU pitch.
///
/// # Safety
///
@@ -57,6 +59,7 @@
data: &[u8],
width: u32,
height: u32,
+    channels: u32,
) -> Result<(), NvBufSurfaceError> {
let surf_ptr = extract_surf(buf)?;
let params = &*(*surf_ptr).surfaceList;
@@ -76,6 +79,12 @@
params.colorFormat
))
})?;
+    if channels != bpp {
+        return Err(NvBufSurfaceError::InvalidInput(format!(
+            "array has {} channels but surface color format expects {}",
+            channels, bpp
+        )));
+    }
let src_stride = width as usize * bpp as usize;
let required = src_stride * height as usize;
if data.len() < required {
diff --git a/savant_deepstream/nvinfer/tests/test_age_gender.rs b/savant_deepstream/nvinfer/tests/test_age_gender.rs
--- a/savant_deepstream/nvinfer/tests/test_age_gender.rs
+++ b/savant_deepstream/nvinfer/tests/test_age_gender.rs
@@ -285,7 +285,7 @@
let src_buf = src_gen.acquire_surface(Some(0)).unwrap();
unsafe {
-    deepstream_nvbufsurface::upload_to_surface(&src_buf, &canvas, FRAME_W, FRAME_H)
+    deepstream_nvbufsurface::upload_to_surface(&src_buf, &canvas, FRAME_W, FRAME_H, 4)
        .expect("upload_to_surface");
}
@@ -430,7 +430,7 @@
let src_buf = src_gen.acquire_surface(Some(0)).unwrap();
unsafe {
-    deepstream_nvbufsurface::upload_to_surface(&src_buf, &canvas, FRAME_W, FRAME_H)
+    deepstream_nvbufsurface::upload_to_surface(&src_buf, &canvas, FRAME_W, FRAME_H, 4)
        .expect("upload_to_surface");
}
…a non-uniform batched buffer used in nvinfer and trackers
…slot (#288)

The upload and upload_slot methods extracted height and width from the numpy array shape but ignored shape[2] (channels). When the array's channel count exceeded the surface's bytes-per-pixel, the size check in upload_to_surface passed but each row beyond the first read from the wrong offset, producing silently corrupted pixel data.

Fix: add a channels parameter to upload_to_surface that is validated against the surface's color_format_channels(). Both Python-facing methods now extract shape[2] and pass it through. A mismatch raises NvBufSurfaceError::InvalidInput with a clear diagnostic message.

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
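The corruption mechanism is pure stride arithmetic and can be reproduced without a GPU. A sketch with a tightly packed 4×3 RGB buffer read back with the wrong bytes-per-pixel:

```python
WIDTH, HEIGHT = 4, 3

# Tightly packed RGB frame: 3 bytes per pixel, so each row is
# WIDTH * 3 = 12 bytes. Pixel (r, c) holds the value r * 16 + c.
rgb = bytes(
    r * 16 + c
    for r in range(HEIGHT)
    for c in range(WIDTH)
    for _ in range(3)
)


def read_row(data: bytes, row: int, bpp: int) -> bytes:
    """Read one source row assuming `bpp` bytes per pixel (the stride
    the row-by-row copy loop would use)."""
    stride = WIDTH * bpp
    return data[row * stride : row * stride + stride]


# Correct stride (3 bpp): row 1 starts at pixel (1, 0), value 16.
ok = read_row(rgb, 1, 3)
# Wrong stride (4 bpp, as when the surface format's channel count is
# assumed instead of the array's): row 1 starts 4 bytes too deep, so
# the copy silently reads the wrong pixels from row 1 onward.
bad = read_row(rgb, 1, 4)

print(ok[0], bad[0])
```

This matches the description above: row 0 happens to copy correctly either way, so the damage shows up only from the second row on, which is why it went unnoticed.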
Cursor Bugbot has reviewed your changes and found 2 potential issues.
There are 4 total unresolved issues (including 2 from previous reviews).
Autofix Details
Bugbot Autofix prepared fixes for both issues found in the latest run.
- ✅ Fixed: Missing Python API functions after module refactoring
- Re-added release_buffer, set_buffer_pts, set_buffer_duration, bridge_savant_id_meta to functions.rs, push_to_appsrc to generators.rs, and registered all five in mod.rs::register_classes.
- ✅ Fixed: Default codec changed from h265 to jpeg silently
- Reverted resolution to 1920x1080, changed --codec default to None, and added runtime resolution using has_nvenc() after CUDA init to pick h265 on NVENC-capable GPUs or jpeg otherwise.
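The `default=None` sentinel pattern this fix relies on can be sketched independently of the pipeline (the `--codec` choices are copied from the diff; `resolve_default_codec` is a hypothetical stand-in for the in-pipeline logic that consults `has_nvenc()`):

```python
import argparse

parser = argparse.ArgumentParser()
# default=None lets the code distinguish "user chose nothing" from an
# explicit choice, so the real default can be decided at runtime based
# on the GPU's capabilities.
parser.add_argument(
    "--codec",
    default=None,
    choices=["h264", "h265", "hevc", "jpeg", "av1"],
)


def resolve_default_codec(requested, nvenc_available: bool) -> str:
    """Pick a codec only when the user did not: h265 on NVENC-capable
    GPUs, jpeg otherwise (mirrors the autofix)."""
    if requested is not None:
        return requested
    return "h265" if nvenc_available else "jpeg"


args = parser.parse_args([])  # no --codec given on the command line
print(resolve_default_codec(args.codec, nvenc_available=False))  # jpeg
print(resolve_default_codec("h264", nvenc_available=False))      # h264
```

Resolving after CUDA init matters because NVENC detection needs an initialized device; resolving at argparse time would have to guess.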
Or push these changes by commenting:
@cursor push bd0f63a727
Preview (bd0f63a727)
diff --git a/python/nvbufsurface/common.py b/python/nvbufsurface/common.py
--- a/python/nvbufsurface/common.py
+++ b/python/nvbufsurface/common.py
@@ -33,6 +33,7 @@
TransformConfig,
VideoFormat,
gpu_mem_used_mib,
+ has_nvenc,
init_cuda,
)
from savant_rs.gstreamer import Codec, Mp4Muxer # noqa: E402
@@ -184,16 +185,16 @@
def add_common_args(parser: argparse.ArgumentParser) -> None:
"""Register the CLI arguments shared by all pipeline examples."""
- parser.add_argument("--width", type=int, default=1280, help="Frame width")
- parser.add_argument("--height", type=int, default=720, help="Frame height")
+ parser.add_argument("--width", type=int, default=1920, help="Frame width")
+ parser.add_argument("--height", type=int, default=1080, help="Frame height")
parser.add_argument("--fps", type=int, default=30, help="Framerate numerator")
parser.add_argument("--gpu-id", type=int, default=0, help="GPU device ID")
parser.add_argument(
"--codec",
type=str,
- default="jpeg",
+ default=None,
choices=["h264", "h265", "hevc", "jpeg", "av1"],
- help="Video codec (default: jpeg; h264/h265/av1 require NVENC)",
+ help="Video codec (default: h265 when NVENC is available, jpeg otherwise)",
)
parser.add_argument(
"--bitrate",
@@ -288,6 +289,8 @@
# -- GStreamer + CUDA init (idempotent) --------------------------------
init_gst_and_cuda(args.gpu_id)
+ if args.codec is None:
+ args.codec = "h265" if has_nvenc(args.gpu_id) else "jpeg"
self._codec = resolve_codec(args.codec)
self._fps = args.fps
self._width = args.width
diff --git a/savant_core/savant_core_py/src/deepstream/functions.rs b/savant_core/savant_core_py/src/deepstream/functions.rs
--- a/savant_core/savant_core_py/src/deepstream/functions.rs
+++ b/savant_core/savant_core_py/src/deepstream/functions.rs
@@ -1,7 +1,8 @@
//! Standalone `#[pyfunction]` items for the `savant_rs.deepstream` module.
use super::buffer::{extract_buf_ptr, with_mut_buffer_ref};
-use deepstream_nvbufsurface::{cuda_init, set_num_filled, transform};
+use deepstream_nvbufsurface::{bridge_savant_id_meta, cuda_init, set_num_filled, transform};
+use glib::translate::from_glib_none;
use gstreamer as gst;
use pyo3::prelude::*;
use savant_gstreamer::id_meta::{SavantIdMeta, SavantIdMetaKind};
@@ -152,3 +153,77 @@
))
}
}
+
+/// Install pad probes on an element to propagate ``SavantIdMeta``.
+///
+/// Args:
+/// element_ptr (int): Raw pointer address of the GstElement.
+#[pyfunction]
+#[pyo3(name = "bridge_savant_id_meta")]
+pub fn py_bridge_savant_id_meta(element_ptr: usize) -> PyResult<()> {
+ if element_ptr == 0 {
+ return Err(pyo3::exceptions::PyValueError::new_err(
+ "element_ptr is null",
+ ));
+ }
+ let _ = gst::init();
+ unsafe {
+ let elem: gst::Element = from_glib_none(element_ptr as *mut gst::ffi::GstElement);
+ bridge_savant_id_meta(&elem);
+ }
+ Ok(())
+}
+
+/// Set the PTS (presentation timestamp) on a ``GstBuffer``.
+///
+/// Args:
+/// buf (GstBuffer | int): Buffer to modify.
+/// pts_ns (int): PTS in nanoseconds.
+#[pyfunction]
+#[pyo3(name = "set_buffer_pts")]
+pub fn py_set_buffer_pts(buf: &Bound<'_, PyAny>, pts_ns: u64) -> PyResult<()> {
+ with_mut_buffer_ref(buf, |buf_ref| {
+ buf_ref.set_pts(gst::ClockTime::from_nseconds(pts_ns));
+ Ok(())
+ })
+}
+
+/// Set the duration on a ``GstBuffer``.
+///
+/// Args:
+/// buf (GstBuffer | int): Buffer to modify.
+/// duration_ns (int): Duration in nanoseconds.
+#[pyfunction]
+#[pyo3(name = "set_buffer_duration")]
+pub fn py_set_buffer_duration(buf: &Bound<'_, PyAny>, duration_ns: u64) -> PyResult<()> {
+ with_mut_buffer_ref(buf, |buf_ref| {
+ buf_ref.set_duration(gst::ClockTime::from_nseconds(duration_ns));
+ Ok(())
+ })
+}
+
+/// Release (unref) a raw ``GstBuffer*`` pointer.
+///
+/// Call this to free a buffer obtained from ``acquire_surface``,
+/// ``acquire_surface_with_params``, ``acquire_surface_with_ptr``,
+/// ``transform``, ``transform_with_ptr``, or ``finalize`` when the
+/// buffer is no longer needed and is not being passed into a GStreamer
+/// pipeline.
+///
+/// Args:
+/// buf_ptr (int): Raw ``GstBuffer*`` pointer to release.
+///
+/// Raises:
+/// ValueError: If ``buf_ptr`` is 0 (null).
+#[pyfunction]
+#[pyo3(name = "release_buffer")]
+pub fn py_release_buffer(buf_ptr: usize) -> PyResult<()> {
+ if buf_ptr == 0 {
+ return Err(pyo3::exceptions::PyValueError::new_err("buf_ptr is null"));
+ }
+ let _ = gst::init();
+ unsafe {
+ gst::ffi::gst_mini_object_unref(buf_ptr as *mut gst::ffi::GstMiniObject);
+ }
+ Ok(())
+}
diff --git a/savant_core/savant_core_py/src/deepstream/generators.rs b/savant_core/savant_core_py/src/deepstream/generators.rs
--- a/savant_core/savant_core_py/src/deepstream/generators.rs
+++ b/savant_core/savant_core_py/src/deepstream/generators.rs
@@ -213,6 +213,23 @@
Ok((PyDsNvBufSurfaceGstBuffer::new(dst_buf), data_ptr, pitch))
}
+ /// Push a new NVMM buffer to an AppSrc element.
+ #[pyo3(signature = (appsrc_ptr, pts_ns, duration_ns, id=None))]
+ fn push_to_appsrc(
+ &self,
+ py: Python<'_>,
+ appsrc_ptr: usize,
+ pts_ns: u64,
+ duration_ns: u64,
+ id: Option<i64>,
+ ) -> PyResult<()> {
+ py.detach(|| unsafe {
+ self.inner
+ .push_to_appsrc_raw(appsrc_ptr, pts_ns, duration_ns, id)
+ .map_err(|e| pyo3::exceptions::PyRuntimeError::new_err(e.to_string()))
+ })
+ }
+
/// Send an end-of-stream signal to an AppSrc element.
#[staticmethod]
fn send_eos(appsrc_ptr: usize) -> PyResult<()> {
diff --git a/savant_core/savant_core_py/src/deepstream/mod.rs b/savant_core/savant_core_py/src/deepstream/mod.rs
--- a/savant_core/savant_core_py/src/deepstream/mod.rs
+++ b/savant_core/savant_core_py/src/deepstream/mod.rs
@@ -52,6 +52,16 @@
functions::py_get_nvbufsurface_info,
m
)?)?;
+ m.add_function(pyo3::wrap_pyfunction!(
+ functions::py_bridge_savant_id_meta,
+ m
+ )?)?;
+ m.add_function(pyo3::wrap_pyfunction!(functions::py_set_buffer_pts, m)?)?;
+ m.add_function(pyo3::wrap_pyfunction!(
+ functions::py_set_buffer_duration,
+ m
+ )?)?;
+ m.add_function(pyo3::wrap_pyfunction!(functions::py_release_buffer, m)?)?;
m.add_class::<skia::PySkiaContext>()?;
Ok(())
}
Cursor Bugbot has reviewed your changes and found 2 potential issues.
There are 6 total unresolved issues (including 4 from previous reviews).
Bugbot Autofix prepared fixes for both issues found in the latest run.
- ✅ Fixed: Python API name mismatch after Rust class rename
  - Updated common.py to import `BufferGenerator` instead of `DsNvSurfaceBufferGenerator`, use it in the type annotation and constructor, and call `.acquire()` instead of `.acquire_surface()` to match the renamed Rust pyclass and method.
- ✅ Fixed: BatchInferenceOutput.buffer() always fails
  - Replaced the failing `into_buffer()` call (which requires Arc strong count = 1) with `shared.lock().clone()` to clone the gst::Buffer from inside the SharedBuffer's mutex, avoiding the Arc unwrap issue entirely.
Or push these changes by commenting:
@cursor push 70e61dc393
Preview (70e61dc393)
diff --git a/python/nvbufsurface/common.py b/python/nvbufsurface/common.py
--- a/python/nvbufsurface/common.py
+++ b/python/nvbufsurface/common.py
@@ -27,8 +27,8 @@
import savant_rs # noqa: E402
from savant_rs.deepstream import ( # noqa: E402
+ BufferGenerator,
DsNvBufSurfaceGstBuffer,
- DsNvSurfaceBufferGenerator,
SurfaceView,
TransformConfig,
VideoFormat,
@@ -319,11 +319,11 @@
print(f"Encoder properties: {enc_props}")
# -- NvBufSurface generators (one per source) --------------------------
- self._generators: list[DsNvSurfaceBufferGenerator] = []
+ self._generators: list[BufferGenerator] = []
if use_generator:
for _ in range(self.jobs):
self._generators.append(
- DsNvSurfaceBufferGenerator(
+ BufferGenerator(
video_format,
self._width,
self._height,
@@ -433,9 +433,9 @@
"""
if not self._generators:
raise RuntimeError(
- "DsNvSurfaceBufferGenerator was not created (use_generator=False)"
+ "BufferGenerator was not created (use_generator=False)"
)
- return self._generators[source_idx].acquire_surface(id=frame_id)
+ return self._generators[source_idx].acquire(id=frame_id)
def acquire_surface_view(
self, *, source_idx: int = 0, frame_id: int
diff --git a/savant_core/savant_core_py/src/nvinfer/output.rs b/savant_core/savant_core_py/src/nvinfer/output.rs
--- a/savant_core/savant_core_py/src/nvinfer/output.rs
+++ b/savant_core/savant_core_py/src/nvinfer/output.rs
@@ -245,11 +245,7 @@
pyo3::exceptions::PyRuntimeError::new_err("BatchInferenceOutput has been released")
})?;
let shared = output.buffer();
- let buf = shared.into_buffer().map_err(|_| {
- pyo3::exceptions::PyRuntimeError::new_err(
- "Cannot extract buffer: outstanding references",
- )
- })?;
+ let buf = shared.lock().clone();
Ok(crate::deepstream::PyDsNvBufSurfaceGstBuffer::new(buf))
}


Note
High Risk
High risk because it replaces the DeepStream Python binding layer (including buffer ownership semantics) and changes public Python APIs for `deepstream`, `nvinfer`, and `picasso`, which can easily break downstream integrations and GPU pipeline behavior.

Overview
This PR reworks Jetson/deepstream integration by migrating Python DeepStream bindings from `deepstream_nvbufsurface` to the new `deepstream_buffers` crate, deleting the old monolithic `deepstream.rs` and introducing a modular `deepstream/` implementation with a new `SharedBuffer` wrapper, updated generators/batching APIs, and added utility functions like `jetson_model`, `is_jetson_kernel`, and `has_nvenc`.

The NvBufSurface example pipelines are updated to the new APIs and expanded to support multi-source benchmarking via `--jobs`, with new defaults (720p, `jpeg`) and a release-build guard (`savant_rs.is_release_build()` + `--allow-debug-build`). `nvinfer` switches to consuming `SharedBuffer` and drops the explicit `batch_id` parameter, while `BatchInferenceOutput` now exposes the output buffer. `picasso` updates GPU-mat callback plumbing to pass a `SurfaceView` and adds an `on_stream_reset` callback + `StreamResetReason`.

Separately, dev/build tooling is adjusted for Jetson: aarch64 rustflags enable `+fp16`, tests are forced single-threaded, devcontainer setup now uses a `prepare-user.sh` script (plus a new L4T devcontainer config), the Python dev env installs `pillow`, `/opt/venv` becomes the default venv path, and the wheels Dockerfile creates a Jetson `libcuda.so` stub to satisfy linking during builds.

Written by Cursor Bugbot for commit bc69d8e. This will update automatically on new commits.