Memory Leak in nvblox_torch mapper. #90

@LDenninger

Description

I am using the nvblox_torch mapper for TSDF integration.
I continuously integrate depth and color frames with the mapper's integration functions, then clear the mapper to restart the integration for another map.

When doing this continuously and monitoring the GPU memory allocation, I noticed that every integrate/clear iteration adds a small amount of allocated memory (~1-10 MB) that is never freed. After a couple hundred iterations this leads to an out-of-memory error. I assume there are some update queues that are not properly cleared.
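
For reference, this is roughly how the per-iteration growth can be observed (a sketch; I query total device memory via pynvml here, since the mapper's allocations are presumably made outside PyTorch's caching allocator and would not show up in torch.cuda.memory_allocated()):

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU used by the mapper

def used_mb():
    # Total device memory currently in use, including allocations made outside PyTorch
    return pynvml.nvmlDeviceGetMemoryInfo(handle).used / 1024**2

baseline = used_mb()
# ... run one integrate/clear iteration of the loop below ...
print(f"growth this iteration: {used_mb() - baseline:.1f} MB")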

Can you indicate whether I am doing something wrong?
Is this an internal bug that can be fixed? If so, can you point me to the relevant location in the code?

I appreciate any help; let me know if you need more information.

Best,
Luis Denninger

Minimal Example:

from nvblox_torch.mapper import Mapper

mapper = Mapper(0.01)  # 1 cm voxel size
dataset = CustomDataset()  # custom dataset I cannot share
N = 1000
for i in range(N):
    data = dataset[i]
    depth = data['depth']          # depth frames for this sequence
    color = data['color']          # corresponding color frames
    intrinsic = data['intrinsic']  # per-frame camera intrinsics
    extrinsic = data['extrinsic']  # per-frame camera poses

    # Integrate every frame of this sequence into the current map
    for j in range(depth.shape[0]):
        mapper.add_depth_frame(depth[j], extrinsic[j], intrinsic[j])
        mapper.add_color_frame(color[j], extrinsic[j], intrinsic[j])

    # Reset the map before starting the next one; memory usage keeps growing regardless
    mapper.clear()
    # del mapper
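
For completeness, the variant hinted at by the commented-out del mapper above, i.e. recreating the mapper for every map instead of calling clear() (a sketch; I do not know whether dropping the mapper is expected to release its device memory):

from nvblox_torch.mapper import Mapper

for i in range(N):
    mapper = Mapper(0.01)  # fresh mapper for every map instead of clear()
    data = dataset[i]
    for j in range(data['depth'].shape[0]):
        mapper.add_depth_frame(data['depth'][j], data['extrinsic'][j], data['intrinsic'][j])
        mapper.add_color_frame(data['color'][j], data['extrinsic'][j], data['intrinsic'][j])
    # Drop the mapper entirely; if its buffers are tied to the instance,
    # this should hand the memory back to the driver
    del mapper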

Setup
Ubuntu: 22.04
nvblox==0.0.8, torch==2.4.0, pytorch-cuda==12.1
