overhead when applied to semantic segmentation #7

@baibaidj

Description

Hi. It's exciting to see this great work.
I was exploring the possibility of applying MCR to segmentation tasks.
One way is to treat every pixel as a sample and flatten all pixels into the batch dimension to compute the two losses, discriminative and compressive.
However, the coding rate operation is O(n^2), where n is the number of samples in a mini-batch. Since the pixels in an image can number from thousands to millions (for 2D to 3D images), this operation may exceed the memory of a typical consumer GPU.
I was wondering whether you have studied this direction and what suggestions you have.
Many thanks.
