IJCV 2023. [[paper]](https://arxiv.org/abs/2306.06289) *(we are refactoring the code for release ...)*
This repository contains the official PyTorch implementation of the training & evaluation code and the pretrained models for SegViT and its extended version, SegViT v2.
As shown in the following figure, the similarity between the class query and the image features is transferred to the segmentation mask.
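The idea can be sketched in a few lines: each learnable class query is compared against every spatial feature token, and the resulting similarity map (a sigmoid over the attention scores, so each class gets an independent mask) is reshaped into a per-class mask. This is a minimal illustration of the mechanism, not the repository's API; the function name and shapes are assumptions.

```python
import torch

def atm_masks(class_queries, image_features, h, w):
    """Turn query/feature similarity into per-class masks (illustrative sketch).

    class_queries:  (num_classes, dim) learnable class embeddings
    image_features: (h * w, dim) flattened ViT feature tokens
    """
    dim = class_queries.shape[-1]
    # scaled dot-product similarity, as in attention
    sim = class_queries @ image_features.t() / dim ** 0.5   # (num_classes, h*w)
    # sigmoid (not softmax over classes): each class gets an independent mask
    return sim.sigmoid().view(-1, h, w)                     # (num_classes, h, w)

masks = atm_masks(torch.randn(150, 256), torch.randn(32 * 32, 256), 32, 32)
print(masks.shape)  # torch.Size([150, 32, 32])
```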
<img src="./resources/teaser-01.png">
<img src="resources/atm_arch-1.png">
## Highlights
* **Simple Decoder:** The Attention-to-Mask (ATM) decoder provides a simple segmentation head for plain Vision Transformers that is easy to extend to other downstream tasks.
* **Light Structure:** We propose a *Shrunk* structure that saves up to **40%** of the computational cost with a ViT backbone.
* **Stronger Performance:** We achieve state-of-the-art performance of **55.2%** mIoU on ADE20K, **50.3%** mIoU on COCOStuff10K, and **65.3%** mIoU on PASCAL-Context, with the lowest computational cost among counterparts using a ViT backbone.
* **Scalability:** With more powerful backbones (BEiT-V2), SegViT v2 obtains state-of-the-art performance of **58.2%** mIoU (MS) on ADE20K, **53.5%** mIoU (MS) on COCOStuff10K, and **67.14%** mIoU (MS) on PASCAL-Context, showcasing strong scalability.
* **Continual Learning:** We adapt SegViT v2 to continual semantic segmentation, demonstrating nearly zero forgetting of previously learned knowledge.
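Where the *Shrunk* saving comes from can be illustrated with a toy token-reduction step. Self-attention cost is quadratic in the number of tokens, so spatially downsampling the token grid partway through the backbone sharply reduces the cost of the remaining layers. The strided subsampling below is only a stand-in for the actual query-based down/up-sampling; the function name, stride, and the 2× factor are illustrative assumptions, not the repository's implementation.

```python
import torch

def shrink_tokens(tokens, h, w, stride=2):
    """Spatially downsample ViT tokens to cut attention cost (toy stand-in).

    tokens: (h * w, dim) flattened feature tokens on an h x w grid.
    With stride 2, the token count drops 4x, so the quadratic
    attention cost of subsequent layers drops roughly 16x.
    """
    dim = tokens.shape[-1]
    grid = tokens.view(h, w, dim)
    shrunk = grid[::stride, ::stride, :]            # keep every stride-th row/col
    return shrunk.reshape(-1, dim), h // stride, w // stride

tokens, nh, nw = shrink_tokens(torch.randn(32 * 32, 256), 32, 32)
print(tokens.shape, nh, nw)  # torch.Size([256, 256]) 16 16
```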