
pasqualedem/TakeAPeek


Prepare the Environment

You need the uv package manager. Create and activate the virtual environment with the following commands:

uv sync
source .venv/bin/activate

Prepare the Datasets

Enter the data directory, create and enter a coco directory, and download the COCO 2017 train and val images and the COCO 2014 annotations from the COCO website:

cd data
mkdir coco
cd coco
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2014.zip

Unzip the files:

unzip train2017.zip
unzip val2017.zip
unzip annotations_trainval2014.zip
rm -rf train2017.zip val2017.zip annotations_trainval2014.zip

The coco directory should now contain the following files and directories:

coco
├── annotations
│   ├── captions_train2014.json
│   ├── captions_val2014.json
│   ├── instances_train2014.json
│   ├── instances_val2014.json
│   ├── person_keypoints_train2014.json
│   └── person_keypoints_val2014.json
├── train2017
└── val2017
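Before merging the splits, a quick sanity check of the layout can save a re-download later. This check is an addition, not part of the original instructions, and assumes you are still inside data/coco:

```shell
# Report anything missing from the expected COCO layout.
for p in annotations/instances_train2014.json \
         annotations/instances_val2014.json \
         train2017 val2017; do
  [ -e "$p" ] || echo "missing: $p"
done
```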

Now, join the images of the train and val sets into a single directory:

mv val2017/* train2017
mv train2017 train_val_2017
rm -rf val2017
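Optionally, verify the merge: the COCO 2017 release ships 118287 train and 5000 val images, so the merged directory should hold 123287 files (counts are from the official COCO release; adjust if you downloaded a different subset):

```shell
# Expected: 118287 train + 5000 val = 123287 images.
ls train_val_2017 | wc -l
```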

Finally, rename the image filenames in the COCO 2014 annotation files to match the filenames in the train_val_2017 directory. To do this, run the following script:

python preprocess.py rename_coco20i_json --instances_path data/coco/annotations/instances_train2014.json
python preprocess.py rename_coco20i_json --instances_path data/coco/annotations/instances_val2014.json
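For context: COCO 2014 annotations reference images by names like COCO_train2014_000000000009.jpg, while the 2017 image files are named 000000000009.jpg, so the file_name fields need the split prefix stripped. The snippet below is only an illustration of that mapping, not the script's actual code:

```shell
# Hypothetical illustration of the filename mapping only.
f="COCO_train2014_000000000009.jpg"
echo "${f#COCO_*2014_}"   # prints 000000000009.jpg
```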

Setting up the PASCAL VOC 2012 Dataset with augmented data

1. Instructions to download

bash tap/data/script/setup_voc12.sh data/pascal

The script produces the following layout:

data/
└── pascal/
    ├── Annotations
    ├── ImageSets/
    │   └── Segmentation
    ├── JPEGImages
    ├── SegmentationObject
    └── SegmentationClass

2. Add SBD augmented training data

  • Convert it yourself (here).
  • Or download pre-converted files (here); this method is preferred.

After downloading, unzip the archive into the pascal folder:

unzip SegmentationClassAug.zip -d data/pascal

The layout becomes:

data/
└── pascal/
    ├── Annotations
    ├── ImageSets/
    │   └── Segmentation
    ├── JPEGImages
    ├── SegmentationObject
    ├── SegmentationClass
    └── SegmentationClassAug # ADDED

3. Download the official split lists into ImageSets/Segmentation

From: https://github.com/kazuto1011/deeplab-pytorch/files/2945588/list.zip

# Unzip the file
unzip list.zip -d data/pascal/ImageSets/
# Move file into Segmentation folder
mv data/pascal/ImageSets/list/* data/pascal/ImageSets/Segmentation/
rm -rf data/pascal/ImageSets/list
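As a sanity check (an addition, not part of the upstream instructions), trainaug.txt should list 10582 images, the standard size of the SBD-augmented training split:

```shell
# Expected: 10582 (VOC train + SBD augmentation).
wc -l < data/pascal/ImageSets/Segmentation/trainaug.txt
```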

This is how the dataset should look:

data/
└── pascal/
    ├── Annotations
    ├── ImageSets
    │   └── Segmentation 
    │       ├── test.txt
    │       ├── trainaug.txt # ADDED!!
    │       ├── train.txt
    │       ├── trainvalaug.txt # ADDED!!
    │       ├── trainval.txt
    │       └── val.txt
    ├── JPEGImages
    ├── SegmentationObject
    ├── SegmentationClass
    └── SegmentationClassAug # ADDED!!
        └── 2007_000032.png

4. Rename

Now run the rename.sh script on each split list:

bash tap/data/script/rename.sh data/pascal/ImageSets/Segmentation/train.txt
bash tap/data/script/rename.sh data/pascal/ImageSets/Segmentation/trainval.txt
bash tap/data/script/rename.sh data/pascal/ImageSets/Segmentation/val.txt

CD-FSS Datasets

Refer to DMTNet for the dataset preparation.

Prepare the Pretrained Models

Refer to DMTNet, HDMNet, BAM, Label Anything, and DCAMA to download their pretrained models.

Structure them into the checkpoints folder as follows:

checkpoints/
├── bam/
├── dcama/
├── hdmnet/
├── la/
└── dmtnet.pt

Run the experiments

Refer to the scripts folder for the command-line arguments. All experiment configs are contained in the parameters folder. Run an experiment with the following command:

python main.py --experiment_file=parameters/<filename> --sequential

Acknowledgements

This repository is built on top of DMTNet, HDMNet, BAM, Label Anything and DCAMA.
