Description
I am trying to test the proposed model on the Flickr SoundNet test dataset. To the best of my knowledge, I am downloading the annotated test dataset from the following repository: learning_to_localize_sound_source.
When I download the dataset, it comes in the following format:
.
├── Annotations
│ ├── 9992787425.xml
│ └── 9992947874.xml
└── Data
├── 0
├── 1
├── 2
├── 3
├── 4
├── 5
├── 6
├── 7
├── 8
└── 9
However, the program expects audio_path and image_path separately. For instance, as defined in datasets_lvs.py:
if args.testset == 'flickr':
    self.audio_path = '/mnt/lynx1/datasets/FlickrSoundNet/Flickr_Sound_Top5_Dataset_wav_test/'
    self.image_path = '/mnt/lynx1/datasets/FlickrSoundNet/Flickr_Sound_Top5_Dataset_img_test/'
else:
Am I required to manually extract the 250 image–audio pairs from the downloaded dataset into separate image and audio folders, or is there an existing source where the Flickr SoundNet test dataset is already organized in this expected format?
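For reference, this is the kind of reorganization I have in mind, a minimal sketch only: it assumes the files under Data/0 … Data/9 are named by Flickr ID (matching the XML stems in Annotations/) and are plain image/.wav files, which I have not verified, and it reuses the folder names from datasets_lvs.py above as the destination paths:

import shutil
from pathlib import Path

SRC = Path('learning_to_localize_sound_source')  # downloaded dataset root (assumed name)
DST_IMG = Path('FlickrSoundNet/Flickr_Sound_Top5_Dataset_img_test')
DST_WAV = Path('FlickrSoundNet/Flickr_Sound_Top5_Dataset_wav_test')
DST_IMG.mkdir(parents=True, exist_ok=True)
DST_WAV.mkdir(parents=True, exist_ok=True)

# Keep only the annotated test IDs (one XML per Flickr ID in Annotations/).
test_ids = {p.stem for p in (SRC / 'Annotations').glob('*.xml')}

for f in (SRC / 'Data').rglob('*'):
    # Skip directories and files whose name does not match an annotated ID.
    if not f.is_file() or f.stem not in test_ids:
        continue
    if f.suffix.lower() in {'.jpg', '.jpeg', '.png'}:
        shutil.copy(f, DST_IMG / f.name)
    elif f.suffix.lower() == '.wav':
        shutil.copy(f, DST_WAV / f.name)

If this manual split is indeed the intended workflow, I am happy to do it that way; I just want to make sure I am not missing an already-prepared version of the test set.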