Authors: Gwangsu Kim (1), Dong-Kyum Kim (1), and Hawoong Jeong (1, 2)
(1) Department of Physics, KAIST; (2) Center for Complex Systems, KAIST
This repository contains the source code for the experiments in "Emergence of music detectors in a deep neural network trained for natural sound recognition".
Supported platforms: macOS and Ubuntu, Python 3.7
Installation using Miniconda:
git clone https://github.com/kgspiano/Music.git
cd Music
conda create -y --name music python=3.7
conda activate music
pip install -r requirements.txt
python -m ipykernel install --name music
To enable GPU usage, install the GPU version of the torch package from PyTorch.
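If you do install the GPU build, a quick sanity check like the one below (a minimal sketch, not part of this repo's code) confirms that torch can actually see your GPU:

```python
import torch

# Report the installed torch version and whether a CUDA GPU is usable.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```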
Download AudioSet:
cd data/AudioSet
wget http://storage.googleapis.com/us_audioset/youtube_corpus/v1/csv/balanced_train_segments.csv
wget http://storage.googleapis.com/us_audioset/youtube_corpus/v1/csv/eval_segments.csv
wget http://storage.googleapis.com/us_audioset/youtube_corpus/v1/csv/class_labels_indices.csv
These .csv files contain the URL links (YouTube video IDs and segment times) for each audio clip.
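The audio itself is not included in the CSVs; each row of the segment files gives a YouTube video ID, start/end times, and label IDs. As a rough illustration (a sketch assuming the standard AudioSet CSV layout; this parsing code is not part of this repo's pipeline), a segment file can be read like this:

```python
import pandas as pd

# AudioSet segment CSVs begin with three '#' comment lines, followed by rows of
# YTID, start_seconds, end_seconds, positive_labels (the label field is quoted).
segments = pd.read_csv(
    "balanced_train_segments.csv",
    comment="#",
    header=None,
    names=["YTID", "start_seconds", "end_seconds", "positive_labels"],
    skipinitialspace=True,
    quotechar='"',
)
print(segments.head())

# Reconstruct the YouTube URL for the first clip.
row = segments.iloc[0]
print(f"https://www.youtube.com/watch?v={row.YTID}&t={int(row.start_seconds)}s")
```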
filenames.xlsx lists the clips that were additionally removed from the training data for the training-without-music condition.
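As a rough sketch of how that exclusion could be applied (the layout of filenames.xlsx and the local file list below are assumptions, not part of this repo; reading .xlsx files with pandas requires openpyxl):

```python
import pandas as pd

# Hypothetical sketch: drop the clips listed in filenames.xlsx from a local file
# list to reproduce the training-without-music condition.
# Assumes the spreadsheet holds one column of file names with no header row.
removed = set(pd.read_excel("filenames.xlsx", header=None).iloc[:, 0].astype(str))

all_files = ["clip_0001.wav", "clip_0002.wav"]  # placeholder local file list
kept = [f for f in all_files if f not in removed]
print(f"kept {len(kept)} of {len(all_files)} files")
```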
jupyter notebook
Select the music kernel in the Jupyter notebook.