MusicBERT

MusicBERT, introduced in "MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training" by Mingliang Zeng, Xu Tan, Rui Wang, Zeqian Ju, Tao Qin, and Tie-Yan Liu (ACL 2021), is a large-scale pre-trained model for symbolic music understanding. It employs several mechanisms designed specifically for symbolic music data, including OctupleMIDI encoding and a bar-level masking strategy, and achieves state-of-the-art accuracy on several music understanding tasks: melody completion, accompaniment suggestion, genre classification, and style classification.
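The core idea of OctupleMIDI is that each note is encoded as a single 8-element token rather than a long sequence of separate events, which substantially shortens sequences. The sketch below is illustrative only (the names and field order are assumptions, not the repository's actual implementation); it shows a note packed into one tuple of time signature, tempo, bar, position, instrument, pitch, duration, and velocity.

```python
# Hypothetical sketch of the OctupleMIDI idea -- not the actual MusicBERT code.
# Each note becomes one 8-element token, so one note costs one token instead of
# several as in event-based encodings.
from collections import namedtuple

Octuple = namedtuple(
    "Octuple",
    ["time_sig", "tempo", "bar", "position", "instrument", "pitch", "duration", "velocity"],
)

def encode_note(bar, position, instrument, pitch, duration, velocity,
                time_sig="4/4", tempo=120):
    """Pack one note's attributes into a single octuple token (illustrative)."""
    return Octuple(time_sig, tempo, bar, position, instrument, pitch, duration, velocity)

# A middle C (MIDI pitch 60) played by piano (program 0) at the start of bar 0:
token = encode_note(bar=0, position=0, instrument=0, pitch=60, duration=8, velocity=64)
print(token.pitch)  # 60
```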

Projects using MusicBERT:


Model structure of MusicBERT


OctupleMIDI encoding

1. Preparing datasets

1.1 Pre-training datasets

1.2 Melody completion and accompaniment suggestion datasets

1.3 Genre and style classification datasets

2. Training

2.1 Pre-training

# Pre-train MusicBERT (small setting) on the lmd_full dataset
bash train_mask.sh lmd_full small
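Pre-training uses a masked language modeling objective with the bar-level masking strategy: when an element type is masked, it is masked for every note in the same bar, so the model cannot trivially copy the value from neighboring notes. The sketch below is a minimal illustration of that grouping idea, under assumed conventions (the bar number sits at index 2 of each octuple); it is not the repository's training code.

```python
# Illustrative sketch of bar-level masking -- not the repository's code.
# If a bar is selected, the chosen element type (e.g. pitch) is masked for
# every note in that bar, preventing leakage from within-bar neighbors.
import random

MASK = "<mask>"

def bar_level_mask(octuples, element_index, mask_prob=0.15, rng=random):
    """octuples: list of 8-element lists; masks one element type per whole bar."""
    bars = {note[2] for note in octuples}          # index 2 is assumed to hold the bar number
    masked_bars = {b for b in bars if rng.random() < mask_prob}
    out = []
    for note in octuples:
        note = list(note)
        if note[2] in masked_bars:
            note[element_index] = MASK             # mask e.g. pitch across the whole bar
        out.append(note)
    return out
```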

2.2 Fine-tuning on melody completion task and accompaniment suggestion task

# Fine-tune on melody completion (next) from a base pre-trained checkpoint
bash train_nsp.sh next checkpoints/checkpoint_last_musicbert_base.pt
# Fine-tune on accompaniment suggestion (acc) from a small pre-trained checkpoint
bash train_nsp.sh acc checkpoints/checkpoint_last_musicbert_small.pt

2.3 Fine-tuning on genre and style classification task

# Arguments: dataset, number of classes, fold index, pre-trained checkpoint
bash train_genre.sh topmagd 13 0 checkpoints/checkpoint_last_musicbert_base.pt
bash train_genre.sh masd 25 4 checkpoints/checkpoint_last_musicbert_small.pt

3. Evaluation

3.1 Melody completion task and accompaniment suggestion task

# Evaluate a fine-tuned melody completion model on the binarized next-sentence data
python -u eval_nsp.py checkpoints/checkpoint_last_nsp_next_checkpoint_last_musicbert_base.pt next_data_bin

3.2 Genre and style classification task

# Evaluate a fine-tuned genre/style classification model (x is the fold index)
python -u eval_genre.py checkpoints/checkpoint_last_genre_topmagd_x_checkpoint_last_musicbert_small.pt topmagd_data_bin/x