Many of us love to sing along with our favorite songs, karaoke style. To do so, we need to remove the singers' vocals from the songs and then provide lyrics aligned in time with the accompaniment. There are various tools to remove vocals, but aligning the lyrics with the song remains hard.
In this challenge, participants build a model that aligns lyrics with a music audio segment.
- Input: a music segment (including vocals) and its lyrics.
- Output: the start time and end time of each word in the lyrics.
Prediction accuracy is evaluated using Intersection over Union (IoU); the higher, the better, and the winner is the participant with the highest IoU score. The IoU of a prediction and the ground truth for an audio segment s_i is the length of the temporal overlap between the predicted and ground-truth word intervals divided by the length of their union:

IoU(s_i) = |P_i ∩ G_i| / |P_i ∪ G_i|

where P_i and G_i are the predicted and ground-truth time intervals of segment s_i.
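As a minimal sketch, IoU for a single word's time interval can be computed as below. How per-word scores are aggregated into a segment score (here, a plain mean over words) is an assumption, not the challenge's official definition:

```python
def interval_iou(pred, gt):
    """IoU of two time intervals, each given as a (start, end) pair in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def segment_iou(pred_words, gt_words):
    """Segment score as the mean word-level IoU (aggregation is an assumption)."""
    return sum(interval_iou(p, g) for p, g in zip(pred_words, gt_words)) / len(gt_words)
```

For example, `interval_iou((0.0, 2.0), (1.0, 3.0))` overlaps for 1 second out of a 3-second union, giving 1/3.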
The dataset is available here.
Training set: 1,057 music segments from ~480 songs.
Each segment is provided with an audio file in WAV format and a ground-truth JSON file containing the lyrics and the aligned time frame of each word.
Public test: 264 music segments from ~120 songs, without alignment files.
Private test: 464 music segments from ~200 songs, without alignment files.
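A ground-truth record might be read as follows. The exact JSON schema (field names `word`, `start`, `end`) is an assumption for illustration, not the challenge's actual format:

```python
# Hypothetical ground-truth layout: one entry per word, with
# start/end times in seconds. Field names are assumptions.
record = {
    "lyrics": [
        {"word": "hello", "start": 1.20, "end": 1.55},
        {"word": "world", "start": 1.60, "end": 2.05},
    ]
}

def word_intervals(rec):
    """Extract (word, start, end) triples from one ground-truth record."""
    return [(w["word"], w["start"], w["end"]) for w in rec["lyrics"]]

print(word_intervals(record))
```

In practice the record would be loaded from the segment's JSON file with `json.load`.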
- Use Spleeter/Demucs to extract the vocals from the music
- Resample the audio to 16 kHz (the model's expected sampling rate)
- Fine-tune the pretrained wav2vec 2.0 model with CTC loss
- Force-align the audio with the lyrics
- Generate timestamps for each word in the lyrics
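The forced-alignment step above can be sketched as a CTC Viterbi alignment. This is a toy, self-contained implementation on synthetic emissions, not the repository's actual code: in practice the frame log-probabilities would come from the fine-tuned wav2vec 2.0 model, and the ~20 ms-per-frame conversion is an assumption based on wav2vec 2.0's usual frame rate:

```python
import numpy as np

def ctc_forced_align(log_probs, targets, blank=0):
    """Viterbi forced alignment of a non-empty target token sequence to CTC
    frame log-probabilities (T x C). Returns, per target token, the frame
    span (first_frame, last_frame_exclusive) it occupies."""
    T = len(log_probs)
    ext = [blank]                      # interleave blanks: [_, t1, _, t2, _]
    for t in targets:
        ext += [t, blank]
    S = len(ext)
    NEG = -1e30
    dp = np.full((T, S), NEG)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_probs[0][ext[0]]
    dp[0, 1] = log_probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            cands = [dp[t - 1, s]]                     # stay
            if s >= 1:
                cands.append(dp[t - 1, s - 1])         # advance one state
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(dp[t - 1, s - 2])         # skip a blank
            k = int(np.argmax(cands))
            dp[t, s] = cands[k] + log_probs[t][ext[s]]
            back[t, s] = k
    # Best path may end in the final blank or the final token
    s = S - 1 if dp[T - 1, S - 1] >= dp[T - 1, S - 2] else S - 2
    path = [s]
    for t in range(T - 1, 0, -1):
        s = s - back[t, s]
        path.append(s)
    path.reverse()
    # Merge consecutive frames of the same token into one span
    spans = []
    for t, s in enumerate(path):
        if ext[s] != blank:
            tok = (s - 1) // 2
            if spans and spans[-1][0] == tok:
                spans[-1][2] = t + 1
            else:
                spans.append([tok, t, t + 1])
    return [(start, end) for _, start, end in spans]

# Toy emissions: vocab {0: blank, 1: "a", 2: "b"}; 6 frames favoring a a _ b b b
lp = [[-5, -0.1, -5], [-5, -0.1, -5], [-0.1, -5, -5],
      [-5, -5, -0.1], [-5, -5, -0.1], [-5, -5, -0.1]]
spans = ctc_forced_align(lp, targets=[1, 2])   # -> [(0, 2), (3, 6)]
# Assuming ~20 ms per wav2vec 2.0 frame, spans convert to seconds:
times = [(s * 0.02, e * 0.02) for s, e in spans]
```

Mapping character-level spans up to word-level start/end times then gives the required output format.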
- Arguments:
  - `--train`: training
  - `--test`: testing
- For example (testing), run the script:

  `python main.py --test`
- The complete dataset is available here
- Feel free to contact me at [email protected]