Pass your audio in, and AudioSR will make it high fidelity!
Works on all types of audio (e.g., music, speech, dog barking, rain, ...) and all sampling rates.
Share your thoughts/samples/issues in our discord channel: https://discord.gg/HWeBsJryaf
- 2023-09-24: Add Replicate demo (@nateraw); fix errors on Windows, librosa warnings, etc. (@ORI-Muchim).
- 2023-09-16: Fix DC shift issue. Fix duration padding bug. Update default DDIM steps to 50.
# Optional: create and activate a dedicated conda environment
conda create -n audiosr python=3.9; conda activate audiosr
# Install AudioSR
pip3 install audiosr==0.0.7
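After installation, a quick sanity check is to print the CLI help text; if the command is not found, the install did not succeed (or the optional conda environment above is not active).
# Should print the usage text shown further below.
audiosr -h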
Process a list of files. The results will be saved to ./output by default.
audiosr -il batch.lst
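The format of batch.lst is not spelled out above; a plain newline-separated list of audio file paths is assumed here (example/speech.wav is an illustrative file name).
# Build a file list with one audio path per line, then process it.
printf '%s\n' example/music.wav example/speech.wav > batch.lst
audiosr -il batch.lst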
Process a single audio file.
audiosr -i example/music.wav
Full usage instructions
> audiosr -h
> usage: audiosr [-h] -i INPUT_AUDIO_FILE [-il INPUT_FILE_LIST] [-s SAVE_PATH] [--model_name {basic,speech}] [-d DEVICE] [--ddim_steps DDIM_STEPS] [-gs GUIDANCE_SCALE] [--seed SEED] [--suffix SUFFIX]
optional arguments:
-h, --help show this help message and exit
-i INPUT_AUDIO_FILE, --input_audio_file INPUT_AUDIO_FILE
Input audio file for audio super resolution
-il INPUT_FILE_LIST, --input_file_list INPUT_FILE_LIST
A file listing all audio files on which to perform audio super resolution
-s SAVE_PATH, --save_path SAVE_PATH
The path to save model output
--model_name {basic,speech}
The checkpoint to use
-d DEVICE, --device DEVICE
The device for computation. If not specified, the script will automatically choose the device based on your environment.
--ddim_steps DDIM_STEPS
The number of sampling steps for DDIM
-gs GUIDANCE_SCALE, --guidance_scale GUIDANCE_SCALE
Guidance scale (Large => better quality and relevance to text; Small => better diversity)
--seed SEED Changing this value (any integer) will lead to a different generation result.
--suffix SUFFIX Suffix for the output file
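Putting several of these options together, a typical invocation might look like the following (the input file, output directory, and suffix value are illustrative).
# Upsample a speech clip with the speech checkpoint, 50 DDIM steps,
# a fixed seed for reproducible output, and a custom save path.
audiosr -i example/speech.wav --model_name speech --ddim_steps 50 --seed 42 -s ./output --suffix _hifi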
- Add Gradio demo.
- Optimize the inference speed.
If you find this repo useful, please consider citing:
@article{liu2023audiosr,
title={{AudioSR}: Versatile Audio Super-resolution at Scale},
author={Liu, Haohe and Chen, Ke and Tian, Qiao and Wang, Wenwu and Plumbley, Mark D},
journal={arXiv preprint arXiv:2309.07314},
year={2023}
}