Tool to manage voices used with the piper speech synthesizer. You may also browse the docs online at https://think-biq.gitlab.io/piper-whistle/ which also includes a quick guide on how to set up and use piper and (piper-)whistle.
usage: piper_whistle [-h] [-d] [-v] [-V] [-P DATA_ROOT] [-R]
{refresh,guess,path,speak,list,preview,install,remove} ...
positional arguments:
{refresh,guess,path,speak,list,preview,install,remove}
options:
-h, --help Show help message.
-d, --debug Activate very verbose logging.
-v, --verbose Activate verbose logging.
-V, --version Show version number.
-P DATA_ROOT, --data-root DATA_ROOT
Root path under which whistle stores its config and data.
-R, --refresh Refreshes (or sets up) the language index by downloading the latest lookup.
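For example, the language index can be set up or refreshed explicitly with the refresh subcommand (the -R switch serves the same purpose):
piper_whistle refresh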
usage: piper_whistle guess [-h] [-v] language_name
positional arguments:
language_name A string representing a language name (or code).
options:
-h, --help Show help message.
-v, --verbose Activate verbose logging.
Tries to guess the language you are looking for (and that is supported by piper) from the name you provide.
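For example, to let whistle figure out which supported language matches a plain name:
piper_whistle guess english
The exact output depends on the current language index; the command reports the best matching language code.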
usage: piper_whistle path [-h] [-v] voice_selector
positional arguments:
voice_selector Selector of voice to search.
options:
-h, --help Show help message.
-v, --verbose Activate verbose logging.
Shows the local path to a specific model. The voice_selector has the format:
${CODE}:${NAME}@${QUALITY}/${SPEAKER}
The ${SPEAKER} part is optional, as is the ${CODE} part. So if you want to select the voice named 'alba' in quality 'medium', you can simply query: alba@medium
The language code is inferred.
Alternatively, you can query with the model name as listed by the list command, which has the format:
${CODE}-${NAME}-${QUALITY}
For the example above, that would be en_GB-alba-medium.
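For example, assuming the 'alba' voice is installed, either of the following should print its local model path:
piper_whistle path alba@medium
piper_whistle path en_GB-alba-medium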
usage: piper_whistle speak [-h] [-c CHANNEL] [-j] [-r] [-o OUTPUT] [-v] something
positional arguments:
something Something to speak.
options:
-h, --help Show help message.
-c CHANNEL, --channel CHANNEL
Path to the channel (a named pipe, aka FIFO) that piper is listening on.
-j, --json Encode the text as a JSON payload (on by default).
-r, --raw Encode the text directly.
-o OUTPUT, --output OUTPUT
Instead of streaming to the audio channel, write the speech to the WAV file at this path.
-v, --verbose Activate verbose logging.
Currently only works on Linux / BSD systems with a FIFO (aka named pipe) setup. The basic idea is to have one pipe accepting JSON input (provided by this command), which piper listens to. After piper has processed the audio, it is either saved to a file or passed on to another FIFO, which can then be read by a streaming audio player like aplay.
Example: Assuming piper is installed at /opt/wind/piper, the named pipes are located at /opt/wind/channels, and whistle is available in $PATH, the aforementioned setup could look like the following:
pipes:
- /opt/wind/channels/speak - accepts json payload
- /opt/wind/channels/input - read by piper
- /opt/wind/channels/output - written by piper
processes:
- tty0: tail -F /opt/wind/channels/speak | tee /opt/wind/channels/input
- tty1: /opt/wind/piper/piper -m $(piper_whistle path alba@medium) --debug --json-input --output_raw < /opt/wind/channels/input > /opt/wind/channels/output
- tty2: aplay --buffer-size=777 -r 22050 -f S16_LE -t raw < /opt/wind/channels/output
The tail command makes sure that the payload on speak is sent to input, while keeping the pipe open after processing. Otherwise, the setup would exit after piper has finished the first payload. This way you can continually prompt.
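With that setup running, a phrase can be submitted for playback (point -c at the speak pipe if it is not your default channel), or written to a WAV file instead of being streamed:
piper_whistle speak -c /opt/wind/channels/speak "Hello there."
piper_whistle speak -o /tmp/hello.wav "Hello there."
The phrase and the path /tmp/hello.wav are only illustrations.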
usage: piper_whistle list [-h] [-v] [-I] [-a] [-L] [-g] [-U] [-S] [-p]
[-l LANGUAGE_CODE] [-i VOICE_INDEX]
options:
-h, --help Show help message.
-v, --verbose Activate verbose logging.
-I, --installed Only list installed voices.
-a, --all List voices for all available languages.
-L, --languages List available languages.
-g, --legal Show available legal information.
-U, --show-url Show URL of voice on remote host.
-S, --omit-speakers Omit speakers from listing.
-p, --install-path Show path of voice (if installed).
-l LANGUAGE_CODE, --language-code LANGUAGE_CODE
Only list voices matching this language.
-i VOICE_INDEX, --voice-index VOICE_INDEX
Only list the voice with this index (for the selected language).
This command lets you investigate available voices for specific languages, or
simply list all available voices. Using the --installed switch, you can filter
for voices that are currently installed in the local cache directory. The cache is
located in the user app path, as provided by the userpaths pip package. On Linux this would be ${HOME}/.config/piper-whistle. You may also get the URL of a voice model on the remote host using -U.
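For example, to list all supported languages, the voices available for one of them, and finally only the locally installed voices:
piper_whistle list -L
piper_whistle list -l en_GB
piper_whistle list -I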
usage: piper_whistle preview [-h] [-v] [-l LANGUAGE_CODE] [-i VOICE_INDEX]
[-s SPEAKER_INDEX] [-D]
options:
-h, --help Show help message.
-v, --verbose Activate verbose logging.
-l LANGUAGE_CODE, --language-code LANGUAGE_CODE
Select language.
-i VOICE_INDEX, --voice-index VOICE_INDEX
Specific language voice. (defaults to first one)
-s SPEAKER_INDEX, --speaker-index SPEAKER_INDEX
Specific language voice speaker. (defaults to first one)
-D, --dry-run Build URL and simulate download.
With preview, you can download and play sample audio files for any voice supported by piper. It currently uses mplayer to play the audio file.
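For example, to play a sample of the first British English voice, or merely build the sample URL without downloading:
piper_whistle preview -l en_GB
piper_whistle preview -l en_GB -i 0 -D
The voice index 0 is only an illustration; use the indices reported by the list command.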
usage: piper_whistle install [-h] [-v] [-D] language_code voice_index
positional arguments:
language_code Select language.
voice_index Specific language voice. (defaults to first one)
options:
-h, --help Show help message.
-v, --verbose Activate verbose logging.
-D, --dry-run Simulate download / install.
With install you can fetch available voice models and store them locally for
use with piper. You may first want to search for a voice you like with list,
and then note the language code and index, so install knows where to look.
The model file (onnx), as well as its accompanying config (json) file, will be
stored in the local user data path as provided by userpaths. On Linux this would be ${HOME}/.config/piper-whistle.
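For example, to look up a British English voice and install it by language code and index (the index 0 is only an illustration; -D merely simulates the download):
piper_whistle list -l en_GB
piper_whistle install en_GB 0
piper_whistle install -D en_GB 0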
usage: piper_whistle remove [-h] [-v] voice_selector
positional arguments:
voice_selector Selector of voice to search.
options:
-h, --help Show help message.
-v, --verbose Activate verbose logging.
Any installed voice model can be deleted via remove. You may pass the model name or shorthand selector.
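For example, either of the following should remove the 'alba' voice installed earlier:
piper_whistle remove alba@medium
piper_whistle remove en_GB-alba-medium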