From 25767f76a8109642c6b52b753e250e024513d679 Mon Sep 17 00:00:00 2001
From: mush42
Date: Sun, 24 Sep 2023 02:13:27 +0200
Subject: [PATCH] Readme: added a note about GPU inference with onnxruntime.

---
 README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/README.md b/README.md
index a448004..dd7cdb3 100644
--- a/README.md
+++ b/README.md
@@ -221,6 +221,12 @@ You can also control synthesis parameters:
 python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs --temperature 0.4 --speaking_rate 0.9 --spk 0
 ```
 
+To run inference on **GPU**, make sure to install the **onnxruntime-gpu** package, and then pass `--gpu` to the inference command:
+
+```bash
+python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs --gpu
+```
+
 If you exported only Matcha to ONNX, this will write mel-spectrogram as graphs and `numpy` arrays to the output directory.
 If you embedded the vocoder in the exported graph, this will write `.wav` audio files to the output directory.