Adding onnx installation command in the README

Shivam Mehta
2023-09-29 14:38:57 +00:00
parent 336dd20d5b
commit 269609003b


@@ -36,7 +36,6 @@ Check out our [demo page](https://shivammehta25.github.io/Matcha-TTS) and read [
[![Watch the video](https://img.youtube.com/vi/xmvJkz3bqw0/hqdefault.jpg)](https://youtu.be/xmvJkz3bqw0)
## Installation
1. Create an environment (suggested but optional)
@@ -191,11 +190,19 @@ matcha-tts --text "<INPUT TEXT>" --checkpoint_path <PATH TO CHECKPOINT>
## ONNX support
> Special thanks to @mush42 for implementing ONNX export and inference support.
It is possible to export Matcha checkpoints to [ONNX](https://onnx.ai/), and run inference on the exported ONNX graph.
### ONNX export
To export a checkpoint to ONNX, first install ONNX with
```bash
pip install onnx
```
then run the following:
```bash
python3 -m matcha.onnx.export matcha.ckpt model.onnx --n-timesteps 5
```
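After exporting, the `onnx` package installed above can also be used to sanity-check the graph before running inference. A minimal sketch, assuming the export above wrote `model.onnx` (this check is not part of the Matcha-TTS CLI):

```python
# Verify an exported graph with the onnx package (assumes model.onnx was
# produced by the export command above; not part of the Matcha-TTS CLI).
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
print(onnx.helper.printable_graph(model.graph))  # human-readable graph summary
```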
@@ -209,7 +216,14 @@ Optionally, the ONNX exporter accepts **vocoder-name** and **vocoder-checkpoint*
### ONNX Inference
To run inference on the exported model, first install `onnxruntime` using
```bash
pip install onnxruntime
pip install onnxruntime-gpu # for GPU inference
```
then use the following:
```bash
python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs
```
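If you need to call the exported graph directly rather than through `matcha.onnx.infer`, an `onnxruntime` session can be created by hand. A minimal sketch that loads the model and lists its input and output tensors; the provider order is an assumption (CUDA first if `onnxruntime-gpu` is installed, CPU fallback otherwise), and the actual tensor names depend on the export:

```python
# Load the exported model directly with onnxruntime and inspect its I/O.
# Provider order is an assumption: CUDAExecutionProvider is tried first if
# onnxruntime-gpu is installed, falling back to CPUExecutionProvider.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
for tensor in session.get_inputs():
    print("input: ", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)
```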