## Serving CosyVoice with NVIDIA Triton Inference Server
Contributed by Yuekai Zhang (NVIDIA).

### Quick Start

Launch the service directly with Docker Compose:

```sh
docker compose up
```
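Once the service is up, you can confirm that Triton is ready before sending requests. A minimal sketch, assuming the compose file exposes Triton's default HTTP port 8000 on localhost:

```sh
# Triton exposes a standard readiness endpoint (KServe v2 protocol);
# a 200 status code means the server and its models are ready.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/v2/health/ready
```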
### Build the Docker Image

To build the image from scratch:

```sh
docker build . -f Dockerfile.server -t soar97/triton-cosyvoice:25.06
```
### Run a Docker Container
```sh
# Host directory to mount into the container (host_path:container_path)
your_mount_dir=/mnt:/mnt
docker run -it --name "cosyvoice-server" --gpus all --net host -v $your_mount_dir --shm-size=2g soar97/triton-cosyvoice:25.06
```
### Understanding `run.sh`
The `run.sh` script orchestrates the entire workflow through numbered stages.

You can run a subset of stages with:

```sh
bash run.sh <start_stage> <stop_stage> [service_type]
```
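For example, passing the same number as both arguments runs a single stage in isolation (stage meanings are listed below):

```sh
# Run only stage 3 (launch the Triton server); this assumes stages 0-2
# have already completed, so the engines and model repository exist.
bash run.sh 3 3
```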
- `<start_stage>`: The stage to start from (0-5).
- `<stop_stage>`: The stage to stop after (0-5).

**Stages:**
- **Stage 0**: Downloads the `CosyVoice2-0.5B` model from HuggingFace.
- **Stage 1**: Converts the HuggingFace checkpoint to the TensorRT-LLM format and builds the TensorRT engines.
- **Stage 2**: Creates the Triton model repository and configures the model files. The configuration is adjusted based on whether `Decoupled=True` (streaming) or `Decoupled=False` (offline) will be used; see the tip after this list for a way to verify the generated setting.
- **Stage 3**: Launches the Triton Inference Server.
- **Stage 4**: Runs the single-utterance HTTP client for testing.
- **Stage 5**: Runs the gRPC benchmark client.
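> [!TIP]
> Streaming corresponds to Triton's decoupled transaction policy in the generated `config.pbtxt` files. One way to check which mode stage 2 produced (the repository path is an assumption; use the directory stage 2 actually creates):
>
> ```sh
> # Streaming builds set the decoupled flag to true in model_transaction_policy
> grep -rA1 "model_transaction_policy" ./model_repo*/
> ```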
### Export Models and Launch Server
Inside the Docker container, prepare the models and start the Triton server by running stages 0-3:

```sh
# This command runs stages 0, 1, 2, and 3
bash run.sh 0 3
```
> [!TIP]
> Both streaming and offline (non-streaming) TTS modes are supported. For streaming TTS, set `Decoupled=True`. For offline TTS, set `Decoupled=False`. You need to rerun stage 2 if you switch between modes.
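>
> For example, after switching modes, regenerate the model repository and relaunch the server:
>
> ```sh
> # Stage 2 rebuilds the Triton model repository for the new mode;
> # stage 3 restarts the server with it.
> bash run.sh 2 3
> ```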
### Single-Utterance HTTP Client
Stage 4 sends a single HTTP inference request; it is intended for testing the offline TTS mode (`Decoupled=False`):

```sh
bash run.sh 4 4
```
### Benchmark with a Dataset
To benchmark the running Triton server, pass `streaming` or `offline` as the third argument:

```sh
bash run.sh 5 5 # [streaming|offline]

# You can also customize parameters such as the number of tasks and the dataset split:
# python3 client_grpc.py --num-tasks 2 --huggingface-dataset yuekai/seed_tts_cosy2 --split-name test_zh --mode [streaming|offline]
```
> [!TIP]
> It is recommended to run the benchmark multiple times to get stable results after the initial server warm-up.
### Benchmark Results
The following results were obtained by decoding on a single L20 GPU with 26 prompt audio/target text pairs from the [yuekai/seed_tts](https://huggingface.co/datasets/yuekai/seed_tts) dataset (approximately 170 seconds of audio). RTF (real-time factor) is processing time divided by the duration of the generated audio; lower is better.

**Streaming TTS (First Chunk Latency)**

| Mode | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|
| Streaming, use_spk2info_cache=False | 1 | 220.43 | 218.07 | 0.1237 |
| Streaming, use_spk2info_cache=False | 2 | 476.97 | 369.25 | 0.1022 |
| Streaming, use_spk2info_cache=False | 4 | 1107.34 | 1243.75 | 0.0922 |
| Streaming, use_spk2info_cache=True | 1 | 189.88 | 184.81 | 0.1155 |
| Streaming, use_spk2info_cache=True | 2 | 323.04 | 316.83 | 0.0905 |
| Streaming, use_spk2info_cache=True | 4 | 977.68 | 903.68 | 0.0733 |
> If your service only needs a fixed speaker, you can set `use_spk2info_cache=True` in `run.sh`. To add more speakers, refer to the instructions [here](https://github.com/qi-hua/async_cosyvoice?tab=readme-ov-file#9-spk2info-%E8%AF%B4%E6%98%8E).
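>
> One way to flip the flag (assuming it appears literally as `use_spk2info_cache=False` in `run.sh`), then rebuild so the change takes effect:
>
> ```sh
> sed -i 's/use_spk2info_cache=False/use_spk2info_cache=True/' run.sh
> bash run.sh 2 3  # rebuild the model repository and relaunch the server
> ```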
**Offline TTS (Full Sentence Latency)**

| Mode | Note | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|---|
| Offline, Decoupled=False, use_spk2info_cache=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 1 | 758.04 | 615.79 | 0.0891 |
| Offline, Decoupled=False, use_spk2info_cache=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 2 | 1025.93 | 901.68 | 0.0657 |
| Offline, Decoupled=False, use_spk2info_cache=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 4 | 1914.13 | 1783.58 | 0.0610 |
### OpenAI-Compatible Server
To launch an OpenAI-compatible API service, run the following commands:

```sh
git clone https://github.com/yuekaizhang/Triton-OpenAI-Speech.git
cd Triton-OpenAI-Speech
pip install -r requirements.txt

# After the Triton service is running, start the FastAPI bridge:
python3 tts_server.py --url http://localhost:8000 --ref_audios_dir ./ref_audios/ --port 10086 --default_sample_rate 24000

# Test the service with curl:
bash test/test_cosyvoice.sh
```
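You can also send a request by hand. A minimal sketch following OpenAI's `/v1/audio/speech` request shape; the `model` and `voice` values are assumptions and must match your bridge configuration and the files under `./ref_audios/`:

```sh
# Hypothetical request; adjust "model" and "voice" to your setup.
curl http://localhost:10086/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "cosyvoice", "input": "Hello from CosyVoice!", "voice": "demo_speaker"}' \
  --output output.wav
```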
> [!NOTE]
> Currently, only the offline TTS mode is compatible with the OpenAI-compatible server.
### Acknowledgements
This work originates from the NVIDIA CISI project. For more multimodal resources, please see [mair-hub](https://github.com/nvidia-china-sae/mair-hub).