## Serving CosyVoice with NVIDIA Triton Inference Server

Contributed by Yuekai Zhang (NVIDIA).

### Quick Start

Launch the service directly with Docker Compose:

```sh
docker compose up
```
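
Once the containers are up, you can confirm the Triton server is ready via its standard health endpoint (this assumes the default Triton HTTP port 8000 on the local host):

```sh
# Returns HTTP 200 once the Triton server is ready to accept inference requests
curl -v http://localhost:8000/v2/health/ready
```
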
### Build the Docker Image

To build the image from scratch:

```sh
docker build . -f Dockerfile.server -t soar97/triton-cosyvoice:25.06
```

### Run the Docker Container

```sh
docker run -it --name "cosyvoice-server" --gpus all --net host -v $your_mount_dir soar97/triton-cosyvoice:25.06
```
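
If the container already exists, you can reattach to it rather than creating a new one (plain Docker usage, nothing specific to this project):

```sh
# Start the previously created container (if stopped) and open a shell inside it
docker start cosyvoice-server
docker exec -it cosyvoice-server /bin/bash
```
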
### Understanding `run.sh`

The `run.sh` script orchestrates the entire workflow through numbered stages. You can run a subset of stages with:

```sh
bash run.sh <start_stage> <stop_stage> [service_type]
```

- `<start_stage>`: The stage to start from (0-5).
- `<stop_stage>`: The stage to stop after (0-5).
- `[service_type]`: An optional third argument, either `streaming` or `offline` (see the benchmark section below).

**Stages:**

- **Stage 0**: Downloads the `cosyvoice-2 0.5B` model from HuggingFace.
- **Stage 1**: Converts the HuggingFace checkpoint to the TensorRT-LLM format and builds the TensorRT engines.
- **Stage 2**: Creates the Triton model repository and configures the model files. The configuration is adjusted based on whether `Decoupled=True` (streaming) or `Decoupled=False` (offline) will be used.
- **Stage 3**: Launches the Triton Inference Server.
- **Stage 4**: Runs the single-utterance HTTP client for testing.
- **Stage 5**: Runs the gRPC benchmark client.
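
For example, the following invocations run only part of the pipeline (illustrative commands; the third argument is assumed to select the streaming/offline configuration, as described in the benchmark section below):

```sh
# Download the model and build the TensorRT-LLM engines only (stages 0-1)
bash run.sh 0 1

# Rebuild the model repository and relaunch the server for streaming mode (stages 2-3)
bash run.sh 2 3 streaming
```
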

### Export Models to TensorRT-LLM and Launch the Server

Inside the Docker container, prepare the models and start the Triton server by running stages 0-3:

```sh
# This command runs stages 0, 1, 2, and 3
bash run.sh 0 3
```

> [!TIP]
> Both streaming and offline (non-streaming) TTS modes are supported. For streaming TTS, set `Decoupled=True`; for offline TTS, set `Decoupled=False`. Rerun stage 2 if you switch between modes.

### Single-Utterance HTTP Client

Sends a single HTTP inference request; this is intended for testing the offline TTS mode (`Decoupled=False`):

```sh
bash run.sh 4 4
```

### Benchmark with a Dataset

To benchmark the running Triton server, pass `streaming` or `offline` as the third argument:

```sh
bash run.sh 5 5 # [streaming|offline]

# You can also customize parameters such as the number of tasks and the dataset split:
# python3 client_grpc.py --num-tasks 2 --huggingface-dataset yuekai/seed_tts_cosy2 --split-name test_zh --mode [streaming|offline]
```

> [!TIP]
> Only offline CosyVoice TTS is currently supported. Setting the client to `streaming` simply enables NVIDIA Triton’s decoupled mode so that responses are returned as soon as they are ready.
> It is recommended to run the benchmark multiple times to get stable results after the initial server warm-up.

### Benchmark Results

The following results were obtained by decoding on a single L20 GPU with 26 prompt audio/target text pairs from the [yuekai/seed_tts](https://huggingface.co/datasets/yuekai/seed_tts) dataset (approximately 170 seconds of audio):

**Streaming TTS (First Chunk Latency)**

| Mode | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|
| Streaming, Decoupled=True | 1 | 220.43 | 218.07 | 0.1237 |
| Streaming, Decoupled=True | 2 | 476.97 | 369.25 | 0.1022 |
| Streaming, Decoupled=True | 4 | 1107.34 | 1243.75 | 0.0922 |

**Offline TTS (Full Sentence Latency)**

| Mode | Note | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|---|
| Offline, Decoupled=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 1 | 758.04 | 615.79 | 0.0891 |
| Offline, Decoupled=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 2 | 1025.93 | 901.68 | 0.0657 |
| Offline, Decoupled=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 4 | 1914.13 | 1783.58 | 0.0610 |
| Offline, Decoupled=True | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 1 | 659.87 | 655.63 | 0.0891 |
| Offline, Decoupled=True | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 2 | 1103.16 | 992.96 | 0.0693 |
| Offline, Decoupled=True | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 4 | 1790.91 | 1668.63 | 0.0604 |
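
RTF is the real-time factor, conventionally the processing time divided by the duration of the generated audio, so values below 1 indicate faster-than-real-time synthesis. As a rough illustration using the concurrency-1 offline row and the roughly 170 seconds of benchmark audio:

$$
\mathrm{RTF} \approx 0.0891 \;\Rightarrow\; t_{\text{processing}} \approx 0.0891 \times 170\,\mathrm{s} \approx 15\,\mathrm{s}
$$
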
### OpenAI-Compatible Server

To launch an OpenAI-compatible API service, run the following commands:

```sh
git clone https://github.com/yuekaizhang/Triton-OpenAI-Speech.git
cd Triton-OpenAI-Speech
pip install -r requirements.txt

# After the Triton service is running, start the FastAPI bridge:
python3 tts_server.py --url http://localhost:8000 --ref_audios_dir ./ref_audios/ --port 10086 --default_sample_rate 24000

# Test the service with curl:
bash test/test_cosyvoice.sh
```

> [!NOTE]
> Currently, only the offline TTS mode is compatible with the OpenAI-compatible server.

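For reference, a request to the bridge might look like the following. This is only a sketch: it assumes the service mirrors the OpenAI `/v1/audio/speech` route on the port configured above, and the `model` and `voice` values are placeholders rather than names taken from this repository:

```sh
# Hypothetical request against the FastAPI bridge started above (port 10086);
# "voice" would need to match a reference audio available under --ref_audios_dir.
curl http://localhost:10086/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "cosyvoice", "input": "Hello, this is a test.", "voice": "example_reference"}' \
  -o output.wav
```
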
### Acknowledgements

This work originates from the NVIDIA CISI project. For more multimodal resources, please see [mair-hub](https://github.com/nvidia-china-sae/mair-hub).