## Accelerating CosyVoice with NVIDIA Triton Inference Server and TensorRT-LLM
Contributed by Yuekai Zhang (NVIDIA).

The pipeline is driven by `bash run.sh <start_stage> <stop_stage> [service_type]`:
- **Stage 3**: Launches the Triton Inference Server.
- **Stage 4**: Runs the single-utterance HTTP client for testing.
- **Stage 5**: Runs the gRPC benchmark client.
- **Stage 6**: Runs the offline inference benchmark test.
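
For example, to bring up the server and exercise it end to end, the stages can be run one at a time:

```sh
bash run.sh 3 3            # launch the Triton Inference Server
bash run.sh 4 4            # send a single-utterance HTTP test request
bash run.sh 5 5 streaming  # run the gRPC benchmark client in streaming mode
```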
### Export Models and Launch Server
Sends a single HTTP inference request. This is intended for testing the offline TTS mode.

```sh
bash run.sh 4 4
```
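If the request fails, first check that the Triton server launched in Stage 3 is up. The commands below are Triton's standard HTTP health probes (8000 is Triton's default HTTP port; the port configured by `run.sh` may differ):

```sh
# Standard Triton liveness/readiness checks (assumes Triton's default HTTP port 8000)
curl -f http://localhost:8000/v2/health/live
curl -f http://localhost:8000/v2/health/ready
```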
### Benchmark with Client-Server Mode
To benchmark the running Triton server, pass `streaming` or `offline` as the third argument:
```sh
bash run.sh 5 5 # [streaming|offline]
```
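To collect both sets of client-server numbers reported below, run the benchmark once per mode:

```sh
bash run.sh 5 5 streaming  # first-chunk latency benchmark
bash run.sh 5 5 offline    # full-sentence latency benchmark
```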
> [!TIP]
> It is recommended to run the benchmark multiple times to get stable results after the initial server warm-up.
### Benchmark with Offline Inference Mode
To benchmark the offline inference mode, run the following commands:
```sh
# Install FlashCosyVoice for token2wav batching
# git clone https://github.com/yuekaizhang/FlashCosyVoice.git /workspace/FlashCosyVoice -b trt
# cd /workspace/FlashCosyVoice
# pip install -e .
# cd -
# wget https://huggingface.co/yuekai/cosyvoice2_flow_onnx/resolve/main/flow.decoder.estimator.fp32.dynamic_batch.onnx -O $model_scope_model_local_dir/flow.decoder.estimator.fp32.dynamic_batch.onnx

bash run.sh 6 6

# You can also switch to the Hugging Face backend by setting backend=hf
```
### Benchmark Results
The following results were obtained by decoding on a single L20 GPU with 26 prompt audio/target text pairs from the [yuekai/seed_tts](https://huggingface.co/datasets/yuekai/seed_tts) dataset (approximately 170 seconds of audio):

**Client-Server Mode: Streaming TTS (First Chunk Latency)**

| Mode | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|
| Streaming, use_spk2info_cache=False | 1 | 220.43 | 218.07 | 0.1237 |

> If your service only needs a fixed speaker, you can set `use_spk2info_cache=True` in `run.sh`. To add more speakers, refer to the instructions [here](https://github.com/qi-hua/async_cosyvoice?tab=readme-ov-file#9-spk2info-%E8%AF%B4%E6%98%8E).

**Client-Server Mode: Offline TTS (Full Sentence Latency)**

| Mode | Note | Concurrency | Avg Latency (ms) | P50 Latency (ms) | RTF |
|---|---|---|---|---|---|
| Offline, Decoupled=False, use_spk2info_cache=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 1 | 758.04 | 615.79 | 0.0891 |
| Offline, Decoupled=False, use_spk2info_cache=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 2 | 1025.93 | 901.68 | 0.0657 |
| Offline, Decoupled=False, use_spk2info_cache=False | [Commit](https://github.com/yuekaizhang/CosyVoice/commit/b44f12110224cb11c03aee4084b1597e7b9331cb) | 4 | 1914.13 | 1783.58 | 0.0610 |

**Offline Inference Mode: Hugging Face LLM vs. TensorRT-LLM**

| Backend | Batch Size | LLM Time (s) | Total Time (s) | RTF |
|---|---|---|---|---|
| HF | 1 | 39.26 | 44.31 | 0.2494 |
| HF | 2 | 30.54 | 35.62 | 0.2064 |
| HF | 4 | 18.63 | 23.90 | 0.1421 |
| HF | 8 | 11.22 | 16.45 | 0.0947 |
| HF | 16 | 8.42 | 13.78 | 0.0821 |
| TRTLLM | 1 | 12.46 | 17.31 | 0.0987 |
| TRTLLM | 2 | 7.64 | 12.65 | 0.0739 |
| TRTLLM | 4 | 4.89 | 9.38 | 0.0539 |
| TRTLLM | 8 | 2.92 | 7.23 | 0.0418 |
| TRTLLM | 16 | 2.01 | 6.63 | 0.0386 |
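> **Note:** the RTF values above are consistent with total wall-clock time divided by the duration of the synthesized audio (roughly 170 seconds for this test set). A quick sanity check against the TRTLLM batch-size-16 row:

```sh
# RTF ≈ total_time_seconds / synthesized_audio_seconds (audio duration is approximate)
python3 -c "print(6.63 / 170)"   # ≈ 0.039, close to the reported 0.0386
```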
### OpenAI-Compatible Server
To launch an OpenAI-compatible API service, run the following commands: