Serving CosyVoice with NVIDIA Triton Inference Server

Contributed by Yuekai Zhang (NVIDIA).

Quick Start

Launch the service directly with Docker Compose:

docker compose up

Build the Docker Image

To build the image from scratch:

docker build . -f Dockerfile.server -t soar97/triton-cosyvoice:25.06

Run a Docker Container

your_mount_dir=/mnt:/mnt
docker run -it --name "cosyvoice-server" --gpus all --net host -v $your_mount_dir --shm-size=2g soar97/triton-cosyvoice:25.06

Understanding run.sh

The run.sh script orchestrates the entire workflow through numbered stages.

You can run a subset of stages with:

bash run.sh <start_stage> <stop_stage> [service_type]
  • <start_stage>: The stage to start from (0-5).
  • <stop_stage>: The stage to stop after (0-5).
  • [service_type]: Optional; forwarded to the benchmark stage (stage 5) as streaming or offline.

Stages:

  • Stage 0: Downloads the CosyVoice2-0.5B model from HuggingFace.
  • Stage 1: Converts the HuggingFace checkpoint to the TensorRT-LLM format and builds the TensorRT engines.
  • Stage 2: Creates the Triton model repository and configures the model files. The configuration is adjusted based on whether Decoupled=True (streaming) or Decoupled=False (offline) will be used.
  • Stage 3: Launches the Triton Inference Server.
  • Stage 4: Runs the single-utterance HTTP client for testing.
  • Stage 5: Runs the gRPC benchmark client.

Export Models and Launch Server

Inside the Docker container, prepare the models and start the Triton server by running stages 0-3:

# This command runs stages 0, 1, 2, and 3
bash run.sh 0 3

Tip

Both streaming and offline (non-streaming) TTS modes are supported. For streaming TTS, set Decoupled=True. For offline TTS, set Decoupled=False. You need to rerun stage 2 if you switch between modes.
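
Once the server is up, you can verify readiness before sending requests. A minimal Python sketch using tritonclient (port 8001 is Triton's default gRPC port, and the model name is a placeholder; check the model repository created in stage 2 for the actual name):

import tritonclient.grpc as grpcclient

# Assumes Triton's default gRPC port; adjust if run.sh overrides it.
client = grpcclient.InferenceServerClient("localhost:8001")
print("server ready:", client.is_server_ready())
print("model ready:", client.is_model_ready("cosyvoice2"))  # placeholder model name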

Single-Utterance HTTP Client

This stage sends a single HTTP inference request and is intended for testing the offline TTS mode (Decoupled=False):

bash run.sh 4 4
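
A single offline request can also be issued programmatically. Below is a minimal sketch using tritonclient's HTTP API; the model name ("cosyvoice2") and tensor names ("reference_text", "target_text", "waveform") are placeholders, so consult the stage 2 model repository or the provided client for the real ones:

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

def text_input(name, text):
    # Triton BYTES tensors are sent as numpy object arrays.
    arr = np.array([[text.encode("utf-8")]], dtype=object)
    inp = httpclient.InferInput(name, [1, 1], "BYTES")
    inp.set_data_from_numpy(arr)
    return inp

inputs = [
    text_input("reference_text", "Transcript of the prompt audio."),
    text_input("target_text", "Text to synthesize."),
]
result = client.infer("cosyvoice2", inputs)  # placeholder model name
audio = result.as_numpy("waveform")          # placeholder output tensor name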

Benchmark with a Dataset

To benchmark the running Triton server, pass streaming or offline as the third argument:

bash run.sh 5 5 # [streaming|offline]

# You can also customize parameters such as the number of tasks and the dataset split:
# python3 client_grpc.py --num-tasks 2 --huggingface-dataset yuekai/seed_tts_cosy2 --split-name test_zh --mode [streaming|offline]
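
In streaming mode the server runs decoupled and returns audio chunk by chunk over a gRPC stream, which is what the first-chunk latencies below measure. A minimal sketch of such a client (tensor and model names are placeholders; client_grpc.py is the full reference implementation):

import queue

import numpy as np
import tritonclient.grpc as grpcclient

responses = queue.Queue()

def on_response(result, error):
    # Called once per streamed chunk (or with an error).
    responses.put(error if error is not None else result)

client = grpcclient.InferenceServerClient("localhost:8001")
client.start_stream(callback=on_response)

text = np.array([["Text to synthesize.".encode("utf-8")]], dtype=object)
inp = grpcclient.InferInput("target_text", [1, 1], "BYTES")  # placeholder name
inp.set_data_from_numpy(text)
client.async_stream_infer("cosyvoice2", [inp])               # placeholder name

first_chunk = responses.get()  # arrival of this response drives first-chunk latency
client.stop_stream()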

Tip

It is recommended to run the benchmark multiple times to get stable results after the initial server warm-up.

Benchmark Results

The following results were obtained by decoding on a single L20 GPU with 26 prompt audio/target text pairs from the yuekai/seed_tts dataset (approximately 170 seconds of audio):
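
The RTF (real-time factor) column is processing time divided by the duration of the generated audio, so lower is better and values under 1.0 are faster than real time. A quick illustration with made-up numbers:

def rtf(processing_seconds, audio_seconds):
    # Real-time factor: < 1.0 means faster than real time.
    return processing_seconds / audio_seconds

# Purely illustrative: ~170 s of audio synthesized in 15 s
print(rtf(15.0, 170.0))  # ~0.088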

Streaming TTS (First Chunk Latency)

Mode                        Concurrency   Avg Latency (ms)   P50 Latency (ms)   RTF
Streaming, Decoupled=True   1             220.43             218.07             0.1237
Streaming, Decoupled=True   2             476.97             369.25             0.1022
Streaming, Decoupled=True   4             1107.34            1243.75            0.0922

Offline TTS (Full Sentence Latency)

Mode                        Note     Concurrency   Avg Latency (ms)   P50 Latency (ms)   RTF
Offline, Decoupled=False    Commit   1             758.04             615.79             0.0891
Offline, Decoupled=False    Commit   2             1025.93            901.68             0.0657
Offline, Decoupled=False    Commit   4             1914.13            1783.58            0.0610

OpenAI-Compatible Server

To launch an OpenAI-compatible API service, run the following commands:

git clone https://github.com/yuekaizhang/Triton-OpenAI-Speech.git
cd Triton-OpenAI-Speech
pip install -r requirements.txt

# After the Triton service is running, start the FastAPI bridge:
python3 tts_server.py --url http://localhost:8000 --ref_audios_dir ./ref_audios/ --port 10086 --default_sample_rate 24000

# Test the service with curl:
bash test/test_cosyvoice.sh

Note

Currently, only the offline TTS mode is compatible with the OpenAI-compatible server.
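
As a quick programmatic smoke test, the bridge can presumably be called like any OpenAI-style speech endpoint. A hedged sketch: the /v1/audio/speech path and JSON fields follow the OpenAI convention and are assumptions here; test/test_cosyvoice.sh is the authoritative example:

import requests

resp = requests.post(
    "http://localhost:10086/v1/audio/speech",  # port from the command above
    json={
        "model": "cosyvoice",                  # placeholder model identifier
        "input": "Hello from CosyVoice!",
        "voice": "default",                    # assumed to map to a file in ref_audios/
    },
)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)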

Acknowledgements

This work originates from the NVIDIA CISI project. For more multimodal resources, please see mair-hub.