Merge pull request #831 from FunAudioLLM/dev/lyuxiang.lx

update readme
Xiang Lyu
2025-01-04 13:41:07 +08:00
committed by GitHub


@@ -80,8 +80,6 @@ sudo yum install sox sox-devel
We strongly recommend that you download our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT` and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.
If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.
``` python
# Download the pretrained models via the ModelScope SDK
from modelscope import snapshot_download
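# NOTE: the rest of this block was cut off by the diff hunk; the calls below are a
# hedged reconstruction of the download step described above. The ModelScope IDs
# assume the `iic/` namespace used by the project and may need adjusting.
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```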
@@ -118,7 +116,7 @@ pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
**Basic Usage**
We strongly recommend using `CosyVoice2-0.5B` for better performance.
For zero_shot/cross_lingual inference, please use `CosyVoice2-0.5B` `CosyVoice-300M` model.
For sft inference, please use `CosyVoice-300M-SFT` model.
For instruct inference, please use `CosyVoice-300M-Instruct` model.
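As a rough sketch of the zero_shot call pattern: the class `CosyVoice2`, the helpers `load_wav` and `inference_zero_shot`, and the prompt wav path below are assumed from the project's Python API and examples rather than shown in this diff, so treat them as illustrative.

``` python
# Minimal zero_shot sketch, assuming the project's Python API
# (CosyVoice2, load_wav, inference_zero_shot) and an example prompt wav.
import torchaudio
from cosyvoice.cli.cosyvoice import CosyVoice2
from cosyvoice.utils.file_utils import load_wav

# Load the recommended CosyVoice2-0.5B model downloaded in the step above.
cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B')

# Clone the voice of a 16 kHz prompt recording while speaking new text.
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot(
        'Text you want synthesized.',            # tts text
        'Transcript of the prompt recording.',   # prompt text
        prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
```

For sft or instruct inference the call pattern is analogous; only the model directory and method change (e.g. `inference_instruct`, as in the snippet referenced below).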
@@ -182,7 +180,6 @@ for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展
**Start web demo**
You can use our web demo page to get familiar with CosyVoice quickly.
We support sft/zero_shot/cross_lingual/instruct inference in the web demo.
Please see the demo website for details.
@@ -194,12 +191,11 @@ python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
**Advanced Usage**
For advanced users, we have provided training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
You can get familiar with CosyVoice by following this recipe.
**Build for deployment**
Optionally, if you want service deployment,
you can run the following steps.
``` sh
cd runtime/python