diff --git a/README.md b/README.md
index 68121cf..368d947 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
 Wenjiang Zhou
 Lyra Lab, Tencent Music Entertainment
 
-**[github](https://github.com/TMElyralab/MuseTalk)** **[huggingface](https://huggingface.co/TMElyralab/MuseTalk)** **[space](https://huggingface.co/spaces/TMElyralab/MuseTalk)** **[Technical report](https://arxiv.org/pdf/2410.10122)**
+**[github](https://github.com/TMElyralab/MuseTalk)** **[huggingface](https://huggingface.co/TMElyralab/MuseTalk)** **[space](https://huggingface.co/spaces/TMElyralab/MuseTalk)** **[Technical report](https://arxiv.org/abs/2410.10122)**
 
 We introduce `MuseTalk`, a **real-time high quality** lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied with input videos, e.g., generated by [MuseV](https://github.com/TMElyralab/MuseV), as a complete virtual human solution.
 
@@ -44,7 +44,7 @@ Please find details in the following two links or contact zkangchen@tencent.com
 - [04/02/2024] Release MuseTalk project and pretrained models.
 - [04/16/2024] Release Gradio [demo](https://huggingface.co/spaces/TMElyralab/MuseTalk) on HuggingFace Spaces (thanks to HF team for their community grant)
 - [04/17/2024] : We release a pipeline that utilizes MuseTalk for real-time inference.
-- [10/18/2024] :mega: We publish the [technical report](https://arxiv.org/pdf/2410.10122).
+- [10/18/2024] :mega: We release the [technical report](https://arxiv.org/abs/2410.10122).
 
 ## Model
 ![Model Structure](assets/figs/musetalk_arc.jpg)