From 53a2b7e46ac272e4d725e0c4d7b4479363e1c580 Mon Sep 17 00:00:00 2001
From: czk32611
Date: Tue, 28 May 2024 14:46:17 +0800
Subject: [PATCH] Release MusePose

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index e824b50..e769a9f 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,8 @@ Wenjiang Zhou
 
 We introduce `MuseTalk`, a **real-time high quality** lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied with input videos, e.g., generated by [MuseV](https://github.com/TMElyralab/MuseV), as a complete virtual human solution.
 
+:new: Update: We are thrilled to announce that [MusePose](https://github.com/TMElyralab/MusePose/) has been released. MusePose is an image-to-video generation framework for virtual humans driven by control signals such as pose. Together with MuseV and MuseTalk, we hope the community will join us and march towards the vision of generating a virtual human end-to-end, with native full-body movement and interaction capabilities.
+
 # Overview
 
 `MuseTalk` is a real-time high quality audio-driven lip-syncing model trained in the latent space of `ft-mse-vae`, which