From 058f7ddc7fa3eb59a7e05e71e1ae7712cdd1ae2b Mon Sep 17 00:00:00 2001
From: phighting
Date: Wed, 27 Nov 2024 14:29:51 +0800
Subject: [PATCH] Update README.md

---
 README.md | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/README.md b/README.md
index eb8e7fb..4d5e077 100644
--- a/README.md
+++ b/README.md
@@ -21,15 +21,6 @@ We introduce `MuseTalk`, a **real-time high quality** lip-syncing model (30fps+
 
 :new: Update: We are thrilled to announce that [MusePose](https://github.com/TMElyralab/MusePose/) has been released. MusePose is an image-to-video generation framework for virtual human under control signal like pose. Together with MuseV and MuseTalk, we hope the community can join us and march towards the vision where a virtual human can be generated end2end with native ability of full body movement and interaction.
 
-# Recruitment
-Join Lyra Lab, Tencent Music Entertainment!
-
-We are currently seeking AIGC researchers including Internships, New Grads, and Senior (实习、校招、社招).
-
-Please find details in the following two links or contact zkangchen@tencent.com
-
-- AI Researcher (https://join.tencentmusic.com/social/post-details/?id=13488, https://join.tencentmusic.com/social/post-details/?id=13502)
-
 # Overview
 `MuseTalk` is a real-time high quality audio-driven lip-syncing model trained in the latent space of `ft-mse-vae`, which