diff --git a/README.md b/README.md
index 6542df7..1959619 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Chao Zhan, Wenjiang Zhou (*Equal Contribution, Corresponding Author, benbinwu@tencent.com)
 
-**[github](https://github.com/TMElyralab/MuseTalk)** **[huggingface](https://huggingface.co/TMElyralab/MuseTalk)** **Project(comming soon)** **Technical report (comming soon)**
+**[github](https://github.com/TMElyralab/MuseTalk)** **[huggingface](https://huggingface.co/TMElyralab/MuseTalk)** **Project (comming soon)** **Technical report (comming soon)**
 
 We introduce `MuseTalk`, a **real-time high quality** lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied with input videos, e.g., generated by [MuseV](https://github.com/TMElyralab/MuseV), as a complete virtual human solution.
@@ -37,18 +37,51 @@ MuseTalk was trained in latent spaces, where the images were encoded by a freeze
[This hunk and the three that follow (@@ -56,10 +89,10 @@, @@ -67,10 +100,10 @@, @@ -78,10 +111,10 @@) rework the HTML demo table, whose header cells include "Image" and "MuseV+MuseTalk", and add new demo rows; the table markup and embedded video links are not recoverable from this extract.]
@@ -96,7 +129,7 @@ MuseTalk was trained in latent spaces, where the images were encoded by a freeze
[This hunk replaces one row of an HTML table containing a "Link" cell; the surrounding markup is not recoverable from this extract.]
@@ -204,7 +237,7 @@ python -m scripts.inference --inference_config configs/inference/test.yaml --bbo
 #### Combining MuseV and MuseTalk
-You are suggested to first apply [MuseV](https://github.com/TMElyralab/MuseV) to generate a video by referring [this](https://github.com/TMElyralab/MuseV?tab=readme-ov-file#text2video). Then, you can use `MuseTalk` by referring [this]().
+As a complete solution to virtual human generation, you are suggested to first apply [MuseV](https://github.com/TMElyralab/MuseV) to generate a video (text-to-video, image-to-video or pose-to-video) by referring [this](https://github.com/TMElyralab/MuseV?tab=readme-ov-file#text2video). Then, you can use `MuseTalk` to generate a lip-sync video by referring [this](https://github.com/TMElyralab/MuseTalk?tab=readme-ov-file#inference).
 
 # Note
diff --git a/assets/BBOX_SHIFT.md b/assets/BBOX_SHIFT.md
index 3476997..b164f4a 100644
--- a/assets/BBOX_SHIFT.md
+++ b/assets/BBOX_SHIFT.md
@@ -1,15 +1,15 @@
 ## Why is there a "bbox_shift" parameter?
 When processing training data, we utilize the combination of face detection results (bbox) and facial landmarks to determine the region of the head segmentation box. Specifically, we use the upper bound of the bbox as the upper boundary of the segmentation box, the maximum y value of the facial landmarks coordinates as the lower boundary of the segmentation box, and the minimum and maximum x values of the landmarks coordinates as the left and right boundaries of the segmentation box. By processing the dataset in this way, we can ensure the integrity of the face.
-However, we have observed that the masked ratio on the face varies across different images due to the varying face shapes of subjects. Furthermore, we found that the upper-bound of the mask mainly lies close to the 27th, 28th and 30th landmark points (as shown in Fig.1), which correspond to proportions of 15%, 63%, and 22% in the dataset, respectively.
+However, we have observed that the masked ratio on the face varies across different images due to the varying face shapes of subjects. Furthermore, we found that the upper-bound of the mask mainly lies close to the landmark28, landmark29 and landmark30 landmark points (as shown in Fig.1), which correspond to proportions of 15%, 63%, and 22% in the dataset, respectively.
 
-During the inference process, we discovered that as the upper-bound of the mask gets closer to the mouth (30th), the audio features contribute more to lip motion. Conversely, as the upper-bound of the mask moves away from the mouth (28th), the audio features contribute more to generating details of facial disappearance. Hence, we define this characteristic as a parameter that can adjust the effect of generating mouth shapes, which users can adjust according to their needs in practical scenarios.
+During the inference process, we discover that as the upper-bound of the mask gets closer to the mouth (near landmark30), the audio features contribute more to lip movements. Conversely, as the upper-bound of the mask moves away from the mouth (near landmark28), the audio features contribute more to generating details of facial appearance. Hence, we define this characteristic as a parameter that can adjust the contribution of audio features to generating lip movements, which users can modify according to their specific needs in practical scenarios.
 
 ![landmark](figs/landmark_ref.png)
 
 Fig.1. Facial landmarks
 
 ### Step 0.
-Running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.
+Running with the default configuration to obtain the adjustable value range.
 ```
 python -m scripts.inference --inference_config configs/inference/test.yaml
 ```
@@ -19,7 +19,7 @@ Total frame:「838」 Manually adjust range : [ -9~9 ] , the current value: 0
 *************************************************************************************************************************************
 ```
 ### Step 1.
-re-run the script within the above range.
+Re-run the script within the above range.
 ```
 python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift xx # where xx is in [-9, 9].
 ```
diff --git a/assets/demo/man/man.png b/assets/demo/man/man.png
new file mode 100644
index 0000000..06a85a2
Binary files /dev/null and b/assets/demo/man/man.png differ
diff --git a/assets/demo/musk/musk.png b/assets/demo/musk/musk.png
new file mode 100644
index 0000000..06522be
Binary files /dev/null and b/assets/demo/musk/musk.png differ
diff --git a/assets/demo/sit/sit.jpeg b/assets/demo/sit/sit.jpeg
new file mode 100644
index 0000000..7178a6b
Binary files /dev/null and b/assets/demo/sit/sit.jpeg differ
diff --git a/scripts/inference.py b/scripts/inference.py
index 5c659ea..c7bb88f 100644
--- a/scripts/inference.py
+++ b/scripts/inference.py
@@ -30,8 +30,8 @@ def main(args):
         input_basename = os.path.basename(video_path).split('.')[0]
         audio_basename = os.path.basename(audio_path).split('.')[0]
         output_basename = f"{input_basename}_{audio_basename}"
-        crop_coord_save_path = os.path.join(args.result_dir, input_basename+".pkl") # only related to video input
         result_img_save_path = os.path.join(args.result_dir, output_basename) # related to video & audio inputs
+        crop_coord_save_path = os.path.join(result_img_save_path, input_basename+".pkl") # only related to video input
         os.makedirs(result_img_save_path,exist_ok =True)
 
         if args.output_vid_name=="":
@@ -122,7 +122,7 @@ def main(args):
         os.system(cmd_combine_audio)
 
         os.system("rm temp.mp4")
-        os.system(f"rm -r {result_img_save_path}")
+        os.system(f"rm -rf {result_img_save_path}")
         print(f"result is save to {output_vid_name}")
 
 if __name__ == "__main__":
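
To make the `scripts/inference.py` hunks easier to follow, here is a minimal, self-contained sketch of the path logic after this change. The `Args` class and the example video/audio paths are hypothetical stand-ins (the real script presumably builds `args` with argparse and reads per-task `video_path`/`audio_path` from the inference config YAML), and `shutil.rmtree` is shown only as a shell-free equivalent of the `rm -rf {result_img_save_path}` cleanup that the script actually performs via `os.system`.

```python
import os
import shutil


class Args:
    """Hypothetical stand-in for the parsed CLI arguments used by scripts/inference.py."""
    result_dir = "./results"


args = Args()
video_path = "assets/demo/musk/musk.png"  # hypothetical example; real values come from the inference config
audio_path = "data/audio/demo.wav"        # hypothetical example

input_basename = os.path.basename(video_path).split('.')[0]  # -> "musk"
audio_basename = os.path.basename(audio_path).split('.')[0]  # -> "demo"
output_basename = f"{input_basename}_{audio_basename}"       # -> "musk_demo"

# After this diff, the cached face-crop coordinates (.pkl) are written inside the
# per-run output folder instead of directly under result_dir.
result_img_save_path = os.path.join(args.result_dir, output_basename)               # ./results/musk_demo
crop_coord_save_path = os.path.join(result_img_save_path, input_basename + ".pkl")  # ./results/musk_demo/musk.pkl
os.makedirs(result_img_save_path, exist_ok=True)

# ... frames would be generated into result_img_save_path and combined with the audio ...

# Shell-free equivalent of the `rm -rf {result_img_save_path}` cleanup in the diff.
shutil.rmtree(result_img_save_path, ignore_errors=True)
```

One side effect worth noting for reviewers (assuming the cleanup at the end of `main` still runs): because the coordinate cache now sits inside `result_img_save_path`, it is removed together with the intermediate frames and no longer persists across runs.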