mirror of
https://github.com/TMElyralab/MuseTalk.git
synced 2026-02-04 17:39:20 +08:00
Update README.md
@@ -1,15 +1,15 @@
 ## Why is there a "bbox_shift" parameter?
 
 When processing training data, we combine face detection results (bbox) with facial landmarks to determine the region of the head segmentation box. Specifically, we use the upper bound of the bbox as the upper boundary of the segmentation box, the maximum y value of the facial landmark coordinates as the lower boundary, and the minimum and maximum x values of the landmark coordinates as the left and right boundaries. Processing the dataset this way ensures the integrity of the face.
 
-However, we have observed that the masked ratio on the face varies across different images due to the varying face shapes of subjects. Furthermore, we found that the upper-bound of the mask mainly lies close to the 27th, 28th and 30th landmark points (as shown in Fig.1), which correspond to proportions of 15%, 63%, and 22% in the dataset, respectively.
+However, we have observed that the masked ratio of the face varies across images due to subjects' varying face shapes. Furthermore, we found that the upper bound of the mask mainly lies close to landmark points 28, 29 and 30 (as shown in Fig.1), which correspond to proportions of 15%, 63%, and 22% of the dataset, respectively.
 
-During the inference process, we discovered that as the upper-bound of the mask gets closer to the mouth (30th), the audio features contribute more to lip motion. Conversely, as the upper-bound of the mask moves away from the mouth (28th), the audio features contribute more to generating details of facial disappearance. Hence, we define this characteristic as a parameter that can adjust the effect of generating mouth shapes, which users can adjust according to their needs in practical scenarios.
+During inference, we observed that as the upper bound of the mask moves closer to the mouth (near landmark 30), the audio features contribute more to lip movements. Conversely, as the upper bound moves away from the mouth (near landmark 28), the audio features contribute more to generating details of facial appearance. Hence, we expose this characteristic as a parameter that adjusts the contribution of audio features to lip-movement generation, which users can tune to their needs in practical scenarios.
 
 
 Fig.1. Facial landmarks
 
 ### Step 0.
-Running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.
+Run with the default configuration to obtain the adjustable value range.
 ```
 python -m scripts.inference --inference_config configs/inference/test.yaml
 ```
@@ -19,7 +19,7 @@ Total frame:「838」 Manually adjust range : [ -9~9 ] , the current value: 0
 *************************************************************************************************************************************
 ```
 ### Step 1.
-re-run the script within the above range.
+Re-run the script with a bbox_shift value from the range printed above.
 ```
 python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift xx # where xx is in [-9, 9].
 ```
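
For illustration, the cropping rule and the role of `bbox_shift` described in this diff can be sketched in Python. This is a minimal sketch under assumptions: the function `head_crop_box`, its signature, and the toy coordinates are hypothetical, not MuseTalk's actual implementation.

```python
# Hypothetical sketch of the head-segmentation box described above.
# The function name and signature are illustrative assumptions, not
# MuseTalk's actual code.

def head_crop_box(bbox, landmarks, bbox_shift=0):
    """Derive the head-segmentation box from a face bbox and landmarks.

    bbox:       (x0, y0, x1, y1) from a face detector.
    landmarks:  iterable of (x, y) facial landmark coordinates.
    bbox_shift: offset applied to the upper boundary; shifting it toward
                the mouth strengthens the audio features' contribution to
                lip movements, shifting it away favors facial detail.
    """
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    x0, y0, x1, y1 = bbox
    top = y0 + bbox_shift        # upper bound of the detector bbox
    bottom = max(ys)             # lowest landmark point (max y)
    return (min(xs), top, max(xs), bottom)

# Toy example: three landmarks inside a face bbox.
print(head_crop_box((10, 20, 110, 140), [(30, 60), (90, 62), (60, 130)]))
# (30, 20, 90, 130)
```

With `bbox_shift=0` the upper boundary is exactly the detector bbox's top edge; a positive shift moves it down toward the mouth, a negative shift moves it up, matching the adjustable range the script prints in Step 0.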