fix: Update README
@@ -4,7 +4,13 @@
[License: Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
[ModelScope Demo](https://www.modelscope.cn/studios/Damo_XR_Lab/LAM-A2E)
#### This project leverages audio input to generate ARKit blendshapes-driven facial expressions in ⚡real-time⚡, powering ultra-realistic 3D avatars generated by [LAM](https://github.com/aigc3d/LAM).
## Description
To enable ARKit-driven animation of the LAM model, we adapted ARKit blendshapes to align with FLAME's facial topology through manual customization. The LAM-A2E network follows an encoder-decoder architecture, as shown below. We adopt the state-of-the-art pre-trained speech model Wav2Vec for the audio encoder. The features extracted from the raw audio waveform are combined with style features and fed into the decoder, which outputs stylized blendshape coefficients.
<div align="center">
<img src="./assets/images/framework.png" alt="Architecture" width="90%" align=center/>
</div>
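The sketch below is not the official LAM-A2E implementation; it is a minimal PyTorch illustration of the encoder-decoder flow described above, assuming a Hugging Face Wav2Vec 2.0 encoder (`facebook/wav2vec2-base-960h`), a learned style embedding, a small per-frame MLP decoder, and 52 ARKit blendshape outputs. Class names and layer sizes are illustrative assumptions, not values from the released model.

```python
# Illustrative only; not the released LAM-A2E code. Layer sizes, names, and the
# frozen encoder are assumptions made to keep the example self-contained.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class Audio2Blendshapes(nn.Module):
    def __init__(self, num_styles=8, style_dim=64, num_blendshapes=52):
        super().__init__()
        # Pre-trained speech encoder; frozen here for simplicity.
        self.audio_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
        self.audio_encoder.requires_grad_(False)
        # One learnable style vector per speaking style.
        self.style_embed = nn.Embedding(num_styles, style_dim)
        hidden = self.audio_encoder.config.hidden_size  # 768 for the base model
        # Per-frame MLP over the concatenated [audio feature | style feature].
        self.decoder = nn.Sequential(
            nn.Linear(hidden + style_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_blendshapes),
            nn.Sigmoid(),  # blendshape coefficients live in [0, 1]
        )

    def forward(self, waveform, style_id):
        # waveform: (batch, samples) raw 16 kHz audio; style_id: (batch,)
        feats = self.audio_encoder(waveform).last_hidden_state    # (B, T, hidden)
        style = self.style_embed(style_id)                        # (B, style_dim)
        style = style.unsqueeze(1).expand(-1, feats.size(1), -1)  # (B, T, style_dim)
        return self.decoder(torch.cat([feats, style], dim=-1))    # (B, T, blendshapes)


# Example: one second of silence at 16 kHz with style 0.
model = Audio2Blendshapes().eval()
with torch.no_grad():
    coeffs = model(torch.zeros(1, 16000), torch.tensor([0]))
print(coeffs.shape)  # torch.Size([1, 49, 52]): one coefficient vector per ~20 ms frame
```

In this sketch the decoder is randomly initialized and only demonstrates the tensor flow; in practice it would be trained on paired audio and blendshape data to produce the stylized coefficients described above.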
## Demo