3DXTalker: Unifying Identity, Lip Sync, Emotion, and Spatial Dynamics in Expressive 3D Talking Avatars

Zhongju Wang¹, Zhenhong Sun², Beier Wang¹, Yifu Wang³, Daoyi Dong⁴, Huadong Mo¹, Hongdong Li²
¹University of New South Wales, ²Australian National University, ³Vertex Lab, ⁴University of Technology Sydney

3DXTalker generates identity-consistent, expressive 3D talking avatars from a single reference image and speech audio, achieving accurate lip synchronization, controllable emotional expression, and natural head-pose dynamics.

Abstract

Audio-driven 3D talking avatar generation is increasingly important in virtual communication, digital humans, and interactive media, where avatars must preserve identity, synchronize lip motion with speech, express emotion, and exhibit lifelike spatial dynamics, which together define a broader objective of expressivity. Achieving this remains challenging, however, due to insufficient training data covering few subject identities, narrow audio representations, and limited explicit controllability. In this paper, we propose 3DXTalker, an expressive 3D talking avatar framework built on data-curated identity modeling, audio-rich representations, and controllable spatial dynamics. 3DXTalker enables scalable identity modeling via a 2D-to-3D data curation pipeline and disentangled representations, alleviating data scarcity and improving identity generalization. We then introduce frame-wise amplitude and emotional cues beyond standard speech embeddings, enabling superior lip synchronization and nuanced expression modulation; these cues are unified by a flow-matching-based transformer to produce coherent facial dynamics. 3DXTalker also generates natural head-pose motion and supports stylized control via prompt-based conditioning. Extensive experiments show that 3DXTalker integrates lip synchronization, emotional expression, and head-pose dynamics within a unified framework and achieves superior performance in 3D talking avatar generation.

Method

Figure: Overview of the 3DXTalker framework.
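
To make the conditioning scheme from the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a flow-matching transformer that fuses speech embeddings, a frame-wise amplitude cue, and an emotion label into a velocity field over facial expression coefficients. Every name and dimension here (FlowMatchingTalker, audio_dim=768, expr_dim=64, the additive fusion) is an illustrative assumption, not the paper's released implementation.

# Hypothetical sketch of a flow-matching motion generator conditioned on
# speech embeddings, frame-wise amplitude, and an emotion label, in the
# spirit of the abstract. Names, dimensions, and the additive conditioning
# are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn

class FlowMatchingTalker(nn.Module):
    def __init__(self, audio_dim=768, expr_dim=64, d_model=256, n_emotions=8):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)     # per-frame speech embeddings
        self.amp_proj = nn.Linear(1, d_model)               # frame-wise amplitude cue
        self.emo_embed = nn.Embedding(n_emotions, d_model)  # emotion cue
        self.time_proj = nn.Linear(1, d_model)              # flow-matching time t
        self.x_proj = nn.Linear(expr_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, expr_dim)            # predicted velocity field

    def forward(self, x_t, t, audio, amplitude, emotion):
        # x_t: (B, T, expr_dim) noisy expression coefficients at flow time t
        # audio: (B, T, audio_dim); amplitude: (B, T, 1); emotion: (B,) int labels
        cond = (self.audio_proj(audio)
                + self.amp_proj(amplitude)
                + self.emo_embed(emotion)[:, None, :]
                + self.time_proj(t.view(-1, 1, 1).expand(-1, x_t.size(1), -1)))
        h = self.backbone(self.x_proj(x_t) + cond)
        return self.head(h)  # velocity v_theta(x_t, t, cond)

# One conditional flow-matching training step with a linear interpolation
# path: x_t = (1 - t) * x_0 + t * x_1, target velocity = x_1 - x_0.
def fm_loss(model, x_1, audio, amplitude, emotion):
    x_0 = torch.randn_like(x_1)                    # noise sample
    t = torch.rand(x_1.size(0), device=x_1.device)
    x_t = (1 - t)[:, None, None] * x_0 + t[:, None, None] * x_1
    v_pred = model(x_t, t, audio, amplitude, emotion)
    return ((v_pred - (x_1 - x_0)) ** 2).mean()

At inference, one would integrate the learned velocity field from Gaussian noise to expression coefficients (e.g., with a few Euler steps) while keeping the audio, amplitude, and emotion conditions fixed across steps.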

Emotion Control

Demo panels: Happy, Sad, Angry.

Head Pose Control

Camera Movements

Downstream Applications

BibTeX

@article{wang20263dxtalker,
  title={3DXTalker: Unifying Identity, Lip Sync, Emotion, and Spatial Dynamics in Expressive 3D Talking Avatars},
  author={Wang, Zhongju and Sun, Zhenhong and Wang, Beier and Wang, Yifu and Dong, Daoyi and Mo, Huadong and Li, Hongdong},
  journal={arXiv preprint},
  year={2026}
}