Human motion video generation has advanced significantly, yet existing methods still struggle to accurately render detailed body parts such as hands and faces, especially over long sequences and intricate motions. Current approaches also rely on fixed resolutions and struggle to maintain visual consistency. To address these limitations, we propose HumanDiT, a pose-guided Diffusion Transformer (DiT)-based framework trained on a large, in-the-wild dataset containing 14,000 hours of high-quality video, producing high-fidelity videos with fine-grained body rendering. Specifically, (i) HumanDiT, built on DiT, supports numerous video resolutions and variable sequence lengths, facilitating learning for long-sequence video generation; and (ii) we introduce a prefix-latent reference strategy to maintain personalized characteristics across extended sequences. Furthermore, during inference, HumanDiT leverages Keypoint-DiT to generate subsequent pose sequences, enabling video continuation from static images or existing videos. It also employs a Pose Adapter to perform pose transfer with given sequences. Extensive experiments demonstrate its superior performance in generating long-form, pose-accurate videos across diverse scenarios.
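To make the prefix-latent reference strategy concrete, the sketch below shows one way such conditioning can be wired into a diffusion sampling loop. It is a minimal, hypothetical PyTorch-style sketch, not the released implementation: the denoiser `dit`, the scheduler interface, and the tensor layout (batch, channels, frames, height, width) are all assumptions.

```python
import torch

def denoise_with_prefix_latent(dit, scheduler, ref_latent, pose_feats, num_frames):
    """Minimal sketch: keep the reference frame's latent noise-free while
    denoising the remaining frames, so attention over the full sequence can
    read identity and appearance cues from the clean prefix at every step."""
    b, c, _, h, w = ref_latent.shape                      # ref_latent: (B, C, 1, H, W)
    latents = torch.randn(b, c, num_frames, h, w, device=ref_latent.device)
    for t in scheduler.timesteps:
        # Prepend the clean reference latent along the temporal axis.
        x = torch.cat([ref_latent, latents], dim=2)
        noise_pred = dit(x, timestep=t, pose=pose_feats)  # pose-guided denoiser
        # Update only the generated frames; the prefix stays noise-free.
        latents = scheduler.step(noise_pred[:, :, 1:], t, latents).prev_sample
    return latents
```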
Overview of HumanDiT. HumanDiT generates videos from a single image using a pose-guided DiT model. A 3D VAE encodes video segments into latent space. With 3D full attention, the initial frame (green border) serves as a noise-free prefix latent (green cube) for reference. The pose guider extracts body and background pose features, while the DiT-based denoising model renders the final pixel results. During inference, the Keypoint-DiT model produces subsequent motions based on the pose of the first frame. Given a guiding pose sequence, the Pose Adapter transfers and refines poses via Keypoint-DiT to animate the reference image.
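Read end-to-end, the overview above corresponds to the following inference flow. This is a hedged sketch under assumed interfaces: `vae`, `pose_estimator`, `keypoint_dit`, `pose_adapter`, `pose_guider`, `denoiser`, and `scheduler` are placeholder names, not the released API, and `denoise_with_prefix_latent` is the sketch given earlier.

```python
def animate(ref_image, driving_poses=None, num_frames=48):
    # Encode the reference image as a single-frame, noise-free prefix latent.
    ref_latent = vae.encode(ref_image.unsqueeze(2))        # (B, C, 1, H, W)
    ref_pose = pose_estimator(ref_image)                   # keypoints of the reference frame

    if driving_poses is None:
        # Video continuation: Keypoint-DiT generates subsequent poses
        # starting from the pose of the first frame.
        poses = keypoint_dit.generate(ref_pose, num_frames)
    else:
        # Pose transfer: the Pose Adapter maps the driving sequence onto the
        # reference pose, then Keypoint-DiT refines the transferred poses.
        poses = keypoint_dit.refine(pose_adapter.transfer(driving_poses, ref_pose))

    pose_feats = pose_guider(poses)                        # body + background pose features
    latents = denoise_with_prefix_latent(denoiser, scheduler, ref_latent, pose_feats, num_frames)
    return vae.decode(latents)                             # decode latents back to video frames
```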
Given a single character image and a template pose video, our method generates vocal avatar videos featuring not only pose-accurate rendering but also realistic body shapes.
Our model supports video continuation from a single human image, generating diverse and realistic motions such as speeches and dancing.
@misc{gan2025humanditposeguideddiffusiontransformer,
title={HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation},
author={Qijun Gan and Yi Ren and Chen Zhang and Zhenhui Ye and Pan Xie and Xiang Yin and Zehuan Yuan and Bingyue Peng and Jianke Zhu},
year={2025},
eprint={2502.04847},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.04847},
}