SteadyDancer: Harmonized Human Image Animation with First-Frame Preservation
13 Feb 2026, 00:00 Z
TL;DR SteadyDancer reframes human image animation around an image-to-video pipeline, so generation starts from a stable first frame instead of loosely binding identity in a reference-to-video setup. The project ships open inference code, 14B weights, and a dedicated X-Dance benchmark aimed at harder mismatches between source images and driving videos.
What is SteadyDancer?
SteadyDancer is the official implementation of the paper SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation. It targets a common production problem: when the source image and driving video are not perfectly aligned, identity and structure drift over time.
The core design choice is to prefer an image-to-video (I2V) generation path over a reference-to-video (R2V) path. In the authors' framing, this helps preserve first-frame appearance and reduces abrupt transitions when source identity and driving motion are mismatched.
Links:
- Project page: https://mcg-nju.github.io/steadydancer-web
- Model weights (HF): https://huggingface.co/MCG-NJU/SteadyDancer-14B
- X-Dance benchmark: https://huggingface.co/datasets/MCG-NJU/X-Dance
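To get the released assets locally, the standard Hugging Face Hub tooling applies. The sketch below renders `huggingface-cli download` invocations for the two repos linked above; the repo ids come from the official links, while the local directory names and the helper function are illustrative assumptions, not part of the SteadyDancer repo.

```python
# Hedged sketch: building download commands for the released assets.
# Repo ids are taken from the project links; the flags reflect the generic
# `huggingface-cli download` interface, nothing SteadyDancer-specific.

ASSETS = [
    # (repo id, repo type, local dir) -- local dirs are illustrative
    ("MCG-NJU/SteadyDancer-14B", "model", "./SteadyDancer-14B"),
    ("MCG-NJU/X-Dance", "dataset", "./X-Dance"),
]

def download_command(repo_id: str, repo_type: str, local_dir: str) -> str:
    """Render a `huggingface-cli download` invocation for one repo."""
    cmd = ["huggingface-cli", "download", repo_id, "--local-dir", local_dir]
    if repo_type != "model":  # "model" is the CLI's default repo type
        cmd += ["--repo-type", repo_type]
    return " ".join(cmd)

for asset in ASSETS:
    print(download_command(*asset))
```

Note that X-Dance is a dataset repo, so it needs the explicit `--repo-type dataset` flag, while the 14B weights use the CLI's default model repo type.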
Why this matters for teams shipping AI video
Most identity-preserving pipelines look good on clean demos but break on realistic inputs where:
- The source portrait and driving performer differ in framing, body layout, or garment structure.
- Motion starts at a different temporal state than the source frame.
- Real footage includes blur, occlusion, and non-trivial camera changes.
SteadyDancer is explicitly built around those failure cases. The repo positions this as a practical response to spatio-temporal misalignment, not merely a benchmark increment.
Release snapshot (exact dates)
Based on the public README timeline:
- 2025-11-24: paper published on arXiv, inference code and 14B weights released.
- 2025-11-24: X-Dance benchmark released.
- 2025-11-27: multi-GPU inference support added (FSDP + xDiT USP).
- 2025-12-04: GGUF format weights released.
- 2025-12-08: ModelScope weights released.
- 2025-12-11 and 2025-12-12: additional ComfyUI workflows added, including multi-person workflow support.