dragapart.github.io - DragAPart: Learning a Part-Level Motion Prior for Articulated Objects

Description: Our model, DragAPart, can synthesize nuanced part-level object dynamics. Trained on synthetic data, at inference time it generalizes to real data and unseen categories.


Check this Gradio Demo page to interact with your favorite articulated objects!

We introduce DragAPart, a method that, given an image and a set of drags as input, can generate a new image of the same object in a new state, compatible with the action of the drags. Differently from prior works that focused on repositioning objects, DragAPart predicts part-level interactions, such as opening and closing a drawer. We study this problem as a proxy for learning a generalist motion model, not restricted to a specific kinematic structure or object category. To this end, we start from a pre-trained image generator and fine-tune it on a new synthetic dataset, Drag-a-Move, which we introduce.
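To make the input interface concrete, here is a minimal, hypothetical sketch of how an image plus a sparse set of pixel-space drags might be packaged for a drag-conditioned generator. The function name encode_drags and the 4-channel layout are illustrative assumptions for this sketch, not the actual drag encoding used by DragAPart.

import numpy as np

def encode_drags(drags, height, width):
    """Rasterize pixel-space drags (x0, y0, x1, y1) into a 4-channel map:
    channels 0-1 hold the normalized displacement at each drag's start pixel,
    channels 2-3 hold it at the end pixel. All other pixels stay zero."""
    cond = np.zeros((4, height, width), dtype=np.float32)
    for x0, y0, x1, y1 in drags:
        dx, dy = (x1 - x0) / width, (y1 - y0) / height
        cond[0:2, y0, x0] = (dx, dy)   # where the user grabs the part
        cond[2:4, y1, x1] = (dx, dy)   # where the user wants it to end up
    return cond

if __name__ == "__main__":
    h = w = 256
    image = np.zeros((h, w, 3), dtype=np.uint8)    # placeholder for the input photo
    drags = [(120, 140, 120, 200)]                 # one drag, e.g. pulling a drawer open
    cond = encode_drags(drags, h, w)
    # A drag-conditioned generator would consume (image, cond) and synthesize
    # the same object with the dragged part moved consistently with the drag.
    print(image.shape, cond.shape, int(np.count_nonzero(cond)))

The point of the sketch is only that the conditioning is sparse and part-agnostic: the model sees where a pixel is grabbed and where it should end up, and must infer which part moves and how.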

Our model is capable of preserving fine-grained texture details, generating plausible shading, handling thin structures, composing multi-part motion, "dreaming up" internal structures of the object, and generalizing to categories not seen during training.
