BANMo: Building Animatable 3D Neural Models from Many Casual Videos


Gengshan Yang (2), Minh Vo (3), Natalia Neverova (1), Deva Ramanan (2), Andrea Vedaldi (1), Hanbyul Joo (1)
(1) Meta AI, (2) Carnegie Mellon University, (3) Meta Reality Labs

Left: input videos. Right: reconstruction at each time instance; correspondences are shown in the same colors.

Abstract: Prior work on articulated 3D shape reconstruction often relies on specialized sensors (e.g., synchronized multi-camera systems) or pre-built 3D deformable models (e.g., SMAL or SMPL). Such methods are unable to scale to diverse sets of objects in the wild. We present BANMo, a method that requires neither a specialized sensor nor a pre-defined template shape. BANMo builds high-fidelity, articulated 3D models from many monocular casual videos in a differentiable rendering framework.

[Comparison] [Human-cap] [Cat-Coco] [Penguins] [Dog-Tetres] [Robot-Laikago] [Dancer-AMA] [Eagle] [Hands]
