Recent work has made significant progress on using implicit functions as a continuous representation for 3D rigid-object shape reconstruction. However, much less effort has been devoted to modeling general articulated objects. Compared to rigid objects, articulated objects have higher degrees of freedom, which makes it hard to generalize to unseen shapes. To deal with the large shape variance, we introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space, in which separate codes encode shape and articulation. With this disentangled continuous representation, we can control the articulation input and animate unseen instances with unseen joint angles. Furthermore, we propose a Test-Time Adaptation inference algorithm to adjust the model during inference. We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
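To make the disentangled design concrete, here is a minimal PyTorch sketch of the idea. The layer sizes, code dimensions, loss, and optimization settings below are illustrative assumptions, not the released A-SDF implementation.

```python
import torch
import torch.nn as nn

class ArticulatedSDFDecoder(nn.Module):
    """Maps (shape code, articulation code, 3D query point) -> signed distance."""
    def __init__(self, shape_dim=256, art_dim=1, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(shape_dim + art_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),
        )

    def forward(self, shape_code, art_code, xyz):
        # shape_code: (N, shape_dim), art_code: (N, art_dim), xyz: (N, 3)
        return self.net(torch.cat([shape_code, art_code, xyz], dim=-1))

decoder = ArticulatedSDFDecoder()  # weights assumed pre-trained and frozen

# Test-Time Adaptation (sketch): fit the latent codes to SDF samples drawn
# from the observation, e.g. points lifted from a partial point cloud.
obs_xyz = torch.rand(1024, 3) * 2 - 1  # hypothetical observed surface points
obs_sdf = torch.zeros(1024, 1)         # points on the surface have SDF = 0
shape_code = torch.zeros(1, 256, requires_grad=True)
art_code = torch.zeros(1, 1, requires_grad=True)
opt = torch.optim.Adam([shape_code, art_code], lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    pred = decoder(shape_code.expand(1024, -1),
                   art_code.expand(1024, -1), obs_xyz)
    (pred - obs_sdf).abs().mean().backward()  # L1 SDF fitting loss
    opt.step()
```

Because shape and articulation are separate codes, the fitted shape code can then be re-decoded at any articulation value to animate the recovered instance.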
Given one real-world depth image, the proposed A-SDF is capable of generating laptops with continuously changing articulations in 3D.
Use the slider to interact with the generated shapes.
Input Depth Image
We first learn a category-level Articulated Signed Distance Function (A-SDF) with a structured, disentangled latent space. Once learned, unseen shapes at unseen articulations can be generated simply by manipulating the articulation code. Use the slider to interact with the generated shapes.
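As a rough illustration of how such a slider could drive generation, the sketch below sweeps only the articulation code and re-extracts a mesh at each position. The grid resolution, angle convention, and use of skimage's marching cubes are assumptions for illustration, not the paper's pipeline.

```python
import torch
from skimage.measure import marching_cubes  # assumed mesh-extraction choice

@torch.no_grad()
def extract_mesh(decoder, shape_code, angle, res=64):
    # Evaluate the SDF of one instance on a dense grid at a given
    # articulation, then pull out the zero level set as a mesh.
    lin = torch.linspace(-1, 1, res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
    pts = grid.reshape(-1, 3)
    art = torch.full((pts.shape[0], 1), float(angle))
    sdf = decoder(shape_code.expand(pts.shape[0], -1), art, pts)
    verts, faces, _, _ = marching_cubes(sdf.reshape(res, res, res).numpy(),
                                        level=0.0)
    return verts, faces

# Each slider position maps to one articulation value; the shape code is fixed:
# meshes = [extract_mesh(decoder, shape_code, a) for a in (0, 30, 60, 90)]
```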
Shapes interpolated by DeepSDF show unrealistic deformation, whereas the proposed A-SDF faithfully interpolates the shapes in between.
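The difference can be summarized in a few lines (an illustrative sketch, not the evaluation code): DeepSDF must blend one entangled code, while A-SDF holds the shape code fixed and blends only the articulation.

```python
def deepsdf_interp(code_a, code_b, t):
    # Entangled: shape and articulation are blended together, so
    # intermediate codes can decode to unrealistic deformations.
    return (1 - t) * code_a + t * code_b

def asdf_interp(shape_code, art_a, art_b, t):
    # Disentangled: the instance identity is held fixed and only the
    # articulation code moves along the interpolation path.
    return shape_code, (1 - t) * art_a + t * art_b
```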