Many object pose estimation algorithms rely on the analysis-by-synthesis framework, which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that implicitly represents the appearance, shape, and pose of entire object categories, eliminating the need for explicit CAD models per object instance. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be devoted to capturing shape and local appearance (i.e., texture) variations jointly. At inference time, the synthesized images are compared to the target via an appearance-based loss, and the error signal is backpropagated through the network to the input parameters. Keeping the network weights fixed, this allows for joint iterative optimization of object pose, shape, and appearance, and we show experimentally that the method recovers object orientation with high accuracy from 2D images alone. When additionally provided with depth measurements to resolve scale ambiguities, the method accurately recovers the full 6DoF pose.
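The core optimization loop described above can be sketched as follows. This is a minimal, hedged illustration rather than the paper's implementation: the tiny `synthesis_net`, the latent `code` dimensionality, and the synthetic target are all stand-ins, but the mechanics match the description — the synthesis network's weights stay frozen while gradients of an appearance loss flow back to the input parameters.

```python
import torch

torch.manual_seed(0)

# Stand-in for the trained image synthesis network; its weights are kept
# fixed throughout fitting (hypothetical toy architecture, not the paper's).
synthesis_net = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16 * 16)
)
for p in synthesis_net.parameters():
    p.requires_grad_(False)

# Synthetic target image, rendered here from a hidden "ground-truth" code
# so the example is self-contained.
true_code = torch.randn(8)
target = synthesis_net(true_code).detach()

# Latent input jointly encoding pose, shape, and appearance; this is the
# only quantity being optimized.
code = torch.zeros(8, requires_grad=True)
optimizer = torch.optim.Adam([code], lr=0.05)

losses = []
for _ in range(200):
    optimizer.zero_grad()
    rendered = synthesis_net(code)
    # Appearance-based loss between synthesized and target image.
    loss = torch.nn.functional.mse_loss(rendered, target)
    loss.backward()   # error signal propagates back to the input code
    optimizer.step()
    losses.append(loss.item())

print(losses[0], "->", losses[-1])
```

In the paper this same pattern operates on real images with a category-level synthesis network, and the loss decreasing over iterations corresponds to the rendered object converging toward the observed pose, shape, and appearance.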

Published at

European Conference on Computer Vision (ECCV), 2020

@inproceedings{chen2020category,
  title     = {Category Level Object Pose Estimation via Neural Analysis-by-Synthesis},
  author    = {Chen, Xu and Dong, Zijian and Song, Jie and Geiger, Andreas and Hilliges, Otmar},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2020},
}