Learning to Predict 3D Rotational Dynamics from Images of a Rigid Body with Unknown Mass Distribution
Published in Aerospace, 2023
Recommended citation: Mason, J. J., Allen-Blanchette, C., Zolman, N., Davison, E., & Leonard, N. E. (2023). Learning to predict 3D rotational dynamics from images of a rigid body with unknown mass distribution. Aerospace, 10(11), 921. https://www.mdpi.com/2226-4310/10/11/921
Abstract
In many real-world settings, image observations of freely rotating 3D rigid bodies may be available when low-dimensional measurements are not. However, the high dimensionality of image data precludes the use of classical estimation techniques to learn the dynamics. The usefulness of standard deep learning methods is also limited, because an image of a rigid body reveals nothing about the distribution of mass inside the body, which, together with the initial angular velocity, determines how the body will rotate. We present a physics-based neural network model to estimate and predict 3D rotational dynamics from image sequences. We achieve this using a multi-stage prediction pipeline that maps individual images to a latent representation homeomorphic to $\mathbf{SO}(3)$, computes angular velocities from pairs of latent states, and predicts future latent states using the Hamiltonian equations of motion. We demonstrate the efficacy of our approach on new datasets of synthetic image sequences of rotating rigid bodies, including cubes, prisms, and satellites, with unknown uniform and non-uniform mass distributions. Our model outperforms competing baselines on our datasets, producing better qualitative predictions and reducing the error of the state-of-the-art Hamiltonian Generative Network by a factor of 2.
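The sketch below is a minimal, illustrative rendering of the three-stage pipeline the abstract describes, not the authors' implementation. It assumes a PyTorch setup with hypothetical module names (`RigidBodyPipeline`, `gram_schmidt`): an encoder maps each image to a latent rotation in $\mathbf{SO}(3)$, a finite-difference step recovers body-frame angular velocity from a pair of consecutive latent rotations, and future states are rolled out with the free rigid-body (Euler) equations parameterized by a learned inertia tensor standing in for the unknown mass distribution.

```python
# Illustrative sketch only; architecture details, dt, and inertia
# parameterization are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


def gram_schmidt(six_d: torch.Tensor) -> torch.Tensor:
    """Map a 6D vector to a rotation matrix in SO(3) via Gram-Schmidt."""
    a1, a2 = six_d[..., :3], six_d[..., 3:]
    b1 = a1 / a1.norm(dim=-1, keepdim=True)
    a2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1
    b2 = a2 / a2.norm(dim=-1, keepdim=True)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)


class RigidBodyPipeline(nn.Module):
    def __init__(self, img_dim: int = 28 * 28, dt: float = 1e-2):
        super().__init__()
        self.dt = dt
        # Stage 1: image -> latent rotation (homeomorphic to SO(3)).
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, 128),
                                     nn.ReLU(), nn.Linear(128, 6))
        # Learned diagonal, positive moments of inertia: the unknown mass
        # distribution the model must infer from motion alone (assumption).
        self.log_inertia = nn.Parameter(torch.zeros(3))

    def encode(self, img: torch.Tensor) -> torch.Tensor:
        return gram_schmidt(self.encoder(img))

    def body_angular_velocity(self, R0: torch.Tensor, R1: torch.Tensor) -> torch.Tensor:
        # Stage 2: finite-difference estimate of body-frame angular velocity
        # from two consecutive latent rotations.
        dR = R0.transpose(-1, -2) @ R1
        skew = (dR - dR.transpose(-1, -2)) / (2 * self.dt)
        return torch.stack([skew[..., 2, 1], skew[..., 0, 2], skew[..., 1, 0]], -1)

    def step(self, R: torch.Tensor, omega: torch.Tensor):
        # Stage 3: one explicit-Euler step of the free rigid-body equations,
        #   I * domega/dt = (I*omega) x omega,   dR/dt = R * hat(omega).
        I = torch.exp(self.log_inertia)
        omega_dot = torch.cross(I * omega, omega, dim=-1) / I
        omega_next = omega + self.dt * omega_dot
        w = self.dt * omega
        hat = torch.zeros(*w.shape[:-1], 3, 3, dtype=w.dtype, device=w.device)
        hat[..., 0, 1], hat[..., 0, 2] = -w[..., 2], w[..., 1]
        hat[..., 1, 0], hat[..., 1, 2] = w[..., 2], -w[..., 0]
        hat[..., 2, 0], hat[..., 2, 1] = -w[..., 1], w[..., 0]
        R_next = R @ torch.matrix_exp(hat)  # stay on SO(3) via the exponential map
        return R_next, omega_next
```

As a usage sketch, two consecutive frames would be encoded, `body_angular_velocity` would give the initial angular velocity, and repeated calls to `step` would roll the latent rotation forward for prediction and decoding.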