Abstract

With growing demands for accuracy in sensor fusion, increasing attention is being paid to temporal offsets as a source of deterministic error when processing data from multiple devices. Established approaches to the calibration of temporal offsets exploit domain-specific heuristics of common sensor suites and rely on simplifications to circumvent some of the challenges that arise when neither the temporal nor the spatial parameters are accurately known a priori. These properties make it difficult to generalize the work to other applications or different combinations of sensors. This work presents a general and principled approach to the joint estimation of temporal offsets and spatial transformations between sensors. Our framework exploits recent advances in continuous-time batch estimation and thus exists within the rigorous theoretical framework of maximum likelihood estimation. The derivation is presented without relying on unique properties of specific sensors and, therefore, represents the first general technique for temporal calibration in robotics. The broad applicability of this approach is demonstrated through spatiotemporal calibration of a camera with respect to an inertial measurement unit, as well as between a stereo camera and a laser range finder. The method is shown to be more repeatable and accurate than the current state of the art, estimating spatial displacements to millimeter precision and temporal offsets to a fraction of the fastest measurement interval.
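To make the estimation problem concrete, the following is a minimal sketch of a joint spatiotemporal objective of the kind the abstract describes; the notation (a spline-parameterized trajectory $\mathbf{x}(t)$ with coefficients $\boldsymbol{\theta}$, temporal offset $d$, extrinsic transform $\mathbf{T}$, and measurement model $h$) is illustrative and not taken verbatim from the paper.

$$
\hat{\boldsymbol{\theta}},\, \hat{d},\, \hat{\mathbf{T}}
\;=\; \operatorname*{arg\,min}_{\boldsymbol{\theta},\, d,\, \mathbf{T}}
\sum_{i=1}^{N} \mathbf{e}_i^{\mathsf{T}} \mathbf{R}_i^{-1} \mathbf{e}_i,
\qquad
\mathbf{e}_i \;=\; \mathbf{z}_i - h\!\big(\mathbf{T},\, \mathbf{x}(t_i + d)\big),
$$

where $\mathbf{x}(t)$ is a continuous-time trajectory (e.g., a B-spline with coefficients $\boldsymbol{\theta}$), $d$ is the temporal offset between sensor clocks, $\mathbf{T}$ is the rigid-body transformation between sensor frames, and $\mathbf{z}_i$ is a measurement with covariance $\mathbf{R}_i$ timestamped at $t_i$ in the sensor's own clock. Minimizing this weighted sum of squared residuals corresponds to maximum likelihood estimation under Gaussian noise; because the trajectory is continuous in time, evaluating it at the shifted time $t_i + d$ lets the offset be estimated jointly with the spatial parameters by standard nonlinear least squares, without sensor-specific heuristics.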

  • Publication date: 2016-04