When Monocular and Stereo Meet at the Tail of 3D Human Localization

Lorenzo Bertoni, Sven Kreiss,
Taylor Mordan, Alexandre Alahi
International Conference on Robotics and Automation (ICRA) 2021

Monocular and stereo vision are cost-effective solutions for 3D human localization in the context of self-driving cars or social robots. However, they are usually developed independently and have their respective strengths and limitations. We propose a novel unified learning framework that leverages the strengths of both monocular and stereo cues for 3D human localization. Our method jointly (i) associates humans in left-right images, (ii) deals with occluded and distant cases in stereo settings by relying on the robustness of monocular cues, and (iii) tackles the intrinsic ambiguity of monocular perspective projection by exploiting prior knowledge of the human height distribution. We specifically evaluate outliers as well as challenging instances, such as occluded and far-away pedestrians, by analyzing the entire error distribution and by estimating calibrated confidence intervals. Finally, we critically review the official KITTI 3D metrics and propose a practical 3D localization metric tailored for humans.
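To make the two cues concrete, below is a minimal, self-contained sketch (not the paper's learning framework): monocular depth from a pinhole model with a human-height prior, whose spread yields an irreducible error that grows with distance, and stereo depth from left-right disparity, which is accurate nearby but degrades for far-away or occluded pedestrians. The focal length, baseline, height statistics, and pixel measurements are assumed placeholder values for illustration.

```python
# Illustrative sketch (not the MonStereo network): how a human-height prior
# bounds monocular depth, and how stereo disparity complements it.
# All constants below are assumed, KITTI-like placeholder values.

FOCAL_PX = 721.5            # focal length in pixels (assumed)
BASELINE_M = 0.54           # stereo baseline in meters (assumed)
H_MEAN, H_STD = 1.75, 0.10  # human height prior in meters (assumed)


def mono_distance(pixel_height):
    """Depth from apparent pixel height under a pinhole model: z = f * H / h.
    The spread of the height prior gives an irreducible ambiguity that
    grows linearly with distance."""
    z = FOCAL_PX * H_MEAN / pixel_height
    # Propagate the prior's std through z(H) = f * H / h  ->  sigma_z = f * sigma_H / h
    z_std = FOCAL_PX * H_STD / pixel_height
    return z, z_std


def stereo_distance(disparity_px):
    """Depth from left-right disparity: z = f * b / d.
    Reliable for large disparities (nearby people), degrades at the tail."""
    return FOCAL_PX * BASELINE_M / disparity_px


if __name__ == "__main__":
    # A pedestrian imaged 45 px tall with a 14 px disparity (made-up numbers).
    z_mono, z_mono_std = mono_distance(45.0)
    z_stereo = stereo_distance(14.0)
    print(f"monocular: {z_mono:.1f} m +/- {z_mono_std:.1f} m (height-prior ambiguity)")
    print(f"stereo:    {z_stereo:.1f} m (from disparity)")
```

In the paper, these cues are fused by a learned network rather than by such closed-form equations; the sketch only illustrates why monocular estimates carry a height-prior ambiguity while stereo estimates weaken for distant and occluded pedestrians.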


MonStereo: When Monocular and Stereo Meet at the Tail of 3D Human Localization

L. Bertoni; S. Kreiss; T. Mordan; A. Alahi 

2021-03-22. International Conference on Robotics and Automation (ICRA), Xi'an, China, June 1-6, 2021, pp. 5126-5132. DOI: 10.1109/ICRA48506.2021.9561820.