3D Object Detection (2020)
Self-supervised 3D Object Detection from Monocular Pseudo-LiDAR, Curie Kim, Ue-Hwan Kim, and Jong-Hwan Kim
- Published in the 2022 IEEE International Conference on Multisensor Fusion and Integration (MFI).
- [GitHub Code]
- [arXiv]
Network Architecture
Our 3D object detection network. Three sequential images, I<sub>t−1</sub>, I<sub>t</sub>, and I<sub>t+1</sub>, are used as inputs to estimate the camera pose, while the depth network takes only I<sub>t</sub>. Training with the supervised loss (D), the self-supervised loss (M), or both (MD) is available. The predicted depth is converted into a pseudo-LiDAR form through the change-of-representation scheme proposed by [3], and the 3D object detection network then detects 3D objects by treating the result as a LiDAR sensor measurement.
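The change of representation above back-projects each pixel of the predicted depth map into a 3D point using the camera intrinsics, as in the pseudo-LiDAR paper [3]. A minimal sketch of this conversion (the function name and NumPy-based interface are illustrative, not from the paper's code):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map of shape (H, W) into an (N, 3)
    point cloud in the camera frame. For a pixel (u, v) with depth z:
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
    which is the standard pinhole-camera inverse projection used by
    the pseudo-LiDAR change of representation.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (H*W, 3) array of (x, y, z) points.
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

The resulting point cloud can then be fed to any LiDAR-based 3D detector unchanged, which is the key idea of the pseudo-LiDAR representation.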
Abstract
There have been attempts to detect 3D objects by fusing stereo camera images with LiDAR sensor data, or by using LiDAR for pre-training and only monocular images at test time, but there have been fewer attempts to use only monocular image sequences due to their low accuracy. In addition, when depth is predicted from monocular images alone, only scale-inconsistent depth can be recovered, which is why researchers are reluctant to use monocular images by themselves. We therefore propose a method that predicts absolute depth and detects 3D objects from monocular image sequences alone, by enabling end-to-end learning of the detection network and the depth prediction network. As a result, the proposed method surpasses existing methods in performance on the KITTI 3D dataset. Even when monocular images and 3D LiDAR are used together during training to improve performance, ours exhibits the best performance among methods using the same input. In addition, end-to-end learning not only improves depth prediction performance but also enables absolute depth prediction, because our network exploits the fact that 3D objects such as cars have approximately known physical sizes.
Depth Scaled Loss
Due to the inherent scale ambiguity of monocular depth estimation, monocular 3D object detection can become unstable. To deal with this, we propose a scale-aware depth estimation method. The key to overcoming the scale ambiguity is to represent depths as follows: \(\hat{d} = \frac{\bar{D}_{\text{prior}}}{\sigma_\text{min} + (\sigma_\text{max} - \sigma_\text{min}) \cdot x}\)
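The formula above can be sketched in a few lines: a normalized network output \(x \in [0, 1]\) (e.g. a sigmoid activation) is mapped through bounded inverse-scale limits to a depth anchored by a dataset-level depth prior. The constant values below are placeholders for illustration; the paper does not specify them here:

```python
import numpy as np

# Hypothetical constants: sigma_min / sigma_max bound the inverse scale,
# and D_PRIOR is a dataset-level mean depth prior (values are examples,
# not taken from the paper).
SIGMA_MIN, SIGMA_MAX = 0.1, 10.0
D_PRIOR = 20.0

def scaled_depth(x):
    """Map a normalized network output x in [0, 1] to a depth value:
        d_hat = D_prior / (sigma_min + (sigma_max - sigma_min) * x)
    The denominator stays within [sigma_min, sigma_max], so the predicted
    depth is bounded and tied to the absolute scale of D_prior.
    """
    sigma = SIGMA_MIN + (SIGMA_MAX - SIGMA_MIN) * np.asarray(x, dtype=float)
    return D_PRIOR / sigma
```

Because the denominator is clamped to \([\sigma_\text{min}, \sigma_\text{max}]\), the output depth is monotonically decreasing in \(x\) and can never collapse to zero or diverge, which is what stabilizes the downstream 3D detection.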
Depth Estimation Results
3D Object Detection Results