Can We Trust You? On Calibration of a Probabilistic Object Detector for Autonomous Driving

November 2019

tl;dr: Calibration of the network for a probabilistic object detector

Overall impression

The paper extends the authors' previous work on a probabilistic lidar detector and its successor. The detector itself is based on PIXOR.

Calibration: a probabilistic object detector should predict uncertainties that match the empirical frequency of correct predictions. For example, 90% of the predictions with a 0.9 score from a calibrated detector should be correct. Humans have an intuitive notion of probability in this frequentist sense. Cf. accurate uncertainty via calibrated regression and calibrating uncertainties in object detection.
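The frequentist check above can be visualized with a reliability diagram: bin detections by confidence score and compare each bin's mean score to its empirical accuracy. A minimal sketch (hypothetical helper name; assumes per-detection scores and 0/1 correctness labels, e.g. from IoU matching against ground truth):

```python
import numpy as np

def reliability_diagram(scores, correct, num_bins=10):
    """Return per-bin (mean score, empirical accuracy) pairs.

    A calibrated detector lies on the diagonal: detections
    scored ~0.9 should be correct ~90% of the time.
    """
    scores = np.asarray(scores, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    points = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores > lo) & (scores <= hi)
        if mask.any():
            points.append((scores[mask].mean(), correct[mask].mean()))
    return points
```

Plotting these points against the diagonal shows over- or under-confidence at a glance.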

Calibrated regression is a bit harder to interpret: P(gt <= F^{-1}(p)) = p, where F^{-1} = F_q is the inverse of the predicted CDF, i.e., the quantile function. In other words, the ground truth should fall below the predicted p-quantile exactly a fraction p of the time.
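This condition can be checked empirically: evaluate each predicted CDF at its ground truth and see how often the result lands below p. A sketch under the assumption (not from the paper) that the regressor outputs a Gaussian mean and standard deviation per sample:

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(x, mu, sigma):
    # Standard Gaussian CDF via the error function.
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def coverage(gt, mu, sigma, p):
    """Empirical frequency of gt <= F^{-1}(p).

    Equivalently, the fraction of samples with F(gt) <= p;
    for a calibrated regressor this should be close to p.
    """
    cdf_vals = np.array([gaussian_cdf(y, m, s)
                         for y, m, s in zip(gt, mu, sigma)])
    return float((cdf_vals <= p).mean())
```

Sweeping p over [0, 1] and plotting coverage(p) against p gives the regression analogue of a reliability diagram.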

Unreliable uncertainty estimates from object detectors can lead to wrong decision making in autonomous driving (e.g., at the planning stage).

The paper also presents a very good way to visualize uncertainty for 2D object detection.

Key ideas

\[ECE = \sum_{m=1}^{M} \frac{N_m}{N} \left| p^m - \hat{p}^m \right|\]
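The ECE above sums, over M confidence bins, the gap between each bin's mean confidence \(p^m\) and its empirical accuracy \(\hat{p}^m\), weighted by the bin's share of samples \(N_m / N\). A minimal sketch (hypothetical function name, binning convention assumed):

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=10):
    """ECE = sum_m (N_m / N) * |p^m - \\hat{p}^m|.

    Bins predictions by confidence, then compares each bin's
    mean confidence to its empirical accuracy.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        conf = confidences[mask].mean()  # p^m: mean confidence in bin m
        acc = correct[mask].mean()       # \hat{p}^m: empirical accuracy
        ece += (mask.sum() / n) * abs(conf - acc)
    return ece
```

A perfectly calibrated detector yields ECE = 0; an always-confident but half-correct one yields 0.5.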

Technical details