
Radar and Camera Early Fusion for Vehicle Detection in Advanced Driver Assistance Systems

December 2019

tl;dr: Early fusion of radar and camera by concatenating the radar range-azimuth map with camera features projected into the same grid via IPM.

Overall impression

This is follow-up work to Qualcomm's ICCV 2019 paper on radar object detection. The dataset still contains only California highway driving.

The radar data are acquired under the same technical spec as in the radar object detection paper. Adding camera information boosts radar-only performance only marginally (about 0.05%), and the fused model degrades only slightly when the camera input is set to 0. The camera information did, however, help reduce the lateral error.
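
A minimal sketch of the early-fusion idea, written as a PyTorch-style module. The channel counts, the cam_encoder/backbone layers, and the warp_to_ra projection are placeholders for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Sketch: concatenate the radar range-azimuth map with camera features
    resampled onto the same grid, then run a shared detection backbone.
    Channel counts and layers are placeholders, not the paper's design."""

    def __init__(self, radar_ch=1, cam_ch=16, out_ch=64):
        super().__init__()
        # lightweight per-pixel camera feature extractor
        self.cam_encoder = nn.Sequential(nn.Conv2d(3, cam_ch, 3, padding=1), nn.ReLU())
        # shared backbone operating on the fused range-azimuth grid
        self.backbone = nn.Sequential(
            nn.Conv2d(radar_ch + cam_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, radar_ra, camera_img, warp_to_ra):
        # radar_ra:   (B, radar_ch, R, A) range-azimuth map
        # camera_img: (B, 3, H, W) front-camera image
        # warp_to_ra: callable that projects image-plane features onto the
        #             (R, A) grid, e.g. an IPM-style grid_sample (assumed)
        cam_feat = self.cam_encoder(camera_img)
        cam_in_ra = warp_to_ra(cam_feat)                 # (B, cam_ch, R, A)
        fused = torch.cat([radar_ra, cam_in_ra], dim=1)  # early fusion by concatenation
        return self.backbone(fused)
```

The robustness point above can be probed with such a model by feeding a zeroed camera tensor (torch.zeros_like(camera_img)) at test time and measuring how much the detections degrade relative to the fused input.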

Training the camera branch in advance (similar to BEV-IPM) and freezing it during joint training yielded the best results.
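
A minimal sketch of that two-stage schedule, reusing the EarlyFusion module above; the checkpoint path, learning rate, and optimizer choice are assumptions:

```python
import torch

model = EarlyFusion()
# load a camera branch pretrained on its own (hypothetical checkpoint path)
model.cam_encoder.load_state_dict(torch.load("cam_branch_pretrained.pth"))

# freeze the camera branch so only the rest of the network is updated jointly
for p in model.cam_encoder.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```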

Key ideas

Technical details

Notes