Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous
Driving Scenarios 2023-01-7038
The results of monocular depth estimation are not satisfactory in autonomous
driving scenarios. Fusing radar and camera data is a feasible solution to the
depth estimation problem in such scenes. The radar-camera pixel depth
association (RC-PDA) model establishes a reliable correspondence between radar
depth measurements and camera pixels. In this paper, a new depth estimation
model named Deep-PDANet, built on RC-PDA, is proposed; it increases the depth
and width of the network and alleviates network degradation through residual
structures. Convolution kernels of different sizes are used in the basic units
to improve the extraction of global information while still capturing
per-pixel detail. The convergence speed and learning ability of the network
are improved by a staged training strategy with a multi-weight loss function.
Comparison experiments and an ablation study were performed on the nuScenes
dataset; the proposed model improves accuracy over the baseline model across
multiple metrics and outperforms existing state-of-the-art algorithms.
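To illustrate the abstract's description of residual basic units that combine
convolution kernels of different sizes, the following is a minimal PyTorch
sketch. The class name, channel count, and branch layout are assumptions made
for this example and are not taken from the paper's actual architecture.

```python
# Hypothetical sketch of a residual basic unit mixing kernel sizes.
# Names and structure are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class MixedKernelResidualBlock(nn.Module):
    """Residual block with a 3x3 branch (wider spatial context) and a 1x1
    branch (per-pixel information), one plausible reading of the abstract."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.conv1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection alleviates degradation as the network deepens.
        out = self.conv3x3(x) + self.conv1x1(x)
        return self.relu(out + x)

if __name__ == "__main__":
    # Minimal usage on a feature map fused from camera and radar channels
    # (shape chosen arbitrarily for the example).
    block = MixedKernelResidualBlock(channels=64)
    fused = torch.randn(1, 64, 112, 200)
    print(block(fused).shape)  # torch.Size([1, 64, 112, 200])
```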