Search Results

Technical Paper

Micro Gesture Recognition of the Millimeter-Wave Radar Based on Multi-branch Residual Neural Network

2022-12-22
2022-01-7074
Gesture recognition based on optical sensing has inherent limitations, whereas millimeter-wave (MMW) radar has shown significant advantages for gesture recognition. MMW radar has therefore become one of the most promising human-computer interaction devices and can be used for interaction with vehicle occupants. This paper proposes a multi-branch network based on a residual neural network (ResNet) to address insufficient feature extraction and fusion in MMW radar processing as well as excessive algorithm complexity. A sample library of six gestures is constructed, and the MMW radar signal is processed to establish the relationship between time and the gesture's motion parameters of range, velocity, and angle, from which deep features are extracted. The three deep features are then fused, and classification and recognition of the MMW radar gesture signals are finally realized through a fully connected layer.
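
The branch-and-fuse structure described in the abstract can be pictured with the minimal sketch below, assuming each gesture is represented by three 2-D parameter maps (range-time, Doppler-time, angle-time). The branch depth, channel counts, and input sizes are illustrative guesses, not the paper's exact configuration.

```python
# Minimal sketch of a multi-branch ResNet fusion network for radar gestures.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiBranchGestureNet(nn.Module):
    def __init__(self, num_gestures: int = 6):
        super().__init__()
        # One ResNet branch per radar parameter map; each map is single-channel.
        self.branches = nn.ModuleList()
        for _ in range(3):
            branch = resnet18(weights=None)
            branch.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                     padding=3, bias=False)
            branch.fc = nn.Identity()  # keep the 512-d pooled feature
            self.branches.append(branch)
        # Fuse the three deep features by concatenation, then classify.
        self.classifier = nn.Linear(3 * 512, num_gestures)

    def forward(self, range_time, doppler_time, angle_time):
        feats = [b(x) for b, x in zip(self.branches,
                                      (range_time, doppler_time, angle_time))]
        return self.classifier(torch.cat(feats, dim=1))

# Usage with dummy 128x128 parameter maps for a batch of 4 gestures:
net = MultiBranchGestureNet()
maps = [torch.randn(4, 1, 128, 128) for _ in range(3)]
logits = net(*maps)  # shape: (4, 6), one score per gesture class
```
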
Technical Paper

Improved Joint Probabilistic Data Association Multi-target Tracking Algorithm Based on Camera-Radar Fusion

2021-04-15
2021-01-5002
An improved Joint Probabilistic Data Association (JPDA) multi-target tracking algorithm based on camera-radar fusion is proposed to address the poor tracking performance of single sensors, unknown target detection probability, and missed valid targets in complex traffic scenarios. First, the association probability between each target and measurement is obtained according to the correlation rule between the target track and the measurement. The measurement set is then partitioned into camera-radar matched targets, camera-only matched targets, radar-only matched targets, and unmatched targets, and the association probabilities are corrected with different confidence levels for each category to avoid relying on an unknown detection probability.
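
A minimal sketch of the confidence-correction step might look as follows, assuming the ordinary JPDA association probabilities have already been computed. The category names and confidence values are hypothetical placeholders, not the paper's.

```python
# Hedged sketch: scale JPDA association probabilities by per-category
# confidence and renormalize. Confidence values below are illustrative.
import numpy as np

CONFIDENCE = {
    "camera_radar": 1.0,   # measurement matched by both sensors
    "camera_only": 0.7,
    "radar_only": 0.6,
    "no_match": 0.2,
}

def correct_association(beta: np.ndarray, categories: list[str]) -> np.ndarray:
    """Scale each measurement's association probability by its category
    confidence and renormalize so each target's probabilities (including
    the missed-detection event in column 0) still sum to one."""
    beta = beta.copy()
    for j, cat in enumerate(categories, start=1):  # column 0 = missed detection
        beta[:, j] *= CONFIDENCE[cat]
    return beta / beta.sum(axis=1, keepdims=True)

# Two targets, three measurements; column 0 is the "no measurement" event.
beta = np.array([[0.1, 0.5, 0.3, 0.1],
                 [0.2, 0.1, 0.2, 0.5]])
cats = ["camera_radar", "radar_only", "no_match"]
print(correct_association(beta, cats))
```
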
Technical Paper

Vehicle Detection Based on Deep Neural Network Combined with Radar Attention Mechanism

2020-12-29
2020-01-5171
In autonomous driving perception, target detection accuracy is an essential evaluation metric, especially for small targets. In this work, we propose a multi-sensor fusion neural network that combines radar and image data to improve the camera's detection confidence and the accuracy of bounding-box regression. The fusion network is based on the basic structure of single-shot multi-box detection (SSD). Inspired by attention mechanisms in image processing, our work incorporates prior knowledge from radar detections into the convolutional block attention module (CBAM), forming a new attention module called the radar convolutional block attention module (RCBAM). We add the RCBAM to the SSD target detection network to build a deep neural network that fuses millimeter-wave radar and camera data.
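
One way to picture a radar-informed CBAM-style block is sketched below: standard channel attention, plus a spatial attention map that also sees a radar prior (e.g. radar detections projected onto the image plane as a heatmap). The layer sizes and the way the radar prior enters the spatial attention are assumptions, since the abstract does not spell out the exact RCBAM design.

```python
# Hedged sketch of a CBAM-style block with a radar prior channel.
import torch
import torch.nn as nn

class RadarCBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: avg map, max map, and the radar prior channel.
        self.spatial = nn.Conv2d(3, 1, kernel_size=7, padding=3)

    def forward(self, x, radar_prior):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)  # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True),
                       radar_prior], dim=1)
        return x * torch.sigmoid(self.spatial(s))  # spatial attention

# Usage: a 64-channel feature map plus a 1-channel radar heatmap.
feat = torch.randn(2, 64, 38, 38)
prior = torch.rand(2, 1, 38, 38)
out = RadarCBAM(64)(feat, prior)  # same shape as feat
```
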
Technical Paper

Drivable Area Detection and Vehicle Localization Based on Multi-Sensor Information

2020-04-14
2020-01-1027
A multi-sensor information fusion framework serves as the eyes of unmanned driving and Advanced Driver Assistance Systems (ADAS), allowing them to perceive the surrounding environment. Beyond environment perception, real-time vehicle localization is also a key and difficult part of unmanned driving technology. A sudden loss of the high-precision GPS signal or defective lane lines makes vehicle localization considerably more difficult and dangerous during unmanned driving. In this paper, a road boundary feature extraction algorithm based on multi-sensor information fusion of automotive radar and vision is proposed to realize auxiliary localization of the vehicle. First, we designed a 79 GHz (78-81 GHz) Ultra-Wide Band (UWB) millimeter-wave radar that can obtain point cloud information of road boundary features such as guardrails and green belts.
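
As one concrete illustration of extracting a boundary feature from such a radar point cloud, a RANSAC line fit could look like the sketch below. The straight-line boundary model, thresholds, and synthetic data are illustrative stand-ins, not the paper's algorithm.

```python
# Hedged sketch: fit a straight road boundary to 2-D radar points via RANSAC.
import numpy as np

def fit_boundary_ransac(points: np.ndarray, n_iters: int = 200,
                        inlier_tol: float = 0.3) -> tuple[float, float]:
    """Fit x = a*y + b to radar points (x lateral, y longitudinal, meters),
    returning the line with the most inliers within inlier_tol."""
    rng = np.random.default_rng(0)
    best, best_count = (0.0, 0.0), -1
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        if abs(q[1] - p[1]) < 1e-6:
            continue  # degenerate sample: both points at the same range
        a = (q[0] - p[0]) / (q[1] - p[1])
        b = p[0] - a * p[1]
        count = np.sum(np.abs(points[:, 0] - (a * points[:, 1] + b))
                       < inlier_tol)
        if count > best_count:
            best, best_count = (a, b), count
    return best

# Synthetic guardrail points along x = 3.5 m with radar noise:
y = np.linspace(5, 60, 80)
pts = np.column_stack([3.5 + 0.1 * np.random.randn(80), y])
a, b = fit_boundary_ransac(pts)
print(f"boundary: x = {a:.3f}*y + {b:.3f}")
```
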