SLAM (Simultaneous Localization and Mapping) plays a key role in autonomous driving. Recently, 4D Radar has attracted widespread attention because it overcomes the limitations of conventional 3D millimeter-wave radar and can simultaneously measure the range, velocity, azimuth, and elevation of a target with high resolution. However, there are few studies applying 4D Radar to SLAM. In this paper, RI-FGO, a 4D Radar-Inertial SLAM method based on Factor Graph Optimization, is proposed. The RANSAC (Random Sample Consensus) method is used to eliminate dynamic obstacle points from a single scan, and the ego-motion velocity is estimated from the remaining static point cloud. A 4D Radar velocity factor is constructed in GTSAM, which takes the velocity estimated from a single scan as a measurement and is directly integrated into the factor graph. The 4D Radar point clouds of consecutive frames are matched to form the odometry factor.
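The single-scan ego-velocity step described above can be sketched as follows. This is an illustrative Python implementation, not the authors' code: the function name, RANSAC parameters, and inlier threshold are all assumptions. For a static point, the measured radial (Doppler) speed satisfies v_r = -d·v_ego, where d is the unit direction to the point, so the ego velocity can be solved by robust least squares.

```python
import numpy as np

def estimate_ego_velocity(directions, radial_speeds, iters=100, thresh=0.1, seed=0):
    """RANSAC + least-squares ego-velocity from 4D radar Doppler (sketch).

    directions: (N, 3) unit vectors from the sensor to each point.
    radial_speeds: (N,) measured radial velocities. For static points
    v_r = -d . v_ego, so we solve D @ v = -v_r and reject dynamic
    points as outliers.
    """
    rng = np.random.default_rng(seed)
    n = len(radial_speeds)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)          # minimal sample
        v, *_ = np.linalg.lstsq(directions[idx], -radial_speeds[idx], rcond=None)
        residuals = np.abs(directions @ v + radial_speeds)  # Doppler residual
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers (the assumed static points)
    v, *_ = np.linalg.lstsq(directions[best_inliers],
                            -radial_speeds[best_inliers], rcond=None)
    return v, best_inliers
```

In the paper's pipeline, the returned velocity would feed the 4D Radar velocity factor and the inlier mask would define the static cloud used for scan matching.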
Tire forces and moments play an important role in vehicle dynamics and safety. X-by-wire chassis components, including active suspension, electric power steering, and by-wire braking, can take tire forces as inputs to improve the vehicle's dynamic performance. To measure the dynamic wheel load accurately, most research has focused on kinematic parameters such as body longitudinal and lateral acceleration and load transfer. In this paper, the authors focus on the suspension system, avoiding dependence on accurate mass and aerodynamic models of the whole vehicle. The suspension geometry is represented by a spatial parallel mechanism model (RSSR model), which improves computation speed while preserving accuracy. A suspension force observer is constructed from parameters including spring-damper compression length, push-rod force, and knuckle accelerations, combining the kinematic and dynamic characteristics of the vehicle.
As a pollutant that cannot be ignored, soot has a great impact on human health, the environment, and energy conversion. In this investigation, the effects of residence time (25 ms, 35 ms, and 45 ms) and ammonia addition on the morphology and nanostructure of soot in laminar ethylene flames were studied under atmospheric conditions at two flame heights (15 mm and 30 mm). Transmission electron microscopy (TEM) and high-resolution transmission electron microscopy (HRTEM) are used to obtain the morphology of aggregates and the nanostructure of primary particles, respectively. In addition, to analyze particle nanostructure, an analysis program was built in MATLAB that extracts the fringe separation distance, fringe length, and fringe tortuosity of primary particles; it was verified against the known interlayer distance of multilayer graphene.
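The fringe length and tortuosity metrics named above are standard geometric quantities; they can be sketched as follows. This is a Python illustration of the geometry only (the paper's tool is MATLAB-based and additionally extracts the fringe separation distance, which requires neighboring fringes):

```python
import numpy as np

def fringe_metrics(points):
    """Fringe length and tortuosity from an ordered fringe skeleton.

    points: (N, 2) ordered (x, y) coordinates along one fringe,
    already scaled to physical units (e.g. nanometres).
    Tortuosity = arc length / end-to-end distance; 1.0 means a
    perfectly straight fringe (assumes the fringe is not closed).
    """
    pts = np.asarray(points, dtype=float)
    segments = np.diff(pts, axis=0)
    length = np.linalg.norm(segments, axis=1).sum()      # arc length
    endpoint_dist = np.linalg.norm(pts[-1] - pts[0])     # straight-line span
    return length, length / endpoint_dist
```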
Accurate and reliable localization in GNSS-denied environments is critical for autonomous driving. Nevertheless, LiDAR-based and camera-based methods are easily affected by adverse weather conditions such as rain, snow, and fog. The 4D Radar, with all-weather performance and high resolution, has attracted growing interest. Currently, there are few localization algorithms based on 4D Radar, so there is an urgent need to develop reliable and accurate positioning solutions. This paper introduces RIO-Vehicle, a novel estimator that tightly couples 4D Radar, IMU, and vehicle dynamics within a factor graph framework. RIO-Vehicle aims to achieve reliable and accurate vehicle state estimation, encompassing position, velocity, and attitude. To enhance the accuracy of relative constraints, we introduce a new integrated IMU/dynamics pre-integration model that combines a 2D vehicle dynamics model with a 3D kinematics model.
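The bias handling, noise propagation, and vehicle-dynamics coupling are what distinguish the paper's pre-integration model; the plain IMU pre-integration it builds on can be sketched as below (an illustrative, bias-free version in Python; the real model also propagates covariance and bias Jacobians):

```python
import numpy as np

def preintegrate(gyro, accel, dt):
    """Accumulate IMU deltas between two keyframes (bias-free sketch).

    gyro, accel: (N, 3) body-frame angular rate [rad/s] and specific
    force [m/s^2]; dt: sample period [s]. Returns the delta rotation
    (3x3 matrix), delta velocity, and delta position expressed in the
    first keyframe's body frame; gravity is handled by the estimator.
    """
    R = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (R @ a) * dt**2
        dv = dv + (R @ a) * dt
        # rotation increment via the Rodrigues formula (exp map)
        wx = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
        theta = np.linalg.norm(w) * dt
        if theta > 1e-8:
            K = wx / np.linalg.norm(w)
            dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
        else:
            dR = np.eye(3) + wx * dt  # small-angle approximation
        R = R @ dR
    return R, dv, dp
```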
LiDAR and camera fusion has emerged as a promising approach for improving place recognition in robotics and autonomous vehicles. However, most existing approaches treat the sensors separately, overlooking the potential benefits of correlation between them. In this paper, we propose a Cross-Modality Module (CMM) to leverage the potential correlation of LiDAR and camera features for place recognition. Besides, to fully exploit the potential of each modality, we propose a Local-Global Fusion Module that supplements global coarse-grained features with local fine-grained features. Experimental results on public datasets demonstrate that our approach improves average recall by 2.3%, reaching 98.7%, compared with simply stacking LiDAR and camera features.
An automated driving system is a multi-source sensor data fusion system. However, different sensor types have different operating frequencies, fields of view, detection capabilities, and data transmission delays. To address these problems, this paper introduces an out-of-sequence measurement processing mechanism into a multi-target detection and tracking system based on millimeter-wave radar and camera. Ablation experiments show that the longitudinal and lateral tracking performance of the fusion system is improved across different distance ranges.
The positioning system is a key module of autonomous driving. LiDAR SLAM systems face great challenges in scenarios with repetitive and sparse features. Without loop closure or measurements from other sensors, odometry matching errors and accumulated errors cannot be corrected. This paper proposes a construction method for LiDAR anchor constraints to improve the robustness of the SLAM system in such challenging environments. We propose a robust anchor extraction method that adaptively extracts suitable cylindrical anchors from the environment, such as tree trunks and light poles. Skewed tree trunks are detected by feature differences between laser scan lines, and boundary points on cylinders are removed to avoid misleading matches. After suitable anchors are detected, a factor graph-based anchor constraint construction method is applied: where an anchor is scanned directly, a direct constraint is constructed.
The driver monitoring system (DMS) plays an essential role in reducing traffic accidents caused by human errors due to driver distraction and fatigue. The vision-based DMS is the most widely used because of its non-contact operation and high recognition accuracy. However, the traditional RGB camera-based DMS has poor recognition accuracy under complex lighting conditions, while the IR-based DMS is costly. To improve the recognition accuracy of conventional RGB camera-based DMS under complicated illumination conditions, this paper proposes a lightweight low-illumination image enhancement network inspired by Retinex theory. The network is kept lightweight by introducing a pixel-wise adjustment function, and the optimization bottleneck problem is resolved by introducing a shortcut mechanism.
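The pixel-wise adjustment idea can be illustrated with a simple enhancement curve of the kind used by lightweight Retinex-inspired enhancers. The abstract does not give the exact function; the quadratic curve below (in the style of Zero-DCE) is an assumption for illustration only:

```python
import numpy as np

def pixelwise_enhance(img, alpha, n_iter=4):
    """Iterative pixel-wise curve adjustment (illustrative sketch).

    img: float array in [0, 1]; alpha: per-pixel (or scalar) curve
    parameter in [0, 1], in practice predicted by the lightweight
    network. Each pass applies x <- x + alpha * x * (1 - x), which
    brightens dark pixels more than bright ones and, for alpha in
    [0, 1], keeps values inside [0, 1].
    """
    x = np.asarray(img, dtype=float)
    for _ in range(n_iter):
        x = x + alpha * x * (1.0 - x)
    return x
```

The per-pixel form replaces heavy decomposition networks with a handful of element-wise operations, which is one common route to the "lightweight" property the abstract mentions.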
The Centrifugal Pendulum Vibration Absorber (CPVA) is used to absorb torsional vibrations excited by the engine and is increasingly used in modern powertrains. In research on the dynamic characteristics of the CPVA, it is necessary to obtain the real motion of the pendulum to validate the fitting performance of the mathematical model. The usual method is to install an angle sensor to measure the pendulum's movement. On the one hand, installing the sensor affects the pendulum's movement to a certain extent, so the measurement results do not match the actual motion. On the other hand, the pendulum's motion is not only rotation about the axis of the CPVA rotor but also includes translation relative to it, so it is difficult to obtain accurate motion with an angle sensor alone. We propose a non-contact centrifugal pendulum motion measurement method.
High vehicle speeds in low-illumination environments severely blur the images used by object detectors, which poses a potential threat to detector-based advanced driver assistance systems (ADAS) and autonomous driving systems. Augmenting the training images of object detectors is an efficient way to mitigate the threat of motion blur. However, little attention has been paid to the motion of the vehicle and the positions of objects in the traffic scene, which limits the consistency between the augmented images and real traffic scenes. In this paper, we present a vehicle kinematics-based image augmentation algorithm that models and analyzes the traffic scene to generate more realistic augmented images and achieve greater robustness of object detectors against motion blur. First, we propose a traffic scene model that considers vehicle motion and the relationship between the ego vehicle and objects in the scene.
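The core of such augmentation is a linear motion-blur point spread function whose length and direction follow the modeled vehicle motion. A minimal sketch is given below; it generates the kernel only, and the mapping from vehicle kinematics (relative speed, exposure time, object distance, camera intrinsics) to blur length in pixels is assumed to be done by the caller:

```python
import numpy as np

def motion_blur_kernel(length, angle_deg):
    """Linear motion-blur PSF (sketch).

    length: blur extent in pixels, derived from the scene model
    (apparent object motion during exposure); angle_deg: blur
    direction in the image plane. Returns a normalized 2-D kernel
    to be convolved with each channel of the training image.
    """
    size = max(3, int(np.ceil(length)) | 1)  # odd kernel size
    k = np.zeros((size, size))
    c = size // 2
    dx, dy = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    # rasterize a centered line segment of the given length
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        x, y = int(round(c + t * dx)), int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()
```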
Image corruptions due to noise, blur, contrast change, etc., can lead to a significant performance decline of Deep Neural Networks (DNN), which poses a potential threat to DNN-based autonomous vehicles. Previous works attempted to explain corruption from a Fourier perspective. By comparing the absolute Fourier spectrum difference between corrupted images and clean images in the RGB color space, they regard the noise from some corruptions (Gaussian noise, defocus blur, etc.) as concentrating on the high-frequency components while others (contrast, fog, etc.) concentrate on the low-frequency components. In this work, we present a new perspective that unifies corruptions as noise from high frequency and thus propose an image augmentation algorithm to achieve more robust performance against common corruptions. First, we notice the 1/f^α statistical rule of the natural image spectrum and the channel-wise differential sensitivity of the Human Visual System in the YCbCr color space.
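The 1/f^α rule can be verified empirically by radially averaging an image's power spectrum and fitting the log-log slope. The sketch below is illustrative, not the authors' procedure; the frequency band used for the fit is an assumption:

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a grayscale image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # mean power per integer radius (index = cycles/image)
    return np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())

def spectral_slope(img, f_min=2, f_max=None):
    """Fit alpha in P(f) ~ 1/f^alpha on a log-log scale."""
    p = radial_power_spectrum(img)
    f_max = f_max or min(img.shape) // 2
    f = np.arange(f_min, f_max)
    slope, _ = np.polyfit(np.log(f), np.log(p[f_min:f_max]), 1)
    return -slope
```

For a natural image, the fitted α typically lands near 2, consistent with the statistical rule the abstract cites.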
Multiple object detection and tracking are central to modeling the environment of autonomous vehicles, and Lidar is a key sensor for this task. By emitting thousands of laser pulses, Lidar provides high-resolution 3-D point clouds that measure physical surfaces over a complete 360-degree view, and it remains effective in challenging environments where other sensors may prove inadequate. This paper designs a Lidar-based multi-target detection and tracking system using traditional point cloud processing methods, including down-sampling, denoising, segmentation, and object clustering.
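The down-sampling step in such a pipeline is commonly a voxel grid filter; a minimal NumPy sketch is shown below (illustrative only, not the paper's implementation):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: replace all points falling in the same
    voxel by their centroid, a standard first step before denoising,
    segmentation, and clustering.

    points: (N, 3) array; voxel_size: cube edge length in metres.
    """
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, pts)                        # sum points per voxel
    return sums / counts[:, None]                        # centroid per voxel
```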
Visual sensors are widely used in autonomous vehicles (AVs) for object detection due to their abundant information and low cost. However, their performance is strongly affected by low-light conditions, such as when AVs drive at night or through tunnels. Low-light conditions degrade image quality and object detection performance, and may cause safety of the intended functionality (SOTIF) problems. Therefore, to analyze the performance limitations of visual sensors in low-light conditions, a controlled light experiment on a proving ground is designed. The influence of low-light conditions on a two-stage algorithm and a single-stage algorithm is compared and analyzed quantitatively by constructing an evaluation index set covering three aspects: missed detection, classification, and positioning accuracy.
With the widespread application of autonomous driving technology, occupant comfort has become a key topic. Occupant comfort in autonomous vehicles depends on the driving system's performance, so studying the causes of occupant discomfort helps in designing driving systems. Beyond NVH and thermal comfort, occupant comfort is also affected by other factors such as safety perception. To study the impact of safety perception on comfort, this paper designed a road experiment focused on overtaking scenarios. Because the interaction between the ego vehicle and other vehicles is strong during overtaking, occupants are more likely to receive visual stimuli, resulting in discomfort caused by safety perception. In the experiment, occupant discomfort scores were collected in real time using a tool developed in this paper.
Conventional optics-based gesture recognition has limitations, whereas millimeter-wave (MMW) radar has shown significant advantages in gesture recognition. The MMW radar has therefore become a promising human-computer interaction device for vehicle occupants. This paper proposes a multi-branch network based on a residual neural network (ResNet) to address the insufficient feature extraction and fusion of MMW radar data and excessive algorithm complexity. A gesture sample library of six gestures is constructed; the MMW radar signal is processed to establish the relationship between the range, velocity, and angle of the gesture motion and time, and deep features are extracted from each. The three deep features are then fused, and MMW radar gesture signals are classified and recognized through a fully connected layer.
Recent research in autonomous driving mainly considers uncertainty in the perception and prediction modules for safety enhancement. However, obstacles that block the field of view (FOV) of sensors can create blind areas, leaving environmental uncertainty a remaining challenge for autonomous vehicles. Current solutions mainly rely on passive obstacle avoidance in path planning instead of active perception to deal with unexplored high-risk areas. In view of this problem, this paper introduces the concept of information entropy, which quantifies the uncertain information in the blind area, into the motion planning module of autonomous vehicles. Based on a model predictive control (MPC) scheme, the proposed algorithm plans collision-free trajectories while actively exploring unknown areas to minimize environmental uncertainty. Simulation results under various challenging scenarios demonstrate the improvement in safety and comfort with the proposed perception-aware planning scheme.
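The blind-area uncertainty can be quantified as the Shannon entropy of an occupancy grid, which the planner then drives down by exploring. The sketch below is illustrative; the paper's exact entropy formulation and grid representation may differ:

```python
import numpy as np

def grid_entropy(occupancy):
    """Total Shannon entropy of an occupancy grid, in bits.

    occupancy: array of per-cell occupancy probabilities. Unobserved
    (blind-area) cells sit at p = 0.5 and contribute 1 bit each,
    while confidently known cells (p near 0 or 1) contribute ~0, so
    the sum measures how much of the scene remains uncertain.
    """
    p = np.clip(np.asarray(occupancy, dtype=float), 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))
```

Inside an MPC cost, a term proportional to the predicted post-observation entropy rewards trajectories whose sensor footprint covers more of the blind area.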
Aiming at the problems of ineffective collision avoidance and vehicle instability during emergency braking on roads with low adhesion or sudden changes in adhesion coefficient, a stability-coordinated emergency braking and collision avoidance control system (SEBCACS) is proposed. First, according to the motion of the ego vehicle and the target vehicle as well as the road adhesion conditions, a collision time model is proposed for evaluating the collision risk, and the expected deceleration required to avoid the collision is calculated. Then, the MPC method is used to calculate the yaw moment generated by the four-wheel braking forces required to maintain vehicle stability, according to the deviations of the actual yaw rate and side slip angle from their references. Finally, whether to apply additional yaw moment control is decided according to the body stability evaluation results.
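The expected-deceleration computation can be sketched with a constant-deceleration 1-D model. This is an assumption for illustration: the paper's collision time model additionally accounts for road adhesion, and the formula below holds only while both vehicles are still moving:

```python
def required_deceleration(gap, v_ego, v_lead, a_lead, margin=2.0):
    """Constant-deceleration estimate of the braking needed to avoid
    rear-ending a lead vehicle (simplified 1-D sketch).

    gap: current bumper-to-bumper distance [m]; v_ego, v_lead: speeds
    [m/s]; a_lead: lead-vehicle acceleration [m/s^2], negative when it
    brakes; margin: standstill safety distance [m]. Returns the ego
    deceleration magnitude [m/s^2].
    """
    v_rel = v_ego - v_lead
    if v_rel <= 0:
        return 0.0                    # not closing in; no braking needed
    usable = gap - margin
    if usable <= 0:
        return float("inf")           # already inside the safety margin
    # v_rel^2 / (2 * usable) cancels the closing speed exactly at the
    # margin; subtracting a_lead accounts for the lead still braking.
    return v_rel ** 2 / (2.0 * usable) - a_lead
```

In a full system, this value would be compared against the adhesion-limited deceleration (mu * g) to decide whether braking alone can avoid the collision.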
The gas diffusion layer (GDL), as a critical constituent of the proton exchange membrane fuel cell (PEMFC), plays a key role in mass, heat, electron, and species transport. A GDL generally has two distinct layers: a macro-porous substrate (MPS) and a micro-porous layer (MPL). The fibers in the MPS and the cracks formed on the MPL surface during deposition change the overall transport capacity and affect the output performance of the PEMFC. In this paper, artificial intelligence-based methods for identifying the structural features of fibers and cracks in GDL images are proposed. Fiber feature extraction is realized by a block probabilistic Hough transform and quadric voting based on the weighted K-means algorithm, while crack feature extraction is realized by a regional connectivity algorithm and geometric feature calculation based on the circumscribed graph of the crack region.
Deep neural network models have been widely used for environment perception in intelligent vehicles. However, due to the models' innate probabilistic nature, lack of transparency, and sensitivity to data, perception results carry inevitable uncertainties. To compensate for the weaknesses of probabilistic models, many approaches have been proposed to analyze and quantify such uncertainties. For safety-critical intelligent vehicles, the uncertainty analysis of data and models for environment perception is especially important, and uncertainty estimation offers a way to quantify perception risk. In this regard, it is essential to deliver a comprehensive survey. This work presents a comprehensive overview of uncertainty estimation in deep neural networks for environment perception of intelligent vehicles.