
Search Results

Technical Paper

An Augmented around View Monitor System Fusing Depth and Image Information during the Reversing Process

2020-04-14
2020-01-0095
The around view monitor (AVM) system for vehicles usually suffers from distortion of surrounding objects caused by incomplete rectification and stitching, which seriously affects the driver's judgment of the surrounding environment during the reversing process. To address this problem, an augmented around view monitor (AAVM) system fusing image and depth information is proposed, which highlights the point clouds of persons or vehicles at the rear of the vehicle. First, an around view image is generated from four fisheye cameras. Then, multiple TOF cameras are calibrated to improve the accuracy of their depth estimation and to obtain their extrinsic positions. Next, a 2D-driven object point cloud detection method is proposed to localize and segment object point clouds such as vehicles or persons.
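The fusion step this abstract describes, overlaying segmented TOF point clouds on the bird's-eye around view image, reduces to projecting 3D points through the calibrated extrinsics onto the ground-plane view. The sketch below is a minimal Python illustration of that projection; the image layout (vehicle at the centre, x forward, y left) and the metric scale `px_per_m` are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def project_object_to_avm(points_tof, T_vehicle_tof, px_per_m, avm_size):
    """Project an object point cloud from a TOF camera frame onto the
    bird's-eye around-view image (hypothetical layout: vehicle origin at
    the image centre, x forward / y left, metric scale px_per_m)."""
    # Homogeneous transform of Nx3 points into the vehicle frame.
    pts_h = np.hstack([points_tof, np.ones((points_tof.shape[0], 1))])
    pts_vehicle = (T_vehicle_tof @ pts_h.T).T[:, :3]

    h, w = avm_size
    # Drop the height coordinate: the AVM image is a top-down ground-plane view.
    u = (w / 2 - pts_vehicle[:, 1] * px_per_m).astype(int)   # left  -> image x
    v = (h / 2 - pts_vehicle[:, 0] * px_per_m).astype(int)   # forward -> image y (up)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return np.stack([u[keep], v[keep]], axis=1)

# Usage: paint the projected pixels to highlight a detected person/vehicle.
# uv = project_object_to_avm(obj_points, T_vehicle_tof, 50.0, avm.shape[:2])
# avm[uv[:, 1], uv[:, 0]] = (0, 0, 255)
```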
Technical Paper

MTCNN-KCF-deepSORT: Driver Face Detection and Tracking Algorithm Based on Cascaded Kernel Correlation Filtering and Deep SORT

2020-04-14
2020-01-1038
Driver face detection and tracking are important for Advanced Driver Assistance Systems (ADAS) and autonomous driving in various situations. The deep SORT algorithm integrates appearance information, a motion model, and the intersection-over-union (IOU) distance, and has been applied to face tracking, but it depends on detection information in every frame. When detection information is missing, the deep SORT algorithm waits until bounding boxes for the target are detected again, even if the target has not disappeared or been occluded. Hence, we propose a new tracker that does not completely depend on the detection algorithm and cascade it with the deep SORT algorithm to achieve stable driver face tracking. First, driver face detection and tracking are accomplished by the MTCNN-deep-SORT algorithm.
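The core idea described here, cascading a kernel correlation filter with the detector so that deep SORT still receives boxes when detections drop out, can be sketched as below using OpenCV's KCF tracker (from opencv-contrib). This is only a hedged illustration: the detector call and the downstream deep SORT update are hypothetical interfaces, not the authors' code.

```python
import cv2

class GapFillingTracker:
    """Minimal sketch of the cascade idea: when the face detector misses a
    frame, a KCF tracker bridges the gap so the downstream deep SORT stage
    still receives a bounding box."""

    def __init__(self):
        self.kcf = None

    def step(self, frame, detection):         # detection: (x, y, w, h) or None
        if detection is not None:
            # Fresh detection: re-initialise KCF so drift does not accumulate.
            self.kcf = cv2.TrackerKCF_create()
            self.kcf.init(frame, tuple(detection))
            return detection
        if self.kcf is not None:
            ok, box = self.kcf.update(frame)   # bridge the missing detection
            if ok:
                return tuple(int(v) for v in box)
        return None                            # target genuinely lost

# Per frame (hypothetical interfaces):
#   box = tracker.step(frame, detect_face(frame))
#   if box is not None: deep_sort.update(frame, [box])
```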
Technical Paper

Structural Improvement for the Crash Safety of Commercial Vehicle

2009-10-06
2009-01-2917
A statistical analysis of commercial vehicle crash accidents in China was carried out using the annual traffic accident reports from the Ministry of Public Security. The Chinese crash safety regulations for commercial vehicles are introduced. The main causes of severe injury to occupants of the cab in frontal crash accidents were studied. HYPERMESH software was used for finite element modelling of the frontal structure and cab of a production truck. A swing hammer impact simulation was conducted using LS-DYNA software, and the results were compared with test results to validate the model. A new supporting structure for the cab was proposed to improve occupant safety. Meanwhile, an extendable and retractable longitudinal beam energy-absorbing structure was also studied using the finite element model. The simulation results show that these structures can significantly improve the frontal crash safety of the commercial vehicle.
Technical Paper

Calibration and Stitching Methods of Around View Monitor System of Articulated Multi-Carriage Road Vehicle for Intelligent Transportation

2019-04-02
2019-01-0873
The around view monitor (AVM) system for a long-body road vehicle with multiple articulated carriages usually suffers from incomplete distortion rectification of the fisheye cameras and an irregular image stitching area caused by the changing relative positions of cameras on different carriages while the vehicle is in motion. To address these problems, a set of calibration and stitching methods for the AVM is proposed. First, a radial-distortion-based rectification method is adopted and improved. This method establishes two loss functions and solves the model parameters with a two-step optimization method. Then, the AVM system calibration is conducted and the perspective transformation matrix is calculated. After that, a static basic look-up table is generated based on the distortion rectification model and the perspective transformation matrix.
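The static look-up table mentioned here is typically built by composing the distortion rectification map with the perspective transformation, so that a single remap per frame produces the bird's-eye view. The sketch below illustrates that composition with OpenCV's fisheye model as a stand-in for the paper's radial-distortion model; the homography `H` (undistorted image to bird's-eye view) is assumed to come from the calibration step.

```python
import cv2
import numpy as np

def build_static_lut(K, D, H, img_size, bev_size):
    """Build a static look-up table mapping bird's-eye-view (BEV) pixels
    straight back to raw fisheye pixels, so one cv2.remap per frame does
    both rectification and perspective warping."""
    w, h = img_size
    bev_w, bev_h = bev_size
    # Per-pixel source coordinates for plain fisheye undistortion.
    map_x, map_y = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_32FC1)
    # Compose with the calibrated homography H (undistorted image -> BEV):
    # warping the maps themselves yields, for every BEV pixel, the raw
    # fisheye pixel it should sample from.
    lut_x = cv2.warpPerspective(map_x, H, (bev_w, bev_h), flags=cv2.INTER_LINEAR)
    lut_y = cv2.warpPerspective(map_y, H, (bev_w, bev_h), flags=cv2.INTER_LINEAR)
    return lut_x, lut_y

# Per frame: bev = cv2.remap(fisheye_frame, lut_x, lut_y, cv2.INTER_LINEAR)
```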
Technical Paper

Object Segmentation and Augmented Visualization Based on Panoramic Image Segmentation

2021-04-06
2021-01-0089
Panoramic images can provide critical information for Advanced Driving Assistance Systems (ADAS), such as parking spaces and surrounding vehicles. However, vehicles in the bird's-eye view image are severely distorted and incomplete, and the visual information becomes very blurred in environments with insufficient illumination. If the driver cannot see the surrounding environment, the risk of collision increases, especially during parking. To better perceive the local environment with the help of panoramic images, we use panoramic image segmentation results to construct a virtual surround view monitoring system that provides drivers with clearer perception information. First, a lightweight segmentation network is redesigned based on SegNet, which improves segmentation accuracy without increasing the model's inference time. Second, we build an augmented visualization around view monitor (AV-AVM) system based on the segmentation results.
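The abstract does not give the redesigned network, but the SegNet idea it builds on, an encoder-decoder in which max-pooling indices from the encoder drive unpooling in the decoder, can be shown in a few lines of PyTorch. The toy model below is purely illustrative; the channel width, depth, and class count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Illustrative SegNet-style encoder-decoder (not the paper's redesigned
    network): pooling indices from the encoder drive unpooling in the
    decoder, which keeps the model light while preserving boundaries."""

    def __init__(self, num_classes=4, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1),
                                 nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2, 2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, 2)
        self.dec = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                 nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                                 nn.Conv2d(ch, num_classes, 3, padding=1))

    def forward(self, x):
        f = self.enc(x)
        p, idx = self.pool(f)                          # remember where the maxima were
        u = self.unpool(p, idx, output_size=f.shape)   # sparse, index-guided upsampling
        return self.dec(u)                             # per-pixel class scores

# logits = TinySegNet()(torch.randn(1, 3, 256, 512))   # -> (1, 4, 256, 512)
```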
Technical Paper

A Semantic SLAM System Based on Visual-Inertial Information and around View Images for Underground Parking Lot

2021-04-06
2021-01-0078
As one of the most challenging driving tasks, parking is a common but particularly troublesome problem in large cities. Recently, automated valet parking (AVP) has become a hot research topic; it allows the driver to leave the vehicle in a drop-off area while the vehicle drives into the parking slot by itself. For AVP, precise localization is an indispensable module. However, the global positioning system (GPS) cannot be used in underground parking lots, and lidar-based localization is too expensive. To solve this problem, we propose a simultaneous localization and mapping system with the semantic information of parking slots (PS-SLAM), which is based on visual-inertial information and around view images. First, the multiple sensors are calibrated to obtain their intrinsic and extrinsic parameters. In this way, the around view image and the transformation matrices between sensors can be acquired.
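Once the around view image and the inter-sensor transformation matrices are available, parking-slot detections can be turned into metric landmarks for the SLAM map by back-projecting their image coordinates through the bird's-eye-view scale and the current vehicle pose. The snippet below is a hedged sketch of that conversion; the image layout, the scale `px_per_m`, and the function name are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def slot_corner_to_world(px_uv, T_world_vehicle, px_per_m, avm_size):
    """Convert a parking-slot corner detected in the around-view image into a
    metric landmark in the world frame (assumed layout: vehicle at the image
    centre, x forward / y left, ground plane at z = 0)."""
    h, w = avm_size
    u, v = px_uv
    # Pixel -> vehicle frame on the ground plane.
    x_v = (h / 2 - v) / px_per_m          # forward
    y_v = (w / 2 - u) / px_per_m          # left
    p_vehicle = np.array([x_v, y_v, 0.0, 1.0])
    # Vehicle frame -> world frame using the current visual-inertial pose.
    return (T_world_vehicle @ p_vehicle)[:3]
```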