Search Results

Technical Paper

A Localization System for Autonomous Driving: Global and Local Location Matching Based on Mono-SLAM

2018-08-07
2018-01-1610
The SLAM (Simultaneous Localization and Mapping) technique has been extended from robotics to autonomous vehicles for positioning. However, SLAM does not provide the global position of the vehicle, only a position relative to the starting point. To address this, a fast and accurate system was proposed that obtains both the local and the global position of the vehicle based on mono-SLAM, which performs SLAM with a single monocular camera at lower cost and power consumption. First, a rough latitude and longitude of the current position is obtained from an ordinary GPS receiver without differential correction. The mono-SLAM then runs on consecutive video frames to produce the local pose and trajectory, whose accuracy is further improved with IMU information. After that, a map patch centered on the rough GPS position is downloaded from OpenStreetMap.
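
The abstract outlines three ingredients: a coarse GPS fix, a metric mono-SLAM trajectory relative to the start, and an OpenStreetMap patch centered on the fix. The sketch below is not the authors' code; the function names, the flat-earth shift, and the example coordinates are illustrative assumptions. It shows one minimal way to anchor a local trajectory to the coarse fix and to pick the OSM slippy-map tile covering it.

    import math

    EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

    def local_to_global(lat0_deg, lon0_deg, east_m, north_m):
        """Shift a mono-SLAM position (metres east/north of the start) onto the
        coarse GPS fix using a flat-earth approximation (fine for short trajectories)."""
        lat = lat0_deg + math.degrees(north_m / EARTH_RADIUS_M)
        lon = lon0_deg + math.degrees(
            east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat0_deg))))
        return lat, lon

    def osm_tile_for(lat_deg, lon_deg, zoom=16):
        """Slippy-map tile indices of the OpenStreetMap tile containing the fix,
        i.e. the map patch 'centered on the rough position' from the abstract."""
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
        return x, y  # fetch e.g. https://tile.openstreetmap.org/{zoom}/{x}/{y}.png

    # Example: an assumed coarse GPS fix plus a SLAM displacement of 12 m east, 30 m north.
    lat, lon = local_to_global(31.2304, 121.4737, 12.0, 30.0)
    print(lat, lon, osm_tile_for(lat, lon))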
Technical Paper

System Design and Model of a 3D 79 GHz High Resolution Ultra-Wide Band Millimeter-Wave Imaging Automotive Radar

2018-08-07
2018-01-1615
Automotive radar is an important environment-perception sensor for advanced driving assistance systems (ADAS). It detects objects around the vehicle with high accuracy and works in all adverse weather conditions. Traditional automotive radar, however, cannot measure an object's height, so a manhole cover on the road surface or a guideboard high above the road may be mistaken for a stationary car. In such cases the adaptive cruise control system would decelerate or stop the vehicle unnecessarily and unsettle the driver. A 3D automotive radar with two-dimensional electronic scanning can measure a target's height as well as its azimuth angle. This paper presents a 79 GHz ultra-wide-band automotive 3D imaging radar. Thanks to the 4 GHz sweep bandwidth, its range resolution can be as fine as 3.75 cm.
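
The quoted figure follows from the standard range-resolution relation for a wideband radar, delta_R = c / (2 * B), with B the sweep bandwidth; a one-line check:

    C = 299_792_458.0      # speed of light, m/s
    BANDWIDTH_HZ = 4e9     # 4 GHz sweep bandwidth from the abstract

    range_resolution_m = C / (2 * BANDWIDTH_HZ)
    print(f"{range_resolution_m * 100:.2f} cm")   # ~3.75 cm, matching the paper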
Technical Paper

Semantic Segmentation for Traffic Scene Understanding Based on Mobile Networks

2018-08-07
2018-01-1600
Real-time and reliable perception of the surrounding environment is an important prerequisite for advanced driving assistance systems (ADAS) and automated driving, and vision-based detection plays a significant role in environment perception for automated vehicles. Although deep convolutional neural networks recognize common objects efficiently, they have difficulty accurately detecting special vehicles, rocks, road piles, construction sites, fences, and similar obstacles. In this work, we address traffic scene understanding with semantic image segmentation: both the drivable area and the class of each object can be obtained from the segmentation result. First, we define 29 classes of objects in traffic scenarios with distinct labels and modify the DeepLab V2 network. Then, to reduce running time, the MobileNet architecture is used to generate the feature maps in place of the original backbone.
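
A rough PyTorch sketch of the two ingredients named in the abstract: depthwise-separable (MobileNet-style) convolutions for a cheap backbone, and an atrous convolution feeding a per-pixel classifier over the 29 classes. This toy network is an illustrative assumption, not the authors' modified DeepLab V2.

    import torch
    import torch.nn as nn

    NUM_CLASSES = 29  # the 29 traffic-scene labels defined in the paper

    class DepthwiseSeparableConv(nn.Module):
        """MobileNet-style block: depthwise 3x3 conv followed by a 1x1 pointwise
        conv, much cheaper than a standard convolution of the same width."""
        def __init__(self, in_ch, out_ch, stride=1, dilation=1):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=dilation,
                          dilation=dilation, groups=in_ch, bias=False),
                nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, out_ch, 1, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.block(x)

    class TinySegNet(nn.Module):
        """Toy backbone + 1x1 classifier; logits are upsampled to input size."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                DepthwiseSeparableConv(32, 64, stride=2),
                DepthwiseSeparableConv(64, 128, stride=2),
                DepthwiseSeparableConv(128, 128, dilation=2),  # atrous conv keeps resolution
            )
            self.classifier = nn.Conv2d(128, NUM_CLASSES, 1)
        def forward(self, x):
            logits = self.classifier(self.backbone(x))
            return nn.functional.interpolate(logits, size=x.shape[2:],
                                             mode="bilinear", align_corners=False)

    pred = TinySegNet()(torch.randn(1, 3, 256, 512)).argmax(1)  # per-pixel class map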
Technical Paper

A New Method of Target Detection Based on Autonomous Radar and Camera Data Fusion

2017-09-23
2017-01-1977
Vehicle and pedestrian detection is the most important part of advanced driving assistance systems (ADAS) and automated driving, and the fusion of millimeter-wave radar and camera is an important trend for enhancing environment-perception performance. In this paper, we propose a vehicle and pedestrian detection method based on millimeter-wave radar and camera. The method detects vehicles and pedestrians within dynamic regions generated from the radar data and searched with sliding windows. First, the radar target information is mapped onto the image by a coordinate transformation. Then, by analyzing the scene, we derive the sliding windows. Next, the sliding windows are screened with HOG features and an SVM classifier in a rough detection stage, and a matching function confirms the targets. Finally, the remaining windows are examined in a precise detection stage and the detected windows are merged. The target detection process is carried out in the following three steps.
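
A hedged Python/OpenCV sketch of the first steps described above: projecting a radar target into the image by a coordinate transformation and growing a range-dependent dynamic search region around it. The calibration matrices, the window sizing rule, and the use of OpenCV's stock pedestrian HOG+SVM are illustrative assumptions, not the paper's own calibration or trained classifier.

    import numpy as np
    import cv2

    # Hypothetical calibration: camera intrinsics K and radar->camera extrinsics [R|t].
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.0, 0.3, 0.0])   # radar mounted ~0.3 m above the camera (assumed)

    def radar_to_pixel(p_radar_xyz):
        """Coordinate-transformation step: map a radar target (x, y, z in metres,
        radar frame) into image pixel coordinates with the pinhole model."""
        p_cam = R @ np.asarray(p_radar_xyz, dtype=float) + t
        u, v, w = K @ p_cam
        return u / w, v / w

    def roi_around(u, v, range_m, img_shape, base_px=2000.0):
        """Dynamic region: a search window whose size shrinks with target range,
        so distant targets get small windows and nearby targets get large ones."""
        half = int(base_px / max(range_m, 1.0))
        h, w = img_shape[:2]
        x0, y0 = max(int(u) - half, 0), max(int(v) - half, 0)
        x1, y1 = min(int(u) + half, w), min(int(v) + half, h)
        return x0, y0, x1, y1

    # Rough detection inside the region with OpenCV's stock HOG + linear-SVM
    # pedestrian detector (the paper trains its own HOG/SVM classifier instead).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder camera frame
    u, v = radar_to_pixel([2.0, 0.0, 20.0])            # target ~20 m ahead (axes assumed)
    x0, y0, x1, y1 = roi_around(u, v, range_m=20.0, img_shape=frame.shape)
    boxes, weights = hog.detectMultiScale(frame[y0:y1, x0:x1], winStride=(8, 8))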