Search Results

Journal Article

3D Auditory Displays for Parking Assistance Systems

The objective of this study was to investigate whether 3D auditory displays could be used to enhance parking assistance systems (PAS). Objective measurements and estimations of workload were used to assess the benefits of different 3D auditory displays. In today’s cars, PAS normally use a visual display together with simple sound signals to inform drivers of obstacles in close proximity. These systems rely heavily on the visual display, as the sound does not provide information about the obstacles' locations. This may cause the driver to lose focus on the surroundings and reduce situational awareness. Two user studies (during summer and winter) were conducted to compare three different systems. The baseline system corresponded to a system normally found in today’s cars. The other systems were designed with a 3D auditory display that conveyed where obstacles were located through sound; a visual display was also available. Both normal parking and parallel parking were conducted.
Technical Paper

3D Automotive Millimeter-Wave Radar with Two-Dimensional Electronic Scanning

Radar-based advanced driver assistance systems (ADAS) such as autonomous emergency braking (AEB) and forward collision warning (FCW) can reduce accidents, making vehicles safer for drivers and pedestrians. For active safety, automotive millimeter-wave radar plays an indispensable role in the automotive environmental sensing system, since it works effectively even in the bad weather where cameras fail. As road conditions grow increasingly complex, one crucial task of automotive radar is to detect and precisely distinguish objects that are close to each other. Nowadays almost all automotive radar products operate in two dimensions, measuring only range and azimuth. As a result, it is sometimes difficult for them to differentiate objects in their field of view, such as a car, a manhole cover, and a guide board, when those objects are vertically aligned.
Technical Paper

77 GHz Radar Based Multi-Target Tracking Algorithm on Expressway Condition

Multi-target tracking is a central aspect of modeling the surrounding environment of autonomous vehicles, and automotive millimeter-wave radar is a necessary component of the autonomous driving system. One of radar's biggest advantages is that it measures velocity directly; another is that it is less influenced by environmental conditions, working day and night, in rain or snow. In the expressway scenario, the forward-looking radar can generate multiple objects, so to properly track the leading vehicle or a neighbor-lane vehicle, a multi-target tracking algorithm is required. How to associate tracks with measurements (data association) is an important question in a multi-target tracking system. This paper applies the nearest-neighbor method to solve the data association problem and uses an extended Kalman filter to update the state of each track.
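As an illustration of the data-association step this abstract describes, the sketch below implements a greedy nearest-neighbor association in plain Python. The 2D point representation and the gating distance are simplifying assumptions for illustration, not details taken from the paper.

```python
import math

def nearest_neighbor_associate(tracks, measurements, gate=5.0):
    """Greedy nearest-neighbor association: each track takes the closest
    still-unassigned measurement within the gating distance (metres)."""
    assignments = {}  # track index -> measurement index
    used = set()
    for ti, (tx, ty) in enumerate(tracks):
        best, best_d = None, gate
        for mi, (mx, my) in enumerate(measurements):
            if mi in used:
                continue
            d = math.hypot(mx - tx, my - ty)
            if d < best_d:
                best, best_d = mi, d
        if best is not None:
            assignments[ti] = best
            used.add(best)
    return assignments

# Predicted track positions (x, y in metres) and new radar detections;
# the third detection is a new object outside every track's gate.
tracks = [(0.0, 0.0), (20.0, 3.5)]
measurements = [(19.2, 3.4), (0.8, 0.3), (55.0, -3.5)]
print(nearest_neighbor_associate(tracks, measurements))  # {0: 1, 1: 0}
```

Unassociated measurements (here, index 2) would typically spawn new tracks, while tracks that miss several associations in a row are deleted.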
Technical Paper

A Case Study in Applying a Product Line Approach for Car Periphery Supervision Systems

Car Periphery Supervision (CPS) systems comprise a family of automotive systems that are based on sensors installed around the vehicle to monitor its environment. The measurement and evaluation of sensor data enables the realization of several kinds of higher-level applications such as parking assistance or blind spot detection. Although a lot of similarity can be identified among CPS applications, these systems are traditionally built separately. Usually, each single system is built with its own electronic control unit, and it is likely that the application software is bound to the controller's hardware. Current systems engineering therefore often leads to a large number of inflexible, dedicated systems in the automobile that together add considerable power consumption, weight, and installation space and produce high manufacturing and maintenance costs.
Technical Paper

A Collision Avoidance Strategy Based on Inevitable Collision State

This paper proposes a collision avoidance strategy that takes over control of the ego vehicle when faced with urgent collision risk. To improve the strategy's applicability in complex scenarios, the theory of the Inevitable Collision State (ICS) is introduced to evaluate collision risk and compute the system's trigger flag, and vehicle dynamics are taken into account when modeling the ego vehicle to predict its subsequent motion. Vehicle-specific characteristics, including the braking system's reaction time and the braking-force build-up process, are also considered. To reduce the injury caused by collision accidents and minimize disruption to drivers, slight steering is added on top of emergency braking, with the direction of the steering angle determined according to Imitating Maneuvers (IM). A flow chart of the strategy is presented in the paper.
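The abstract's point about reaction time and braking-force build-up can be made concrete with a simple stopping-distance calculation. The three-phase model and all parameter values below are generic illustrative assumptions, not the paper's vehicle model.

```python
def stopping_distance(v0, a_max, t_react=0.3, t_build=0.2):
    """Stopping distance in three phases: constant speed during the
    braking system's reaction time, linear ramp of deceleration from 0
    to a_max during force build-up, then constant full deceleration.
    v0 in m/s, a_max in m/s^2; assumes the vehicle is still moving
    when full braking begins."""
    # phase 1: reaction time, no deceleration
    d = v0 * t_react
    # phase 2: deceleration ramps linearly, a(t) = a_max * t / t_build
    d += v0 * t_build - a_max * t_build ** 2 / 6.0
    v1 = v0 - a_max * t_build / 2.0   # speed at end of build-up
    # phase 3: constant deceleration a_max down to standstill
    d += v1 ** 2 / (2.0 * a_max)
    return d

# 72 km/h (20 m/s) with 8 m/s^2 peak deceleration: about 33 m
print(round(stopping_distance(20.0, 8.0), 2))
```

Ignoring the reaction and build-up phases (the idealized v0²/2a model) would underestimate this distance by roughly 10 m at this speed, which is why the paper accounts for both.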
Technical Paper

A Comparative Study on ROS2 Middleware - Performance Aspects within ADAS Simulation Platforms

An autonomous vehicle must perceive and interpret its surroundings and its interior exactly (“Sensing”). It then processes the information received and plans its driving strategy (“Processing”). Finally, it uses its powertrain, steering, and braking to move its wheels so that the planned driving strategy is put into practice (“Acting”). Testing an autonomous vehicle’s reaction to erratic traffic scenarios using prototypes would be impractical. Physically testing these scenarios can also be risky to human life and equipment. Additionally, the repetition involved in comprehensively testing all these scenarios could lead to human errors. Various self-driving car manufacturers have reported injuries and casualties during functional testing [1].
Technical Paper

A Comparative Study on Various Methodologies and Solutions for Evaluation of Short-Range Radar to Validate the Features of Autonomous Vehicle

An autonomous vehicle is capable of sensing its environment and making decisions automatically, with no human intervention. To achieve this goal, Advanced Driver Assistance System (ADAS) technologies play an important role, and these technologies are steadily improving and emerging. Sensing of the environment can be achieved with sensors such as radar and cameras. Radar sensors detect the range, speed, and direction of multiple targets using complex signal-processing algorithms. Both long-range and short-range radars are widely used in autonomous vehicles. Long-range radar sensors can be used to realize features such as Adaptive Cruise Control and Advanced Emergency Brake Assist, while short-range radar sensors are used for Blind Spot Monitoring, Lane Change Assist, Rear/Front Cross Traffic Alert, and Occupant Safe Exit. To realize autonomous-vehicle functionalities, four short-range radar sensors are required: two on the front and two on the rear (left and right).
Technical Paper

A Concise Camera-Radar Fusion Framework for Object Detection and Data Association

Multi-sensor fusion strategies have gradually become a consensus in autonomous driving research. Among them, radar-camera fusion has attracted wide attention for improving the dimension and accuracy of perception at a lower cost; however, the processing and association of radar and camera data have become an obstacle to related research. Our approach is to build a concise framework for camera and radar detection and data association. For visual object detection, the state-of-the-art YOLOv5 algorithm is further improved and serves as the image detector. Before the fusion process, the raw radar reflection data are projected onto the image plane and hierarchically clustered; the projected radar echoes and image detection results are then matched using the Hungarian algorithm. Thus, the category of each object and its corresponding distance and speed can be obtained, providing reliable input for the subsequent object tracking task.
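The matching step the abstract mentions solves a minimum-cost assignment problem between projected radar clusters and image detections. As a sketch of the idea, the brute-force search below returns the same optimal matching the Hungarian algorithm would for small problems (the Hungarian algorithm just solves it in O(n³)); the cost values are made-up pixel distances, not data from the paper.

```python
from itertools import permutations

def optimal_assignment(cost):
    """Minimum-cost one-to-one assignment of rows to columns.
    Exhaustive search stands in for the Hungarian algorithm here:
    both find the globally optimal matching."""
    n_rows, n_cols = len(cost), len(cost[0])
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n_cols), n_rows):
        c = sum(cost[i][perm[i]] for i in range(n_rows))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# cost[i][j] = pixel distance between projected radar cluster i and
# image bounding-box centre j (illustrative numbers).
cost = [
    [ 4.0, 90.0, 75.0],
    [60.0,  3.0, 82.0],
]
print(optimal_assignment(cost))  # ([0, 1], 7.0)
```

In practice, pairs whose cost exceeds a gating threshold would be rejected even if the assignment selects them, since a radar cluster may have no matching image detection at all.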
Technical Paper

A Context Aware Automatic Image Enhancement Method Using Color Transfer

Advanced Driver Assistance Systems (ADAS) have become an inevitable part of most modern cars. Their use is mandated by regulations in some cases, and driven in others by vehicle owners who have become more safety-conscious. Vision/camera-based ADAS are widely in use today. However, the performance of these systems depends on the quality of the image/video captured by the camera, and low illumination is one of the most important factors degrading image quality. To improve system performance under low illumination, the input images/frames must first be enhanced. In this paper, we propose an image enhancement algorithm that automatically enhances images to a near-ideal condition. This is accomplished by mapping features taken from images acquired under ideal illumination conditions onto the target low-illumination images/frames.
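A common form of color transfer maps the mean and standard deviation of a well-lit reference image onto a dark target image, channel by channel (the Reinhard et al. approach). The per-channel sketch below illustrates that generic idea only; it is not the paper's algorithm, and real implementations usually work in a decorrelated color space rather than raw RGB.

```python
import math

def channel_stats(channel):
    """Mean and (population) standard deviation of one channel's values."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    return mean, math.sqrt(var)

def transfer_channel(target, ref_mean, ref_std):
    """Shift and scale the target channel so its statistics match the
    well-lit reference image's mean and standard deviation, clamped to
    the valid 0-255 range."""
    t_mean, t_std = channel_stats(target)
    scale = ref_std / t_std if t_std > 0 else 1.0
    return [min(255.0, max(0.0, (v - t_mean) * scale + ref_mean))
            for v in target]

# A dim channel (values near 20) remapped to reference statistics
# (mean 128, std 40) spreads out into the normally-exposed range.
bright = transfer_channel([10.0, 20.0, 30.0], 128.0, 40.0)
print([round(v, 1) for v in bright])
```

Applying the same mapping to all three channels brightens and re-contrasts the frame before it is handed to the downstream detection pipeline.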
Technical Paper

A Data-Driven Radar Object Detection and Clustering Method Aided by Camera

The majority of road accidents are caused by human oversight. Advanced Driver Assistance Systems (ADAS) have the potential to reduce human error and improve road safety. With the rising demand for safety and a comfortable driving experience, ADAS functions have become an important feature when car manufacturers develop new models. ADAS requires high accuracy and robustness in the perception system. Camera and radar are often combined to create a fused result because each sensor has its own advantages and drawbacks: cameras are susceptible to bad weather and poor lighting conditions, while radar has low resolution and can be affected by metal debris on the road. Clustering radar targets into objects and determining whether radar targets are valid objects are challenging tasks. In the literature, rule-based and thresholding methods have been proposed to filter out stationary objects and objects with low reflection power.
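The rule-based thresholding baseline the abstract refers to can be sketched in a few lines. The target tuple layout, the ego-speed compensation, and both threshold values below are hypothetical choices for illustration, not the paper's method (which is data-driven precisely because such fixed rules are brittle).

```python
def filter_radar_targets(targets, ego_speed, rcs_min=-5.0, v_eps=0.5):
    """Rule-based radar target filter.
    target = (range_m, doppler_speed_mps, rcs_dbsm).
    Drops targets that appear stationary after compensating for the
    ego vehicle's own speed, and targets with too little reflected
    power to be a valid object."""
    kept = []
    for rng, doppler, rcs in targets:
        abs_speed = doppler + ego_speed  # crude absolute-speed estimate
        if abs(abs_speed) < v_eps:
            continue  # stationary clutter: guardrail, manhole cover
        if rcs < rcs_min:
            continue  # reflection power too low to trust
        kept.append((rng, doppler, rcs))
    return kept

# Ego at 10 m/s: a guardrail echo, a moving car, and a weak reflection.
targets = [(30.0, -10.0, 5.0), (50.0, -2.0, 10.0), (40.0, 5.0, -8.0)]
print(filter_radar_targets(targets, 10.0))  # keeps only the moving car
```

The obvious weakness, and the motivation for the learned approach, is that a stopped vehicle ahead is also filtered out by the stationarity rule.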
Technical Paper

A Digital Forensic Method to Detect Object based Video Forgery Security Attacks on Surround View ADAS Camera System

Present and future surround view camera systems provide a bird's-eye view of the driving environment to the driver through a real-time video feed on the digital cockpit infotainment display, which assists the driver in maneuvering, parking, and lane changing by performing object detection, object tracking, maneuver estimation, blind spot detection, lane detection, etc. The functional safety of this surround view camera system is compromised if it fails to alert the driver when obstacles are truly present in the vehicle's nearby driving environment, or if it alerts the driver when no obstacles are present. This malfunctioning of the surround view driver assistance system is due to an integrity compromise through cyberattacks, where attackers forge the displayed video data on the infotainment system, which has external-world connectivity.
Technical Paper

A Driver Assistance System for Improving Commercial Vehicle Fuel Economy

Commercial vehicle operators and governments around the world are looking for ways to cut down on fuel consumption for economic and environmental reasons. Two main factors affecting the fuel consumption of a vehicle are the drive route and the driver behavior. The drive route can be specified by information such as speed limit, road grade, road curvature, traffic etc. The driver behavior, on the other hand, is difficult to classify and can be responsible for as much as 35% variation in fuel consumption. In this work, nearly 600,000 miles of drive data is utilized to identify driving behaviors that significantly affect fuel consumption. Based on this analysis, driving scenarios and related driver behaviors are identified that result in the most efficient vehicle operation. A driver assistance system is presented in this paper that assists the driver in driving more efficiently by issuing scenario specific advice.
Technical Paper

A Driving Simulator HMI Study Comparing a Steering Wheel Mounted Display to HUD, Instrument Panel and Center Stack Displays for Advanced Driver Assistance Systems and Warnings

Simple, effective, and appropriately placed visual information must be available to the driver as part of a well-designed Human Machine Interface (HMI). Visual interfaces for Advanced Driver Assistance Systems (ADAS), secondary task control, and safety warnings should minimize both the driver's reaction time to warnings and the workload of comprehending a warning or responding to driving advice or information. A driving simulator study was designed and executed to assess the appropriateness and effectiveness of three display concepts. The study directly compared driver warning reaction and overall workload for three visual HMIs: the conventional instrument panel and center-stack displays (IP/CS), an idealized head-up display (HUD), and the Communication Steering Wheel (CSW) display. Study participants were required to respond to secondary convenience control tasks (4 tasks), safety warnings (3 scenarios), and a peripheral detection task (PDT).
Journal Article

A Flexible High-Performance Accelerator Platform for Automotive Sensor Applications

High-performance computer architectures for advanced driver assistance systems have become increasingly important in automotive research in the last several years. In order to achieve an optimal and robust perception of the vehicle's surroundings, current driver assistance applications typically rely on multiple sensor systems that deliver large amounts of incoming data from different sensor types. Such sensors include optical systems, which consist of a multi-camera setup combined with complex preprocessing algorithms. These algorithms exhibit high computation and data transport demands, as real-time image processing of multiple input streams is a mandatory requirement for these systems. At the same time, however, future driver assistance systems must adhere to strict power consumption requirements and automotive cost constraints in order to be considered for integration in series vehicles.
Journal Article

A Framework for Virtual Testing of ADAS

Virtual testing of advanced driver assistance systems (ADAS) in a simulation environment offers great potential for reducing real-world testing, and therefore much effort is currently spent on developing such tools. This work proposes a simulation and hardware-in-the-loop (HIL) framework that helps create a virtual test environment for ADAS based on real-world test drives. The idea is to reproduce the environmental conditions observed on a test drive within a simulation environment. For this purpose, a production-standard BMW 320d is equipped with a radar sensor to capture surrounding traffic objects and used as the test-drive vehicle. Post-processing of the recorded raw GPS data from the navigation system using an open-source map service, together with the radar data, allows an exact reproduction of the driven road including other traffic participants.
Technical Paper

A Framework for Vision-Based Lane Line Detection in Adverse Weather Conditions Using Vehicle-to-Infrastructure (V2I) Communication

Lane line detection is a very critical element of Advanced Driver Assistance Systems (ADAS). Although a significant amount of research has been dedicated to the detection and localization of lane lines in the past decade, there is still a gap in the robustness of the implemented systems. A major challenge for existing lane line detection algorithms stems from coping with bad weather conditions (e.g., rain, snow, fog, haze). Snow offers an especially challenging environment, where lane marks and road boundaries are completely covered. In these scenarios, on-board sensors such as cameras, LiDAR, and radar are of very limited benefit. This research focuses on improving the robustness of lane line detection in adverse weather conditions, especially snow. A framework is proposed that relies on Vehicle-to-Infrastructure (V2I) communication to access reference images stored in the cloud.
Technical Paper

A HiL Test Bench for Monocular Vision Sensors and Its Applications in Camera-Only AEBs

This paper presents a HiL test bench specifically designed for closed-loop testing of monocular-vision ADAS sensors, in which animated pictures of the virtual scene are calibrated and projected onto a 120-degree circular screen so that the installed camera sensor has the same view it would have of the real-world scene. A high-fidelity AEB model is established and deployed on the real-time target of the HiL system, making intervention decisions based on the instance-level detection information transmitted from the physical sensor. Following the 2018 edition of the C-NCAP testing protocol, HiL tests of rear-end collision scenarios are performed to investigate the performance and characteristics of the longitudinal-motion sensing of the sensor sample under test.
Journal Article

A Humanized Vehicle Speed Control to Improve the Acceptance of Automated Longitudinal Control

Vehicle speed controls, such as adaptive cruise control and its automated evolutions, are control systems able to follow a desired reference speed that is set by the driver and fused with information such as road signs and SD maps. Current production systems do not distinguish among a vehicle's users; only some carmakers are taking first steps toward learning from the driver to adapt the traditional control. In our work, we follow up on this with a humanized speed control based on learning the driver's longitudinal behavior. This method combines machine learning algorithms, vehicle positioning, and recurrent trips with existing automated longitudinal control systems. The proposed algorithm can reduce interactions between drivers and automated systems, improving the acceptance of automated longitudinal control. Furthermore, the proposed integration works mainly on the speed reference, which dramatically simplifies customization of the system.
Technical Paper

A Hybrid Classification of Driver’s Style and Skill Using Fully-Connected Deep Neural Networks

Driving style and skill classification are of great significance in human-oriented advanced driver-assistance system (ADAS) development. In this paper, we propose Fully-Connected Deep Neural Networks (FC-DNN) to classify drivers’ styles and skills with naturalistic driving data. Following data collection and pre-processing, FC-DNN is applied with a series of deep learning optimization algorithms. In the experimental part, the proposed model is validated and compared with other commonly used supervised learning methods, including k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT), random forest (RF), and multilayer perceptron (MLP). The results show that the proposed model achieves a higher Macro F1 score than the other methods. In addition, we discuss the effect of different time window sizes on the experimental results, finding that 1 s of driving information can improve the model's final evaluation score.
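Macro F1, the evaluation metric cited in this abstract, is the unweighted mean of per-class F1 scores, which keeps rare driving-style classes from being drowned out by common ones. A minimal reference implementation (the example labels are invented for illustration):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: compute F1 per class, then take the
    unweighted mean over all classes."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical style labels: one "calm" sample misclassified.
y_true = ["calm", "calm", "aggressive", "aggressive"]
y_pred = ["calm", "aggressive", "aggressive", "aggressive"]
print(round(macro_f1(y_true, y_pred, ["calm", "aggressive"]), 4))
```

By contrast, plain accuracy on the same example is 0.75, illustrating how the macro average penalizes the error on the smaller "calm" class more heavily.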
Technical Paper

A Lane Departure Estimating Algorithm Based on Camera Vision, Inertial Navigation Sensor and GPS Data

In this paper, a sensor fusion approach is introduced to estimate lane departure. The proposed algorithm combines camera, inertial navigation sensor, and GPS data with the vehicle dynamics to estimate the vehicle path and the lane departure time. The lane path and vehicle path are estimated using Kalman filters. This algorithm can be used to provide early warning of lane departure in order to increase driving safety. By integrating inertial navigation sensor and GPS data, the inertial sensor biases can be estimated, and the vehicle path can still be estimated where GPS data is unavailable or poor. Additionally, the algorithm can reduce the latency of information embedded in the controls, so that lateral control performance can be significantly improved during lane keeping in Advanced Driver Assistance Systems (ADAS) or autonomous vehicles. Furthermore, it improves lane detection reliability when the camera fails to detect lanes.
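As a generic illustration of the Kalman-filter path estimation the abstract describes (not the paper's actual filter design or state vector), the sketch below runs a minimal 1-D constant-velocity Kalman filter that smooths noisy position fixes; the noise parameters are arbitrary assumptions.

```python
def kalman_1d(z_seq, dt=0.1, q=0.01, r=1.0):
    """Minimal 1-D constant-velocity Kalman filter: fuses noisy position
    fixes (e.g. GPS-derived lateral offsets) into a smoothed track.
    State is (position x, velocity v); only position is measured."""
    x, v = z_seq[0], 0.0
    P00, P01, P10, P11 = 1.0, 0.0, 0.0, 1.0  # state covariance
    out = []
    for z in z_seq:
        # predict: constant-velocity motion model with process noise q
        x += v * dt
        P00, P01, P10, P11 = (
            P00 + dt * (P10 + P01) + dt * dt * P11 + q,
            P01 + dt * P11,
            P10 + dt * P11,
            P11 + q,
        )
        # update with position measurement z (measurement noise r)
        S = P00 + r                   # innovation covariance
        k0, k1 = P00 / S, P10 / S     # Kalman gains
        y = z - x                     # innovation
        x += k0 * y
        v += k1 * y
        P00, P01, P10, P11 = (
            (1 - k0) * P00, (1 - k0) * P01,
            P10 - k1 * P00, P11 - k1 * P01,
        )
        out.append(x)
    return out

# Smooth a short sequence of jittery lateral-offset measurements.
track = kalman_1d([0.0, 0.2, 0.1, 0.35, 0.3, 0.5])
```

When GPS drops out, the predict step alone propagates the state from the motion model, which is the mechanism behind the abstract's claim that the path can still be estimated with poor or missing GPS data.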