
Search Results

Technical Paper

Operational Design Domain Feature Optimization Route Planning Tool for Automated Vehicle Open Road Testing

2023-04-11
2023-01-0686
Autonomous vehicles must be able to function safely in complex contexts involving unpredictable situations and interactions. To ensure this, the system must be tested at various stages as described by the V-model, a process that iteratively tests and validates distinct parts of the system, from small components up to system-level assessment. However, this framework presents challenges when adapted to the testing problems that autonomous vehicles face. Open road testing is an effective way to expose the system to real-world scenarios in combination with the specific driving situations described by the Operational Design Domain (ODD). Finding a route between two points that maximizes ODD exposure is not trivial, especially since developers must often design routes in unfamiliar regions. This consumes significant effort and resources, which makes the task important to optimize.
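As a rough illustration of the idea (not the paper's tool), a route search can bias itself toward ODD coverage by discounting edge cost with the number of ODD-relevant features an edge traverses. The graph layout, edge attributes, and weighting formula below are assumptions for the sketch.

```python
import heapq

# Hypothetical road graph: nodes are intersections, edges carry
# (length_m, odd_features), where odd_features counts ODD attributes
# (roundabouts, unprotected turns, merges, ...) on that segment.
def odd_aware_route(graph, start, goal, alpha=1.0):
    """Dijkstra over cost = length / (1 + alpha * odd_features).

    The cost stays positive, and segments rich in ODD-relevant features
    become "cheaper", so the returned route trades distance for exposure.
    """
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (length_m, odd_features) in graph.get(node, {}).items():
            if nxt in seen:
                continue
            step = length_m / (1.0 + alpha * odd_features)
            heapq.heappush(pq, (cost + step, nxt, path + [nxt]))
    return float('inf'), []

# Toy network: A->B is shorter but feature-poor; A->C->B is longer but
# passes more ODD-relevant features, so it wins under this cost.
graph = {
    'A': {'B': (500.0, 0), 'C': (400.0, 3)},
    'C': {'B': (450.0, 4)},
}
print(odd_aware_route(graph, 'A', 'B'))  # -> (190.0, ['A', 'C', 'B'])
```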
Technical Paper

HD-Map Based Ground Truth to Test Automated Vehicles

2022-03-29
2022-01-0097
Over the past decade there has been significant development in Automated Driving (AD), with continuous evolution toward higher levels of automation. Higher levels of autonomy shift more of the Dynamic Driving Task (DDT) responsibility to the vehicle, from certain predefined Operational Design Domains (SAE Levels 3 and 4) to an unlimited ODD (SAE Level 5). The AD system must not only be sophisticated enough to operate under any given condition but also be reliable and safe. Hence, Automated Vehicles (AVs) need to undergo extensive open road testing that traverses a wide variety of roadway features and challenging real-world scenarios. Accurate Ground Truth (GT) is needed to locate the various roadway features, which helps in evaluating the perception performance of the AV under any given condition. The results from open road testing provide a feedback loop toward a mature AD system.
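A minimal sketch of how HD-map ground truth can score perception output, assuming the map features and detections have already been projected into a common frame; the nearest-neighbour matching, the tolerance, and the sample coordinates are illustrative, not the paper's evaluation pipeline.

```python
import math

def match_detections(gt_points, detections, tol_m=1.5):
    """Match detections to HD-map ground-truth feature positions.

    Returns (true_positives, false_negatives, false_positives) using a
    greedy nearest-neighbour match within a distance tolerance.
    """
    unmatched_gt = list(gt_points)
    tp = 0
    for det in detections:
        best = min(unmatched_gt, key=lambda g: math.dist(g, det), default=None)
        if best is not None and math.dist(best, det) <= tol_m:
            unmatched_gt.remove(best)
            tp += 1
    fn = len(unmatched_gt)
    fp = len(detections) - tp
    return tp, fn, fp

gt = [(10.0, 2.0), (45.0, -1.5), (80.0, 3.2)]    # HD-map feature positions (m)
det = [(10.4, 2.1), (44.2, -1.0), (120.0, 0.0)]  # perception detections (m)
print(match_detections(gt, det))  # -> (2, 1, 1)
```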
Technical Paper

Interactive Lane Change with Adaptive Vehicle Speed

2021-04-06
2021-01-0094
Advanced Driver Assistance Systems (ADAS) have gained enormous interest in the past decade, with growing complexity in system software and hardware. One of the most challenging ADAS features to develop is lane change, as it requires full awareness of the objects surrounding the ego vehicle as well as performing safe and convenient maneuvers. This paper discusses a camera-based lane change approach designed to improve driver safety and comfort with the help of LiDAR object detection. The forward-facing camera detects the ego and adjacent lane lines as well as the moving objects in its field of view. A Graphical User Interface (GUI) was also developed so the driver can interact with the lane change feature by visualizing the sensor data and optionally requesting a lane change when the system indicates that it is safe to do so.
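The kind of gap check that would back a "safe to change lanes" suggestion can be sketched as below; the thresholds, object representation, and sign conventions are assumptions for illustration, not the paper's decision logic.

```python
def lane_change_safe(adjacent_objects, ego_speed_mps,
                     min_gap_m=10.0, min_time_gap_s=2.0):
    """Decide whether the target lane offers a safe gap.

    adjacent_objects: list of (longitudinal_offset_m, speed_mps) for objects
    in the target lane, offsets relative to the ego vehicle (+ ahead, - behind).
    """
    for offset_m, speed_mps in adjacent_objects:
        gap = abs(offset_m)
        # Closing speed: a faster car behind, or a slower car ahead, closes the gap.
        closing_speed = (speed_mps - ego_speed_mps if offset_m < 0
                         else ego_speed_mps - speed_mps)
        if gap < min_gap_m:
            return False                                  # already too close
        if closing_speed > 0 and gap / closing_speed < min_time_gap_s:
            return False                                  # gap closes too quickly
    return True

# A car 8 m behind and closing blocks the maneuver; 40 m ahead does not.
print(lane_change_safe([(40.0, 20.0), (-8.0, 25.0)], ego_speed_mps=22.0))  # False
```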
Technical Paper

V2X Connectivity with ROS for AD Applications

2021-04-06
2021-01-0060
Increased levels of autonomy require an increasing amount of sensory data for the vehicle to make appropriate decisions. Sensors such as camera, LiDAR, and Radar help perceive the surroundings, exposing nearby objects and blind spots within the line of sight. With V2X connectivity, vehicles communicate with other vehicles and with infrastructure over wireless links, even in non-line-of-sight conditions. This paper presents an approach to utilizing a V2X system to develop Automated Driving (AD) features in a Robot Operating System (ROS) environment. This was achieved by developing ROS drivers and creating custom messages to enable communication between the V2X system and the vehicle sensors. The developed algorithms were tested on FEV's Smart Vehicle Demonstrator. The test results show that the proposed V2X automated driving approach increases reliability compared to camera-, Radar-, and LiDAR-based autonomous driving alone.
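A minimal sketch of a ROS node that forwards V2X data to AD features, assuming ROS 1 (rospy). The topic name and JSON payload layout are illustrative, and std_msgs/String is used as a stand-in for the paper's custom message types, which would normally be generated from .msg definitions.

```python
#!/usr/bin/env python
import json
import rospy
from std_msgs.msg import String

def v2x_bridge():
    rospy.init_node('v2x_bridge')
    # Forward decoded V2X messages to downstream AD features.
    pub = rospy.Publisher('/v2x/remote_objects', String, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz publishing loop
    while not rospy.is_shutdown():
        # In a real driver this payload would come from the V2X modem;
        # here it is a hard-coded placeholder.
        payload = {'station_id': 42, 'lat': 42.33, 'lon': -83.05, 'speed_mps': 12.0}
        pub.publish(String(data=json.dumps(payload)))
        rate.sleep()

if __name__ == '__main__':
    try:
        v2x_bridge()
    except rospy.ROSInterruptException:
        pass
```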
Technical Paper

Multi-Sensor Fusion in Slow Lanes for Lane Keep Assist System

2021-04-06
2021-01-0084
Implementing Advanced Driver Assistance Systems (ADAS) features that remain available in all road scenarios and weather conditions is a major challenge for automotive companies and is considered a key enabler of autonomous Level 4 (L4) vehicles. One important feature is the Lane Keep Assist System (LKAS). Most LKAS implementations rely on a lane line detection camera, which recognizes lane lines using edge detection; the lane coefficients it estimates are the key input to LKAS. However, when lane markers are not visible because of heavy traffic and slow driving, another source of lane line data must be available to the LKAS. In this paper, a multi-sensor fusion approach based on camera, LiDAR, and GPS is used to allow the vehicle to maintain its lateral position within the lane.
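One simple way to combine such sources is inverse-variance weighting of per-sensor lateral-offset estimates, so a degraded source (for example, occluded lane markings in slow traffic) contributes less. This is a generic fusion sketch under assumed measurement values, not the paper's estimator.

```python
def fuse_lateral_offset(estimates):
    """Fuse lateral-offset estimates given as (offset_m, variance) pairs.

    Each estimate is weighted by the inverse of its variance, so more
    confident sources dominate the fused lateral position.
    """
    weights = [1.0 / var for _, var in estimates if var > 0.0]
    values = [off for off, var in estimates if var > 0.0]
    if not weights:
        raise ValueError("no valid lateral-offset estimates")
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Camera is uncertain (markings occluded); LiDAR and GPS/HD-map dominate.
measurements = [(0.35, 0.50),   # camera lane detection: 0.35 m, high variance
                (0.12, 0.05),   # LiDAR vs. curb/barrier
                (0.10, 0.08)]   # GPS + map lane centreline
print(round(fuse_lateral_offset(measurements), 3))  # ~0.126
```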
Technical Paper

LiDAR-Based Predictive Cruise Control

2020-04-14
2020-01-0080
Advanced Driver Assistance Systems (ADAS) enable safer driving by relying on inputs from various sensors, including Radar, Camera, and LiDAR. One of the newly emerging ADAS features is Predictive Cruise Control (PCC), which aims to optimize the vehicle's speed profile and fuel efficiency. This paper presents a novel approach that uses the point cloud of a LiDAR sensor to develop a PCC feature. The raw point cloud is used to detect objects in the vehicle's surroundings, estimate the grade of the road, and plan the route through drivable areas. This information is critical for the PCC to define the optimal speed profile while the vehicle follows the planned path. The paper also discusses the developed LiDAR data processing algorithms and the PCC controller, which were tested on FEV's Smart Vehicle Demonstrator platform.
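Road grade estimation from a point cloud can be sketched as a least-squares plane fit over ground-classified points; the frame convention, synthetic data, and plane model below are assumptions for illustration, not FEV's processing chain.

```python
import numpy as np

def estimate_grade(ground_points):
    """Estimate road grade ahead of the vehicle from ground-classified points.

    ground_points: (N, 3) array of x (forward), y (lateral), z (up) in metres.
    Fits the plane z = a*x + b*y + c and returns the slope along the driving
    direction (dz/dx) as a percentage, which can feed the PCC speed profile.
    """
    x, y, z = ground_points[:, 0], ground_points[:, 1], ground_points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return 100.0 * a

# Synthetic 3 % uphill patch with a little sensor noise.
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, -2.0, 0.0], [30.0, 2.0, 0.0], size=(200, 3))
pts[:, 2] = 0.03 * pts[:, 0] + rng.normal(0.0, 0.02, 200)
print(round(estimate_grade(pts), 2))  # ~3.0
```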