Search Results

Viewing 1 to 8 of 8
Technical Paper

Higher Accuracy and Lower Computational Perception Environment Based Upon a Real-time Dynamic Region of Interest

2022-03-29
2022-01-0078
Robust sensor fusion is a key technology for enabling the safe operation of automated vehicles. Sensor fusion typically utilizes inputs from cameras, radars, lidar, inertial measurement units, and global navigation satellite systems, processes them, and then outputs object detection or positioning data. This paper focuses on sensor fusion between the camera, radar, and vehicle wheel speed sensors, which is critical for near-term realization of sensor fusion benefits. The camera is an off-the-shelf computer vision product from MobilEye and the radar is a Delphi/Aptiv electronically scanning radar (ESR), both of which are connected to a drive-by-wire capable vehicle platform. We utilize the MobilEye and wheel speed sensors to create a dynamic region of interest (DROI) of the drivable region that changes as the vehicle moves through the environment.
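
A minimal sketch of the DROI idea, assuming the camera reports each lane line as a cubic polynomial y = c0 + c1*x + c2*x^2 + c3*x^3 in vehicle coordinates (x forward, y left), a common output format for off-the-shelf lane sensors; the function names, time horizon, and thresholds below are illustrative, not the paper's interfaces:

    # Speed-scaled dynamic region of interest (DROI) sketch.
    # Lane lines are assumed to arrive as cubic polynomial coefficients;
    # vehicle speed comes from the wheel speed sensors.

    def lane_y(coeffs, x):
        """Evaluate a cubic lane-line polynomial at longitudinal distance x (m)."""
        c0, c1, c2, c3 = coeffs
        return c0 + c1 * x + c2 * x**2 + c3 * x**3

    def droi_lookahead(speed_mps, horizon_s=3.0, x_min=2.0):
        """Look-ahead distance grows with vehicle speed (fixed time horizon),
        so the ROI expands on highways and shrinks in slow traffic."""
        return max(x_min + 1.0, speed_mps * horizon_s)

    def in_droi(track_xy, left_lane, right_lane, speed_mps):
        """Keep a radar track only if it falls inside the drivable region."""
        x, y = track_xy
        if not (2.0 <= x <= droi_lookahead(speed_mps)):
            return False
        return lane_y(right_lane, x) <= y <= lane_y(left_lane, x)

    # Example: straight lane 3.6 m wide, vehicle traveling at 20 m/s.
    left, right = (1.8, 0.0, 0.0, 0.0), (-1.8, 0.0, 0.0, 0.0)
    print(in_droi((25.0, 0.5), left, right, 20.0))   # True: in lane, within 60 m

Gating radar tracks to this region before downstream processing is what lowers the compute cost: returns outside the drivable area are discarded early.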
Technical Paper

Vehicle Lateral Offset Estimation Using Infrastructure Information for Reduced Compute Load

2023-04-11
2023-01-0800
Accurate perception of the driving environment and a highly accurate position of the vehicle are paramount to safe Autonomous Vehicle (AV) operation. AVs gather data about the environment using various sensors. For a robust perception and localization system, incoming data from multiple sensors is usually fused together using advanced computational algorithms, which historically requires a high compute load. To reduce AV compute load and its negative effects on vehicle energy efficiency, we propose a new infrastructure information source (IIS) to provide environmental data to the AV. The new energy-efficient IIS, chip-enabled raised pavement markers, are mounted along road lane lines and are able to communicate a unique identifier and their global navigation satellite system position to the AV. This new IIS is incorporated into an energy-efficient sensor fusion strategy that combines its information with that from traditional sensors.
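
As a rough illustration of how such marker reports could be used, the sketch below computes a vehicle's signed lateral offset from the lane line defined by two consecutive markers. It assumes the markers' GNSS fixes have already been converted to a local planar frame (east/north meters); the names and example coordinates are hypothetical, not the paper's algorithm:

    def lateral_offset(vehicle_xy, marker_a_xy, marker_b_xy):
        """Signed distance (m) from the vehicle to the A->B marker line;
        positive means the vehicle is left of the line, facing A toward B."""
        ax, ay = marker_a_xy
        bx, by = marker_b_xy
        px, py = vehicle_xy
        dx, dy = bx - ax, by - ay
        # 2-D cross product divided by segment length gives the signed
        # perpendicular point-to-line distance.
        cross = dx * (py - ay) - dy * (px - ax)
        return cross / (dx**2 + dy**2) ** 0.5

    # Example: markers 12 m apart along the lane line, vehicle 1.6 m left of it.
    print(lateral_offset((6.0, 1.6), (0.0, 0.0), (12.0, 0.0)))   # 1.6

Because the markers broadcast their own surveyed positions, this offset comes almost for free, with none of the image-processing load of a camera-based lane estimate.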
Technical Paper

Projecting Lane Lines from Proxy High-Definition Maps for Automated Vehicle Perception in Road Occlusion Scenarios

2023-04-11
2023-01-0051
Contemporary ADS and ADAS localization technology utilizes real-time perception sensors such as visible light cameras, radar sensors, and lidar sensors, greatly improving transportation safety in sufficiently clear environmental conditions. However, when lane lines are completely occluded, the reliability of on-board automated perception systems breaks down, and vehicle control must be returned to the human driver. This limits the operational design domain of automated vehicles significantly, as occlusion can be caused by shadows, leaves, or snow, which all occur in many regions. High-definition map data, which contains a high level of detail about road features, is an alternative source of the required lane line information. This study details a novel method where high-definition map data are processed to locate fully occluded lane lines, allowing for automated path planning in scenarios where it would otherwise be impossible.
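
A hedged sketch of the projection step, under an assumed map format (a list of global x/y points per lane line) that may differ from the paper's proxy high-definition maps: given the vehicle's estimated pose, stored lane-line points are transformed into the vehicle frame so a planner can follow lanes the camera cannot see:

    import numpy as np

    def lane_to_vehicle_frame(lane_pts_global, vehicle_xy, vehicle_yaw):
        """Transform (N, 2) global map points into the vehicle frame.

        vehicle_yaw is the heading in radians (0 = global +x). Returned
        points have x forward and y left of the vehicle, ready for path
        planning or for overlay on the camera image via extrinsics.
        """
        c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
        rot_inv = np.array([[c, s], [-s, c]])    # rotation by -yaw
        delta = np.asarray(lane_pts_global, float) - np.asarray(vehicle_xy, float)
        return delta @ rot_inv.T

The accuracy of the projected lane lines is bounded by the quality of the vehicle's pose estimate, which is why this approach pairs naturally with the localization work in the other papers listed here.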
Technical Paper

Road Snow Coverage Estimation Using Camera and Weather Infrastructure Sensor Inputs

2023-04-11
2023-01-0057
Modern vehicles use advanced driver assistance system (ADAS) products to automate certain aspects of driving, which improves operational safety. In the U.S. in 2020, 38,824 fatalities occurred due to automotive accidents, and typically about 25% of these are associated with inclement weather. ADAS features have been shown to reduce potential collisions by up to 21%, thus reducing overall accidents. However, ADAS typically utilize camera sensors that rely on lane visibility and the absence of obstructions to function, rendering them ineffective in inclement weather. To address this gap, we propose a new technique to estimate snow coverage so that existing and new ADAS features can be used during inclement weather. In this study, we use a single camera sensor and historical weather data to estimate snow coverage on the road. Camera data was collected over 6 miles of arterial roadways in Kalamazoo, MI.
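
The study's actual model is not reproduced here; the sketch below is a deliberately simple, hypothetical illustration of the fusion idea, combining a camera-derived "whiteness" fraction of road pixels with weather-station context. All thresholds and the three-level coverage scale are assumptions:

    import numpy as np

    def road_whiteness_fraction(road_pixels_bgr):
        """Fraction of road-region pixels that are bright and low-saturation,
        a crude proxy for visible snow. Input: (N, 3) uint8 BGR pixels."""
        px = road_pixels_bgr.astype(float)
        brightness = px.mean(axis=1)
        spread = px.max(axis=1) - px.min(axis=1)   # low spread ~ gray/white
        snowy = (brightness > 170) & (spread < 30)
        return snowy.mean()

    def estimate_coverage(whiteness, snowfall_24h_mm, air_temp_c):
        """Combine image evidence with weather-station context: bright
        pixels only count as snow if recent weather makes snow plausible."""
        plausible = snowfall_24h_mm > 0 and air_temp_c < 2.0
        if not plausible or whiteness < 0.10:
            return "clear"
        return "full" if whiteness > 0.50 else "partial"

The weather gate matters: without it, glare, salt residue, or light-colored pavement would be indistinguishable from snow in a single camera frame.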
Technical Paper

Quantitative Resilience Assessment of GPS, IMU, and LiDAR Sensor Fusion for Vehicle Localization Using Resilience Engineering Theory

2023-04-11
2023-01-0576
Practical applications of recently developed sensor fusion algorithms perform poorly in the real world due to a lack of proper evaluation during development. Existing evaluation metrics do not properly address a wide variety of testing scenarios. This issue can be addressed using proactive performance measurements, such as the tools of resilience engineering theory, rather than reactive performance measurements, such as root mean square error. Resilience engineering is an established discipline for evaluating proactive performance of complex socio-technical systems that has been underutilized in automated vehicle development and evaluation. In this study, we use resilience engineering metrics to assess the performance of a sensor fusion algorithm for vehicle localization. A Kalman filter is used to fuse GPS, IMU, and LiDAR data for vehicle localization in the CARLA simulator.
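
A minimal sketch of this fusion pattern, assuming a linear constant-velocity model with an IMU-driven predict step and position updates from GPS and lidar localization; the noise values are placeholders, not tuned CARLA parameters:

    import numpy as np

    class FusionKF:
        """Kalman filter over state [x, y, vx, vy]; GPS and lidar both
        supply position fixes, distinguished only by their noise sigma."""

        def __init__(self):
            self.x = np.zeros(4)
            self.P = np.eye(4) * 10.0
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], float)

        def predict(self, accel_xy, dt, q=0.5):
            """Propagate the state using IMU acceleration as a control input."""
            F = np.eye(4)
            F[0, 2] = F[1, 3] = dt
            B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
            self.x = F @ self.x + B @ np.asarray(accel_xy)
            self.P = F @ self.P @ F.T + q * np.eye(4)

        def update(self, pos_xy, sigma):
            """Correct with a position fix; smaller sigma means more trust."""
            R = np.eye(2) * sigma**2
            y = np.asarray(pos_xy) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    # Per simulation tick: kf.predict(imu_accel_xy, dt), then kf.update(gps_xy, 2.0)
    # and/or kf.update(lidar_xy, 0.2) for whichever fixes arrived that tick.

Resilience metrics would then be computed by perturbing the inputs (dropouts, bias, noise bursts) and observing how quickly and gracefully the filter's estimate recovers, rather than by reporting a single RMSE.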
Technical Paper

Assessing Resilience in Lane Detection Methods: Infrastructure-Based Sensors and Traditional Approaches for Autonomous Vehicles

2024-04-09
2024-01-2039
Traditional autonomous vehicle perception subsystems that use onboard sensors have the drawbacks of high computational load and data duplication. Infrastructure-based sensors, which can provide high-quality information without the computational burden and data duplication, are an alternative to traditional autonomous vehicle perception subsystems. However, these technologies are still in the early stages of development and have not been extensively evaluated for lane detection system performance. Therefore, there is a lack of quantitative data on their performance relative to traditional perception methods, especially during hazardous scenarios such as lane line occlusion, sensor failure, and environmental obstructions.
Technical Paper

Real World Use Case Evaluation of Radar Retro-reflectors for Autonomous Vehicle Lane Detection Applications

2024-04-09
2024-01-2042
Lane detection plays a critical role in autonomous vehicles for safe and reliable navigation. Lane detection is traditionally accomplished using a camera sensor and computer vision processing. The downside of this traditional technique is that it can be computationally intensive when high-quality images at a fast frame rate are used, and it has reliability issues from occlusions such as glare, shadows, active road construction, and more. This study addresses these issues by exploring alternative methods for lane detection in specific scenarios caused by road construction-induced lane shift and sun glare. Specifically, a camera-based lane detection method built on a U-Net, a convolutional network used for image segmentation, is compared with a radar-based approach using a new type of sensor previously unused in the autonomous vehicle space: radar retro-reflectors.
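
For orientation, a compact U-Net-style encoder-decoder for lane-mask segmentation is sketched below in PyTorch; the paper's actual architecture, channel widths, and training setup are not specified here, so everything in the sketch is an assumption:

    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        """One-level encoder-decoder with a skip connection; real lane
        detectors use more levels, but the wiring is the same."""

        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = block(3, 16), block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec = block(32, 16)
            self.head = nn.Conv2d(16, 1, 1)   # 1-channel lane-mask logits

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d = self.dec(torch.cat([self.up(e2), e1], dim=1))
            return self.head(d)               # apply sigmoid at inference

    # mask = torch.sigmoid(TinyUNet()(torch.rand(1, 3, 256, 256)))

The per-frame convolution cost of such a network is exactly the compute burden the radar retro-reflector approach aims to avoid, since reflector returns arrive as a handful of range/azimuth points rather than megapixel images.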
Technical Paper

Engineering Requirements that Address Real World Hazards from Using High-Definition Maps, GNSS, and Weather Sensors in Autonomous Vehicles

2024-04-09
2024-01-2044
Evaluating real-world hazards associated with perception subsystems is critical in enhancing the performance of autonomous vehicles. The reliability of autonomous vehicle perception subsystems is paramount for safe and efficient operation. While current studies employ different metrics to evaluate perception subsystem failures in autonomous vehicles, there still exists a gap in the development of, and emphasis on, engineering requirements. To address this gap, this study proposes the establishment of engineering requirements that specifically target real-world hazards and resilience factors important to AV operation, using high-definition maps, global navigation satellite systems, and weather sensors. The findings include the need for engineering requirements to establish clear criteria for a high-definition map's functionality in the presence of erroneous perception subsystem inputs, which enhances the overall safety and reliability of autonomous vehicles.