Search Results

Viewing 1 to 14 of 14
Technical Paper

Observer for Faulty Perception Correction in Autonomous Vehicles

2020-04-14
2020-01-0694
Operation of an autonomous vehicle (AV) carries risk if it acts on inaccurate information about itself or the environment. The perception system is responsible for interpreting the world and providing the results to path planning and other decision systems. Perception performance depends on the operating state of the sensors (e.g., whether a sensor is faulty or adversely affected by weather or other environmental conditions) and on the approach to sensor measurement interpretation. We propose a trailing-horizon switched-system observer that minimizes the difference between reference tracking values, developed from sensor fusion performed at an upper level, and the values from a potentially faulty sensor. The observer is based upon a convex combination of the outputs of different sensor observation models, each associated with a different sensor operating error.
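A minimal sketch of the convex-combination idea, assuming hypothetical sensor observation model outputs and an upper-level fused reference; the convex weights are fit by projected gradient descent on the probability simplex. All names and data are illustrative, not the authors' implementation.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def fit_convex_weights(model_outputs, fused_reference, steps=5000, lr=0.5):
    """Find convex weights over candidate sensor observation models that
    minimize the residual against the fused reference over a horizon.
    model_outputs: (n_models, horizon); fused_reference: (horizon,)."""
    n, horizon = model_outputs.shape
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        residual = w @ model_outputs - fused_reference
        w = project_to_simplex(w - lr * model_outputs @ residual / horizon)
    return w

# Illustrative horizon: model 1 (a bias-fault model) matches the data best.
t = np.linspace(0, 1, 50)
models = np.stack([np.sin(t), np.sin(t) + 0.5, 1.2 * np.sin(t)])
reference = np.sin(t) + 0.45            # fused upper-level estimate
print(fit_convex_weights(models, reference))  # most weight on model 1
```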
Technical Paper

Vehicle Velocity Prediction Using Artificial Neural Network and Effect of Real World Signals on Prediction Window

2020-04-14
2020-01-0729
Prediction of vehicle velocity is important because it enables improvements in fuel economy/energy efficiency, drivability, and safety. Velocity prediction has been addressed in many publications, with several references considering deterministic and stochastic approaches such as Markov chains, autoregressive models, and artificial neural networks. New sensor and signal technologies, such as vehicle-to-vehicle and vehicle-to-infrastructure communication, can be used to obtain more comprehensive datasets; feeding such datasets into deep neural networks makes high-accuracy velocity prediction achievable. This research builds upon previous findings that Long Short-Term Memory (LSTM) deep neural networks provide low-error velocity prediction. We developed an LSTM deep neural network that uses different groups of datasets collected in Fort Collins, Colorado.
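A minimal PyTorch sketch of an LSTM velocity predictor of the kind described, assuming a hypothetical input of past per-timestep features (velocity plus auxiliary V2V/V2I signals) and a multi-step prediction window; layer sizes and window lengths are illustrative.

```python
import torch
import torch.nn as nn

class VelocityLSTM(nn.Module):
    """Predicts the next `pred_window` velocity samples from a history
    window of per-timestep features (e.g., velocity, V2V/V2I signals)."""
    def __init__(self, n_features=3, hidden=64, pred_window=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, pred_window)

    def forward(self, x):               # x: (batch, history, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # (batch, pred_window)

model = VelocityLSTM()
history = torch.randn(8, 30, 3)         # 8 samples, 30-step history
future = torch.randn(8, 10)             # 10-step prediction targets
loss = nn.MSELoss()(model(history), future)
loss.backward()                          # one illustrative training step
print(loss.item())
```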
Technical Paper

Higher Accuracy and Lower Computational Perception Environment Based Upon a Real-time Dynamic Region of Interest

2022-03-29
2022-01-0078
Robust sensor fusion is a key technology for enabling the safe operation of automated vehicles. Sensor fusion typically takes inputs from cameras, radars, lidar, an inertial measurement unit, and global navigation satellite systems, processes them, and outputs object detection or positioning data. This paper focuses on fusion between camera, radar, and vehicle wheel speed sensors, a critical need for near-term realization of sensor fusion benefits. The camera is an off-the-shelf computer vision product from MobilEye and the radar is a Delphi/Aptiv electronically scanning radar (ESR); both are connected to a drive-by-wire capable vehicle platform. We utilize the MobilEye and wheel speed sensors to create a dynamic region of interest (DROI) over the drivable region that changes as the vehicle moves through the environment.
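A minimal sketch of one plausible DROI computation, assuming hypothetical lane-boundary pixel columns (standing in for MobilEye lane outputs) and a wheel-speed-scaled look-ahead; all bounds, margins, and scaling factors are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dynamic_roi(lane_left_px, lane_right_px, speed_mps, img_h, img_w,
                base_rows=120, rows_per_mps=8):
    """Return (top, bottom, left, right) pixel bounds of a dynamic region
    of interest: look-ahead depth grows with vehicle speed, and lateral
    bounds track the detected lane boundaries with a small margin."""
    bottom = img_h                      # hood line assumed at image bottom
    top = max(0, img_h - base_rows - int(rows_per_mps * speed_mps))
    left = max(0, int(min(lane_left_px)) - 20)      # 20 px lateral margin
    right = min(img_w, int(max(lane_right_px)) + 20)
    return top, bottom, left, right

frame = np.zeros((720, 1280, 3), dtype=np.uint8)    # mock camera frame
t, b, l, r = dynamic_roi(lane_left_px=[400, 430], lane_right_px=[860, 840],
                         speed_mps=15.0, img_h=720, img_w=1280)
roi = frame[t:b, l:r]   # downstream detectors run only on this crop
print(roi.shape)
```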
Technical Paper

Vehicle Lateral Offset Estimation Using Infrastructure Information for Reduced Compute Load

2023-04-11
2023-01-0800
Accurate perception of the driving environment and a highly accurate vehicle position are paramount to safe Autonomous Vehicle (AV) operation. AVs gather data about the environment using various sensors. For a robust perception and localization system, incoming data from multiple sensors is usually fused using advanced computational algorithms, which historically requires a high compute load. To reduce AV compute load and its negative effects on vehicle energy efficiency, we propose a new infrastructure information source (IIS) to provide environmental data to the AV. The new energy-efficient IIS, chip-enabled raised pavement markers, are mounted along road lane lines and communicate a unique identifier and their global navigation satellite system position to the AV. This new IIS is incorporated into an energy-efficient sensor fusion strategy that combines its information with that from traditional sensors.
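One way the marker positions could feed a lateral offset estimate is as a cross-track distance from the lane line defined by two marker fixes, computed in a local east-north plane. A minimal sketch with illustrative coordinates (not the paper's algorithm):

```python
import numpy as np

def lateral_offset(marker_a, marker_b, vehicle_xy):
    """Signed cross-track distance (m) from the vehicle to the lane line
    segment defined by two raised-pavement-marker positions, all in a
    local east-north plane. Positive = vehicle left of the line."""
    a, b, p = map(np.asarray, (marker_a, marker_b, vehicle_xy))
    d = (b - a) / np.linalg.norm(b - a)     # unit vector along the lane line
    r = p - a
    return d[0] * r[1] - d[1] * r[0]        # 2-D cross product = signed offset

# Markers 12 m apart on a straight lane line; vehicle 1.6 m to its left.
print(lateral_offset((0.0, 0.0), (12.0, 0.0), (6.0, 1.6)))  # -> 1.6
```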
Technical Paper

Projecting Lane Lines from Proxy High-Definition Maps for Automated Vehicle Perception in Road Occlusion Scenarios

2023-04-11
2023-01-0051
Contemporary ADS and ADAS localization technology relies on real-time perception sensors such as visible-light cameras, radar, and lidar, greatly improving transportation safety in sufficiently clear environmental conditions. However, when lane lines are completely occluded, the reliability of onboard automated perception systems breaks down, and vehicle control must be returned to the human driver. This significantly limits the operational design domain of automated vehicles, since occlusion can be caused by shadows, leaves, or snow, all of which occur in many regions. High-definition map data, which contain a high level of detail about road features, are an alternative source of the required lane line information. This study details a novel method in which high-definition map data are processed to locate fully occluded lane lines, allowing automated path planning in scenarios where it would otherwise be impossible.
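A minimal sketch of the underlying geometry: projecting 3-D map lane points into the camera image via a pinhole model, given the camera pose. The intrinsics, pose, and lane coordinates below are assumptions for illustration, not values from the study.

```python
import numpy as np

def project_lane_points(points_world, T_cam_from_world, K):
    """Project 3-D lane-line points (n, 3) from an HD-map (world) frame
    into pixel coordinates using a 4x4 camera extrinsic matrix and a 3x3
    pinhole intrinsic matrix K. Points behind the camera are dropped."""
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])
    cam = (T_cam_from_world @ homog.T).T[:, :3]     # world -> camera frame
    cam = cam[cam[:, 2] > 0.1]                      # keep points in front
    px = (K @ cam.T).T
    return px[:, :2] / px[:, 2:3]                   # perspective divide

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])  # assumed intrinsics
T = np.eye(4)                    # camera at the world origin, for simplicity
# Lane line 1.75 m right of and 1.5 m below the camera, 5 m and 20 m ahead.
lane = np.array([[1.75, 1.5, 5.0], [1.75, 1.5, 20.0]])
print(project_lane_points(lane, T, K))
```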
Technical Paper

Road Snow Coverage Estimation Using Camera and Weather Infrastructure Sensor Inputs

2023-04-11
2023-01-0057
Modern vehicles use advanced driver assistance system (ADAS) products to automate certain aspects of driving, improving operational safety. In the U.S. in 2020, 38,824 fatalities occurred due to automotive accidents, and typically about 25% of these are associated with inclement weather. ADAS features have been shown to reduce potential collisions by up to 21%, thus reducing overall accidents. However, ADAS typically rely on camera sensors that require visible lane lines and the absence of obstructions in order to function, rendering them ineffective in inclement weather. To address this gap, we propose a new technique to estimate snow coverage so that existing and new ADAS features can be used during inclement weather. In this study, we use a single camera sensor and historical weather data to estimate snow coverage on the road. Camera data were collected over 6 miles of arterial roadways in Kalamazoo, MI.
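A toy sketch of one way camera pixels could indicate snow coverage: classify road pixels as snow-like when they are bright and nearly gray, and report the snow-like fraction. The thresholds and mock data below are assumptions for illustration, not the paper's technique.

```python
import numpy as np

def snow_coverage(rgb_road_pixels, brightness_min=180, chroma_max=25):
    """Estimate snow coverage as the fraction of road pixels that are
    bright and nearly gray (low chroma). rgb_road_pixels: (n, 3) uint8
    pixels already masked to the road surface."""
    px = rgb_road_pixels.astype(np.float32)
    brightness = px.mean(axis=1)
    chroma = px.max(axis=1) - px.min(axis=1)     # gray/white -> small chroma
    snow_like = (brightness >= brightness_min) & (chroma <= chroma_max)
    return snow_like.mean()

rng = np.random.default_rng(0)
asphalt = rng.integers(40, 90, (700, 1)) + rng.integers(0, 10, (700, 3))
snow = rng.integers(200, 246, (300, 1)) + rng.integers(0, 10, (300, 3))
road = np.vstack([asphalt, snow]).astype(np.uint8)   # mock road pixels
print(f"estimated coverage: {snow_coverage(road):.0%}")  # -> 30%
```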
Technical Paper

Quantitative Resilience Assessment of GPS, IMU, and LiDAR Sensor Fusion for Vehicle Localization Using Resilience Engineering Theory

2023-04-11
2023-01-0576
Practical applications of recently developed sensor fusion algorithms often perform poorly in the real world due to a lack of proper evaluation during development. Existing evaluation metrics do not adequately cover the wide variety of testing scenarios. This issue can be addressed using proactive performance measures, such as the tools of resilience engineering theory, rather than reactive measures such as root mean square error. Resilience engineering is an established discipline for evaluating proactive performance in complex socio-technical systems, but it has been underutilized in automated vehicle development and evaluation. In this study, we use resilience engineering metrics to assess the performance of a sensor fusion algorithm for vehicle localization. A Kalman filter is used to fuse GPS, IMU, and LiDAR data for vehicle localization in the CARLA simulator.
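For reference, a minimal 1-D constant-velocity Kalman filter of the general kind described, fusing an IMU acceleration input with position fixes (GPS or LiDAR-derived); a simplified sketch, not the study's implementation, with illustrative noise parameters.

```python
import numpy as np

def kf_step(x, P, accel, z_pos, dt=0.1, q=0.5, r=2.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    State x = [position, velocity]; accel is the IMU control input and
    z_pos a position fix (e.g., GPS or LiDAR-derived localization)."""
    F = np.array([[1, dt], [0, 1]])
    B = np.array([0.5 * dt**2, dt])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])
    x = F @ x + B * accel                   # predict with the IMU input
    P = F @ P @ F.T + Q
    y = z_pos - H @ x                       # innovation vs. position fix
    S = H @ P @ H.T + r
    K = P @ H.T / S                         # Kalman gain (2x1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 10.0
for k in range(50):                         # vehicle accelerating at 1 m/s^2
    truth_pos = 0.5 * 1.0 * (0.1 * (k + 1))**2
    x, P = kf_step(x, P, accel=1.0, z_pos=truth_pos + np.random.randn() * 1.5)
print(x)   # position/velocity estimate near the true trajectory
```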
Technical Paper

Techno-Economic Analysis of Fixed-Route Autonomous and Electric Shuttles

2021-04-06
2021-01-0061
This paper takes a realistic approach to developing a techno-economic analysis for fixed-route autonomous shuttles. To build the model, the current state of the technology was used to approximate three timelines for achieving SAE Level 5 capabilities: progressive, realistic, and conservative. Within these timelines, four increments of technology advancement are considered: SAE Level 0 (human driver), SAE Level 4 (in-vehicle safety operator), SAE Level 4 (remote safety operator), and SAE Level 5 (no safety operator). These increments were chosen based on industry trends. Shuttle models with different rider capacities and drivetrain types (electric vs. gas) were used in the analysis. This allows further understanding of how these deployment plans vary the cost for shuttles operating in high-, mid-, and low-ridership-demand environments.
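To make the model structure concrete, here is a toy annualized cost-per-ride comparison across the four automation increments and three ridership levels. Every dollar figure and service-life assumption below is a placeholder for illustration, not a value from the study.

```python
INCREMENTS = {
    # name: (vehicle capex, annual operator cost, annual maintenance)
    "L0 human driver":        (120_000, 60_000,  8_000),
    "L4 in-vehicle operator": (250_000, 60_000, 12_000),
    "L4 remote operator":     (250_000, 20_000, 12_000),  # operator shared
    "L5 no operator":         (300_000,      0, 12_000),
}

def cost_per_ride(capex, operator, maintenance, rides_per_year,
                  service_life_years=8):
    annualized = capex / service_life_years + operator + maintenance
    return annualized / rides_per_year

for demand, rides in [("high", 90_000), ("mid", 45_000), ("low", 15_000)]:
    print(f"--- {demand} ridership ({rides} rides/yr) ---")
    for name, params in INCREMENTS.items():
        print(f"{name:24s} ${cost_per_ride(*params, rides):.2f}/ride")
```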
Technical Paper

High-Fidelity Modeling of Light-Duty Vehicle Emission and Fuel Economy Using Deep Neural Networks

2021-04-06
2021-01-0181
The transportation sector contributes significantly to emissions and air pollution globally. Emission models of modern vehicles are important tools for estimating the impact of technologies or controls on vehicle emission reductions, but developing a simple, high-fidelity model is challenging due to the variety of vehicle classes, driving conditions, driver behaviors, and other physical and operational constraints. Recent literature indicates that neural network-based models may address these concerns thanks to their high computation speed and high accuracy of predicted emissions. In this study, we expand upon this initial research by utilizing several deep neural network (DNN) architectures, including a recurrent neural network (RNN) and a convolutional neural network (CNN). These DNN algorithms are developed specifically for the vehicle-out emissions prediction application, and a comprehensive assessment of their performance is presented.
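A minimal PyTorch sketch of one such architecture: a 1-D CNN mapping a window of driving signals to a predicted emission rate. Channel counts, kernel sizes, and input signals are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EmissionCNN(nn.Module):
    """1-D CNN mapping a window of driving signals (speed, accel, engine
    rpm, ...) to a predicted emission rate (e.g., NOx in g/s)."""
    def __init__(self, n_signals=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_signals, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):          # x: (batch, n_signals, window)
        return self.net(x)

model = EmissionCNN()
batch = torch.randn(16, 4, 60)     # 16 windows of 60 timesteps, 4 signals
print(model(batch).shape)          # (16, 1) predicted emission rates
```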
Journal Article

Analysis of LiDAR and Camera Data in Real-World Weather Conditions for Autonomous Vehicle Operations

2020-04-14
2020-01-0093
Autonomous vehicle technology has the potential to improve the safety, efficiency, and cost of our current transportation system by removing human error. The sensors available today make development of these vehicles possible; however, issues remain with autonomous vehicle operation in adverse weather conditions (e.g., snow-covered roads, heavy rain, fog) due to degraded sensor data quality and insufficiently robust software algorithms. Since autonomous vehicles rely entirely on sensor data to perceive their surrounding environment, this is a significant issue for the performance of the autonomous system. The purpose of this study is to collect sensor data under various weather conditions to understand the effects of weather on sensor data. The sensors used in this study were one camera and one LiDAR, connected to an NVIDIA Drive PX2 operating in a 2019 Kia Niro.
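As one example of the kind of comparison such a dataset enables, a sketch of simple per-frame LiDAR statistics (point count, mean intensity, mean range) that could be contrasted across weather conditions; the mock point clouds and the degradation model are illustrative assumptions, not the study's data.

```python
import numpy as np

def cloud_stats(points, intensity):
    """Per-frame LiDAR statistics useful for comparing weather conditions:
    returned-point count, mean return intensity, and mean range."""
    ranges = np.linalg.norm(points, axis=1)
    return {"points": len(points),
            "mean_intensity": float(intensity.mean()),
            "mean_range_m": float(ranges.mean())}

rng = np.random.default_rng(1)
clear = rng.uniform(-40, 40, (30_000, 3))        # mock clear-weather cloud
# Mock snowfall: fewer far returns, lower intensity, near-field clutter.
snowy = np.vstack([clear[rng.random(30_000) < 0.6],
                   rng.uniform(-3, 3, (2_000, 3))])
print(cloud_stats(clear, rng.uniform(0.4, 1.0, len(clear))))
print(cloud_stats(snowy, rng.uniform(0.1, 0.6, len(snowy))))
```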
Journal Article

Real-Time Estimation of Perception Sensor Misalignment in Autonomous Vehicles

2023-04-11
2023-01-0059
Autonomous vehicles rely upon accurate information about their surrounding environment to perform safe operational planning. The environment sensing and perception system normally produces camera image data and LiDAR point cloud data that are processed and then fused to obtain a better perception of the environment than is possible from either alone. The accuracy of the fused data depends upon knowledge of the position of each sensor on the ego vehicle. Vehicle damage, improper sensor installation, sensor mount deformation, mount movement excited by vehicle motion, and other situations can result in an unexpected sensor position. This error adds uncertainty to the sensor measurement fusion that is normally not accounted for. LiDAR translational offset and angular orientation misalignment errors are investigated for correction.
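One standard way to estimate both error types from matched point pairs (expected vs. measured landmark positions) is the Kabsch algorithm, which recovers a rotation (angular misalignment) and translation (offset). A sketch under the assumption that correspondences are given; this is a generic technique, not necessarily the paper's estimator.

```python
import numpy as np

def estimate_misalignment(p_expected, p_measured):
    """Estimate rotation R and translation t with
    p_measured ~ R @ p_expected + t from matched 3-D point pairs (n, 3)
    via the Kabsch algorithm. R captures angular misalignment, t the
    translational offset."""
    mu_e, mu_m = p_expected.mean(0), p_measured.mean(0)
    H = (p_expected - mu_e).T @ (p_measured - mu_m)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, mu_m - R @ mu_e

# Landmarks from a correctly mounted sensor vs. one yawed 2 degrees and
# shifted 5 cm in x (values illustrative).
rng = np.random.default_rng(2)
pts = rng.uniform(-10, 10, (100, 3))
yaw = np.radians(2.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0], [0, 0, 1]])
meas = pts @ R_true.T + np.array([0.05, 0.0, 0.0])
R_est, t_est = estimate_misalignment(pts, meas)
print(np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)  # ~2 deg, ~5 cm
```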
Technical Paper

Assessing Resilience in Lane Detection Methods: Infrastructure-Based Sensors and Traditional Approaches for Autonomous Vehicles

2024-04-09
2024-01-2039
Traditional autonomous vehicle perception subsystems that use onboard sensors have the drawbacks of high computational load and data duplication. Infrastructure-based sensors, which can provide high-quality information without the computational burden and data duplication, are an alternative. However, these technologies are still in the early stages of development and have not been extensively evaluated for lane detection performance. As a result, there is a lack of quantitative data on their performance relative to traditional perception methods, especially in hazardous scenarios such as lane line occlusion, sensor failure, and environmental obstruction.
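One simple resilience-style indicator for such a comparison is the fraction of nominal performance a method retains under a hazard. The accuracy numbers below are illustrative placeholders, not results from the study.

```python
# Retention of nominal lane detection accuracy under a hazard scenario.
nominal = {"onboard camera": 0.95, "infrastructure sensors": 0.93}
under_occlusion = {"onboard camera": 0.55, "infrastructure sensors": 0.90}

for method in nominal:
    retention = under_occlusion[method] / nominal[method]
    print(f"{method}: retains {retention:.0%} of nominal accuracy")
```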
Technical Paper

Real World Use Case Evaluation of Radar Retro-reflectors for Autonomous Vehicle Lane Detection Applications

2024-04-09
2024-01-2042
Lane detection plays a critical role in safe and reliable autonomous vehicle navigation. It is traditionally accomplished using a camera sensor and computer vision processing. The downsides of this traditional technique are that it can be computationally intensive when high-quality images at a fast frame rate are used, and that it suffers reliability issues from occlusions such as glare, shadows, and active road construction. This study addresses these issues by exploring alternative lane detection methods in two specific scenarios: road-construction-induced lane shift and sun glare. Specifically, a camera-based lane detection method using a U-Net (a convolutional network for image segmentation) is compared with a radar-based approach using a type of sensor previously unused in the autonomous vehicle space: radar retro-reflectors.
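For orientation, a compact U-Net-style network for binary lane segmentation in PyTorch; depth and channel counts are deliberately reduced for brevity and are not the study's architecture.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """U-Net-style encoder-decoder for lane segmentation: one downsampling
    stage, one skip connection, one upsampling stage."""
    def __init__(self):
        super().__init__()
        self.enc = block(3, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)            # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)     # per-pixel lane-line logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

mask_logits = TinyUNet()(torch.randn(1, 3, 128, 256))
print(mask_logits.shape)   # (1, 1, 128, 256) lane / no-lane logits
```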
Technical Paper

Engineering Requirements that Address Real World Hazards from Using High-Definition Maps, GNSS, and Weather Sensors in Autonomous Vehicles

2024-04-09
2024-01-2044
Evaluating real-world hazards associated with perception subsystems is critical to enhancing the performance of autonomous vehicles. The reliability of autonomous vehicle perception subsystems is paramount for safe and efficient operation. While current studies employ different metrics to evaluate perception subsystem failures in autonomous vehicles, a gap remains in the development of, and emphasis on, engineering requirements. To address this gap, this study proposes engineering requirements that specifically target real-world hazards and resilience factors important to AV operation, covering high-definition maps, global navigation satellite systems, and weather sensors. The findings include the need for engineering requirements that establish clear criteria for a high-definition map's functionality in the presence of erroneous perception subsystem inputs, enhancing the overall safety and reliability of autonomous vehicles.
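As an illustration of how such a requirement might translate into code, a sketch of a runtime gate that flags GNSS fixes inconsistent with the mapped road network; the thresholds and the function are hypothetical, not from the study.

```python
# Sketch of a requirement like "the HD-map module shall flag GNSS fixes
# inconsistent with the mapped road network". Thresholds are hypothetical.
MAX_OFFROAD_M = 5.0     # max plausible distance from any mapped lane
MAX_JUMP_MPS = 70.0     # max plausible implied speed between fixes

def gnss_fix_plausible(dist_to_nearest_lane_m, prev_fix, new_fix, dt_s):
    """Accept a GNSS fix only if it lies near the mapped road network and
    does not imply an impossible jump from the previous fix."""
    implied_speed = (((new_fix[0] - prev_fix[0]) ** 2 +
                      (new_fix[1] - prev_fix[1]) ** 2) ** 0.5) / dt_s
    return (dist_to_nearest_lane_m <= MAX_OFFROAD_M
            and implied_speed <= MAX_JUMP_MPS)

print(gnss_fix_plausible(1.2, (0.0, 0.0), (1.5, 0.2), 0.1))   # True
print(gnss_fix_plausible(1.2, (0.0, 0.0), (50.0, 0.0), 0.1))  # False: 500 m/s
```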