Photographs and video recordings of vehicle crashes and accident sites are more prevalent than ever, with dash-mounted cameras, surveillance footage, and personal cell phones now ubiquitous. The information contained in these images and videos is critical to understanding how crashes occurred and to analyzing physical evidence. This course teaches the theory and techniques for getting the most out of digital media, including correctly processing raw video and photographs, correcting for lens distortion, and using photogrammetric techniques to convert the information in digital media into usable, scaled three-dimensional data.
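As a minimal illustration of lens-distortion correction, the sketch below applies the common Brown-Conrady radial model to normalized image coordinates and inverts it by fixed-point iteration. The coefficients `k1` and `k2` are hypothetical values chosen for the example, not parameters from any particular camera; real workflows typically obtain them from a camera calibration.

```python
import numpy as np

def distort_point(x, y, k1, k2):
    """Apply Brown-Conrady radial distortion to a normalized image point."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort_point(xd, yd, k1, k2, iterations=10):
    """Invert the radial distortion by fixed-point iteration.

    Starting from the distorted point, repeatedly divide by the
    distortion factor evaluated at the current estimate.
    """
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y

# Hypothetical distortion coefficients for a mildly barrel-distorting lens
k1, k2 = -0.2, 0.05
xd, yd = distort_point(0.3, 0.4, k1, k2)
xu, yu = undistort_point(xd, yd, k1, k2)
```

For moderate distortion the iteration converges quickly, so the round trip recovers the original point to high precision.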
Convolutional neural networks are the de facto method of processing camera, radar, and lidar data for perception in ADAS and L4 vehicles, yet their operation is a black box to many engineers. Unlike traditional rules-based approaches to coding intelligent systems, networks are trained, and the internal structure created during the training process is too complex for humans to understand; yet in operation, networks can classify objects of interest at error rates lower than those achieved by humans viewing the same input data.
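To make the building block behind these networks less of a black box, the sketch below implements the core operation of a convolutional layer, a sliding-window cross-correlation followed by a ReLU nonlinearity, in plain numpy. The edge-detector kernel is an illustrative hand-crafted example; in a trained network, kernel values are learned from data rather than specified.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core op of a CNN layer."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise multiply the window by the kernel and sum
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def relu(x):
    """Standard rectified-linear activation."""
    return np.maximum(x, 0.0)

# Toy 5x5 image: dark left half, bright right half
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# Hand-crafted vertical-edge kernel (a trained CNN would learn this)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

feature_map = relu(conv2d(image, kernel))
```

The feature map responds strongly where the vertical edge lies under the kernel and is zero over uniform regions, which is exactly how early CNN layers come to act as learned feature detectors.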
Driving simulators allow driving functions, vehicle models, and acceptance assessments to be tested at an early stage. For a realistic driving experience, all immersive cues must be reproduced as faithfully as possible. When driving manually, the perceived haptic steering-wheel torque plays a key role in conveying a realistic steering feel. To ensure this, complex multi-body systems with numerous parameters that are difficult to identify are typically used. This study therefore presents a method for generating a realistic steering feel with a nonlinear open-loop model that contains only the significant parameters, in particular the friction of the steering gear. The model is suitable for the steering feel in the on-center area, where most driving takes place. Measurements from test benches and real test drives with an Electric Power Steering (EPS) system were used for the identification and validation of the model.
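The structure of such a reduced open-loop model can be sketched with a handful of terms: a centering stiffness, viscous damping, and a smoothed Coulomb friction term that produces the hysteresis characteristic of steering-gear friction. This is a generic illustration under assumed parameter values, not the model identified in the study; the function name and all coefficients are hypothetical.

```python
import numpy as np

def steering_torque(angle, rate, k=2.0, d=0.1, t_fric=0.8, eps=0.05):
    """Hypothetical on-center steering-wheel torque model.

    angle  : steering-wheel angle [rad]
    rate   : steering-wheel angular rate [rad/s]
    k      : centering stiffness [Nm/rad]        (assumed value)
    d      : viscous damping [Nm·s/rad]          (assumed value)
    t_fric : Coulomb friction torque [Nm]        (assumed value)
    eps    : smoothing width for the friction nonlinearity

    tanh(rate/eps) smooths the sign function so the Coulomb term
    stays well-behaved (and identifiable) around zero rate.
    """
    return k * angle + d * rate + t_fric * np.tanh(rate / eps)
```

Because the friction term depends on the direction of motion, the model yields different torques at the same angle on left-to-right and right-to-left sweeps, reproducing the on-center hysteresis loop that dominates perceived steering feel.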
In the evolving landscape of automated driving systems, the critical role of vehicle localization within the autonomous driving stack is increasingly evident. Traditional reliance on Global Navigation Satellite Systems (GNSS) proves inadequate, especially in urban areas where signal obstruction and multipath effects degrade accuracy. Addressing this challenge, this paper details the enhancement of a localization system for autonomous public transport vehicles, focusing on mitigating GNSS errors through the integration of a LiDAR sensor. The approach involves creating a 3D map using the factor-graph-based LIO-SAM algorithm from GNSS, vehicle odometry, IMU, and LiDAR data. The algorithm is adapted to the use case by adding a velocity factor and altitude data from a Digital Terrain Model. Based on the map, a state estimator is proposed that combines high-frequency LiDAR odometry based on FAST-LIO with low-frequency absolute multiscale ICP-based LiDAR position estimation.
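The ICP step at the heart of map-based LiDAR position estimation can be sketched in its simplest point-to-point form: alternate nearest-neighbour matching with a closed-form Kabsch (SVD) alignment. This 2-D toy version is only a conceptual illustration and omits everything that makes the paper's multiscale, map-based variant practical (voxel filtering, outlier rejection, point-to-plane costs, 3-D operation).

```python
import numpy as np

def best_fit_transform(A, B):
    """Kabsch: rigid (R, t) minimizing sum ||R a_i + t - b_i||^2."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(A, B, iterations=20):
    """Minimal point-to-point ICP aligning point set A onto B."""
    src = A.copy()
    R_tot, t_tot = np.eye(A.shape[1]), np.zeros(A.shape[1])
    for _ in range(iterations):
        # Brute-force nearest-neighbour correspondences
        dists = np.linalg.norm(src[:, None] - B[None, :], axis=2)
        matched = B[dists.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        # Compose with the accumulated transform
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Given a scan that is a small rigid displacement of the map points, the loop converges once the nearest-neighbour matches become correct, after which the Kabsch step recovers the transform essentially exactly.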