On-board diagnosis of engine and transmission systems has been mandated by government regulation for light- and medium-duty vehicles since the 1996 model year. The regulations specify many of the detailed features that on-board diagnostics must exhibit, and the penalties for noncompliance, along with the cost of in-field remedies, can be substantial. This course is designed to provide a fundamental understanding of how and why OBD systems function, and of the technical features a diagnostic system should have to ensure a compliant and successful implementation.
Photographs and video recordings of vehicle crashes and accident sites are more prevalent than ever, with dash-mounted cameras, surveillance footage, and personal cell phones now ubiquitous. The information contained in these pictures and videos is critical to understanding how crashes occurred and to analyzing physical evidence. This course teaches the theory and techniques for getting the most out of digital media, including correctly processing raw video and photographs, correcting for lens distortion, and using photogrammetric techniques to convert the information in digital media into usable, scaled, three-dimensional data.
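To make the lens-distortion step concrete: a common way to model radial distortion is the Brown-Conrady polynomial, where a point at normalized radius r is scaled by (1 + k1·r² + k2·r⁴), and undistortion inverts that scaling iteratively. The coefficients and the fixed-point inversion below are an illustrative sketch, not the specific procedure taught in the course.

```python
def radial_factor(x, y, k1, k2):
    """Brown-Conrady radial scaling at normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    return 1.0 + k1 * r2 + k2 * r2 * r2

def distort(x, y, k1, k2):
    """Forward model: map an ideal (undistorted) point to its distorted position."""
    f = radial_factor(x, y, k1, k2)
    return x * f, y * f

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration.

    Converges quickly for the mild distortion typical of dash and
    surveillance cameras; stronger fisheye lenses need a fuller model.
    """
    x, y = xd, yd
    for _ in range(iters):
        f = radial_factor(x, y, k1, k2)
        x, y = xd / f, yd / f
    return x, y
```

A quick round trip (distort a point, then undistort it) is a useful sanity check when calibrating coefficients from a checkerboard or from straight-line features in scene photographs.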
Convolutional neural networks are the de facto method of processing camera, radar, and lidar data for perception in ADAS and L4 vehicles, yet their operation is a black box to many engineers. Unlike traditional rule-based approaches to building intelligent systems, these networks are trained, and the internal structure created during the training process is too complex for humans to interpret. In operation, however, networks can classify objects of interest at error rates lower than those achieved by humans viewing the same input data.
In pursuit of safety validation of automated driving functions, efforts are being made to complement real-world test drives with test drives in virtual environments. Transferring highly automated driving functions into a simulation requires models of the vehicle's perception sensors, such as lidar, radar, and camera. In addition to classic pulsed time-of-flight (ToF) lidars, the growing availability of commercial frequency-modulated continuous-wave (FMCW) lidars has sparked interest in the field of environment perception, owing to advanced capabilities such as directly measuring a target's relative radial velocity based on the Doppler effect. In this work, an FMCW lidar sensor simulation model is introduced, divided into signal-propagation and signal-processing components. The signal propagation is modeled by a ray tracing approach that simulates the interaction of light waves with the environment.
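The Doppler-based velocity measurement can be sketched with the standard triangular-chirp relations: the range contributes a beat frequency 2SR/c (S the chirp slope), the radial velocity contributes a Doppler shift 2v/λ, and measuring the beat on both the up- and down-chirp separates the two. The parameter values and the sign convention below are illustrative assumptions, not taken from the paper's sensor model.

```python
C = 299_792_458.0  # speed of light, m/s

def beat_frequencies(R, v, wavelength=1550e-9, bandwidth=1e9, chirp_time=10e-6):
    """Idealized beat frequencies (up-chirp, down-chirp) for a target at
    range R [m] with radial velocity v [m/s].

    Sign convention (assumed): Doppler lowers the up-chirp beat and raises
    the down-chirp beat; real sensors may use the opposite convention.
    """
    S = bandwidth / chirp_time        # chirp slope, Hz/s
    f_range = 2.0 * S * R / C         # range-induced beat frequency
    f_dopp = 2.0 * v / wavelength     # Doppler shift
    return f_range - f_dopp, f_range + f_dopp

def recover(f_up, f_down, wavelength=1550e-9, bandwidth=1e9, chirp_time=10e-6):
    """Separate range and radial velocity from the two beat frequencies."""
    S = bandwidth / chirp_time
    R = C * (f_up + f_down) / (4.0 * S)       # sum cancels the Doppler term
    v = wavelength * (f_down - f_up) / 4.0    # difference cancels the range term
    return R, v
```

This is exactly the capability the abstract highlights: unlike a pulsed ToF lidar, which must infer velocity by differencing positions across frames, the FMCW measurement yields radial velocity directly from a single observation.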