Photographs and video recordings of vehicle crashes and accident sites are more prevalent than ever, with dash-mounted cameras, surveillance footage, and personal cell phones now ubiquitous. The information contained in these pictures and videos provides critical insight into how crashes occurred and supports the analysis of physical evidence. This course teaches the theory and techniques for getting the most out of digital media, including correctly processing raw video and photographs, correcting for lens distortion, and using photogrammetric techniques to convert the information in digital media into usable, scaled three-dimensional data.
This seminar is offered in China only and presented in Mandarin Chinese. The course materials are bilingual (English and Chinese). RTCA DO-178C is the worldwide accepted standard for civil aviation software development and certification. Compliance with the objectives of DO-178C is the primary means of meeting airworthiness requirements and obtaining approval of airborne software in TC/STC/TSO, etc. Even after learning DO-178C, many practitioners say they still lack experience and find it difficult to produce DO-178C-compliant airborne software in real applications.
This seminar is offered in China only and presented in Mandarin Chinese. The course materials are bilingual (English and Chinese). With the development of the Chinese civil aviation industry, more and more people have realized the importance of airborne software. During the certification of the earlier ARJ21 aircraft, airborne software attracted many concerns. Nowadays, the certification process of the C919 aircraft has also reached its peak following its maiden flight.
Autonomous driving is currently one of the most challenging Artificial Intelligence (AI) problems, as it requires combining state-of-the-art solutions from multiple areas, including computer vision, sensor fusion, control theory, and software engineering. Deep learning has been pivotal to solving some of these problems, especially in computer vision. This has enabled some autonomous vehicle companies to start leveraging the benefits of deep learning for creating smooth, natural, human-like motion planning systems. In particular, the plethora of driving data captured from modern cars is a key enabler for training data-driven path planning systems. Developing deep learning-powered systems relies heavily on large volumes of high-quality training data, and the intrinsic statistics of the data a model is trained on can result in different agent behavior in different scenarios.
In 1930, John Maynard Keynes predicted that, due to technological progress and automation, 15-hour work weeks would be a reality by the end of the century. While that envisioned "utopia" has not been realized, Keynes did have the vision to imagine a radically low-code, highly automated future - one on which the future of software in mobility arguably depends. So, what went wrong? Well, it is not so much about what went wrong as about how adoption is taking place and how it needs to change. Throughout the history of software development, as soon as software testing became a hot topic, automation tools started springing up, and the selective parts of the process that were iterative and time-consuming were automated away. This begs several questions: first, and most obvious, why automate these parts - and second, whether software developers are making themselves obsolete by building automation tools.
Durability engineering for vehicles is about relating real operational loading to the actual strength of the product and its components. In the first part of this presentation, we show how to calculate failure probabilities and safety factors based on the load and strength distributions. We discuss the uncertainty within these estimations, which can be considerable in the case of the extremely small failure probabilities required for safety-critical components. In the second part, we focus on modelling and simulating the loads based on real vehicle usage, such that the resulting statistics allow the usage variability to be understood and quantified. The idea is to simulate thousands of vehicle life spans of, say, 300,000 km or 15,000 h of operation each. The input data for such simulations typically consists of a combination of geographic data (such as road network, topography, road conditions, traffic data, and points of interest) and properly segmented rich data from measurement campaigns.
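The load-strength interference idea behind such failure probabilities can be sketched numerically: the component fails when its (random) strength falls below the (random) operational load. The distributions and parameters below are illustrative assumptions, not values from the presentation:

```python
import numpy as np
from math import erf, sqrt

# Failure probability P(strength < load) for assumed normal distributions.
# All numeric parameters here are invented for illustration.
rng = np.random.default_rng(0)
n = 1_000_000
load = rng.normal(loc=100.0, scale=10.0, size=n)      # assumed load distribution
strength = rng.normal(loc=160.0, scale=15.0, size=n)  # assumed strength distribution

# Monte Carlo estimate of the failure probability
p_fail = np.mean(strength < load)

# For independent normals, the difference D = strength - load is itself normal,
# so the same probability has a closed form via the normal CDF.
mu = 160.0 - 100.0
sigma = sqrt(15.0**2 + 10.0**2)
p_closed = 0.5 * (1.0 + erf(-mu / (sigma * sqrt(2.0))))
```

With these assumed distributions, both estimates land on the order of 10^-4, and the scatter of the Monte Carlo estimate even at a million samples hints at why the uncertainty of extremely small failure probabilities is itself a topic.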
Mapping the luminance values of a visual scene is of broad interest to accident reconstructionists, human factors professionals, and lighting experts. Previous work has shown that the pixel intensity captured by consumer-grade digital still cameras can be calibrated to estimate luminance. Suway and Suway presented a methodology for estimating luminance from digital images and video of a scene. This method requires capturing dark images close in time to the capture of the image that will be used to estimate luminance. Additionally, Suway's method requires that the specific camera and lens combination used be calibrated for luminance estimation. In this paper, the authors present the results of estimating luminance using an exemplar camera and exemplar dark images. The analysis was completed with the commercially available luminance estimation software Nitere.
Ground-based LiDAR using FARO 3D scanners and other brands of scanner has been shown to be an accurate way to capture the geometry of accident scenes, accident vehicles, and exemplar vehicles, as well as corresponding evidence from these sources such as roadway gouge marks, vehicle crush depth, and burn areas. However, ground-based scanners require expensive, bulky equipment to be brought on-site, along with other materials that may be required depending on the scenario. New technologies, such as the LiDAR capture capability Apple has recently released in its newer model phones, offer a chance to obtain similar results with less cumbersome and less expensive equipment. This technology embeds photos with LiDAR data that can then be exported as point cloud data using various applications available in the App Store. This paper investigates the accuracy of Apple mobile phone LiDAR in capturing the geometry of multiple exemplar vehicles.
Photogrammetry, camera matching, and model-based image matching are commonly used techniques to analyze video for accident reconstruction and other forensic applications. Investigators are often tasked with determining the speed of a vehicle, person, or other object in a video of an incident, or with taking measurements from photographs. All such calculations are based on fundamental geometric principles governing image projection inside a camera. Most treatments in the literature express the image projection equations either in very simple terms for specific cases or in compact matrix notation that is difficult to apply. The purpose of this paper is to present a geometric derivation of the image projection equations in a straightforward form that can be readily applied by a qualified investigator without the need for specialized software. In addition, a simple brute force optimization procedure is described to perform camera matching and model-based image matching.
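As an illustration of the kind of calculation involved, the sketch below projects world points through a deliberately simplified pinhole camera (rotation about a single axis, known focal length) and recovers the camera's orientation by a brute-force search of the sort described; the scene geometry and angles are invented for the demo, not taken from the paper:

```python
import numpy as np

def project(points, cam_pos, yaw, f):
    """Project Nx3 world points through a pinhole camera at cam_pos (yaw only)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])           # world-to-camera rotation about y
    pc = (points - cam_pos) @ R.T          # points in camera coordinates
    return f * pc[:, :2] / pc[:, 2:3]      # perspective division: u = f*x/z, v = f*y/z

world = np.array([[1.0, 0.0, 5.0],
                  [-1.0, 0.5, 6.0],
                  [0.5, -0.5, 4.0]])       # assumed known scene points (meters)
true_yaw = 0.12
observed = project(world, np.zeros(3), true_yaw, 1000.0)   # "measured" image points

# Brute-force camera matching: scan candidate yaw angles and keep the one
# whose projection best reproduces the observed image points (least squares).
best_yaw, best_err = None, np.inf
for yaw in np.linspace(-0.5, 0.5, 2001):   # 0.0005 rad grid
    err = np.sum((project(world, np.zeros(3), yaw, 1000.0) - observed) ** 2)
    if err < best_err:
        best_yaw, best_err = yaw, err
```

In practice the search runs over all six camera pose parameters (and possibly focal length), but the one-dimensional scan shows the principle without specialized software.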
Laser scanners are typically used in vehicle accident reconstruction to measure roadway details; however, laser scans captured near congested roadways digitize unwanted passing vehicles and produce noisy point clouds of poor quality. Drone images, on the other hand, are unable to capture reflective objects and can struggle with vertical surfaces when creating a 3D mesh for analysis. Prior research has tested the accuracy of drone-captured images processed with the commercially available Agisoft and Pix4D software; however, a research gap still exists in examining the use of RealityCapture with drone images, FARO scans, and AeroPoint ground targets for accident reconstruction. The purpose of this study was to define a procedure and methodology for combining the ability of laser scans to capture vertical details that drones cannot with the color and roadway details of drone imagery not otherwise captured by FARO.
The modern automotive industry is in the middle of a huge transformation of Electrical & Electronic (E/E) system design in order to meet future mobility trends driven by autonomy, electrification, and connectivity. Autonomy (as defined by SAE J3016) spans six levels of driving automation (0 through 5) and will bring an explosion of sensors and computing power; functional safety and cybersecurity constraints will increase as well. Electrification implies replacing energy from thermal sources with electricity from the grid and will require tighter integration between sub-systems and components, along with higher-speed real-time controls. Connectivity will provide huge data mining capability, along with enhanced off-board communication (so-called "Vehicle-to-Everything" or V2X) and remote software updates (FOTA).
As centralization of automotive E/E architectures becomes reality for future vehicles, it is crucial that existing assets be reused in the most efficient and effective manner. We report on our experience developing a new centralized E/E architecture for a propulsion domain, and migrating the corresponding propulsion elements of an existing decentralized, CAN-based architecture to a prototype of the centralized propulsion domain. Our migration adopts automotive Ethernet and supporting standards as a next-generation communications backbone technology; a next-generation computation platform from automotive supplier NXP; and a new automotive virtualization solution from OpenSynergy. We discuss aspects of legacy software re-use and adaptation; modification of vehicle HiL simulation models used in testing; existing vendor tool support; and implications arising from functional safety and the ISO 26262 standard.
Under operating conditions, the durability of vehicles is estimated by the resource (service life), which also depends on the frequency of maintenance. The purpose of the study is to establish the impact of fuel consumption on the resource and the frequency of car maintenance. Mathematical modeling was used to obtain analytical formulas that relate the resource to fuel consumption. It is proposed to adjust the resource and the frequency of maintenance on the same principles as those used to adjust fuel consumption. The deviation of the actual fuel consumption from the standard value can be obtained experimentally or by using the adjustment factors specified in the methodology for rationing fuels and lubricants. The values of track fuel consumption and resource change are calculated for the five categories of roads. For ease of application of the method, coefficients of resource change and fuel-consumption change were introduced, and a graphical relationship between these coefficients was obtained.
The advancement of E/E architecture has made the modern EV a sophisticated, high-performance computing system. High-level AD/ADAS functions and applications require real-time and accurate perception, localization, fusion, planning, and control using computer vision or deep learning-based AI algorithms to ensure functional safety and a comfortable user experience. Legacy AD/ADAS development at OEMs centers around developing functions on ECUs using services provided by AUTOSAR CP (Classic Platform) to meet automotive-grade and mass-production requirements. Additionally, AUTOSAR AP (Adaptive Platform) has been maturing and provides richer services and function abstractions. Still, application development and the supporting system software are closely coupled, which makes application development and enhancement less scalable and flexible, resulting in longer development cycles and slower time to market.
In the agriculture industry, the increasing use of vehicle Internet of Things (IoT), telematics, and emerging technologies is resulting in smarter machines with connected solutions. Inter- and intra-vehicle communication - vehicle to vehicle, and within the vehicle from Electronic Control Unit (ECU) to ECU or ECU to sensor - has increased the required flow of data, in turn increasing the need for secure communication. In this paper, we focus on the functional verification and validation of a secure Controller Area Network (CAN) for intra-vehicular communication to establish confidentiality, integrity, authenticity, and freshness of data, supporting safety, advanced automation, protection of sensitive data, and Intellectual Property (IP) protection. Network security algorithms and software security processes are the layers that support this goal. A test environment was set up with secured hardware and simulated models, and test scenarios and test data were created to achieve our objective.
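To illustrate the kind of mechanism under test, the sketch below adds a freshness counter and a truncated message authentication code to a CAN payload, giving integrity, authenticity, and replay protection in the style of AUTOSAR SecOC. SecOC typically specifies AES-CMAC and truncates the freshness value to fit the frame; HMAC-SHA256 and a full 8-byte counter are used here only to keep the example self-contained with the standard library:

```python
import hashlib
import hmac
import struct

KEY = b"\x01" * 16  # shared secret provisioned to both ECUs (hypothetical value)

def protect(can_id: int, data: bytes, counter: int) -> bytes:
    """Build an authenticated payload: counter || data || truncated MAC."""
    msg = struct.pack(">IQ", can_id, counter) + data
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]  # 32-bit truncated MAC
    return struct.pack(">Q", counter) + data + tag

def verify(can_id: int, frame: bytes, last_counter: int):
    """Return the data if the frame is authentic and fresh, else None."""
    counter = struct.unpack(">Q", frame[:8])[0]
    data, tag = frame[8:-4], frame[-4:]
    if counter <= last_counter:
        return None  # stale or replayed frame fails the freshness check
    expected = hmac.new(KEY, struct.pack(">IQ", can_id, counter) + data,
                        hashlib.sha256).digest()[:4]
    return data if hmac.compare_digest(expected, tag) else None
```

Binding the CAN identifier into the MAC means a valid frame cannot be replayed on a different message ID, and the monotonically increasing counter rejects recorded frames; confidentiality would require an additional encryption layer.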
Computer vision (CV), a form of artificial intelligence (AI), is a foundational technology within the automotive industry for an increasing number of applications including active safety, motion control, and driver distraction monitoring. State-of-the-art CV models often rely on the use of Deep Neural Networks (DNNs) to achieve high levels of accuracy. While necessary for their accuracy, DNNs are computationally complex. For example, when compared to other AI model architectures, they have a large memory footprint and must perform a high number of operations to create an output or prediction. To meet performance goals in the face of such constraints, high performance processors such as Graphics Processing Units (GPUs) are typically required to run CV models on-board automobiles, creating a major hurdle to the deployment of CV applications.
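A back-of-envelope calculation shows where the memory footprint and operation counts come from; the layer shape below is a hypothetical example, not taken from any specific model:

```python
# Parameter count and multiply-accumulate (MAC) count for one hypothetical
# 3x3 convolution layer of a CV model.
k = 3                      # kernel size (assumed)
c_in, c_out = 64, 128      # input/output channels (assumed)
h, w = 224, 224            # output feature-map size (assumes "same" padding)

params = k * k * c_in * c_out        # weight count, biases ignored
macs = h * w * c_out * k * k * c_in  # one MAC per weight per output location

weight_mb = params * 4 / 1e6         # float32 storage for this layer alone
print(params, macs, round(weight_mb, 2))
```

Even this single mid-network layer requires roughly 3.7 billion MACs per frame; summed over dozens of layers and run at camera frame rates, such counts are why GPUs or dedicated accelerators are typically needed on-board.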
In modern automobiles, many new complex features are enabled by software and sensors. When combined with the variability of real-world environments and scenarios, validation of this ever-increasing amount of software becomes complex, costly, and time-consuming. This challenges automakers' ability to quickly and reliably develop and deploy the new features and experiences that their customers want. While traditional validation methods and modern virtual validation environments can cover most new feature testing, certain real-world scenarios remain challenging to cover, including variation in weather conditions, roadway environments, driver usage, and complex vehicle interactions. The current approach to covering these scenarios often relies on data collected from long vehicle test trips that try to capture as many of these unique situations as possible. These test trips contribute significantly to the validation cost and time of new features.
To accurately, conveniently, and safely test the mechanical response of lithium-ion batteries under various charge and discharge conditions, the development of a corresponding test bench is proposed. The test bench mainly comprises a mechanical system and a measurement and control system. For the measurement and control system, monitoring and test software was developed that cooperates with the relevant hardware to achieve reliable and convenient data acquisition.