Search Results

Technical Paper

Pycrash: An Open-Source Tool for Accident Reconstruction

2021-04-06
2021-01-0896
Accident reconstructionists routinely rely on computer software to perform analyses. While there are a variety of software packages available to accident reconstructionists, many rely on custom spreadsheet-based applications for their analyses. Purchased packages provide an improved interface and the ability to produce sophisticated animations of vehicle motion, but can be cost prohibitive. Pycrash is a free, open-source Python-based software package that, in its current state, can perform basic accident reconstruction calculations, automate data analyses, simulate single-vehicle motion, and perform impulse-momentum-based analyses of vehicle collisions. In this paper, the current capabilities of Pycrash are illustrated and its accuracy is assessed by comparison with matching simulations performed in PC-Crash.
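The impulse-momentum analysis mentioned in this abstract can be illustrated, for the simplest collinear case, with a short sketch. This is not Pycrash code; the function name and the example masses, speeds, and restitution value are illustrative assumptions.

```python
# Minimal sketch (not Pycrash itself) of an impulse-momentum collision
# calculation for a collinear two-vehicle impact with restitution.

def collinear_impact(m1, v1, m2, v2, e):
    """Return post-impact speeds (m/s) for a collinear collision.

    m1, m2 : vehicle masses (kg)
    v1, v2 : pre-impact speeds (m/s), positive in the same direction
    e      : coefficient of restitution (0 = fully plastic)
    """
    # Common velocity from conservation of momentum
    v_common = (m1 * v1 + m2 * v2) / (m1 + m2)
    # Restitution redistributes a fraction of the closing speed
    v1_post = v_common - e * m2 * (v1 - v2) / (m1 + m2)
    v2_post = v_common + e * m1 * (v1 - v2) / (m1 + m2)
    return v1_post, v2_post

# Illustrative example: a 1500 kg car at 15 m/s strikes a stationary
# 2000 kg SUV with e = 0.1
print(collinear_impact(1500.0, 15.0, 2000.0, 0.0, 0.1))
```

Momentum is conserved by construction, and the post-impact separation speed equals `e` times the closing speed, which is the standard planar impulse-momentum formulation reduced to one dimension.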
Technical Paper

Video Based Simulation of Daytime and Nighttime Rain Affecting Driver Visibility

2021-04-06
2021-01-0854
This paper presents a methodology for generating video-realistic computer-simulated rain and the effect rain has on driver visibility. Rain was considered under three different rain rates (light, moderate, and heavy) and in nighttime and daytime conditions. The techniques and methodologies presented in this publication rely on previously published techniques of video tracking and projection mapping. Neale et al. [2004, 2016] showed how processes of video tracking can convert two-dimensional image data from video images into three-dimensional scaled computer-generated environments. Further, Neale et al. [2013, 2016] demonstrated that video projection mapping, when combined with video tracking, enables the production of video-realistic simulated environments, where videographic and photographic baseline footage is combined with three-dimensional computer geometry.
Technical Paper

Calibrating Digital Imagery in Limited Time Conditions of Dawn, Dusk and Twilight

2021-04-06
2021-01-0855
This paper presents a methodology for accurately representing dawn and dusk lighting conditions (twilight) through photographs and video recordings. Generating calibrated photographs and video during twilight conditions can be difficult, since the available light changes rapidly over time. In contrast, during nighttime conditions, when the sun is no longer contributing light directly or indirectly through the sky dome, matching a specific time of night is not as relevant, as man-made lights are the dominant source of illumination. Thus, the initial setup, calibration, and collection of calibrated video when it is dark is not under a time constraint, but during twilight conditions the time frame may be narrow. This paper applies existing methods for capturing calibrated footage at night, but develops a method for adjusting the footage in the event that matching an exact time during twilight is necessary.
Technical Paper

Accuracies in Single Image Camera Matching Photogrammetry

2021-04-06
2021-01-0888
Forensic disciplines are called upon to locate evidence from a single camera or static video camera, and both the angle of incidence and resolution can limit the accuracy of single image photogrammetry. This research compares a baseline of known 3D data points representing evidence locations to evidence locations determined through single image photogrammetry, and evaluates the effect that object resolution (measured in pixels) and angle of incidence have on accuracy. Solutions achieved using an automated process, where a camera match alignment is calculated from common points in the 2D imagery and the 3D environment, were compared to solutions achieved by a more manual method of iteratively adjusting the camera’s position, orientation, and field of view until an alignment is achieved. This research independently utilizes both methods to achieve photogrammetry solutions and to locate objects within a 3D environment.
Technical Paper

Visualization of Driver and Pedestrian Visibility in Virtual Reality Environments

2021-04-06
2021-01-0856
In 2016, Virtual Reality (VR) equipment entered the mainstream scientific, medical, and entertainment industries. It became both affordable and available to the public market in the form of some of the technology’s earliest successful headsets: the Oculus Rift™ and HTC Vive™. While new equipment continues to emerge, at the time these headsets came equipped with a 100° field-of-view screen that gives the viewer a seamless 360° environment to experience, one that is non-linear in the sense that the viewer can choose where to look and for how long. The fundamental differences, however, between conventional forms of visualization, such as computer animations and graphics, and VR are subtle. A VR environment can be understood as a series of two-dimensional images stitched together into a seamless single 360° image. In this respect, it is only the number of images the viewer sees at one time that separates a conventional visualization from a VR experience.
Technical Paper

Speed Analysis from Video: A Method for Determining a Range in the Calculations

2021-04-06
2021-01-0887
This paper introduces a method for calculating vehicle speed, and the uncertainty range in speed, from video footage. The method considers uncertainty in two areas: the uncertainty in locating the vehicle’s positions, and the uncertainty in the time interval between them. An abacus-style timing light was built to determine the frame time, and the uncertainty of time between frames, for three different cameras. The first camera had a constant frame rate, the second had minor frame rate variability, and the third had more significant frame rate variability. Video of an instrumented vehicle traveling at different, but known, speeds was recorded by all three cameras. Photogrammetry was conducted to determine a best fit for the vehicle positions. Deviation from that best-fit position that still produced an acceptable range was also explored. Video metadata reported by iNPUT-ACE and MediaInfo was incorporated into the study.
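The two uncertainty sources described in this abstract combine into a simple bounding calculation: the slowest plausible speed pairs the shortest distance with the longest time interval, and vice versa. The function name and the tolerance values below are illustrative assumptions, not taken from the paper.

```python
# Sketch of a speed range from video: distance between two photogrammetry
# positions divided by the frame-derived time interval, with tolerances
# on both. All values are illustrative.

def speed_range(distance_m, dist_tol_m, dt_s, dt_tol_s):
    """Return (min, nominal, max) speed in m/s given distance and
    time-interval tolerances."""
    v_nom = distance_m / dt_s
    v_min = (distance_m - dist_tol_m) / (dt_s + dt_tol_s)  # worst-case slow
    v_max = (distance_m + dist_tol_m) / (dt_s - dt_tol_s)  # worst-case fast
    return v_min, v_nom, v_max

# 10 m traveled over a nominal 0.4 s interval (12 frames at 30 fps),
# with +/- 0.2 m position tolerance and +/- 5 ms timing tolerance
print(speed_range(10.0, 0.2, 0.4, 0.005))
```

The nominal speed here is 25 m/s; the bracketing values quantify how position and frame-timing uncertainty widen the reported range.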
Journal Article

Pedestrian Impact Analysis of Side-Swipe and Minor Overlap Conditions

2021-04-06
2021-01-0881
This paper presents analyses of 21 real-world pedestrian-versus-vehicle collisions that were video recorded by vehicle dash-mounted cameras or surveillance cameras. These pedestrian collisions have in common an impact configuration where the pedestrian was at the side of the vehicle, or at the front corner of the vehicle with minimal overlap (less than one foot). These impacts would not be considered frontal impacts [1], and as a result, determining the speed of the vehicle by existing methods that incorporate the pedestrian’s post-impact travel distance, or that assess vehicle damage, would not be applicable. This research examined the specific interaction of non-frontal, side-impact, and minimal-overlap pedestrian impact configurations to assess the relationship between the speed of the vehicle at impact, the motion of the pedestrian before and after impact, and the associated post-impact travel distances.
Technical Paper

Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry

2019-04-02
2019-01-0423
The accident reconstruction community has previously relied upon photographs and site visits to recreate a scene. This method is difficult in instances where the site has changed or is not accessible. In 2017, the United States Geological Survey (USGS) released historical 3D point clouds (LiDAR), allowing access to digital 3D data without visiting the site. This offers many unique benefits to the reconstruction community, including safety, budget, time, and historical preservation. This paper presents a methodology for collecting this data and using it in conjunction with aerial imagery and camera matching photogrammetry to create 3D computer models of the scene without a site visit.
Technical Paper

The Application of Augmented Reality to Reverse Camera Projection

2019-04-02
2019-01-0424
In 1980, research by Thebert introduced the use of photography equipment and transparencies for onsite reverse camera projection photogrammetry [1]. This method involved taking a film photograph through the development process and creating a reduced-size transparency to insert into the camera’s viewfinder. The photographer was then able to see both the image contained on the transparency and the actual scene directly through the camera’s viewfinder. By properly matching the physical orientation and positioning of the camera, it was possible to visually align the image on the transparency with the physical world as viewed through the camera. The result was a solution for where the original camera would have been located when the photograph was taken. With the original camera reverse-located, any evidence in the transparency that is no longer present at the site could then be replaced to match the evidence’s location in the transparency.
Journal Article

An Optimization of Small Unmanned Aerial System (sUAS) Image Based Scanning Techniques for Mapping Accident Sites

2019-04-02
2019-01-0427
Small unmanned aerial systems have gained prominence in their use as tools for mapping the 3-dimensional characteristics of accident sites. Typically, the process of mapping an accident site involves taking a series of overlapping, high resolution photographs of the site, and using photogrammetric software to create a point cloud or mesh of the site. This process, known as image-based scanning, is explored and analyzed in this paper. A mock accident site was created that included a stopped vehicle, a bicycle, and a ladder. These objects represent items commonly found at accident sites. The accident site was then documented with several different unmanned aerial vehicles at differing altitudes, with differing flight patterns, and with different flight control software. The photographs taken with the unmanned aerial vehicles were then processed with photogrammetry software using different methods to scale and align the point clouds.
Journal Article

Speed Analysis of Yawing Passenger Vehicles Following a Tire Tread Detachment

2019-04-02
2019-01-0418
This paper presents yaw testing of vehicles with tread removed from tires at various locations. A 2004 Chevrolet Malibu and a 2003 Ford Expedition were included in the test series. The vehicles were accelerated up to speed and a large steering input was made to induce yaw. Speed at the beginning of the tire mark evidence varied between 33 mph and 73 mph. Both vehicles were instrumented to record over-the-ground speed, steering angle, yaw angle, and in some tests, wheel speeds. The tire marks on the roadway were surveyed and photographed. The Critical Speed Formula has long been used by accident reconstructionists for estimating a vehicle’s speed at the beginning of yaw tire marks. The method has been validated by previous researchers to calculate the speed of a vehicle with four intact tires. This research extends the Critical Speed Formula to include yawing vehicles following a tread detachment event.
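The Critical Speed Formula referenced in this abstract is conventionally written, in US units, as S = √(15 f R), where S is speed in mph, R is the yaw-mark radius in feet, and f is the drag factor. A minimal sketch, with illustrative input values:

```python
import math

# Sketch of the Critical Speed Formula as conventionally applied to
# yaw marks (US units). Radius and drag factor below are illustrative.

def critical_speed_mph(radius_ft, drag_factor):
    """Speed (mph) at the start of a yaw mark of given radius (ft),
    for a given drag factor f."""
    return math.sqrt(15.0 * radius_ft * drag_factor)

# A 300 ft yaw-mark radius with f = 0.75 gives roughly 58 mph
print(round(critical_speed_mph(300.0, 0.75), 1))
```

This is equivalent to the SI relation v = √(f·g·R); the factor 15 absorbs the unit conversions and g. The paper's contribution is extending this relation to vehicles with a detached tread, which this sketch does not address.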
Technical Paper

Low Speed Override of Passenger Vehicles with Heavy Trucks

2019-04-02
2019-01-0430
In low speed collisions (under 15 mph) that involve a heavy truck impacting the rear of a passenger vehicle, it is likely that the front bumper of the heavy truck will override the rear bumper beam of the passenger vehicle, creating an override/underride impact configuration. There is limited data available for study when attempting to quantify vehicle damage and crash dynamics in low-speed override/underride impacts. Low speed impact tests were conducted to provide new data for passenger vehicle dynamics and damage assessment for low speed override/underride rear impacts to passenger vehicles. Three tests were conducted, with a tractor-trailer impacting three different passenger vehicles at 5 mph and 10 mph. This paper presents data from these three tests in order to expand the available data set for low speed override/underride collisions.
Technical Paper

Braking and Swerving Capabilities of Three-Wheeled Motorcycles

2019-04-02
2019-01-0413
This paper reports testing and analysis of the braking and swerving capabilities of on-road, three-wheeled motorcycles. A three-wheeled vehicle has handling and stability characteristics that differ both from two-wheeled motorcycles and from four-wheeled vehicles. The data reported in this paper will enable accident reconstructionists to consider these different characteristics when analyzing a three-wheeled motorcycle operator’s ability to brake or swerve to avoid a crash. The testing in this study utilized two riders operating two Harley-Davidson Tri-Glide motorcycles with two wheels in the rear and one in the front. Testing was also conducted with ballast to explore the influence of passenger or cargo weight. Numerous studies have documented the braking capabilities of two-wheeled motorcycles with riders of varying skill levels and with a range of braking systems.
Technical Paper

Lateral and Tangential Accelerations of Left Turning Vehicles from Naturalistic Observations

2019-04-02
2019-01-0421
When reconstructing collisions involving left turning vehicles at intersections, accident reconstructionists are often required to determine the relative timing and spacing between two vehicles involved in such a collision. This time-space analysis frequently involves determining or prescribing a path and acceleration profile for the left turning vehicle. Although numerous studies have examined the straight-line acceleration of vehicles, only two studies have presented the tangential and lateral acceleration of left turning vehicles. This paper expands on the results of those limited studies and presents a methodology to automatically detect and track vehicles in a video file. The authors made observations of left turning vehicles at three intersections. Each intersection incorporated permissive green turn phases for left turning vehicles.
Journal Article

Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy

2018-04-03
2018-01-0516
The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as a reference for locating evidence that is no longer physically available at the scene such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that if unchanged between the time of accident and time of analysis are beneficial to the photogrammetric process. In instances where these scene features are limited or do not exist, achieving accurate photogrammetric solutions can be challenging.
Technical Paper

Mid-Range Data Acquisition Units Using GPS and Accelerometers

2018-04-03
2018-01-0513
In the 2016 SAE publication “Data Acquisition using Smart Phone Applications,” Neale et al. evaluated the accuracy of basic fitness applications in tracking position and elevation using the GPS and accelerometer technology contained within the smart phone itself [1]. This paper further develops that research by evaluating mid-level applications. Mid-level applications are defined as ones that use a phone’s internal accelerometer and record data at 1 Hz or greater. The application can also utilize add-on devices, such as a Bluetooth-enabled GPS antenna, which reports at a higher sample rate (10 Hz) than the phone by itself. These mid-level applications are still relatively easy to use, lightweight, and affordable [2], [3], [4], but have the potential for higher data sample rates for the accelerometer (due to the software) and GPS signal (due to the hardware). In this paper, Harry’s Lap Timer™ was evaluated as a smart phone mid-level application.
Technical Paper

An Analytical Review and Extension of Two Decades of Research Related to PC-Crash Simulation Software

2018-04-03
2018-01-0523
PC-Crash is a vehicular accident simulation software that is widely used by the accident reconstruction community. The goal of this article is to review the prior literature that has addressed the capabilities of PC-Crash and its accuracy and reliability for various applications (planar collisions, rollovers, and human motion). In addition, this article aims to add additional analysis of the capabilities of PC-Crash for simulating planar collisions and rollovers. Simulation analysis of five planar collisions originally reported and analyzed by Bailey [2000] are reexamined. For all five of these collisions, simulations were obtained with the actual impact speeds that exhibited excellent visual agreement with the physical evidence. These simulations demonstrate that, for each case, the PC-Crash software had the ability to generate a simulation that matched the actual impact speeds and the known physical evidence.
Technical Paper

An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable

2017-03-28
2017-01-1422
Photogrammetry and the accuracy of a photogrammetric solution is reliant on the quality of photographs and the accuracy of pixel location within the photographs. A photograph with lens distortion can create inaccuracies within a photogrammetric solution. Due to the curved nature of a camera’s lens(es), the light coming through the lens and onto the image sensor can have varying degrees of distortion. There are commercially available software titles that rely on a library of known cameras, lenses, and configurations for removing lens distortion. However, to use these software titles, the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera- and lens-specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present. This method will be referred to as the straight-line method.
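The straight-line idea can be illustrated with a toy one-parameter radial distortion model: choose the coefficient whose correction makes an imaged straight edge straightest. This is a hedged sketch, not the paper's implementation; the model, the grid search, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def radial(points, k):
    """Apply the one-parameter radial model p' = p * (1 + k * r^2)
    to centered, normalized (N, 2) points."""
    r2 = (points ** 2).sum(axis=1, keepdims=True)
    return points * (1.0 + k * r2)

def straightness(points):
    """Max perpendicular deviation of points from the chord through
    the first and last point (0 for a perfect line)."""
    a, b = points[0], points[-1]
    d = (b - a) / np.linalg.norm(b - a)
    offsets = points - a
    perp = offsets - (offsets @ d)[:, None] * d
    return np.linalg.norm(perp, axis=1).max()

def best_k(points, candidates):
    """Scan candidate coefficients; return the k whose correction
    makes the point run straightest."""
    return min(candidates, key=lambda k: straightness(radial(points, k)))

# Synthetic check: distort a straight segment with k = 0.2, then search
# for the correcting coefficient (expected to be negative).
x = np.linspace(-0.8, 0.8, 21)
line = np.stack([x, 0.3 * x + 0.1], axis=1)
distorted = radial(line, 0.2)
k_hat = best_k(distorted, np.linspace(-0.5, 0.5, 101))
```

Real implementations fit multi-parameter distortion models over many edges simultaneously; the single-coefficient scan here only conveys the principle that straight-world lines constrain the distortion estimate.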
Technical Paper

Video Analysis of Motorcycle and Rider Dynamics During High-Side Falls

2017-03-28
2017-01-1413
This paper investigates the dynamics of four motorcycle crashes that occurred on or near a curve (Edwards Corner) on a section of the Mulholland Highway called “The Snake.” This section of highway is located in the Santa Monica Mountains of California. All four accidents were captured on video and they each involved a high-side fall of the motorcycle and rider. This article reports a technical description and analysis of these videos in which the motion of the motorcycles and riders is quantified. To aid in the analysis, the authors mapped Edwards Corner using both a Sokkia total station and a Faro laser scanner. This mapping data enabled analysis of the videos to determine the initial speed of the motorcycles, to identify where in the curve particular rider actions occurred, to quantify the motion of the motorcycles and riders, and to characterize the roadway radius and superelevation throughout the curve.
Technical Paper

Nighttime Videographic Projection Mapping to Generate Photo-Realistic Simulation Environments

2016-04-05
2016-01-1415
This paper presents a methodology for generating photo-realistic computer simulation environments of nighttime driving scenarios by combining nighttime photography and videography with video tracking [1] and projection mapping [2] technologies. Nighttime driving environments contain complex lighting conditions such as forward and signal lighting systems of vehicles, street lighting, and retro-reflective markers and signage. The high dynamic range of nighttime lighting conditions makes modeling of these systems difficult to render realistically through computer-generated techniques alone. Photography and video, especially when using high dynamic range imaging, can produce realistic representations of the lighting environments. But because video is only two-dimensional, and lacks the flexibility of a three-dimensional computer-generated environment, the scenarios that can be represented are limited to the specific scenario recorded with video.