Search Results

Journal Article

Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy

2018-04-03
2018-01-0516
The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as a reference for locating evidence that is no longer physically available at the scene, such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that, if unchanged between the time of the accident and the time of analysis, are beneficial to the photogrammetric process. In instances where these scene features are limited or do not exist, achieving accurate photogrammetric solutions can be challenging.
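
The core of camera matching is recovering the original camera's position and orientation from correspondences between surviving scene features and their locations in the photograph. Below is a minimal sketch of that resectioning step, assuming OpenCV; all coordinates and intrinsics are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the camera-resectioning step behind camera-matching
# photogrammetry: given 3D coordinates of unchanged scene features (e.g.
# from a survey or LiDAR) and their pixel locations in the accident
# photograph, recover the camera pose. All values below are hypothetical.
import numpy as np
import cv2

# 3D scene features that have not changed since the accident (meters)
object_points = np.array([
    [0.0, 0.0, 0.0],    # corner of a sidewalk
    [12.2, 0.0, 0.0],   # base of a sign post
    [12.2, 7.3, 0.0],   # edge-of-pavement point
    [0.0, 7.3, 0.0],    # lane-line endpoint
    [6.1, 3.6, 2.5],    # top of a fence post
    [3.0, 7.3, 4.0],    # building corner
], dtype=np.float64)

# The same features located in the historical photograph (pixels)
image_points = np.array([
    [1024.0, 1502.0], [2810.0, 1490.0], [2455.0, 1120.0],
    [1380.0, 1115.0], [1905.0, 980.0], [1610.0, 640.0],
], dtype=np.float64)

# Approximate intrinsics for the original camera (assumed known or estimated)
fx = fy = 3200.0                 # focal length in pixels
cx, cy = 2000.0, 1500.0          # principal point
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # camera location in scene coordinates
print("Recovered camera position (m):", camera_position)
```
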
Technical Paper

Photogrammetric Measurement Error Associated with Lens Distortion

2011-04-12
2011-01-0286
All camera lenses contain optical aberrations as a result of the design and manufacturing processes. Lens aberrations cause distortion of the resulting image captured on film or a sensor. This distortion is inherent in all lenses because of the shape required to project the image onto film or a sensor, the materials that make up the lens, and the configuration of lenses used to achieve varying focal lengths and other photographic effects. The distortion associated with lenses can introduce errors when photogrammetric techniques are used to analyze photographs of accident scenes to determine position, scale, length, and other characteristics of evidence in a photograph. This paper evaluates how lens distortion can affect images, and how photogrammetrically measuring a distorted image can result in measurement errors.
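
Radial distortion of this kind is commonly described with the Brown-Conrady model. Here is a minimal sketch of that model, assuming NumPy; the coefficients are invented for illustration, and real values would come from calibrating the specific camera and lens.

```python
# Sketch of the radial (Brown-Conrady) distortion model often used to
# describe lens distortion. Coefficients k1, k2 here are illustrative;
# real values come from a calibration of the specific camera and lens.
import numpy as np

def distort_points(xy_norm, k1=-0.12, k2=0.03):
    """Apply radial distortion to normalized image coordinates."""
    x, y = xy_norm[:, 0], xy_norm[:, 1]
    r2 = x**2 + y**2
    factor = 1.0 + k1 * r2 + k2 * r2**2
    return np.column_stack((x * factor, y * factor))

# A straight row of points near the image edge bows inward under barrel
# distortion (negative k1), which is what introduces error when measuring
# photogrammetrically from the distorted image.
row = np.column_stack((np.linspace(-0.8, 0.8, 5), np.full(5, 0.6)))
print(distort_points(row))
```
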
Technical Paper

Nighttime Videographic Projection Mapping to Generate Photo-Realistic Simulation Environments

2016-04-05
2016-01-1415
This paper presents a methodology for generating photo-realistic computer simulation environments of nighttime driving scenarios by combining nighttime photography and videography with video tracking [1] and projection mapping [2] technologies. Nighttime driving environments contain complex lighting conditions such as forward and signal lighting systems of vehicles, street lighting, and retro-reflective markers and signage. The high dynamic range of nighttime lighting conditions makes these systems difficult to render realistically through computer-generated techniques alone. Photography and video, especially when using high dynamic range imaging, can produce realistic representations of the lighting environments. But because the video is only two-dimensional, and lacks the flexibility of a three-dimensional computer-generated environment, the scenarios that can be represented are limited to the specific scenario recorded with video.
Technical Paper

Pycrash: An Open-Source Tool for Accident Reconstruction

2021-04-06
2021-01-0896
Accident reconstructionists routinely rely on computer software to perform analyses. While there are a variety of software packages available to accident reconstructionists, many rely on custom spreadsheet-based applications for their analyses. Purchased packages provide an improved interface and the ability to produce sophisticated animations of vehicle motion but can be cost prohibitive. Pycrash is a free, open-source Python-based software package that, in its current state, can perform basic accident reconstruction calculations, automate data analyses, simulate single-vehicle motion, and perform impulse-momentum-based analyses of vehicle collisions. In this paper, the current capabilities of Pycrash are illustrated and its accuracy is assessed against matching simulations performed in PC-Crash.
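
To illustrate the kind of calculation such a package automates, here is a minimal one-dimensional impulse-momentum sketch in plain Python. This is not Pycrash's actual API; the masses, speeds, and restitution coefficient are assumed for illustration.

```python
# Illustrative 1D impulse-momentum calculation of the kind Pycrash automates
# (this is NOT Pycrash's API, just the underlying physics). A bullet vehicle
# strikes a stationary target; conservation of momentum plus a restitution
# coefficient yields post-impact speeds and delta-V for each vehicle.
m1, m2 = 2000.0, 1500.0     # vehicle masses (kg), hypothetical
v1, v2 = 11.2, 0.0          # pre-impact speeds (m/s)
e = 0.1                     # coefficient of restitution, assumed

# Conservation of momentum and the definition of restitution:
#   m1*v1 + m2*v2 = m1*v1p + m2*v2p
#   e = (v2p - v1p) / (v1 - v2)
v1p = (m1 * v1 + m2 * v2 - m2 * e * (v1 - v2)) / (m1 + m2)
v2p = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / (m1 + m2)

print(f"Delta-V bullet: {v1 - v1p:.2f} m/s")
print(f"Delta-V target: {v2p - v2:.2f} m/s")
```
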
Technical Paper

Video Based Simulation of Daytime and Nighttime Rain Affecting Driver Visibility

2021-04-06
2021-01-0854
This paper presents a methodology for generating video-realistic computer-simulated rain, and the effect rain has on driver visibility. Rain was considered under three different rain rates (light, moderate, and heavy) and in nighttime and daytime conditions. The techniques and methodologies presented in this publication rely on previously published techniques of video tracking and projection mapping. Neale et al. [2004, 2016] showed how processes of video tracking can convert two-dimensional image data from video images into three-dimensional, scaled, computer-generated environments. Further, Neale et al. [2013, 2016] demonstrated that video projection mapping, when combined with video tracking, enables the production of video-realistic simulated environments, where videographic and photographic baseline footage is combined with three-dimensional computer geometry.
Technical Paper

Using Data from a DriveCam Event Recorder to Reconstruct a Vehicle-to-Vehicle Impact

2013-04-08
2013-01-0778
This paper reports a method for analyzing data from a DriveCam unit to determine impact speeds and velocity changes in vehicle-to-vehicle impacts. A DriveCam unit is an aftermarket, in-vehicle, event-triggered video and data recorder. When the unit senses accelerations over a preset threshold, an event is triggered and the unit records video from two camera views, accelerations along three directions, and the vehicle speed from a GPS sensor. In conducting the research reported in this paper, the authors ran four front-to-rear crash tests with two DriveCam-equipped vehicles. For each test, the front of the bullet vehicle impacted the rear of the stationary target vehicle. Each of the test vehicles was impacted in the rear twice: once at a speed of around 10 mph and again at a speed of around 25 mph. The accuracy of the DriveCam acceleration data was assessed by comparing it to the data from other in-vehicle instrumentation.
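
Recovering a velocity change from recorded accelerations amounts to integrating the crash pulse over time. A minimal sketch follows, assuming NumPy; the sample rate and idealized half-sine pulse are invented for illustration and are not actual DriveCam output.

```python
# Sketch of recovering delta-V from event-recorder acceleration data by
# numerical integration (trapezoidal rule). Sample rate and pulse shape
# are hypothetical, not actual DriveCam output.
import numpy as np

rate_hz = 100.0                            # assumed sample rate
t = np.arange(0.0, 0.25, 1.0 / rate_hz)    # 250 ms crash pulse
accel_g = 4.0 * np.sin(np.pi * t / 0.25)   # idealized half-sine pulse (g)

accel = accel_g * 9.81                     # convert to m/s^2
dt = 1.0 / rate_hz
delta_v = float(np.sum((accel[1:] + accel[:-1]) * 0.5 * dt))  # trapezoid rule
print(f"Delta-V: {delta_v:.2f} m/s ({delta_v * 2.237:.1f} mph)")
```
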
Technical Paper

Comparison of Calculated Speeds for a Yawing and Braking Vehicle to Full-Scale Vehicle Tests

2012-04-16
2012-01-0620
Accurately reconstructing the speed of a yawing and braking vehicle requires an estimate of the varying rates at which the vehicle decelerated. This paper explores the accuracy of several approaches to making this calculation. The first approach uses the Bakker-Nyborg-Pacejka (BNP) tire force model in conjunction with the Nicolas-Comstock-Brach (NCB) combined tire force equations to calculate a yawing and braking vehicle's deceleration rate. Application of this model in a crash reconstruction context will typically require the use of generic tire model parameters, and so the research in this paper explored the accuracy of using such generic parameters. The paper then examines a simpler equation for calculating a yawing and braking vehicle's deceleration rate, which was proposed by Martinez and Schlueter in a 1996 paper. It is demonstrated that this equation exhibits physically unrealistic behavior that precludes it from being used to accurately determine a vehicle's deceleration rate.
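
For context, the BNP model underlies what is now commonly called the Magic Formula. Below is a minimal sketch of its normalized form; the B, C, D, E coefficients are generic illustrative values of the kind a reconstructionist might adopt absent tire-specific data, not values from the paper.

```python
# Sketch of the Magic Formula tire model associated with Bakker, Nyborg,
# and Pacejka (BNP), in normalized form with generic coefficients.
import math

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Normalized tire force as a function of a slip quantity."""
    Bx = B * slip
    return D * math.sin(C * math.atan(Bx - E * (Bx - math.atan(Bx))))

# Force peaks at moderate slip and falls off toward a full slide, which is
# why a yawing, braking vehicle's deceleration rate varies as it rotates.
for s in (0.02, 0.05, 0.10, 0.20, 0.50, 1.00):
    print(f"slip = {s:4.2f} -> normalized force = {magic_formula(s):.3f}")
```
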
Technical Paper

Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry

2019-04-02
2019-01-0423
The accident reconstruction community has previously relied upon photographs and site visits to recreate a scene. This method is difficult in instances where the site has changed or is not accessible. In 2017, the United States Geological Survey (USGS) released historical 3D point clouds (LiDAR), allowing access to digital 3D data without visiting the site. This offers many unique benefits to the reconstruction community, including safety, budget, time, and historical preservation. This paper presents a methodology for collecting this data and using it in conjunction with aerial imagery and camera-matching photogrammetry to create 3D computer models of the scene without a site visit.
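
As a starting point, a USGS LiDAR tile can be loaded into a working point cloud with a few lines of Python. The sketch below assumes the laspy package (plus a LAZ backend such as lazrs for compressed tiles); the filename is a hypothetical placeholder.

```python
# Sketch of loading a USGS LiDAR tile for scene reconstruction, assuming
# the laspy package. The filename below is hypothetical.
import numpy as np
import laspy

las = laspy.read("USGS_LPC_example_tile.laz")
points = np.column_stack((las.x, las.y, las.z))   # georeferenced coordinates

# Ground-classified returns (ASPRS class 2) are usually the useful subset
# for building the roadway surface of an accident-scene model.
ground = points[np.asarray(las.classification) == 2]
print(f"{len(ground):,} ground points of {len(points):,} total")
```
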
Technical Paper

The Application of Augmented Reality to Reverse Camera Projection

2019-04-02
2019-01-0424
In 1980, research by Thebert introduced the use of photography equipment and transparencies for onsite reverse camera projection photogrammetry [1]. This method involved taking a film photograph through the development process and creating a reduced-size transparency to insert into the camera's viewfinder. The photographer was then able to see both the image contained on the transparency and the actual scene directly through the camera's viewfinder. By properly matching the physical orientation and positioning of the camera, it was possible to visually align the image on the transparency to the physical world as viewed through the camera. The result was a solution for where the original camera would have been located when the photograph was taken. With the original camera reverse-located, any evidence in the transparency that is no longer present at the site could then be placed back to match the evidence's location in the transparency.
Technical Paper

Video Projection Mapping Photogrammetry through Video Tracking

2013-04-08
2013-01-0788
This paper examines a method for generating a scaled three-dimensional computer model of an accident scene from video footage. This method, which combines the previously published methods of video tracking and camera projection, includes automated mapping of physical evidence through rectification of each frame. Video tracking is a photogrammetric technique for obtaining three-dimensional data from a scene using video and was described in a 2004 publication titled “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction” (SAE 2004-01-1221).
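
Rectifying a frame means mapping the roadway plane in the image to scaled scene coordinates. A minimal sketch of that step via a planar homography follows, assuming OpenCV; the four control points and the tire-mark pixel are hypothetical.

```python
# Sketch of per-frame rectification: a planar homography maps pixel
# evidence locations in a video frame to scaled roadway coordinates.
import numpy as np
import cv2

# Pixel locations of four roadway features in one video frame...
frame_pts = np.array([[412, 880], [1510, 872], [1298, 540], [640, 548]],
                     dtype=np.float32)
# ...and the same features in scaled scene coordinates (meters)
scene_pts = np.array([[0.0, 0.0], [3.7, 0.0], [3.7, 15.0], [0.0, 15.0]],
                     dtype=np.float32)

H, _ = cv2.findHomography(frame_pts, scene_pts)

# Map a tire-mark point seen in the frame onto the scene plane
mark_px = np.array([[[900.0, 700.0]]], dtype=np.float32)
mark_scene = cv2.perspectiveTransform(mark_px, H)
print("Tire mark at (m):", mark_scene.ravel())
```
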
Technical Paper

Determining Position and Speed through Pixel Tracking and 2D Coordinate Transformation in a 3D Environment

2016-04-05
2016-01-1478
This paper presents a methodology for determining the position and speed of objects such as vehicles, pedestrians, or cyclists that are visible in video footage captured with only one camera. Objects are tracked in the video footage based on the change in pixels that represent the object moving. Commercially available programs such as PFTrack™ and Adobe After Effects™ contain automated pixel tracking features that record the position of the pixel, over time, two-dimensionally, using the video's resolution as a Cartesian coordinate system. The coordinate data of the pixel over time can then be transformed into three-dimensional data by ray tracing the pixel coordinates onto three-dimensional geometry of the same scene that is visible in the video footage background.
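
The 2D-to-3D step reduces to casting a ray from the camera through the tracked pixel and intersecting it with scene geometry. The sketch below does this against a flat ground plane, assuming NumPy; the intrinsics, pose, and pixel coordinates are invented for illustration, and a real scene would use surveyed 3D geometry rather than a plane.

```python
# Sketch of transforming a tracked 2D pixel into a 3D position by casting
# a ray through the pixel and intersecting the z = 0 ground plane.
import numpy as np

# Camera intrinsics (hypothetical): focal length and principal point, pixels
K = np.array([[1800.0, 0.0, 960.0],
              [0.0, 1800.0, 540.0],
              [0.0, 0.0, 1.0]])

# Camera pose (hypothetical): 6 m above the road, pitched 15 degrees down.
# World axes: x right, y forward, z up; camera axes: x right, y down, z forward.
cam_pos = np.array([0.0, -20.0, 6.0])
s, c = np.sin(np.deg2rad(15.0)), np.cos(np.deg2rad(15.0))
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -s, c],
              [0.0, -c, -s]])  # columns are the camera axes in world coordinates

def pixel_to_ground(u, v):
    """Cast a ray through pixel (u, v) and intersect the z = 0 ground plane."""
    d_cam = np.linalg.solve(K, np.array([u, v, 1.0]))   # ray in camera frame
    d_world = R @ d_cam                                 # rotate to world frame
    t = -cam_pos[2] / d_world[2]                        # ray parameter at z = 0
    return cam_pos + t * d_world

# Tracking the same pixel across two frames 1/30 s apart gives position and speed
p0 = pixel_to_ground(980.0, 700.0)
p1 = pixel_to_ground(1010.0, 705.0)
speed = float(np.linalg.norm(p1 - p0)) * 30.0
print("Position (m):", np.round(p1, 2), " speed (m/s):", round(speed, 1))
```
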
Journal Article

An Optimization of Small Unmanned Aerial System (sUAS) Image Based Scanning Techniques for Mapping Accident Sites

2019-04-02
2019-01-0427
Small unmanned aerial systems have gained prominence in their use as tools for mapping the three-dimensional characteristics of accident sites. Typically, the process of mapping an accident site involves taking a series of overlapping, high-resolution photographs of the site and using photogrammetric software to create a point cloud or mesh of the site. This process, known as image-based scanning, is explored and analyzed in this paper. A mock accident site was created that included a stopped vehicle, a bicycle, and a ladder. These objects represent items commonly found at accident sites. The accident site was then documented with several different unmanned aerial vehicles at differing altitudes, with differing flight patterns, and with different flight control software. The photographs taken with the unmanned aerial vehicles were then processed with photogrammetry software using different methods to scale and align the point clouds.
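
One quantity that ties flight altitude to scan resolution is the ground sample distance (GSD). The sketch below computes it from basic sensor geometry; the sensor and lens parameters are generic illustrative values, not tied to the aircraft used in the paper.

```python
# Sketch of a ground-sample-distance (GSD) estimate: the ground footprint
# of one pixel, which links flight altitude to the resolution of an
# image-based scan. Sensor parameters below are illustrative.
def ground_sample_distance_cm(altitude_m, focal_mm=8.8,
                              sensor_width_mm=13.2, image_width_px=5472):
    """Ground footprint of one pixel, in centimeters."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

for alt in (30, 60, 120):   # typical mapping altitudes in meters
    print(f"{alt:>4} m AGL -> {ground_sample_distance_cm(alt):.2f} cm/px")
```
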
Technical Paper

An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable

2017-03-28
2017-01-1422
Photogrammetry and the accuracy of a photogrammetric solution are reliant on the quality of photographs and the accuracy of pixel location within the photographs. A photograph with lens distortion can create inaccuracies within a photogrammetric solution. Due to the curved nature of a camera's lens(es), the light coming through the lens and onto the image sensor can have varying degrees of distortion. There are commercially available software titles that rely on a library of known cameras, lenses, and configurations for removing lens distortion. However, to use these software titles, the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera- and lens-specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present. This method will be referred to as the straight-line method.
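
A minimal sketch of the straight-line idea: pick the radial distortion coefficient that makes points digitized along a physically straight edge as collinear as possible after undistortion. It assumes NumPy and SciPy, a single-coefficient radial model, and synthetic sample points; the paper's actual procedure may differ.

```python
# Sketch of the straight-line method: choose k1 so that imaged points known
# to lie on a straight edge (e.g. a curb) become collinear after a
# first-order radial undistortion. Sample points below are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

def undistort(pts, k1):
    """First-order radial undistortion of normalized image points."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

def line_residual(pts):
    """RMS distance of points from their best-fit line (via SVD)."""
    centered = pts - pts.mean(axis=0)
    *_, vt = np.linalg.svd(centered)
    return float(np.sqrt(np.mean((centered @ vt[1])**2)))

# Points digitized along a physically straight edge in a distorted image
line_pts = np.array([[-0.736, 0.552], [-0.383, 0.575], [0.000, 0.583],
                     [0.383, 0.575], [0.736, 0.552]])

res = minimize_scalar(lambda k1: line_residual(undistort(line_pts, k1)),
                      bounds=(-0.5, 0.5), method="bounded")
print(f"Estimated k1 = {res.x:.3f}")
```
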
Technical Paper

A Compendium of Passenger Vehicle Event Data Recorder Literature and Analysis of Validation Studies

2016-04-05
2016-01-1497
This paper presents a comprehensive literature review of original equipment event data recorders (EDR) installed in passenger vehicles, as well as a summary of results from the instrumented validation studies. The authors compiled 187 peer-reviewed studies, textbooks, legal opinions, governmental rulemaking policies, industry publications, and presentations pertaining to event data recorders. Of the 187 total references, there were 64 that contained testing data. The authors conducted a validation analysis using data from 27 papers that presented both the EDR and corresponding independent instrumentation values for vehicle velocity change (ΔV) and pre-crash vehicle speed. The combined results from these studies highlight unique observations of EDR system testing and demonstrate the observed performance of original equipment event data recorders in passenger vehicles.
Technical Paper

Calibrating Digital Imagery in Limited Time Conditions of Dawn, Dusk and Twilight

2021-04-06
2021-01-0855
This paper presents a methodology for accurately representing dawn and dusk lighting conditions (twilight) through photographs and video recordings. Generating calibrated photographs and video during twilight conditions can be difficult, since the light changes rapidly and the window available to capture it is narrow. In contrast, during nighttime conditions, when the sun is no longer contributing light directly or indirectly through the sky dome, matching a specific time of night is not as relevant, as man-made lights are the dominant source of illumination. Thus, the initial setup, calibration, and collection of calibrated video, when it is dark, is not under a time constraint, but during twilight conditions the time frame may be narrow. This paper applies existing methods for capturing calibrated footage at night but develops a method for adjusting the footage in the event that matching an exact time during twilight is necessary.
Technical Paper

Comparing A Timed Exposure Methodology to the Nighttime Recognition Responses from SHRP-2 Naturalistic Drivers

2017-03-28
2017-01-1366
Collision statistics show that more than half of all pedestrian fatalities caused by vehicles occur at night. The recognition of objects at night is a crucial component in driver responses and in preventing nighttime pedestrian accidents. To investigate the root cause of this fact pattern, Richard Blackwell conducted a series of experiments from the 1950s through the 1970s to evaluate whether restricted viewing time can be used as a surrogate for the imperfect information available to drivers at night. The authors build on these findings and incorporate the responses of drivers to objects in the road at night found in the SHRP-2 naturalistic database. A closed-road outdoor study and an indoor study were conducted using an automatic shutter system to limit observation time to approximately ¼ of a second. Results from these limited-exposure-time studies showed a positive correlation to naturalistic responses, providing a validation of the time-limited exposure technique.
Technical Paper

A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush

2016-04-05
2016-01-1475
Video- and photo-based photogrammetry software has many applications in the accident reconstruction community, including documentation of vehicles and scene evidence. Photogrammetry software has improved in ease of use, cost, and effectiveness in determining three-dimensional data points from two-dimensional photographs. Contemporary photogrammetry software packages offer an automated solution capable of generating dense point clouds with millions of 3D data points from multiple images. While alternative modern documentation methods exist, including LiDAR technologies such as 3D scanning, which provide the ability to collect millions of highly accurate points in just a few minutes, the appeal of automated photogrammetry software as a tool for collecting dimensional data is its minimal equipment requirements, low equipment costs, and ease of use.
Journal Article

The Relationship Between Tire Mark Striations and Tire Forces

2016-04-05
2016-01-1479
Tire mark striations are discussed often in the literature pertaining to accident reconstruction. The discussions in the literature contain many consistencies but also some disagreements. In this article, the literature is first summarized, and then the differences regarding the mechanism by which striations are deposited, and the interpretation of this evidence, are explored. In previous work, it was demonstrated that the specific characteristics of tire mark striations offer a glimpse into the steering and driving actions of the driver. An equation was developed that relates longitudinal tire slip (braking) to the angle of tire mark striations [1]. The longitudinal slip equation was derived from the classic equation for tire slip and also geometrically. In this study, the equation for longitudinal slip is re-derived from equations that model tire forces.
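
For reference, the classic longitudinal slip ratio the abstract mentions can be stated as follows (braking convention; the symbol choices here are conventional, not necessarily the paper's notation).

```latex
% Longitudinal slip ratio, braking convention: v is the wheel-center speed
% in the plane of the wheel, omega the wheel angular speed, and R_e the
% effective rolling radius. s = 0 is free rolling; s = 1 is a locked wheel.
\[
  s = \frac{v - \omega R_e}{v}
\]
```
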
Technical Paper

Accuracies in Single Image Camera Matching Photogrammetry

2021-04-06
2021-01-0888
Forensic disciplines are called upon to locate evidence from a single camera or static video camera, and both the angle of incidence and resolution can limit the accuracy of single image photogrammetry. This research compares a baseline of known 3D data points representing evidence locations to evidence locations determined through single image photogrammetry, and evaluates the effect that object resolution (measured in pixels) and angle of incidence have on accuracy. Solutions achieved using an automated process, where a camera match alignment is calculated from common points in the 2D imagery and the 3D environment, were compared to solutions achieved in a more manual method by iteratively adjusting the camera's position, orientation, and field of view until an alignment is achieved. This research independently utilizes both methods to achieve photogrammetry solutions and to locate objects within a 3D environment.
Technical Paper

Visualization of Driver and Pedestrian Visibility in Virtual Reality Environments

2021-04-06
2021-01-0856
In 2016, Virtual Reality (VR) equipment entered the mainstream scientific, medical, and entertainment industries. It became both affordable and available to the public market in the form of two of the technology's earliest successful headsets: the Oculus Rift™ and HTC Vive™. While new equipment continues to emerge, at the time these headsets came equipped with a 100° field-of-view screen that allows a viewer to experience a seamless 360° environment, non-linear in the sense that the viewer can choose where to look and for how long. The fundamental differences, however, between conventional forms of visualization, such as computer animations and graphics, and VR are subtle. A VR environment can be understood as a series of two-dimensional images, stitched together to form a seamless single 360° image. In this respect, it is only the number of images the viewer sees at one time that separates a conventional visualization from a VR experience.