
Search Results

Viewing 1 to 15 of 15
Journal Article

Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy

2018-04-03
2018-01-0516
The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as a reference for locating evidence that is no longer physically available at the scene, such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that, if unchanged between the time of the accident and the time of analysis, are beneficial to the photogrammetric process. In instances where these scene features are limited or do not exist, achieving accurate photogrammetric solutions can be challenging.
Technical Paper

Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry

2019-04-02
2019-01-0423
The accident reconstruction community has previously relied upon photographs and site visits to recreate a scene. This method is difficult in instances where the site has changed or is not accessible. In 2017, the United States Geological Survey (USGS) released historical 3D point clouds (LiDAR), allowing access to digital 3D data without visiting the site. This offers many unique benefits to the reconstruction community, including safety, budget, time, and historical preservation. This paper presents a methodology for collecting this data and using it in conjunction with aerial imagery and camera-matching photogrammetry to create 3D computer models of the scene without a site visit.
Technical Paper

A Comparison of Mobile Phone LiDAR Capture and Established Ground based 3D Scanning Methodologies

2022-03-29
2022-01-0832
Ground-based Light Detection and Ranging (LiDAR) scanning with FARO Focus 3D scanners (and other brands of scanners) has repeatedly been shown to accurately capture the geometry of accident scenes, accident vehicles, and exemplar vehicles, as well as corresponding evidence from these sources such as roadway gouge marks, vehicle crush depth, debris fields, and burn areas. However, ground-based scanners require expensive and large equipment on-site, along with other materials that may be needed depending on the scenario, such as tripods and alignment spheres. Newer technologies, such as the mobile phone LiDAR capture Apple recently released for its newer model phones, offer a way to obtain LiDAR data with less cumbersome and less expensive equipment. This mobile LiDAR can be captured using many different applications from the App Store and then exported as point cloud data.
Journal Article

An Optimization of Small Unmanned Aerial System (sUAS) Image Based Scanning Techniques for Mapping Accident Sites

2019-04-02
2019-01-0427
Small unmanned aerial systems have gained prominence in their use as tools for mapping the 3-dimensional characteristics of accident sites. Typically, the process of mapping an accident site involves taking a series of overlapping, high resolution photographs of the site, and using photogrammetric software to create a point cloud or mesh of the site. This process, known as image-based scanning, is explored and analyzed in this paper. A mock accident site was created that included a stopped vehicle, a bicycle, and a ladder. These objects represent items commonly found at accident sites. The accident site was then documented with several different unmanned aerial vehicles at differing altitudes, with differing flight patterns, and with different flight control software. The photographs taken with the unmanned aerial vehicles were then processed with photogrammetry software using different methods to scale and align the point clouds.
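The altitude choices the study compares trade coverage against resolution, a relationship usually framed as ground sampling distance (GSD). A minimal sketch of that relationship, using hypothetical camera parameters rather than values from the paper:

```python
def ground_sampling_distance(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground distance covered by one pixel (m/px) for a nadir photograph."""
    return (altitude_m * sensor_width_mm) / (focal_mm * image_width_px)

# Hypothetical sUAS camera: 8.8 mm focal length, 13.2 mm sensor, 5472 px wide.
# Doubling altitude doubles the GSD (halves the resolution on the ground).
for altitude in (30, 60, 120):
    gsd_cm = ground_sampling_distance(altitude, 8.8, 13.2, 5472) * 100
    print(f"{altitude:>3} m altitude -> {gsd_cm:.2f} cm/px")
```

Flying lower improves point cloud detail but requires more photographs (and flight time) to cover the same site.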
Technical Paper

An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable

2017-03-28
2017-01-1422
The accuracy of a photogrammetric solution is reliant on the quality of the photographs and the accuracy of pixel locations within them. A photograph with lens distortion can create inaccuracies within a photogrammetric solution. Due to the curved nature of a camera’s lens(es), the light coming through the lens and onto the image sensor can have varying degrees of distortion. There are commercially available software titles that rely on a library of known cameras, lenses, and configurations for removing lens distortion. However, to use these software titles, the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera- and lens-specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present. This method will be referred to as the straight-line method.
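The straight-line idea lends itself to a simple numerical sketch: choose an image feature known to be straight, then search for the radial distortion coefficient that best straightens it. The one-parameter division model and all values below are illustrative assumptions, not the paper's implementation:

```python
import math

def undistort(points, k1, cx=0.0, cy=0.0):
    """Division-model undistortion about (cx, cy): p_u = c + (p_d - c) / (1 + k1*r^2)."""
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        s = 1.0 + k1 * (dx * dx + dy * dy)
        out.append((cx + dx / s, cy + dy / s))
    return out

def straightness(points):
    """Sum of distances from each point to the chord through the endpoints
    (zero when the points are collinear)."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    chord = math.hypot(xn - x0, yn - y0)
    return sum(abs((xn - x0) * (y0 - y) - (x0 - x) * (yn - y0)) / chord
               for x, y in points)

def estimate_k1(line_points, k_candidates):
    """Grid-search the coefficient that best straightens a known-straight feature."""
    return min(k_candidates, key=lambda k: straightness(undistort(line_points, k)))

# Synthetic check: a horizontal image line bowed by mild barrel distortion.
true_k = 2.0e-7
straight = [(x, 200.0) for x in range(-400, 401, 50)]
distorted = [(x * (1 + true_k * (x * x + y * y)),
              y * (1 + true_k * (x * x + y * y))) for x, y in straight]
k_est = estimate_k1(distorted, [i * 1e-8 for i in range(41)])
```

In practice the search would run over many straight features at once, and higher-order coefficients could be added the same way.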
Technical Paper

Calibrating Digital Imagery in Limited Time Conditions of Dawn, Dusk and Twilight

2021-04-06
2021-01-0855
This paper presents a methodology for accurately representing dawn and dusk lighting conditions (twilight) through photographs and video recordings. Generating calibrated photographs and video during twilight conditions can be difficult, since the available light changes rapidly and the window for capturing it is short. In contrast, during nighttime conditions, when the sun is no longer contributing light directly or indirectly through the sky dome, matching a specific time of night is not as relevant, as man-made lights are the dominant source of illumination. Thus, the initial setup, calibration, and collection of calibrated video when it is dark is not under a time constraint, but during twilight conditions the time frame may be narrow. This paper applies existing methods for capturing calibrated footage at night but develops a method for adjusting the footage in the event that matching an exact time during twilight is necessary.
Technical Paper

A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush

2016-04-05
2016-01-1475
Video- and photo-based photogrammetry software has many applications in the accident reconstruction community, including documentation of vehicles and scene evidence. Photogrammetry software has improved in ease of use, cost, and effectiveness in determining three-dimensional data points from two-dimensional photographs. Contemporary photogrammetry software packages offer an automated solution capable of generating dense point clouds with millions of 3D data points from multiple images. While alternative modern documentation methods exist, including LiDAR technologies such as 3D scanning, which can collect millions of highly accurate points in just a few minutes, the appeal of automated photogrammetry software as a tool for collecting dimensional data lies in its minimal equipment requirements, low equipment costs, and ease of use.
Journal Article

Accuracy of Aerial Photoscanning with Real-Time Kinematic Technology

2022-03-29
2022-01-0830
Photoscanning photogrammetry is a method for obtaining and preserving three-dimensional site data from photographs. This photogrammetric method is commonly associated with small Unmanned Aircraft Systems (sUAS) and is particularly beneficial for large-area site documentation. The resulting data is comprised of millions of three-dimensional data points commonly referred to as a point cloud. The accuracy and reliability of these point clouds are dependent on hardware, hardware settings, field documentation methods, software, software settings, and processing methods. Ground control points (GCPs) are commonly used in aerial photoscanning to achieve reliable results. This research examines multiple GCP types, flight patterns, software, hardware, and a ground-based real-time kinematic (RTK) system. Multiple documentation and processing methods are examined, and the accuracies of each are compared to understand how capture methods can optimize site documentation.
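Accuracy comparisons of this kind are commonly summarized as a root-mean-square error between photoscan-derived control point positions and their surveyed coordinates. A minimal sketch with made-up residuals, not data from the paper:

```python
import math

def rmse_3d(derived, surveyed):
    """RMS 3D error between photoscan-derived and RTK-surveyed control points."""
    sq = [(dx - sx) ** 2 + (dy - sy) ** 2 + (dz - sz) ** 2
          for (dx, dy, dz), (sx, sy, sz) in zip(derived, surveyed)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical check: three GCPs with centimeter-level residuals (easting,
# northing, elevation in meters).
derived  = [(10.012, 5.003, 1.001), (20.996, 7.010, 0.998), (31.005, 9.004, 1.003)]
surveyed = [(10.000, 5.000, 1.000), (21.000, 7.000, 1.000), (31.000, 9.000, 1.000)]
error_m = rmse_3d(derived, surveyed)
```

Reporting horizontal and vertical components separately is also common, since aerial photoscanning typically shows larger errors in elevation.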
Technical Paper

Accuracies in Single Image Camera Matching Photogrammetry

2021-04-06
2021-01-0888
Forensic disciplines are called upon to locate evidence from a single camera or static video camera, and both the angle of incidence and the resolution can limit the accuracy of single-image photogrammetry. This research compares a baseline of known 3D data points representing evidence locations to evidence locations determined through single-image photogrammetry and evaluates the effect that object resolution (measured in pixels) and angle of incidence have on accuracy. Solutions achieved using an automated process, where a camera-match alignment is calculated from common points in the 2D imagery and the 3D environment, were compared to solutions achieved by a more manual method of iteratively adjusting the camera’s position, orientation, and field of view until an alignment is achieved. This research independently utilizes both methods to achieve photogrammetry solutions and to locate objects within a 3D environment.
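As a rough geometric illustration of why grazing camera angles magnify error (a simplified model, not the paper's analysis), the ground-plane span of a given pixel error grows roughly as the inverse sine of the incidence angle:

```python
import math

def ground_uncertainty_m(pixel_error_px, gsd_m_per_px, incidence_deg):
    """Simplified model: a pixel error projected onto the ground plane grows
    roughly as 1/sin(incidence), where 90 deg is a straight-down view."""
    return pixel_error_px * gsd_m_per_px / math.sin(math.radians(incidence_deg))

# A 2 px error at 1 cm/px: 2 cm straight down, more as the view flattens out.
for angle in (90, 30, 10):
    print(f"{angle:>2} deg -> {ground_uncertainty_m(2, 0.01, angle) * 100:.1f} cm")
```

The same reasoning explains why evidence far from a static camera, seen at a shallow angle, is located less precisely than evidence directly below the line of sight.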
Technical Paper

Speed Analysis from Video: A Method for Determining a Range in the Calculations

2021-04-06
2021-01-0887
This paper introduces a method for calculating vehicle speed, and the uncertainty range in that speed, from video footage. The method considers uncertainty in two areas: the uncertainty in locating the vehicle’s positions and the uncertainty in the time interval between them. An abacus-style timing light was built to determine the frame time, and the uncertainty of time between frames, for three different cameras. The first camera had a constant frame rate, the second had minor frame rate variability, and the third had more significant frame rate variability. Video of an instrumented vehicle traveling at different, but known, speeds was recorded by all three cameras. Photogrammetry was conducted to determine a best fit for the vehicle positions. Deviation from that best-fit position that still produced an acceptable range was also explored. Video metadata reported by iNPUT-ACE and MediaInfo was incorporated into the study.
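The two uncertainty sources combine in a straightforward way: the slowest speed consistent with the data pairs the shortest distance with the longest elapsed time, and the fastest pairs the opposites. A minimal sketch (treating timing uncertainty as per-frame is an assumption of this sketch):

```python
def speed_range_mps(distance_m, dist_unc_m, n_frames, frame_time_s, frame_time_unc_s):
    """Nominal speed plus the min/max consistent with position and timing
    uncertainty; worst cases pair distance and time errors in opposite directions."""
    t = n_frames * frame_time_s
    t_unc = n_frames * frame_time_unc_s
    v_nom = distance_m / t
    v_min = (distance_m - dist_unc_m) / (t + t_unc)
    v_max = (distance_m + dist_unc_m) / (t - t_unc)
    return v_min, v_nom, v_max

# 30 m traveled over 30 frames of nominally 1/30 s, with +/-0.5 m position
# uncertainty and +/-0.5 ms per-frame timing uncertainty.
v_min, v_nom, v_max = speed_range_mps(30.0, 0.5, 30, 1 / 30, 0.0005)
```

A camera with significant frame rate variability widens the `t_unc` term and therefore the reported speed range.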
Technical Paper

Accuracy and Repeatability of Mobile Phone LiDAR Capture

2023-04-11
2023-01-0614
Apple’s mobile phone LiDAR capabilities were previously evaluated for obtaining geometry from multiple exemplar vehicles, but results were inconsistent and less accurate than traditional ground-based LiDAR (SAE Technical Paper 2022-01-0832; Miller, Hashemian, Gillihan, Helms). This paper builds upon that research by utilizing the newest versions of the mobile LiDAR hardware and software previously studied, as well as evaluating additional objects of varying sizes and a newly released software application not yet studied. To better explore the accuracy achievable with Apple mobile phone LiDAR, multiple objects with varied surface textures, colors, and sizes were scanned. These objects included exemplar vehicles (including a motorcycle), a fuel tank, and a spare tire mounted on a chrome wheel. To test the repeatability of the presented methodologies, four participants scanned each object multiple times and created three individual data sets per software application.
Technical Paper

Video Analysis of Motorcycle and Rider Dynamics During High-Side Falls

2017-03-28
2017-01-1413
This paper investigates the dynamics of four motorcycle crashes that occurred on or near a curve (Edwards Corner) on a section of the Mulholland Highway called “The Snake,” located in the Santa Monica Mountains of California. All four accidents were captured on video, and each involved a high-side fall of the motorcycle and rider. This article provides a technical description and analysis of these videos in which the motion of the motorcycles and riders is quantified. To aid in the analysis, the authors mapped Edwards Corner using both a Sokkia total station and a Faro laser scanner. This mapping data enabled analysis of the videos to determine the initial speed of the motorcycles, to identify where in the curve particular rider actions occurred, to quantify the motion of the motorcycles and riders, and to characterize the roadway radius and superelevation throughout the curve.
Technical Paper

Validating the Sun System in Blender for Recreating Shadows

2024-04-09
2024-01-2476
Shadow positions can be useful in determining the time of day a photograph was taken and in determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D computer software packages have begun to include these calculations as part of their built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software Blender to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a Faro LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod, and photographs were taken at various times throughout the day from the same location in the environment.
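The astronomical relationships the abstract refers to can be sketched with low-precision formulas (Cooper's declination approximation plus standard hour-angle geometry). This is a rough illustration assuming true solar time, not Blender's implementation:

```python
import math

def sun_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees (azimuth from north).

    Low-precision sketch: Cooper's declination formula and hour-angle
    geometry; assumes `solar_hour` is true solar time, not clock time.
    """
    decl = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    cos_az = (math.sin(decl) - math.sin(lat) * sin_el) / (math.cos(lat) * math.cos(el))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:          # afternoon: sun is west of south
        az = 2.0 * math.pi - az
    return math.degrees(el), math.degrees(az)

# Near the equinox at 34 deg N, solar noon: sun due south, elevation ~56 deg.
elevation, azimuth = sun_position(34.0, 81, 12.0)
```

A shadow's direction is opposite the azimuth, and its length scales with the cotangent of the elevation, which is what makes shadow measurements time-sensitive.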
Technical Paper

Validation of the PC-Crash Single-Track Vehicle Driver Model for Simulating Motorcycle Motion

2024-04-09
2024-01-2475
This paper validates the single-track vehicle driver model available in PC-Crash simulation software. The model is tested, and its limitations are described. The introduction of this model eliminated prior limitations that PC-Crash had for simulating motorcycle motion. Within PC-Crash, a user-defined path can be established for a motorcycle, and the software will generate motion consistent with the user-defined path (within the limits of friction and stability) and calculate the motorcycle lean (roll) generated by following that path at the prescribed speed, braking, or acceleration levels. In this study, the model was first examined for a simple scenario in which a motorcycle traversed a pre-defined curve at several speeds. This resulted in the conclusion that the single-track driver model in PC-Crash yielded motorcycle lean angles consistent with the standard, simple lean angle formula widely available in the literature.
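The "standard, simple lean angle formula" referenced is typically theta = atan(v^2 / (g r)), the steady-state lean from vertical needed to balance the centripetal demand of a curve. A quick sketch:

```python
import math

def lean_angle_deg(speed_mps, radius_m, g=9.81):
    """Steady-state lean from vertical for a single-track vehicle in a curve:
    theta = atan(v^2 / (g * r))."""
    return math.degrees(math.atan(speed_mps ** 2 / (g * radius_m)))

# 20 m/s (~45 mph) through an 80 m radius curve requires roughly 27 deg of lean.
angle = lean_angle_deg(20.0, 80.0)
```

A simulated motorcycle whose lean disagrees with this formula at steady speed on a constant-radius path would indicate a driver-model problem, which is the kind of check the validation describes.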
Technical Paper

Accuracy of Rectifying Oblique Images to Planar and Non-Planar Surfaces

2024-04-09
2024-01-2481
Emergency personnel and first responders have the opportunity to document crash scenes while evidence is still recent. The growth of the drone market and the efficiency of documentation with drones has led to an increasing prevalence of aerial photography for incident sites. These photographs are generally of high resolution and contain valuable information including roadway evidence such as tire marks, gouge marks, debris fields, and vehicle rest positions. Being able to accurately map the captured evidence visible in the photographs is a key process in creating a scaled crash-scene diagram. Image rectification serves as a quick and straightforward method for producing a scaled diagram. This study evaluates the precision of the photo rectification process under diverse roadway geometry conditions and varying camera incidence angles.
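For the planar case, rectification amounts to a homography: four image-to-ground correspondences determine a 3x3 transform, after which any image point on that plane can be mapped to scaled ground coordinates. A self-contained sketch (all coordinates are made up, and the paper's procedure may differ):

```python
def _solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography(img_pts, ground_pts):
    """3x3 homography from four image/ground correspondences (h33 fixed to 1)."""
    a, b = [], []
    for (x, y), (u, v) in zip(img_pts, ground_pts):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = _solve(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def rectify(h, x, y):
    """Map an image point to ground coordinates through homography h."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

# Made-up correspondences: pixel corners of a marked rectangle on the road,
# seen in an oblique aerial photo, mapped to ground coordinates in meters.
img = [(100.0, 900.0), (1700.0, 900.0), (1600.0, 200.0), (200.0, 200.0)]
gnd = [(0.0, 0.0), (12.0, 0.0), (12.0, 18.0), (0.0, 18.0)]
H = homography(img, gnd)
```

This exactness holds only for evidence on the reference plane; points above or below it (or on curved, non-planar roadway) pick up the errors the study quantifies.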