
Search Results

Technical Paper

A Comparison of Mobile Phone LiDAR Capture and Established Ground based 3D Scanning Methodologies

2022-03-29
2022-01-0832
Ground-based Light Detection and Ranging (LiDAR) scanning with FARO Focus 3D scanners (and other brands of scanners) has repeatedly been shown to accurately capture the geometry of accident scenes, accident vehicles, and exemplar vehicles, as well as corresponding evidence such as roadway gouge marks, vehicle crush depth, debris fields, and burn areas. However, ground-based scanning requires expensive, bulky equipment on-site, along with accessories such as tripods and alignment spheres depending on the scenario. Newer technologies, such as the mobile phone LiDAR capture Apple recently released for its newer phone models, offer a way to obtain LiDAR data with less cumbersome and less expensive equipment. This mobile LiDAR can be captured using many different applications from the App Store and then exported as point cloud data.
Technical Paper

A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush

2016-04-05
2016-01-1475
Video- and photo-based photogrammetry software has many applications in the accident reconstruction community, including documentation of vehicles and scene evidence. Photogrammetry software has improved in ease of use, cost, and effectiveness at determining three-dimensional data points from two-dimensional photographs. Contemporary photogrammetry packages offer an automated solution capable of generating dense point clouds with millions of 3D data points from multiple images. While alternative modern documentation methods exist, including LiDAR technologies such as 3D scanning, which can collect millions of highly accurate points in just a few minutes, the appeal of automated photogrammetry software as a tool for collecting dimensional data lies in its minimal equipment, low equipment cost, and ease of use.
Technical Paper

Accuracy and Repeatability of Mobile Phone LiDAR Capture

2023-04-11
2023-01-0614
Apple’s mobile phone LiDAR capabilities were previously evaluated for obtaining geometry from multiple exemplar vehicles, but results were inconsistent and less accurate than traditional ground-based LiDAR (SAE Technical Paper 2022-01-0832, Miller, Hashemian, Gillihan, and Helms). This paper builds upon that research by utilizing the newest versions of the mobile LiDAR hardware and software previously studied, as well as evaluating additional objects of varying sizes and a newly released software package not yet studied. To better explore the accuracy achievable with Apple mobile phone LiDAR, multiple objects with varied surface textures, colors, and sizes were scanned. These objects included exemplar vehicles (including a motorcycle), a fuel tank, and a spare tire mounted on a chrome wheel. To test the repeatability of the presented methodologies, four participants scanned each object multiple times, creating three individual data sets per software package.
Journal Article

Accuracy of Aerial Photoscanning with Real-Time Kinematic Technology

2022-03-29
2022-01-0830
Photoscanning photogrammetry is a method for obtaining and preserving three-dimensional site data from photographs. This photogrammetric method is commonly associated with small Unmanned Aircraft Systems (sUAS) and is particularly beneficial for large-area site documentation. The resulting data comprises millions of three-dimensional data points commonly referred to as a point cloud. The accuracy and reliability of these point clouds depend on hardware, hardware settings, field documentation methods, software, software settings, and processing methods. Ground control points (GCPs) are commonly used in aerial photoscanning to achieve reliable results. This research examines multiple GCP types, flight patterns, software, hardware, and a ground-based real-time kinematic (RTK) system. Multiple documentation and processing methods are examined, and the accuracy of each is compared to understand how capture methods can optimize site documentation.
Technical Paper

Accuracy of Rectifying Oblique Images to Planar and Non-Planar Surfaces

2024-04-09
2024-01-2481
Emergency personnel and first responders have the opportunity to document crash scenes while evidence is still fresh. The growth of the drone market and the efficiency of drone-based documentation have made aerial photography of incident sites increasingly prevalent. These photographs are generally of high resolution and contain valuable information, including roadway evidence such as tire marks, gouge marks, debris fields, and vehicle rest positions. Accurately mapping the evidence visible in the photographs is a key step in creating a scaled crash-scene diagram. Image rectification serves as a quick and straightforward method for producing a scaled diagram. This study evaluates the precision of the photo rectification process under diverse roadway geometry conditions and varying camera incidence angles.
Technical Paper

An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable

2017-03-28
2017-01-1422
Photogrammetry, and the accuracy of a photogrammetric solution, relies on the quality of photographs and the accuracy of pixel locations within them. A photograph with lens distortion can introduce inaccuracies into a photogrammetric solution. Due to the curved nature of a camera’s lens(es), light passing through the lens and onto the image sensor can have varying degrees of distortion. Commercially available software titles rely on a library of known cameras, lenses, and configurations to remove lens distortion. However, to use these titles, the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera- and lens-specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present; this method will be referred to as the straight-line method.
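The idea behind a straight-line method can be sketched in code. The following is an illustrative sketch, not the paper's implementation: it assumes a simple one-parameter radial distortion model and grid-searches for the coefficient that, when inverted, makes points observed along a known-straight edge collinear again. All function names and parameter values are hypothetical.

```python
import math

def radial_distort(points, k1, cx=0.0, cy=0.0):
    # One-parameter radial model: each point moves along the ray from the
    # distortion center (cx, cy) by a factor of (1 + k1 * r^2).
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        scale = 1.0 + k1 * (dx * dx + dy * dy)
        out.append((cx + dx * scale, cy + dy * scale))
    return out

def straightness_error(points):
    # Perpendicular scatter of the points about their total-least-squares
    # line (the smaller eigenvalue of the 2x2 scatter matrix); zero for a
    # perfectly straight set of points.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    half_trace = (sxx + syy) / 2.0
    det = sxx * syy - sxy * sxy
    return half_trace - math.sqrt(max(half_trace ** 2 - det, 0.0))

def estimate_k1(observed_line_points, lo=0.0, hi=5e-7, steps=501):
    # Grid-search the k1 whose (approximate) inverse makes the imaged
    # straight edge straight again; applying -k1 is a first-order inverse.
    best_k1, best_err = lo, float("inf")
    for i in range(steps):
        k1 = lo + (hi - lo) * i / (steps - 1)
        err = straightness_error(radial_distort(observed_line_points, -k1))
        if err < best_err:
            best_k1, best_err = k1, err
    return best_k1
```

In practice a full solution would model higher-order radial and tangential terms, but the sketch shows why linear scene features alone can constrain the distortion when camera metadata is unavailable.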
Journal Article

An Optimization of Small Unmanned Aerial System (sUAS) Image Based Scanning Techniques for Mapping Accident Sites

2019-04-02
2019-01-0427
Small unmanned aerial systems have gained prominence as tools for mapping the three-dimensional characteristics of accident sites. Typically, mapping an accident site involves taking a series of overlapping, high-resolution photographs of the site and using photogrammetric software to create a point cloud or mesh of it. This process, known as image-based scanning, is explored and analyzed in this paper. A mock accident site was created that included a stopped vehicle, a bicycle, and a ladder, objects commonly found at accident sites. The site was then documented with several different unmanned aerial vehicles at differing altitudes, with differing flight patterns, and with different flight control software. The photographs taken with the unmanned aerial vehicles were then processed with photogrammetry software using different methods to scale and align the point clouds.
Technical Paper

Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry

2019-04-02
2019-01-0423
The accident reconstruction community has previously relied upon photographs and site visits to recreate a scene. This method is difficult when the site has changed or is not accessible. In 2017, the United States Geological Survey (USGS) released historical 3D point clouds (LiDAR), allowing access to digital 3D data without visiting the site. This offers the reconstruction community many unique benefits, including safety, budget, time, and historical preservation. This paper presents a methodology for collecting this data and using it in conjunction with aerial imagery and camera-matching photogrammetry to create 3D computer models of a scene without a site visit.
Technical Paper

Speed Analysis from Video: A Method for Determining a Range in the Calculations

2021-04-06
2021-01-0887
This paper introduces a method for calculating vehicle speed, and the uncertainty range in that speed, from video footage. The method considers uncertainty in two areas: the uncertainty in locating the vehicle’s positions and the uncertainty in the time interval between them. An abacus-style timing light was built to determine the frame time, and the uncertainty of the time between frames, for three different cameras. The first camera had a constant frame rate, the second had minor frame rate variability, and the third had more significant frame rate variability. Video of an instrumented vehicle traveling at different, but known, speeds was recorded by all three cameras. Photogrammetry was conducted to determine a best fit for the vehicle positions. Deviation from that best-fit position that still produced an acceptable range was also explored. Video metadata reported by iNPUT-ACE and MediaInfo was incorporated into the study.
Technical Paper

Validating the Sun System in Blender for Recreating Shadows

2024-04-09
2024-01-2476
Shadow positions can be useful in determining the time of day a photograph was taken and in determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D computer software packages have begun to include these calculations as part of their built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software Blender to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a FARO LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod, and photographs were taken at various times throughout the day from the same location in the environment.
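The astronomical equations such sun systems implement can be sketched compactly. The following is a low-accuracy illustrative approximation (Cooper's declination formula plus the standard hour-angle relations), not Blender's implementation: it ignores the equation of time, atmospheric refraction, and longitude/time-zone corrections, so it takes local solar time directly.

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    # Cooper's approximation for solar declination, in degrees.
    decl = 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: 15 degrees per hour away from local solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    # Elevation from the standard spherical-trigonometry relation.
    sin_el = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    el = math.asin(sin_el)
    # Azimuth measured clockwise from true north.
    cos_az = (math.sin(d) - math.sin(lat) * sin_el) / (math.cos(lat) * math.cos(el))
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if solar_hour > 12.0:
        azimuth = 360.0 - azimuth  # afternoon sun is west of the meridian
    return math.degrees(el), azimuth

def shadow_length(object_height, elevation_deg):
    # A vertical object of given height casts a shadow of h / tan(elevation).
    return object_height / math.tan(math.radians(elevation_deg))
```

For example, at 40° N around the June solstice (day 172) at solar noon, the sketch puts the sun near 73° elevation due south; a 2 m pole with the sun at 45° elevation casts a 2 m shadow. Production tools use higher-order ephemeris models for the sub-degree accuracy shadow matching requires.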