Search Results

Journal Article

Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy

2018-04-03
2018-01-0516
The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as a reference for locating evidence that is no longer physically available at the scene, such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that, if unchanged between the time of the accident and the time of analysis, are beneficial to the photogrammetric process. In instances where these scene features are limited or do not exist, achieving accurate photogrammetric solutions can be challenging.
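At the core of camera matching is recovering the original camera's position and orientation from scene features that have not changed. A minimal sketch of that resectioning step, using OpenCV's solvePnP with hypothetical reference coordinates and intrinsics (not values from the paper), is shown below.

```python
# Minimal sketch of the camera-resectioning step behind camera matching,
# using OpenCV's solvePnP. Reference coordinates and camera intrinsics
# below are hypothetical placeholders, not values from the paper.
import numpy as np
import cv2

# 3D coordinates (meters) of unchanged scene features surveyed at the site
# (e.g., corners of a sidewalk, base of a sign post).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [3.6, 0.0, 0.0],
    [3.6, 12.0, 0.0],
    [0.0, 12.0, 0.0],
    [1.8, 6.0, 2.1],
    [5.0, 3.0, 0.0],
], dtype=np.float64)

# Pixel coordinates of the same features in the historical photograph.
image_points = np.array([
    [412.0, 980.0],
    [1120.0, 1015.0],
    [1395.0, 610.0],
    [205.0, 595.0],
    [760.0, 520.0],
    [1510.0, 880.0],
], dtype=np.float64)

# Approximate camera intrinsics (focal length in pixels, principal point).
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec   # camera location in scene coordinates
    print("Recovered camera position (m):", camera_position.ravel())
```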
Technical Paper

Photogrammetric Measurement Error Associated with Lens Distortion

2011-04-12
2011-01-0286
All camera lenses contain optical aberrations as a result of the design and manufacturing processes. Lens aberrations cause distortion of the resulting image captured on film or a sensor. This distortion is inherent in all lenses because of the shape required to project the image onto film or a sensor, the materials that make up the lens, and the configuration of lenses to achieve varying focal lengths and other photographic effects. The distortion associated with lenses can cause errors to be introduced when photogrammetric techniques are used to analyze photographs of accident scenes to determine position, scale, length, and other characteristics of evidence in a photograph. This paper evaluates how lens distortion can affect images, and how photogrammetrically measuring a distorted image can result in measurement errors.
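Radial lens distortion is commonly described with a Brown-Conrady style polynomial. The sketch below, with assumed (hypothetical) coefficients and focal length, illustrates how such distortion displaces an image point and therefore any measurement taken from it.

```python
# Sketch of the Brown-Conrady radial distortion model, illustrating how
# lens distortion displaces an image point and introduces measurement error.
# The distortion coefficients and focal length are hypothetical, not from the paper.

def radial_distort(x, y, k1, k2):
    """Map an ideal (undistorted) normalized point to its distorted location."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Ideal normalized coordinates of a point near the image corner.
x_ideal, y_ideal = 0.6, 0.4

# Mild barrel distortion (negative k1) typical of a wide-angle lens.
x_dist, y_dist = radial_distort(x_ideal, y_ideal, k1=-0.12, k2=0.01)

focal_px = 1500.0  # assumed focal length in pixels
err_px = focal_px * ((x_dist - x_ideal) ** 2 + (y_dist - y_ideal) ** 2) ** 0.5
print(f"Pixel displacement due to distortion: {err_px:.1f} px")
```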
Technical Paper

Nighttime Videographic Projection Mapping to Generate Photo-Realistic Simulation Environments

2016-04-05
2016-01-1415
This paper presents a methodology for generating photo-realistic computer simulation environments of nighttime driving scenarios by combining nighttime photography and videography with video tracking [1] and projection mapping [2] technologies. Nighttime driving environments contain complex lighting conditions such as forward and signal lighting systems of vehicles, street lighting, and retro-reflective markers and signage. The high dynamic range of nighttime lighting conditions makes modeling of these systems difficult to render realistically through computer-generated techniques alone. Photography and video, especially when using high dynamic range imaging, can produce realistic representations of the lighting environments. However, because video is only two-dimensional and lacks the flexibility of a three-dimensional computer-generated environment, the scenarios that can be represented are limited to the specific scenario recorded on video.
Technical Paper

Application of 3D Visualization in Modeling Wheel Stud Contact Patterns with Rotating and Stationary Surfaces

2017-03-28
2017-01-1414
When a vehicle with protruding wheel studs makes contact with another vehicle or object in a sideswipe configuration, the tire sidewall, rim, and wheel studs of that vehicle can deposit distinct geometrical damage patterns onto the surfaces it contacts. Prior research has demonstrated how relative speeds between the two vehicles or surfaces can be calculated through analysis of the distinct contact patterns. This paper presents a methodology for performing this analysis by visually modeling the interaction between wheel studs and various surfaces, and presents a method for automating the calculations of relative speed between vehicles. This methodology also augments prior research by demonstrating how the visual modeling and simulation of wheel stud contact can be extended to almost any surface interaction, including those for which no prior published tests exist or for which physical testing would be difficult to set up.
Technical Paper

Data Acquisition using Smart Phone Applications

2016-04-05
2016-01-1461
There are numerous publicly available smart phone applications designed to track the speed and position of the user. By accessing the phone's built-in GPS receiver, these applications record the position of the phone over time and report the record on the phone itself, and typically on the application's website. These applications range in cost from free to a few dollars, with some that advertise greater functionality costing significantly more. This paper examines the reliability of the data reported through these applications, and the potential for these applications to be useful in certain conditions where monitoring and recording vehicle or pedestrian movement is needed. To analyze the reliability of the applications, three of the more popular and widely used tracking programs were downloaded to three different smart phones to represent a good spectrum of operating platforms.
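Speed reported by these applications ultimately comes from differentiating recorded positions over time. A minimal sketch of that calculation, using the haversine distance between hypothetical GPS fixes, is shown below.

```python
# Sketch of deriving over-the-ground speed from successive GPS fixes,
# the basic calculation underlying the tracking applications. The sample
# fixes below are hypothetical (time in seconds, latitude/longitude in degrees).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in meters."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

fixes = [
    (0.0, 39.7400, -104.9900),
    (1.0, 39.7401, -104.9899),
    (2.0, 39.7403, -104.9897),
]

for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
    d = haversine_m(la0, lo0, la1, lo1)
    speed_mph = d / (t1 - t0) * 2.23694
    print(f"t={t1:.0f}s speed ≈ {speed_mph:.1f} mph")
```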
Technical Paper

Mid-Range Data Acquisition Units Using GPS and Accelerometers

2018-04-03
2018-01-0513
In the 2016 SAE publication “Data Acquisition using Smart Phone Applications,” Neale et al. evaluated the accuracy of basic fitness applications in tracking position and elevation using the GPS and accelerometer technology contained within the smart phone itself [1]. This paper further develops that research by evaluating mid-level applications. Mid-level applications are defined as ones that use a phone’s internal accelerometer and record data at 1 Hz or greater. The application can also utilize add-on devices, such as a Bluetooth-enabled GPS antenna, which reports at a higher sample rate (10 Hz) than the phone by itself. These mid-level applications are still relatively easy to use, lightweight, and affordable [2], [3], [4], but have the potential for higher data sample rates for the accelerometer (due to the software) and GPS signal (due to the hardware). In this paper, Harry’s Lap Timer™ was evaluated as a smart phone mid-level application.
Technical Paper

Video Based Simulation of Daytime and Nighttime Rain Affecting Driver Visibility

2021-04-06
2021-01-0854
This paper presents a methodology for generating video-realistic computer-simulated rain and the effect rain has on driver visibility. Rain was considered at three different rain rates (light, moderate, and heavy), and in nighttime and daytime conditions. The techniques and methodologies presented in this publication rely on previously published techniques of video tracking and projection mapping. Neale et al. [2004, 2016] showed how processes of video tracking can convert two-dimensional image data from video images into three-dimensional, scaled, computer-generated environments. Further, Neale et al. [2013, 2016] demonstrated that video projection mapping, when combined with video tracking, enables the production of video-realistic simulated environments, where videographic and photographic baseline footage is combined with three-dimensional computer geometry.
Technical Paper

Comparison of Calculated Speeds for a Yawing and Braking Vehicle to Full-Scale Vehicle Tests

2012-04-16
2012-01-0620
Accurately reconstructing the speed of a yawing and braking vehicle requires an estimate of the varying rates at which the vehicle decelerated. This paper explores the accuracy of several approaches to making this calculation. The first approach uses the Bakker-Nyborg-Pacejka (BNP) tire force model in conjunction with the Nicolas-Comstock-Brach (NCB) combined tire force equations to calculate a yawing and braking vehicle's deceleration rate. Application of this model in a crash reconstruction context will typically require the use of generic tire model parameters, and so, the research in this paper explored the accuracy of using such generic parameters. The paper then examines a simpler equation for calculating a yawing and braking vehicle's deceleration rate which was proposed by Martinez and Schlueter in a 1996 paper. It is demonstrated that this equation exhibits physically unrealistic behavior that precludes it from being used to accurately determine a vehicle's deceleration rate.
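For readers unfamiliar with combined tire forces, the toy sketch below uses a simple friction-circle approximation, not the BNP/NCB model evaluated in the paper, to illustrate why available braking deceleration changes as a yawing vehicle's lateral demand changes; the friction value, speeds, and radius are assumed.

```python
# A much simplified friction-circle approximation of combined braking and
# cornering, NOT the BNP/NCB model used in the paper; it only illustrates
# why the deceleration rate of a yawing, braking vehicle varies as lateral
# demand changes. mu and the speed/radius values are assumed.
import math

def longitudinal_decel(mu, g, lateral_accel):
    """Available longitudinal deceleration once lateral acceleration is 'spent'."""
    a_max = mu * g
    lateral_accel = min(lateral_accel, a_max)
    return math.sqrt(a_max**2 - lateral_accel**2)

mu, g = 0.75, 9.81
for v_mps, radius_m in [(25.0, 60.0), (20.0, 60.0), (15.0, 60.0)]:
    a_lat = v_mps**2 / radius_m
    a_long = longitudinal_decel(mu, g, a_lat)
    print(f"v={v_mps*2.237:4.1f} mph  a_lat={a_lat/g:.2f} g  available braking={a_long/g:.2f} g")
```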
Technical Paper

Evaluation of Photometric Data Files for Use in Headlamp Light Distribution

2010-04-12
2010-01-0292
Computer simulation of nighttime lighting in urban environments can be complex due to the myriad of light sources present (e.g., street lamps, building lights, signage, and vehicle headlamps). In these areas, vehicle headlamps can make a significant contribution to the lighting environment [1], [2]. This contribution may need to be incorporated into a lighting simulation to accurately calculate overall light levels and to represent how the light affects the experience and quality of the environment. Within a lighting simulation, photometric files, such as the photometric standard light data file format, are often used to simulate light sources such as street lamps and exterior building lights in nighttime environments. This paper examines the validity of using these same photometric file types for the simulation of vehicle headlamps by comparing the light distribution from actual vehicle headlamps to photometric files of these same headlamps.
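Once candela values are available from a photometric file, a lighting simulation resolves them into illuminance at points in the scene. A minimal sketch of that point-by-point calculation (inverse-square and cosine law) with assumed candela, mounting, and target values is shown below.

```python
# Sketch of the point-by-point calculation a lighting simulation performs with
# photometric data: illuminance at a road-surface point from a luminaire's
# candela value via the inverse-square and cosine law. The candela value,
# mounting height, and offsets below are assumed, not taken from any file.
import math

def illuminance_lux(candela, source_xyz, point_xyz, surface_normal=(0.0, 0.0, 1.0)):
    dx = [p - s for p, s in zip(point_xyz, source_xyz)]
    d = math.sqrt(sum(c * c for c in dx))
    # cosine of incidence angle between the incoming ray and the surface normal
    cos_theta = abs(sum((c / d) * n for c, n in zip(dx, surface_normal)))
    return candela * cos_theta / d**2

# Headlamp 0.65 m above the road, point 20 m ahead and 1 m to the right.
E = illuminance_lux(candela=12000.0, source_xyz=(0.0, 0.0, 0.65), point_xyz=(1.0, 20.0, 0.0))
print(f"Illuminance ≈ {E:.1f} lux")
```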
Technical Paper

Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry

2019-04-02
2019-01-0423
The accident reconstruction community has previously relied upon photographs and site visits to recreate a scene. This method is difficult in instances where the site has changed or is not accessible. In 2017 the United States Geological Survey (USGS) released historical 3D point clouds (LiDAR), allowing for access to digital 3D data without visiting the site. This offers many unique benefits to the reconstruction community, including safety, budget, time, and historical preservation. This paper presents a methodology for collecting this data and using it in conjunction with aerial imagery and camera matching photogrammetry to create 3D computer models of the scene without a site visit.
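As a rough illustration of the data-collection step, the sketch below loads a downloaded USGS LiDAR tile with the laspy package and keeps only ground-classified returns; the filename is a placeholder and the workflow is an assumption, not the paper's exact procedure.

```python
# Minimal sketch of loading a downloaded USGS LiDAR tile and keeping only
# ground-classified returns (ASPRS class 2) for scene terrain. The filename
# is a placeholder; requires the laspy package (plus a LAZ backend for .laz files).
import numpy as np
import laspy

las = laspy.read("USGS_LPC_tile.laz")          # hypothetical downloaded tile
ground = las.classification == 2               # ASPRS class 2 = ground
points = np.column_stack((np.asarray(las.x)[ground],
                          np.asarray(las.y)[ground],
                          np.asarray(las.z)[ground]))

print(f"{points.shape[0]} ground points")
np.savetxt("ground_points.xyz", points, fmt="%.3f")  # import into a CAD/3D package
```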
Technical Paper

Vehicle Acceleration Modeling in PC-Crash

2014-04-01
2014-01-0464
PC-Crash™, a widely used crash analysis software package, incorporates the capability for modeling non-constant vehicle acceleration, where the acceleration rate varies with speed, weight, engine power, the degree of throttle application, and the roadway slope. The research reported here offers a validation of this capability, demonstrating that PC-Crash can be used to realistically model the build-up of a vehicle's speed under maximal acceleration. In the research reported here, PC-Crash 9.0 was used to model the full-throttle acceleration capabilities of three vehicles with automatic transmissions: a 2006 Ford Crown Victoria Police Interceptor (CVPI), a 2000 Cadillac DeVille DTS, and a 2003 Ford F150. For each vehicle, geometric dimensions, inertial properties, and engine/drivetrain parameters were obtained from a combination of manufacturer specifications, calculations, inspections of exemplar vehicles, and full-scale vehicle testing.
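As context for non-constant acceleration, the toy model below shows speed-dependent acceleration that is traction-limited at low speed and power-limited at higher speed. It is a simplified illustration only, not PC-Crash's drivetrain model, and the vehicle parameters are assumed.

```python
# Toy illustration of speed-dependent (non-constant) acceleration: traction-
# limited at low speed, engine-power-limited at higher speed. This is a
# simplified model for illustration only, NOT PC-Crash's drivetrain model,
# and the vehicle parameters are assumed.
def simulate_full_throttle(power_w=180e3, mass_kg=1900.0, mu=0.85, t_end=10.0, dt=0.01):
    g, v, t, log = 9.81, 0.1, 0.0, []
    while t < t_end:
        a = min(mu * g, power_w / (mass_kg * max(v, 0.1)))  # traction vs. power limit
        v += a * dt
        t += dt
        log.append((t, v))
    return log

for t, v in simulate_full_throttle()[::200]:   # print roughly every 2 s
    print(f"t={t:4.1f} s  v={v * 2.237:5.1f} mph")
```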
Technical Paper

The Application of Augmented Reality to Reverse Camera Projection

2019-04-02
2019-01-0424
In 1980, research by Thebert introduced the use of photography equipment and transparencies for onsite reverse camera projection photogrammetry [1]. This method involved taking a film photograph through the development process and creating a reduced-size transparency to insert into the camera's viewfinder. The photographer was then able to see both the image contained on the transparency and the actual scene directly through the camera's viewfinder. By properly matching the physical orientation and positioning of the camera, it was possible to visually align the image on the transparency with the physical world as viewed through the camera. The result was a solution for where the original camera would have been located when the photograph was taken. With the original camera reverse-located, any evidence in the transparency that is no longer present at the site could then be replaced to match the evidence's location in the transparency.
Journal Article

Further Validation of Equations for Motorcycle Lean on a Curve

2018-04-03
2018-01-0529
Previous studies have reported and validated equations for calculating the lean angle required for a motorcycle and rider to traverse a curved path at a particular speed. In 2015, Carter, Rose, and Pentecost reported physical testing with motorcycles traversing curved paths on an oval track on a pre-marked range in a relatively level parking lot. Several trends emerged in this study. First, while theoretical lean angle equations prescribe a single lean angle for a given lateral acceleration, there was considerable scatter in the real-world lean angles employed by motorcyclists for any given lateral acceleration level. Second, the actual lean angle was nearly always greater than the theoretical lean angle. This prior study was limited in that it only examined the motorcycle lean angle at the apex of the curves. The research reported here extends the previous study by examining the accuracy of the lean angle formulas throughout the curves.
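The theoretical relationship the study compares against is the point-mass balance between lateral acceleration and gravity. A short sketch of that calculation, with illustrative speeds and radius, is shown below.

```python
# Sketch of the theoretical lean-angle relationship the study compares against:
# for a point-mass model, lean (from vertical) satisfies tan(phi) = v^2 / (g*r).
# The speed and radius values are illustrative only.
import math

def theoretical_lean_deg(speed_mph, radius_ft):
    g = 32.2                      # ft/s^2
    v = speed_mph * 1.46667       # mph -> ft/s
    return math.degrees(math.atan(v**2 / (g * radius_ft)))

for mph in (25, 35, 45):
    print(f"{mph} mph on a 300 ft radius: lean ≈ {theoretical_lean_deg(mph, 300):.1f} deg")
```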
Journal Article

A Comparison of 25 High Speed Tire Disablements Involving Full and Partial Tread Separations

2013-04-08
2013-01-0776
Tire tread separation events, a category of tire disablements, can be sub-categorized into two main types of separations. These include full tread separations, in which the tread around the entire circumference of the tire separates from the tire carcass, and partial tread separations, in which a portion of the tread separates and the flap remains attached to the tire for an extended period of time. In either case, the tire can remain inflated or lose air. Relatively few partial tread separation tests have been presented in the literature compared to full tread separation tests. In this study, the results of 25 full and partial tire tread separation tests, conducted with a variety of vehicles at highway speeds, are reported. Cases in which the tire remains inflated and cases in which it loses air pressure are both considered. The testing was performed on a straight section of road and primarily focused on rear tire disablements.
Technical Paper

Video Projection Mapping Photogrammetry through Video Tracking

2013-04-08
2013-01-0788
This paper examines a method for generating a scaled three-dimensional computer model of an accident scene from video footage. This method, which combines the previously published methods of video tracking and camera projection, includes automated mapping of physical evidence through rectification of each frame. Video Tracking is a photogrammetric technique for obtaining three-dimensional data from a scene using video and was described in a 2004 publication titled, “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction” (SAE 2004-01-1221).
Technical Paper

Nighttime Visibility in Varying Moonlight Conditions

2019-04-02
2019-01-1005
When the visibility of an object or person in the roadway from a driver’s perspective is an issue, the potential effect of moonlight is sometimes questioned. To assess this potential effect, methods typically used to quantify visibility were performed during conditions with no moon and with a full moon. In the full moon condition, measurements were collected from initial moon rise until the moon reached peak azimuth. Baseline ambient light measurements of illumination at the test surface were measured in both no moon and full moon scenarios. Additionally, a vehicle with activated low beam headlamps was positioned in the testing area and the change in illumination at two locations forward of the vehicle was recorded at thirty-minute intervals as the moon rose to the highest position in the sky. Also, two separate luminance readings were recorded during the test intervals, one location 75 feet in front and to the left of the vehicle, and another 150 feet forward of the vehicle.
Technical Paper

Determining Position and Speed through Pixel Tracking and 2D Coordinate Transformation in a 3D Environment

2016-04-05
2016-01-1478
This paper presents a methodology for determining the position and speed of objects such as vehicles, pedestrians, or cyclists that are visible in video footage captured with only one camera. Objects are tracked in the video footage based on the change in pixels that represent the object moving. Commercially available programs such as PFTrack™ and Adobe After Effects™ contain automated pixel tracking features that record the position of the pixel, over time, two-dimensionally using the video's resolution as a Cartesian coordinate system. The coordinate data of the pixel over time can then be transformed to three-dimensional data by ray tracing the pixel coordinates onto three-dimensional geometry of the same scene that is visible in the video footage background.
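A minimal sketch of that pixel-to-world step, casting a ray through a tracked pixel and intersecting it with flat ground, is shown below; the camera intrinsics, pose, frame rate, and tracked pixel locations are all assumed for illustration.

```python
# Sketch of the pixel-to-world step described above: casting a ray from a
# calibrated camera through a tracked pixel and intersecting it with flat
# ground (z = 0). Camera pose, intrinsics, and the tracked pixels are assumed.
import numpy as np

# Camera intrinsics (focal length and principal point, pixels) — assumed.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])

# Camera 6 m above flat ground, pitched down 20 degrees — assumed pose.
cam_pos = np.array([0.0, 0.0, 6.0])
theta = np.radians(20.0)
# Columns are the camera's right / image-down / optical axes in world coordinates.
R_cw = np.array([[1.0, 0.0, 0.0],
                 [0.0, -np.sin(theta), np.cos(theta)],
                 [0.0, -np.cos(theta), -np.sin(theta)]])

def pixel_to_ground(u, v):
    """Cast a ray through pixel (u, v) and intersect it with the plane z = 0."""
    ray = R_cw @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = -cam_pos[2] / ray[2]
    return cam_pos + t * ray

p0 = pixel_to_ground(980.0, 700.0)    # tracked pixel in one frame
p1 = pixel_to_ground(1005.0, 710.0)   # same feature one frame later
fps = 30.0                            # assumed frame rate of the footage
print(f"ground positions {p0[:2]} -> {p1[:2]}, speed ≈ {np.linalg.norm(p1 - p0) * fps:.2f} m/s")
```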
Journal Article

An Optimization of Small Unmanned Aerial System (sUAS) Image Based Scanning Techniques for Mapping Accident Sites

2019-04-02
2019-01-0427
Small unmanned aerial systems have gained prominence in their use as tools for mapping the 3-dimensional characteristics of accident sites. Typically, the process of mapping an accident site involves taking a series of overlapping, high resolution photographs of the site, and using photogrammetric software to create a point cloud or mesh of the site. This process, known as image-based scanning, is explored and analyzed in this paper. A mock accident site was created that included a stopped vehicle, a bicycle, and a ladder. These objects represent items commonly found at accident sites. The accident site was then documented with several different unmanned aerial vehicles at differing altitudes, with differing flight patterns, and with different flight control software. The photographs taken with the unmanned aerial vehicles were then processed with photogrammetry software using different methods to scale and align the point clouds.
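One quantity commonly used when planning flights at different altitudes is the ground sample distance (GSD). The sketch below computes GSD for a few altitudes using assumed camera parameters, not those of any aircraft in the paper.

```python
# Sketch of the ground sample distance (GSD) calculation commonly used when
# planning image-based scanning flights at different altitudes. The camera
# parameters below are assumed, not specific to any aircraft in the paper.
def gsd_cm_per_px(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

for alt in (30, 60, 90):  # meters above ground level
    print(f"{alt} m AGL -> GSD ≈ {gsd_cm_per_px(alt, 8.8, 13.2, 5472):.2f} cm/px")
```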
Technical Paper

An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable

2017-03-28
2017-01-1422
Photogrammetry and the accuracy of a photogrammetric solution is reliant on the quality of photographs and the accuracy of pixel location within the photographs. A photograph with lens distortion can create inaccuracies within a photogrammetric solution. Due to the curved nature of a camera's lens(es), the light coming through the lens and onto the image sensor can have varying degrees of distortion. There are commercially available software titles that rely on a library of known cameras, lenses, and configurations for removing lens distortion. However, to use these software titles the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera and lens specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present. This method will be referred to as the straight-line method.
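A simplified illustration of the straight-line idea is sketched below: a single radial coefficient (one-parameter division model, an assumption rather than the paper's model) is chosen so that points digitized along a known-straight edge become most collinear; the digitized points are fabricated for illustration.

```python
# Simplified sketch of the straight-line idea: choose a single radial
# distortion coefficient (one-parameter division model, an assumption, not
# necessarily the model used in the paper) that makes points digitized along
# a known-straight edge most collinear. The points are fabricated for illustration.
import numpy as np

def undistort(points, k1):
    """Division-model undistortion of normalized image points."""
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points / (1.0 + k1 * r2)

def line_residual(points):
    """RMS distance of points from their best-fit straight line."""
    centered = points - points.mean(axis=0)
    _, s, _ = np.linalg.svd(centered)
    return s[-1] / np.sqrt(len(points))

# Points clicked along a straight curb in a distorted photo (normalized coords).
curb = np.array([[-0.8, 0.395], [-0.4, 0.425], [0.0, 0.435],
                 [0.4, 0.425], [0.8, 0.395]])

candidates = np.linspace(-0.3, 0.3, 601)
best_k1 = min(candidates, key=lambda k: line_residual(undistort(curb, k)))
print(f"estimated k1 ≈ {best_k1:.3f}")
```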
Journal Article

Speed Analysis of Yawing Passenger Vehicles Following a Tire Tread Detachment

2019-04-02
2019-01-0418
This paper presents yaw testing of vehicles with tread removed from tires at various locations. A 2004 Chevrolet Malibu and a 2003 Ford Expedition were included in the test series. The vehicles were accelerated up to speed and a large steering input was made to induce yaw. Speed at the beginning of the tire mark evidence varied between 33 mph and 73 mph. Both vehicles were instrumented to record over-the-ground speed, steering angle, yaw angle and, in some tests, wheel speeds. The tire marks on the roadway were surveyed and photographed. The Critical Speed Formula has long been used by accident reconstructionists for estimating a vehicle's speed at the beginning of yaw tire marks. The method has been validated by previous researchers to calculate the speed of a vehicle with four intact tires. This research extends the Critical Speed Formula to include yawing vehicles following a tread detachment event.
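For reference, a short sketch of the conventional Critical Speed Formula, with the yaw-mark radius taken from a chord and middle-ordinate measurement, is given below; the chord, middle ordinate, and drag factor values are illustrative only.

```python
# Sketch of the conventional Critical Speed Formula referenced above, with the
# yaw-mark radius taken from a chord and middle-ordinate measurement. The
# chord, middle ordinate, and drag factor values are illustrative only.
import math

def radius_from_chord(chord_ft, middle_ordinate_ft):
    return chord_ft**2 / (8.0 * middle_ordinate_ft) + middle_ordinate_ft / 2.0

def critical_speed_mph(radius_ft, drag_factor):
    g = 32.2  # ft/s^2
    v_fps = math.sqrt(drag_factor * g * radius_ft)
    return v_fps / 1.46667

R = radius_from_chord(chord_ft=100.0, middle_ordinate_ft=4.0)
print(f"radius ≈ {R:.0f} ft, critical speed ≈ {critical_speed_mph(R, 0.75):.1f} mph")
```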