Video Applications in Traffic Collision Reconstruction


By Joseph O'Neill | Published on January 26, 2017 | Updated on January 26, 2017

This article presents how security and onboard vehicle camera videos may be applied to traffic collision reconstruction. Several real-world examples are used to illustrate how images, even of poor quality, may supplement conventional traffic collision reconstruction methodology. Finding a security camera in the area does not necessarily imply the video will be helpful to the investigation. Some cameras are decoys, while others may not even point toward the area in question. When a camera is located, several concerns arise when obtaining a copy of the surveillance video. The foremost consideration is the time between the event and the investigation. A further challenge the investigator may face is how to obtain a copy of the video without loss of image quality. Some video formats may require special viewing software. Oftentimes, even the poorest quality video will benefit the resulting analysis.

Resolution is the video frame dimension, in pixels, identifying the width and height of the video image (e.g., 1440×900 or 1280×1024). Image quality increases with pixel count. The frame rate, in units of frames per second (fps), is the number of images the system records (or plays back) per second at a specified resolution. A common capture rate for home security cameras is 30 fps.

Environmental conditions may affect video image quality despite a high-resolution system. The sun’s azimuth and altitude continually change, and sunlight directed onto the security camera lens may wash out portions of the video image. Accumulated dust on the camera lens may diminish video detail. Recording the playback monitor with another video device can significantly reduce image quality such that necessary detail may be lost.

Security videos may appear distorted as a result of wide-angle lenses. However, lens distortion can be corrected or minimized. If the camera and lens properties are known, many software applications can correct the distortion automatically. Adjusting lens correction filters manually is an alternative method when the camera and lens properties are unknown.
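As a minimal sketch of the software approach, assuming the open-source OpenCV library is available and the camera intrinsics have been obtained (the camera matrix, distortion coefficients, and file names below are hypothetical placeholders, not values from any case discussed here):

```python
import cv2
import numpy as np

# One exported video frame; replace the path with an actual still image.
frame = cv2.imread("security_frame.png")

# Hypothetical intrinsics -- substitute values for the actual camera and lens.
camera_matrix = np.array([[900.0,   0.0, 640.0],
                          [  0.0, 900.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Remove the wide-angle (barrel) distortion and save the corrected frame.
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("security_frame_undistorted.png", undistorted)
```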

Video playback speed is the rate at which the images are displayed relative to time; 30 fps, for example, corresponds to one frame every 0.0333 seconds. Security videos commonly display a timestamp that may help determine the frame rate. Oftentimes, the keyboard arrow keys may be used to advance the video frames either forward or backward. The frame rate may then be calculated by counting keystrokes (frames) between timestamp increments and dividing the frame count by the time increment.
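A sketch of that bookkeeping, with illustrative numbers rather than figures from any of the cases below:

```python
def estimate_frame_rate(frames_counted, timestamp_increment_s):
    """Frames per second, given how many frames were stepped through
    while the on-screen timestamp advanced by timestamp_increment_s seconds."""
    return frames_counted / timestamp_increment_s

# Example: 30 arrow-key steps while the timestamp advanced one second.
print(estimate_frame_rate(30, 1.0))        # 30.0 fps
print(1.0 / estimate_frame_rate(30, 1.0))  # ~0.0333 seconds per frame
```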

Security videos may be used to corroborate eyewitness statements, determine the color of traffic lights, or verify headlight use at the time of collision. Beyond these benefits, videos are commonly used to determine vehicle speed before or at impact. In simple terms, to determine vehicle speed (s), the time (t) and distance (d) over which the vehicle travels are needed (s = d/t). Both parameters must be determined as accurately as possible to yield reliable results.

Distance (d) is determined by first tracking the vehicle’s movements relative to stationary landmarks or reference lines drawn between objects in the video image. A time-position analysis may then be performed using the distance traveled between consecutive vehicle positions and the corresponding time interval. Landmarks such as lane lines, raised pavement markers, or sign posts may be chosen because they are conveniently adjacent to the vehicle passing in view of the camera. Alternatively, reference lines between two points in the video (the camera location and a stationary object, or two stationary objects) may be used.

The identified landmarks and reference lines are drawn on a scale collision diagram, and the distance traveled between time-based vehicle positions may then be measured directly. The time between vehicle positions is determined by subtracting the time of one vehicle position from the time of the next. The vehicle speed between positions is calculated as distance divided by time.
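A minimal sketch of such a time-position analysis, assuming each vehicle position has already been measured on the scale diagram (in feet) and paired with its video time (in seconds); all values below are hypothetical placeholders:

```python
# Measured positions along the travel path (ft) and the matching video times (s).
positions_ft = [0.0, 22.0, 44.5, 66.0]
times_s      = [0.000, 0.250, 0.500, 0.750]

FPS_TO_MPH = 3600.0 / 5280.0   # feet per second -> miles per hour

for i in range(1, len(positions_ft)):
    d = positions_ft[i] - positions_ft[i - 1]   # distance between positions
    t = times_s[i] - times_s[i - 1]             # time between positions
    speed_mph = (d / t) * FPS_TO_MPH            # s = d / t, converted to mph
    print(f"Segment {i}: {speed_mph:.1f} mph")
```

For instance, 22 ft covered in 0.25 s is 88 ft/s, or roughly 60 mph.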

Case Study Example #1 – Motorcycle vs. Vehicle Collision

This case involves a driver who briefly stopped on the right shoulder of a four-lane boulevard and then attempted an illegal U-turn. During the U-turn, the vehicle crossed the path of a motorcycle approaching from behind in the adjacent traffic lane. The vehicle was nearly perpendicular to the traffic lanes when the motorcycle slammed into the driver’s door, killing the rider.

A security camera was discovered during the investigation. The camera faced the street and captured the moments leading up to and including the collision event (figure 01). The investigator made a recording of the collision event from the playback monitor; note the light fixture’s reflection on the playback monitor (figure 02). Despite the poor quality, the video still provides important timing and pre-impact information.

The video shows the vehicle stopped along the curb for about 32 seconds while the driver waits for traffic to clear (figure 03). The vehicle’s headlights and taillights are illuminated, and brake light function is confirmed as the vehicle inches forward several times before commencing the U-turn (figure 04). The U-turn lasts about three seconds until the vehicle is observed to roll clockwise due to the force of impact (figure 05). The vehicle continues after impact and eventually parks at the far side shoulder.

The motorcycle’s approach is announced by its headlight beam on the roadway for more than a second before the motorcycle enters the video image, confirming that the headlight was functioning (figure 06). The motorcycle travels across the video screen for less than a second until impact (figure 07). Just before impact, however, an increase in the intensity of the headlight beam on the roadway is detected. This critical observation indicates the rider at least applied the front brake: the front suspension compressed and the motorcycle pitched downward, causing the headlight beam angle to change. In this case, therefore, the rider’s perception of an impact hazard was confirmed, despite the motorcycle having ABS and despite no tire friction marks being observed or documented at the scene.

Case Study Example #2 – Semi Tractor-Trailer vs. Vehicle Collision

This case involves a collision between a semi tractor-trailer and a passenger vehicle merging onto a highway. As the vehicle moved left, it slowed until its left rear bumper corner was impacted by the right edge of the tractor’s front bumper. After contact, the vehicle rotated counterclockwise and was redirected to the left. The vehicle then crossed the adjacent traffic lanes and collided with another vehicle in what became a three-vehicle event.

The trucking company installed an event recorder in its fleet vehicles (figure 08). The video system simultaneously records one camera’s view forward through the windshield and a second camera’s view directed toward the driver (figure 09). The video recording assisted the analysis of the events, including the driver’s actions, leading up to the collision.

The playback speed is real time at 30 fps. However, the camera records at 4 fps, so video frames are displayed in one-quarter-second increments. At the bottom of the image, the tractor’s speed is displayed along with forward and lateral acceleration and a timestamp relative to the trigger event. Vehicle speed was not in dispute, since the tractor’s speed is based on GPS satellite signals and is reasonably accurate.
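A short sketch of how a recorded frame index translates into elapsed real time under this 4 fps recording rate (the frame index used in the example is illustrative):

```python
RECORD_FPS = 4.0  # recording rate of the onboard camera

def frame_to_elapsed_time(frame_index):
    """Elapsed real time, in seconds, represented by a recorded frame index."""
    return frame_index / RECORD_FPS

# Example: the 5th recorded frame after a reference frame corresponds to 1.25 s,
# the same interval as the post-impact lane change described below.
print(frame_to_elapsed_time(5))  # 1.25
```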

The passenger vehicle comes along the right side of the tractor and initiates a merge to the left. Approximately two seconds into the merge, the vehicle’s left-side tires roll over the lane separator line (figure 10). The vehicle continues moving left, and then slows until contact with the tractor’s bumper is made (figure 11). Impact is the moment when the vehicle’s heading change is detected. After impact, the vehicle rotates and translates left until it enters the adjacent left lane, an action that takes approximately 1.25 seconds (figure 12).

Without the onboard video camera, the pre-impact and impact events would not have been known, because of the lack of documented physical evidence. In this case, no tire marking or other roadway evidence was recorded, and the areas of impact (AOIs) could only be estimated because access to the roadway was limited by vehicle traffic speed.

Case Study Example #3 – Wrong-Way DUI Driver Head-on Collision

This case involves a law enforcement deputy intervening on the report of a wrong-way driver. Up ahead, several vehicles had avoided a potential head-on collision before the deputy entered the highway, driving toward the offending SUV. Approaching from behind the deputy, in the same lane, were two vehicles, one behind the other. These vehicles changed lanes to the left and passed the slowly merging law enforcement vehicle, whose overhead lights had not yet been activated (figure 13). In the left lane, the passing vehicles were now on a collision course with the offending vehicle. At the last moment, the lead vehicle of the two swerved right; the trailing vehicle, however, plowed head-on into the wrong-way vehicle (figure 14).

The patrol car is equipped with an onboard video system with a camera installed on the windshield header adjacent to the interior rearview mirror (figure 15). The video camera captures images continuously but does not save a recording until the overhead lights are activated, at which point a 60-second buffer preceding activation is included. When the law enforcement officer noticed the passing vehicles, he activated the overhead lights and, in doing so, captured the fatal collision, which occurred at a closing speed in excess of 120 mph.

A time-position analysis was employed to determine 1) the speed of the patrol car at the time of the merge and 2) the speeds of the passing vehicles. The position of the patrol car on the roadway was determined using the push guard as a reference, because its position is fixed relative to the video camera (figure 16). The position of this reference line was then tracked relative to artifacts on the roadway, such as asphalt patches, lane lines, and raised reflectors. The passing vehicles were tracked in a similar fashion; however, their headlight beams on the roadway were tracked throughout the video sequence to establish their roadway positions in time (figure 17).

The frame rate of the video playback provided the time increments between successive vehicle positions. The on-ramp and highway were surveyed by a licensed professional land surveyor, who prepared a scale roadway diagram on which the distance between artifacts, i.e., corresponding vehicle positions, was measured. It was determined that the law enforcement vehicle had merged onto the highway at approximately 32 mph, less than half the posted speed limit of 65 mph. The vehicles that passed the law enforcement vehicle were determined to be traveling at approximately 60 mph.
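As a quick plausibility check on results like these, the speeds can be converted into the distance a vehicle should cover between analyzed frames; the 30 fps playback rate used below is assumed for illustration rather than taken from this case:

```python
MPH_TO_FPS = 5280.0 / 3600.0  # miles per hour -> feet per second

for label, mph in [("patrol car", 32.0), ("passing vehicles", 60.0)]:
    feet_per_second = mph * MPH_TO_FPS
    print(f"{label}: {mph:.0f} mph is about {feet_per_second:.0f} ft/s, "
          f"or {feet_per_second / 30.0:.1f} ft per frame at 30 fps playback")
```

At roughly 47 ft/s versus 88 ft/s, the measured frame-to-frame displacements on the scale diagram should differ by nearly a factor of two between the patrol car and the passing vehicles.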

Traffic collision reconstruction is a multi-disciplinary field that encompasses the proper application of physics and engineering principles to quantify the motion and collision dynamics of traffic collisions. Conventional reconstruction methods include, but are not limited to, momentum or damage-energy analyses, collision reconstruction or simulation software, interpretation of crash data, and evaluation of physical evidence at the collision scene, such as tire friction marks and other debris. Video analysis is another capable tool.

Security videos vary greatly in quality, but oftentimes some collision detail would not be discovered without them. Conventional collision reconstruction methods do not fail for lack of a video capturing the event in question; however, security videos have been shown to advance the understanding of traffic collisions when they are available.

About the author

Joseph O'Neill


Joe has extensive experience in online journalism and technical writing across a range of legal topics, including personal injury, medical malpractice, mass torts, consumer litigation, commercial litigation, and more. Joe spent close to six years working at Expert Institute, finishing up his role here as Director of Marketing. He has considerable knowledge across an array of legal topics pertaining to expert witnesses. Currently, Joe serves as Owner and Demand Generation Consultant at LightSail Consulting.

