A Digital Forensic Method to Detect Object-Based Video Forgery Security Attacks on Surround View ADAS Camera Systems 2021-01-0146
Present and emerging surround view camera systems provide a bird's-eye view of the driving environment to the driver through a real-time video feed on the digital cockpit infotainment display, assisting the driver in maneuvering, parking, and lane changing through object detection, object tracking, maneuver estimation, blind spot detection, lane detection, and similar functions. The functional safety of the surround view camera system is compromised if it fails to alert the driver when obstacles are truly present in the vehicle's nearby driving environment, or if it alerts the driver when no obstacles are present. Such malfunctioning of the surround view driver assistance system arises when its integrity is compromised through cyberattacks, in which attackers forge the video data displayed on the infotainment system, which has external connectivity. This passive spatial tampering modifies the actual surround view real-time video content at the pixel, block, or scene level to insert fake objects into, or remove existing real objects from, the received original video frames, thereby fooling the driving system. A successful intra-frame passive forgery attack leads the driver to a false interpretation of the presence or absence of objects in the driving environment, which can cause the ego vehicle to collide with its surroundings. Hence, a deep learning based digital forensic approach is proposed to detect forgery attacks on surround view camera systems. A successful launch of forgery attacks on the surround view video stream is detected intelligently using deep learning techniques before the driver assistance video is displayed on the cockpit dashboard.
The proposed approach detects splicing and copy-move object forgery attacks on the incoming real-time video frames by learning feature representations with convolutional neural networks, and validates the authenticity of each frame before it is displayed on the infotainment driver interface. The approach thus detects video forgery attacks and delivers only authentic driver assistance data, strengthening trust in surround view camera systems against cyberattacks and ensuring safety and system integrity.
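To make the detection step concrete, the sketch below shows how a per-frame binary classifier of this kind could be structured in PyTorch. The layer sizes, input resolution, and class labels are illustrative assumptions; the abstract does not specify the network architecture, so this is a minimal sketch of a CNN that maps an RGB frame to authentic/forged logits, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ForgeryDetector(nn.Module):
    """Illustrative CNN that classifies a video frame as authentic or forged.
    Layer widths and depth are assumptions, not the paper's architecture."""
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor: learns pixel/block-level
        # tampering artifacts such as splicing boundaries.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global pooling + linear head: two classes (authentic, forged).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Run one dummy RGB frame (batch of 1, 224x224) through the detector.
frame = torch.zeros(1, 3, 224, 224)
logits = ForgeryDetector()(frame)
print(tuple(logits.shape))
```

In deployment, a gate of this form would sit between the video pipeline and the infotainment display, passing a frame through only when the "authentic" logit wins; the abstract describes exactly this validate-before-display placement.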