IJCNN 2025, INNS International Joint Conference on Neural Networks, VERIMEDIA: International Workshop on Media Verification and Integrity, 30 June-5 July 2025, Rome, Italy
The rapid advancement of deepfake technology has raised significant concerns about the authenticity of digital media and its potential misuse. While much progress has been made in developing methods to detect whether a video is fake, a critical question remains: can we go one step further? What additional information can be derived once a deepfake has been identified? Beyond merely flagging manipulated content, understanding the source of the manipulation holds significant value for forensics and investigation. This paper addresses one aspect of this challenge by demonstrating how to recover information from the driving video, i.e., the input video guiding the deepfake generation, in order to identify the person acting in it (the suspected driver). By learning the facial expressions and movements unique to a suspected driver, we can identify, within a pool of deepfakes, which ones were generated using videos of that driver. Although the current approach is limited by its need for a large quantity of data on the suspected identity, this work demonstrates the feasibility of deducing information about the driving video directly from the deepfakes. Code available at: https://github.com/Thiresias/BRT-driver-identification
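The identification step described above can be pictured as a retrieval problem: embed the motion dynamics of each deepfake in the pool, embed the suspected driver's known footage, and rank the pool by similarity. The sketch below illustrates only this ranking idea under assumed inputs; the embeddings, function names, and toy vectors are hypothetical and do not reflect the paper's actual pipeline:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_deepfakes(driver_embedding, deepfake_embeddings):
    """Rank deepfakes in a pool by how closely their (hypothetical) motion
    embedding matches the suspected driver's embedding, most similar first."""
    scores = [cosine_similarity(driver_embedding, e) for e in deepfake_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy example with made-up 4-D motion embeddings.
driver = np.array([1.0, 0.0, 0.5, 0.2])
pool = [np.array([0.0, 1.0, 0.0, 0.9]),   # different motion style
        np.array([0.9, 0.1, 0.6, 0.1]),   # close to the driver's style
        np.array([0.2, 0.8, 0.1, 0.7])]
print(rank_deepfakes(driver, pool))  # → [1, 2, 0]: the second video ranks first
```

In a real setting, the embeddings would come from a model trained on the suspected driver's footage, which is where the large data requirement noted in the abstract arises.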
Type:
Conference
City:
Rome
Date:
2025-06-30
Department:
Digital Security
Eurecom Ref:
8285
Copyright:
© 2025 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.