Brain Responses to Deepfakes and Real Videos of Emotional Facial Expressions Reveal Detection Without Awareness
by Casey Becker, Russell Conduit, Philippe A. Chouinard, and Robin Laycock

Posted to SSRN: 12 May 2025
Posted by Alumni: June 3, 2025
AI-generated deepfakes may aid biological vision research, but only if they elicit neural responses similar to those evoked by their real video counterparts. We investigated neural (EEG) responses to happy and fearful facial expressions presented as videos, deepfakes, and dynamic morphs, in participants who were unaware that any stimuli had been manipulated.

The N400 event-related potential, associated with expectation violation, showed higher amplitudes for dynamic morphs and deepfakes than for videos. The Late Positive Potential (LPP), associated with motivational significance, was lower for dynamic morphs than for videos and deepfakes. Delta oscillations, broadly associated with stimulus salience, differed in topography depending on the type of display: delta power was highest at frontal electrodes for videos and highest at posterior electrodes for dynamic morphs, while for deepfakes, delta power at both frontal and posterior electrodes was higher than at central electrodes. Delta phase-locking, which measured synchronisation across trials regardless of power, was increased in the frontal region for videos compared to dynamic morphs.

Post-experiment interviews revealed similar perceptions of deepfakes and videos, while dynamic morphs were described as manipulated or unsettling. Even after the presence of deepfakes was disclosed, participants struggled to identify them in an explicit detection task. Despite the high degree of realism of the deepfakes, our findings suggest that the human brain can implicitly detect subtle discrepancies, as indicated by increased N400 amplitude for deepfakes. Our study provides valuable insights into the perception of deepfakes and the neural mechanisms underlying face perception and emotional processing.

Learn more on SSRN.
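For readers unfamiliar with the two delta-band measures mentioned above, the sketch below illustrates how band power and inter-trial phase locking are commonly computed from epoched EEG. This is not the authors' analysis pipeline; the function name `delta_power_and_plv`, the sampling rate, the filter settings, and the synthetic data are illustrative assumptions.

```python
# Illustrative sketch (not the study's code): delta-band power and
# inter-trial phase-locking value (PLV) for one electrode's epoched EEG.
# Assumes data shaped (n_trials, n_samples) sampled at `fs` Hz; delta = 1-4 Hz.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_power_and_plv(epochs: np.ndarray, fs: float, band=(1.0, 4.0)):
    """Return mean delta power and mean phase-locking value across trials."""
    # Zero-phase band-pass filter each trial in the delta band.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)

    # Analytic signal gives instantaneous amplitude and phase per trial.
    analytic = hilbert(filtered, axis=1)
    amplitude = np.abs(analytic)
    phase = np.angle(analytic)

    # Power: squared amplitude, averaged over trials and time.
    power = np.mean(amplitude ** 2)

    # PLV: phase consistency across trials at each time point, independent of
    # amplitude (1 = perfectly synchronised, 0 = random phases across trials).
    plv = np.abs(np.mean(np.exp(1j * phase), axis=0))

    return power, plv.mean()

if __name__ == "__main__":
    # Synthetic example: 40 trials of 1 s at 250 Hz with a phase-locked 3 Hz
    # component plus noise, mimicking a stimulus-locked delta response.
    rng = np.random.default_rng(0)
    fs, n_trials, n_samples = 250.0, 40, 250
    t = np.arange(n_samples) / fs
    epochs = np.sin(2 * np.pi * 3 * t) + rng.normal(0, 1.0, (n_trials, n_samples))
    power, plv = delta_power_and_plv(epochs, fs)
    print(f"mean delta power: {power:.3f}, mean PLV: {plv:.3f}")
```

Separating the two measures in this way is what allows phase-locking differences between display types to be interpreted independently of raw delta power.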