Security Implications of Deepfakes in Face Authentication
Authors
Šalko, Milan
Firc, Anton
Malinka, Kamil
Publisher
Association for Computing Machinery
Abstract
Deepfakes are media generated by deep learning that are nearly indistinguishable from real content to humans. They have surged in popularity in recent years, and numerous papers have discussed their effectiveness in deceiving people. Equally, if not more, concerning is the potential vulnerability of facial and voice recognition systems to deepfakes. The misuse of deepfakes to spoof automated facial recognition systems threatens various aspects of our lives, including financial security and access to secure locations, yet this issue remains largely unexplored. This paper therefore investigates the technical feasibility of a spoofing attack on facial recognition. First, we perform a threat analysis to identify the facial recognition use cases that allow the execution of deepfake spoofing attacks. Based on this analysis, we define the attacker model for such attacks on facial recognition systems. We then demonstrate the ability of deepfakes to spoof two commercial facial recognition systems. Finally, we discuss possible means of preventing such spoofing attacks.
Citation
Proceedings of the ACM Symposium on Applied Computing. 2024, p. 1376-1384.
https://dl.acm.org/doi/10.1145/3605098.3635953
Document type
Peer-reviewed
Document version
Published version
Language of document
en
Creative Commons license
Except where otherwise noted, this item's license is described as Creative Commons Attribution 4.0 International.

ORCID: 0009-0004-9604-168X