Perturbation Inactivation Based Adversarial Defense for Face Recognition
Ren, Min1,2; Zhu, Yuhao3; Wang, Yunlong2; Sun, Zhenan2,4
Journal: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
Year: 2022
Volume: 17, Pages: 2947-2962
Keywords: Face recognition; Perturbation methods; Robustness; Immune system; Principal component analysis; Deep learning; Training; Adversarial machine learning; deep learning; graph neural network; face recognition
ISSN: 1556-6013
DOI: 10.1109/TIFS.2022.3195384
Corresponding Author: Wang, Yunlong (yunlong.wang@cripac.ia.ac.cn)
Abstract: Deep learning-based face recognition models are vulnerable to adversarial attacks. To curb these attacks, most defense methods aim to improve the robustness of recognition models against adversarial perturbations. However, the generalization capacities of these methods are quite limited, and in practice they remain vulnerable to unseen adversarial attacks. Deep learning models are fairly robust to general perturbations, such as Gaussian noise. A straightforward approach is therefore to inactivate adversarial perturbations so that they can be handled as easily as general perturbations. In this paper, a plug-and-play adversarial defense method, named perturbation inactivation (PIN), is proposed to inactivate adversarial perturbations for adversarial defense. We discover that perturbations in different subspaces have different influences on the recognition model. There should be a subspace, called the immune space, in which perturbations have fewer adverse impacts on the recognition model than in other subspaces. Hence, our method estimates the immune space and inactivates adversarial perturbations by restricting them to this subspace. The proposed method generalizes to unseen adversarial perturbations since it does not rely on a specific kind of adversarial attack method. This approach not only outperforms several state-of-the-art adversarial defense methods but also demonstrates superior generalization capacity in exhaustive experiments. Moreover, the proposed method can be successfully applied to four commercial APIs without additional training, indicating that it can be easily generalized to existing face recognition systems.
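A note on the mechanism sketched in the abstract: the description stays at a high level, so the following is only a minimal, hypothetical illustration of the projection idea, assuming the "immune space" is a linear subspace estimated with PCA (as the keyword list suggests). The function names, the SVD-based basis estimation, and the dimensionality k are illustrative assumptions, not the authors' actual PIN implementation.

import numpy as np

def estimate_immune_basis(samples, k):
    """Hypothetical basis estimation: take the top-k principal components
    of a set of face images (flattened to vectors) as the subspace assumed
    to be least harmful to the recognition model."""
    data = samples.reshape(len(samples), -1).astype(np.float64)
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]             # mean (d,), row-orthonormal basis (k, d)

def inactivate(image, mean, basis):
    """Restrict a (possibly adversarial) input to the estimated immune
    subspace: components outside the subspace, assumed to carry most of
    the adversarial effect, are discarded before recognition."""
    x = image.reshape(-1).astype(np.float64) - mean
    x_proj = basis.T @ (basis @ x)  # orthogonal projection onto the subspace
    return (mean + x_proj).reshape(image.shape)

Used as a plug-and-play preprocessing step, inactivate() would be applied to each probe image before it is passed to the face recognition model, which is consistent with the paper's claim that the defense requires no retraining of the recognizer; the actual subspace estimation in the paper may differ substantially from this PCA sketch.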
Funding Projects: National Natural Science Foundation of China [U1836217]; National Natural Science Foundation of China [62006225]; National Natural Science Foundation of China [61622310]; National Natural Science Foundation of China [62071468]; Strategic Priority Research Program of Chinese Academy of Sciences [XDA27040700]
WOS Keywords: DEEP; ATTACK
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number: WOS:000845059500001
Funding Organizations: National Natural Science Foundation of China; Strategic Priority Research Program of Chinese Academy of Sciences
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/50011
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Author Affiliations:
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
2. Chinese Acad Sci, Natl Lab Pattern Recognit, Ctr Res Intelligent Percept & Comp, Inst Automat, Beijing, Peoples R China
3. China Acad Railway Sci, Postgraduate Dept, Beijing 100081, Peoples R China
4. Univ Chinese Acad Sci, Sch Artificial Intelligence, CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714: Ren, Min, Zhu, Yuhao, Wang, Yunlong, et al. Perturbation Inactivation Based Adversarial Defense for Face Recognition[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17: 2947-2962.
APA: Ren, Min, Zhu, Yuhao, Wang, Yunlong, & Sun, Zhenan. (2022). Perturbation Inactivation Based Adversarial Defense for Face Recognition. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 17, 2947-2962.
MLA: Ren, Min, et al. "Perturbation Inactivation Based Adversarial Defense for Face Recognition". IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 17 (2022): 2947-2962.