Hashing Fake: Producing Adversarial Perturbation for Online Privacy Protection Against Automatic Retrieval Models
Zhang, Xingwei1,2; Zheng, Xiaolong1,2; Mao, Wenji1,2; Zeng, Daniel Dajun1,2; Wang, Fei-Yue1,2
Journal: IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS
Date: 2022-09-30
Pages: 11
Keywords: Perturbation methods; Semantics; Computational modeling; Codes; Task analysis; Privacy; Software; Adversarial perturbation; deep neural network (DNN); privacy protection; semantic retrieval
ISSN: 2329-924X
DOI: 10.1109/TCSS.2022.3204120
Corresponding Author: Zheng, Xiaolong (xiaolong.zheng@ia.ac.cn)
Abstract: The wide application of deep neural networks (DNNs) has significantly improved the performance of hashing models on multimodal retrieval tasks. DNN-based deep models can automatically learn semantic features from raw data to make human-level decisions. However, this superior generalization also introduces privacy leakage risks: strong DNN-based retrieval models enable malicious crawlers to search for untagged private information through semantic similarity matching. Hence, effective privacy protection mechanisms against such retrieval software are essential for building reliable social websites. In this article, we propose a retrieval-task-based adversarial perturbation generation method, called Hashing Fake, to meet this need. Specifically, DNNs have recently been found to be vulnerable to adversarial perturbations, i.e., magnitude-restricted signals added to target samples that mislead well-trained DNN models while remaining imperceptible to humans. Moreover, since existing adversarial perturbation generation methods are designed for supervised tasks, Hashing Fake constructs a differentiable approximation as a substitute for perturbation generation on unsupervised retrieval tasks. Through extensive experiments on several deep retrieval benchmarks, we demonstrate that perturbations crafted with Hashing Fake effectively mislead target models into making false predictions. Because the norm-restricted perturbations added to target samples do not alter human perception, Hashing Fake can be applied on real-world social websites to protect subscribers' privacy against malicious retrieval software.
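For illustration only (not the authors' implementation): the abstract describes crafting norm-restricted perturbations against a deep hashing retrieval model by replacing the non-differentiable binarization step with a differentiable approximation. A minimal PGD-style sketch of that general idea, assuming a PyTorch image-hashing model (ToyHashNet and craft_perturbation are hypothetical names introduced here), could look like this:

```python
import torch
import torch.nn as nn

class ToyHashNet(nn.Module):
    """Hypothetical stand-in for a pretrained deep hashing model."""
    def __init__(self, dim=3 * 32 * 32, bits=48):
        super().__init__()
        self.fc = nn.Linear(dim, bits)

    def forward(self, x):
        # Real-valued hash logits; binarized with sign() at retrieval time.
        return self.fc(x.flatten(1))

def craft_perturbation(model, x, eps=8 / 255, alpha=2 / 255, steps=20):
    """PGD-style search for a norm-restricted perturbation that flips hash bits."""
    model.eval()
    with torch.no_grad():
        clean_code = torch.sign(model(x))          # binary code a crawler would index
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # tanh() serves as a differentiable surrogate for the non-differentiable sign().
        relaxed_code = torch.tanh(model(x + delta))
        # Minimizing the inner product with the clean code maximizes Hamming distance.
        loss = (relaxed_code * clean_code).sum()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # signed-gradient descent step
            delta.clamp_(-eps, eps)                   # keep the change imperceptible
            delta.copy_((x + delta).clamp(0, 1) - x)  # stay in the valid pixel range
        delta.grad.zero_()
    return (x + delta).detach()

if __name__ == "__main__":
    net = ToyHashNet()
    images = torch.rand(4, 3, 32, 32)                 # placeholder user photos
    protected = craft_perturbation(net, images)
    flipped = (torch.sign(net(images)) != torch.sign(net(protected))).float().mean().item()
    print(f"fraction of hash bits flipped: {flipped:.2f}")
```

The tanh relaxation and the L-infinity projection mirror the general approach the abstract describes; the actual Hashing Fake objective and optimization details are given in the paper.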
Funding Projects: Ministry of Science and Technology of China [2020AAA0108401]; Natural Science Foundation of China [72225011]; Natural Science Foundation of China [71621002]
WOS Research Area: Computer Science
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number: WOS:000865074900001
Funding Organizations: Ministry of Science and Technology of China; Natural Science Foundation of China
Document Type: Journal Article
Source URL: [http://ir.ia.ac.cn/handle/173211/50352]
Collection: Institute of Automation / State Key Laboratory of Management and Control for Complex Systems / Advanced Control and Automation Team
Author Affiliations:
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China
2. Chinese Acad Sci, State Key Lab Management & Control Complex Syst, Inst Automat, Beijing 100080, Peoples R China
Recommended Citation:
GB/T 7714: Zhang, Xingwei, Zheng, Xiaolong, Mao, Wenji, et al. Hashing Fake: Producing Adversarial Perturbation for Online Privacy Protection Against Automatic Retrieval Models[J]. IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2022: 11.
APA: Zhang, Xingwei, Zheng, Xiaolong, Mao, Wenji, Zeng, Daniel Dajun, & Wang, Fei-Yue. (2022). Hashing Fake: Producing Adversarial Perturbation for Online Privacy Protection Against Automatic Retrieval Models. IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 11.
MLA: Zhang, Xingwei, et al. "Hashing Fake: Producing Adversarial Perturbation for Online Privacy Protection Against Automatic Retrieval Models". IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS (2022): 11.