AugSteal: Advancing Model Steal With Data Augmentation in Active Learning Frameworks
IEEE Transactions on Information Forensics and Security ( IF 6.8 ) Pub Date : 2024-04-03 , DOI: 10.1109/tifs.2024.3384841
Lijun Gao, Wenjun Liu, Kai Liu, Jiehong Wu

With the proliferation of machine learning models across diverse applications, model security has increasingly become a focal point. Model stealing attacks can cause significant financial losses to model owners and can threaten the security of the scenarios in which the models are deployed. Traditional model stealing attacks are primarily directed at soft-label black boxes; their effectiveness diminishes significantly, or fails outright, in hard-label settings. To address this, this study proposes an active-learning-based Fusion Augmentation model stealing framework (AugSteal) for hard-label black boxes. The framework first applies deep filtering and feature extraction to large-scale, task-irrelevant public datasets to generate a reliable, diverse, and representative high-quality data subset that serves as the stealing dataset. Next, we develop an adaptive active-learning selection strategy that, for each black-box model, selects the data samples with the greatest information gain, improving the attack's specificity and effectiveness. Finally, to balance the query budget against stealing precision, we design a Fusion Augmentation training method built from two different loss functions, enabling the substitute model to closely approximate the decision distribution of the target black box. Comprehensive experimental results show that, compared with current state-of-the-art attack methods, our approach achieves a maximum gain of 8.21% in the functional similarity of the substitute models across the simulated black-box scenarios CIFAR10, SVHN, and CALTECH256, as well as the real-world Tencent Cloud API.
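The abstract describes two mechanisms: an active-learning step that picks the unlabeled samples with the highest information gain for querying the black box, and a two-term "fusion" training loss combining the black box's hard labels with data augmentation. A minimal NumPy sketch of both ideas is below; the function names, the use of predictive entropy as the uncertainty score, and the specific consistency term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per sample; higher means the substitute
    # model is more uncertain, i.e. the sample is more informative.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_queries(substitute_probs, budget):
    # Spend the query budget on the samples the substitute model
    # is least sure about (hypothetical uncertainty-sampling stand-in
    # for the paper's adaptive selection strategy).
    scores = entropy(substitute_probs)
    return np.argsort(-scores)[:budget]

def fusion_loss(probs, hard_labels, probs_aug, alpha=0.5):
    # Hypothetical two-term loss: cross-entropy against the black
    # box's hard labels, plus a consistency term pulling the
    # substitute's predictions on augmented views toward its
    # predictions on the clean inputs.
    n = probs.shape[0]
    ce = -np.log(np.clip(probs[np.arange(n), hard_labels], 1e-12, 1.0)).mean()
    consistency = ((probs - probs_aug) ** 2).sum(axis=1).mean()
    return ce + alpha * consistency
```

In a full attack loop, `select_queries` would be called each round on the remaining public-dataset pool, the selected samples sent to the black-box API for hard labels, and the substitute model updated by minimizing `fusion_loss`.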
