ATTACK-COSM: attacking the camouflaged object segmentation model through digital world adversarial examples
Complex & Intelligent Systems (IF 5.8) Pub Date: 2024-05-07, DOI: 10.1007/s40747-024-01455-7
Qiaoyi Li, Zhengjie Wang, Xiaoning Zhang, Yang Li

The camouflaged object segmentation model (COSM) has recently gained substantial attention due to its remarkable ability to detect camouflaged objects. Nevertheless, deep vision models are widely acknowledged to be susceptible to adversarial examples, which can mislead models into making incorrect predictions through imperceptible perturbations. This vulnerability raises significant concerns when deploying COSM in security-sensitive applications. Consequently, it is crucial to determine whether the foundational vision model COSM is also susceptible to such attacks. To our knowledge, our work represents the first exploration of strategies for attacking COSM with adversarial examples in the digital world. With the primary objective of reversing the predictions for both masked objects and backgrounds, we explore the adversarial robustness of COSM in full white-box and black-box settings. Beyond this primary objective, our investigation reveals the potential to generate any desired mask through adversarial attacks. The experimental results indicate that COSM demonstrates weak robustness, rendering it vulnerable to adversarial example attacks. In the realm of COS, the projected gradient descent (PGD) attack exhibits superior attack capability compared with the fast gradient sign method (FGSM) in both white-box and black-box settings. These findings help reduce the security risks in the application of COSM and pave the way for its broader application.
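
The attack objective described in the abstract, reversing the predicted mask so that camouflaged objects and background swap labels, can be framed as a targeted PGD attack. The sketch below is only an illustration of that idea, not the authors' implementation: `cos_model` is a hypothetical segmentation network returning per-pixel mask logits, `pred_mask` is its clean prediction, and the binary cross-entropy loss, perturbation budget `eps`, step size `alpha`, and iteration count are assumed values.

```python
# Minimal sketch of a targeted PGD attack that pushes a segmentation model's
# prediction toward the inverted (reversed) mask. All names and hyperparameters
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_reverse_mask(cos_model, image, pred_mask, eps=8/255, alpha=2/255, steps=40):
    """Perturb `image` within an L-infinity ball of radius `eps` so that the
    model's output moves toward the reversed mask (1 - pred_mask)."""
    target = 1.0 - pred_mask                     # objects <-> background
    adv = image.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)   # random start
    adv = adv.clamp(0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        logits = cos_model(adv)                  # (N, 1, H, W) mask logits
        loss = F.binary_cross_entropy_with_logits(logits, target)
        grad = torch.autograd.grad(loss, adv)[0]
        # Targeted step: descend the loss toward the reversed mask.
        adv = adv.detach() - alpha * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)  # project into the eps-ball
        adv = adv.clamp(0, 1).detach()
    return adv
```

A single-step FGSM variant of the same objective would simply take one signed-gradient step of size `eps` without the projection loop, which is consistent with the abstract's observation that iterative PGD yields stronger attacks than FGSM.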



Updated: 2024-05-09