
Detecting AutoAttack Perturbations in the Frequency Domain

Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, such as adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference, rejecting manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms: first, a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases; second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of 100% and 98.7% on the same benchmarks.
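A minimal sketch of the black-box idea described in the abstract: compute a frequency-domain representation of each input image and train a simple binary classifier to separate clean from attacked samples. The choice of log-magnitude Fourier spectra as features and logistic regression as the classifier are assumptions for illustration; the paper's exact features and classifier may differ.

```python
# Hypothetical sketch of a frequency-domain adversarial-example detector.
# All function names and design choices here are illustrative assumptions,
# not taken from the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fourier_features(images: np.ndarray) -> np.ndarray:
    """Map a batch of images (N, H, W, C) to flattened log-magnitude spectra."""
    # 2D FFT over the spatial axes, per channel; the shifted log-magnitude
    # spectrum is a cheap, commonly used frequency feature.
    spectra = np.fft.fftshift(np.fft.fft2(images, axes=(1, 2)), axes=(1, 2))
    return np.log1p(np.abs(spectra)).reshape(len(images), -1)

def train_detector(clean: np.ndarray, attacked: np.ndarray) -> LogisticRegression:
    """Fit a binary detector on clean vs. attacked (e.g. AutoAttack) images."""
    X = np.concatenate([fourier_features(clean), fourier_features(attacked)])
    y = np.concatenate([np.zeros(len(clean)), np.ones(len(attacked))])
    return LogisticRegression(max_iter=1000).fit(X, y)
```

At inference time, inputs the detector flags as adversarial would simply be rejected rather than classified, matching the defense strategy outlined in the abstract. The white-box variant would apply the same kind of analysis to CNN feature maps instead of raw images.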

Metadata
Document Type: Conference Proceeding
Conference Type: Conference Article
Citation Link: https://opus.hs-offenburg.de/5294
Bibliographic Information
Title (English): Detecting AutoAttack Perturbations in the Frequency Domain
Conference: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning. Workshop at ICML 2021, July 24, 2021
Author: Peter Lorenz, Paula Harder, Dominik Strassel, Margret Keuper, Janis Keuper
Year of Publication: 2021
Page Number: 7
First Page: 1
Last Page: 7
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (since 04/2019)
Research / IMLA - Institute for Machine Learning and Analytics
Institutes: Bibliography
Tag: autoattack; cifar; defense; fourier; imagenet; spectral defense
Formal Information
Open Access: Open Access
Licence (German): Urheberrechtlich geschützt (copyright protected)
Comment: Preprint; accepted by the ICML 2021 workshop on A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
ArXiv Id: http://arxiv.org/abs/2111.08785