
Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?

  • Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best performing models of around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that could be generalized to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close to perfect detection rates of adversarial samples even by simple detection algorithms and human observers; we also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution data sets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
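The benchmark setting discussed in the abstract (AutoAttack under an l∞ budget of ϵ = 8/255 on CIFAR10) can be reproduced roughly as in the following sketch. It assumes the publicly available robustbench and autoattack Python packages; the model name "Standard" and the number of evaluated samples are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not from the paper): evaluate a RobustBench model zoo entry
# under the benchmark's threat model, AutoAttack with l_inf and eps = 8/255 on CIFAR10.
# Assumes the `robustbench` and `autoattack` packages are installed.
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model
from autoattack import AutoAttack

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small CIFAR10 test subset and a pretrained leaderboard entry
# ("Standard" is the non-robust baseline; the sample count is illustrative).
x_test, y_test = load_cifar10(n_examples=256)
model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")
model = model.to(device).eval()

# AutoAttack in the RobustBench setting: l_inf perturbations with eps = 8/255.
adversary = AutoAttack(model, norm="Linf", eps=8 / 255, version="standard")
x_adv = adversary.run_standard_evaluation(x_test.to(device), y_test.to(device), bs=64)

# Robust accuracy = fraction of adversarial examples still classified correctly.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(dim=1) == y_test.to(device)).float().mean().item()
print(f"Robust accuracy under AutoAttack (Linf, eps=8/255): {robust_acc:.3f}")
```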

Metadata
Document Type: Conference Proceeding
Conference Type: Conference article
Citation link: https://opus.hs-offenburg.de/6449
Bibliographic Information
Title (English): Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?
Conference: The AAAI-22 Workshop on Adversarial Machine Learning and Beyond (AAAI-22 AdvML Workshop), Feb 28 2022, Vancouver, BC, Canada
Author: Peter Lorenz, Dominik Strassel, Margret Keuper, Janis Keuper
Year of Publication: 2022
Creating Corporation: Association for the Advancement of Artificial Intelligence
First Page: 1
Last Page: 7
Parent Title (English): The AAAI-22 Workshop on Adversarial Machine Learning and Beyond
URL: https://openreview.net/forum?id=aLB3FaqoMBs
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019)
Forschung / IMLA - Institute for Machine Learning and Analytics
Institutes: Bibliografie
Tag: Machine Learning; adversarial; autoattack; lid; mahalanobis; spectraldefense
Formal Information
Relevance: Conference contribution: h5-index < 30
Open Access: Bronze
Licence (German): Urheberrechtlich geschützt (protected by copyright)