
Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers

Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise between variety and constraint levels for attacks and sometimes even both. In a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack. Project website: https://github.com/paulgavrikov/adversarial_solarization
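For context on the underlying operation: solarization inverts all pixel values above a threshold, which perturbs the input heavily while leaving the global image structure intact. The following is a minimal sketch of the black-box variant mentioned in the abstract, assuming a PyTorch classifier and torchvision's solarize; the function name, threshold grid, and selection criterion are illustrative assumptions, not the authors' exact implementation.

import torch
from torchvision.transforms.functional import solarize

def solarization_attack(model, image, label, thresholds=None):
    """Black-box solarization attack sketch: sweep inversion thresholds
    and return the solarized image that most reduces the true-class score.

    `image` is a float tensor in [0, 1] with shape (C, H, W);
    `model` maps a batch of images to logits. The 32-step threshold
    grid is an assumption, not a value taken from the paper.
    """
    if thresholds is None:
        thresholds = torch.linspace(0.0, 1.0, steps=32)

    worst_img, worst_score = image, float("inf")
    model.eval()
    with torch.no_grad():
        for t in thresholds:
            # Solarize: invert all pixel values >= t, leave the rest unchanged.
            adv = solarize(image, threshold=t.item())
            logits = model(adv.unsqueeze(0))
            score = logits.softmax(dim=-1)[0, label].item()
            if score < worst_score:  # keep the most damaging threshold
                worst_score, worst_img = score, adv
    return worst_img, worst_score

Because the sweep only queries model outputs, it needs no gradients and, as the abstract notes, can often be run with model-independent parameters.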

Metadata
Document Type: Article (unreviewed)
Citation link: https://opus.hs-offenburg.de/8399
Bibliographic Information
Title (English): Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers
Author: Paul Gavrikov, Janis Keuper
Year of Publication: 2023
Date of first Publication: 2023/08/24
First Page: 1
Last Page: 5
DOI: https://doi.org/10.48550/arXiv.2308.12661
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Research / IMLA - Institute for Machine Learning and Analytics
Institutes: Bibliografie
Tag: adversarial attacks; deep learning; image classification; robustness
Formal Details
Relevance: No relevance
Open Access: Bronze
Licence (German): Urheberrechtlich geschützt (protected by copyright)
ArXiv Id: http://arxiv.org/abs/2308.12661