
Aliasing coincides with CNNs vulnerability towards adversarial attacks

  • Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a specifically suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
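The aliasing effect the abstract refers to follows from the Nyquist-Shannon sampling theorem: frequencies above half the new sampling rate fold back into lower frequencies when a signal is subsampled, unless a low-pass (blur) filter attenuates them first. A minimal 1D sketch of this (illustrative only, not code from the paper; the helper names `dominant_freq` and `aliased_amplitude` are made up here):

```python
import numpy as np

# A pure tone above the post-downsampling Nyquist limit.
n = 256
f = 100  # cycles over the window; after stride-2, Nyquist is 64 cycles
x = np.cos(2 * np.pi * f * np.arange(n) / n)

# Naive stride-2 subsampling, as done by a strided convolution or pooling
# layer without anti-aliasing: the tone folds to 128 - 100 = 28 cycles.
naive = x[::2]

# Blurring first with a simple 3-tap binomial low-pass kernel attenuates
# the tone before subsampling, so far less aliased energy survives.
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(x, kernel, mode="same")[::2]

def dominant_freq(sig):
    """Index of the strongest rfft bin (cycles over the window)."""
    return int(np.argmax(np.abs(np.fft.rfft(sig))))

def aliased_amplitude(sig, bin_idx=28):
    """Normalized magnitude at the aliased frequency bin."""
    return np.abs(np.fft.rfft(sig))[bin_idx] / len(sig)

print(dominant_freq(naive))  # → 28, the folded (aliased) frequency
print(aliased_amplitude(naive) > 4 * aliased_amplitude(blurred))  # → True
```

The same folding happens per channel in a CNN feature map whenever a strided layer subsamples content whose spatial frequency exceeds the new Nyquist limit.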

Metadata
Document Type:Conference Proceeding
Conference Type:Conference article
Citation link: https://opus.hs-offenburg.de/6448
Bibliographic Information
Title (English):Aliasing coincides with CNNs vulnerability towards adversarial attacks
Conference:The AAAI-22 Workshop on Adversarial Machine Learning and Beyond (AAAI-22 AdvML Workshop), Feb 28 2022, Vancouver, BC, Canada
Author:Julia Grabinski, Janis Keuper, Margret Keuper
Year of Publication:2022
Creating Corporation:Association for the Advancement of Artificial Intelligence
First Page:1
Last Page:5
Parent Title (English):The AAAI-22 Workshop on Adversarial Machine Learning and Beyond
URL:https://openreview.net/forum?id=vKc1mLxBebP
Language:English
Content Information
Institutes:Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019)
Research / IMLA - Institute for Machine Learning and Analytics
Institutes:Bibliography
Tag:Adversarial Attacks; Aliasing; CNNs; Machine Learning; Nyquist-Shannon; Sampling
Formal Information
Relevance:Conference contribution: h5-index < 30
Open Access:Bronze
Licence (German):Urheberrechtlich geschützt (protected by copyright)