Aliasing and adversarial robust generalization of CNNs

  • Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, a.k.a. aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
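
To make the abstract's central quantity concrete: aliasing arises when a feature map is subsampled (e.g., with stride 2) without first removing frequencies above the new Nyquist limit, so high-frequency content folds into low frequencies. The following is a minimal sketch of one plausible FFT-based proxy for such aliasing; the function names `aliasing_score` and `ideal_lowpass` are illustrative, and this is not the paper's actual metric.

```python
# Sketch of an aliasing proxy for stride-2 downsampling (an assumption,
# not the measure used in the paper): compare naive strided subsampling
# against subsampling after an ideal low-pass filter. The spectral error
# between the two indicates how much energy was folded (aliased) into
# the low frequencies.
import numpy as np

def ideal_lowpass(x: np.ndarray) -> np.ndarray:
    """Zero out all frequencies above the Nyquist limit of a stride-2 grid."""
    f = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    mask = np.zeros_like(f)
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1.0  # keep low band only
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def aliasing_score(x: np.ndarray) -> float:
    """Relative spectral error between naive and band-limited downsampling."""
    naive = x[::2, ::2]                 # plain strided subsampling (aliases)
    clean = ideal_lowpass(x)[::2, ::2]  # band-limited, alias-free reference
    num = np.linalg.norm(np.fft.fft2(naive) - np.fft.fft2(clean))
    den = np.linalg.norm(np.fft.fft2(clean)) + 1e-12
    return float(num / den)

# Example: a Nyquist-frequency checkerboard aliases badly under stride-2
# subsampling, while a smooth ramp barely does.
yy, xx = np.mgrid[0:64, 0:64]
checkerboard = ((xx + yy) % 2).astype(float)
ramp = xx / 64.0
print(f"checkerboard: {aliasing_score(checkerboard):.3f}")
print(f"smooth ramp:  {aliasing_score(ramp):.3f}")
```

Tracked over training epochs on a model's downsampling layers, a score of this kind could serve as the signal for the aliasing-based early stopping the abstract proposes, stopping once the score rises sharply; how the paper instantiates this is described in the full text.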

Metadata
Document Type: Article (reviewed)
Citation Link: https://opus.hs-offenburg.de/6445
Bibliographic Information
Title (English): Aliasing and adversarial robust generalization of CNNs
Author: Julia Grabinski, Janis Keuper, Margret Keuper
Year of Publication: 2022
Publisher: Springer
First Page: 3925
Last Page: 3951
Parent Title (English): Machine Learning
Volume: 111
Issue: 11
ISSN: 1573-0565 (Electronic)
ISSN: 0885-6125 (Print)
DOI: https://doi.org/10.1007/s10994-022-06222-8
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Research / IMLA - Institute for Machine Learning and Analytics
Tag: Adversarial robustness; Aliasing; CNNs; Robust overfitting
Formal Information
Relevance: Peer-reviewed scientific journal article: listed in the Master Journal List
Open Access: Hybrid
Licence: Creative Commons - CC BY - Attribution 4.0 International