
Adversarial Robustness through the Lens of Convolutional Filters

Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboard and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
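The following is a minimal sketch, not the authors' released code, of the kind of analysis the abstract describes: collecting all 3×3 convolution filters from a pretrained model and computing simple sparsity and pairwise-orthogonality statistics. The choice of torchvision's resnet18 as the model, the near-zero threshold, and the subsample size are all illustrative assumptions; the paper itself compares 71 RobustBench models against normally-trained counterparts.

```python
# Sketch: extract 3x3 convolution filters and compute sparsity/orthogonality
# statistics. Model choice and thresholds are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")  # stand-in for a RobustBench model

# Gather every 3x3 kernel in the network, flattened to a 9-dim vector.
filters = []
for module in model.modules():
    if isinstance(module, nn.Conv2d) and module.kernel_size == (3, 3):
        filters.append(module.weight.detach().reshape(-1, 9))
filters = torch.cat(filters)  # shape: (num_filters, 9)

# Sparsity: fraction of near-zero weights (threshold is an assumption).
sparsity = (filters.abs() < 1e-2).float().mean().item()

# Orthogonality: mean absolute cosine similarity between distinct filters,
# computed on a random subsample so the Gram matrix stays tractable.
torch.manual_seed(0)
sub = filters[torch.randperm(len(filters))[:2000]]
normed = sub / sub.norm(dim=1, keepdim=True).clamp_min(1e-12)
gram = normed @ normed.T
off_diag = gram - torch.eye(len(gram))
mean_abs_cos = off_diag.abs().sum() / (len(gram) * (len(gram) - 1))

print(f"sparsity: {sparsity:.3f}, mean |cos|: {mean_abs_cos.item():.3f}")
```

Under the paper's findings, running such a comparison on a robust model and its normally-trained counterpart would be expected to show lower sparsity and lower mean absolute cosine similarity (i.e. more orthogonal filters) for the robust model.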

Metadata
Document Type: Conference Proceeding
Conference Type: Conference article
Citation link: https://opus.hs-offenburg.de/6443
Bibliographic Information
Title (English): Adversarial Robustness through the Lens of Convolutional Filters
Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 19-20 June 2022, New Orleans, LA, USA
Author: Paul Gavrikov, Janis Keuper
Year of Publication: 2022
Publisher: IEEE
First Page: 138
Last Page: 146
Parent Title (English): Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022)
ISBN: 978-1-6654-8739-9 (electronic)
ISBN: 978-1-6654-8740-5 (print on demand)
ISSN: 2160-7516 (electronic)
ISSN: 2160-7508 (print on demand)
DOI: https://doi.org/10.1109/CVPRW56347.2022.00025
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Research / IMLA - Institute for Machine Learning and Analytics
Institutes: Bibliography
Tag: Robustness
Formal Details
Relevance: Conference contribution: h5-index > 30
Open Access: Closed 
Licence (German): Urheberrechtlich geschützt (protected by copyright)
ArXiv Id: http://arxiv.org/abs/2204.02481v1