
Robust Models are less Over-Confident

  • Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Adversarial training (AT) is often considered a remedy to train more robust networks. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less overconfident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
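The over-confidence discussed in the abstract is commonly measured via the maximum softmax probability of a model's predictions. The sketch below illustrates this confidence proxy on hypothetical logits; the function names and example values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax: subtract the per-row max before exponentiating.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mean_confidence(logits):
    """Mean maximum softmax probability over a batch -- a simple proxy
    for how confident a classifier is in its predictions."""
    probs = softmax(np.asarray(logits, dtype=float))
    return probs.max(axis=-1).mean()

# Sharply peaked logits -> confidence near 1 (the overconfident regime).
overconfident = mean_confidence([[12.0, 0.5, 0.3], [11.0, 1.0, 0.2]])
# Flatter logits -> noticeably lower confidence, the behavior the paper
# reports for adversarially trained models even on clean data.
calibrated = mean_confidence([[2.0, 1.0, 0.5], [1.8, 1.2, 0.4]])
assert overconfident > calibrated
```

This proxy captures only the headline effect (lower confidence on clean data); the paper's full analysis also covers attack settings and architectural building blocks.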

Metadata
Document Type: Conference Proceeding
Conference Type: Conference article
Citation link: https://opus.hs-offenburg.de/6454
Bibliographic Information
Title (English): Robust Models are less Over-Confident
Conference: New frontiers in adversarial machine learning (AdvML Frontiers @ ICML 2022), ICML 2022, Baltimore, MD, USA
Author: Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
Year of Publication: 2022
First Page: 1
Last Page: 17
Article Number: 12
Parent Title (English): ICML 2022 Workshop on Adversarial Machine Learning
URL: https://advml-frontier.github.io/past/icml2022/pdf/12/CameraReady/ICML_Workshop_Adv_2022.pdf
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Forschung / IMLA - Institute for Machine Learning and Analytics
Institutes: Bibliografie
Tag: Robustness
Formal Details
Relevance: Conference paper: h5-index < 30
Open Access: Bronze
Licence (German): Copyright protected (Urheberrechtlich geschützt)