
Robust Models are less Over-Confident

  • Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally better model generalization, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less over-confident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & project website: https://github.com/GeJulia/robustness_confidences_evaluation
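To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch of (a) a gradient-sign adversarial attack that adds a small, targeted amount of noise to the input and (b) the kind of confidence measure (mean top-1 softmax probability) that can be compared between robust and non-robust models on clean and attacked data. The function names, the single-step FGSM attack, and the eps=8/255 budget are illustrative assumptions, not the exact evaluation protocol of the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=8/255):
        """Craft a one-step adversarial example: perturb the input in the
        sign direction of the loss gradient, then clamp to valid pixels."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    @torch.no_grad()
    def mean_confidence(model, x):
        """Mean top-1 softmax probability of the model's predictions."""
        probs = F.softmax(model(x), dim=1)
        return probs.max(dim=1).values.mean().item()

    # Usage (x: batch of images in [0, 1], y: labels, model: trained classifier):
    # x_adv = fgsm_attack(model, x, y)
    # print(mean_confidence(model, x), mean_confidence(model, x_adv))

Running such a comparison over many models is one way to observe the over-confidence gap the abstract describes: non-robust models tend to report high confidences even when wrong, while adversarially trained ones are more conservative.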

Metadata
Document Type:Conference Proceeding
Conference Type:Conference article
Citation Link: https://opus.hs-offenburg.de/6444
Bibliographic Information
Title (English):Robust Models are less Over-Confident
Conference:NeurIPS: Conference on Neural Information Processing Systems (36. : Nov 28 2022 : New Orleans, Louisiana, United States of America)
Author:Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
Year of Publication:2022
Page Number:17, 11
Parent Title (English):Proceedings of Conference on Neural Information Processing Systems 2022
ISBN:9781713871088
URL:https://openreview.net/forum?id=5K3uopkizS
Language:English
Content Information
Institutes:Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019)
Forschung / IMLA - Institute for Machine Learning and Analytics
Institutes:Bibliography
Tag:Adversarial Robustness; Computer Vision; Model Calibration; Robustness
Formal Details
Relevance:Conference paper: h5-index > 30
Open Access:Bronze
Licence (German):Urheberrechtlich geschützt (protected by copyright)