Adversarial Attacks on Object Detection Models in the Automotive Domain
The progress in machine learning has led to advanced deep neural networks, which are now widely used in computer vision tasks and safety-critical applications. The automotive industry in particular has undergone a significant transformation through the integration of deep learning techniques and neural networks, a development that contributes to the realization of autonomous driving systems. Object detection is a crucial element of autonomous driving, contributing to vehicular safety and operational efficiency: it allows vehicles to perceive and identify their surroundings by detecting objects such as pedestrians, other vehicles, road signs, and obstacles. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies, enabling vehicles to make real-time decisions based on their understanding of the environment and thereby improving safety and the driving experience. However, the increasing reliance on deep neural networks for object detection and autonomous driving has drawn attention to potential vulnerabilities in these systems. Recent research has highlighted their susceptibility to adversarial attacks: carefully designed inputs that exploit weaknesses in the deep learning models underlying object detection. By manipulating inputs to deceive the target system, successful attacks can cause misclassifications and critical errors, posing a significant threat to the functionality, reliability, and safety of autonomous vehicles. In this study, we analyze adversarial attacks on state-of-the-art object detection models. We create adversarial examples to test the models’ robustness and examine whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to determine how effectively they protect deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies against adversarial attacks, highlighting how they can be implemented in practical, real-world situations.…
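For readers unfamiliar with how such adversarial examples are crafted, the sketch below illustrates one gradient-based attack in the spirit the abstract describes. It is not taken from the thesis: the fast gradient sign method (FGSM), torchvision's Faster R-CNN as the target detector, the epsilon value, and the placeholder image and box are all assumptions chosen for illustration.

```python
# Minimal FGSM-style sketch of an adversarial attack on an object detector.
# Assumptions (not from the thesis): torchvision's Faster R-CNN as the target
# model, epsilon = 0.03, and a random placeholder image with one dummy box.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def fgsm_attack(model, image, targets, epsilon=0.03):
    """Perturb `image` ([C, H, W] float tensor in [0, 1]) so that the
    detector's training loss increases (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    model.train()  # torchvision detectors return their losses in train mode
    loss_dict = model([image], [targets])
    total_loss = sum(loss_dict.values())
    total_loss.backward()
    # Step in the direction of the loss gradient's sign, stay in valid range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
image = torch.rand(3, 480, 640)                  # placeholder camera frame
targets = {"boxes": torch.tensor([[100.0, 120.0, 200.0, 260.0]]),
           "labels": torch.tensor([1])}          # COCO class 1 = person
adv_image = fgsm_attack(model, image, targets)
```

A transferability experiment of the kind the abstract mentions would then feed `adv_image` to a second, independently trained detector in eval mode and compare its detections against those on the clean image.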
Document Type: | Master's Thesis |
---|---|
Citation link: | https://opus.hs-offenburg.de/8086 |
Bibliographic Information | |
Title (English): | Adversarial Attacks on Object Detection Models in the Automotive Domain |
Author: | Michael Asamoah Darko Asare |
Advisor: | Janis Keuper |
Year of Publication: | 2023 |
Granting Institution: | Hochschule Offenburg |
Contributing Corporation: | Kopernikus Automotive |
Place of Publication: | Offenburg |
Publisher: | Hochschule Offenburg |
Number of Pages: | 60 |
Language: | English |
Content Information | |
Institutes: | Fakultät Medien (M) (since 22.04.2021) |
Collections of the Offenburg University: | Final Theses / Master's Degree Programs / ENITS |
DDC Classes: | 000 General Works, Computer Science, Information Science |
GND Keywords: | Attack; Computer Security; Artificial Intelligence; Machine Learning |
Tags: | Adversarial Attacks; Artificial Intelligence; Cybersecurity; Machine Learning |
Formal Information | |
Open Access: | Closed |
Licence: | Creative Commons - CC BY - Attribution 4.0 International |