ENITS
The increasing integration of digital technologies in modern smart grids has significantly improved the efficiency, reliability, and automation of energy distribution. However, this transformation has also introduced critical cybersecurity risks, making smart grids vulnerable to threats such as malware, Distributed Denial of Service (DDoS) attacks, and intrusion attempts. Traditional security mechanisms, while effective in conventional IT systems, struggle to protect smart grids because of the complex interconnection of operational technology (OT) and IT systems.
This thesis examines cybersecurity challenges in smart grids by analyzing vulnerabilities in smart meters, Supervisory Control and Data Acquisition (SCADA) systems, and communication networks. A detailed review of existing security approaches, including encryption, authentication protocols, anomaly detection, and intrusion detection systems, highlights their limitations in securing smart grid infrastructure.
To address these challenges, this research proposes a cybersecurity framework that combines three defense mechanisms:
(1) Digital Immune System – An AI-driven anomaly detection system that continuously learns from grid data to identify and neutralize threats in real time.
(2) Genetic Algorithms for Cyber Defense – A self-optimizing security mechanism that evolves security configurations to improve grid resilience.
(3) Decentralized AI Collectives – A distributed defense system where multiple AI agents collaborate to detect and mitigate cyberattacks without reliance on a central authority.
This integrated defense mechanism ensures real-time threat detection, automated response, and adaptive security evolution. The proposed approach is validated through simulations, demonstrating its effectiveness in mitigating cyber threats and improving the overall security of smart grid systems.
This research contributes to the field of enterprise and IT security by presenting a comprehensive, adaptive, and scalable cybersecurity solution tailored for smart grids.
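As a rough illustration of the genetic-algorithm component described above, the sketch below evolves a hypothetical bit-vector of security controls against a toy fitness function; the control set, fitness model, and parameters are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch: evolve a bit-vector of security controls (which hardening
# rules are enabled) with a simple genetic algorithm. A real fitness score would
# come from grid simulation results, not this made-up formula.
import random

N_CONTROLS = 16                      # hypothetical number of toggleable controls
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 50, 0.05

def fitness(config):
    # Placeholder objective: reward enabled controls, penalize a made-up operational cost.
    coverage = sum(config)
    cost = 0.3 * sum(config[i] for i in range(0, N_CONTROLS, 4))
    return coverage - cost

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_CONTROLS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]                  # truncation selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_CONTROLS)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < MUTATION_RATE else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```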
Web applications play a crucial role in modern business operations but remain prime targets for cyberattacks due to the sensitive data they handle. Despite continuous advancements in cybersecurity, many applications are still susceptible to common vulnerabilities such as SQL Injection (SQLi), Cross-Site Scripting (XSS), Local File Inclusion (LFI), and Remote Code Execution (RCE), many of which are listed in the OWASP Top 10. Existing security tools often provide limited coverage, focusing on specific aspects like SSL validation or static code analysis, while failing to comprehensively detect and confirm exploitation attempts in real-world scenarios.
This thesis addresses these gaps by leveraging AI-driven attack automation for vulnerability detection and analysis. The system integrates automated reconnaissance, penetration testing, and AI-assisted exploitation validation to identify security flaws dynamically. Unlike conventional tools that rely on static analysis, this approach executes real attack scenarios, analyzes system responses, and determines whether an exploit truly succeeded. The research specifically evaluates the effectiveness of AI models in generating attack execution commands, constructing multi-stage attack chains, and assessing post-exploitation outcomes. The system is tested against a controlled vulnerable web environment, measuring its accuracy, efficiency, and reliability in detecting and validating real vulnerabilities.
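To make the "execute, observe, decide" idea concrete, a minimal sketch follows; the target endpoint, payloads, and success markers are hypothetical placeholders, and in the actual system an AI model would propose the commands and interpret the responses.

```python
# Rough sketch of dynamic exploit validation: send a payload, observe the
# response, and decide whether the exploit actually succeeded rather than
# merely being reflected. All targets and markers are hypothetical.
import requests

TARGET = "http://testapp.local/item"          # hypothetical vulnerable endpoint
PAYLOADS = {
    "sqli": "' UNION SELECT username, password FROM users--",
    "lfi": "../../../../etc/passwd",
}

def probe(payload):
    resp = requests.get(TARGET, params={"id": payload}, timeout=5)
    return resp.status_code, resp.text

def looks_exploited(kind, body):
    # Naive response analysis: look for evidence the payload executed.
    markers = {"sqli": "password", "lfi": "root:x:0:0"}
    return markers[kind] in body

for kind, payload in PAYLOADS.items():
    status, body = probe(payload)
    verdict = "likely exploitable" if looks_exploited(kind, body) else "not confirmed"
    print(f"{kind}: HTTP {status} -> {verdict}")
```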
A structured methodology is followed, beginning with a comprehensive literature review of web vulnerabilities and attack automation techniques, followed by the design, development, and experimental evaluation of the AI-driven penetration testing framework. The results indicate significant challenges in AI-assisted exploitation validation, with both models exhibiting high false positive rates and misclassification of vulnerabilities. However, the study highlights key areas for improvement, including enhancing AI’s exploit validation mechanisms and reducing false positives through contextual analysis.
By bridging the gap between automated attack execution and intelligent exploit validation, this research contributes to the advancement of AI-driven penetration testing methodologies. The findings underscore the potential and limitations of current AI models in cybersecurity, paving the way for future enhancements in AI-assisted vulnerability assessment and exploitation validation techniques.
Cyber security programs in general, and the governance discipline in particular, have become a serious concern for organizations of all sizes and across all sectors. This is mainly driven by the need for a concrete IT security governance program that can keep pace with rapid technological advancements and is mature enough to address the security risks triggered by today's escalating cyber-attacks. In parallel with the motivation toward cyber security governance, organizations are also increasingly relying on cloud computing to support their digital transformation objectives. They therefore become exposed to an extended spectrum of cyber security threats and regulatory compliance constraints. We believe that organizations must, in response, not only develop but also transform their governance programs to meet the needs and challenges of security and compliance in the cloud, since strategies previously adopted for managing cyber security operations in on-premises environments are not sufficient to address the considerable shifts introduced by cloud environments.
The objective of this thesis research is to develop a comprehensive cloud-specific cyber security governance and compliance framework that incorporates detailed, tailored controls which contribute to addressing security risks and fulfilling regulatory compliance constraints in cloud computing environments. Our approach is to conduct a detailed analysis that begins by identifying cloud-specific security risks and compliance challenges and then to produce two main deliverables: a comprehensive governance framework with specific controls for compliance assurance, and a supporting model dedicated to compliance management in the cloud that addresses data privacy, PII protection, and other security-related regulatory constraints.
Biometric authentication is the process of using an individual's unique physical or behavioral traits, such as fingerprints, iris patterns, facial features, or voice, to confirm their identity. Unlike traditional methods such as ID cards or passwords, it relies on inherent attributes for identification. It is widely used in information security because of its identification accuracy and has revolutionized data protection. However, implementation challenges include technical issues, user acceptance, and privacy concerns.
As software ecosystems grow increasingly complex, the effective management of software vulnerabilities has become critical to ensuring project security and stability. This process begins with the identification of potential vulnerabilities, which must be systematically tracked and verified. Organizations commonly utilize issue-tracking systems, such as JIRA, to log these vulnerabilities as specific ticket types, allowing for their confirmation or dismissal based on additional information.
In the DevSecOps framework, Software Composition Analysis (SCA) plays a vital role in identifying and managing vulnerabilities within third-party components. SCA tools automate the scanning of software dependencies to detect known vulnerabilities, licensing conflicts, and policy violations, while also generating issues in integrated tracking systems like JIRA to support mitigation efforts. This automation enhances efficiency in vulnerability management by providing actionable data.
This research investigates the automation of vulnerability management in the context of SCA, focusing on the integration between SCA tools and issue-tracking systems. Despite their effectiveness in detecting vulnerable dependencies, these tools face challenges in handling internal components, often failing to accurately link these dependencies to corresponding issues in issue-tracking systems. This gap can lead to inefficiencies and delays in vulnerability remediation. To address this limitation, the study proposes a proof of concept to improve the integration of the MEND SCA tool with issue-tracking systems, aiming to enhance the overall efficiency and effectiveness of vulnerability tracking and resolution processes.
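A minimal sketch of the kind of integration discussed here, assuming a generic finding record (the MEND export format shown is hypothetical) and Jira's standard REST create-issue endpoint with placeholder credentials:

```python
# Sketch of pushing an SCA finding into an issue tracker. The finding dict is a
# hypothetical stand-in for an SCA export; URL and credentials are placeholders.
import requests

JIRA_URL = "https://jira.example.com"         # placeholder
AUTH = ("sca-bot", "api-token")               # placeholder credentials
PROJECT_KEY = "SEC"

finding = {                                   # hypothetical SCA finding
    "library": "log4j-core-2.14.1.jar",
    "cve": "CVE-2021-44228",
    "severity": "critical",
    "internal": False,                        # flag for internally built components
}

def create_issue(f):
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "summary": f"[{f['severity'].upper()}] {f['cve']} in {f['library']}",
            "description": f"Vulnerable dependency detected by SCA scan: {f['library']} ({f['cve']}).",
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["key"]

print(create_issue(finding))
```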
The rising reliance on online applications for a range of purposes, including e-commerce, social networking, and business activities, requires strong security measures to protect sensitive data and ensure continuous service. There have been multiple incidents of attackers gaining access to information, holding providers hostage with distributed denial-of-service attacks, or entering a company's network by compromising an application.
The Bundesamt für Sicherheit in der Informationstechnik (BSI) has published a comprehensive set of information security principles and standards that can serve as a solid basis for the development of a secure web application.
The purpose of this thesis is to design and build a secure web application that adheres to the requirements established in the BSI guideline, in order to address the growing concerns regarding the security of web applications. We also evaluate the efficacy of the recommendations by conducting security tests on the prototype application and determining whether the vulnerabilities associated with an insecure web application have been mitigated.
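For illustration only, the sketch below shows two generic hardening measures such a guideline-driven prototype would typically include, parameterized database queries and standard security response headers; it is not the thesis prototype, and the framework choice (Flask) is an assumption.

```python
# Illustrative hardening sketch: parameterized queries against SQL injection and
# standard security headers on every response. Not the thesis's actual codebase.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/user")
def get_user():
    user_id = request.args.get("id", "")
    conn = sqlite3.connect("app.db")
    # Parameterized query: user input is never concatenated into the SQL string.
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    conn.close()
    return jsonify({"name": row[0] if row else None})

@app.after_request
def set_security_headers(resp):
    resp.headers["Content-Security-Policy"] = "default-src 'self'"
    resp.headers["X-Content-Type-Options"] = "nosniff"
    resp.headers["Strict-Transport-Security"] = "max-age=31536000"
    return resp
```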
Though the basic concept of a ledger that anyone can view and verify has been around for quite some time, today's blockchains bring much more to the table, including a way to incentivize users. The coins given to the miner or validator were the first such incentive to make sure they fulfilled their duties. This thesis draws inspiration from other peer efforts and uses this same incentive to achieve certain goals, primarily one where users are incentivized to discuss their opinions and find scientific or logical backing for their standpoint. While traditional chains form a consensus on a version of financial "truth", the same can be applied to ideological truths too. To achieve this, this work explores a modified, or scaled, proof-of-stake consensus mechanism: a Reputation-Scaled Proof of Stake. Reputation can be built over time by voting for the winning side consistently or by sticking to one's beliefs strongly. The thesis hopes to bridge the gap in current consensus algorithms and incentivize critical reasoning.
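A toy sketch of the reputation-scaled selection idea follows; the stake figures, reputation values, and update rule are illustrative assumptions rather than the consensus mechanism developed in the thesis.

```python
# Toy reputation-scaled proof of stake: selection probability is proportional to
# stake multiplied by an evolving reputation score. All numbers are illustrative.
import random

validators = {
    "alice": {"stake": 100.0, "reputation": 1.0},
    "bob":   {"stake": 300.0, "reputation": 0.6},
    "carol": {"stake": 150.0, "reputation": 1.4},
}

def pick_validator():
    weights = {v: d["stake"] * d["reputation"] for v, d in validators.items()}
    r = random.uniform(0, sum(weights.values()))
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # fallback for floating-point edge cases

def update_reputation(name, voted_with_majority, step=0.05):
    # Consistently voting with the winning side slowly raises reputation.
    delta = step if voted_with_majority else -step
    validators[name]["reputation"] = max(0.1, validators[name]["reputation"] + delta)

print(pick_validator())
```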
This thesis deals with the use of reinforcement learning in the information-gathering phase of a penetration test. Core problems in previous approaches from other scientific work on the topic are analyzed, and practical solutions to these obstacles are presented and implemented. The thesis thus demonstrates an exemplary implementation of a reinforcement learning agent for automating the information-gathering phase of a penetration test and presents solutions to existing problems in this area.
This work is embedded in the requirements of Herrenknecht AG regarding the protection of its tunnel boring machine network. Practical results of the self-developed reinforcement learning model in Herrenknecht AG's tunnel boring machine test network are presented.
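As a highly simplified sketch of the underlying idea, the snippet below lets an agent learn which reconnaissance action yields the most information using a one-step, bandit-style value update; the action set and reward model are made up, and the thesis's agent and the Herrenknecht test network are far more involved.

```python
# Minimal bandit-style sketch of learning which recon action pays off most.
# Actions and rewards are invented for illustration only.
import random

ACTIONS = ["ping_sweep", "port_scan", "banner_grab"]
Q = {a: 0.0 for a in ACTIONS}                 # estimated value per action
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 200

def reward(action):
    # Made-up reward model: deeper scans yield more information on average.
    base = {"ping_sweep": 0.1, "port_scan": 0.5, "banner_grab": 0.8}[action]
    return base + random.uniform(-0.1, 0.1)

for _ in range(EPISODES):
    # Epsilon-greedy choice between exploring and exploiting.
    a = random.choice(ACTIONS) if random.random() < EPSILON else max(Q, key=Q.get)
    Q[a] += ALPHA * (reward(a) - Q[a])        # one-step value update

print(max(Q, key=Q.get), Q)
```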
The progress in machine learning has led to advanced deep neural networks, which are widely used in computer vision tasks and safety-critical applications. The automotive industry, in particular, has experienced a significant transformation with the integration of deep learning techniques and neural networks, contributing to the realization of autonomous driving systems. Object detection is a crucial element in autonomous driving that contributes to vehicular safety and operational efficiency: it allows vehicles to perceive and identify their surroundings, detecting objects such as pedestrians, vehicles, road signs, and obstacles. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies. These advancements enable vehicles to make real-time decisions based on their understanding of the environment, improving safety and the driving experience.
However, the increasing reliance on deep neural networks for object detection and autonomous driving has brought attention to potential vulnerabilities within these systems. Recent research has highlighted their susceptibility to adversarial attacks: carefully designed inputs that exploit weaknesses in the deep learning models underlying object detection. Successful attacks can cause misclassifications and critical errors, posing a significant threat to the functionality and safety of autonomous vehicles. With the rapid development of object detection systems, this vulnerability has become a major concern, since such attacks manipulate inputs to deceive the target system and significantly compromise the reliability and safety of autonomous vehicles.
In this study, we focus on analyzing adversarial attacks on state-of-the-art object detection models. We create adversarial examples to test the models' robustness and check whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to assess how effective they are in protecting deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies, highlighting how they can be implemented practically in real-world situations.
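As one standard way to craft adversarial examples, a minimal FGSM sketch is shown below, written for an image classifier for brevity (object-detection losses differ); the model, input tensor, and label are assumed to exist.

```python
# Sketch of the fast gradient sign method (FGSM), a common baseline for crafting
# adversarial examples. Shown for a classifier; detectors use different losses.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.03):
    """image: (1, C, H, W) tensor in [0, 1]; label: (1,) tensor of class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp to valid pixel range.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```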
Privacy is the capacity to keep some things private despite their social repercussions. It relates to a person's ability to control the amount, time, and circumstances under which they disclose sensitive personal information, such as details of their physiology, psychology, or intelligence. In the age of data exploitation, privacy has become even more crucial: our privacy is more threatened today than it was 20 years ago because of the way data and technology are used. Both the kinds and amounts of information about us, and the methods for tracking and identifying us, have grown considerably in recent years. It is a known security concern that both human and machine systems face privacy threats.
There are various disagreements over privacy and security; every person and group has a unique perspective on how the two are related. While 79% of the study's respondents indicated that legal or compliance issues were more important, 53% of the surveyed team considered privacy and security to be two separate things. Despite their distinctions, data security and data privacy are interconnected; each is necessary for the other to exist. Data may be physically stored anywhere, on our computers or in the cloud, but only humans have authority over it, and we are inseparably linked to our data. Protecting data against attackers therefore also protects privacy, and machine learning can support this protection. Attackers commonly utilize both mechanical systems and social engineering techniques to enter a target network; the vulnerability of this form of attack rests not only in the technology but also in the human users, making it extremely difficult to defend against. The best option to secure privacy is to combine humans and machines in the form of a Human Firewall and a Machine Firewall, and a cryptographic route such as Tor is a strong choice for deterring attackers from accessing our systems and for protecting the privacy of our data.
This thesis presents a case study of privacy and security issues. The problems and the different kinds of attacks on people and machines are then discussed briefly. We explain how Human Firewalls and machine learning on the Tor network protect our privacy from attacks such as social engineering and attacks on mechanical systems. As a real-world test, we use genomic data to carry out a privacy attack known as the Membership Inference Attack (MIA). We present the Machine Firewall as a means of protection and then apply Differential Privacy (DP), as has been done in prior work. We use Lasso and convolutional neural networks (CNN), both popular machine learning models, as the target models. Our findings demonstrate a logarithmic relationship between the desired model accuracy and the privacy budget.
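For orientation, the sketch below shows a baseline confidence-thresholding membership-inference guess together with the classic Laplace mechanism used in differential privacy; the target model (assumed to expose a scikit-learn-style predict_proba), the threshold, and the sensitivity value are illustrative and do not reproduce the thesis's experimental setup.

```python
# Baseline membership-inference guess and the Laplace mechanism, for illustration.
# target_model is assumed to expose a scikit-learn-style predict_proba method.
import numpy as np

def membership_guess(target_model, x, threshold=0.9):
    """Guess 'member' if the model is unusually confident on the record x."""
    probs = target_model.predict_proba(x.reshape(1, -1))[0]
    return bool(np.max(probs) >= threshold)

def laplace_mechanism(value, sensitivity, epsilon):
    """Laplace mechanism: a smaller privacy budget (epsilon) adds more noise,
    giving stronger privacy at the cost of accuracy."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
```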