Threat modeling is a vital approach to implementing "Security by Design" because it enables the discovery of vulnerabilities and the mitigation of threats during the early stages of the Software Development Life Cycle, rather than later, when they are more expensive to fix. This thesis reviews current threat modeling approaches, methods, and tools. It then creates a meta-model adaptation of a fictitious cloud-based shop application, which is tested using STRIDE and PASTA to check for vulnerabilities, weaknesses, and risk impact. The analysis is done using the Microsoft Threat Modeling Tool and IriusRisk. Finally, the results are evaluated to ascertain the effectiveness of the processes involved, highlighting the challenges in threat modeling and giving recommendations on how security developers can make improvements.
The Internet of Things is spreading significantly in every sector, including households, a variety of industries, healthcare, and emergency services, with the goal of assisting all of these infrastructures by providing intelligent means of service delivery. An Internet of Vulnerabilities (IoV) has emerged as a result of the pervasiveness of the Internet of Things (IoT), which has led to a rise in the use of IoT-connected applications and devices in our day-to-day lives. The manufacture of IoT devices is growing at a rapid pace, but security and privacy concerns are not being taken into consideration. These intelligent IoT devices are especially vulnerable to a variety of attacks at both the hardware and software levels, which leaves them exposed to malicious use. This master's thesis provides a comprehensive overview of IoT security and privacy, covering application areas, security architecture frameworks, and a taxonomy of cyberattacks based on various architecture models (three-layer, four-layer, and five-layer). The fundamental purpose of this thesis is to provide recommendations for alternative mitigation strategies and corrective actions by using a holistic rather than a layer-by-layer approach. We discuss the most effective solutions to the privacy and safety problems associated with the IoT and present them in the form of research questions. In addition, we investigate a number of further possible directions for the development of this research.
This thesis focuses on the development and implementation of a Datagram Transport Layer Security (DTLS) communication framework within the ns-3 network simulator, specifically targeting the LoRaWAN model network. The primary aim is to analyse the behaviour and performance of DTLS protocols across different network conditions within a LoRaWAN context. The key aspects of this work include the following.
Utilization of ns-3: This thesis leverages ns-3’s capabilities as a powerful discrete event network simulator. This platform enables the emulation of diverse network environments, characterized by varying levels of latency, packet loss, and bandwidth constraints.
Emulation of Network Challenges: The framework specifically addresses unique challenges posed by certain network configurations, such as duty cycle limitations. These constraints, which limit the time allocated for data transmission by each device, are crucial in understanding the real-world performance of DTLS protocols.
Testing in Multi-client-server Scenarios: A significant feature of this framework is its ability to test DTLS performance in complex scenarios involving multiple clients and servers. This is vital for assessing the behaviour of a protocol under realistic network conditions.
Realistic Environment Simulation: By simulating challenging network conditions, such as congestion, limited bandwidth, and resource constraints, the framework provides a realistic environment for thorough evaluation. This allows for a comprehensive analysis of DTLS in terms of security, performance, and scalability.
Overall, this thesis contributes to a deeper understanding of DTLS protocols by providing a robust tool for their evaluation under various and challenging network conditions.
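The duty-cycle constraint described above can be made concrete with a small calculation. The following is a minimal sketch of the off-time relation commonly applied in EU868 LoRaWAN sub-bands; the 1% figure is an illustrative assumption, not a value taken from the thesis:

```python
def min_off_time(time_on_air_s: float, duty_cycle: float) -> float:
    """Minimum silent period a device must observe after transmitting
    for `time_on_air_s` seconds under a given duty-cycle limit
    (e.g. 0.01 for the 1% limit common in EU868 LoRaWAN sub-bands)."""
    return time_on_air_s * (1.0 / duty_cycle - 1.0)

# A 1-second uplink under a 1% duty cycle forces 99 s of silence,
# which severely constrains the round trips of a DTLS handshake.
print(min_off_time(1.0, 0.01))  # -> 99.0
```

This illustrates why handshake-heavy protocols such as DTLS behave very differently under duty-cycled links than on unconstrained networks.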
It is generally agreed that the worldwide development and deployment of a large number of IoT devices has revolutionized our lives: we can now rely on these devices to complete tasks that were not possible just a few years ago, which has brought a new level of convenience and value to our lives.
In a smart home environment, this technology allows us to remotely control doors, windows, and fridges, purchase online, stream music easily with voice assistants such as Amazon Echo with Alexa, and close a garage door from anywhere in the world, to cite some examples. The technology has added value to several domains, ranging from household environments to cities and industries, by exchanging and transferring data between devices and customers. Many of these devices' sensors collect and share information in real time, which enables important business decisions.
However, these devices pose risks as well as security and privacy challenges that need to be addressed before they can reach their full potential or be considered secure. That is why comprehensive risk analysis techniques are essential to enhance the security posture of IoT devices: they help evaluate the robustness and reliability of IoT devices in a smart home setting with respect to the risks and vulnerabilities these devices might possess.
This approach is based on the ISO/IEC 27005 methodology and the risk matrix method to highlight the level of risk, impact, and likelihood that an IoT device in a smart home setting can have, to map the related vulnerabilities, threats, and risks, and to propose the mitigation strategies or countermeasures that can be taken to secure a device, thereby satisfying key security principles. Around 30 risks were identified on the Amazon Echo and the related IoT system using the methodology. A detailed list of countermeasures is proposed as a result of the risk analysis. These results, in turn, can be used to elevate the security posture of the device.
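The risk-matrix step can be illustrated with a small sketch. The qualitative scale and the mapping below are illustrative stand-ins, not the actual matrix used in the thesis:

```python
LEVELS = ["low", "medium", "high"]

def risk_level(likelihood: str, impact: str) -> str:
    """Qualitative risk matrix in the spirit of ISO/IEC 27005:
    risk grows with both likelihood and impact (illustrative scale)."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)  # 0..4
    return ["low", "low", "medium", "high", "high"][score]

# A likely threat with severe impact lands in the high-risk cell;
# an unlikely one with moderate impact stays low.
print(risk_level("high", "high"))   # -> high
print(risk_level("low", "medium"))  # -> low
```

Each identified threat/vulnerability pair is scored this way and then linked to a countermeasure proportionate to its cell in the matrix.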
The progress in machine learning has led to advanced deep neural networks, which are widely used in computer vision tasks and safety-critical applications. The automotive industry, in particular, has experienced a significant transformation with the integration of deep learning techniques and neural networks, contributing to the realization of autonomous driving systems. Object detection is a crucial element in autonomous driving: it contributes to vehicular safety and operational efficiency by allowing vehicles to perceive and identify their surroundings, detecting objects like pedestrians, vehicles, road signs, and obstacles. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies. These advancements enable vehicles to make real-time decisions based on their understanding of the environment, improving safety and the driving experience. However, the increasing reliance on deep neural networks for object detection and autonomous driving has brought attention to potential vulnerabilities within these systems. Recent research has highlighted their susceptibility to adversarial attacks: carefully designed inputs that exploit weaknesses in the underlying deep learning models and manipulate inputs to deceive the target system. Successful attacks can cause misclassifications and critical errors, posing a significant threat to the functionality, reliability, and safety of autonomous vehicles. In this study, we focus on analyzing adversarial attacks on state-of-the-art object detection models and create adversarial examples to test the models' robustness.
We also evaluate whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to determine how effective they are at protecting deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies, highlighting how they can be implemented in real-world situations.
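As a minimal illustration of how adversarial examples of this kind are constructed, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic classifier. The model, weights, and inputs are invented for illustration; the thesis targets object detectors, not this toy:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM on a toy logistic classifier: perturb the input in the
    direction of the sign of the loss gradient. For sigmoid +
    cross-entropy, the gradient w.r.t. the input is (sigma(w.x) - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

w = [2.0, -1.0]   # fixed toy model weights (assumed, not trained)
x = [0.5, 0.2]    # clean input with true label y = 1
x_adv = fgsm(x, 1, w, eps=0.3)

# The perturbation lowers the model's confidence in the true class:
print(sigmoid(sum(wi * xi for wi, xi in zip(w, x))))      # clean score
print(sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv))))  # adversarial score
```

The same gradient-sign principle scales up to deep detectors, where the gradient is obtained by backpropagation through the full network.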
This Master's thesis discusses intelligent sensor networks, considering autonomous sensor placement strategies and system health management. Sensor networks for an intelligent system design process have been researched recently. These networks consist of a distributed collective of sensing units, each capable of individual sensing and computation. Such systems can be capable of self-deployment and must be scalable, long-lived, and robust. With distributed sensor networks, intelligent sensor placement for system design and online system health management are attractive areas of research. Distributed sensor networks also pose optimization problems, such as decentralized control, system robustness, and maximization of coverage in a distributed system. This also includes the discovery and analysis of points of interest within an environment. The purpose of this study was to investigate a method to autonomously control sensor placement in a world with several sources and multiple types of information. This includes both controlling the movement of sensor units and filtering the gathered information depending on individual properties to increase system performance, defined as good coverage. Additionally, online system health management was examined regarding agent failures and autonomous policy reconfiguration when sensors are added to or removed from the system. Two different solution strategies were devised: one where the environment was fully observable, and one with only partial observability. Both strategies use evolutionary algorithms based on artificial neural networks to develop control policies. For performance measurement and policy evaluation, different multiagent objective functions were investigated.
The results of the study show that, in the case of a world with multiple types of information, individual control strategies performed best because of their ability to control the movement of a sensor entity and to filter the sensed information. This also covers system robustness in the case of sensor failures, where the remaining sensing units must recover system performance. Additionally, autonomous policy reconfiguration after adding or removing sensor agents was successful. This highlights that intelligent sensor agents are able to adapt their individual control policies to new circumstances.
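The kind of evolutionary search described above can be sketched as a (1+1) evolution strategy that places sensors to maximise coverage of points of interest. The world layout, sensing radius, and fitness function below are illustrative assumptions, not the thesis's actual setup (which evolves neural-network control policies):

```python
import random

random.seed(0)
POIS = [(1, 1), (8, 2), (4, 7), (9, 9)]  # points of interest (assumed layout)
RADIUS = 3.0

def coverage(sensors):
    """Fraction of POIs within sensing radius of at least one sensor."""
    covered = sum(any((px - sx) ** 2 + (py - sy) ** 2 <= RADIUS ** 2
                      for sx, sy in sensors) for px, py in POIS)
    return covered / len(POIS)

def evolve(n_sensors=2, steps=500):
    """(1+1) evolution strategy: mutate all positions with Gaussian noise
    and keep the candidate if coverage does not decrease."""
    best = [(random.uniform(0, 10), random.uniform(0, 10))
            for _ in range(n_sensors)]
    for _ in range(steps):
        cand = [(x + random.gauss(0, 1), y + random.gauss(0, 1))
                for x, y in best]
        if coverage(cand) >= coverage(best):
            best = cand
    return best

placement = evolve()
print(coverage(placement))  # coverage achieved by the evolved placement
```

In the thesis, the genome is a neural-network policy rather than raw coordinates, but the select-if-not-worse loop is the same basic mechanism.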
AI-based Ground Penetrating Radar Signal Processing for Thickness Estimation of Subsurface Layers
(2023)
This thesis focuses on the estimation of subsurface layer thickness using Ground Penetrating Radar (GPR) A-scan and B-scan data through the application of neural networks. The objective is to develop accurate models capable of estimating the thickness of up to two subsurface layers.
Two different approaches are explored for processing the A-scan data. In the first approach, A-scans are compressed using Principal Component Analysis (PCA), and a regression feedforward neural network is employed to estimate the layers’ thicknesses. The second approach utilizes a regression one-dimensional Convolutional Neural Network (1-D CNN) for the same purpose. Comparative analysis reveals that the second approach yields superior results in terms of accuracy.
Subsequently, the proposed 1-D CNN architecture is adapted and evaluated for Step Frequency Continuous Wave (SFCW) radar, expanding its applicability to this type of radar system. The effectiveness of the proposed network in estimating subsurface layer thickness for SFCW radar is demonstrated.
Furthermore, the thesis investigates the utilization of GPR B-scan images as input data for subsurface layer thickness estimation. A regression CNN is employed for this purpose, although the results achieved are not as promising as those obtained with the 1-D CNN using A-scan data. This disparity is attributed to the limited availability of B-scan data, as B-scan generation is a resource-intensive process.
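The physics underlying A-scan thickness estimation can be summarised by the standard two-way travel-time relation, which the networks above learn implicitly. The permittivity value in the example is an assumed figure for illustration:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(delta_t_s: float, eps_r: float) -> float:
    """Layer thickness from the two-way travel time between the two
    reflections bounding a subsurface layer in a GPR A-scan:
        d = c * delta_t / (2 * sqrt(eps_r))
    where eps_r is the layer's relative permittivity."""
    return C * delta_t_s / (2.0 * eps_r ** 0.5)

# A 4 ns two-way delay in a layer with eps_r ~ 5 (assumed, e.g. dry asphalt):
print(layer_thickness(4e-9, 5.0))  # thickness in metres, ~0.27 m
```

The factor of 2 accounts for the wave travelling down to the interface and back; the learned models effectively absorb the unknown permittivity into their mapping from waveform to thickness.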
Global energy demand has continued to increase over the last decade, with a substantial impact on climate change due to the intensive use of conventional fossil-fuel power plants to cover this demand. Most recently, world leaders met in 2015 and concluded the Paris Agreement, stating that countries will start to behave more responsibly and effectively with regard to global warming and climate change. Many studies have discussed what the future energy system will look like while respecting countries' targets and limits for greenhouse gases and CO2 emissions. However, these studies rarely discuss the industry sector in detail, even though it is one of the major players in the energy sector. Moreover, many studies have simulated and modelled the energy system with large intervals between modelled years and environmental goals. In the first part of this study, a model of the German electrical grid with high spatial and temporal resolution is developed, and different scenarios are analysed meticulously over shorter periods (annual optimization), with different flexibilities, technologies, and degrees of innovation within each scenario. A further challenge in this research is to adequately map the diverse characteristics of the medium-sized industrial sector. In order to take a first step in assessing the relevance of the German industrial sector for climate protection goals, the industrial sector is mapped in PyPSA-Eur (an open-source model data set of the European energy system at the level of the transmission network) by detailing the demand of different industry types and assigning flexibilities to them. Synthetically generated load profiles of various industrial types are available, and flexibilities in the industrial sector are described by the project partner Fraunhofer IPA in the GaIN project and can be used.
Using a scenario analysis, the development of the industrial sector and the use of flexibilities are then to be assessed quantitatively.
The Lattice Boltzmann Method (LBM) is a useful tool for calculating fluid flow and acoustic effects at the same time. Although the acoustic perturbation is much smaller than the normal pressure differences in fluid flow, this direct calculation is a great advantage of the LBM. However, each boundary used in the calculation produces a multitude of reflections of the acoustic waves, which lead to an unusable result. Therefore, different absorbing techniques are investigated.
In this thesis, three absorbing layer techniques are described, explained, and reviewed with different simulations. The absorbing layers are implemented in a basic LBM code in C++, and numerous simulations within a box were performed to compare them. The Doppler effect and a cylinder flow are also examined to compare the damping efficiencies.
The three studied absorbing techniques are the sponge layer, the perfectly matched layer, and a force-based absorbing term (Term II). The sponge layer is easy to implement but gives worse results than a calculation without any absorbing layer. The perfectly matched layer and the force-based absorbing term provide very good results, but the perfectly matched layer has stability problems. The force-based absorbing layer represents the best compromise between the additional computation time due to the absorbing layer and the achieved damping efficiency.
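The idea behind a sponge layer can be sketched in a few lines: an absorption coefficient that ramps up smoothly inside the layer relaxes the field toward a target state, so outgoing waves are damped before they can reflect off the domain boundary. The quadratic ramp and the values below are illustrative, not the thesis's implementation:

```python
def sigma(x, x_start, x_end, sigma_max):
    """Absorption coefficient: zero in the interior of the domain,
    ramping up smoothly inside the sponge zone [x_start, x_end]."""
    if x < x_start:
        return 0.0
    s = (x - x_start) / (x_end - x_start)
    return sigma_max * s * s  # smooth ramp reduces reflections at the interface

def damp(f, f_eq, x, x_start, x_end, sigma_max):
    """Relax a field value toward its target (e.g. the equilibrium of the
    mean flow) with position-dependent strength sigma(x)."""
    return f - sigma(x, x_start, x_end, sigma_max) * (f - f_eq)

# Interior points are untouched; deep in the layer the perturbation is
# pulled strongly toward the target state:
print(damp(1.2, 1.0, x=0.5, x_start=0.8, x_end=1.0, sigma_max=1.0))  # -> 1.2
print(damp(1.2, 1.0, x=1.0, x_start=0.8, x_end=1.0, sigma_max=1.0))  # -> 1.0
```

The perfectly matched layer and the force-based term pursue the same goal with more elaborate formulations, trading implementation effort for damping quality.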
"Truth is the first casualty of war" is a very often used statement. What rather intrigues the mind is what causes this casualty of truth. If one dives deeper, one may also wonder why this so-called truth is the first target in a war, and who sees the truth before it dies. These questions rarely get answered, as the media and the general public tend to focus more on the human and economic losses in a war or war-like situation. What many fail to realize is that these truthful pieces of information are critical to how a situation develops further. One correct piece of information may change the course of the whole war, saving millions, and one piece of misinformation may do the opposite.
The question here is: what is this information? Who transmits it, and how? What is its source? Although the information provided by the secret services of nations has been used extensively and has come in handy for many, another kind of information system uses information that is publicly available, but in separate pieces. This kind of information may come from people posting on social media, publicly available records, and much more. The key feature of this publicly available information is that it consists of pieces scattered across the globe from many different sources. It can be seen as small pieces of a puzzle that need to be put together to see the bigger picture. This is where OSINT comes into play.
Since its inception, studies have been conducted to propose and develop new applications for OSINT in various fields. In addition to OSINT, Artificial Intelligence (AI) is a worldwide trend that is being used in conjunction with other areas. AI is the branch of computer science in charge of developing intelligent systems. In terms of contribution, this work presents a 9-step systematic literature review as well as consolidated data to support future OSINT studies. Using this information, it was possible to understand where the greatest concentration of publications was, which countries and continents developed the most research, and the characteristics of these publications. What are the trends for the next OSINT-with-AI studies? What AI subfields are used with OSINT? What are the most popular keywords, and how do they relate to each other over time? A timeline describing the application of OSINT is also provided. It also became clear how OSINT was used in conjunction with AI to solve problems in various areas with varying objectives. Private investigators and journalists are no longer the primary users of open-source intelligence (OSINT) gathering and analysis techniques. Approximately 80-90 percent of the data analysed by intelligence agencies is now derived from publicly available sources. Furthermore, the massive expansion of the internet, particularly social media platforms, has made OSINT more accessible to civilians who simply want to trawl the Web for information on a specific individual, organisation, or product. The General Data Protection Regulation (GDPR) of the European Union was implemented in the United Kingdom in May 2018 through the new Data Protection Act, with the goal of protecting personal data from unauthorised collection, storage, and exploitation. This document presents a preliminary review of the literature on GDPR-related work.
The reviewed literature is divided into six sections: 'What is OSINT?', 'What are the risks and benefits of OSINT?', 'What is the rationale for data protection legislation?', 'What are the current legislative frameworks in the UK and Europe?', 'What is the potential impact of the GDPR on OSINT?', and 'Have the views of civilian and commercial stakeholders been sought, and why is this important?'. Because OSINT tools and techniques are available to anyone, they have the unique ability to be used to hold power accountable. As a result, it is critical that new data protection legislation does not impede civilian OSINT capabilities.
In this paper we see how OSINT has played an important role in wars across the globe in the past and how it is used in our everyday life. We also gain insights into the role OSINT is playing in the current war between Russia and Ukraine. Furthermore, we look into some OSINT tools and how they work, and we consider a use case where OSINT is used as an anti-terrorism tool. At the end, we also see how OSINT has evolved over the years and what it may look like in the future.
One of the most critical areas of research and expansion has been the exploitation of new technologies in supply chain risk management. One example is the use of digital twins: virtual versions of physical systems that use real-time data and sophisticated algorithms to analyze and simulate the performance of those systems. In the supply chain risk management field, digital twins present a unique opportunity to improve an organization's ability to anticipate, address, and react to potential problems within the supply chain.
The objective of this study is to identify and assess the advantages that digital twins bring to supply chain risk management, as well as the challenges associated with achieving those benefits. A thorough literature study is conducted to analyze the essential traits and capabilities of digital twins in the context of supply chain risk management and how these qualities lead to enhanced risk management methods. In addition, the state of digital twin technology and its applications in supply chain risk management are evaluated, and prospective areas for further study and development are highlighted.
The primary purpose of this investigation is to provide a comprehensive and in-depth analysis of the role of digital twins in supply chain risk management, and to highlight the potential benefits and challenges associated with their implementation. The research was carried out based on the existing literature and the responses of 27 individuals with previous experience using digital twins who took part in an online questionnaire.
The results of this study will be relevant to a diverse group of stakeholders, including specialists in risk management and researchers, amongst others.
Total Cost of Ownership (TCO) is a key tool for gaining a complete understanding of the costs associated with an investment, as it covers not only the initial acquisition costs but also the long-term costs related to operation, maintenance, depreciation, and other factors. In the context of the cement industry, TCO is especially important due to the complexity of the production processes and the wide variety of components and machinery involved.
For this reason, a TCO analysis for the cement industry has been conducted in this study, with the objective of showing the different components of the cost of production. This analysis gives the reader knowledge of these costs and enables those in the industry to make informed decisions on the adoption of technologies and practices that reduce costs in the long run and improve operational efficiency.
In particular, this study seeks to give visibility to technologies and practices that enable the reduction of carbon emissions in cement production, thus contributing to the sustainability of the industry and the protection of the environment. By being at the forefront of sustainability issues, the cement industry can contribute to the adoption of environmentally friendly technologies and enable the development of people and industry.
Oxyfuel technology has been selected as the carbon capture solution for the cement industry due to its practical applicability, low costs, and straightforward adaptation of non-capture processes. The adoption of this technology allows a significant reduction in CO2 emissions, which is a crucial factor in achieving sustainability in the cement manufacturing process.
Carbon capture and storage technologies represent a high investment, and they increase the cost of production; however, according to the comparison, Oxyfuel is one of the most economically viable options, with the lowest cost per unit of CO2 captured. The price increase is offset by a technical advantage: the carbon capture efficiency of this technology reaches 90%. This level of efficiency reduces the taxes levied on CO2 emissions, making the cement manufacturing process more sustainable.
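The trade-off just described can be sketched with a simplified annual TCO comparison in which a CO2 tax is applied only to uncaptured emissions. All figures below are invented for illustration and are not the study's data:

```python
def annual_tco(capex_annualised, opex, emissions_t, co2_tax_per_t, capture_eff=0.0):
    """Simplified annual total cost of ownership for a cement plant:
    annualised capital costs + operating costs + CO2 tax on the share
    of emissions that is not captured. Illustrative model only."""
    taxed_emissions = emissions_t * (1.0 - capture_eff)
    return capex_annualised + opex + taxed_emissions * co2_tax_per_t

# Hypothetical plant: 800 kt CO2/year, 80 EUR/t CO2 tax.
base = annual_tco(10e6, 40e6, 800_000, 80.0)                   # no capture
oxy  = annual_tco(14e6, 46e6, 800_000, 80.0, capture_eff=0.9)  # Oxyfuel, 90% capture
print(base, oxy)  # the tax saving can outweigh the extra capex and opex
```

With these assumed numbers, the 90% capture rate cuts the taxed emissions enough that the Oxyfuel variant has the lower total despite its higher capital and operating costs, which is the mechanism the study's comparison rests on.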
In every company, top managers have the responsibility to take major decisions that support the company's success. Adopting TQM is one of these decisions: the decision to carry out the company's operations and procedures within TQM frameworks (ASQ, n.d.). Applying TQM involves implementing practices that require extra effort; otherwise, the practices and their execution are of no use (Campos, 2022).
This is especially true in the service sector, where the key to success and increased profit comes directly from a satisfied customer. Therefore, both management and staff need great tolerance and willingness to achieve the needed satisfaction, in order to attain the results that every company wants (Charantimath, 2013).
In terms of customer care practices, there is a famous stereotype in Germany that 'the customer is not king'. After investigating this reputation, DW described it as a phenomenon whereby both expats and Germans tend to believe that service companies in Germany should do a better job of treating their consumers (DW, 2016).
New business concepts emerged in the late twentieth century, for example strategy, leadership, marketing, and entrepreneurship, and these concepts spread internationally among most companies around the world. Many studies have reviewed these new business structures, and some of them addressed cultural differences between countries in applying them, but few studies have concentrated on how cultural differences affect the implementation of TQM (Lagrosen, 2002). It was concluded in general that although the comprehensive fundamentals of quality management are applicable and similar in all nations, in real practice careful tuning must be made, taking into account the alignment of different standards due to the different work cultures and traditions in Europe (Krueger, 1999).
The effects of climate change, including severe storms, heat waves, and melting glaciers, are highlighted as an urgent concern, emphasising the need to decrease carbon emissions to restrict global warming to 1.5°C. To accomplish this goal, it is vital to substitute fossil fuel-based power plants with renewable energy sources like solar, wind, hydro, and biofuels. Despite some progress being made, the proportion of renewables used in generating electricity is still lower than the levels needed for 2030 and 2050. Decarbonising the power grid is also critical in lowering the energy consumption of buildings, which is responsible for a substantial percentage of worldwide electricity usage. Even though there has been substantial expansion in the worldwide renewable energy market in the past 15 years, the transition to renewable energy sources also requires taking into account the importance of energy trading.
Peer-to-peer (P2P) electricity trading is an emerging type of energy exchange that can revolutionise the energy sector by providing a more decentralised and efficient way of trading energy. This research deals with P2P electricity trading in a carbon-neutral scenario. 'Python for Power System Analysis' (PyPSA) was used to develop models through which the P2P effect was tested. Data for the entire state of Baden-Württemberg (BW) was collected. Three scenarios were considered while developing the models: 2019 (base), 2030 (coal phase-out), and 2040 (climate-neutral). Alongside this, another model with no P2P trading was developed for comparison. In addition, the use case of community storage in a P2P trading network is also presented.
The research concludes that P2P has a significant positive effect on a pathway to achieve climate neutrality. The findings show that the share of renewables in electricity generation is increasing compared to conventional sources in BW, which can be traded to meet the demand. From the storage analysis, it can be concluded that community storage can be effectively utilised in P2P trading. While the emissions are reduced, the operating costs are also reduced when the grid has P2P trading available. By highlighting the benefits of P2P trading, this research contributed to the growing body of research on the effectiveness of P2P trading in an electricity network grid.
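The market layer of such a P2P network can be sketched as a simple merit-order matching of prosumer offers against consumer bids. The mechanism and prices below are illustrative assumptions, not the PyPSA model used in the thesis:

```python
def match_p2p(sell_offers, buy_bids):
    """Greedy merit-order matching of sell offers against buy bids,
    each given as (price_per_kWh, kWh): cheapest offer is matched to
    the highest bid first, clearing at the midpoint price.
    A toy stand-in for the market layer of a P2P trading network."""
    sells = sorted(sell_offers)             # ascending offer price
    buys = sorted(buy_bids, reverse=True)   # descending willingness to pay
    trades = []
    si = bi = 0
    while si < len(sells) and bi < len(buys) and sells[si][0] <= buys[bi][0]:
        (sp, sq), (bp, bq) = sells[si], buys[bi]
        q = min(sq, bq)
        trades.append(((sp + bp) / 2, q))   # split-the-difference price
        sells[si] = (sp, sq - q)
        buys[bi] = (bp, bq - q)
        if sells[si][1] == 0:
            si += 1
        if buys[bi][1] == 0:
            bi += 1
    return trades

# Two PV owners selling, two households buying (prices in EUR/kWh):
print(match_p2p([(0.10, 5), (0.20, 3)], [(0.25, 4), (0.12, 2)]))
```

Energy that remains unmatched after such a round would fall back to the grid or, as in the community-storage use case, be charged into shared storage.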
In the past ten years, applications of artificial neural networks have changed dramatically, outperforming earlier predictions in domains like robotics, computer vision, natural language processing, healthcare, and finance. Future research and advances in CNN architectures, algorithms, and applications are expected to further revolutionize various industries and daily life. Our task is to find existing products that resemble a given product image and description. Deep learning-based automatic product identification is a multi-step process that starts with data collection and continues with model training, deployment, and continuous improvement. The quality and variety of the dataset, the chosen architecture, and ongoing testing and refinement all affect the model's effectiveness. We achieved 81.47% training accuracy and 72.43% validation accuracy for our combined text and image classification model. Additionally, we discuss the outcomes on the other dataset and numerous methods for creating an appropriate model.
Automation research has become one of the most important tools for forward-thinking organizations. This research studied the economic and social aspects of automating the accounting profession to determine how accountants were affected. In particular, it examined the social aspect of automation, including accountants' satisfaction with and agreement towards the shift from manual-based accounting to automated accounting. Additionally, the purpose of the research was to understand the aspects that explain the variance in satisfaction and agreement levels before and after automation, and whether there is a relationship between those levels and the demographic profile of accountants.
A quantitative method was used to answer the research questions. The findings were gathered through an online survey. The respondents were forty-three accountants living and working in Germany. The implications and conclusions of the research are drawn from the accountants' perspective.
The results showed that the automation of accounting has significantly impacted the accountants' profession. Accountants are satisfied with automated accounting and agree with its effects on their profession; they agreed that automated accounting tasks made the accounting process more effective and valuable. The findings also showed that educational level and length of experience with automated accounting are correlated with accountants' satisfaction: the more automation experience and the higher the education accountants have, the more satisfied they are with automated accounting. As a consequence, higher qualifications and stronger foundational IT knowledge are required than in the past.
As e-commerce platforms have grown in popularity, new difficulties have emerged, such as the growing use of bots, automated programs that interact with e-commerce websites. While some bots are helpful, others are malicious and can seriously harm e-commerce platforms by making fictitious purchases, posting fake reviews, and taking over user accounts. The development of more effective and precise bot detection systems is therefore urgently needed to stop such activities. This thesis proposes a methodology for detecting bots in e-commerce using machine learning algorithms such as k-nearest neighbors, Decision Tree, Random Forest, Support Vector Machine, and Neural Network. The purpose of the research is to assess and compare the performance of these machine learning methods. The suggested approach is based on publicly available data, and the study's focus is on bots in e-commerce.
The study provides an overview of bots in e-commerce, covering the different kinds and traits of bots, current research on bots in e-commerce, and related work on bot detection. The research also seeks to build a more precise and effective bot detection system and to identify the critical factors in detecting bots in e-commerce.
This research is significant because it sheds light on the increasing problem of bots in e-commerce and the need for more effective bot detection systems. The suggested approach of using machine learning algorithms to identify bots can give e-commerce platforms a more precise and effective detection system to stop malicious bot activity. The study's results can also be used to build a more effective bot detection system and to pinpoint the key elements in detecting bots in e-commerce.
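One of the classifiers the thesis compares, k-nearest neighbors, can be sketched in a few lines. The snippet below is an illustrative toy, not the thesis's pipeline: the session features (requests per minute, average dwell time, mouse events per page) and the numeric distributions are hypothetical assumptions chosen only to make bot and human traffic separable.

```python
import numpy as np

def make_sessions(n, bot, rng):
    """Synthetic per-session features: [requests/min, avg dwell (s), mouse events/page].
    The means/spreads below are hypothetical, not measured e-commerce data."""
    if bot:   # bots: rapid requests, short dwell, almost no mouse activity
        return rng.normal([120, 0.5, 0.2], [20, 0.2, 0.1], (n, 3))
    else:     # humans: slower browsing, longer dwell, real mouse movement
        return rng.normal([8, 25, 40], [3, 8, 10], (n, 3))

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-nearest-neighbours majority vote using Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)     # distance to every train point
        nearest = y_train[np.argsort(d)[:k]]        # labels of the k closest
        preds.append(int(nearest.sum() * 2 > k))    # majority vote (labels are 0/1)
    return np.array(preds)
```

A production detector would add feature scaling and cross-validated model comparison (Decision Tree, Random Forest, SVM, Neural Network) on real traffic logs.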
Organizations striving for long-term success must maintain a positive brand image, which has direct implications for the business. In the face of rising cyber threats and intense competition, keeping one's domain free of threats is an important part of preserving that image on today's internet. For many companies, domain names are near-synonyms for brand names. Thousands of domains likely try to impersonate big companies in a bid to trap unsuspecting users, who then fall prey to attacks such as phishing or watering-hole attacks. Because domain names are essential for organizations running their business online, they are also particularly vulnerable to misuse by malicious actors. So how can you ensure that your domain name, and with it your brand identity, is protected? Brand monitoring can help. "Brand Monitoring" refers to keeping track of an organization's brand performance, reception, and overall online presence across various online channels and platforms [1]. As the threat landscape has expanded, so has the need to keep one's domain clear of any links to malicious activity. Since attackers target organizations' domain names and lure unsuspecting users to malicious websites, domain monitoring becomes essential. Another important aspect of brand abuse is how attackers use brand logos to create fake and phishing web pages. In this Master Thesis, we address the classification of impersonated domains using rule-based and machine learning algorithms, and the automation of domain monitoring. We first use a rule-based classifier and machine learning algorithms to classify the gathered domains into two buckets: "Parked" and "Non-Parked".
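A rule-based "Parked" vs "Non-Parked" classifier of the kind described can be as simple as keyword rules over the served page. The sketch below is a minimal illustration under stated assumptions: the keyword list is hypothetical, and a real detector would also inspect DNS records, registrar data, and page structure.

```python
import re

# Hypothetical indicator phrases commonly seen on parked pages (illustrative only).
PARKED_PATTERNS = [
    r"this domain is for sale",
    r"buy this domain",
    r"domain parking",
    r"parked free",
    r"related searches",
]

def classify_page(html: str) -> str:
    """Rule-based bucket assignment: 'Parked' if any indicator phrase matches."""
    text = html.lower()
    hits = sum(bool(re.search(p, text)) for p in PARKED_PATTERNS)
    return "Parked" if hits >= 1 else "Non-Parked"
```

In the thesis's setup, such rules would serve as a first filter before the machine-learning classifiers refine the decision.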
In the project's second phase, we deploy object detection models (Scale-Invariant Feature Transform, SIFT, and Multi-Template Matching, MTM) to detect brand logos on the domains of interest.
Server-Side Rendering (SSR), Single-Page Applications (SPA), and Static Site Generation (SSG) are the three most popular ways of building modern web applications today. A deeper understanding of these approaches benefits both developers and clients. Developers benefit because they do not need to learn additional programming languages and can instead use their existing experience to build different kinds of web applications; for example, a developer can use only JavaScript for all three approaches. Clients, in turn, can give their users a better experience.
This Master Thesis's purpose was to compare these approaches, each with a demo application, and give readers a solid understanding of which approach to follow. We described the step-by-step process of building three applications, one per category, and then compared them on criteria such as performance, security, search engine optimization, developer preference, learning curve, content and purpose of the site, user interface, and user experience. The thesis also discussed technologies such as JavaScript, React, Node.js, and Next.js, and why and where to use them. The goals specified before development were fulfilled, which can be validated by comparing the solutions we provided for user problems, the application's primary purpose.
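The essential difference between SSG and SSR can be shown independently of any web framework: SSG renders each page once at build time and serves cached output, while SSR renders on every request. The sketch below is a deliberately language-agnostic illustration in Python (the thesis itself uses JavaScript/Next.js); the render counter stands in for "work done at render time", and all names are illustrative.

```python
# Global counter stands in for "work performed at render time".
_render_count = 0

def render(title: str) -> str:
    """Render one page; the counter reveals how often rendering actually runs."""
    global _render_count
    _render_count += 1
    return f"<h1>{title}</h1><!-- render #{_render_count} -->"

def build_static_site(pages: dict) -> dict:
    """SSG: render every page once at build time; requests serve cached HTML."""
    return {path: render(title) for path, title in pages.items()}

def serve_static(site: dict, path: str) -> str:
    """Serving a static site does no rendering work at all."""
    return site[path]

def serve_ssr(titles: dict, path: str) -> str:
    """SSR: render on every request, so output can be fresh or per-user."""
    return render(titles[path])
```

An SPA shifts the rendering to the browser instead, which is why the comparison criteria above (SEO, performance, user experience) come out differently for each approach.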
The vision of the "Internet of Things" has shaped research and development for years when it comes to smart technologies and the networking of devices. In the future, the real world will be increasingly connected to the internet, enabling numerous everyday objects (things) to interact and to communicate both online and autonomously. Many industries such as medicine, automotive manufacturing, energy supply, and consumer electronics are equally affected, creating new economic potential despite the risks. In the "Connected Home" area, solutions already exist that raise the quality of life within one's own four walls through the intelligent networking of household appliances and sensors. This thesis deals with the Thread protocol, a new technology for integrating multiple communication interfaces within a single network. In addition, the implementation at the network layer is presented, along with curated information on the technologies used.