This Master's Thesis discusses intelligent sensor networks, focusing on autonomous sensor placement strategies and system health management. Sensor networks for an intelligent system design process have been researched recently. These networks consist of a distributed collective of sensing units, each with the abilities of individual sensing and computation. Such systems can be capable of self-deployment and must be scalable, long-lived and robust. With distributed sensor networks, intelligent sensor placement for system design and online system health management are attractive areas of research. Distributed sensor networks also pose optimization problems, such as decentralized control, system robustness and maximization of coverage in a distributed system. This also includes the discovery and analysis of points of interest within an environment. The purpose of this study was to investigate a method to autonomously control sensor placement in a world with several sources and multiple types of information. This includes both controlling the movement of sensor units and filtering the gathered information depending on individual properties to increase system performance, defined as good coverage. Additionally, online system health management was examined in this study with regard to agent failures and autonomous policy reconfiguration if sensors are added to or removed from the system. Two different solution strategies were devised, one where the environment was fully observable, and one with only partial observability. Both strategies use evolutionary algorithms based on artificial neural networks for developing control policies. For performance measurement and policy evaluation, different multiagent objective functions were investigated. The results of the study show that in the case of a world with multiple types of information, individual control strategies performed best because of their abilities to control the movement of a sensor entity and to filter the sensed information. This also includes system robustness in case of sensor failures, where other sensing units must recover system performance. Additionally, autonomous policy reconfiguration after adding or removing sensor agents was successful. This highlights that intelligent sensor agents are able to adapt their individual control policies to new circumstances.
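The abstract above includes no code, but a minimal, hedged sketch may help illustrate the kind of approach it describes: a population of small feedforward control policies is evolved against a coverage-style objective. The network sizes, the toy fitness function, and all identifiers are illustrative assumptions, not the thesis's actual setup.

```python
# Illustrative-only (mu + lambda) evolution of a tiny feedforward control policy.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 8, 2            # local observation -> (dx, dy) move
N_W = N_IN * N_HID + N_HID * N_OUT      # length of a flat weight vector

def policy(weights, obs):
    """Map a local observation to a movement command."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(obs @ w1) @ w2)

def fitness(weights):
    """Toy stand-in for the multiagent coverage objective (higher is better)."""
    points = rng.uniform(-1, 1, size=(20, N_IN))          # points of interest
    moves = np.array([policy(weights, p) for p in points])
    return -np.mean(np.linalg.norm(points[:, :2] - moves, axis=1))

population = rng.normal(size=(30, N_W))
for generation in range(50):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-10:]]         # keep the 10 best
    children = parents[rng.integers(0, 10, size=20)] \
        + 0.1 * rng.normal(size=(20, N_W))                 # mutate copies
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
```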
Distributed Flow Control and Intelligent Data Transfer in High Performance Computing Networks
(2015)
This document contains my master's thesis report, including the problem definition, requirements, problem analysis, a review of the current state of the art, the proposed solution, the designed prototype, discussions and the conclusion.
In this work we propose a collaborative solution to run different types of operations in a broker-less network without relying on a central orchestrator.
Based on our requirements, we define and analyze a number of scenarios. Then we design a solution to address those scenarios using a distributed workflow management approach. We explain how we break a complicated operation into simpler parts and how we manage them in a non-blocking and distributed way. We then show how we asynchronously launch them on the network and how we collect and aggregate the results. Finally, we introduce our prototype, which demonstrates the proposed design.
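As a rough illustration only, the following sketch shows the general pattern the abstract describes, splitting an operation into sub-tasks, launching them asynchronously, and aggregating the partial results, using Python's asyncio; the peer names, sub-task payload, and aggregation rule are assumptions, not the protocol actually designed in the thesis.

```python
# Hedged sketch: decompose an operation, launch parts concurrently, aggregate results.
import asyncio

async def run_on_peer(peer: str, subtask: int) -> int:
    # Stand-in for a network call to another node in the broker-less network.
    await asyncio.sleep(0.01)
    return subtask * subtask

async def run_operation(subtasks: list[int]) -> int:
    peers = ["node-a", "node-b", "node-c"]          # assumed peer names
    # Launch all sub-tasks concurrently (non-blocking), then gather results.
    coros = [run_on_peer(peers[i % len(peers)], t) for i, t in enumerate(subtasks)]
    partial_results = await asyncio.gather(*coros)
    return sum(partial_results)                      # aggregation step

print(asyncio.run(run_operation(list(range(8)))))
```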
Quartz crystal microbalances allow the monitoring of the adsorption of mass from a liquid onto their surface. The adsorbed mass can be analysed with regard to its protein content using mass spectrometry. To ensure protein identification, the results of several measurements can be combined. A high-content QCM-D array was developed to allow up to ten measurements in parallel. The samples can be routed inside the array, distributing one sample to several chips. The fluidic parts were prototyped using 3D printing. The assembled array was leak-tight and the sample routing function could be demonstrated. A temperature controller was developed and implemented. The parameters for the PID controller were determined, and the controller was shown to be able to keep the temperature constant over long periods with high accuracy.
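For context, a discrete PID temperature control loop of the general form mentioned here can be sketched as follows; the gains, sample time, and setpoint are placeholder values, not the parameters determined in the thesis.

```python
# Minimal discrete PID controller sketch (illustrative gains only).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)        # placeholder gains
heater_power = controller.update(setpoint=37.0, measurement=35.2)
```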
The vision of the "Internet of Things" has shaped research and development for years when it comes to smart technologies and the networking of devices. In the future, the real world will be increasingly linked with the Internet, enabling numerous everyday objects (things) to interact and to communicate both online and autonomously. Many industries such as medicine, automotive manufacturing, energy supply and consumer electronics are equally affected, which, despite the risks, also creates new economic potential. In the "Connected Home" area, solutions already exist that use the intelligent networking of household appliances and sensors to increase the quality of life within one's own four walls. This thesis deals with the Thread protocol, a new technology for integrating several communication interfaces within one network. In addition, the implementation at the network layer is presented, along with prepared information on the technologies used.
Singapore’s success in transforming itself from a poor, vulnerable economy to one of the richest countries in the world (IMF, 2016) is nothing short of inspirational to many small economies around the globe. Given its lack of resources, Singapore relied upon foreign investors to fuel its growth, not only through cash injections into the economy in the form of Foreign Direct Investments (FDI) but also to help upgrade its skills and technological stock. This study looks at how Singapore inspired many Multi-National Corporations (MNCs) to pour a large sum of investment into this small, ailing city-state and whether this idea can be generalized and applied to other economies, especially Oman.
In a bid to explain the large flow of capital into an economy, this study moves on to review the most prominent literature in the field since MacDougall (1958) first laid the groundwork for the subsequent theories on FDI. Based on the review of several previous studies, the most significant determinants of FDI were found to be government policy and political stability, the inflation rate as a proxy for economic stability, the quality of infrastructure and institutions, the market size of the host country, openness to trade, tax policies and access to low-cost factors of production.
Through a case study method with an inductive approach, this study finds that Singapore excels in all of the determinants of FDI except for the market size of the host country and access to low-cost factors of production. However, it more than compensates for these shortcomings with its strategic geographical location and numerous bilateral and regional trade agreements that give it access to markets around the region. Oman, like Singapore, ranks well in many of these determinants, which makes it a potential destination for investment. However, the sultanate could gain more interest from MNCs to help its growth by optimizing its policies to lower existing barriers, easing immigration laws to meet the short-term skill shortage, allowing 100 percent foreign ownership, allowing more liberal property rights, working to improve its corruption perception and opting for more trade agreements to give it easy access to larger markets. Moreover, the economy’s heavy reliance on hydrocarbon exports is seen as a major risk by investors, as it creates an economic vulnerability which could potentially overshadow many other benefits of investing in the sultanate. Besides the aforementioned determinants, a lot also depends on the success of Oman’s diversification plans.
WebAssembly is a new technology that enables a new way of creating applications. WebAssembly has been developed since 2017 by the World Wide Web Consortium (W3C). Its primary task is to improve web applications.
Today, more and more applications are being created as web applications. Web applications have some advantages: they are platform independent, even mobile platforms can run them, and no installation is needed apart from a modern web browser.
Currently, web applications are developed in JavaScript (JS), Hypertext Markup Language 5 (HTML5), and Cascading Style Sheets (CSS).
These technologies were not made for huge web applications, but they are not meant to be replaced by WebAssembly; rather, WebAssembly is an extension to the currently existing technology.
The purpose of WebAssembly is to fix or mitigate the problems in web application development.
This master’s thesis reviews these aspects and examines whether the promises of WebAssembly are kept and where problems still exist.
Annotated training data is essential for supervised learning methods. Human annotation is costly and laborious, especially if a dataset consists of hundreds of thousands of samples and annotators need to be hired. Crowdsourcing emerged as a solution that makes it easier to get access to large numbers of human annotators. Introducing paid external annotators, however, also introduces malevolent annotations, both intentional and unintentional. Both forms of malevolent annotation have negative effects on further usage of the data and can be summarized as spam. This work explores different approaches to post-hoc detection of spamming users and which kinds of spam can be detected by them. A manual annotation checking process resulted in the creation of a small user spam dataset which is used in this thesis. Finally, an outlook on future improvements of these approaches is given.
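One common post-hoc heuristic of the kind explored in such work can be sketched as follows: flag annotators whose agreement with the per-item majority label falls below a threshold. The column names, toy data, and threshold are assumptions for illustration, not the thesis's dataset or method.

```python
# Hedged sketch: flag annotators with low agreement against the majority vote.
import pandas as pd

annotations = pd.DataFrame({
    "item":      [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "annotator": ["a", "b", "c", "a", "b", "c", "a", "b", "c"],
    "label":     ["x", "x", "y", "x", "x", "y", "z", "z", "y"],
})

# Majority label per item, then each annotator's agreement rate with it.
majority = annotations.groupby("item")["label"].agg(lambda s: s.mode().iloc[0])
annotations["agrees"] = annotations.apply(
    lambda row: row["label"] == majority[row["item"]], axis=1
)
agreement = annotations.groupby("annotator")["agrees"].mean()
suspected_spammers = agreement[agreement < 0.5].index.tolist()
print(suspected_spammers)   # -> ['c'] in this toy example
```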
Communication protocols enable information exchange between different information systems. If protocol descriptions for these systems are not available, they can be reverse-engineered for interoperability or security reasons. This master's thesis describes the analysis of such a proprietary binary protocol, named the DVRIP or Dahua private protocol, from Dahua Technology. The analysis covers the identification of the DVRIP protocol header format, security mechanisms and vulnerabilities inside the protocol implementation. The insights revealed about the protocol enable an increase in overall security. This thesis builds the foundation for further targeted security analyses.
The status quo of PROFINET, a commonly used industrial Ethernet standard, provides no inherent security in its communication protocols. In this thesis, an approach for protecting real-time PROFINET RTC messages against spoofing, tampering and optionally information disclosure is specified and implemented in a real-world prototype setup. For this, authenticated encryption is used, which relies on symmetric cipher schemes. In addition, a procedure to update the symmetric encryption key in a bumpless manner, i.e. without interrupting the real-time communication, is introduced and realized.
The concept for protecting the PROFINET RTC messages was developed in collaboration with a task group within the security working group of PROFINET International. The author of this thesis has also been part of that task group. This thesis contributes by proving the practicability of the concept in a real-world prototype setup, which consists of three FPGA-based development boards that communicate with each other to showcase bumpless key updates.
To enable a bumpless key update without disturbing the deterministic real-time traffic with dedicated messages, the key update annunciation and status are embedded into the header. By provisioning two key slots, of which only one is in use while the other is being prepared, a well-synchronized, coordinated switch between receiver and sender performs the key update.
The developed prototype setup allows the concept to be tested and builds the foundation for further research and implementation activities, e.g. evaluating the impact of cryptographic operations on the processing time.
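Purely as an illustration (and not the PROFINET specification or the thesis's FPGA implementation), the two-key-slot idea can be sketched as follows: the inactive slot is provisioned while traffic continues, and both ends then perform a coordinated switch signalled via the frame header.

```python
# Illustrative two-key-slot state machine; field names are assumptions.
class KeySlots:
    def __init__(self, key0: bytes):
        self.slots = {0: key0, 1: None}
        self.active = 0

    def provision(self, new_key: bytes):
        """Prepare the currently unused slot while traffic keeps flowing."""
        self.slots[1 - self.active] = new_key

    def switch(self):
        """Coordinated switch, announced via the frame header in the real protocol."""
        assert self.slots[1 - self.active] is not None
        self.active = 1 - self.active

    def key_for_frame(self, header_slot: int) -> bytes:
        # The receiver selects the key indicated by the header's slot bit.
        return self.slots[header_slot]

sender, receiver = KeySlots(b"k0"), KeySlots(b"k0")
for side in (sender, receiver):
    side.provision(b"k1")   # key update announced, old key still in use
for side in (sender, receiver):
    side.switch()           # both ends now encrypt/decrypt with slot 1
```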
Among the billions of smartphone users in the world, Android still holds more than 80% of the market share. The applications that users install have specific sets of features that need access to device functionalities and sensors which may hold sensitive information about the user. Therefore, Android releases have set permission standards to let the user know what information is being disclosed to an application. Along with other security and privacy improvements, significant changes to the permission scheme were introduced with Android 6.0 (API level 23). In this master's thesis, the Android permission scheme is tested on two devices from different eras. The evolution of Android over the years is examined in terms of confidentiality. For each device, two applications are built: one focused on extracting every piece of information within the confidentiality scope with every permission declared and/or requested, and the other focused on getting this type of information without user notification. The resulting analysis illustrates whether, and in what way, the Android permission scheme has declined or improved over time.
The Lattice Boltzmann Method is a useful tool to calculate fluid flow and acoustic effects at the same time. Although the acoustic perturbation is much smaller than normal pressure differences in fluid flow, this direct calculation is a great advantage of the Lattice Boltzmann Method (LBM). But each boundary used in the calculation produces a multitude of reflections of the acoustic waves, which lead to an unusable result. Therefore, different absorbing techniques are investigated.
In this thesis, three absorbing layer techniques are described, explained and reviewed with different simulations. The absorbing layers were implemented in a basic LBM code in C++, and with this, numerous simulations within a box were performed to compare the different absorbing layers. The Doppler effect and a cylinder flow are also examined to compare the damping efficiencies.
The three studied absorbing techniques are the sponge layer, the perfectly matched layer and a force-based Term II absorbing layer. The sponge layer is easy to implement but gives worse results than a calculation without any absorbing layer. The perfectly matched layer and a force-based absorbing term provide very good results, but the perfectly matched layer has problems with instability. The force-based absorbing layer represents the best compromise between the additional computation time due to the absorbing layer and the achieved damping efficiency.
In this work, an implementation of the somewhat homomorphic BV encryption scheme is presented. During the implementation, care was taken to ensure that the resulting program is as efficient as possible, i.e. fast and resource-saving. The basis for this is the work of Arndt Bieberstein, who implemented the BV scheme with respect to functionality. The presented implementation supports the basics of the BV scheme, namely (symmetric and asymmetric) encryption, decryption and evaluation of addition as well as multiplication. Additionally, it supports the encoding of positive and negative numbers, various Gaussian sampling methods, effectively arbitrarily large polynomial coefficients, the generation of suitable parameters for a use case, threading and relinearization to reduce the size of a ciphertext after multiplications. After presenting the techniques used in the implementation, its actual efficiency is determined by measuring the timings of the operations for various parameters.
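For readers unfamiliar with the underlying algebra, the following toy sketch shows the polynomial ring arithmetic that BV-style schemes build on, namely multiplication modulo x^n + 1 and a coefficient modulus q; it is not the implementation presented in the work, and the parameters are deliberately tiny.

```python
# Toy ring arithmetic Z_q[x]/(x^n + 1); illustrative parameters only.
import numpy as np

n, q = 8, 257   # real parameters are far larger

def ring_mul(a, b):
    """Multiply two degree-<n polynomials modulo x^n + 1 and modulo q."""
    full = np.convolve(a, b)                 # ordinary polynomial product
    result = full[:n].copy()
    result[: len(full) - n] -= full[n:]      # reduce using x^n == -1
    return result % q

a = np.arange(n) % q
b = (np.arange(n) * 3 + 1) % q
print(ring_mul(a, b))
```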
The identification of vulnerabilities is an important element of the software development process to ensure the security of software. Vulnerability identification based on the source code is a well-studied field. Finding vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be performed statically, and thus runtime-efficiently, by using deep learning methods for certain types of vulnerabilities.
This thesis aims to examine to what extent this identification can be applied sufficiently to a variety of vulnerabilities. To this end, a supervised deep learning approach using recurrent neural networks is applied to vulnerability detection based on binary executables. For this purpose, a dataset with 50,651 samples of 23 different vulnerabilities in the form of a standardised LLVM Intermediate Representation was prepared. The vectorised features of a Word2Vec model were then used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was trained for the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability, which achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the methodology presented allows an accurate detection of vulnerabilities, as well as a strong narrowing of the analysis scope for further analysis steps.
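As a sketch of this kind of setup (not the thesis's exact architecture or hyperparameters), sequences of Word2Vec vectors representing LLVM-IR tokens can be fed to a recurrent classifier roughly as follows; shapes, layer sizes, and the dummy data are illustrative assumptions.

```python
# Hedged sketch: binary "vulnerable vs. not" classifier over embedded token sequences.
import numpy as np
from tensorflow import keras

SEQ_LEN, EMB_DIM = 200, 100        # tokens per sample, Word2Vec vector size (assumed)

inputs = keras.Input(shape=(SEQ_LEN, EMB_DIM))
x = keras.layers.Masking(mask_value=0.0)(inputs)     # ignore zero padding
x = keras.layers.LSTM(128)(x)
x = keras.layers.Dense(64, activation="relu")(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)   # P(vulnerable)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data standing in for embedded LLVM-IR sequences and their labels.
X = np.random.rand(32, SEQ_LEN, EMB_DIM).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, batch_size=8)
```

For the multi-class variant described above, the final layer would instead use a softmax over the vulnerability classes with a categorical loss.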
Threat modeling is a vital approach to implementing "Security by Design" because it enables the discovery of vulnerabilities and the mitigation of threats during the early stages of the Software Development Life Cycle, as opposed to later on, when they are more expensive to fix. This thesis reviews current threat modeling approaches, methods, and tools. It then creates a meta-model adaptation of a fictitious cloud-based shop application, which is tested using STRIDE and PASTA to check for vulnerabilities, weaknesses, and impact risk. The analysis is done using the Microsoft Threat Modeling Tool and IriusRisk. Finally, an evaluation of the results is made to ascertain the effectiveness of the processes involved, with highlights of the challenges in threat modeling and recommendations on how security developers can make improvements.
Global energy demand has continued to increase over the last decade, with a substantial impact on climate change due to the intensive use of conventional fossil-fuel power plants to cover this demand. Most recently, world leaders met in 2015 and produced the Paris Agreement, stating that countries will start to take more responsible and effective action on global warming and climate change. Many studies have discussed what the future energy system will look like while respecting countries' targets and limits for greenhouse gases and CO2 emissions. However, these studies rarely discuss the industry sector in detail even though it is one of the major players in the energy sector. Moreover, many studies have simulated and modelled the energy system with large jumps between intervals in terms of years and environmental goals. In the first part of this study, a model of the German electrical grid with high spatial and temporal resolution is developed, and different scenarios are analysed meticulously over shorter periods (annual optimization), with different flexibilities, technologies and degrees of innovation within each scenario. Moreover, the challenge in this research is to adequately map the diverse characteristics of the medium-sized industrial sector. In order to take a first step in assessing the relevance of the industrial sector in Germany for climate protection goals, the industrial sector is mapped in PyPSA-Eur (an open-source model data set of the European energy system at the level of the transmission network) by detailing the demand of different types of industry and assigning flexibilities to the industrial types. Synthetically generated load profiles of various industrial types are available. Flexibilities in the industrial sector are described by the project partner Fraunhofer IPA in the GaIN project and can be used. Using a scenario analysis, the development of the industrial sector and the use of flexibilities are then assessed quantitatively.
An information security management system is an organized strategy to ensure the security of an organization. During various security crises, hazards, and breaches, this strategy aids an organization in maintaining the confidentiality, integrity, and availability of information. Organizations are getting ready to comply with information security management system criteria. Despite this, security concerns persist; ineffective controls, poor connectivity, or a silo effect are common causes. One of these causes is a low level of maturity that is not synchronized with the organization's business processes. For a higher level of maturity, it is best to evaluate the practices.
Different maturity models on information security and cyber security capacity, management processes, security controls, implementation level, and many more have already been developed by numerous international organizations, experts, and scholars. The present models, however, do not assess a particular organization's specific practices. The evaluation of the business process is frequently neglected because the measurement requirements of these models are typically concentrated on examining specific elements. For this reason, the maturity assessment is often not executed explicitly and broadly.
We developed an organizational information security maturity model as a combination of several existing maturity models. While creating this model, we ensured that organizations of any size or type could use it. The model considers the success elements of the information security management system when assessing the implementation's effectiveness. We employed a mixed-method strategy that included both qualitative and quantitative research. With the help of a questionnaire survey, we evaluated the previous research using a qualitative methodology. In the quantitative part, we determine how mature the information security management system currently is. The proposed model could be used to reduce security incidents by closing implementation gaps.
Privacy is the capacity to keep some things private despite their social repercussions. It relates to a person’s capacity to control the amount, time, and circumstances under which they disclose sensitive personal information, such as a person’s physiology, psychology, or intelligence. In the age of data exploitation, privacy has become even more crucial. Our privacy is now more threatened than it was 20 years ago, even outside science and technology, due to the way data and technology are used. Both the kinds and amounts of information about us and the methods for tracking and identifying us have grown considerably in recent years. It is a known security concern that human and machine systems face privacy threats. There are various disagreements over privacy and security; every person and group has a unique perspective on how the two are related. Even though 79% of the study’s results showed that legal or compliance issues were more important, 53% of survey respondents thought that privacy and security were two separate things. Data security and data privacy are interconnected despite their distinctions; each is necessary for the other to exist. Data may be physically kept anywhere, on our computers or in the cloud, but only humans have authority over it. Machine learning has been used to help solve this problem. We are linked to our data; protecting data against attackers also protects privacy. Attackers commonly utilize both mechanical systems and social engineering techniques to enter a target network. The vulnerability of this form of attack rests not only in the technology but also in the human users, making it extremely difficult to fight against. The best option to secure privacy is to combine humans and machines in the form of a Human Firewall and a Machine Firewall. A cryptographic route like Tor is a superior choice for discouraging attackers from trying to access our system and for protecting the privacy of our data. This thesis contains a case study of privacy and security issues. The problems and different kinds of attacks on people and machines are then briefly discussed. We explain how Human Firewalls and machine learning on the Tor network protect our privacy from attacks such as social engineering and attacks on mechanical systems. As a real-world test, we use genomic data to carry out a privacy attack called the Membership Inference Attack (MIA). We present the Machine Firewall as a way to protect ourselves and then apply Differential Privacy (DP), building on existing work. We used Lasso and convolutional neural networks (CNN), both popular machine learning models, as the target models. Our findings demonstrate a logarithmic link between the desired model accuracy and the privacy budget.
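As an illustration of the attack family mentioned above (not the genomic experiment of the thesis), a simple confidence-threshold membership inference attack can be sketched as follows; the stand-in target model, data, and threshold are assumptions.

```python
# Hedged sketch: guess "training member" when the model is unusually confident.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train, X_out = rng.normal(size=(200, 10)), rng.normal(size=(200, 10))
y_train = (X_train[:, 0] > 0).astype(int)

target = LogisticRegression().fit(X_train, y_train)   # stand-in target model

def attack(model, X, threshold=0.9):
    """Guess 'member' when the top predicted probability exceeds the threshold."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence > threshold

members_guessed = attack(target, X_train).mean()   # rate on true members
nonmembers_guessed = attack(target, X_out).mean()  # rate on non-members
print(members_guessed, nonmembers_guessed)
```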
Decarbonisation Strategies in Energy Systems Modelling: Biochar as a Carbon Capture Technology
(2022)
The energy system has been changing for some years in order to achieve the climate goals of the Paris Agreement, which aims to prevent an increase of the global temperature above 2 °C. Decarbonisation of the energy system has become a major challenge for governments, and different strategies are being established. Germany has set greenhouse gas reduction limits for different years and keeps track of the improvement made yearly. The expansion of renewable energy systems (RES) together with decarbonisation technologies is a key factor in accomplishing this objective.
This research analyses the effect of introducing biochar, a decarbonisation technology, and studies how it will affect the energy system. Pyrolysis is the process from which biochar is obtained, and it is modelled in an open-source energy system model. A sensitivity analysis is made in order to assess the effect of changing the biomass potential and the costs of pyrolysis.
The role of pyrolysis is analysed in the form of different future scenarios to evaluate the impact. The CO2 emission limits for the years 2030 and 2045 are considered to create the scenarios, as well as the integration of flexibility technologies. Four scenarios in total are assessed, and the results of the sensitivity analysis considering pyrolysis are always compared to the reference scenario, in which pyrolysis is not considered.
Results show that pyrolysis has a bigger impact on the energy system when the CO2 limit is low. Biochar can be used to compensate for the emissions from conventional power plants and achieve an energy transition at lower cost. Furthermore, it was found that pyrolysis can also reduce the need for flexibility. This study also shows that the biomass potential and the pyrolysis costs can strongly affect the behaviour of pyrolysis in the energy system.
As information technology continues to advance at a rapid pace around the world, new difficulties emerge. The growing number of organizational vulnerabilities is among the most important issues. Finding and mitigating vulnerabilities is critical in order to protect an organization’s environment from multiple attack vectors.
The study investigates and comprehends the complete vulnerability management process from the standpoint of the security officer role, as well as potential improvements. A few strategies are used to achieve efficient mitigation and the development of a process for tracking and mitigating vulnerabilities. As a result, a qualitative study is conducted whose objective is to create a proposed vulnerability and risk management process, as well as to develop a system for analyzing and tracking vulnerabilities and presenting them in a graphical dashboard format. This thesis’s data was gathered through an organized literature study as well as through the use of various web resources. We explored numerous approaches to analyzing the data, such as categorizing the vulnerabilities every 30, 60, and 90 days to see whether the vulnerabilities were recurring or new. According to our findings, tracking vulnerabilities can be advantageous for a security officer.
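A hedged sketch of the 30/60/90-day categorization mentioned above might look as follows; the column names, dates, and cut-offs are illustrative assumptions rather than the thesis's actual tracking system.

```python
# Illustrative age-bucketing of open findings for a dashboard view.
import pandas as pd

vulns = pd.DataFrame({
    "cve":        ["CVE-A", "CVE-B", "CVE-C", "CVE-D"],
    "first_seen": pd.to_datetime(["2024-01-05", "2024-02-20", "2024-03-18", "2024-03-30"]),
})
as_of = pd.Timestamp("2024-04-01")
age_days = (as_of - vulns["first_seen"]).dt.days
vulns["bucket"] = pd.cut(age_days, bins=[0, 30, 60, 90, 10_000],
                         labels=["<=30d", "31-60d", "61-90d", ">90d"])
print(vulns.groupby("bucket", observed=False).size())
```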
We come to the conclusion that if an organization has a proper vulnerability tracking system and vulnerability management process, it can aid security officers in having a better understanding of, and making plans for, reducing vulnerabilities. In terms of system patching and vulnerability remediation, it will also assist the security officer in identifying areas of weakness in the process. As a result, the suggested methods provide an alternative approach to managing and tracking vulnerabilities in an effective manner, although there is still a small area that needs additional analysis and research to make it even better.
In every company, top managers have the responsibility to take major decisions that support the success of their company. Adopting TQM is one of these decisions: the decision to carry out the company's operations and procedures within TQM frameworks (ASQ, n.d.). Applying TQM involves implementing practices that require extra effort; otherwise, there is no benefit from the practices or their execution (Nicca Jirah F. Campos, 2022).
This is especially true in the service sector, where the key to success and increased profit comes directly from a satisfied customer. Therefore, both management and staff need a high degree of tolerance and willingness to achieve the required satisfaction, in order to attain the results that every company wants (Charantimath, 2013).
In Germany, in terms of customer care practices, there is a famous stereotype that 'the customer is not king', a reputation that DW, after investigating it, described as a phenomenon where both expats and Germans tend to believe that service companies in Germany should do a better job of treating their consumers (DW, 2016).
New business concepts have emerged over the last century, for example strategy, leadership, marketing and entrepreneurship, and these concepts have spread internationally among most companies around the world. Many studies have reviewed these new business structures, and some of them addressed the cultural differences between countries when applying them, but few studies concentrated on how cultural differences affect the implementation of TQM (Lagrosen, 2002). It was generally concluded that although the comprehensive fundamentals of quality management are applicable and similar worldwide in all nations, in actual practice careful tuning must be made and different standards must be aligned, due to different work cultures and traditions in Europe (Krueger, 1999).
Server-Side Rendering (SSR), Single Page Applications (SPA), and Static Site Generation (SSG) are the three most popular ways of making modern Web applications today. Going deeper into these approaches can be helpful for both developers and clients. Developers benefit since they do not need to learn other programming languages and can instead utilize their own experience to build different kinds of Web applications; for example, a developer can use only JavaScript in all three approaches. On the other hand, clients can give their users a better experience.
This master's thesis's purpose was to compare these approaches with a demo application for each and give readers a solid understanding of which approach they should follow. We discussed the step-by-step process of making three applications in the above-mentioned categories. Then we compared them based on criteria such as performance, security, Search Engine Optimization, developer preference, learning curve, content and purpose of the Web application, user interface, and user experience. We also discussed technologies such as JavaScript, React, Node.js, and Next.js, and why and where to use them. The goals we specified before creating the programs were fulfilled and can be validated by comparing the solutions we gave for user problems, which was the applications' primary purpose.
On a regular basis, we hear of well-known online services that have been abused or compromised as a result of data theft. Because insecure applications jeopardize users' privacy as well as the reputation of corporations and organizations, they must be effectively secured from the outset of the development process. The limited expertise and experience of involved parties, such as web developers, is frequently cited as a cause of insecure applications. Consequently, they rarely have a full picture of the security-related decisions that must be made, nor do they understand accurately how these decisions affect the implementation.
The selection of the tools and procedures that can best assist in a certain situation in order to protect an application against vulnerabilities is a critical decision. Regardless of the level of security that results from adhering to security standards, these factors inadvertently result in web applications that are insufficiently secured. JavaScript is heavily relied on as a mainstream programming language for web applications, with several new JavaScript frameworks being released every year.
JavaScript is used both on the server side in web application development and on the client side in web browsers.
However, JavaScript web programming is based on a programming style in which the application developer can, and frequently must, automatically integrate various bits of code from third parties. This potent combination has resulted in a situation today where security issues are frequently exploited. These vulnerabilities can compromise an entire server if left unchecked. Even though there are numerous ad hoc security solutions for web browsers, client-side attacks are also popular. The issue is significantly worse on the server side because the security technologies available for server-side JavaScript application frameworks are nearly non-existent.
Consequently, this thesis focuses on the server-side aspect of JavaScript: the development and evaluation of robust server-side security technologies for JavaScript web applications. There is a clear need for robust security technologies and security best practices in server-side JavaScript that allow fine-grained security.
However, more than ever, there is a requirement to reduce the associated risks without hindering the web application in its functionality.
This is the problem that will be tackled in this thesis: the development of sound security practices and robust security technologies for JavaScript web applications, specifically on the server side, that offer adequate security guarantees without putting too many constraints on their functionality.
Technology advancement has played a vital role in business development; however, it has also opened a broad attack surface. Passwords are one of the essential concepts used in applications for authentication. Companies manage many corporate applications, so employees must meet the password criteria for each of them, which leads to password fatigue. This thesis addresses this issue and how the problem can be overcome by theoretically implementing an IAM solution. In this context, we discussed MFA, SSO, biometrics, strong password policies and access control. We introduced the IAM framework that should be considered while implementing an IAM solution. Implementing an IAM solution adds an extra layer of security.
Even though the internet has only existed for a short period, it has grown tremendously. Today, a significant portion of commerce is conducted entirely online because of the increased number of internet users and technological advancements in web construction. Additionally, cyberattacks and threats have expanded significantly, leading to financial losses, privacy breaches, identity theft, a decrease in customers’ confidence in online banking and e-commerce, and a decrease in brand reputation and trust. When attackers pretend to be a genuine and trustworthy institution, they can steal private and confidential information from a victim. Aside from that, phishing has been an ongoing issue for a long time, and billions of dollars have been lost to the global economy. In recent years, there has been significant progress in the development of phishing detection and identification systems to protect against phishing attacks. Phishing detection technologies frequently produce binary results, i.e., whether a phishing attempt was made or not, with no explanation. On the other hand, phishing identification methodologies identify phishing webpages by visually comparing webpages with predetermined authentic references and reporting phishing together with its target brand, resulting in findings that are understandable. However, technical difficulties in the field of visual analysis limit the applicability of currently available solutions, preventing them from being both effective (with high accuracy) and efficient (with little runtime overhead). Here, we evaluate an existing framework called Phishpedia. This hybrid deep learning system can recognize identity logos from webpage screenshots and match logo variants of the same brand with high precision. Phishpedia provides high accuracy with low runtime. Lastly, unlike other methods, Phishpedia does not require training on any phishing samples whatsoever. Phishpedia exceeds baseline identification techniques (EMD, PhishZoo, and LogoSENSE) in accurately detecting phishing pages in lengthy testing using real phishing data. The effectiveness of Phishpedia was tested and compared against other standard machine learning algorithms and some state-of-the-art algorithms. The given solution performed better than the other algorithms on the given dataset, which is impressive.
Organizations striving to achieve long-term success must have a positive brand image, which has direct implications for the business. In the face of rising cyber threats and intense competition, maintaining a threat-free domain is an important aspect of preserving that image in today's internet world. Domain names are often near-synonyms for brand names for numerous companies. There are likely thousands of domains that try to impersonate big companies in a bid to trap unsuspecting users, who usually fall prey to attacks such as phishing or watering-hole attacks. Because domain names are important for organizations running their business online, they are also particularly vulnerable to misuse by malicious actors. So, how can you ensure that your domain name is protected while still protecting your brand identity? Brand Monitoring, for example, may assist. The term "Brand Monitoring" refers to keeping tabs on an organization's brand performance, reception, and overall online presence across various online channels and platforms [1]. There has been a rise in the need to keep one's domain clear of any linkages to malicious activities as the threat environment has expanded. Since attackers are targeting the domain names of organizations and luring unsuspecting users to visit malicious websites, domain monitoring becomes an important aspect. Another important aspect of brand abuse is how attackers leverage brand logos to create fake and phishing web pages. In this Master's Thesis, we try to solve the problem of classifying impersonated domains using rule-based and machine learning algorithms, and of automating domain monitoring. We first use a rule-based classifier and machine learning algorithms to classify the gathered domains into two buckets, "Parked" and "Non-Parked". In the project's second phase, we deploy object detection models (Scale Invariant Feature Transform - SIFT and Multi-Template Matching - MTM) to detect brand logos on the domains of interest.
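As a small illustration of the template-matching step mentioned for the second phase (not the thesis's SIFT/MTM pipeline itself), OpenCV's normalized cross-correlation template matching can be sketched as follows; the file paths and detection threshold are assumptions.

```python
# Hedged sketch: slide a brand-logo template over a page screenshot and report hits.
import cv2

screenshot = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)   # assumed path
logo = cv2.imread("brand_logo.png", cv2.IMREAD_GRAYSCALE)         # assumed path

scores = cv2.matchTemplate(screenshot, logo, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(scores)

if max_score > 0.8:                      # assumed detection threshold
    h, w = logo.shape
    print(f"Possible logo at {max_loc}, score {max_score:.2f}, box {w}x{h}")
```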
This study investigates the impact of global payroll outsourcing on organizational efficiency and cost reduction based on the analysis of diverse implications stemming from thirty-one (31) survey results. The findings reveal multifaceted challenges and benefits associated with outsourcing global payroll processing.
The research also reveals the main benefits of global payroll outsourcing. Notably, there is a consensus on the reduction in time to process payroll and in cost per payroll processed, and on an improved payroll accuracy rate. Outsourcing streamlines processes, enhances operational efficiency, and contributes to faster, more accurate financial reporting.
Despite these benefits and challenges, statistical analysis reveals weak correlations between outsourcing global payroll and cost reduction or improved efficiency in various parameters, indicating a lack of a significant relationship. Consequently, the results suggest no substantial correlation between global payroll outsourcing and enhanced efficiency or cost reduction based on this study's data.
"Truth is the first casualty of war" is a very often used statement. What rather intrigues the mind is what causes this casualty of truth. If one dives deeper, one may also wonder why this so-called truth is the first target in a war, and who sees the truth before it dies. These questions rarely get answered, as the media and the general public tend to focus more on the human and economic losses in a war or war-like situation. What many fail to realize is that these truthful pieces of information are critical to how a situation develops further. One correct piece of information may change the course of a whole war, saving millions, and one piece of misinformation may do the opposite.
The question here is: what is this information? Who transmits it, and how? What is its source? Although there has been extensive use of the information provided by the secret services of nations, which has come in handy to many, another kind of information system uses information that is publicly available, but in different pieces. This kind of information may come from people posting on social media, publicly available records and much more. The key point about this publicly available information is that it consists of pieces of information available across the globe from many different sources. These can be seen as small pieces of a puzzle that need to be put together to see the bigger picture. This is where OSINT comes into place.
Since its inception, some studies have been conducted to propose and develop new applications for OSINT in various fields. In addition to OSINT, Artificial Intelligence (AI) is a worldwide trend that is being used in conjunction with other areas. AI is the branch of computer science that is in charge of developing intelligent systems. In terms of contribution, this work presents a 9-step systematic literature review as well as consolidated data to support future OSINT studies. Using this information, it was possible to understand where the greatest concentration of publications was, which countries and continents developed the most research, and the characteristics of these publications. What are the trends for the next OSINT-with-AI studies? Which AI subfields are used with OSINT? What are the most popular keywords, and how do they relate to others over time? A timeline describing the application of OSINT is also provided. It also became clear how OSINT was used in conjunction with AI to solve problems in various areas with varying objectives. Private investigators and journalists are no longer the primary users of open-source intelligence (OSINT) gathering and analysis techniques. Approximately 80-90 percent of data analysed by intelligence agencies is now derived from publicly available sources. Furthermore, the massive expansion of the internet, particularly social media platforms, has made OSINT more accessible to civilians who simply want to trawl the Web for information on a specific individual, organisation, or product. The General Data Protection Regulation (GDPR) of the European Union was implemented in the United Kingdom in May 2018 through the new Data Protection Act, with the goal of protecting personal data from unauthorised collection, storage, and exploitation. This document presents a preliminary review of the literature on GDPR-related work.
The reviewed literature is divided into six sections: 'What is OSINT?', 'What are the risks and benefits of OSINT?', 'What is the rationale for data protection legislation?', 'What are the current legislative frameworks in the UK and Europe?', 'What is the potential impact of the GDPR on OSINT?', and 'Have the views of civilian and commercial stakeholders been sought, and why is this important?'. Because OSINT tools and techniques are available to anyone, they have the unique ability to be used to hold power accountable. As a result, it is critical that new data protection legislation does not impede civilian OSINT capabilities.
In this paper we see how OSINT has played an important role in wars across the globe in the past. We also see how OSINT is used in our everyday life, and we gain insights into how OSINT is playing a role in the current war between Russia and Ukraine. Furthermore, we look into some of these OSINT tools and how they work. We also consider a use case where OSINT is used as an anti-terrorism tool. At the end, we see how OSINT has evolved over the years and what OSINT may look like in the future.
This research presents a comprehensive exploration of hydroponic systems and their practical applications, with a focus on innovative solutions for managing environmental and analytical sensors in hydroponic setups. Hydroponic systems, which enable soilless cultivation, have gained increasing importance in modern agriculture due to their resource-efficient and high-yield nature.
The study delves into the development and deployment of the SensVert system, an adaptable solution tailored for hydroponic environments. SensVert offers adaptability and accessibility to farmers across various agricultural domains, addressing contemporary challenges in supervising and managing environmental and analytical sensors within hydroponic setups. Leveraging LoRa technology for seamless wireless data transmission, SensVert empowers users with a feature-rich dashboard for real-time monitoring and control. The study showcases the practical implementation of SensVert through a single sensor node, seamlessly integrating temperature, humidity, pressure, light, and pH sensors. The system automates pH regulation, employing the Henderson-Hasselbalch equation, and precisely controls liquid dosing using a PID controller. At the core of SensVert lies an architecture comprising The Things Stack as the network server, Node-RED as the application server, and Grafana as the user interface. These components synergize within a local network hosted on a Raspberry Pi, effectively mitigating challenges associated with data packet transmission in areas with limited internet connectivity.
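For orientation, the dosing logic described above can be sketched in simplified form: the Henderson-Hasselbalch relation pH = pKa + log10([A-]/[HA]) links pH to the acid/base ratio, and a proportional correction drives the dosing pump toward the target pH. The pKa, gain, and values below are assumptions, not the SensVert calibration.

```python
# Simplified pH-regulation sketch using the Henderson-Hasselbalch relation.
PKA = 6.35                 # assumed buffer system (e.g. carbonic acid, first pKa)

def ratio_for_ph(ph: float) -> float:
    """[A-]/[HA] ratio implied by the Henderson-Hasselbalch equation."""
    return 10 ** (ph - PKA)

def dosing_correction(target_ph: float, measured_ph: float, kp: float = 0.5) -> float:
    """Proportional dosing step (a full PID adds integral and derivative terms)."""
    return kp * (target_ph - measured_ph)   # positive -> dose base, negative -> acid

print(ratio_for_ph(6.0))                    # ~0.45: mostly undissociated acid
print(dosing_correction(6.0, 6.4))          # negative: dose acid
```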
As part of ongoing research, this work also paves the way for future advancements. These include the establishment of a wireless sensor network (WSN) utilizing LoRa technology, enabling seamless over-the-air sensor node updates for maintenance or replacement scenarios. These enhancements promise to further elevate the system's reliability and functionality within hydroponic cultivation, fostering sustainable agricultural practices.
As the population grows, so does the amount of biowaste. As demand for energy grows, biogas is a promising solution to the problem. Lignocellulosic materials are challenging due to their slow degradability caused by the presence of polymers such as cellulose, lignin and hemicellulose. There are several pretreatment methods available to enhance the degradability of such materials, including enzymatic pretreatment. In this pretreatment, a few parameters can influence the results, the most important being the enzyme-to-solid ratio and the solid-to-liquid ratio. During this project, experiments were conducted to determine the optimal conditions for those two factors. It was discovered that a solid-to-liquid ratio of 31 g of buffer per 1 gram of organic dry matter produced the highest reducing sugar release in flasks when combined with 34 mg of protein per 1 gram of organic dry mass. Additionally, another experiment was carried out to investigate the impact of enzymatic pretreatment on biogas production using artificial biowaste as a substrate. Artificial biowaste produced 577.9 NL/kg oDM, while enzymatically pretreated biowaste produced 639.3 NL/kg oDM. This represents a 10.6% rise in cumulative biogas production compared to the substrate without enzymatic pretreatment. By the conclusion of the investigation, specific cumulative dry methane yields of 364.7 NL/kg oDM and 426.3 NL/kg oDM were obtained from artificial biowaste without and with enzymatic pretreatment, respectively. This resulted in a methane production boost of 16.9%. Additionally, in the reactors with enzymatically pretreated substrate, the kinetic constant was lower by more than a factor of two, while the maximum biogas volume increased, compared with the reactors without enzymatic pretreatment.
Study of impact of change in market economics of Biosimilars due to SPC waiver on EU 469/2009
(2023)
This research was conducted to understand and investigate the impact of the SPC waiver (Regulation (EU) 2019/933) made as an amendment to Regulation (EC) No 469/2009. The research involved the analysis and extraction of data to compile the exact number of biological products impacted by the SPC waiver. The top-5 highest-selling products were identified according to expert opinion. The sales revenue opportunity available to these top-5 products in the top-5 non-EU markets for early exports is investigated. Additionally, a survey was conducted to assess the readiness of the industry for these changes. The information from this study will be very useful to students of biopharmaceutical market research and to stakeholders in the biopharmaceutical industry.
In the past ten years, applications of artificial neural networks have changed dramatically, outperforming earlier predictions in domains like robotics, computer vision, natural language processing, healthcare, and finance. Future research and advancements in CNN architectures, algorithms and applications are expected to revolutionize various industries and daily life further. Our task is to find current products that resemble a given product image and description. Deep learning-based automatic product identification is a multi-step process that starts with data collection and continues with model training, deployment, and continuous improvement. The caliber and variety of the dataset, the design selected, and ongoing testing and improvement all affect the model's effectiveness. We achieved 81.47% training accuracy and 72.43% validation accuracy for our combined text and image classification model. Additionally, we have discussed the outcomes from the other dataset and numerous methods for creating an appropriate model.
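A combined image-and-text classifier of the general kind summarized above can be sketched as follows; all layer sizes, the vocabulary size, and the number of classes are placeholder assumptions rather than the model actually trained.

```python
# Hedged sketch: CNN image branch + embedded text branch, concatenated before softmax.
from tensorflow import keras

NUM_CLASSES, VOCAB, MAXLEN = 10, 5000, 50    # placeholder assumptions

image_in = keras.Input(shape=(128, 128, 3))
x = keras.layers.Conv2D(32, 3, activation="relu")(image_in)
x = keras.layers.MaxPooling2D()(x)
x = keras.layers.Conv2D(64, 3, activation="relu")(x)
x = keras.layers.GlobalAveragePooling2D()(x)

text_in = keras.Input(shape=(MAXLEN,))
t = keras.layers.Embedding(VOCAB, 64)(text_in)
t = keras.layers.GlobalAveragePooling1D()(t)

merged = keras.layers.concatenate([x, t])
out = keras.layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = keras.Model(inputs=[image_in, text_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```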
Conceptualization and implementation of automated optimization methods for private 5G networks
(2023)
Today’s companies are adjusting to the new connectivity realities. New applications require more bandwidth, lower latency, and higher reliability as industries become more distributed and autonomous. Private 5th Generation (5G) networks, known as 5G Non-Public Networks (5G-NPN), are a novel 3rd Generation Partnership Project (3GPP)-based type of 5G network that can deliver seamless and dedicated wireless access for a particular industrial use case by meeting that application's requirements. To meet these requirements, several radio-related aspects and network parameters should be considered. In many cases, the behavior of the link connection may vary based on wireless conditions, available network resources, and User Equipment (UE) requirements. Furthermore, optimizing these networks can be a complex task due to the large number of network parameters and KPIs that need to be considered. For these reasons, traditional solutions and static network configurations are not feasible or are simply impossible. Although papers in the literature address several optimization methods for cellular networks in industrial scenarios, more insight into these existing but complex or unknown methods is needed.
In this thesis, a series of optimization methods was implemented to deliver an optimal configuration solution for a 5G private network. To facilitate this, a testing system was developed. This system enables remote control over the UE and the 5G network, establishment of a test environment, extraction of relevant KPI reports from both the UE and network sides, assessment of test results and KPIs, and effective utilization of the optimization and sampling techniques.
The research highlights the advantageous aspects of automated testing by using OFAT, Simulated Annealing, and Random Forest Regressor methods. With OFAT, a common sampling method, a sensitivity analysis revealed the impact of each single parameter variation on the performance of the network. With Simulated Annealing, an optimal solution with an MSE of roughly 10 was found. The Random Forest Regressor presented a significant advantage over Simulated Annealing by providing substantial benefits in time efficiency due to its machine-learning capability. Additionally, it was seen that by providing a larger dataset or using other machine-learning techniques, the solution might become more accurate.
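To make two of the optimization methods named above concrete, the following hedged sketch applies simulated annealing and a RandomForestRegressor surrogate to an abstract "configuration to KPI error" problem; the objective function and parameter ranges are toy assumptions, not the private-5G parameters of the thesis.

```python
# Hedged sketch: simulated annealing plus a random-forest surrogate of a KPI.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def kpi_error(cfg):
    """Toy stand-in for the measured KPI deviation of a configuration."""
    return float(np.sum((cfg - np.array([0.3, 0.7, 0.5])) ** 2))

# --- Simulated annealing over the configuration vector ------------------
cfg = rng.uniform(0, 1, size=3)
best, best_err, temperature = cfg, kpi_error(cfg), 1.0
for step in range(500):
    candidate = np.clip(cfg + rng.normal(scale=0.1, size=3), 0, 1)
    delta = kpi_error(candidate) - kpi_error(cfg)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        cfg = candidate                       # accept (sometimes uphill) moves
    if kpi_error(cfg) < best_err:
        best, best_err = cfg, kpi_error(cfg)
    temperature *= 0.99                        # cooling schedule

# --- Random-forest surrogate trained on sampled configurations ----------
X = rng.uniform(0, 1, size=(200, 3))
y = np.array([kpi_error(x) for x in X])
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(best_err, surrogate.predict(best.reshape(1, -1)))
```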
The goal of this thesis is to thoroughly investigate the concepts of stand-alone operation and decarbonization of optical fiber networks. Because of their dependability, fast speed, and capacity, optical fiber networks are vital in modern telecommunications. Their considerable energy consumption and carbon emissions, on the other hand, constitute a danger to global sustainability objectives and must be addressed.
The first section of the thesis presents a summary of the current state of optical fiber networks, their components, and the energy consumption connected with them. This part also goes over the difficulties of lowering energy usage and carbon emissions while preserving network performance and dependability.
The second section of the thesis focuses on the stand-alone idea, which entails powering the optical fiber network with renewable energy sources and energy-efficient technology. This section investigates and explores the possibilities of renewable energy sources like solar and wind power to power the network. It also investigates energy-efficient technologies like virtualization and cloud computing, as well as their potential to minimize network energy usage.
The third section of the thesis focuses on the notion of decarbonization, which entails lowering carbon emissions linked with the optical fiber network. This section looks at various carbon-reduction measures, such as employing low-carbon energy sources and improving energy efficiency. It also covers the relevance of carbon offsets and the difficulties associated with adopting decarbonization measures in the context of optical fiber networks.
The fourth section of the thesis compares the ideas of stand-alone and decarbonization. It investigates the advantages and disadvantages of each strategy, as well as their potential to minimize energy consumption and carbon emissions in optical fiber networks. It also explores the difficulties in applying these notions as well as potential hurdles to their wider adoption.
Finally, the thesis emphasizes the need to address the energy consumption and carbon emissions associated with optical fiber networks. It outlines important obstacles and potential impediments to adopting these initiatives, gives insights into potential ways of reducing them, and makes suggestions for further study in this area.
Much of the research in the field of audio-based machine learning has focused on recreating human speech via feature extraction and imitation, known as deepfakes. The current state of affairs has prompted a look into other areas, such as the recognition of recording devices, and potentially speakers, by only analysing sound files. Segregation and feature extraction are at the core of this approach.
This research focuses on determining whether a recorded sound can reveal the recording device with which it was captured. Each microphone manufacturer and model has its own characteristics and imperfections that can have subtle but compounding effects on a recording, whether differences in noise or in the tempo and sensitivity of the microphone while recording. By studying these slight perturbations, it was found to be possible to distinguish between microphones based on the sounds they recorded.
After the recording, pre-processing, and feature extraction phases were completed, the prepared data was fed into several different machine learning algorithms, with results ranging from 70% to 100% accuracy and showing the Multi-Layer Perceptron and Logistic Regression to be the most effective for this type of task.
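As an illustration of this classification step only, the following is a minimal scikit-learn sketch comparing a Logistic Regression and a Multi-Layer Perceptron on per-recording feature vectors; the synthetic features, class count, and any resulting accuracies are placeholders, not the thesis dataset or results.

```python
# Hedged sketch of the classification step: feature extraction is assumed to have
# already produced one fixed-length feature vector per recording. The synthetic
# data and class count below are placeholders, not the thesis dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_mics, per_mic, n_features = 4, 100, 40          # hypothetical setup
X = np.vstack([rng.normal(loc=i, scale=2.0, size=(per_mic, n_features))
               for i in range(n_mics)])           # stand-in for audio features
y = np.repeat(np.arange(n_mics), per_mic)         # microphone labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "multi-layer perceptron": make_pipeline(StandardScaler(),
                                            MLPClassifier(hidden_layer_sizes=(64, 32),
                                                          max_iter=500, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")
```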
This was further extended to distinguishing between two microphones of the same make and model. Successfully identifying identical models of a microphone suggests that the small deviations in their manufacturing process are enough to uniquely distinguish them and potentially to target the individuals using them. This does not, however, take into account any form of compression applied to the sound files, as compression may alter or degrade some or most of the distinguishing features that this experiment relies on.
Building on prior research in the area, such as the work by Das et al. in which different acoustic features were explored and assessed for their ability to uniquely fingerprint smartphones, more concrete results, along with the methodology by which they were achieved, are published in this project’s publicly accessible code repository.
Estimation and projecting total steel industry production costs from 2019 to 2030 for Germany
(2023)
This thesis analyses the total production cost of the German steel industry from 2019 to 2022 and projects the German steel industry's total production cost until 2030. The research separates the costs of steel production into their primary components, such as raw materials, energy, CO2 costs, capital expenses, and operating expenses. The cost of steel production is determined separately for primary steelmaking with the blast furnace and basic oxygen furnace (BF-BOF) and secondary steelmaking with the electric arc furnace (EAF).
The analysis indicates that, following the COVID-19 pandemic and the fuel crisis, the overall cost of producing steel in Germany rose progressively over the previous few years, reaching its peak in the first half of 2022. In addition, there are considerable disparities between the production costs of primary and secondary steelmaking, with primary steelmaking generally being more expensive.
In this analysis, the total cost of production for the German steel industry in the year 2030 has been estimated by taking into account historical trends as well as other predictions that are currently available.
This thesis provides overall insights into the economics of the German steel sector. By giving thorough information on production costs and their changes over time, this research can help guide crucial future investment decisions in this essential industry. To ensure long-term success, the findings emphasize the significance of investing in more sustainable and environmentally friendly steel production processes.
Total Cost of Ownership (TCO) is a key tool for gaining a complete understanding of the costs associated with an investment, as it covers not only the initial acquisition costs but also the long-term costs related to operation, maintenance, depreciation, and other factors. In the context of the cement industry, TCO is especially important due to the complexity of the production processes and the wide variety of components and machinery involved.
For this reason, a TCO analysis for the cement industry has been conducted in this study, with the objective of showing the different components of the cost of production. This analysis allows the reader to gain knowledge of these costs so that, within the industrial model, informed decisions can be made on the adoption of technologies and practices that reduce costs in the long run and improve operational efficiency.
In particular, this study seeks to give visibility to technologies and practices that enable the reduction of carbon emissions in cement production, thus contributing to the sustainability of the industry and the protection of the environment. By being at the forefront of sustainability issues, the cement industry can contribute to the adoption of environmentally friendly technologies and enable the development of people and industry.
Oxyfuel technology has been selected as the carbon capture solution for the cement industry due to its practical applicability, low costs, and straightforward adaptation from non-capture processes. The adoption of this technology allows for a significant reduction in CO2 emissions, which is a crucial factor in achieving sustainability in the cement manufacturing process.
Carbon capture and storage technologies represent a high investment. Although these technologies increase the cost of production, Oxyfuel technology is one of the most economically viable, being the cheapest technology per capture according to the comparison. Moreover, this price increase is balanced by a technical advantage: the carbon capture efficiency of this technology reaches 90%. This level of efficiency reduces the taxes levied on CO2 emissions, helping make the cement manufacturing process sustainable.
The effects of climate change, including severe storms, heat waves, and melting glaciers, are highlighted as an urgent concern, emphasising the need to decrease carbon emissions to restrict global warming to 1.5°C. To accomplish this goal, it is vital to substitute fossil fuel-based power plants with renewable energy sources like solar, wind, hydro, and biofuels. Despite some progress being made, the proportion of renewables used in generating electricity is still lower than the levels needed for 2030 and 2050. Decarbonising the power grid is also critical in lowering the energy consumption of buildings, which is responsible for a substantial percentage of worldwide electricity usage. Even though there has been substantial expansion in the worldwide renewable energy market in the past 15 years, the transition to renewable energy sources also requires taking into account the importance of energy trading.
Peer-to-peer (P2P) electricity trading is an emerging type of energy exchange that can revolutionise the energy sector by providing a more decentralised and efficient way of trading energy. This research deals with P2P electricity trading in a carbon-neutral scenario. 'Python for Power System Analysis' (PyPSA) was used to develop models through which the P2P effect was tested. Data for the entire state of Baden-Württemberg (BW) was collected. Three scenarios were considered while developing the models: 2019 (base), 2030 (coal phase-out), and 2040 (climate neutral). Alongside this, another model with no P2P trading was developed for comparison. In addition, a use case of community storage in a P2P trading network is presented.
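As a rough illustration of how such models are assembled, the sketch below builds a toy PyPSA network with a PV generator, a conventional fallback generator, community storage, and a load, and solves it with linear optimal power flow. The bus names, capacities, costs, and time series are invented for illustration and are not the Baden-Württemberg dataset or the scenario models used in this thesis.

```python
# Hedged sketch: a toy PyPSA network, not the Baden-Württemberg model from the
# thesis. Bus names, capacities, costs, and time series are illustrative only.
# Requires an LP solver (e.g., glpk) to be installed alongside PyPSA.
import pypsa

n = pypsa.Network()
n.set_snapshots(range(4))                      # four illustrative hourly snapshots

n.add("Bus", "prosumer")
n.add("Bus", "consumer")
n.add("Line", "p2p_link", bus0="prosumer", bus1="consumer", x=0.1, s_nom=100)

# Rooftop PV at the prosumer bus with a simple availability profile.
n.add("Generator", "pv", bus="prosumer", p_nom=60,
      marginal_cost=0, p_max_pu=[0.0, 0.6, 0.9, 0.3])
# A dispatchable (grid) generator as the conventional fallback.
n.add("Generator", "grid", bus="consumer", p_nom=100, marginal_cost=50)
# Community storage participating in the P2P network.
n.add("StorageUnit", "community_battery", bus="prosumer", p_nom=30, max_hours=4)

n.add("Load", "household_demand", bus="consumer", p_set=[20, 40, 50, 30])

n.lopf()                                       # linear optimal power flow
# (newer PyPSA releases expose the same step as n.optimize())
print(n.generators_t.p)                        # dispatch per generator and snapshot
print(n.storage_units_t.p)                     # storage charge/discharge schedule
```

Comparing the resulting dispatch and costs with and without the P2P link is, in miniature, the kind of comparison the thesis performs at the scale of the BW scenarios.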
The research concludes that P2P has a significant positive effect on a pathway to achieve climate neutrality. The findings show that the share of renewables in electricity generation is increasing compared to conventional sources in BW, which can be traded to meet the demand. From the storage analysis, it can be concluded that community storage can be effectively utilised in P2P trading. While the emissions are reduced, the operating costs are also reduced when the grid has P2P trading available. By highlighting the benefits of P2P trading, this research contributed to the growing body of research on the effectiveness of P2P trading in an electricity network grid.
The primary objective of this thesis is to examine the lean accounting transformation, which involves applying lean management principles to the accounting domain. In recent years, various sectors, including manufacturing, healthcare, and services, have experienced success with lean management practices. Nevertheless, the implementation of lean accounting within financial management has not been as extensively explored. This research aims to bridge that gap by scrutinizing the benefits and potential drawbacks of adopting lean accounting practices in business operations.
This research uses a combination of qualitative techniques and an extensive literature review to better understand the present subject matter. By describing the ideas of lean management and standard accounting and highlighting the fundamental distinctions between the two systems, the literature study lays a theoretical framework. The case studies illustrate the benefits of adopting lean accounting processes with real-world examples of firms that have made the transition effectively.
In the quantitative analysis of lean accounting's impact, both financial and operational factors are examined extensively. The results indicate that companies embracing lean accounting practices experience significant improvements in productivity, cost reduction, and decision-making quality. By highlighting the potential gains to be made by incorporating lean techniques into accounting procedures, this study adds to the current body of knowledge on lean management. The findings offer practical implications for accounting professionals, business leaders, and policymakers interested in leveraging lean accounting to drive organizational performance improvement. The thesis concludes with suggestions for further study in the area of lean accounting.
Linux and Linux-based operating systems have been gaining popularity among general users and developers alike. Many large enterprises use Linux for the servers that host their websites, and some even require their developers to have knowledge of the Linux OS. Many embedded systems also run Linux-based operating systems. With this increasing popularity comes the need to secure a system that so many rely on, be it to protect the data it stores, to protect the integrity of the system itself, or to protect the availability of the services it offers. Many researchers and Linux enthusiasts have come up with various ways to secure the Linux OS; however, with every update or change, malicious attackers find new vulnerabilities and new bugs, which calls for additional ways to secure these systems.
This thesis explores the possibility and feasibility of another way to secure the Linux OS, specifically securing its terminal by altering the terminal commands. This gets in the way of attackers who have gained terminal access and delays them, giving response and forensics teams more time to stop the attack, minimize the damage, restore operations, and identify, collect, and store evidence of the cyber-attack. This research discusses the advantages and disadvantages of various security measures and compares and contrasts them with the method suggested here.
This research is significant because it paints a clearer picture of the state of the art of Linux and Linux-based operating system security and addresses the concerns of security enthusiasts, while exploring an uncharted area of security that has been regarded as a minor part of protecting operating systems because of the various limitations and problems it entails. This research addresses these concerns while exploring a few ways to solve them, as well as identifying the areas and situations in which the proposed method is ideal, and those in which such a method would be more of a burden than a help.
In recent years, the demand for reliable power, driven by sensitive electronic equipment, has surged. Even minor deviations from the nominal supply can lead to malfunctions or failure. Despite technological advancements, power quality issues persist due to various factors like short circuits, overloads, voltage fluctuations, unbalanced loads, and non-linear loads.
This thesis extensively explores power quality anomalies in the industrial and commercial sectors, using power system data as the primary analytical resource. It addresses the critical need for power supply reliability and stability in today's evolving power grid industry, which is affected by non-linear loads, renewable energy integration, and electric vehicles.
The core of this thesis is a comprehensive investigation of power quality, with a focus on frequency, power, and harmonics in voltage and current signals. The research employs Python programming for advanced data analysis, utilizing techniques such as Fast Fourier Transform (FFT) analysis. The primary objective is to provide valuable insights aimed at elevating power supply quality and enhancing reliability in both industrial and commercial environments.
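The following is a minimal sketch of the kind of FFT-based harmonic analysis described above, applied to a synthetic 50 Hz waveform with injected 3rd and 5th harmonics rather than to measured power system data; the sampling rate and amplitudes are assumed values.

```python
# Hedged sketch of FFT-based harmonic analysis on a synthetic voltage waveform.
import numpy as np

fs = 10_000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)                  # 0.2 s window = 10 fundamental cycles
v = (230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
     + 15 * np.sin(2 * np.pi * 150 * t)        # 3rd harmonic
     + 8 * np.sin(2 * np.pi * 250 * t))        # 5th harmonic

spectrum = np.fft.rfft(v)
freqs = np.fft.rfftfreq(len(v), 1 / fs)
magnitude = 2 * np.abs(spectrum) / len(v)      # peak amplitude per frequency bin

fundamental = magnitude[np.argmin(np.abs(freqs - 50))]
harmonics = [magnitude[np.argmin(np.abs(freqs - 50 * k))] for k in range(2, 11)]
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

print(f"fundamental amplitude: {fundamental:.1f} V")
print(f"total harmonic distortion: {100 * thd:.2f} %")
```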
The Internet of Things is spreading significantly in every sector, including households, a variety of industries, healthcare, and emergency services, with the goal of assisting all of those infrastructures by providing intelligent means of service delivery. An Internet of Vulnerabilities (IoV) has emerged as a result of the pervasiveness of the Internet of Things (IoT), which has led to a rise in the use of IoT-connected applications and devices in our day-to-day lives. The manufacture of IoT devices is growing at a rapid pace, but security and privacy concerns are not being taken into consideration. These intelligent IoT devices are especially vulnerable to a variety of attacks at both the hardware and software levels, which leaves them exposed to potential abuse. This master’s thesis provides a comprehensive overview of IoT security and privacy, covering applications, security architecture frameworks, and a taxonomy of cyberattacks based on various architecture models, such as the three-layer, four-layer, and five-layer models. The fundamental purpose of this thesis is to provide recommendations for alternative mitigation strategies and corrective actions by using a holistic rather than a layer-by-layer approach. We discuss the most effective solutions to the privacy and safety problems associated with the IoT and present them in the form of research questions. In addition, we investigate a number of further possible directions for the development of this research.
AI-based Ground Penetrating Radar Signal Processing for Thickness Estimation of Subsurface Layers
(2023)
This thesis focuses on the estimation of subsurface layer thickness using Ground Penetrating Radar (GPR) A-scan and B-scan data through the application of neural networks. The objective is to develop accurate models capable of estimating the thickness of up to two subsurface layers.
Two different approaches are explored for processing the A-scan data. In the first approach, A-scans are compressed using Principal Component Analysis (PCA), and a regression feedforward neural network is employed to estimate the layers’ thicknesses. The second approach utilizes a regression one-dimensional Convolutional Neural Network (1-D CNN) for the same purpose. Comparative analysis reveals that the second approach yields superior results in terms of accuracy.
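For illustration, the sketch below shows one possible regression 1-D CNN of the general kind described above, implemented in TensorFlow/Keras on synthetic A-scan-shaped data; the architecture, trace length, and data are assumptions and do not reproduce the network or GPR data used in the thesis.

```python
# Hedged sketch of a regression 1-D CNN; architecture, input length, and data
# are illustrative assumptions, not the thesis model or GPR dataset.
import numpy as np
import tensorflow as tf

n_samples, trace_len = 1000, 256               # hypothetical A-scan length
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, trace_len, 1)).astype("float32")   # stand-in A-scans
y = rng.uniform(0.05, 0.5, size=(n_samples, 2)).astype("float32")  # two layer thicknesses (m)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(trace_len, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                  # regression output: two thicknesses
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))         # [mse, mae] on the stand-in data
```

The PCA-plus-feedforward alternative mentioned above would replace the convolutional layers with a PCA compression step (e.g., scikit-learn's PCA) feeding dense layers.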
Subsequently, the proposed 1-D CNN architecture is adapted and evaluated for Step Frequency Continuous Wave (SFCW) radar, expanding its applicability to this type of radar system. The effectiveness of the proposed network in estimating subsurface layer thickness for SFCW radar is demonstrated.
Furthermore, the thesis investigates the utilization of GPR B-scan images as input data for subsurface layer thickness estimation. A regression CNN is employed for this purpose, although the results achieved are not as promising as those obtained with the 1-D CNN using A-scan data. This disparity is attributed to the limited availability of B-scan data, as B-scan generation is a resource-intensive process.
As e-commerce platforms have grown in popularity, new difficulties have emerged, such as the growing use of bots, automated programs that engage with e-commerce websites. Even though some bots are helpful, others are malicious and can seriously hurt e-commerce platforms by making fictitious purchases, posting fake reviews, and taking control of user accounts. Therefore, more effective and precise bot identification systems are urgently needed to stop such actions. This thesis proposes a methodology for detecting bots in e-commerce using machine learning algorithms such as K-nearest neighbors, Decision Tree, Random Forest, Support Vector Machine, and Neural Network. The purpose of the research is to assess and contrast the output of these machine learning methods. The suggested approach is based on publicly available data, and the study focuses on the research of bots in e-commerce.
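As a minimal sketch of such a comparison, the following scikit-learn snippet trains the five named algorithm families on synthetic session features and reports accuracy and F1; the features, class balance, and any resulting scores are placeholders, not the public dataset or results of this thesis.

```python
# Hedged sketch of the algorithm comparison, run on synthetic session features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

# Stand-in for labelled web sessions (1 = bot, 0 = human).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "support vector machine": SVC(kernel="rbf"),
    "neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, F1={f1_score(y_te, pred):.3f}")
```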
The purpose of the study is to provide an overview of bots in e-commerce, information on the different kinds and traits of bots, and a review of current research on bots in e-commerce and related work on bot detection. The research also seeks to create a more precise and effective bot detection system and to identify the critical factors in detecting bots in e-commerce.
This research is significant because it sheds light on the increasing issue of bots in e-commerce and the requirement for more effective bot detection systems. The suggested approach of using machine learning algorithms to identify bots in e-commerce can give e-commerce platforms a more precise and effective bot detection system to stop malicious bot activities. The study’s results can also be used to create a more effective bot detection system and to pinpoint key elements in detecting bots in e-commerce.
The cellulase-producing Trichoderma reesei strain RL-P37 exhibits significant potential, yielding 7.3 g/L of cellulase in 241 hours. Microscopic investigations reveal a link between spore formation and enzyme production, suggesting the need for research into the intricate relationship between enzyme production, stress responses, and the nutritional prerequisites of the fungus. Compared with water treatment, sodium hydroxide (NaOH) treatment reduces the micronutrient content and extracts the carbon source as filtrate. Despite these challenges, research by He et al. (2021) highlights the efficiency of NaOH in extracting cellulose from plant-based sources, and NaOH pretreatment can prove effective if a proper cultivation method is designed. The selection of inducers for enzyme induction gains importance, with soluble inducers, as emphasized by Zhang et al. (2022), exhibiting superior effectiveness; adopting soluble inducers is therefore recommended when designing cultivation methods for improved enzyme production in shaking flasks. Enzymatic treatment of bio-waste, as outlined by Hu et al. (2021), shows promise in augmenting essential component content by breaking down plant cell walls and intercellular compartments. However, the feasibility of using an artificial bio-waste medium for cultivating Trichoderma reesei is questioned. Investigating the impact of micronutrient levels, particularly the inhibitory role of zinc, on fungal growth becomes essential. These findings underscore the necessity of ongoing research and optimization in cellulase production, emphasizing both strain productivity and cultivation methodologies.
Self-sufficient enzymes belong to the cytochrome P450 (CYP) group and are known for their superior hydroxylation catalytic activity. In the pursuit of identifying new pesticides to combat antimicrobial-resistant pathogens, we employed BM3 wild type (BM3-WT), the fastest monohydroxylating CYP, along with its seven homologs, to investigate the production of potential hydroxylated derivatives from the established pesticide 4-oxocrotonic acid using a high-pressure liquid chromatography (HPLC) method. Following the recombinant production of BM3-WT and three other homologs in E. coli, and their subsequent purification using Immobilized Metal Affinity Chromatography (IMAC), a novel enzyme assay approach was developed as a substitute for the carbon monoxide (CO) assay. This new method relied on measuring the NADPH consumption of BM3-WT at 340 nm with palmitic acid as substrate. Leveraging this established technique, we explored the substrate specificity of BM3-WT and its homologs not only on palmitic acid but also on other structurally similar compounds, including 4-oxocrotonic acid. The results obtained from the established NADPH assay indicate that all tested enzymes displayed greater catalytic activity on 4-oxocrotonic acid in comparison to other substrates with similar structures. However, the impact of BM3-WT and its homologs on 4-oxocrotonic acid varied in terms of product specificity. Enzymes such as Poh, Trr and Bas-CYP D exhibited specificity in producing solely monohydroxylated products, while others tended to yield dehydroxylated and ketol metabolites.
The purpose of this master's thesis was to set up a test bed for the adsorption of chemical compounds by carbon-based sorbents and polymers and to develop a liquid chromatography method for the detection of these substances.
The study demonstrated the effectiveness of both polymer and biochar sorbents for the adsorption of specific substances. The results obtained open new paths for the study of biochar in the treatment of contaminated water. Some biochars made from plant-based materials have been shown to be almost as effective as the commercial products used in plants. The developed chromatography method allows efficient separation of the substances and their detection.
Encryption techniques allow sensitive information to be stored and transferred securely by using encryption at rest and encryption in transit, respectively. However, when computation is performed on these sensitive data, the data first need to be decrypted and then encrypted again after the computation. During the computation, the sensitive data are vulnerable to attackers because they are in decrypted form. Homomorphic encryption, a special type of encryption that allows computation on encrypted data, can solve this problem. The best way to achieve maximum security with homomorphic encryption is to perform at least the homomorphic encryption and decryption on the client side (browser) of a web application, without trusting the server. At present there are many libraries with different homomorphic schemes available, but very few, if any, JavaScript libraries for performing homomorphic encryption on the client side of a web application. This thesis therefore focuses on a JavaScript implementation of client-side homomorphic encryption. The fully homomorphic encryption scheme BFV was selected for the implementation. After implementing the scheme based on the “py-fhe” library, tests were carried out to determine the applicability (in terms of time consumption, security, and correctness) of this implementation in a web application by comparing performance and security for different test cases and settings.
Cloud computing is a combination of technologies, including grid computing and distributed computing, that use the Internet as a network for service delivery. Organizations can select the price and service models that best accommodate their demands and financial restrictions. Cloud service providers choose the pricing model for their cloud services, taking the size, usage, user, infrastructure, and service size into account. Thus, cloud computing’s economic and business advantages are driving firms to shift more applications to the cloud, boosting future development. It enlarges the possibilities of current IT systems.
Over the past several years, the "cloud computing" industry has exploded in popularity, going from a promising business concept to one of the fastest expanding areas of the IT sector. Most enterprises are hosting or installing web services in a cloud architecture for management simplicity and improved availability. Virtual environments are applied to accomplish multi-tenancy in the cloud. A vulnerability in a cloud computing environment poses a direct threat to the users’ privacy and security. In our digital age, the user has many identities. At all levels, access rights and digital identities must be regulated and controlled.
Identity and access management (IAM) is the process of managing identities and regulating access privileges. It is considered the front-line soldier of IT security. The goal of identity and access management systems is to protect an organization’s assets by limiting access to just those who need it and only in the appropriate cases. It is required for all businesses with thousands of users and is the best practice for ensuring user access control. It identifies, authenticates, and authorizes people to access an organization’s resources, which in turn enhances access management efficiency. Authentication, authorization, data protection, and accountability are just a few of the areas in which cloud-based web services have security issues; these features fall under identity and access management.
The implementation of identity and access management (IAM) is essential for any business. It is becoming more and more business-centric, so more than technical know-how is needed to succeed. Organizations may save money on identity management and, more crucially, become much nimbler in their support of new business initiatives if they have developed sophisticated IAM capabilities. We used these features of identity and access management to validate the robustness of the cloud computing environment in comparison with traditional identity and access management.
The rapid pace of innovation and technological advancements has led to the emergence of start-up companies in various sectors. To remain competitive and sustainable, start-ups need to make informed business decisions that can enhance their operations and profitability. Business Intelligence (BI) has become an essential tool for businesses of all sizes in managing their operations and gaining a competitive edge.
This master thesis explores the role of Business Intelligence in start-up companies. The study aims to investigate the use of BI in start-up companies, the drivers of and inhibitors to its adoption, and their relationship with price. The research conducted for this thesis involves a review of relevant literature on Business Intelligence, start-up companies, and related topics. The study also includes a structured survey of entrepreneurs, start-up company executives, and BI experts to gather data for a quantitative analysis of the topic.
The thesis aims to contribute to the existing body of knowledge on Business Intelligence and its role in start-up companies. The research conducted for this thesis can be of value to start-up entrepreneurs, investors, and other stakeholders who seek to improve their understanding of the benefits and challenges of implementing BI in start-up companies.
Cloud computing has revolutionized the way businesses operate by providing them with access to scalable, cost-effective, and flexible IT resources. This technology has enabled businesses to store, manage, and process data more efficiently, leading to improved competitiveness and increased revenue. The purpose of this thesis is to explore the impacts of using cloud computing from a business perspective. The research employs both primary and secondary sources of data, including a literature review, interviews with employees who have more than 5 years of experience, a questionnaire, and observations from Billwerk+ company.
The findings of this research indicate that cloud computing has had a significant impact on businesses, providing them with cost savings, improved agility and flexibility, and enhanced access to data and applications. However, it has been revealed that the benefits of cloud computing for companies may vary according to the departments of the employees. The results of this research contribute to the existing body of knowledge on the topic of cloud computing and its impact on businesses. The findings of this thesis can be used by business owners, managers, technology professionals, and students to make informed decisions about the adoption and use of cloud computing technology.
In conclusion, this thesis provides a comprehensive understanding of the impacts of using cloud computing from a business perspective, highlighting the factors that companies consider when deciding to use cloud environments and the views from different departments. The results of this research will be valuable to a wide range of individuals interested in exploring the implications of cloud computing for businesses.