The growing reliance on web applications for purposes such as e-commerce, social networking, and business operations makes strong security measures essential for protecting sensitive data and maintaining service availability. There have been numerous incidents in which attackers gained access to sensitive information, held providers hostage with distributed denial-of-service attacks, or compromised a company's network through a vulnerable application.
The Bundesamt für Sicherheit in der Informationstechnik (BSI, the German Federal Office for Information Security) has published a comprehensive set of information security principles and standards that provide a solid basis for developing a secure web application.
The purpose of this thesis is to design and build a secure web application that adheres to the requirements established in the BSI guideline, thereby addressing the growing concerns regarding web application security. We also evaluate the efficacy of the recommendations by conducting security tests on the prototype application and determining whether the vulnerabilities associated with an insecure web application have been mitigated.
Though the basic concept of a ledger that anyone can view and verify has been around for quite some time, today's blockchains bring much more to the table, including a way to incentivize users. The coins awarded to the miner or validator were the first such incentive, ensuring that they fulfilled their duties. This thesis draws inspiration from peer efforts and uses the same kind of incentive to achieve a different goal: encouraging users to discuss their opinions and find scientific or logical backing for their standpoints. While traditional chains form a consensus on a version of financial "truth", the same mechanism can be applied to ideological truths. To achieve this, this work explores a modified, scaled proof-of-stake consensus mechanism: Reputation-Scaled Proof of Stake. Reputation can be built over time by consistently voting for the winning side or by sticking firmly to one's beliefs. The thesis hopes to bridge a gap in current consensus algorithms and incentivize critical reasoning.
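The core idea of scaling a validator's stake by a reputation score can be sketched in a few lines. The weighting formula, participant names, and values below are illustrative assumptions, not details taken from the thesis:

```python
import random

def effective_weight(stake: float, reputation: float) -> float:
    """Scale the coin stake by reputation so consistent voters gain influence."""
    return stake * reputation

def select_validator(participants: dict, rng: random.Random) -> str:
    """Pick one validator with probability proportional to its scaled stake."""
    names = list(participants)
    weights = [effective_weight(s, r) for s, r in participants.values()]
    return rng.choices(names, weights=weights, k=1)[0]

participants = {
    "alice": (100.0, 1.5),  # equal stake, good reputation -> weight 150
    "bob":   (100.0, 0.5),  # equal stake, poor reputation -> weight 50
}
print(select_validator(participants, random.Random(0)))
```

With equal stakes, alice is three times as likely to be selected as bob, which is the "reputation-scaled" part of the mechanism.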
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
In 2015, Google engineer Alexander Mordvintsev presented DeepDream as a technique to visualise the feature analysis capabilities of deep neural networks that have been trained on image classification tasks. For a brief moment, this technique enjoyed some popularity among scientists, artists, and the general public because of its capability to create seemingly hallucinatory synthetic images. But soon after, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interaction with these models have shifted away from a direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the model's internal working. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research to be necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, choice of audio datasets, and method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio. These models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
This paper describes the authors' first experiments in creating an artificial dancer whose movements are generated through a combination of algorithmic and interactive techniques with machine learning. This approach is inspired by the time-honoured practice of puppeteering. In puppeteering, an articulated but inanimate object seemingly comes to life through the combined effects of a human controlling select limbs of a puppet while the rest of the puppet's body moves according to gravity and mechanics. In the approach described here, the puppet is a machine-learning-based artificial character that has been trained on motion capture recordings of a human dancer. A single limb of this character is controlled either manually or algorithmically while the machine-learning system takes over the role of physics in controlling the remainder of the character's body. But rather than imitating physics, the machine-learning system generates body movements that are reminiscent of the particular style and technique of the dancer who was originally recorded for acquiring training data. More specifically, the machine-learning system operates by searching for body movements that are not only similar to the training material but that it also considers compatible with the externally controlled limb. As a result, the character playing the role of a puppet is no longer passively responding to the puppeteer but makes movement decisions on its own. This form of puppeteering establishes a form of dialogue between puppeteer and puppet in which both improvise together, and in which the puppet exhibits some of the creative idiosyncrasies of the original human dancer.
Generative machine learning models for creative purposes play an increasingly prominent role in the field of dance and technology. A particularly popular approach is the use of such models for generating synthetic motions. Such motions can either serve as a source of ideation for choreographers or control an artificial dancer that acts as an improvisation partner for human dancers. Several examples employ autoencoder-based deep-learning architectures that have been trained on motion capture recordings of human dancers. Synthetic motions are then generated by navigating the autoencoder's latent space. This paper proposes an alternative approach to using an autoencoder for creating synthetic motions. This approach controls the generation of synthetic motions on the level of the motion itself rather than its encoding. Two different methods are presented that follow this principle. Both methods are based on the interactive control of a single joint of an artificial dancer while the other joints remain under the control of the autoencoder. The first method combines the control of the orientation of a joint with iterative autoencoding. The second method combines the control of the target position of a joint with forward kinematics and the application of latent difference vectors. As an illustrative example of an artistic application, this latter method is used for an artificial dancer that plays a digital instrument. The paper presents the implementation of these two methods and provides some preliminary results.
Strings P
(2021)
Strings is an audiovisual performance for an acoustic violin and two generative instruments, one for creating synthetic sounds and one for creating synthetic imagery. The three instruments are related to each other conceptually, technically, and aesthetically by sharing the same physical principle, that of a vibrating string. This submission continues the work the authors have previously published at xCoAx 2020. The current submission briefly summarizes the previous publication and then describes the changes that have been made to Strings. The P in the title emphasizes that most of these changes have been informed by experiences collected during rehearsals (in German: Proben). These changes have helped Strings to progress from a predominantly technical framework to a work that is ready for performance.
Implementation and Evaluation of an Assisting Fuzzer Harness Generation Tool for AUTOSAR Code
(2024)
The digitalization in vehicles tends to add more connectivity, such as over-the-air (OTA) updates. To achieve this digitalization, each ECU (Electronic Control Unit) becomes smarter and needs to support more and more externally available protocols such as TLS, which increases the attack surface for attackers. To ensure the security of a vehicle, fuzzing has proven to be an effective method for discovering memory-related security vulnerabilities. Fuzzing the software running on an ECU is not an easy task and requires a harness written by a human. The harness author needs a deep understanding of the specific service and protocol, which is time-consuming. To reduce the time needed by a harness author, this thesis aims to develop FuzzAUTO, the first assisting harness generation tool targeting the AUTOSAR (AUTomotive Open System ARchitecture) BSW (Basic Software) to support manual harness generation.
The progress in machine learning has led to advanced deep neural networks. These networks are widely used in computer vision tasks and safety-critical applications. The automotive industry in particular has experienced a significant transformation through the integration of deep learning techniques and neural networks, which contributes to the realization of autonomous driving systems. Object detection is a crucial element in autonomous driving: it allows vehicles to perceive and identify their surroundings, detecting objects such as pedestrians, vehicles, road signs, and obstacles, and thereby contributes to vehicular safety and operational efficiency. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies. These advancements enable vehicles to make real-time decisions based on their understanding of the environment, improving safety and the driving experience. However, the increasing reliance on deep neural networks for object detection and autonomous driving has brought attention to potential vulnerabilities within these systems. Recent research has highlighted their susceptibility to adversarial attacks: well-designed inputs that exploit weaknesses in the deep learning models underlying object detection. Successful attacks can cause misclassifications and critical errors, significantly compromising the reliability and safety of autonomous vehicles. In this study, we focus on analyzing adversarial attacks on state-of-the-art object detection models. We create adversarial examples to test the models' robustness.
We also check whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to see how effective they are in protecting deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies, highlighting how they can be implemented in real-world situations.
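A classic example of the kind of adversarial attack discussed above is the fast gradient sign method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. The toy sketch below applies FGSM to a simple logistic classifier; the weights, input, and attack budget are illustrative assumptions, not values from this study:

```python
import math

# Toy logistic model: p(y=1|x) = sigmoid(w . x), with fixed "trained" weights.
w = [2.0, 1.0]
x = [1.0, 1.0]   # clean input, true label y = 1
y = 1.0
eps = 0.8        # attack budget (maximum per-feature perturbation)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x) -> float:
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(w, x)
grad = [(p - y) * wi for wi in w]

# FGSM step: move every feature by eps in the direction of the gradient sign.
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

print(predict(w, x), predict(w, x_adv))  # the model's confidence drops
```

Even this tiny perturbation markedly lowers the model's confidence in the correct class, which is exactly the failure mode that defense mechanisms try to prevent.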
Privacy is the capacity to keep some things private despite their social repercussions. It relates to a person's capacity to control the amount, time, and circumstances under which they disclose sensitive personal information, such as details of their physiology, psychology, or intelligence. In the age of data exploitation, privacy has become even more crucial: because of the way data and technology are now used, our privacy is more threatened than it was 20 years ago. Both the kinds and amounts of information about us and the methods for tracking and identifying us have grown considerably in recent years. It is a known security concern that both human and machine systems face privacy threats. There are various disagreements over privacy and security; every person and group has a unique perspective on how the two are related. Even though 79% of the study's results showed that legal or compliance issues were more important, 53% of the survey respondents thought that privacy and security were two separate things. Despite their distinctions, data security and data privacy are interconnected; each is necessary for the other to exist. Data may be physically kept anywhere, on our computers or in the cloud, but only humans have authority over it, and we are inseparably linked to our data. Protecting data against attackers therefore also protects privacy, and machine learning can be applied to support this protection. Attackers commonly utilize both technical exploits and social engineering techniques to enter a target network. The vulnerability to this form of attack rests not only in the technology but also in the human users, making it extremely difficult to defend against. The best option to secure privacy is to combine humans and machines in the form of a Human Firewall and a Machine Firewall.
A cryptographic routing network like Tor is a superior choice for discouraging attackers from trying to access our system and for protecting the privacy of our data. This thesis presents a case study of privacy and security issues, followed by a brief discussion of the problems and the different kinds of attacks on people and machines. We explain how Human Firewalls and machine learning on the Tor network protect our privacy from attacks such as social engineering and attacks on technical systems. As a real-world test, we use genomic data to carry out a privacy attack called the Membership Inference Attack (MIA). We demonstrate the Machine Firewall as a means of protection and then apply Differential Privacy (DP), an established defence. We applied Lasso and convolutional neural networks (CNN), both popular machine learning models, as the target models. Our findings demonstrate a logarithmic link between the desired model accuracy and the privacy budget.
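Differential privacy, as applied above, typically works by adding calibrated noise to a query result, with the privacy budget epsilon controlling the accuracy/privacy trade-off. A minimal sketch of the Laplace mechanism follows; the counting query, sensitivity, and epsilon value are illustrative assumptions:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    A smaller epsilon (tighter privacy budget) means a larger noise scale,
    i.e. a less accurate but more private answer.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via inverse transform of a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: a counting query (sensitivity 1) over a genomic dataset.
rng = random.Random(42)
noisy_count = laplace_mechanism(true_value=128, sensitivity=1.0,
                                epsilon=0.5, rng=rng)
print(noisy_count)
```

Repeating the query with a smaller epsilon yields noisier answers, which is the logarithmic accuracy/budget trade-off the thesis observes empirically.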
We aim to debate and eventually be able to carefully judge how realistic the following statement of a young computer scientist is: "I would like to become an ethically correct acting offensive cybersecurity expert". The objective of this article is neither to judge what is good and what is wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their working life. For this, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and particularities are discussed, considering different ethical frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
There is an ongoing debate about the use and scope of Clayton M. Christensen's idea of disruptive innovation, including the question of whether it is a management buzz phrase or a valuable theory. This discussion considers the general question of how innovation in the field of management theories and concepts finds its way to the different target groups. This conceptual paper combines the different concepts of the creation and dissemination of management trends in a basic framework based on a short review of models for the dissemination of management ideas. This framework allows an analysis of the character of new management ideas like disruptive innovation. By measuring the impact of the theory on the academic sphere using a bibliometric statistic of the number of academic publications on Google Scholar and Scopus and a meta-analysis of research papers, we show the significant influence of disruptive innovation beyond pure management fads.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on the source code is a well-studied field, the identification of vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research [1] has shown how such detection can generally be enabled by deep learning methods, but appears to be very limited regarding the overall number of detected vulnerabilities. We analyse to what extent we can cover the identification of a larger variety of vulnerabilities. To this end, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardised LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability; these achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, our proposed technical approach and methodology enable an accurate detection of 23 (compared to 4 [1]) vulnerabilities.
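The preprocessing step implied above — turning lines of standardised LLVM IR into fixed-length token-id sequences that an embedding model such as Word2Vec and an RNN can consume — might be sketched as follows. The tokenisation rules and sample IR lines are illustrative assumptions, not the paper's actual pipeline:

```python
import re

def tokenize_ir(line: str) -> list:
    """Split one line of LLVM IR into tokens: SSA values, globals, opcodes, types."""
    return re.findall(r"[%@][\w.]+|\w+|[=,*()]", line)

def build_vocab(lines) -> dict:
    """Map every distinct token to an integer id; 0 is reserved for padding."""
    vocab = {"<pad>": 0}
    for line in lines:
        for tok in tokenize_ir(line):
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(line: str, vocab: dict, max_len: int = 16) -> list:
    """Produce a fixed-length integer sequence suitable as RNN input."""
    ids = [vocab.get(t, 0) for t in tokenize_ir(line)][:max_len]
    return ids + [0] * (max_len - len(ids))

ir_lines = ["%3 = add i32 %1, %2", "%4 = load i32, i32* %ptr"]
vocab = build_vocab(ir_lines)
print(encode(ir_lines[0], vocab))
```

In the full approach these integer sequences would be replaced by the corresponding Word2Vec vectors before being fed to the GRU/LSTM/SRNN classifiers.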
Public educational institutions are increasingly confronted with a decline in the number of applicants, which is why competition between colleges and universities is also intensifying. For this reason, it is important to position oneself as an institution in order to be perceived by the various target groups and to differentiate oneself from the competition. In this context, the brand and thus its perception and impact play a decisive role, especially in view of the desired communication of the institution's own values and its self-image, the brand identity. To this end, emotions serve as an approach to creating positive stimulation and brand loyalty.
Inner Congo
(2023)
This research-creation project, part of the DE\GLOBALIZE artistic research cycle presented at the #IFM2022 Conference, investigates the complexities of Congo violence, care, and colonialism. Drawing on Michel Serres' metaphor of the great estuaries, the study explores the topology of interactive documentaries, blending theory, emotion, and personal experiences. Accessible through the interactive web documentation at http://deglobalize.com, the platform offers a media-archaeological archive for speculative ethnography, enabling the forensic processing of single documents in line with actor-network theory.
Currently, immersive technologies are enjoying great popularity. This trend is reflected in technological advances and the emergence of new products for the mass market, such as augmented reality glasses. The range of applications for immersive technologies is growing with more efficient and affordable technologies and student adoption. Especially in education, their use will improve existing learning methods. Immersive applications use visual, audio and haptic sensors to fully engage the user in a virtual environment. This impression is reinforced with the help of realistic visualizations and the opportunity for interaction. In particular, augmented reality is characterized by a high degree of integration between reality and the inserted virtual objects. An augmented interactive simulation for the determination of the specific charge of an electron will be used as an example to demonstrate how such immersion can be created for users. A virtual Helmholtz coil is used to measure and calculate the e/m constant. The voltage at the cathode for generating the electron beam, but also the voltage of the homogeneous magnetic field for deflecting the electron beam, can be variably controlled by haptic user input. Based on these voltages, an immersive virtual electron beam is calculated and visualized. In this paper, the authors present the conceptual steps of this immersive application and address the challenges associated with designing and developing an augmented and interactive simulation.
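The physics behind such a simulation is the standard relation for the specific charge, e/m = 2U / (B²r²), where U is the accelerating voltage, B the Helmholtz-coil field, and r the radius of the circular electron beam. A minimal worked example (the numeric readings below are illustrative, not from the paper):

```python
def specific_charge(U: float, B: float, r: float) -> float:
    """e/m in C/kg from accelerating voltage U (V), field B (T), beam radius r (m).

    Derivation: the energy balance eU = m*v**2/2 and the circular-motion
    condition e*v*B = m*v**2/r combine to e/m = 2*U / (B**2 * r**2).
    """
    return 2.0 * U / (B ** 2 * r ** 2)

# Illustrative reading: U = 250 V, B ≈ 1.07 mT, beam radius r = 5 cm
print(specific_charge(250.0, 1.0665e-3, 0.05))  # close to 1.76e11 C/kg
```

In the simulation, U and B follow the user's haptic input and r is what the rendered virtual electron beam visualizes.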
Redesigning a curriculum for teaching media technology is a major challenge. Up-to-date teaching and learning concepts are necessary that meet the constant technological progress and prepare students specifically for their professional life. Teaching and studying should be characterized by a student-oriented teaching and learning culture. In order to achieve this goal, consistent evaluation is essential. The aim of the evaluation concept presented here is to generate structured information regarding the quality of content-related, didactic and organizational aspects of teaching. The exchange of opinions between students and lecturers should be encouraged in order to continuously improve the teaching and learning processes.
The paper will focus on the activities of the International Year of Light and Optical Technologies 2015 (IYL) with their impact in life, science, art, culture, education and outreach as well as the importance in promoting the objectives for sustainable development. It describes our activities carried out in the run-up to or during the IYL, as well as reports on the generic projects that led to the success of the IYL. The success of the IYL is illustrated by examples and statistics. Relating to the potential and success of the IYL, the impact and the genesis of the International Day of Light (IDL) is presented. Impressions from the opening ceremony of the IYL in Paris at UNESCO headquarters and the Inaugural Ceremony of the IDL will then be covered. A second focus is placed on the interdisciplinary media projects realized by the students of our university dedicated to these events. Finally, an analysis of the impact and legacy of IYL and IDL will be presented.
A report from the World Economic Forum (2019) identified loneliness as the third societal stressor in the world, mainly in western countries. Moreover, research shows that loneliness tends to be experienced more severely by young adults than by other age groups (Rokach, 2000), as is the case for university students, who face profound periods of loneliness when attending university in a new place (Diehl et al., 2018). Digital technology, especially mental health apps (MHapps), has been viewed as a promising solution to address this distress in universities; however, the limited evidence on this topic leaves uncertainty about how these resources affect individual well-being. Therefore, this research set out to investigate how the gamified social mobile app Noneliness reduced loneliness rates and other associated mental health issues of students from a German university. As little work has focused on digital apps targeting loneliness, this project also proposed to describe and discuss the app's design and development processes. A multimethod approach was adopted: a literature review on high-efficacy MHapp design, gamification for mental health and loneliness interventions; User Experience Design and Human-centered Computing. Evaluations occurred according to the app's development iterations, which assessed four versions (from prototype to Beta) through quantitative and qualitative studies with university students. The main results obtained regarding the design aspects were: users' preference for minimalistic interfaces; the importance of maintaining privacy and establishing trust among users; and students' willingness to use an online support space for emotional and educational support. The most used features were those related to group discussions, private chats and university social events. The preferred gamification elements were those that provided positive reinforcement to motivate social interactions (e.g. Points, Levels and Achievements).
Results of a pilot randomized controlled trial with university students (N = 12), showed no statistically significant interactions in reducing loneliness among experimental group members (n = 7, x² = 3.500, p-value = 0.477, Cramer’s V = 0.27) who made continued use of the app for six weeks. On the other hand, the app showed effects of moderate magnitude on loneliness reduction in this group. The app also demonstrated relatively strong magnitude effects on other associated variables, such as depression and stress in the experimental group. In addition to motivating the conduct of further studies with larger samples, the findings point to a potential app effectiveness not only to reduce loneliness, but also other variables that may be associated with the distress.
Digital, virtual environments and the metaverse are rapidly taking shape and will generate disruptive changes in the areas of ethics, privacy, safety, and how the relationships between human beings will be developed. To uncover some of the implications that will impact those areas, this study investigates the perceptions of 101 younger people from the generations Y and Z. We present a first exploratory analysis of the findings, focusing on knowledge and self-perception. Results show that these young generations are seriously doubting their knowledge on the metaverse and virtual worlds – regarding both the definition and the usage. It is interesting to see only a medium confidence level, considering that the participants are young and from an academic environment, which should increase their interest in and affinity towards virtual worlds. Males from both generations perceive themselves as significantly more knowledgeable than females. Regarding a fitting definition, almost 40% agreed on the metaverse as a "universal and immersive virtual world that is made accessible using virtual reality and augmented reality technologies". Regarding the topic in general, several participants (almost 40%) considered themselves sceptics or "just" users (38%). Interestingly, generation Y participants were more likely than the younger generation Z participants to identify themselves as early adopters or innovators. As a result, the considerable amount of "mixed feelings" regarding digital, virtual environments and the metaverse shows that in-depth studies on the perception of the metaverse as well as its ethical and integrity implications are required to create more accessible, inclusive, and safe digital, virtual environments.
Linux and Linux-based operating systems have been gaining popularity among general users and among developers. Many big enterprises and large companies use Linux for the servers that host their websites, and some even require their developers to have knowledge of Linux. Many embedded systems also run Linux-based operating systems. With this increasing popularity comes the need to secure a system that so many people rely on, be it to protect the data it stores, to protect the integrity of the system itself, or to protect the availability of the services it offers. Many researchers and Linux enthusiasts have come up with various ways to secure Linux, yet new vulnerabilities and bugs are found by malicious attackers with every update or change, which creates the need for additional ways to secure these systems.
This thesis explores the possibility and feasibility of another way to secure a Linux OS, specifically securing its terminal: altering the terminal's commands to obstruct and delay attackers who have gained terminal access, giving response and forensics teams more time to stop the attack, minimize the damage, restore operations, and identify, collect, and store evidence of the cyber-attack. This research discusses the advantages and disadvantages of various security measures and compares and contrasts them with the method suggested here.
This research is significant because it paints a better picture of the state of the art of Linux and Linux-based operating system security, and it addresses the concerns of security enthusiasts while exploring an uncharted area of security that has been regarded as an insignificant part of protecting operating systems because of the various limitations and problems it entails. This research addresses those concerns while exploring several ways to solve them, and it identifies the areas and situations in which the proposed method is ideal, as well as those in which it would be more of a burden than a help.
Much of the research in the field of audio-based machine learning has focused on recreating human speech via feature extraction and imitation, known as deepfakes. The current state of affairs has prompted a look into other areas, such as the recognition of recording devices, and potentially speakers, by only analysing sound files. Segregation and feature extraction are at the core of this approach.
This research focuses on determining whether a recorded sound can reveal the recording device with which it was captured. Each specific microphone manufacturer and model, among other characteristics and imperfections, can have subtle but compounding effects on the results, whether it be differences in noise, or the recording tempo and sensitivity of the microphone while recording. By studying these slight perturbations, it was found to be possible to distinguish between microphones based on the sounds they recorded.
After the recording, pre-processing, and feature extraction phases were completed, the prepared data was fed into several machine learning algorithms, with results ranging from 70% to 100% accuracy; Multi-Layer Perceptron and Logistic Regression proved the most effective for this type of task.
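The classification step can be illustrated with a minimal stand-in: the thesis used trained models such as MLP and Logistic Regression, whereas this sketch uses a tiny nearest-centroid rule on invented noise-floor features, purely to show the shape of the task.

```python
# Minimal stand-in for the classification step; the feature values
# below are made up for illustration, not real microphone data.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the microphone label whose centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical per-recording features: [noise floor dB, spectral tilt]
train = {
    "mic_A": [[-62.0, 0.11], [-61.5, 0.10], [-62.3, 0.12]],
    "mic_B": [[-58.0, 0.25], [-57.6, 0.27], [-58.4, 0.24]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([-61.9, 0.11], centroids))
```

A real pipeline would extract many more features per clip and let the learning algorithm find the decision boundary rather than relying on two hand-picked dimensions.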
The approach was further extended to distinguish between two microphones of the same make and model. That identical models can be told apart suggests that small deviations in the manufacturing process are enough to uniquely distinguish them, and potentially to target the individuals using them. This, however, does not take into account any form of compression applied to the sound files, as compression may alter or degrade some or most of the distinguishing features the experiment relies on.
Building on prior research in the area, such as the work by Das et al., in which different acoustic features were explored and assessed for their ability to uniquely fingerprint smartphones, more concrete results, along with the methodology by which they were achieved, are published in this project's publicly accessible code repository.
"Truth is the first casualty of war" is an oft-quoted statement. What intrigues the mind is what causes that casualty, and why this so-called truth is the first target in a war. Who sees the truth before it dies? These questions rarely get answered, as the media and the general public tend to focus on the human and economic losses of a war or war-like situation. What many fail to realize is that truthful pieces of information are critical to how a situation develops. One correct piece of information may change the course of an entire war and save millions; one piece of misinformation may do the opposite.
The question here is: what is this information? Who transmits it, and how? What is its source? While nations have long relied extensively on information provided by their secret services, another kind of information system uses what is publicly available, scattered in different pieces. Such information may come from people posting on social media, from publicly available records, and from much more. The key point is that these are fragments spread across the globe from many different sources, small pieces of a puzzle that must be put together to see the bigger picture. This is where OSINT comes into play.
Since its inception, studies have been conducted to propose and develop new applications for OSINT in various fields. In addition to OSINT, Artificial Intelligence (AI), the branch of computer science concerned with developing intelligent systems, is a worldwide trend being used in conjunction with other areas. In terms of contribution, this work presents a 9-step systematic literature review as well as consolidated data to support future OSINT studies. Using this information, it was possible to see where publications are most concentrated, which countries and continents produce the most research, and what characterizes these publications. What are the trends for upcoming OSINT-with-AI studies? Which AI subfields are used with OSINT? What are the most popular keywords, and how do they relate to one another over time? A timeline describing the application of OSINT is also provided. It also became clear how OSINT has been used in conjunction with AI to solve problems in various areas with varying objectives. Private investigators and journalists are no longer the primary users of open-source intelligence (OSINT) gathering and analysis techniques. Approximately 80-90 percent of the data analysed by intelligence agencies is now derived from publicly available sources. Furthermore, the massive expansion of the internet, particularly social media platforms, has made OSINT accessible to civilians who simply want to trawl the Web for information on a specific individual, organisation, or product. The European Union's General Data Protection Regulation (GDPR) was implemented in the United Kingdom in May 2018 through the new Data Protection Act, with the goal of protecting personal data from unauthorised collection, storage, and exploitation. This document presents a preliminary review of the literature on GDPR-related work.
The reviewed literature is divided into six sections: ’What is OSINT?’, ’What are the risks and benefits of OSINT?’, ’What is the rationale for data protection legislation?’, ’What are the current legislative frameworks in the UK and Europe?’, ’What is the potential impact of the GDPR on OSINT?’, and ’Have the views of civilian and commercial stakeholders been sought, and why is this important?’. Because OSINT tools and techniques are available to anyone, they have the unique ability to be used to hold power accountable. As a result, it is critical that new data protection legislation does not impede civilian OSINT capabilities.
This paper shows how OSINT has played an important role in wars across the globe in the past and how it is used in our everyday lives. It offers insights into the role OSINT is playing in the ongoing war between Russia and Ukraine, examines some OSINT tools and how they work, and considers a use case in which OSINT serves as an anti-terrorism tool. Finally, it traces how OSINT has evolved over the years and what it may look like in the future.
Cloud computing is a combination of technologies, including grid computing and distributed computing, that use the Internet as a network for service delivery. Organizations can select the price and service models that best accommodate their demands and financial restrictions. Cloud service providers choose the pricing model for their cloud services, taking the size, usage, user, infrastructure, and service size into account. Thus, cloud computing’s economic and business advantages are driving firms to shift more applications to the cloud, boosting future development. It enlarges the possibilities of current IT systems.
Over the past several years, the ”cloud computing” industry has exploded in popularity, going from a promising business concept to one of the fastest expanding areas of the IT sector. Most enterprises are hosting or installing web services in a cloud architecture for management simplicity and improved availability. Virtual environments are applied to accomplish multi-tenancy in the cloud. A vulnerability in a cloud computing environment poses a direct threat to the users’ privacy and security. In our digital age, the user has many identities. At all levels, access rights and digital identities must be regulated and controlled.
Identity and access management (IAM) is the process of managing identities and regulating access privileges; it is considered the front-line soldier of IT security. The goal of identity and access management systems is to protect an organization's assets by limiting access to those who need it, in the appropriate cases. IAM is required for all businesses with thousands of users and is the best practice for ensuring user access control. It identifies, authenticates, and authorizes people to access an organization's resources, which in turn enhances access management efficiency. Authentication, authorization, data protection, and accountability are just a few of the areas in which cloud-based web services have security issues; these features fall under identity and access management.
The implementation of identity and access management (IAM) is essential for any business. It is becoming more and more business-centric, so more than technical know-how is needed to succeed. Organizations that have developed sophisticated IAM capabilities may save money on identity management and, more crucially, become much nimbler in supporting new business initiatives. We used these features of identity and access management to validate the robustness of the cloud computing environment in comparison with traditional identity and access management.
Conceptualization and implementation of automated optimization methods for private 5G networks
(2023)
Today's companies are adjusting to new connectivity realities. As industries become more distributed and autonomous, new applications require more bandwidth, lower latency, and higher reliability. Private 5th Generation (5G) networks, known as 5G Non-Public Networks (5G-NPN), are 3rd Generation Partnership Project (3GPP)-based 5G networks that can deliver seamless, dedicated wireless access for a particular industrial use case by meeting the application's requirements. To meet these requirements, several radio-related aspects and network parameters must be considered. In many cases, the behavior of the link may vary with wireless conditions, available network resources, and User Equipment (UE) requirements. Furthermore, optimizing these networks can be a complex task because of the large number of network parameters and KPIs that need to be considered. For these reasons, traditional solutions and static network configuration are impractical or simply impossible. Although papers in the literature address several optimization methods for cellular networks in industrial scenarios, more insight into these existing but complex or little-known methods is needed.
In this thesis, a series of optimization methods was implemented to deliver an optimal configuration for a 5G private network. To facilitate this, a testing system was built that enables remote control of the UE and the 5G network, establishment of a test environment, extraction of relevant KPI reports from both the UE and network sides, assessment of test results and KPIs, and effective use of the optimization and sampling techniques.
The research highlights the advantages of automated testing using OFAT, Simulated Annealing, and a Random Forest Regressor. OFAT, as a common sampling method, provided a sensitivity analysis revealing the impact of each single parameter variation on network performance. Simulated Annealing found an optimal solution with an MSE of roughly 10. The Random Forest Regressor showed a significant advantage over simulated annealing, offering substantial gains in time efficiency thanks to its machine-learning capability. The results also suggest that a larger dataset or other machine-learning techniques could yield a more accurate solution.
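The simulated-annealing loop at the heart of that search can be sketched as follows. This is a toy version under stated assumptions: a one-dimensional objective stands in for the real network-KPI measurement, which in the thesis came from live tests against the 5G system, and the cooling schedule and step size are illustrative choices.

```python
# Toy simulated-annealing sketch; the objective, cooling schedule,
# and neighbourhood move are illustrative assumptions.
import math
import random

def simulated_annealing(objective, x0, steps=2000, temp0=1.0, seed=42):
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9   # linear cooling
        cand = x + rng.uniform(-0.5, 0.5)        # neighbour move
        delta = objective(cand) - objective(x)
        # accept improvements always; worse moves with Boltzmann prob.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if objective(x) < objective(best):
            best = x
    return best

# Stand-in for "MSE between measured and target KPIs"
mse = lambda x: (x - 3.0) ** 2
print(round(simulated_annealing(mse, x0=0.0), 2))
```

Swapping the toy `mse` for a function that applies a candidate configuration, runs a test, and returns the measured error is the conceptual bridge to the thesis's setup, where each evaluation is an expensive live measurement.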
Gamification is increasingly successful in the field of education and health. However, beyond call-centers and applications in human resources, its utilization within companies remains limited. In this paper, we examine the acceptance of gamification in a large company (with over 17,000 employees) across three generations, namely X, Y, and Z. Furthermore, we investigate which gamification elements are suited for business contexts, such as the dissemination of company principles and facts, or the organization of work tasks. To this end, we conducted focus group discussions, developed the prototype of a gamified company app, and performed a large-scale evaluation with 367 company employees. The results reveal statistically significant intergenerational disparities in the acceptance of gamification: younger employees, especially those belonging to Generation Z, enjoy gamification more than older employees and are most likely to engage with a gamified app in the workplace. The results further show a nuanced range of preferences regarding gamification elements: avatars are popular among all generations, badges are predominantly appreciated by Generations Z and Y, while leaderboards are solely liked by Generation Z. Drawing upon these insights, we provide recommendations for future gamification projects within business contexts. We hope that the results of our study regarding the preferences of the gamification elements and understanding generational differences in acceptance and usage of gamification will help to create more engaging and effective apps, especially within the corporate landscape.
As e-commerce platforms have grown in popularity, new difficulties have emerged, such as the growing use of bots, automated programs that interact with e-commerce websites. While some bots are helpful, others are malicious and can seriously hurt e-commerce platforms by making fictitious purchases, posting fake reviews, and taking over user accounts. More effective and precise bot identification systems are therefore urgently needed to stop such activity. This thesis proposes a methodology for detecting bots in e-commerce using machine learning algorithms such as K-nearest neighbors, Decision Tree, Random Forest, Support Vector Machine, and Neural Network. The purpose of the research is to assess and compare the output of these machine learning methods. The suggested approach is based on publicly available data, and the study focuses on research into bots in e-commerce.
The study provides an overview of bots in e-commerce, including the different kinds and traits of bots, current research on bots in e-commerce, and related work on bot detection. It also seeks to create a more precise and effective bot detection system and to identify the critical factors in detecting bots in e-commerce.
This research is significant because it sheds light on the increasing problem of bots in e-commerce and the need for more effective bot detection. The suggested approach of using machine learning algorithms to identify bots can give e-commerce platforms a more precise and effective detection system for stopping malicious bot activity. The study's results can also inform the development of better bot detection systems and pinpoint the key elements in detecting bots in e-commerce.
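The kind of per-session features such detectors consume can be illustrated with a small sketch. The thesis trains models (k-NN, decision trees, random forests, SVMs, neural networks) on such features; here a hand-written rule with invented thresholds stands in for a trained model.

```python
# Illustrative only: hand-written rule in place of a trained model;
# thresholds and the feature set are assumptions for the sketch.

def session_features(events):
    """events: list of (seconds_since_start, path) request tuples."""
    duration = (max(t for t, _ in events) - min(t for t, _ in events)) or 1
    rate = len(events) / duration                 # requests per second
    unique = len({p for _, p in events}) / len(events)
    return {"req_per_s": rate, "unique_ratio": unique}

def looks_like_bot(feat, max_rate=2.0, min_unique=0.2):
    # Very fast or very repetitive sessions are flagged; a trained
    # model would learn these boundaries from labelled data.
    return feat["req_per_s"] > max_rate or feat["unique_ratio"] < min_unique

human = [(0, "/home"), (12, "/item/7"), (30, "/cart"), (55, "/checkout")]
bot = [(i * 0.1, "/item/7") for i in range(50)]
print(looks_like_bot(session_features(human)), looks_like_bot(session_features(bot)))
```

Replacing the fixed thresholds with a classifier fitted on labelled sessions is precisely where the thesis's algorithm comparison begins.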
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data was collected using an online questionnaire. Finally, data was analyzed using the PLS-SEM method. The results provide theoretical implications on technology acceptance model (TAM) research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
The Internet of Things is spreading significantly in every sector, including households, a variety of industries, healthcare, and emergency services, with the goal of assisting all of those infrastructures by providing intelligent means of service delivery. An Internet of Vulnerabilities (IoV) has emerged as a result of the pervasiveness of the Internet of Things (IoT), which has led to a rise in the use of IoT-connected applications and devices in our day-to-day lives. The manufacture of IoT devices is growing at a rapid pace, but security and privacy concerns are not being taken into consideration. These intelligent IoT devices are especially vulnerable to a variety of attacks, at both the hardware and software levels, which leaves them exposed to misuse. This master's thesis provides a comprehensive overview of IoT security and privacy across applications, security architecture frameworks, and a taxonomy of cyberattacks based on various architecture models, such as the three-layer, four-layer, and five-layer models. The fundamental purpose of this thesis is to provide recommendations for alternative mitigation strategies and corrective actions using a holistic rather than a layer-by-layer approach. We discuss the most effective solutions to IoT privacy and safety problems, framed as research questions, and investigate a number of further directions in which this research could develop.
As cyber threats continue to evolve, it is becoming increasingly important for organizations to have a Security Operations Center (SOC) in place to effectively defend against them. However, building and maintaining a SOC can be a daunting task without clear guidelines, policies, and procedures in place. Additionally, most current SOC solutions used by organizations are outdated, lack key features and integrations, and are expensive to maintain and upgrade. Moreover, proprietary solutions can lead to vendor lock-in, making it difficult to switch to a different solution in the future.
To address these challenges, this thesis proposes a comprehensive SOC framework and an open-source SOC solution that provides organizations with a flexible and cost-effective way to defend against modern cyber threats. The research methodology involved a thorough review of existing literature and research on building and maintaining a SOC, including using SOC as a service. The data collected from the literature review was analyzed to identify common themes, challenges, and best practices for building and maintaining a SOC.
Based on the data collected, a comprehensive framework for building and maintaining a SOC was developed. The framework addresses essential areas such as the scope and purpose of the SOC, governance and leadership, staffing and skills, technologies and tools, processes and procedures, service level agreements (SLAs), and evaluation and measurement. This framework provides organizations with the necessary guidance and resources to establish and effectively operate a SOC, as well as a reference for evaluating the service provided by SOC service providers.
In addition to the SOC framework, a modern open-source SOC solution was developed, which emphasizes several key measures to help organizations defend against modern cyber threats. These measures include real-time, actionable threat intelligence, rapid and effective incident response, continuous security monitoring and alerting, automation, integration, and customization. The use of open-source technologies and a modular architecture makes the solution cost-effective, allowing organizations to scale it up or down as needed.
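The "continuous security monitoring and alerting" measure can be made concrete with a small sketch: count failed-login events per source in a sliding window and raise an alert past a threshold. The event format, window, and threshold are assumptions for illustration; the actual solution wires open-source SIEM components together rather than hand-rolling such a rule.

```python
# Sliding-window alert rule sketch; event shape and thresholds are
# illustrative assumptions, not the proposed solution's actual config.
from collections import deque, defaultdict

def alert_on_failed_logins(events, window=60, threshold=5):
    """events: iterable of (timestamp_s, source_ip, outcome) tuples."""
    recent = defaultdict(deque)
    alerts = []
    for ts, ip, outcome in events:
        if outcome != "FAIL":
            continue
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > window:   # drop entries outside window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ts, ip))
    return alerts

events = [(t, "10.0.0.9", "FAIL") for t in range(0, 50, 10)] + \
         [(55, "10.0.0.9", "FAIL"), (60, "10.0.0.4", "OK")]
print(alert_on_failed_logins(events))
```

In a deployed SOC the same logic lives in a correlation rule, and an alert would feed the incident-response workflow rather than a returned list.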
Overall, the proposed SOC framework and open-source SOC solution provide organizations with a comprehensive and systematic approach for building and maintaining a SOC that is aligned with the needs and objectives of the organization. The open-source SOC solution provides a flexible and cost-effective way to defend against modern cyber threats, helping organizations to effectively operate their SOC and reduce their risk of security incidents and breaches.
The goal of this thesis is to thoroughly investigate the concepts of stand-alone operation and decarbonization of optical fiber networks. Because of their dependability, speed, and capacity, optical fiber networks are vital in modern telecommunications. Their considerable energy consumption and carbon emissions, on the other hand, pose a threat to global sustainability objectives and must be addressed.
The first section of the thesis presents a summary of the current state of optical fiber networks, their components, and the energy consumption connected with them. This part also goes over the difficulties of lowering energy usage and carbon emissions while preserving network performance and dependability.
The second section of the thesis focuses on the stand-alone concept, which entails powering the optical fiber network with renewable energy sources and energy-efficient technology. This section explores the potential of renewable energy sources such as solar and wind power to power the network. It also investigates energy-efficient technologies such as virtualization and cloud computing, and their potential to minimize network energy usage.
The third section of the thesis focuses on the notion of decarbonization, which entails lowering carbon emissions linked with the optical fiber network. This section looks at various carbon-reduction measures, such as employing low-carbon energy sources and improving energy efficiency. It also covers the relevance of carbon offsets and the difficulties associated with adopting decarbonization measures in the context of optical fiber networks.
The fourth section of the thesis compares the ideas of stand-alone and decarbonization. It investigates the advantages and disadvantages of each strategy, as well as their potential to minimize energy consumption and carbon emissions in optical fiber networks. It also explores the difficulties in applying these notions as well as potential hurdles to their wider adoption.
Finally, the thesis emphasizes the need to address the energy consumption and carbon emissions associated with optical fiber networks. It outlines important obstacles and potential impediments to adopting these initiatives, gives insights into potential ways of reducing them, and makes suggestions for further study in this area.
Organizations striving for long-term success must maintain a positive brand image, which has direct implications for the business. In the face of rising cyber threats and intense competition, keeping a threat-free domain is an important part of preserving that image in today's internet world. Domain names are often near-synonyms for brand names, and there are likely thousands of domains that try to impersonate big companies in a bid to trap unsuspecting users, who typically fall prey to attacks such as phishing or watering-hole attacks. Because domain names are important for organizations running their business online, they are also particularly vulnerable to misuse by malicious actors. So, how can an organization ensure that its domain name is protected while still protecting its brand identity? Brand monitoring may assist. The term "Brand Monitoring" refers to keeping tabs on an organization's brand performance, reception, and overall online presence through various online channels and platforms [1]. As the threat environment has expanded, so has the need to keep one's domain clear of any links to malicious activity. Since attackers target organizations' domain names and lure unsuspecting users to malicious websites, domain monitoring becomes an important aspect. Another important aspect of brand abuse is how attackers leverage brand logos to create fake and phishing web pages. In this Master Thesis, we address the classification of impersonated domains using rule-based and machine learning algorithms and the automation of domain monitoring. We first use a rule-based classifier and machine learning algorithms to classify the gathered domains into two buckets: "Parked" and "Non-Parked".
In the project's second phase, we will deploy object detection models (Scale Invariant Feature Transform - SIFT and Multi-Template Matching – MTM) to detect brand logos from the domains of interest.
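A guess at what the first-phase rule-based "Parked vs. Non-Parked" check could look like is keyword heuristics over the landing-page text. The marker list and threshold below are invented for illustration; the thesis combines such rules with trained machine learning models.

```python
# Hypothetical rule-based parked-domain check; markers and threshold
# are illustrative, not the thesis's actual rule set.
PARKED_MARKERS = [
    "this domain is for sale", "buy this domain",
    "domain parking", "related searches", "sponsored listings",
]

def parked_score(page_text):
    text = page_text.lower()
    return sum(marker in text for marker in PARKED_MARKERS)

def classify_domain(page_text, threshold=2):
    return "Parked" if parked_score(page_text) >= threshold else "Non-Parked"

print(classify_domain("Buy this domain! Related searches: shoes, bags"))
print(classify_domain("Welcome to our official company storefront."))
```

Rules like these are cheap to run over thousands of candidate domains, with the ML models then handling the cases the rules leave ambiguous.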
Even though the internet has existed for only a short period, it has grown tremendously. Today, a significant portion of commerce is conducted entirely online because of the increase in internet users and technological advancements in web construction. Cyberattacks and threats have also expanded significantly, leading to financial losses, privacy breaches, identity theft, a decrease in customers' confidence in online banking and e-commerce, and damage to brand reputation and trust. An attacker who pretends to be a genuine and trustworthy institution can steal private and confidential information from a victim. Phishing has been an ongoing issue for a long time, costing the global economy billions of dollars. In recent years, there has been significant progress in the development of phishing detection and identification systems to protect against phishing attacks. Phishing detection technologies frequently produce binary results, i.e., whether a phishing attempt was made or not, with no explanation. Phishing identification methodologies, on the other hand, identify phishing webpages by visually comparing them with predetermined authentic references and reporting phishing together with its target brand, producing findings that are understandable. However, technical difficulties in the field of visual analysis limit the applicability of currently available solutions, preventing them from being both effective (with high accuracy) and efficient (with little runtime overhead). Here, we evaluate an existing framework called Phishpedia. This hybrid deep learning system can recognize identity logos from webpage screenshots and match logo variants of the same brand with high precision. Phishpedia provides high accuracy with low runtime. Lastly, unlike other methods, Phishpedia does not require training on any phishing samples whatsoever.
Phishpedia exceeds baseline identification techniques (EMD, PhishZoo, and LogoSENSE) in accurately detecting phishing pages in lengthy testing using real phishing data. The effectiveness of Phishpedia was tested and compared against other standard machine learning algorithms and some state-of-the-art algorithms; on the given dataset, it outperformed the other algorithms, which is impressive.
Technological advancement has played a vital role in business development; however, it has opened a broad attack surface. Passwords are one of the essential concepts used in applications for authentication. Companies manage many corporate applications, and employees must meet the password criteria for each, which leads to password fatigue. This thesis addresses this issue and how it can be overcome by theoretically implementing an IAM solution. We discuss MFA, SSO, biometrics, strong password policies, and access control, and introduce the IAM framework that should be considered when implementing an IAM solution. Implementing an IAM solution adds an extra layer of security.
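The strong-password-policy discussion can be made concrete with a small validator. The specific rules below (minimum length 12, three character classes, no whitespace) are illustrative defaults, not a quoted corporate policy from the thesis.

```python
# Illustrative password-policy validator; the rule values are
# assumptions chosen for the sketch.
import string

def check_password(pw, min_len=12, min_classes=3):
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    problems = []
    if len(pw) < min_len:
        problems.append(f"shorter than {min_len} characters")
    if sum(classes) < min_classes:
        problems.append(f"fewer than {min_classes} character classes")
    if any(c.isspace() for c in pw):
        problems.append("contains whitespace")
    return problems                      # empty list means acceptable

print(check_password("correct horse"))   # long enough, but weak
print(check_password("c0rrect-H0rse-battery"))
```

Note the tension the abstract describes: each application enforcing its own variant of such rules is exactly what produces password fatigue, which SSO and MFA are meant to relieve.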
Authentic corporate social responsibility: antecedents and effects on consumer purchase intention
(2023)
Purpose
The aim of the research is to identify the factors that make a company's corporate social responsibility (CSR) engagement authentic and to investigate whether authentic CSR engagement influences purchase intention. In addition, the study attempts to provide insights into the mediating role of attitude toward the company and frequency of purchase on purchase intention.
Design/methodology/approach
In this study, a theoretical framework is developed in which major antecedents of authentic CSR are identified. A specific example of a brand and its corporate social responsibility activities was used for the study. An online questionnaire was used to collect the data. To verify the hypothesis, structural equation modeling with the partial least squares method was used. A total of 240 people participated in the study.
Findings
The results of the study confirmed that CSR authenticity positively influences consumer purchase intention. Furthermore, the hypothesized impact of CSR authenticity on attitudes toward the company and frequency of purchase could be verified.
Originality/value
Although there is research on the antecedents influencing the consumer's perceived authenticity of CSR, it has not addressed differences in impact and has not presented a full picture of influencing antecedents. In addition, CSR proof as a new antecedent is investigated in the study. Moreover, research on outcomes of perceived CSR authenticity still lacks depth. The study therefore addresses this research gap by providing an extensive research framework including antecedents influencing CSR authenticity and outcomes of CSR authenticity.
Server Side Rendering (SSR), Single Page Applications (SPA), and Static Site Generation (SSG) are the three most popular ways of building modern Web applications today. A deeper look into these approaches can help both developers and clients: developers benefit because they do not need to learn additional programming languages and can use their existing experience to build different kinds of Web applications, for example, using only JavaScript across all three approaches, while clients can give their users a better experience.
The purpose of this Master Thesis was to compare these approaches using a demo application for each and to give readers a solid understanding of which approach to follow. We discuss the step-by-step process of building three applications in the above-mentioned categories and then compare them against criteria such as performance, security, Search Engine Optimization, developer preference, learning curve, content and purpose of the Web application, user interface, and user experience. The thesis also covers technologies such as JavaScript, React, Node.js, and Next.js, and why and where to use them. The goals specified before development were fulfilled, as validated by comparing the solutions provided for user problems, which was the application's primary purpose.
Encryption techniques allow sensitive information to be stored and transferred securely by using encryption at rest and encryption in transit, respectively. However, when computation is performed on such sensitive data, the data must first be decrypted and then re-encrypted after the computations. During the computations, the sensitive data is vulnerable to attackers because it is in decrypted form. Homomorphic encryption, a special type of encryption that allows computation on encrypted data, can solve this problem. The best way to achieve maximum security with homomorphic encryption is to perform at least the homomorphic encryption and decryption on the client side (browser) of a web application, without trusting the server. At present there are many libraries with different homomorphic schemes available, but there are very few JavaScript libraries for performing homomorphic encryption on the client side of a web application. This thesis focuses on a JavaScript implementation of client-side homomorphic encryption. The fully homomorphic encryption scheme BFV was selected for the implementation. After implementing the scheme based on the "py-fhe" library, tests were carried out to determine the applicability (in terms of time consumption, security, and correctness) of this implementation in a web application by comparing performance and security across different test cases and settings.
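The thesis implements the BFV scheme (after the "py-fhe" library); a faithful BFV sketch is too long for an abstract, so the toy Paillier example below illustrates the underlying homomorphic idea instead: multiplying two ciphertexts yields a valid encryption of the sum of the plaintexts, without ever decrypting. The tiny primes are for demonstration only and are nowhere near secure.

```python
# Toy Paillier sketch (additively homomorphic); insecure key sizes,
# illustration of the homomorphic principle only, not BFV.
import math
import random

def keygen(p=2003, q=2333):                # toy primes, insecure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                               # standard simple choice
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m, rng=random.Random(1)):
    n, g = pub
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:              # r must be coprime to n
        r = rng.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen()
a, b = encrypt(pub, 15), encrypt(pub, 27)
total = (a * b) % (pub[0] ** 2)             # homomorphic addition
print(decrypt(pub, priv, total))
```

BFV supports both additions and multiplications on encrypted data (hence "fully" homomorphic within a noise budget), which is why the thesis selected it over simpler additive schemes like the one sketched here.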
Risk-based Cybermaturity Assessment Model - Protecting the company against ransomware attacks
(2023)
Ransomware has become one of the most catastrophic attack types of the previous decade, hurting businesses of all sorts worldwide. No organization is safe, and most companies are reviewing their ransomware defense solutions to avoid business and operational hazards. IT departments use cybersecurity maturity assessment frameworks such as CMMC, C2M2, CMMI, NIST, CIS, and CPP to analyze an organization's security capabilities. In addition to maturity assessment models for the process layer and the human pillar, there is much research on the analysis, identification, and defense against cyber threats in the product/software layer that proposes state-of-the-art approaches.
This motivates a comprehensive ransomware cybersecurity solution. A crucial question then arises: “How can companies measure the security maturity of controls against a specific threat, for example a ransomware attack?” Several studies and frameworks have addressed this subject.
The complexity of understanding ransomware attacks, the lack of comprehensive ransomware defense solutions, and the lack of a cybermaturity assessment model for ransomware defenses are the different aspects of this study's problem statement. Considering the most important limitations in developing a ransomware defense cybermaturity assessment method, this study developed a cybermaturity assessment methodology and implemented a toolkit for conducting cybersecurity self-assessments specifically for ransomware attacks, giving enterprises of any industry or size a clearer view of the security maturity of their controls.
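The self-assessment idea can be sketched as a simple scoring exercise. All domain names, control names, and ratings below are hypothetical illustrations, not taken from the toolkit described above:

```python
# Minimal maturity self-assessment sketch: each control is rated on a
# 0-5 scale (as in CMMI-style models) and results are aggregated per domain.
from statistics import mean

assessment = {
    "Backup & Recovery":  {"offline backups": 4, "restore tests": 2},
    "Email Security":     {"attachment filtering": 3, "user training": 0},
    "Endpoint Hardening": {"application allow-listing": 0, "patching": 3},
}

def domain_scores(ratings):
    """Average the control ratings within each domain."""
    return {domain: mean(controls.values()) for domain, controls in ratings.items()}

def weakest_domains(ratings, threshold=2.0):
    """Domains below the target maturity level, i.e. where to invest first."""
    return sorted(d for d, s in domain_scores(ratings).items() if s < threshold)

print(domain_scores(assessment))
print(weakest_domains(assessment))
```

A real toolkit would additionally weight controls by their relevance to the ransomware kill chain, which is exactly the threat-specific focus the study argues generic frameworks lack.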
It is generally agreed that the development and deployment of a large number of IoT devices throughout the world has revolutionized our lives: we can now rely on these devices to complete tasks that would not have been possible just years ago, which has brought a new level of convenience and value to our lives.
In a smart home environment, this technology allows us to remotely control doors, windows, and fridges, purchase online, stream music easily with voice assistants such as Amazon Echo Alexa, or close a garage door from anywhere in the world, to cite some examples. The technology has added value to several domains, ranging from household environments to cities and industries, by exchanging and transferring data between devices and customers. Many of these devices' sensors collect and share information in real time, which enables important business decisions.
However, these devices pose risks as well as security and privacy challenges that need to be addressed before they can reach their full potential or be considered secure. Comprehensive risk analysis techniques are therefore essential to enhance the security posture of IoT devices, as they help evaluate the robustness and reliability of the devices against the risks and vulnerabilities that IoT devices in a smart home setting might possess.
This approach is based on the ISO/IEC 27005 methodology and the risk matrix method to determine the level of risk, impact, and likelihood that an IoT device in a smart home setting can have, to map the related vulnerabilities, threats, and risks, and to propose the mitigation strategies or countermeasures that can be taken to secure a device, thereby satisfying key security principles. Around 30 risks were identified on the Amazon Echo and the related IoT system using this methodology. A detailed list of countermeasures is proposed as a result of the risk analysis. These results, in turn, can be used to elevate the security posture of the device.
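A minimal sketch of the risk matrix step: a likelihood rating and an impact rating are combined into a risk level. The scales, thresholds, and example threats below are illustrative assumptions, not values from the thesis or from ISO/IEC 27005:

```python
# Risk matrix sketch in the spirit of ISO/IEC 27005: risk level is derived
# from likelihood x impact on a 1 (very low) .. 5 (very high) scale.
# Thresholds are illustrative, not taken from the standard.
def risk_level(likelihood, impact):
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical entries for a smart-home voice assistant.
risks = [
    ("weak default credentials", 4, 5),
    ("unencrypted local traffic", 3, 3),
    ("physical tampering",        1, 4),
]
for threat, likelihood, impact in risks:
    print(f"{threat}: {risk_level(likelihood, impact)}")
```

Each identified risk would then be mapped to a countermeasure, e.g. enforcing credential changes at first boot for the high-rated entry.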
On a regular basis, we hear of well-known online services that have been abused or compromised as a result of data theft. Because insecure applications jeopardize users' privacy as well as the reputation of corporations and organizations, they must be effectively secured from the outset of the development process. The limited expertise and experience of the parties involved, such as web developers, is frequently cited as a cause of insecure applications. Consequently, developers rarely have a full picture of the security-related decisions that must be made, nor do they accurately understand how these decisions affect the implementation.
Selecting the tools and procedures that best fit a given situation is a critical decision when protecting an application against vulnerabilities. Even when security standards are followed, poor choices here inadvertently result in web applications that are insufficiently secured. JavaScript is heavily relied on as a mainstream programming language for web applications, with several new JavaScript frameworks being released every year.
JavaScript is used both on the server side in web application development and on the client side in web browsers.
However, JavaScript web programming is based on a programming style in which the application developer can, and frequently must, integrate various pieces of third-party code. This potent combination has resulted in a situation where security issues are frequently exploited. Left unchecked, these vulnerabilities can compromise an entire server. Even though there are numerous ad hoc security solutions for web browsers, client-side attacks remain popular. The issue is significantly worse on the server side, where security technologies for server-side JavaScript application frameworks are nearly non-existent.
Consequently, this thesis focuses on the server-side aspect of JavaScript: the development and evaluation of robust server-side security technologies for JavaScript web applications. There is a clear need for robust security technologies and security best practices in server-side JavaScript that allow fine-grained security.
More than ever, however, these risks must be reduced without hindering the web application's functionality.
This is the problem tackled in this thesis: the development of security best practices and robust security technologies for JavaScript web applications, specifically on the server side, that offer adequate security guarantees without putting too many constraints on their functionality.
As information technology continues to advance at a rapid pace around the world, new difficulties emerge. The growing number of organizational vulnerabilities is among the most important issues. Finding and mitigating vulnerabilities is critical in order to protect an organization's environment from multiple attack vectors.
The study investigates the complete vulnerability management process from the standpoint of the security officer role, as well as potential improvements. A few strategies are used to achieve efficient mitigation and to develop a process for tracking and mitigating vulnerabilities. A qualitative study is conducted with the objective of creating a proposed vulnerability and risk management process, as well as developing a system for analyzing and tracking vulnerabilities and presenting them in a graphical dashboard format. The thesis's data was gathered through an organized literature study as well as through various web resources. We explored numerous approaches to analyzing the data, such as categorizing the vulnerabilities every 30, 60, and 90 days to see whether they were recurring or new. According to our findings, tracking vulnerabilities can be advantageous for a security officer.
We conclude that if an organization has a proper vulnerability tracking system and vulnerability management process, it can help security officers better understand and plan for reducing vulnerabilities. In terms of system patching and vulnerability remediation, it will also assist the security officer in identifying weaknesses in the process. The suggested methods thus provide an alternative approach to managing and tracking vulnerabilities effectively, although a small area still needs additional analysis and research to improve it further.
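The 30/60/90-day categorization mentioned above can be sketched as follows; the scan data, host names, and CVE identifiers are synthetic, and the recurrence rule is one plausible reading of the approach:

```python
# Sketch of windowed vulnerability tracking: a finding is "recurring" if the
# same vulnerability was already seen on the same host within the window in
# an earlier scan, otherwise "new". All data below is synthetic.
from datetime import date

scans = [  # (scan_date, host, cve_id)
    (date(2024, 1, 5),  "srv-01", "CVE-2023-0001"),
    (date(2024, 2, 2),  "srv-01", "CVE-2023-0001"),   # 28 days later -> recurring
    (date(2024, 2, 2),  "srv-02", "CVE-2023-0002"),
    (date(2024, 4, 20), "srv-01", "CVE-2023-0001"),   # outside the window -> new
]

def categorise(findings, window_days=30):
    last_seen, labels = {}, []
    for day, host, cve in sorted(findings):
        key = (host, cve)
        prev = last_seen.get(key)
        recurring = prev is not None and (day - prev).days <= window_days
        labels.append((day, host, cve, "recurring" if recurring else "new"))
        last_seen[key] = day
    return labels

for row in categorise(scans, window_days=30):
    print(row)
```

Running the same function with `window_days=60` and `window_days=90` reproduces the three views described in the study, which could then feed a dashboard.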
Every new technology is used by us humans almost without hesitation. Usually the military use comes first. Examples from recent history are Germany's use of chemical weapons in the First World War and the US's use of atomic bombs in the Second World War. Now, with the rapid advances in microelectronics over the past few decades, a wave of their application, called digitization, is spreading around the world with barely any control mechanisms. In many areas this has simplified and enriched our lives, but it has also encouraged abuse. The adaptation of legislation to contain the obvious excesses of “digitization,” such as hate mail and anonymous threats, is lagging behind massively. We hear almost nothing about technology assessment through systematic research; it is demanded at most by a few, usually small, groups in civil society, which draw attention to the threats, future and present, to humankind and the Earth's ecosystem. One such group, the Federation of German Scientists (VDW) e.V., in the spirit of the responsibility of science for the peaceful and considered application of the possibilities it creates, asked three of its study groups to jointly organize its 2019 Annual Conference. The study groups “Health in Social Change,” “Education and Digitization,” and “Technology Assessment of Digitization” formulated the following position paper for the 2019 VDW Annual Conference, entitled “Ambivalences of the Digital.”
VR-based implementation of interactive laboratory experiments in optics and photonics education
(2022)
Within the framework of a developed blended learning concept, a lot of experience has already been gained with a mixture of theoretical lectures and hands-on activities, combined with the advantages of modern digital media. Here, visualizations using videos, animations and augmented reality have proven to be effective tools to convey learning content in a sustainable way. In the next step, ideas and concepts were developed to implement hands-on laboratory experiments in a virtual environment. The main focus is on the realization of virtual experiments and environments that give the students a deep insight into selected subfields of optics and photonics.
DE\GLOBALIZE
(2022)
The artistic research cycle DE\GLOBALIZE is a media ecological search movement for the terrestrial. After examining matters of fact in India (2014-18), matters of concern in Egypt (2016-2019) and matters of care in the Upper Rhine (2018-22), the focus turns toward matters of violence in the Congo (2022). From matter to mater, mother-earth, the garden to exploitation. From science, water and climate to migration, oppression and extermination.
The long-term research is accessible through an interactive web documentation. The platform serves as a continuous media-archaeological archive for a speculative ethnography. The relational structure of the videographic essay enables the forensic processing of single documents in the sense of actor-network theory.
The subject of the presentation at IFM is a field trip to the Congo planned for March 2022, which will focus on the ambivalence of violence and care in collaboration with local artists. The field trip is based on the postcolonial reflection luderitzcargo by the author from 1996, in which a freight container was transformed into a translocal cinema in Namibia.
Through the journey to Congo, a group of media artists, a psychotherapist, a theater dramaturg, a filmmaker and a philosopher intend to explore the political, technological and psycho-geographic borders. By artistic interventions with locals, we want to interfere with relational string figures as part of the new Earth Politics. They are focusing on the displaced consumption of resources which are hard-fought and guarantee prosperity in the global north. The so-called ghost acreages are repressed and justified as part of a civilizational mission. With this trip, we want to confront our self-lies with the ones of our hosts. We want to confront ourselves with the foreign, the dark and the displaced ghosts within ourselves. In the presentation at the #IFM2022 Conference, the platform DE\GLOBALIZE will be problematized itself as an example of epistemic violence for the ethnographic memory of (Western) knowledge.
We are not the missionaries but the perplexed travellers. In our search movement, we are dealing with psychoanalysis, video, performance and trance. As disoriented white men we attempt a reversal of Frantz Fanon's Black Skin, White Masks without blackfacing. We will not only care about the sensitivity of our skin but also that of our g/hosts and that of mother earth.
The identification of vulnerabilities is an important element of the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities from a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved by deep learning methods. However, that particular approach is limited to the identification of only 4 types of vulnerabilities. We therefore analyze to what extent the identification of a larger variety of vulnerabilities can be covered. A supervised deep learning approach using recurrent neural networks is applied to vulnerability detection based on binary executables. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorized features of a Word2Vec model are used to train different variations of three basic recurrent neural network architectures (GRU, LSTM, SRNN). A binary classification model was established to detect the presence of an arbitrary vulnerability, and a multi-class model was trained to identify the exact vulnerability; they achieved an out-of-sample accuracy of 88% and 77%, respectively. Differences in the detection of individual vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the methodology presented allows the accurate detection of 23 (compared to 4) vulnerability types.
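The reported metrics (out-of-sample accuracy and per-class precision for non-vulnerable samples) can be sketched on dummy labels; the RNN models themselves would require a deep-learning framework and are not reproduced here:

```python
# Sketch of the two evaluation metrics on dummy labels.
# Label convention (an assumption): 0 = non-vulnerable, 1..23 = vulnerability classes.

def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted class matches the true class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, cls):
    """Of everything predicted as `cls`, how much really is `cls`?"""
    predicted = [t for t, p in zip(y_true, y_pred) if p == cls]
    return predicted.count(cls) / len(predicted)

y_true = [0, 0, 0, 1, 2, 3, 0, 1]
y_pred = [0, 0, 0, 1, 2, 1, 0, 0]

print(accuracy(y_true, y_pred))          # 0.75
print(precision(y_true, y_pred, cls=0))  # 0.8
```

Precision on the non-vulnerable class is the figure the abstract highlights at over 98%, since false "safe" verdicts are the costly error in this setting.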
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years. Given the current increase in the use of conferencing tools for online teaching, we ask just how easy it would be (in terms of required data, hardware, and skill set) to create a convincing voice fake. We analyze how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and apply the identified as well as our own optimization techniques to two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average only 37% can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to disbelief by default.
We consider large-scale peer-to-peer sensor networks that try to calculate and distribute the mean value of all sensor inputs. For this we design, simulate, and evaluate distributed approximation algorithms that reduce the number of messages. The main difference between these algorithms is the underlying communication protocol; all use the random call model, where, in a discrete round model, each node can call a random sensor node with uniform probability. The amount of data exchanged between sensor nodes and used in the calculation affects the accuracy of the aggregation results, leading to a trade-off. The key idea of our algorithms is to limit the sample size using the Finite Population Correction (FPC) method and to collect the data via distributed aggregation using Push-Pull Sampling, Pull Sampling, and Push Sampling communication protocols. It turns out that all methods show an exponential improvement of the Mean Squared Error (MSE) with the number of messages and rounds.
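A minimal simulation of the push-pull variant under the random call model, as a sketch rather than the paper's exact algorithm (it omits the FPC-based sample-size limit):

```python
# Push-pull gossip averaging in the random call model: each round, every node
# calls one uniformly random peer; caller and callee exchange values and both
# adopt the pair's average. Values converge to the global mean.
import random

def push_pull_round(values):
    for i in range(len(values)):
        j = random.randrange(len(values))          # uniform random callee
        values[i] = values[j] = (values[i] + values[j]) / 2

def mse(values, target):
    return sum((v - target) ** 2 for v in values) / len(values)

random.seed(1)
values = [random.uniform(0.0, 100.0) for _ in range(200)]  # sensor readings
mean = sum(values) / len(values)   # pairwise averaging preserves this (up to rounding)

mse_before = mse(values, mean)
for _ in range(20):
    push_pull_round(values)
mse_after = mse(values, mean)
print(mse_before, mse_after)   # the error shrinks sharply with the round count
```

Plotting `mse_after` against the round count exhibits the exponential MSE improvement the abstract reports.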
Objective: Dickkopf 3 (DKK3) has been identified as a urinary biomarker. Values above 4000 pg/mg creatinine (Cr) have been linked with a higher risk of short-term decline of kidney function (J Am Soc Nephrol 29: 2722–2733). However, as of today, there is little experience with DKK3 as a risk marker in everyday clinical practice. We used algorithm-based data analysis to evaluate the predictive value of DKK3 in a cohort from a large single center in Germany.
Method: DKK3 was measured in all CKD patients in our center from October 1st, 2018 until December 31st, 2019, together with the calculated GFR (eGFR) and the urinary albumin/creatinine ratio (UACR). Kidney transplant patients were excluded. Until the end of follow-up on December 31st, 2021, repeated measurements were performed for all parameters. Data analysis was performed using MD-Explorer (BioArtProducts, Rostock, Germany) and Python with multiple libraries. Linear regression models were applied to DKK3, eGFR, and UACR. The models were compared with a two-sided Kolmogorov-Smirnov test.
Results: 1206 DKK3 measurements were performed in 1103 patients (621 male, age 70 yrs, eGFR 29.41 ml/min/1.73 qm, UACR 800 mg/g). 134 patients died during follow-up. The mean DKK3 was 2905 pg/mg Cr (max. 20000, 75% percentile 3800). 121 patients had DKK3 > 4000. At the end of follow-up, 7% of patients with DKK3 < 4000 (initial eGFR 17.6) versus 39.6% of patients with DKK3 > 4000 (initial eGFR 15.7) underwent dialysis. Compared to eGFR and UACR at baseline, DKK3 > 4000 performed best in predicting eGFR loss over the next 12 months.
Conclusion: In this cohort of CKD patients, DKK3 > 4000 at baseline predicted the eGFR slope better than eGFR or UACR at baseline. DKK3 > 4000 reflected a higher risk of progression towards ESRD in patients with similar baseline eGFR levels.
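The slope comparison described in the conclusion can be sketched on synthetic data; all DKK3 values and eGFR trajectories below are invented for illustration and carry no clinical meaning:

```python
# Sketch of the group comparison: patients are split at DKK3 4000 pg/mg Cr and
# an ordinary least-squares eGFR slope (per month) is fitted per patient.
def slope(months, egfr):
    """Least-squares slope of eGFR over time."""
    n = len(months)
    mx, my = sum(months) / n, sum(egfr) / n
    num = sum((x - mx) * (y - my) for x, y in zip(months, egfr))
    den = sum((x - mx) ** 2 for x in months)
    return num / den

patients = [  # (DKK3, [(month, eGFR), ...]) -- synthetic values
    (2500, [(0, 30), (6, 29), (12, 28)]),
    (3100, [(0, 25), (6, 25), (12, 24)]),
    (5200, [(0, 28), (6, 22), (12, 15)]),
    (8000, [(0, 26), (6, 19), (12, 11)]),
]

low  = [slope(*zip(*traj)) for dkk3, traj in patients if dkk3 <= 4000]
high = [slope(*zip(*traj)) for dkk3, traj in patients if dkk3 > 4000]
print(sum(low) / len(low), sum(high) / len(high))  # high-DKK3 group declines faster
```

Averaging the per-patient slopes within each DKK3 stratum mirrors the idea of comparing eGFR decline between groups split at the 4000 pg/mg Cr threshold.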