The aim of the smart grid is to achieve a more efficient, distributed and secure supply of energy than the traditional power grid by using a bidirectional information flow between the grid agents (e.g. generator nodes, customers). One of the key optimization problems in the smart grid is to distribute power production among generator nodes at minimum cost while meeting customer demand, known as the Economic Dispatch Problem (EDP). In recent years, many distributed approaches to solving the EDP have been proposed. However, protecting the privacy-sensitive data of individual generator nodes has been largely overlooked in existing solutions. In this work, we show an attack against an existing auction-based EDP protocol under a non-colluding semi-honest adversary. We also briefly introduce our approach to a practical privacy-preserving EDP solution as work in progress.
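For context, the optimization underneath (without the privacy layer) can be sketched as follows. This is a minimal, centralized solver for the quadratic-cost EDP using the standard equal-incremental-cost condition; the coefficient values are illustrative, and it deliberately omits generator limits and the distributed, auction-based protocol the abstract refers to.

```python
def economic_dispatch(costs, demand):
    """Solve min sum(a_i*P_i^2 + b_i*P_i) s.t. sum(P_i) == demand.

    costs: list of (a_i, b_i) quadratic cost coefficients, a_i > 0.
    At the optimum all marginal costs are equal: 2*a_i*P_i + b_i = lam,
    so P_i = (lam - b_i) / (2*a_i) and lam follows from the demand constraint.
    """
    inv_sum = sum(1.0 / (2 * a) for a, b in costs)
    lam = (demand + sum(b / (2 * a) for a, b in costs)) / inv_sum
    return lam, [(lam - b) / (2 * a) for a, b in costs]

# Two generators with illustrative cost coefficients, 100 MW total demand.
lam, powers = economic_dispatch([(0.01, 2.0), (0.02, 1.5)], demand=100.0)
```

In a distributed setting, each generator would compute its own share from a jointly agreed incremental cost, which is exactly where the privacy of the local coefficients becomes an issue.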
Remote code attestation protocols are an essential building block for reasonable system security on wireless embedded devices. In the work at hand we investigate in detail the trustability of a purely software-based remote code attestation inference mechanism over a wireless link, e.g. when running the prominent protocol derivative SoftWare-based ATTestation for Embedded Devices (SWATT). Besides disclosing the pitfalls of this protocol class, we also point out good parameter choices which allow at least a meaningful plausibility check with a balanced false positive and false negative ratio.
Covert and side channels, as well as techniques to establish them in cloud computing, have been a focus of research for quite some time. However, not many concrete mitigation methods have been developed, and even fewer have been adapted and concretely implemented by cloud providers. Thus, we recently proposed C3-Sched, a CPU-scheduling-based approach to mitigate L2 cache covert channels. Instead of flushing the cache on every context switch, we schedule trusted virtual machines to create noise which prevents potential covert channels. Additionally, our approach aims at preserving performance by utilizing existing instead of artificial workload, while reducing covert-channel-related cache flushes to cases where not enough noise has been achieved. In this work we evaluate the cache covert-channel mitigation and performance impact of our integration of C3-Sched into the Xen credit scheduler. Moreover, we compare it to naive solutions and to more competitive approaches.
Practical exercises are a crucial part of many curricula. Even simple exercises can improve the understanding of the underlying subject. Most experimental setups require special hardware: to carry out e.g. a lens experiment, the students need access to an optical bench, various lenses, light sources, apertures and a screen. In our previous publication we demonstrated the use of augmented reality visualization techniques to let students prepare with a simulated experimental setup. Within the context of our intended blended learning concept we want to utilize augmented or virtual reality techniques for stationary laboratory exercises. Unlike applications running on mobile devices, stationary setups can be extended more easily with additional interfaces and thus allow for more complex interactions and simulations in virtual reality (VR) and augmented reality (AR). The most significant difference is the possibility of interactions beyond touching a screen. The Leap Motion controller is a small, inexpensive device that tracks the user's hands and fingers in three dimensions. It is conceivable to let users interact with the simulation's virtual elements through their very hand positions, movements and gestures. In this paper we evaluate possible applications of the Leap Motion controller for simulated experiments in augmented and virtual reality. We pay particular attention to the device's strengths and weaknesses and point out useful and less useful application scenarios. © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
In many scientific study programs, lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures and different lens types. Therefore it is not possible for the students to conduct any of the experiments outside the university's laboratory. Simple optical software simulators enabling students to virtually perform lens experiments already exist, but are mostly desktop or web browser based.
Augmented reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality lends itself to visualizing a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare the actual laboratory sessions and/or to recap the teaching content.
The authors will present their experience with handheld augmented reality applications and their possibilities for light and optics experiments without the need for specialized optical hardware.
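The core computation behind such a simulated optical bench is the thin lens equation. A minimal sketch (paraxial approximation only; the numeric values are illustrative and not taken from the publication) could look like this:

```python
def thin_lens_image_distance(f, d_o):
    """Thin lens equation: 1/f = 1/d_o + 1/d_i  ->  d_i = f*d_o / (d_o - f).

    f:   focal length, d_o: object distance (same length unit, e.g. mm).
    Returns the image distance d_i; positive means a real image.
    """
    if d_o == f:
        return float('inf')  # object at the focal point: rays emerge parallel
    return f * d_o / (d_o - f)

def magnification(d_o, d_i):
    """Lateral magnification; negative sign means the image is inverted."""
    return -d_i / d_o

# Example: 50 mm converging lens, object 150 mm in front of it.
d_i = thin_lens_image_distance(f=50.0, d_o=150.0)
m = magnification(150.0, d_i)
```

An interactive simulation recomputes d_i and m whenever the user drags the lens or changes its focal length, and redraws the ray diagram accordingly.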
Walking interfaces offer advantages for navigation in VE systems over other types of locomotion. However, VR helmets have the disadvantage that users cannot see their immediate surroundings. Our publication describes the prototypical implementation of a virtual environment (VE) system capable of detecting possible obstacles using an RGB-D sensor. In order to warn users of potential collisions with real objects while they move through the VE tracking area, we designed four different visual warning metaphors: Placeholder, Rubber Band, Color Indicator and Arrow. A small pilot study was carried out in which the participants had to solve a simple task and avoid arbitrarily placed physical obstacles while crossing the virtual scene. Our results show that the Placeholder metaphor (in this case: trees), compared to the other variants, seems best suited for correctly estimating the position of obstacles and for evading them.
With this generation of devices, Virtual Reality (VR) has actually made it into the living rooms of end users. These devices feature 6-DOF tracking, allowing users to move naturally in virtual worlds and experience them even more immersively. However, natural locomotion in the virtual world requires a corresponding free space in the real environment. The available space is often limited, especially in everyday environments and under normal spatial conditions. Furnishings and objects of daily life can quickly become obstacles for VR users if they are not cleared away. Since the idea behind VR is to place users in a virtual world and to hide the real world as much as possible, invisible objects represent potential obstacles. The currently available systems offer only rudimentary assistance for this problem. If a user threatens to leave the space previously defined for use, a visual boundary is displayed to allow orientation within the space. These visual metaphors are intended to prevent users from leaving the safe area. However, there is no detection of potentially dangerous objects within this part of the space. Objects that have not been cleared away or that have been added in the meantime may still become obstacles. This thesis shows how possible obstacles in the environment can be detected automatically with range imaging cameras and how users can be effectively warned about them in the virtual environment without significantly disturbing their sense of presence. Four different interactive visual metaphors are used to signal the obstacles within the VE. With the help of a user study, the four signaling variants and the obstacle detection were evaluated and tested.
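The detection step can be illustrated with a toy depth-threshold sketch. Real systems additionally cluster pixels, filter sensor noise and subtract the known floor plane; the function name and parameters here are illustrative, not from the thesis.

```python
def detect_obstacles(depth, warn_dist, ignore_floor_rows=0):
    """Flag depth pixels closer than warn_dist metres.

    depth: 2D list of depth readings in metres (0.0 = no valid return),
    e.g. one frame from an RGB-D / range imaging camera.
    ignore_floor_rows: skip the bottom rows, which typically show the floor.
    """
    rows = len(depth) - ignore_floor_rows
    hits = []
    for r in range(rows):
        for c, d in enumerate(depth[r]):
            if 0.0 < d < warn_dist:          # valid reading, too close
                hits.append((r, c))
    return hits

frame = [[3.0, 3.0, 0.9],
         [3.0, 0.8, 0.9],
         [0.4, 0.4, 0.4]]   # bottom row: floor directly below the camera
hits = detect_obstacles(frame, warn_dist=1.0, ignore_floor_rows=1)
```

The resulting pixel clusters would then be mapped into tracking-space coordinates and rendered as one of the four warning metaphors.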
The authors claim that location information of stationary ICT components can never be unclassified. They describe how swarm-mapping crowdsourcing is used by Apple and Google to harvest geolocation information on wireless access points and mobile telecommunication base stations worldwide, building up gigantic databases with very exclusive access rights. After highlighting the known technical facts, in the speculative part of this article the authors argue how this may impact the cyber deterrence strategies of states and alliances that understand cyberspace as another domain of geostrategic relevance. Given the potential existence of such databases, the spectrum of activities available to states and alliances may range from geopolitical negotiations by institutions that treat international affairs as their core business, over mitigation approaches at a technical level, to means of cyber deterrence-by-retaliation.
In this work we propose to solve privacy-preserving set relations performed by a third party in an outsourced configuration. We argue that solving the disjointness relation based on Bloom filters is a new contribution, in particular because it adds another layer of privacy on the sets' cardinality. We propose to compose the set relations in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content as well as providing privacy on the sets' cardinality. We are in particular interested in how overlapping bits in the Bloom filters impact the privacy level of our approach. Finally, we present our results with real-world parameters in two concrete scenarios.
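The keyed-hash idea can be sketched as follows. This is a toy illustration of Bloom-filter disjointness testing with HMAC-derived bit positions, not the paper's exact construction; the parameters (m = 256, k = 4) and the key are arbitrary.

```python
import hmac
import hashlib

M, K = 256, 4  # filter size in bits, hash functions per element (illustrative)

def positions(key, item):
    """Derive K bit positions from HMAC-SHA256(key, item || counter).
    Without the key, an outside party cannot test membership of a guessed item."""
    pos = []
    for i in range(K):
        digest = hmac.new(key, item + bytes([i]), hashlib.sha256).digest()
        pos.append(int.from_bytes(digest[:4], 'big') % M)
    return pos

def bloom(key, items):
    """Build a Bloom filter (as an integer bitmask) over byte-string items."""
    bits = 0
    for item in items:
        for p in positions(key, item):
            bits |= 1 << p
    return bits

def certainly_disjoint(bf_a, bf_b):
    """A shared element sets identical bits in both filters, so a zero
    intersection proves disjointness; a non-zero one may be a false positive."""
    return (bf_a & bf_b) == 0

key = b'shared secret'
bf_ab = bloom(key, [b'alice', b'bob'])
bf_b = bloom(key, [b'bob'])
```

The outsourced third party only ever sees the bitmasks, and the keyed hash prevents it from probing the filters with candidate elements of its own.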
While the prospect of tracking mobile device users is being widely discussed across European countries as a means to counteract COVID-19 propagation, we propose a Bloom filter based construction that provides users' location privacy and prevents mass surveillance.
We apply a solution based on the Bloom filter data structure that allows a third party, a government agency, to perform privacy-preserving set relations on a mobile telco's access logfile.
By computing set relations, the government agency, given the knowledge of two identified persons, obtains an instrument that provides a (possible) infection chain from the initial to the final infected user, no matter at which locations on a worldwide scale they have been.
The benefit of our approach is that intermediate possibly infected users can be identified and subsequently contacted by the agency. With this approach, solely the identities of possibly infected users are revealed, and the location privacy of all others is preserved. To this extent, it meets the General Data Protection Regulation (GDPR) requirements in this area.
Strings
(2020)
This article presents the currently ongoing development of an audiovisual performance work titled Strings. This work provides an improvisation setting for a violinist, two laptop performers, and two generative systems. At the core of Strings lies an approach that establishes a strong correlation among all participants by means of a shared physical principle: that of a vibrating string. The article discusses how this principle is used in both natural and simulated forms as the main interaction layer between all performers and as a natural or generative principle for creating audio and video.
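As an illustration of how a vibrating string can serve as a generative principle, the classic Karplus-Strong algorithm simulates a plucked string with a noise-filled delay line and a simple averaging filter. This is a generic textbook sketch, not the simulation used in Strings.

```python
import random
from collections import deque

def pluck(freq_hz, duration_s, sample_rate=44100, decay=0.996, seed=1):
    """Karplus-Strong plucked string: a delay line of white noise,
    repeatedly averaged with its neighbour, relaxes into a decaying,
    pitched, string-like tone."""
    rng = random.Random(seed)
    n = int(sample_rate / freq_hz)                # delay length sets the pitch
    buf = deque(rng.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(sample_rate * duration_s)):
        first = buf.popleft()
        out.append(first)
        # low-pass feedback: average the two oldest samples, slightly damped
        buf.append(decay * 0.5 * (first + buf[0]))
    return out

samples = pluck(440.0, 0.5)  # half a second of an A4 "string"
```

In a performance setting, parameters such as the excitation, pitch and damping can be driven live by the performers, so the same physical model shapes both sound and visuals.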
Signal detection and bandwidth estimation, also known as channel segmentation or information channel estimation, is a perpetual topic in communication systems. In the field of radio monitoring this issue is extremely challenging, since unforeseeable effects like fading occur accidentally. In addition, most radio monitoring devices normally scan a wide frequency range of several hundred MHz and have to detect a multitude of different signals, varying in signal power, bandwidth and spectral shape. Since narrowband sensing techniques cannot be directly applied, most radio monitoring devices use Nyquist wideband sensing to cover the huge frequency range. In practice, sensing is normally conducted by an FFT sweep spectrum analyzer that delivers the power spectral density (PSD) values to the radio monitoring system. Channel segmentation, based on these PSD values, is the initial step of a comprehensive signal analysis in a radio monitoring system. In this paper, a novel approach for channel segmentation is presented that is based on a quantization and a histogram evaluation of the measured PSD. It will be shown that only the combination of both evaluations leads to a successful automatic channel segmentation. The performance of the proposed algorithm is shown in a real radio monitoring scenario.
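The combination described (quantization plus histogram evaluation of the PSD) can be illustrated with a toy sketch: quantize the PSD, take the histogram mode as a noise-floor estimate, and mark contiguous runs above the floor plus a margin as channels. All thresholds and the decision logic here are illustrative stand-ins, not the paper's algorithm.

```python
from collections import Counter

def segment_channels(psd_db, q_step=2.0, margin_db=6.0, min_width=3):
    """Toy channel segmentation on a sweep of PSD samples (dB)."""
    # 1) Quantize the PSD and take the histogram mode: since noise bins
    #    dominate a wideband sweep, the mode estimates the noise floor.
    quantized = [round(v / q_step) * q_step for v in psd_db]
    noise_floor = Counter(quantized).most_common(1)[0][0]
    threshold = noise_floor + margin_db
    # 2) Contiguous runs above the threshold become channels (start, stop bins).
    channels, start = [], None
    for i, v in enumerate(psd_db):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            if i - start >= min_width:
                channels.append((start, i - 1))
            start = None
    if start is not None and len(psd_db) - start >= min_width:
        channels.append((start, len(psd_db) - 1))
    return noise_floor, channels

psd = [-100.0] * 8 + [-80.0] * 5 + [-100.0] * 7   # one 5-bin signal
floor, channels = segment_channels(psd)
```

A fixed threshold alone fails when the noise floor drifts across the sweep, which is why the histogram-based floor estimate is combined with the run detection.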
Wow, You Are Terrible at This!: An Intercultural Study on Virtual Agents Giving Mixed Feedback
(2020)
While the effects of virtual agents in terms of likeability, uncanniness, etc. are well explored, it is unclear how their appearance and the feedback they give affect people's reactions. Is critical feedback from an agent embodied as a mouse or a robot taken less seriously than from a human agent? In an intercultural study with 120 participants from Germany and the US, participants had to find hidden objects in a game and received feedback on their performance from virtual agents with different appearances. As some levels were designed to be unsolvable, critical feedback was unavoidable. We hypothesized that feedback would be taken more seriously the more human the agent looked. We also expected the subjects from the US to react more sensitively to criticism. Surprisingly, our results showed that the agents' appearance did not significantly change the participants' perception. Also, while we found highly significant differences in inspirational and motivational effects as well as in perceived task load between the two cultures, the reactions to criticism were contrary to expectations based on established cultural models. This work improves our understanding of how affective virtual agents should be designed, both with respect to culture and to dialogue strategies.
Communication protocols enable information exchange between different information systems. If protocol descriptions for these systems are not available, they can be reverse-engineered for interoperability or security reasons. This master's thesis describes the analysis of such a proprietary binary protocol, named DVRIP or the Dahua private protocol, from Dahua Technology. The analysis covers the identification of the DVRIP protocol header format, its security mechanisms, and vulnerabilities inside the protocol implementation. With the insights revealed about the protocol, an increase in overall security is achieved. This thesis lays the foundation for further targeted security analyses.
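To illustrate what identifying such a header format involves, here is a generic sketch of parsing a fixed binary header with Python's struct module. The field layout below (magic byte, version, session id, sequence number, payload length) is a hypothetical example, not the actual DVRIP header documented in the thesis.

```python
import struct

# Hypothetical layout for illustration only -- NOT the real DVRIP header:
# big-endian | B magic | B version | I session id | I sequence | I payload length
HEADER = struct.Struct('>BBIII')

def parse_header(packet: bytes):
    """Parse the hypothetical 14-byte header and slice out the payload."""
    if len(packet) < HEADER.size:
        raise ValueError('packet too short')
    magic, version, session, seq, length = HEADER.unpack_from(packet)
    if magic != 0xFF:
        raise ValueError('bad magic byte')
    return {'version': version, 'session': session,
            'sequence': seq, 'payload': packet[HEADER.size:HEADER.size + length]}

pkt = HEADER.pack(0xFF, 1, 0x42, 7, 5) + b'hello'
info = parse_header(pkt)
```

Reverse engineering proceeds the other way around: capturing many packets, spotting constant bytes (candidate magic/version fields), counters (sequence numbers) and fields that correlate with packet size (length fields), then validating the guessed layout against live traffic.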
The paper presents the design and development of a blended learning concept for an engineering course in the field of color representation and display technologies. A suitable learning environment is crucial for the success of the teaching scenario. The main topic of the paper is a mixture of theoretical lectures and hands-on activities with practical applications and experiments, combined with the advantages of modern digital media. Blended learning describes the didactical alternation of attendance periods and online periods. The e-learning environment for the online period is designed for easy access and interaction. Modern digital media extend the established teaching scenarios and enable the presentation of videos, animations and augmented reality (AR). Visualizations are effective tools to impart learning content with lasting effect. The preparation and evaluation of the theoretical lectures and the hands-on activities are stimulated, which positively affects the attendance periods. The tasks and experiments require the students to work independently and to develop individual solution strategies. This engages and motivates the students and deepens their knowledge. The authors will present their experience with the implemented blended learning scenario in this field of optics and photonics. All aspects of the learning environment will be introduced.
Monitors are in the center of media productions and hold an important function as the main visual interface. Tablets and smartphones are becoming more and more important work tools in the media industry. As an extension to our lecture contents an intensive discussion of different display technologies and its applications is taking place now. The established LCD (Liquid Crystal Display) technology and the promising OLED (Organic Light Emitting Diode) technology are in the focus.
The classic LCD is currently the most important display technology. The paper will present how the students can develop a feel for display technologies beyond the theoretical scientific basics. The workshop focuses increasingly on the technical aspects of display technology and aims to deepen the students' understanding of its functionality by having them build simple liquid crystal displays themselves.
The authors will present their experience in the field of display technologies. A mixture of theoretical and practical lectures aims at a deeper understanding of digital color representation and display technologies. The design and development of a suitable learning environment with the required infrastructure is crucial. The main focus of this paper is on the hands-on optics workshop "Liquid Crystal Display in the do-it-yourself".
This paper explains the realization of a concept for research-oriented photonics education. Using the example of the integration of an actual PhD project, it is shown how students are familiarized with the topic of research and scientific work in the first semesters. Typical research activities are included as essential parts of the learning process. Research should be made visible and tangible for the students. The authors will present all aspects of the learning environment, their impressions and experiences with the implemented scenario, as well as first evaluation results of the students.
The authors explain a concept they developed for research-oriented education in optics and photonics. The paper presents which goals are to be achieved, which strategies have been developed, and how these can be implemented in a blended learning scenario. The goal of our education is the best possible qualification of the students on the basis of a strong scientific and research-oriented education, which also includes the acquisition of important interdisciplinary competences. All phases of a research process are to be mapped into the learning process, offering students insight into current research topics in optics and photonics.
Increased knowledge transfer through the integration of research projects into university teaching
(2019)
This paper describes the integration of the research project "Characterization of Color Vision using Spectroscopy and Nanotechnology: Application to Media Photonics" into an engineering course in the field of media technology. The aim is to develop the existing learning concept towards more research-oriented teaching. Involving students in research projects as part of the learning process provides a deeper insight into current research topics and the key elements of scientific work. This makes it easier for students to recognize the importance of the acquired theoretical knowledge for practice, which enables them to derive new insights of their own.
We generalize the fluid flow problem of an oscillating flat plate (Stokes' second problem) in two directions. First, we discuss the oscillating porous flat plate with superimposed blowing or suction. The second generalization concerns an increasing or decreasing velocity amplitude of the oscillating flat plate. Finally, we show that a combination of both effects is possible as well.
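For reference, the textbook solution of the unmodified problem that the two generalizations extend: a plate at y = 0 oscillating as U cos(ωt) drives a velocity field that decays and lags with distance from the wall,

```latex
u(y,t) = U\,e^{-ky}\cos(\omega t - ky),
\qquad k = \sqrt{\frac{\omega}{2\nu}},
```

where ν is the kinematic viscosity. Superimposed blowing/suction and a time-varying amplitude U(t) modify this basic exponential-times-cosine form.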
In this article, we present a taxonomy of Robot-Assisted Training, a growing body of research in Human–Robot Interaction which focuses on how robotic agents and devices can be used to enhance a user's performance during a cognitive or physical training task. Robot-Assisted Training systems have been successfully deployed to enhance the effects of a training session in various contexts, e.g., rehabilitation systems, educational environments, and vocational settings. The proposed taxonomy suggests a set of categories and parameters that can be used to characterize such systems, considering the current research trends and needs for the design, development and evaluation of Robot-Assisted Training systems. To this end, we review recent works and applications in Robot-Assisted Training systems, as well as related taxonomies in Human–Robot Interaction. The goal is to identify and discuss open challenges, highlighting the different aspects of a Robot-Assisted Training system and considering both robot perception and behavior control.
Deafblindness, also known as dual sensory loss, is the combination of sight and hearing impairments to such an extent that it becomes difficult for one sense to compensate for the other. Communication issues are a key concern for the deafblind community. We present the design and technical implementation of the Tactile Board: a mobile Augmentative and Alternative Communication (AAC) device for individuals with deafblindness. The Tactile Board allows text and speech to be translated into vibrotactile signs that are displayed in real time to the user via a haptic wearable. Our aim is to facilitate communication for the deafblind community, creating opportunities for these individuals to initiate and engage in social interactions with other people without the direct need for an intervener.
Co-Designing Assistive Tools to Support Social Interactions by Individuals Living with Deafblindness
(2020)
Deafblindness is a dual sensory impairment that affects many aspects of life, including mobility, access to information, communication, and social interactions. Furthermore, individuals living with deafblindness are under a high risk of social isolation. Therefore, we identified opportunities for applying assistive tools to support social interactions through co-ideation activities with members of the deafblind community. This work presents our co-design approach, lessons learned and directions for designing meaningful assistive tools for dual sensory loss.
In the area of cloud computing, judging the fulfillment of service-level agreements at a technical level is gaining more and more importance. To support this, we introduce privacy-preserving set relations such as inclusiveness and disjointness based on Bloom filters. We propose to compose them in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content as well as providing privacy on the sets' cardinality. Indeed, our solution brings another layer of privacy regarding the set sizes. We are in particular interested in how the overlapping bits of a Bloom filter impact the privacy level of our approach. We concretely apply our solution to a use case of cloud security auditing on access control and present our results with real-world parameters.
This work discusses several use cases of post-mortem mobile device tracking in which privacy is required, e.g. due to client-confidentiality agreements and the sensitivity of data from government agencies as well as mobile telecommunication providers. We argue that our proposed Bloom filter based privacy approach is a valuable technical building block for the arising General Data Protection Regulation (GDPR) requirements in this area. In short, we apply a solution based on the Bloom filter data structure that allows a third party to perform some privacy-preserving set relations on a mobile telco's access logfile, or other mobile access logfiles from harvesting parties, without revealing any other mobile users in the proximity of a mobile base station while still allowing perpetrators to be tracked.
In a semi-autonomic cloud auditing architecture we weaved in privacy-enhancing mechanisms [15] by applying the public-key version of the somewhat homomorphic encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use case scenario by analyzing and taking into consideration the security, correctness and performance of the scheme. To show the relevance of our proposed algorithm, we apply it to two predominant cloud auditing use cases.
In the work at hand, we argue that privacy and malleability of data are two highly desirable properties that are not easy to reconcile. On the one hand, we want to shape data so that they remain usable and editable in an intelligible way, i.e., without losing their initial information. On the other hand, we want effective privacy, such that no external or unauthorized party can learn anything about the data's content. These goals pull in opposite directions: it is trivial to be malleable without being secure, and vice versa. We propose four real-world use cases, set in distinct environments, in which these two contradictory features are both required: first cloud security auditing, then privacy of mobile network users, industry 4.0, and finally privacy of COVID-19 tracing app users. After presenting useful background material, we employ multiple approaches to design solutions for these use cases. We combine homomorphic encryption with searchable encryption and a private information retrieval protocol to build an effective construction for the cloud auditing use case. As a second step, we develop an algorithm that generates appropriate parameters for a somewhat homomorphic encryption scheme by considering the correctness, performance and security requirements of the respective application. Finally, we propose an alternative use of the Bloom filter data structure, adding an HMAC function so that an outsourced third party can evaluate set relations in a private manner. By analyzing the overlapping bits that occur on Bloom filters while testing the inclusiveness or disjointness of the underlying sets, we show how these functions maintain privacy while allowing operations to be computed directly on the data structure. We then show how these constructions can be applied to the four selected use cases.
Our solutions have been implemented, and we provide promising results that validate their efficiency and thus their relevance.
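The keyed-Bloom-filter idea can be sketched in a few lines. The filter size, hash count, and helper names below are illustrative choices of ours, not the thesis's concrete parameters:

```python
# Sketch of a Bloom filter whose elements are inserted via HMAC under a
# shared key, so a third party holding only the filters (not the key) can
# still test set relations on the bit patterns.
import hmac
import hashlib

M, K = 256, 4  # filter size in bits and number of hash functions (illustrative)

def positions(key, element):
    """Derive K bit positions from HMACs of the element under the shared key."""
    out = []
    for i in range(K):
        tag = hmac.new(key, element + bytes([i]), hashlib.sha256).digest()
        out.append(int.from_bytes(tag[:4], "big") % M)
    return out

def insert(bf, key, element):
    """Return the filter (an int used as a bit set) with the element added."""
    for p in positions(key, element):
        bf |= 1 << p
    return bf

def is_subset(bf_a, bf_b):
    """All bits of A appear in B: A may be a subset of B (up to false positives)."""
    return bf_a & ~bf_b == 0

def is_disjoint(bf_a, bf_b):
    """No overlapping bits: the underlying sets are definitely disjoint."""
    return bf_a & bf_b == 0
```

Because elements enter the filter only through a keyed HMAC, an outsourced party can evaluate `is_subset` and `is_disjoint` directly on the bit vectors without learning the underlying elements; the usual Bloom filter false-positive caveat carries over to the subset test.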
WebAssembly is a new technology for building applications in a new way. It has been developed since 2017 by the World Wide Web Consortium (W3C), and its primary purpose is to improve web applications.
Today, more and more applications are being created as web applications. Web applications have some advantages: they are platform-independent, even mobile platforms can run them, and no installation is needed apart from a modern web browser.
Currently, web applications are developed in JavaScript (JS), Hypertext Markup Language 5 (HTML5), and Cascading Style Sheets (CSS).
These technologies were not made for large web applications, but they are not meant to be replaced by WebAssembly; rather, WebAssembly is an extension of the existing technology.
The purpose of WebAssembly is to fix or mitigate the problems of web application development.
This master's thesis reviews all of these aspects and examines whether the promises of WebAssembly are kept and where problems still exist.
This thesis deals with the implementation of the character controls and combat system of the action adventure 'Scout 3D'. The game was developed with the game engine Unity 3D. In the first part, the architecture of a typical game engine is explained and its individual components are described step by step. Then, five well-known game engines are compared and evaluated. The next chapter gives a short overview of design and architecture patterns. Finally, the features of Unity used for the implementation, as well as Unity's animation system 'Mecanim', are described. The second part includes the requirement definitions for the game 'Scout COD', which define player input, the various conditions that allow or disallow certain activities, and the behaviour of enemies. With the help of patterns, the architecture of the game is designed. The implementation is then explained by means of code snippets.
In this work we describe the implementation details of a protocol suite for secure and reliable over-the-air reprogramming of restricted wireless devices. Although forward error correction codes aiming at robust transmission over a noisy wireless medium have recently been discussed and evaluated extensively, we believe that the clear value of the contribution at hand is to share our experience with a meaningful combination and implementation of various multihop (broadcast) transmission protocols and custom-fit security building blocks: for robust and reliable data transmission we make use of fountain codes, a.k.a. rateless erasure codes, and show how to combine such schemes with an underlying medium access control protocol, namely a distributed low duty cycle medium access control (DLDC-MAC). To handle the well-known packet pollution problem of forward error correction approaches, where an attacker bogusly modifies or infiltrates a small number of encoded packets and thus pollutes the whole data stream at the receiver side, we apply homomorphic message authentication codes (HomMAC). We discuss implementation details and the pros and cons of the two currently available HomMAC candidates for our setting. Both require a symmetric block cipher as the core cryptographic primitive, for which, as we will argue later, we have opted for the (exchangeable) PRESENT, PRIDE and PRINCE ciphers in our implementation.
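The rateless-erasure principle behind fountain codes can be illustrated with a toy random linear fountain over GF(2): every encoded packet carries the XOR of a random subset of source blocks, and any sufficiently large subset of received packets lets the receiver recover the data by Gaussian elimination. This is a generic sketch of the coding idea, not the protocol suite's actual code (real deployments would use structured degree distributions such as LT or Raptor codes):

```python
# Toy random linear fountain code over GF(2): packets are (mask, payload)
# pairs, where the mask records which source blocks were XORed together.
import random

def encode(blocks, n_packets, rng):
    """Emit packets, each XORing a random nonempty subset of source blocks."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        mask = 0
        while mask == 0:
            mask = rng.getrandbits(k)
        payload = bytes(len(blocks[0]))
        for i in range(k):
            if mask >> i & 1:
                payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
        packets.append((mask, payload))
    return packets

def decode(packets, k):
    """Recover the k source blocks by Gaussian elimination on the masks."""
    pivots = {}  # pivot bit -> (mask, payload) in row-echelon form
    for mask, payload in packets:
        payload = bytearray(payload)
        while mask:
            pivot = mask.bit_length() - 1
            if pivot not in pivots:
                pivots[pivot] = (mask, payload)
                break
            pmask, ppay = pivots[pivot]
            mask ^= pmask  # clears the pivot bit, strictly shrinking the mask
            payload = bytearray(a ^ b for a, b in zip(payload, ppay))
        if len(pivots) == k:
            break
    if len(pivots) < k:
        return None  # not enough linearly independent packets yet
    blocks = [None] * k
    for pivot in sorted(pivots):  # ascending: lower pivots are solved first
        mask, payload = pivots[pivot]
        mask ^= 1 << pivot
        while mask:
            j = mask.bit_length() - 1
            payload = bytearray(a ^ b for a, b in zip(payload, blocks[j]))
            mask ^= 1 << j
        blocks[pivot] = bytes(payload)
    return blocks
```

The rateless property is what matters for over-the-air reprogramming: the sender can keep emitting fresh packets until enough arrive, with no retransmission requests. It is also why pollution attacks are so damaging (one forged packet corrupts every block it is eliminated against), motivating the HomMACs discussed in the abstract.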
Video game developers continuously increase the degree of detail and realism in games to create more human-like characters. But increasing human-likeness becomes a problem with regard to the Uncanny Valley phenomenon, which predicts negative feelings of people towards artificial entities. We developed an avatar creation system to examine preferences towards parametrized faces and to explore, with regard to the Uncanny Valley phenomenon, how people design faces that they like or reject. Based on the 3D model of the Caucasian average face, 420 participants generated 1341 faces for positively and negatively associated concepts of both genders. The results show that some characteristics associated with the Uncanny Valley are used to create villains or repulsive faces. Heroic faces get attractive features but are rarely and only slightly stylized. A voluntarily designed face is very similar to the heroine's. This indicates a tendency of users to design feminine and attractive yet still credible faces.
In contrast to their traditional, non-interactive counterparts, interactive dynamic visualisations allow users to adapt their form and content to their individual cognitive skills and needs. Provided that the interactive features allow for intuitive use without increasing cognitive load, interactive videos should therefore lead to more efficient forms of learning. This notion was tested in an experimental study in which participants learned to tie four nautical knots of different complexity by watching either non-interactive or interactive videos. The results show that in the interactive condition, participants used the interactive features such as stopping, replaying, reversing or changing speed to adapt the pace of the video demonstration. This led to an uneven distribution of their attention and cognitive resources across the videos, which was more pronounced for the difficult knots. Consequently, users of the non-interactive video presentations needed substantially more time than users of the interactive videos to acquire the necessary skills for tying the knots.
G.R.E.C is an adventure game set in a dystopian industrial world, where you are a scavenger for hire. Explore the village of Vankhart Valley and grab everything valuable you can get your hands on.
Your trusty old jump boots will help you avoid the nasty and deadly spores that changed the world of G.R.E.C forever.
Brand identification has the potential to shape individuals' attitudes, performance and commitment within learning and work contexts. We explore these effects by incorporating elements of brand identification within gamified environments. We report a study with 44 employees in which task performance and emotional outcomes were assessed in a real-world assembly scenario, namely while performing a soldering task. Our results indicate that brand identification has a direct impact on individuals' attitudes towards the task at hand: while it instigates positive emotions, aversion and reactance also arise.
The core logging and tracing facility in the Windows operating system is called Event Tracing for Windows (ETW).
Data sources providing events for ETW are instrumented all over the operating system.
That means most hardware and software assets in a Windows system are instrumented with ETW and are thus able to contribute low-level information.
ETW can be used by developers and administrators to obtain low-level information about the operating system's activity.
We describe existing tools for interacting with the ETW facility and evaluate them based on defined criteria.
Based on relevant application scenarios, we show the richness of informational content for debugging or detecting security incidents with ETW.
The widespread instrumentation of ETW in the operating system and its applications also results in security risks with respect to confidentiality.
Based on common ETW providers, we show the impact on confidentiality of what ETW offers an adversary.
Finally, we evaluate solutions and approaches for a customizable telemetry infrastructure using ETW in large-scale environments.
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security properties of large-scale applications in production, they appear to neglect threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase and continuously throughout development iterations, threat modeling can help to establish a "Secure by Design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement – and thus saving resources in the long term. In this paper the current state of threat modeling is investigated. This investigation drove the derivation of requirements for the development of a new threat modeling framework and tool, called OVVL. OVVL utilizes concepts of established threat modeling methodologies, as well as functionality not available in existing solutions.
In this paper we report on the commercial background as well as the resulting high-level architecture and design of a cloud-based system for cryptographic software protection and licensing. This is based on the experiences and insights gained in the context of a real-world commercial R&D project at Wibu-Systems AG, a company that specialises in software encryption and licensing solutions.
Protecting software from illegal access, intentional modification or reverse engineering is an inherently difficult practical problem involving code obfuscation techniques and real-time cryptographic protection of code. In traditional systems a secure element (the "dongle") is used to protect software. However, this approach suffers from several technical and economic drawbacks, such as the dongle being lost or broken.
We present a system that provides such dongles as a cloud service, and more importantly, provides the required cryptographic material to control access to software functionality in real-time.
This system is developed as part of an ongoing nationally funded research project and is now entering a first trial stage with stakeholders from different industrial sectors.
Threat modelling is an accepted technique for identifying general threats as early as possible in the software development lifecycle. Our previous work presented an open-source framework and web-based tool (OVVL) for automating threat analysis on software architectures using STRIDE. However, one open problem is that available threat catalogues are either too general or proprietary to a certain domain (e.g. .NET). Another problem is that a threat analyst should not (repeatedly) be presented with a list of all possible threats, but should receive automated support for prioritizing them. This paper presents an approach to dynamically generate individual threat catalogues on the basis of the established CWE and related CVE databases. Roughly 60% of this threat catalogue generation can be done by identifying and matching certain key values. To map the remaining 40% of our data (~50,000 CVE entries), we use the already mapped 60% of our dataset to train a supervised machine-learning model for text classification. The entire generated dataset allows us to identify possible threats for each individual architectural element and automatically provide an initial prioritization. Our dataset as well as a supporting Jupyter notebook are openly available.
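The supervised text-classification step can be sketched with a minimal stdlib-only multinomial naive Bayes classifier; the real pipeline presumably uses a mature ML library, and the tiny training set below is made-up illustration, not CVE data:

```python
# Minimal multinomial naive Bayes with add-one (Laplace) smoothing,
# illustrating how already-mapped CVE descriptions (labelled with CWE IDs)
# could be used to classify unmapped ones.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-label token counts
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            toks = tokenize(text)
            self.word_counts[label].update(toks)
            self.vocab.update(toks)
        return self

    def predict(self, text):
        best, best_score = None, float("-inf")
        total = sum(self.label_counts.values())
        v = len(self.vocab)
        for label in self.label_counts:
            # log prior plus smoothed log likelihood of each token
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + v
            for tok in tokenize(text):
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best
```

In the paper's setting, the 60% of CVE entries mapped via key-value matching would play the role of `texts`/`labels`, and `predict` would propose CWE labels for the remaining 40%.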
OVVL (the Open Weakness and Vulnerability Modeller) is a tool and methodology to support threat modeling in the early stages of the secure software development lifecycle. We provide an overview of OVVL (https://ovvl.org), its data model and its browser-based UI. We also discuss initial experiments on how threats identified in the design phase can be aligned with later activities in the software lifecycle (issue management and security testing).
Prof. Gitte Lindgaard of Carleton University, Canada, says that viewing only a few milliseconds of the first page of a website defines our general opinion about it [1]. For an online shop it is therefore essential to have a first page that is not only pleasing to the eye, but also understandable enough not to lose the user's attention. Nowadays, more and more companies use the Internet not merely as a showcase but as a full-strength selling tool, and thus need to convince their users and clients at first glance. This paper presents an eye-tracking analysis of two online shops in the magazine sector. Based on the analysis of the testers' gazes and their comments during and after the test, the usability of these two websites has been evaluated.
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security properties of large-scale applications in production, they appear to neglect threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase and continuously during development iterations, threat modeling can help in following a "Security by Design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement – and thus saving resources in the long term. In this thesis the current state of threat modeling is investigated. Based on this analysis, requirements for a new tool are derived. These requirements are then used to develop a new tool, called OVVL, which utilizes all main components of current threat modeling methodologies, as well as functionality not available in existing solutions. After documenting the development process and OVVL in general, this newly developed tool is used to conduct two case studies in the fields of e-commerce and IoT.
Monitoring of the molecular structure of lubricant oil using a FT-Raman spectrometer prototype
(2014)
The determination of the physical state of lubricant materials in complex mechanical systems is highly critical from several points of view: operative, economic, environmental, etc. Furthermore, there are several parameters that a lubricant oil must meet for proper performance inside a machine. Monitoring these lubricants can be a serious issue depending on the analytical approach applied. The molecular changes of aging lubricant oils have been analyzed using a self-designed FT-Raman spectrometer built entirely from standard components. This analytical tool allows the direct and clean study of vibrational changes in the molecular structure of the oils without direct contact with the samples and without extracting the sample from the machine in operation. The FT-Raman spectrometer prototype used in the analysis of the oil samples consists of a Michelson interferometer and a self-designed photon counter cooled on a Peltier element arrangement. The light coupling has been accomplished using a conventional 62.5/125 μm multi-mode fiber coupler. The FT-Raman arrangement has been able to extract high-resolution and frequency-precise Raman spectra from the analyzed lubricant oil samples, comparable to those obtained with commercial FT-Raman systems. The spectral information has helped to determine certain molecular changes in the initial wearing phases of the oil samples. The proposed instrument prototype has no additional complex hardware components or costly software modules. The mechanical and thermal irregularities influencing the FT-Raman spectrometer have been removed mathematically by accurately evaluating the optical path difference of the Michelson interferometer. This has been achieved by producing an additional interference pattern signal with a λ = 632.8 nm helium-neon laser, which differs from the conventional zero-crossing sampling (also known as the Connes advantage) commonly used by FT devices.
This enables the FT-Raman system to perform reliable and clean spectral measurements on the analyzed oil samples.
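The Fourier-transform step at the heart of such an instrument is conceptually simple: to first order, the recorded interferogram is a sum of cosines whose frequencies encode the spectral lines, which an FFT recovers. A toy sketch with synthetic data (the two line positions are made-up; real processing would also handle noise, apodization and the OPD correction described above):

```python
# Toy FT spectroscopy: recover spectral lines from a synthetic interferogram
# sampled at equidistant optical path difference (OPD) steps.
import cmath
import math

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Synthetic interferogram: two spectral "lines" at bins 50 and 120
# (arbitrary illustrative positions), the second at half intensity.
N = 1024
interferogram = [math.cos(2 * math.pi * 50 * i / N)
                 + 0.5 * math.cos(2 * math.pi * 120 * i / N)
                 for i in range(N)]

# Magnitude spectrum (first half, since the input is real-valued).
spectrum = [abs(c) for c in fft([complex(v) for v in interferogram])[:N // 2]]
peaks = sorted(range(len(spectrum)), key=spectrum.__getitem__, reverse=True)[:2]
```

The HeNe reference signal mentioned in the abstract serves exactly to make the OPD sampling grid equidistant, which is what allows this plain FFT to produce frequency-precise spectra.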
The interaction between agents in multiagent-based control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time series data over the peer-to-peer network in order to compute an aggregation function and thereby solve a problem cooperatively. We investigate the process of averaging time series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether data is new and map data values, according to their sizes, into a histogram. This map message consists of the subintervals and of vectors for estimating the nodes joining and leaving each subinterval. At each time step, the nodes communicate with each other in synchronous rounds to exchange map messages until the network converges to a common map message. Each node then calculates the average value of the time series data produced by all nodes in the network using the histogram algorithm. The relative error between the output of averaging the time series data and the ground-truth average value in the network decreases as the size of the network increases. We perform simulations which show that the approximate histogram method provides a reasonable approximation of time series data.
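The overall scheme (nodes gossiping histogram maps until they agree, then reading the average off the bins) can be illustrated with a heavily simplified synchronous simulation. This is our own idealised toy, not the cited grouping algorithm: nodes hold exact sets of (node, bin) pairs instead of probabilistic membership estimators, and the bin range is assumed known globally:

```python
# Toy gossip averaging: each node holds a set of (node_id, bin) pairs and
# merges with one random peer per synchronous round (set union stands in
# for the probabilistic map-message merge of the real algorithm).
import random

def simulate(values, bins, rounds, rng):
    lo, hi = min(values), max(values)        # global knowledge, for simplicity
    width = (hi - lo) / bins or 1.0

    def bin_of(v):
        return min(int((v - lo) / width), bins - 1)

    n = len(values)
    state = [{(i, bin_of(v))} for i, v in enumerate(values)]
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            merged = state[i] | state[j]     # symmetric merge of histograms
            state[i] = state[j] = merged
    # Estimate the network average from node 0's histogram via bin midpoints.
    est = sum(lo + (b + 0.5) * width for _, b in state[0]) / len(state[0])
    return est

rng = random.Random(7)
values = [rng.uniform(0.0, 100.0) for _ in range(200)]
estimate = simulate(values, bins=50, rounds=15, rng=rng)
truth = sum(values) / len(values)
rel_err = abs(estimate - truth) / truth
```

Even in this crude form the simulation exhibits the abstract's main effects: gossip spreads the map exponentially fast (a handful of rounds suffices for 200 nodes), and the estimation error is bounded by the bin width, so finer histograms or larger networks shrink the relative error.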