The aim of the smart grid is to achieve a more efficient, distributed and secure supply of energy than the traditional power grid by using a bidirectional information flow between the grid agents (e.g. generator nodes, customers). One of the key optimization problems in the smart grid is to produce power among generator nodes at minimum cost while meeting customer demand, known as the Economic Dispatch Problem (EDP). In recent years, many distributed approaches to solving the EDP have been proposed. However, protecting the privacy-sensitive data of individual generator nodes has been largely overlooked in the existing solutions. In this work, we show an attack against an existing auction-based EDP protocol considering a non-colluding semi-honest adversary. We briefly introduce our approach to a practical privacy-preserving EDP solution as work in progress.
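The EDP described above can be sketched, without any privacy protection, for the common case of quadratic generator costs: at the optimum, all unconstrained generators run at the same incremental cost λ, which can be found by bisection. The cost coefficients and demand below are illustrative, not taken from the paper.

```python
# Sketch: Economic Dispatch with quadratic generator costs C_i(p) = a_i*p^2 + b_i*p,
# solved by bisection on the common incremental cost lambda ("lambda iteration").
# All generator parameters are illustrative, not from the paper.

def dispatch(gens, demand, tol=1e-9):
    """gens: list of (a, b, p_min, p_max); returns per-generator output."""
    def output_at(lam):
        # The optimal p_i satisfies dC_i/dp = 2*a*p + b = lambda, clipped to limits.
        return [min(max((lam - b) / (2 * a), lo), hi) for a, b, lo, hi in gens]

    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(output_at(mid)) < demand:
            lo = mid        # total output too low -> raise the incremental cost
        else:
            hi = mid
    return output_at((lo + hi) / 2)

gens = [(0.01, 2.0, 0.0, 300.0), (0.02, 1.5, 0.0, 200.0)]
p = dispatch(gens, demand=250.0)
```

Note that every generator's marginal cost 2*a*p + b ends up equal at the optimum, which is exactly the condition a distributed EDP protocol has to reach by message exchange.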
Remote code attestation protocols are an essential building block for offering reasonable system security on wireless embedded devices. In the work at hand we investigate in detail the trustability of a purely software-based remote code attestation inference mechanism over the wireless channel when running, for example, the prominent protocol derivative SoftWare-based ATTestation for Embedded Devices (SWATT). Besides disclosing pitfalls of this protocol class, we also point out good parameter choices which allow at least a meaningful plausibility check with a balanced false-positive and false-negative ratio.
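A toy model of the attestation exchange such protocols build on may help: the verifier sends a nonce, the prover computes a checksum over its memory in a nonce-derived pseudo-random order, and the verifier recomputes the checksum over its reference image. This is only a rough sketch under assumed parameters; real SWATT additionally relies on tight response-time measurement, which a model like this cannot capture.

```python
# Toy sketch of a SWATT-style software attestation exchange: the verifier
# sends a random nonce, the prover traverses its memory in a nonce-derived
# pseudo-random order and returns a checksum; the verifier recomputes it over
# the expected memory image. All parameters are illustrative.
import hashlib, random

MEM_SIZE = 1024

def checksum(memory, nonce, n_reads=16384):
    rng = random.Random(nonce)            # nonce-seeded traversal order
    h = hashlib.sha256()
    for _ in range(n_reads):
        addr = rng.randrange(len(memory))
        h.update(bytes([memory[addr]]))   # pseudo-random memory reads
    return h.hexdigest()

device_memory = bytearray(i % 251 for i in range(MEM_SIZE))  # "firmware"
expected_image = bytes(device_memory)                        # verifier's copy

nonce = 42
ok = checksum(device_memory, nonce) == checksum(expected_image, nonce)

# A single flipped byte is detected with overwhelming probability, since the
# 16384 reads hit the modified address about 16 times on average.
tampered = bytearray(device_memory)
tampered[100] ^= 0xFF
detected = checksum(tampered, nonce) != checksum(expected_image, nonce)
```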
Covert and side channels, as well as techniques to establish them in cloud computing, have been a focus of research for quite some time. However, not many concrete mitigation methods have been developed, and even fewer have been adapted and concretely implemented by cloud providers. Thus, we recently conceptually proposed C³-Sched, a CPU-scheduling-based approach to mitigate L2 cache covert channels. Instead of flushing the cache on every context switch, we schedule trusted virtual machines to create noise which prevents potential covert channels. Additionally, our approach aims at preserving performance by utilizing existing instead of artificial workload, while reducing covert-channel-related cache flushes to cases where not enough noise has been achieved. In this work we evaluate the cache covert-channel mitigation and the performance impact of our integration of C³-Sched into the XEN credit scheduler. Moreover, we compare it to naive solutions and more competitive approaches.
Practical exercises are a crucial part of many curricula. Even simple exercises can improve the understanding of the underlying subject. Most experimental setups require special hardware. To carry out, e.g., a lens experiment, the students need access to an optical bench, various lenses, light sources, apertures and a screen. In our previous publication we demonstrated the use of augmented reality visualization techniques to let the students prepare with a simulated experimental setup. Within the context of our intended blended learning concept we want to utilize augmented or virtual reality techniques for stationary laboratory exercises. Unlike applications running on mobile devices, stationary setups can be extended more easily with additional interfaces and thus allow for more complex interactions and simulations in virtual reality (VR) and augmented reality (AR). The most significant difference is the possibility of allowing interactions beyond touching a screen. The Leap Motion controller is a small, inexpensive device that tracks the user's hands and fingers in three dimensions. It is conceivable to let the user interact with the simulation's virtual elements through hand position, movement and gestures. In this paper we evaluate possible applications of the Leap Motion controller for simulated experiments in augmented and virtual reality. We pay particular attention to the device's strengths and weaknesses and point out useful and less useful application scenarios.
In many scientific study programs, lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures and different lens types. Therefore it is not possible for the students to conduct any of the experiments outside of the university's laboratory. Simple optical software simulators enabling the students to virtually perform lens experiments already exist, but they are mostly desktop- or web-browser-based.
Augmented Reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can readily be used to visualize a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare the actual laboratory sessions and/or to recap the teaching content.
The authors will present their experience with handheld augmented reality applications and their possibilities for light and optics experiments without the need for specialized optical hardware.
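The core computation behind such a simulated optical bench is the thin-lens equation 1/f = 1/d_o + 1/d_i. A minimal sketch of what such a simulator evaluates (function names and values are illustrative, not taken from the described application):

```python
# Sketch: thin-lens equation 1/f = 1/d_o + 1/d_i, as used in a simulated
# optical bench. Returns image distance and lateral magnification.
# Function names are illustrative, not from the described AR application.

def image_distance(f, d_o):
    """Image distance for focal length f and object distance d_o (same units)."""
    if d_o == f:
        return float("inf")  # rays leave the lens parallel; no real image
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    """Lateral magnification m = -d_i / d_o (negative: inverted image)."""
    return -d_i / d_o

d_i = image_distance(f=50.0, d_o=100.0)   # symmetric 2f-2f configuration
m = magnification(100.0, d_i)             # -1: inverted, same size
```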
Walking interfaces offer advantages for navigation in virtual environment (VE) systems over other types of locomotion. However, VR helmets have the disadvantage that users cannot see their immediate surroundings. Our publication describes the prototypical implementation of a VE system capable of detecting possible obstacles using an RGB-D sensor. In order to warn users of potential collisions with real objects while they are moving throughout the VE tracking area, we designed four different visual warning metaphors: Placeholder, Rubber Band, Color Indicator and Arrow. A small pilot study was carried out in which the participants had to solve a simple task and avoid arbitrarily placed physical obstacles when crossing the virtual scene. Our results show that the Placeholder metaphor (in this case: trees) seems, compared to the other variants, to be best suited for correctly estimating the position of obstacles and for evading them.
In this work we propose to solve privacy-preserving set relations performed by a third party in an outsourced configuration. We argue that solving the disjointness relation based on Bloom filters is a new contribution, in particular by adding another layer of privacy on the sets' cardinality. We propose to compose the set relations in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content as well as how it provides privacy on the sets' cardinality. We are particularly interested in how overlapping bits in the Bloom filters impact the privacy level of our approach. Finally, we present our results with real-world parameters in two concrete scenarios.
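The disjointness test itself can be sketched as follows: if the bitwise AND of two Bloom filters is empty, the underlying sets are provably disjoint, while overlapping bits may produce false "possibly intersecting" answers. The HMAC-based keyed hashing below loosely mirrors the keyed hash function mentioned above; filter size, hash count and key are illustrative.

```python
# Sketch: Bloom-filter-based disjointness test. Two sets are reported
# disjoint if their filters share no set bit; overlapping bits can cause
# false "possibly intersecting" answers but never false "disjoint" ones.
# The keyed hashing (HMAC-SHA256 under a shared key) loosely mirrors the
# keyed hash function of the abstract; M, K and the key are illustrative.
import hmac, hashlib

M, K, KEY = 256, 4, b"shared-secret"

def positions(item):
    digest = hmac.new(KEY, item.encode(), hashlib.sha256).digest()
    # Derive K bit positions from consecutive 2-byte slices of the digest.
    return [int.from_bytes(digest[2 * i:2 * i + 2], "big") % M for i in range(K)]

def bloom(items):
    bits = 0
    for it in items:
        for p in positions(it):
            bits |= 1 << p
    return bits

def maybe_intersect(bf_a, bf_b):
    """False => provably disjoint; True => possibly intersecting."""
    return (bf_a & bf_b) != 0

a = bloom({"alice", "bob"})
b = bloom({"carol", "dave"})
c = bloom({"bob", "erin"})   # shares "bob" with a
```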
Qualitative science, artistic research and research-based learning combine insights from practice and experience. Through an autoethnography of the author's own workshop of listening, as well as of the culture in the studios of others, the still young interdiscipline of Sound (Studies) is explored and deepened, with impulses for practice and theory: from the as yet little-known a/r/tography of today towards a future A/R/Tophonie, artistic research in music, as well as through sound composition, radio art and visual music.
Strings
(2020)
This article presents the currently ongoing development of an audiovisual performance work with the title Strings. The work provides an improvisation setting for a violinist, two laptop performers, and two generative systems. At the core of Strings lies an approach that establishes a strong correlation among all participants by means of a shared physical principle: that of a vibrating string. The article discusses how this principle is used, in both natural and simulated forms, as the main interaction layer between all performers and as a natural or generative principle for creating audio and video.
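The shared physical principle can be illustrated by a minimal finite-difference simulation of an ideal string with fixed ends; at Courant number 1 the leapfrog scheme reproduces the exact d'Alembert solution at the grid points, so the string returns exactly to its initial shape after one period. Parameters are illustrative; the piece's actual simulation is not described at this level of detail.

```python
# Sketch: 1D vibrating string with fixed ends, integrated with the
# finite-difference leapfrog scheme at Courant number 1 (c = 1, dx = dt),
# where the scheme is exact at the grid points. Illustrative only.
import math

def simulate_string(n=64, steps=128):
    """n grid intervals on a unit string; returns displacement after `steps`
    time steps (steps = 2n corresponds to one full period)."""
    u_prev = [math.sin(math.pi * i / n) for i in range(n + 1)]  # plucked shape
    # First step for zero initial velocity.
    u = [0.0] + [0.5 * (u_prev[i + 1] + u_prev[i - 1]) for i in range(1, n)] + [0.0]
    for _ in range(steps - 1):
        # Leapfrog update at Courant number 1: u_next = u_right + u_left - u_old.
        u_next = [0.0] + [u[i + 1] + u[i - 1] - u_prev[i] for i in range(1, n)] + [0.0]
        u_prev, u = u, u_next
    return u

u_period = simulate_string()   # one full period: back to the initial shape
```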
Signal detection and bandwidth estimation, also known as channel segmentation or information channel estimation, is a perennial topic in communication systems. In the field of radio monitoring this task is extremely challenging, since unforeseeable effects like fading occur accidentally. In addition, most radio monitoring devices normally scan a wide frequency range of several hundred MHz and have to detect a multitude of different signals, varying in signal power, bandwidth and spectral shape. Since narrowband sensing techniques cannot be directly applied, most radio monitoring devices use Nyquist wideband sensing to cover the huge frequency range. In practice, sensing is normally conducted by an FFT sweep spectrum analyzer that delivers the power spectral density (PSD) values to the radio monitoring system. Channel segmentation based on the PSD values is the initial step of a comprehensive signal analysis in a radio monitoring system. In this paper, a novel approach to channel segmentation is presented that is based on a quantization and a histogram evaluation of the measured PSD. It will be shown that only the combination of both evaluations leads to a successful automatic channel segmentation. The performance of the proposed algorithm is shown in a real radio monitoring scenario.
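The combination of quantization and histogram evaluation can be illustrated with a strongly simplified sketch: quantize the PSD, take the most frequent quantization level as the noise floor, and report contiguous runs above a margin as channels. The step size and margin below are illustrative and not the paper's actual parameters.

```python
# Simplified sketch of histogram-based channel segmentation: the mode of the
# quantized PSD approximates the noise floor, and contiguous bins above
# floor + margin are reported as signal channels. The 1-dB quantization step
# and 6-dB margin are illustrative, not the paper's actual algorithm.

def segment_channels(psd_db, step_db=1.0, margin_db=6.0):
    # Histogram over quantized PSD levels; the mode approximates the noise floor.
    counts = {}
    for v in psd_db:
        q = round(v / step_db) * step_db
        counts[q] = counts.get(q, 0) + 1
    noise_floor = max(counts, key=counts.get)

    # Contiguous runs above the threshold are reported as (start, end) bins.
    thresh = noise_floor + margin_db
    channels, start = [], None
    for i, v in enumerate(psd_db):
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            channels.append((start, i - 1))
            start = None
    if start is not None:
        channels.append((start, len(psd_db) - 1))
    return noise_floor, channels

# Flat -100 dBm/Hz noise floor with two raised "signals".
psd = [-100.0] * 30
for i in range(5, 10):
    psd[i] = -80.0
for i in range(20, 24):
    psd[i] = -85.0
floor, chans = segment_channels(psd)
```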
Wow, You Are Terrible at This!: An Intercultural Study on Virtual Agents Giving Mixed Feedback
(2020)
While the effects of virtual agents in terms of likeability, uncanniness, etc. are well explored, it is unclear how their appearance and the feedback they give affect people's reactions. Is critical feedback from an agent embodied as a mouse or a robot taken less seriously than from a human agent? In an intercultural study with 120 participants from Germany and the US, participants had to find hidden objects in a game and received feedback on their performance from virtual agents with different appearances. As some levels were designed to be unsolvable, critical feedback was unavoidable. We hypothesized that feedback would be taken more seriously the more human the agent looked. We also expected the subjects from the US to react more sensitively to criticism. Surprisingly, our results showed that the agents' appearance did not significantly change the participants' perception. Also, while we found highly significant differences in inspirational and motivational effects as well as in perceived task load between the two cultures, the reactions to criticism were contrary to expectations based on established cultural models. This work improves our understanding of how affective virtual agents should be designed, both with respect to culture and to dialogue strategies.
The paper presents the design and development of a blended learning concept for an engineering course in the field of color representation and display technologies. A suitable learning environment is crucial for the success of the teaching scenario. The main topic of the paper is a mixture of theoretical lectures and hands-on activities with practical applications and experiments, combined with the advantages of modern digital media. Blended learning describes the didactical alternation of attendance periods and online periods. The e-learning environment for the online period is designed for easy access and interaction. Modern digital media extend the established teaching scenarios and enable the presentation of videos, animations and augmented reality (AR). Visualizations are effective tools to impart learning content with lasting effect. The preparation and evaluation of the theoretical lectures and the hands-on activities is thereby stimulated, which positively affects the attendance periods. The tasks and experiments require the students to work independently and to develop individual solution strategies. This engages and motivates the students and deepens their knowledge. The authors will present their experience with the implemented blended learning scenario in this field of optics and photonics. All aspects of the learning environment will be introduced.
Monitors are in the center of media productions and hold an important function as the main visual interface. Tablets and smartphones are becoming more and more important work tools in the media industry. As an extension to our lecture contents an intensive discussion of different display technologies and its applications is taking place now. The established LCD (Liquid Crystal Display) technology and the promising OLED (Organic Light Emitting Diode) technology are in the focus.
The classic LCD is currently the most important display technology. The paper will present how the students develop a sense for display technologies beyond the theoretical scientific basics. The workshop focuses increasingly on the technical aspects of display technology and aims at deepening the students' understanding of its functionality by having them build simple liquid crystal displays themselves.
The authors will present their experience in the field of display technologies. A mixture of theoretical and practical lectures aims at a deeper understanding of digital color representation and display technologies. The design and development of a suitable learning environment with the required infrastructure is crucial. The main focus of this paper is on the hands-on optics workshop "Liquid Crystal Display in the do-it-yourself".
This paper explains the realization of a concept for research-oriented photonics education. Using the example of the integration of an actual PhD project, it is shown how students are familiarized with the topic of research and scientific work in the first semesters. Typical research activities are included as essential parts of the learning process. Research should be made visible and tangible for the students. The authors will present all aspects of the learning environment, their impressions and experiences with the implemented scenario, as well as first evaluation results of the students.
The authors explain a concept developed for research-oriented education in optics and photonics. They present which goals are to be achieved, which strategies have been developed and how these can be implemented in a blended learning scenario. The goal of our education is the best possible qualification of the students on the basis of a strong scientific and research-oriented education, which also includes the acquisition of important interdisciplinary competences. All phases of a research process are mapped into the learning process and offer students an insight into current research topics in optics and photonics.
Increased knowledge transfer through the integration of research projects into university teaching
(2019)
This paper describes the integration of the research project "Characterization of Color Vision using Spectroscopy and Nanotechnology: Application to Media Photonics" into an engineering course in the field of media technology. The aim is to develop the existing learning concept towards a more research-oriented teaching. Involving students in research projects as part of the learning process provides a deeper insight into current research topics and the key elements of scientific work. This makes it easier for students to recognize the importance of the acquired theoretical knowledge for practice, which enables them to derive new insights of their own.
Deafblindness, also known as dual sensory loss, is the combination of sight and hearing impairments of such an extent that it becomes difficult for one sense to compensate for the other. Communication issues are a key concern for the Deafblind community. We present the design and technical implementation of the Tactile Board: a mobile Augmentative and Alternative Communication (AAC) device for individuals with deafblindness. The Tactile Board allows text and speech to be translated into vibrotactile signs that are displayed to the user in real time via a haptic wearable. Our aim is to facilitate communication for the deafblind community, creating opportunities for these individuals to initiate and engage in social interactions with other people without the direct need of an intervener.
Co-Designing Assistive Tools to Support Social Interactions by Individuals Living with Deafblindness
(2020)
Deafblindness is a dual sensory impairment that affects many aspects of life, including mobility, access to information, communication, and social interactions. Furthermore, individuals living with deafblindness are under a high risk of social isolation. Therefore, we identified opportunities for applying assistive tools to support social interactions through co-ideation activities with members of the deafblind community. This work presents our co-design approach, lessons learned and directions for designing meaningful assistive tools for dual sensory loss.
This work discusses several use cases of post-mortem mobile device tracking in which privacy is required, e.g. due to client-confidentiality agreements and the sensitivity of data from government agencies as well as mobile telecommunication providers. We argue that our proposed Bloom-filter-based privacy approach is a valuable technical building block for the arising General Data Protection Regulation (GDPR) requirements in this area. In short, we apply a solution based on the Bloom filter data structure that allows a third party to perform some privacy-preserving set relations on a mobile telco's access log file, or other mobile access log files from harvesting parties, without revealing any other mobile users in the proximity of a mobile base station while still allowing perpetrators to be tracked.
In a semi-autonomic cloud auditing architecture we weaved in privacy-enhancing mechanisms [15] by applying the public-key version of the somewhat homomorphic encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use case scenario by analyzing and taking into consideration the security, correctness and performance of the scheme. To show the relevance of our proposed algorithm, we also apply it to two predominant cloud auditing use cases.
In this work we describe the implementation details of a protocol suite for secure and reliable over-the-air reprogramming of restricted wireless devices. Although forward error correction codes aiming at robust transmission over a noisy wireless medium have recently been discussed and evaluated extensively, we believe that the clear value of the contribution at hand is to share our experience with a meaningful combination and implementation of various multihop (broadcast) transmission protocols and custom-fit security building blocks: For robust and reliable data transmission we make use of fountain codes, a.k.a. rateless erasure codes, and show how to combine such schemes with an underlying medium access control protocol, namely a distributed low duty cycle medium access control (DLDC-MAC). To handle the well-known packet pollution problem of forward error correction approaches, where an attacker maliciously modifies or infiltrates some minor number of encoded packets and thus pollutes the whole data stream at the receiver side, we apply homomorphic message authentication codes (HomMAC). We discuss implementation details and the pros and cons of the two currently available HomMAC candidates for our setting. Both require a symmetric block cipher as the core cryptographic primitive, for which, as we will argue later, we have opted for the (exchangeable) PRESENT, PRIDE and PRINCE ciphers in our implementation.
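The rateless-erasure idea can be sketched independently of the security layer: encoded packets are XOR combinations of source blocks, and a peeling decoder substitutes already-recovered blocks until packets of effective degree one reveal new ones. This toy version (systematic prefix, byte-sized blocks, ad-hoc degree choice) is illustrative only and not the fountain code used in the described implementation.

```python
# Toy sketch of a rateless (fountain) erasure code with a peeling decoder.
# Packets are (index set, XOR payload); parameters and the degree choice are
# illustrative, not those of the implementation described above.
import random

def encode(blocks, n_extra, seed=1):
    """Systematic prefix (each block alone) followed by n_extra packets that
    XOR a random subset of blocks."""
    rng = random.Random(seed)
    packets = [({i}, b) for i, b in enumerate(blocks)]
    for _ in range(n_extra):
        idxs = rng.sample(range(len(blocks)), rng.randint(2, len(blocks)))
        payload = 0
        for i in idxs:
            payload ^= blocks[i]
        packets.append((set(idxs), payload))
    return packets

def decode(packets, n_blocks):
    """Peeling decoder: substitute recovered blocks into packets until
    packets of effective degree one reveal new blocks."""
    work = [[set(s), p] for s, p in packets]
    recovered = {}
    progress = True
    while progress and len(recovered) < n_blocks:
        progress = False
        for pkt in work:
            s, p = pkt
            for i in [j for j in s if j in recovered]:
                s.discard(i)
                p ^= recovered[i]
            pkt[1] = p
            if len(s) == 1:
                (i,) = s
                if i not in recovered:
                    recovered[i] = p
                    progress = True
    return [recovered.get(i) for i in range(n_blocks)]

blocks = [0x12, 0x34, 0x56, 0x78]
received = encode(blocks, n_extra=4)
decoded = decode(received, len(blocks))
```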
Video game developers continuously increase the degree of detail and realism in games to create more human-like characters. But increasing the human-likeness becomes a problem with regard to the Uncanny Valley phenomenon, which predicts negative feelings of people towards artificial entities. We developed an avatar creation system to examine preferences towards parametrized faces and to explore, with regard to the Uncanny Valley phenomenon, how people design faces that they like or reject. Based on the 3D model of the Caucasian average face, 420 participants generated 1341 faces for positively and negatively associated concepts of both genders. The results show that some characteristics associated with the Uncanny Valley are used to create villains or repulsive faces. Heroic faces are given attractive features but are rarely and only slightly stylized. A voluntarily designed face is very similar to the heroine. This indicates a tendency of users to design feminine and attractive but still credible faces.
Brand identification has the potential of shaping individuals' attitudes, performance and commitment within learning and work contexts. We explore these effects, by incorporating elements of branded identification within gamified environments. We report a study with 44 employees, in which task performance and emotional outcomes are assessed in a real-world assembly scenario - namely, while performing a soldering task. Our results indicate that brand identification has a direct impact on individuals' attitude towards the task at hand: while instigating positive emotions, aversion and reactance also arise.
Blockchain frameworks enable the immutable storage of data. A still open practical question is the so-called "oracle" problem, i.e. the way real-world data is actually transferred into and out of a blockchain while preserving its integrity. We present a case study that demonstrates how to use an existing industrial-strength secure element for cryptographic software protection (the Wibu CmDongle, the "dongle") as such a hardware-based oracle for the Hyperledger blockchain framework. Our scenario is that of a dentist having leased a 3D printer. This printer is initially supplied with an amount of x printing units. With each print action the local unit counter on the attached dongle is decreased, and in parallel a unit counter is maintained in the Hyperledger-based blockchain. Once a threshold is met, the printer will stop working (by means of the cryptographically protected invocation of the local print method). The blockchain is configured in such a way that chaincode is executed to increase the units again automatically (and essentially trigger any payment processes). Once this has happened, the new unit counter value is passed from the blockchain to the local dongle, allowing further print jobs to be executed.
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security properties of large-scale applications in production, they appear to neglect threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase of development, and continuously throughout development iterations, threat modeling can help to establish a "secure by design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement and thus saving resources in the long term. In this paper the current state of threat modeling is investigated. This investigation drove the derivation of requirements for the development of a new threat modeling framework and tool, called OVVL. OVVL utilizes concepts of established threat modeling methodologies, as well as functionality not available in existing solutions.
In this paper we report on the commercial background as well as resulting high-level architecture and design of a cloud-based system for cryptographic software protection and licensing. This is based on the experiences and insights gained in the context of a real-world commercial R&D project at Wibu-Systems AG, a company that specialises in software encryption and licensing solutions.
Protecting software from illegal access, intentional modification or reverse engineering is an inherently difficult practical problem involving code obfuscation techniques and real-time cryptographic protection of code. In traditional systems a secure element (the "dongle") is used to protect software. However, this approach suffers from several technical and economical drawbacks such as the dongle being lost or broken.
We present a system that provides such dongles as a cloud service, and more importantly, provides the required cryptographic material to control access to software functionality in real-time.
This system is developed as part of an ongoing nationally funded research project and is now entering a first trial stage with stakeholders from different industrial sectors.
Threat modeling is an accepted technique to identify general threats as early as possible in the software development lifecycle. Previous work of ours presented an open-source framework and web-based tool (OVVL) for automating threat analysis on software architectures using STRIDE. However, one open problem is that available threat catalogues are either too general or proprietary with respect to a certain domain (e.g. .NET). Another problem is that a threat analyst should not only be presented (repeatedly) with a list of all possible threats, but also with some automated support for prioritizing them. This paper presents an approach to dynamically generate individual threat catalogues on the basis of the established CWE and related CVE databases. Roughly 60% of this threat catalogue generation can be done by identifying and matching certain key values. To map the remaining 40% of our data (~50,000 CVE entries), we train a text classification model on the already mapped 60% of our dataset to perform supervised machine-learning-based text classification. The generated dataset allows us to identify possible threats for each individual architectural element and automatically provide an initial prioritization. Our dataset as well as a supporting Jupyter notebook are openly available.
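The supervised text-classification step can be illustrated with a minimal hand-rolled multinomial naive Bayes over token counts. The abstract does not specify which classifier was used, and the CVE-style snippets and CWE labels below are toy data, not the paper's dataset.

```python
# Minimal multinomial Naive Bayes sketch for the kind of supervised text
# classification described above: mapping CVE-style descriptions to CWE
# classes. Training snippets and labels are illustrative toy data; the
# paper does not specify its classifier.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label). Returns a model for classify()."""
    class_docs = defaultdict(int)
    class_words = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        toks = text.lower().split()
        class_docs[label] += 1
        class_words[label].update(toks)
        vocab.update(toks)
    return class_docs, class_words, vocab, len(samples)

def classify(model, text):
    class_docs, class_words, vocab, n = model
    toks = text.lower().split()
    best, best_lp = None, float("-inf")
    for label in class_docs:
        total = sum(class_words[label].values())
        lp = math.log(class_docs[label] / n)  # log prior
        for t in toks:
            # Laplace-smoothed log likelihood per token.
            lp += math.log((class_words[label][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train_data = [
    ("sql injection in login form allows database access", "CWE-89"),
    ("improper neutralization of sql commands in query parameter", "CWE-89"),
    ("buffer overflow in packet parser causes memory corruption", "CWE-120"),
    ("stack based buffer overflow when copying oversized input", "CWE-120"),
]
model = train(train_data)
```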
OVVL (the Open Weakness and Vulnerability Modeller) is a tool and methodology to support threat modeling in the early stages of the secure software development lifecycle. We provide an overview of OVVL (https://ovvl.org), its data model and its browser-based UI. We also provide a discussion of initial experiments on how threats identified in the design phase can be aligned with later activities in the software lifecycle (issue management and security testing).
Due to the increasing importance of e-exams at universities, solutions are needed that allow existing computer pool rooms to be used easily, quickly and securely for different examination scenarios. The bwLehrpool project has shown in the past that, with the help of virtualization, a large number of different, individualized teaching environments can be distributed flexibly and independently of location. As a next step, extensions are to be developed that make this flexibility usable for electronic examinations as well. Above all, the advantages, such as software support for realistic exam tasks, have to be reconciled with the need for the greatest possible security and fast reconfiguration times of the infrastructure. To test the current state of development, an e-exam under bwLehrpool was taken by more than 140 students at Offenburg University in the winter semester 2015/2016. The results show that the requirements have so far been implemented successfully, but that more manual effort is still needed than desired. The process is to be further simplified and consolidated in the future.
Monitoring of the molecular structure of lubricant oil using a FT-Raman spectrometer prototype
(2014)
Determining the physical state of lubricant materials in complex mechanical systems is highly critical from several points of view: operational, economic, environmental, etc. Furthermore, there are several parameters a lubricant oil must meet to perform properly inside a machine, and monitoring these lubricants can be a serious issue depending on the analytical approach applied. The molecular changes of aging lubricant oils have been analyzed using a self-designed FT-Raman spectrometer built entirely from standard components. This analytical tool allows the direct and clean study of vibrational changes in the molecular structure of the oils without direct contact with the samples and without extracting the sample from the machine in operation. The FT-Raman spectrometer prototype used in the analysis of the oil samples consists of a Michelson interferometer and a self-designed photon counter cooled on a Peltier element arrangement. The light coupling has been accomplished using a conventional 62.5/125 μm multi-mode fiber coupler. The FT-Raman arrangement has been able to extract high-resolution, frequency-precise Raman spectra from the analyzed lubricant oil samples, comparable to those obtained with commercial FT-Raman systems. The spectral information has helped to identify certain molecular changes in the initial wearing phases of the oil samples. The proposed instrument prototype requires no additional complex hardware components or costly software modules. The mechanical and thermal irregularities influencing the FT-Raman spectrometer have been removed mathematically by accurately evaluating the optical path difference of the Michelson interferometer. This has been achieved by producing an additional interference-pattern signal with a λ = 632.8 nm helium-neon laser, which differs from the conventional zero-crossing sampling (also known as the Connes advantage) commonly used by FT devices. This enables the FT-Raman system to perform reliable and clean spectral measurements on the analyzed oil samples.
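The core numerical step of any Fourier-transform spectrometer, recovering the spectrum from the interferogram, can be sketched as a discrete Fourier transform. This is a minimal illustration of that step only; it omits the prototype's actual processing, in particular the HeNe-based evaluation of the optical path difference and any phase correction, and the sample signal is made up.

```python
# Minimal sketch: magnitude spectrum of an (evenly resampled) interferogram
# via a plain DFT. Real instruments use an FFT plus phase correction.
import cmath
import math

def interferogram_to_spectrum(samples):
    """Return the magnitude spectrum for the lower half of the DFT bins."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(n // 2)]
```

For example, a pure cosine interferogram at bin 5 yields a spectrum whose maximum sits at index 5.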
The interaction between agents in multi-agent control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time-series data over the peer-to-peer network in order to compute an aggregation function and thereby solve a problem cooperatively. We investigate the process of averaging the time-series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether data is new and map data values, according to their size, into a histogram. This map message consists of the subintervals and of vectors for estimating the nodes joining and leaving each subinterval. At each time step, the nodes communicate with each other in synchronous rounds, exchanging map messages until the network converges to a common map message. Each node then calculates the average of the time-series data produced by all nodes in the network using the histogram algorithm. The relative error between the output of the time-series averaging and the ground-truth network average decreases as the size of the network increases. Our simulations show that the approximate-histogram method provides a reasonable approximation of the time-series data.
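The idea of decentralized averaging in synchronous rounds can be illustrated with a much simpler stand-in than the histogram-based map messages of Cichon et al.: plain gossip averaging, where every node replaces its value with the mean over its closed neighbourhood. The topology and values below are made up; on a regular topology the iteration matrix is doubly stochastic, so all nodes converge to the true network average.

```python
# Illustrative gossip-averaging sketch (NOT the histogram protocol above):
# in each synchronous round every node averages its own value with those
# of its neighbours.
def gossip_average(values, neighbours, rounds=50):
    """values: node id -> measurement; neighbours: node id -> list of ids."""
    state = dict(values)
    for _ in range(rounds):
        new_state = {}
        for node, val in state.items():
            group = [val] + [state[n] for n in neighbours[node]]
            new_state[node] = sum(group) / len(group)
        state = new_state
    return state
```

On a ring of four nodes, for instance, every node's value converges geometrically to the mean of all initial measurements.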
This paper describes a comparative study of two tactile systems supporting navigation for persons with little or no visual and auditory perception. The efficacy of a tactile head-mounted device (HMD) was compared to that of a wearable device, a tactile belt. A study with twenty participants showed that participants completed a course in significantly less time when navigating with the HMD than with the belt.
We present a privacy-friendly, cloud-based smart-metering storage architecture that provides few-instance storage of encrypted measurements while at the same time allowing SQL queries on them. Our approach is flexible along two axes: on the one hand, it allows filtering rules to be applied to encrypted data for various upcoming business cases; on the other hand, it provides means for storage-efficient handling of encrypted measurements by applying server-side deduplication across all incoming smart-meter measurements. Although the work at hand is dedicated purely to a smart-metering architecture, we believe our approach has value for a broader class of IoT cloud storage solutions. Moreover, it is an example of privacy by design supporting the positive-sum paradigm.
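One standard building block for combining encryption with server-side deduplication is convergent encryption, where the key is derived from the plaintext itself so that identical measurements produce identical ciphertexts. The sketch below is a toy illustration of that idea only, with a toy XOR keystream; it is not the paper's actual scheme and does not provide the SQL-query functionality described above.

```python
# Toy convergent-encryption sketch: content-derived key + dedup tag.
# Illustration only; not a production cipher and not the paper's scheme.
import hashlib

def keystream(key, length):
    """Expand a key into a pseudo-random byte stream (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(measurement):
    # key derived from the plaintext: identical measurements yield
    # identical ciphertexts, so the server can deduplicate them
    key = hashlib.sha256(measurement).digest()
    ct = bytes(m ^ k for m, k in zip(measurement,
                                     keystream(key, len(measurement))))
    tag = hashlib.sha256(key).hexdigest()  # dedup tag, reveals no plaintext
    return tag, ct
```

Because the tag is deterministic in the plaintext, the storage server can keep a single instance per distinct measurement; decryption is the same XOR with the keystream derived from the key.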
Logging information is precious because it records the execution of a system; it is produced by millions of events, from simple application logins to random system errors. Most of the security-related problems in the cloud ecosystem, such as intruder attacks, data loss, and denial of service, could be avoided if the Cloud Service Provider (CSP) or Cloud User (CU) analysed the logging information. In this paper we introduce several challenges, namely the place of monitoring, security, and the ownership of logging information between CSP and CU. We also propose a logging architecture for analysing the behaviour of the cloud ecosystem in order to avoid data breaches and other security-related issues in the CSP space. We believe the proposed architecture can provide maximum trust between CU and CSP.
Computing Aggregates on Autonomous, Self-organizing Multi-Agent System: Application "Smart Grid"
(2017)
Decentralized data aggregation plays an important role in estimating the state of the smart grid, allowing the determination of meaningful system-wide measures (such as current power generation, consumption, etc.) to balance the power in the grid. Data aggregation is practicable only if it is performed effectively, yet many existing approaches lack fault tolerance. We present an approach that constructs a robust self-organizing overlay by exploiting the heterogeneous characteristics of the nodes and interlinking the most reliable ones to form a stable unstructured overlay. The network structure can recover from random state perturbations in finite time and tolerates substantial message loss. Our approach is inspired by biological and sociological self-organizing mechanisms.
With Gendering Marteloskope we present a development process: videographic material was created in marteloscopes, forest sites that place trees, tablets, and people in dialogue with one another. The videography and the on-site experiences are reflected upon with approaches from Gender in Science and Technology Studies and, by means of digitally supported collaborative didactics, brought together into Open Science modules via interactive web documentaries.
In the age of data digitalization, important applications of optics- and photonics-based sensors and technology lie in the fields of biometrics and image processing. Protecting user data in a safe and secure way is an essential task in this area. However, traditional cryptographic protocols rely heavily on computer-aided computation. Secure protocols that rely only on human interaction are usually simpler to understand, and in many scenarios their development also matters for ease of implementation and deployment. Visual cryptography (VC) is an encryption technique for images (or text) in which decryption is done by the human visual system. In this technique, an image is encrypted into a number of pieces (known as shares). When the printed shares are physically superimposed, the image can be decrypted with human vision. Modern digital watermarking technologies can be combined with VC for image copyright protection, where the shares can be watermarks (small identification marks) embedded in the image. Similarly, VC can be used to improve the security of biometric authentication. This paper presents the design and implementation of a practical laboratory experiment based on the concept of VC for a course in media engineering. Specifically, our contribution deals with the integration of VC into different schemes for applications such as digital watermarking and biometric authentication in the field of optics and photonics. We describe the theoretical concepts and propose our infrastructure for the experiment. Finally, we evaluate the learning outcome of the experiment as performed by the students. © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
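The basic VC mechanism described above can be shown with the standard 2-out-of-2 scheme for a binary image: each pixel is expanded into subpixels, each share alone is random noise, and physically stacking the printed shares (pixelwise OR) reveals the image. This is the textbook construction, not necessarily the exact variant taught in the course.

```python
# Minimal 2-out-of-2 visual cryptography sketch for a binary image.
# Each pixel expands to a 2-subpixel pattern; stacking = pixelwise OR.
import random

PATTERNS = [(0, 1), (1, 0)]  # 1 = black subpixel, 0 = transparent

def make_shares(image, rng=random):
    """image: list of rows of 0 (white) / 1 (black) pixels."""
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.extend(p)
            # white pixel: same pattern on both shares (stack = half black);
            # black pixel: complementary pattern (stack = fully black)
            r2.extend(p if pixel == 0 else tuple(1 - s for s in p))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Superimposing printed transparencies corresponds to pixelwise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]
```

After stacking, an original black pixel appears as two black subpixels, a white pixel as exactly one black subpixel, so the human eye perceives the contrast without any computation.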
The economic dispatch (ED) problem is a large-scale optimization problem in electric power grids. Its goal is to find a power output combination of all generator nodes that meets the customers' demand at minimum operating cost. In recent years, distributed protocols have been proposed to replace the traditional centralized ED calculation in modern smart-grid infrastructures, the most realistic being the one proposed by Binetti et al. (2014). However, we show that this protocol leaks private information of the generator nodes. We then propose a privacy-preserving distributed protocol that solves the ED problem. We analyze the security of our protocol and give experimental results from a prototype implementation to show the feasibility of the solution.
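A common building block behind such privacy-preserving protocols is additive secret sharing, which lets generator nodes compute an aggregate (here, their total power output) without any party learning an individual value. The sketch below shows that building block only; the paper's actual protocol for the full ED problem is more involved, and the modulus and values are arbitrary choices for the example.

```python
# Hedged sketch of additive secret sharing over a modulus:
# each node splits its private value into random shares, the shares are
# summed column-wise by different parties, and only the total is revealed.
import random

MOD = 2**61 - 1  # large modulus, arbitrary for this sketch

def share(value, n_parties, rng=random):
    """Split `value` into n_parties random shares that sum to it mod MOD."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def aggregate(all_shares):
    """Each party sums one share from every node; combining the partial
    sums yields the total without exposing any single input."""
    partial = [sum(column) % MOD for column in zip(*all_shares)]
    return sum(partial) % MOD
```

Any individual share is uniformly random, so a single party observing it learns nothing about the node's actual power output.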
Interaction with, and capturing information from, the surroundings is dominated by vision and hearing. Haptics, on the other hand, widens the bandwidth and can also substitute for senses (sense switching) for the impaired. Haptic technologies are often limited to point-wise actuation. Here, we show that actuation in two-dimensional matrices instead creates a richer input. We describe the construction of a full-body garment for haptic communication with a distributed actuating network. The garment is divided into attachable and detachable panels, or add-ons, each of which can carry a two-dimensional matrix of actuating haptic elements. Each panel adds to the enhanced sensory capability of the human-garment system, so that together a 720° system is formed. The spatial separation of the panels across different body locations supports the semantic and theme-wise separation of conversations conveyed by haptics. It also achieves directional faithfulness, i.e. maintaining directional information about a distal stimulus in the haptic input.
Do you belong to the "generation upload"? Do you upload your private pictures to Flickr and post videos on YouTube? Do you download MPEG files to your handheld, or do you keep installing new, really funny apps on your smartphone? Do you click together your friends on Facebook, MySpace, or StudiVZ so you can chat and blog around the clock? Or do you rather tweet, and already have followers for your tweets? Do you "gruscheln" people whose photo you like and block the contact with a mouse click if he or she turns out not to be so nice after all? Do you get software and films from your peers via BitTorrent trackers like Pirate Bay? Do you find "flash mobs" funny, but "cyber mobs" less so? Or are you the rougher type who hacks other people's computers, spams, and plants "Google bombs"? Or are you wondering right now what on earth I am talking about? Welcome to the "brave new world – of media".
The question of the structure and function of "higher education institutions" cannot sensibly be considered in isolation, without a look at schools. Universities are part of the overall school system and embedded in a (for now still) highly differentiated and diverse German "educational landscape" that has crystallized over centuries. Tradition and evolutionary genesis are one constant of educational institutions; permanent change and constant pressure to reform are another. It seems that schools and universities must be experimented on again and again, even though the possible spectrum of attitudes and methods, at least as far as learning and teaching concepts are concerned, has been known since antiquity.
This text is therefore divided into three sections:
• A brief look back derives the central concepts.
• An analysis of the current state, taking into account the reforms implemented since 1998 under the name "Bologna" (harmonization of European degree programmes, conversion of degree programmes to new qualifications (Bachelor, Master), and much more), reveals current undesirable developments and names causes and protagonists.
• A concluding look ahead shows what schools and universities could become (again) if teachers and students grow bolder.