Contains the articles:
"Smoothie: a solution for device and content independent applications including 3D imaging as content" by Razia Sultana and Andreas Christ, pp. 13-18
"Future of Logging in the Crisis of Cloud Security" by Sai Manoj Marepalli, Razia Sultana and Andreas Christ, pp. 60-64
Recent developments in information and communication technology, along with advanced displaying techniques and high computational performance, open up new visualisation methods to both scientists and lecturers. Thus, simulations of complex processes [1] can be computed and visualised as image sequences. The particular idea in our approach is to outsource computationally intensive calculations to servers, which then send the results back to mobile users. In order to improve the interpretation of the visualised results, users can view them in a 3D perspective or stereoscopically, provided the technical requirements are met. Today's technology even permits viewing these visualisations on a mobile phone. An example of such a computationally intensive calculation, originating from the theory of relativity, is depicted in Figure 4.1-1.
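The offloading pattern described above can be illustrated with a minimal client-side sketch. The endpoint URL, payload fields and response format below are purely illustrative assumptions, not part of the system described in the abstract.

```python
# Hypothetical sketch of the offloading pattern: the mobile client sends the
# simulation parameters to a server and receives the rendered result back.
# Endpoint and payload fields are placeholders, not the project's actual API.
import requests

SERVER_URL = "https://example.org/render"  # placeholder server address

def request_visualisation(params: dict) -> bytes:
    """Send a computationally intensive job to the server and return its result."""
    response = requests.post(SERVER_URL, json=params, timeout=60)
    response.raise_for_status()
    return response.content  # e.g. an encoded image sequence for display on the phone

frames = request_visualisation({"scenario": "relativistic-flight", "frames": 120})
```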
We propose secure multi-party computation techniques for the distributed computation of the average using a privacy-preserving extension of gossip algorithms. While recent research has focused mainly on gossip algorithms (GA) for data aggregation itself, to the best of our knowledge this line of research does not take the privacy of the entities involved into consideration. More concretely, our objective is not to reveal a node's private input value to any other node in the network, while still computing the average in a fully decentralized fashion. Not revealing, in our setting, means that an attacker gains only a minor advantage when guessing a node's private input value. We precisely quantify an attacker's guessing advantage as a measure of the data privacy leakage of a node's contribution. Our results show that by perturbing the input values of each participating node with pseudo-random noise with appropriate statistical properties, (i) only a minor and configurable leakage of private information is revealed, while at the same time (ii) a good average approximation is provided at each node. Our approach can be applied to a decentralized prosumer market, in which participants act as energy consumers, producers, or both, referred to as prosumers.
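A minimal sketch of the idea, assuming zero-mean Gaussian perturbation and simple randomized pairwise gossip; the topology, noise scale and number of rounds are illustrative assumptions, not the authors' exact protocol.

```python
# Sketch: each node perturbs its private input with zero-mean pseudo-random
# noise, then the nodes run pairwise gossip. The network converges towards the
# average of the noisy inputs, which is close to the true average in expectation.
import random

def gossip_average(private_values, noise_scale=1.0, rounds=200, seed=42):
    rng = random.Random(seed)
    n = len(private_values)
    # (i) perturb: only the noisy value ever leaves a node
    states = [v + rng.gauss(0.0, noise_scale) for v in private_values]
    # (ii) gossip: two random nodes replace their states with their mean
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        mean = (states[i] + states[j]) / 2.0
        states[i] = states[j] = mean
    return states

inputs = [3.0, 7.5, 1.2, 9.9, 4.4]
print(gossip_average(inputs))   # each entry approximates sum(inputs)/len(inputs)
```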
The iSign project started in 2000 as a web-based laboratory setting for students of electrical engineering. In the meantime it has broadened into a heterogeneous learning environment offering learning material, adaptive user settings and access to a simulation tool. All these offerings can be accessed via web and wireless connections by different clients, such as PCs, PDAs and mobile phones. User-adaptive systems offer a unique and personalised environment for every learner and are therefore a very important aspect of modern e-learning systems. The iSign project aims to personalise the content structure based on the learner's behaviour, content patterns, policies, and system environment. The second aspect of the recent research and development within this project is the generation of suitable content and presentation for different clients. This generation additionally takes user preferences into account in order to obtain the desired presentation for a given device. New, valuable features have been added to the mobile application, empowering the user not only to control the simulation process with a mobile device but also to input data, view the simulation's output and evaluate the results. Experience with students has helped to improve the functionality and look-and-feel of the iSign system. Our goal is to provide unconstrained, continuous and personalised access to the laboratory settings and learning material anywhere, at any time, and with different devices.
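A hypothetical illustration of device- and preference-dependent content selection; the device classes, preference keys and selection rule are assumptions made for this sketch and are not taken from the iSign implementation.

```python
# Sketch: pick a presentation profile from the device class, then let user
# preferences override the device defaults where applicable.
def select_presentation(device: str, prefs: dict) -> dict:
    profiles = {
        "pc":    {"layout": "full",    "media": "interactive-3d"},
        "pda":   {"layout": "compact", "media": "images"},
        "phone": {"layout": "minimal", "media": "text"},
    }
    presentation = profiles.get(device, profiles["phone"]).copy()
    presentation.update({k: v for k, v in prefs.items() if k in presentation})
    return presentation

print(select_presentation("pda", {"media": "text"}))
```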
The interaction between agents in multiagent-based control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time series data over the peer-to-peer network in order to calculate an aggregation function and thereby solve a problem cooperatively. We investigate the process of averaging time series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether data is new and map data values according to their sizes into a histogram. This map message consists of the subintervals and of vectors for estimating nodes joining and leaving a subinterval. At each time step, the nodes communicate with each other in synchronous rounds to exchange map messages until the network converges to a common map message. Each node then calculates the average value of the time series data produced by all nodes in the network using the histogram algorithm. The relative error between the computed average of the time series data and the ground-truth average in the network decreases as the size of the network increases. We perform simulations which show that the approximate histogram method provides a reasonable approximation of time series data.
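A simplified stand-in for the histogram-based estimation described above: values are mapped into subintervals and the average is estimated from bin midpoints and counts. The bin layout and the centralised merge step are assumptions for illustration; the cited grouping algorithm merges map messages by gossip rather than at a single point.

```python
# Sketch: approximate the average of the nodes' values from a histogram built
# over fixed subintervals of [lo, hi).
def histogram_average(values, lo=0.0, hi=10.0, bins=10):
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)   # clamp the upper edge
        counts[idx] += 1
    midpoints = [lo + (i + 0.5) * width for i in range(bins)]
    return sum(c * m for c, m in zip(counts, midpoints)) / sum(counts)

readings = [1.2, 3.4, 3.6, 7.9, 8.1]
print(histogram_average(readings))   # approximates sum(readings)/len(readings)
```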
Computing Aggregates on Autonomous, Self-organizing Multi-Agent System: Application "Smart Grid"
(2017)
Decentralized data aggregation plays an important role in estimating the state of the smart grid, allowing the determination of meaningful system-wide measures (such as the current power generation, consumption, etc.) to balance the power in the grid environment. Data aggregation is practicable only if it is performed effectively; however, many existing approaches are lacking in terms of fault tolerance. We present an approach to construct a robust self-organizing overlay by exploiting the heterogeneous characteristics of the nodes and interlinking the most reliable nodes to form a stable unstructured overlay. The network structure can recover from random state perturbations in finite time and tolerates substantial message loss. Our approach is inspired by biological and sociological self-organizing mechanisms.
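A minimal sketch of the overlay idea, assuming each node carries a scalar reliability score and links to the most reliable peers it has sampled; the scores, candidate sampling and degree bound are illustrative assumptions, not the paper's construction.

```python
# Sketch: every node samples a few candidate peers and keeps the most reliable
# ones as neighbours, yielding an unstructured overlay biased towards reliable nodes.
import random

def build_overlay(reliability, degree=3, candidates=5, seed=1):
    rng = random.Random(seed)
    nodes = list(reliability)
    links = {}
    for node in nodes:
        sample = rng.sample([n for n in nodes if n != node],
                            min(candidates, len(nodes) - 1))
        # keep the `degree` most reliable candidates as neighbours
        links[node] = sorted(sample, key=lambda n: reliability[n],
                             reverse=True)[:degree]
    return links

scores = {f"n{i}": random.random() for i in range(8)}   # toy reliability scores
print(build_overlay(scores))
```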