We consider a local group of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and holds contact information for nodes in other groups. The nodes interact in synchronous rounds to exchange updated time-series data values using the random call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation results. At each time step, an agent-based sensor can update its input data value and send it to the group head node, which forwards the updated value to all members of the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
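The synchronous random-call exchange described above can be sketched as follows. This is a minimal illustration of push-pull gossip averaging, not the paper's implementation; the function name, round count, and flat (ungrouped) topology are assumptions.

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Push-pull gossip sketch: in each synchronous round, every node
    calls a uniformly random peer and both replace their values with
    the pair mean. The global sum (and hence the mean) is preserved."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)          # random call target
            mean = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = mean      # both sides adopt the pair mean
    return vals

# Every node's estimate converges toward the global mean (here 25.0).
estimates = gossip_average([10.0, 20.0, 30.0, 40.0])
```

Because each pairwise exchange conserves the sum of the two values, the network mean is invariant while the spread of local estimates shrinks each round.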
The interaction between agents in multiagent-based control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time-series data over the peer-to-peer network in order to compute an aggregation function and solve a problem cooperatively. We investigate the aggregation process of averaging time-series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether data is new and map data values, according to their sizes, into a histogram. This map message consists of the subintervals and of vectors for estimating nodes joining and leaving a subinterval. At each time step, the nodes communicate in synchronous rounds to exchange map messages until the network converges to a common map message. A node then calculates the average of the time-series data produced by all nodes in the network using the histogram algorithm. The relative error between the output of averaging the time-series data and the ground-truth average in the network decreases as the size of the network increases. Our simulations show that the approximate-histogram method provides a reasonable approximation for time-series data.
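As a minimal sketch of how a mean can be estimated from such a histogram once the network has converged on a common map message: values are binned into subintervals and the average is reconstructed from bin midpoints. The bin boundaries, bin count, and the `histogram_mean` helper are illustrative assumptions, not the algorithm of Cichon et al.

```python
def histogram_mean(values, lo, hi, bins=20):
    """Map each value into a subinterval of [lo, hi) and approximate
    the mean from bin counts and bin midpoints."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        k = min(int((v - lo) / width), bins - 1)  # clamp top edge
        counts[k] += 1
    mids = [lo + (k + 0.5) * width for k in range(bins)]
    return sum(c * m for c, m in zip(counts, mids)) / sum(counts)

vals = [3.2, 7.5, 1.1, 9.9, 4.4, 6.3]
approx = histogram_mean(vals, lo=0.0, hi=10.0)
exact = sum(vals) / len(vals)
rel_err = abs(approx - exact) / exact
```

The maximum error per value is half a bin width, so the relative error shrinks as the histogram resolution grows.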
Computing Aggregates on Autonomous, Self-organizing Multi-Agent System: Application "Smart Grid"
(2017)
Decentralized data aggregation plays an important role in estimating the state of the smart grid, allowing the determination of meaningful system-wide measures (such as current power generation, consumption, etc.) to balance power in the grid environment. Data aggregation is practicable only if the aggregation is performed effectively; however, many existing approaches are lacking in terms of fault tolerance. We present an approach to constructing a robust self-organizing overlay by exploiting the heterogeneous characteristics of the nodes and interlinking the most reliable nodes to form a stable unstructured overlay. The network structure can recover from random state perturbations in finite time and tolerates substantial message loss. Our approach is inspired by biological and sociological self-organizing mechanisms.
We propose secure multi-party computation techniques for the distributed computation of the average using a privacy-preserving extension of gossip algorithms. While there has recently been research mainly on gossip algorithms (GA) for data aggregation itself, to the best of our knowledge this line of research does not take the privacy of the entities involved into consideration. More concretely, our objective is not to reveal a node's private input value to any other node in the network, while still computing the average in a fully decentralized fashion. Not revealing, in our setting, means that an attacker gains only a minor advantage when guessing a node's private input value. We precisely quantify an attacker's advantage when guessing, as a measure of the level of data-privacy leakage of a node's contribution. Our results show that by perturbing the input value of each participating node with pseudo-random noise with appropriate statistical properties, (i) only a minor and configurable leakage of private information occurs, while at the same time (ii) a good average approximation is obtained at each node. Our approach can be applied to a decentralized prosumer market, in which participants act as energy consumers or producers or both, referred to as prosumers.
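The core noise-perturbation idea can be sketched in a few lines: each node adds zero-mean noise to its private input before participating, so individual values are masked while the noise terms cancel in the average. The Gaussian noise model, the `private_average` helper, and the identical inputs are illustrative assumptions; the paper's actual gossip exchange and noise distribution may differ.

```python
import random

def private_average(values, sigma=1.0, seed=42):
    """Each node perturbs its private input with zero-mean Gaussian
    noise (std sigma) before contributing; the mean of the perturbed
    values approximates the true mean because the noise averages out."""
    rng = random.Random(seed)
    perturbed = [v + rng.gauss(0.0, sigma) for v in values]
    return sum(perturbed) / len(perturbed)

true_avg = 5.0
inputs = [true_avg] * 1000   # hypothetical inputs, chosen for clarity
approx = private_average(inputs, sigma=2.0)
```

With n nodes the standard deviation of the averaging error is sigma / sqrt(n), so larger networks permit stronger per-node masking for the same accuracy: the configurable trade-off between leakage and approximation quality mentioned above.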
At Offenburg University, the start of studies is supported by preparatory courses, so-called bridge courses. We present preliminary results on the use of smartphones and tablets in the physics bridge course, in which students receive support for independent practice through an app. Revising the course and introducing the app reduced participant attrition. The evaluation results confirm a high acceptance of these innovations among the students. Initial analyses of entry and exit tests indicate that the bridge course levels out the prior knowledge of first-year students, since participants with less prior knowledge tend to achieve greater learning progress. At the same time, different difficulty levels and self-regulated practice phases at an individual pace allow the needs of stronger participants to be addressed appropriately.
Nowadays, many applications, companies, and parts of society are expected to be always available online. However, according to [Times, Oct 31, 2011], 73% of the world population do not use the internet and thus are not "online" at all. The most common reasons for not being "online" are expensive personal computer equipment and high costs for data connections, especially in developing countries that comprise most of the world's population (e.g., parts of Africa, Asia, and Central and South America). However, it seems that these countries are leap-frogging the "PC and landline" age and moving directly to the "mobile" age. Decreasing prices for smart phones with internet connectivity and PC-like operating systems make it more affordable for these parts of the world population to join the "always-online" community. Storing learning content in a way accessible to everyone, including mobile and smart phone users, therefore seems beneficial. This way, learning content can be accessed by personal computers as well as by mobile and smart phones, and is thus accessible to a wide range of devices and users. A recent trend in Internet technologies is the move to "the cloud". This paper discusses the changes, challenges, and risks of storing learning content in the "cloud". The experiences were gathered while evaluating the changes necessary to make our solutions and systems "cloud-ready".
Signal detection and bandwidth estimation, also known as channel segmentation or information channel estimation, is a perennial topic in communication systems. In the field of radio monitoring this issue is extremely challenging, since unforeseeable effects like fading occur unpredictably. In addition, most radio monitoring devices scan a wide frequency range of several hundred MHz and have to detect a multitude of different signals varying in signal power, bandwidth, and spectral shape. Since narrowband sensing techniques cannot be applied directly, most radio monitoring devices use Nyquist wideband sensing to cover the huge frequency range. In practice, sensing is normally conducted by an FFT sweep spectrum analyzer that delivers power spectral density (PSD) values to the radio monitoring system. Channel segmentation, based on these PSD values, is the initial step of a comprehensive signal analysis in a radio monitoring system. In this paper, a novel approach for channel segmentation is presented that is based on a quantization and a histogram evaluation of the measured PSD. It will be shown that only the combination of both evaluations leads to a successful automatic channel segmentation. The performance of the proposed algorithm is demonstrated in a real radio monitoring scenario.
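The interplay of quantization and histogram evaluation can be illustrated with a toy sketch: quantizing the PSD and taking the modal level as a noise-floor estimate, then segmenting contiguous bins that exceed the floor by a margin. The quantization step, margin, synthetic PSD values, and the `segment_channels` helper are assumptions for illustration, not the paper's algorithm or parameters.

```python
from collections import Counter

def segment_channels(psd_db, step=2.0, margin=6.0):
    """Quantize PSD values (dB) to a grid of size `step`, take the most
    frequent quantized level as the noise-floor estimate, and return
    (start, end) index ranges of contiguous bins above floor + margin."""
    quantized = [round(p / step) * step for p in psd_db]
    floor = Counter(quantized).most_common(1)[0][0]  # modal level
    threshold = floor + margin
    channels, start = [], None
    for i, p in enumerate(psd_db):
        if p > threshold and start is None:
            start = i                                # channel begins
        elif p <= threshold and start is not None:
            channels.append((start, i - 1))          # channel ends
            start = None
    if start is not None:
        channels.append((start, len(psd_db) - 1))
    return channels

# Synthetic PSD: a -100 dBm noise floor with two embedded signals.
psd = ([-100.0] * 10 + [-80.0] * 5 + [-100.0] * 10
       + [-75.0] * 3 + [-100.0] * 5)
segments = segment_channels(psd)
```

The histogram step supplies the noise-floor estimate that a fixed global threshold could not provide when signal power and floor vary across the scanned range.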
Smoothie: a solution for device and content independent applications including 3D imaging as content
(2014)
Today's network landscape contains many different network technologies, a wide range of end devices with widely varying capabilities and power, and an immense quantity of information represented in different data formats. Research on 3D imaging, virtual reality, and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. A lot of effort is being made to establish open, scalable, and seamless integration of various technologies and content presentation for different devices, including mobile ones, considering the individual situation of the end user. To date, research is ongoing in different parts of the world, but the task is not yet completed. The goal of this research work is to find a way to solve the above-stated problems by investigating system architectures that provide unconstrained, continuous, and personalized access to content and interactive applications everywhere and at any time with different devices. As a solution to the problem considered, a new architecture named "Smoothie" is proposed.
Improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays it is possible to overlay synthetic information on real images, or even to play 3D online games on smart phones and other mobile devices. Hence the use of 3D data for business and especially for education purposes is ubiquitous. Because mobile phones are always at hand and always ready to use, they are considered the communication devices with the greatest potential. The total number of mobile phone users is increasing all over the world every day, which makes the mobile phone the most suitable device for reaching a huge number of end clients, whether for education or for business purposes. There are different standards, protocols, and specifications for establishing communication among different devices, but no initiative has been taken so far to ensure that the data sent through this communication process will be understood and usable by the destination device. Since not all devices can handle all kinds of 3D data formats, and it is also not realistic to maintain a different version of the same data for every destination device, a prevalent solution is necessary. The architecture proposed in this paper provides device- and purpose-independent 3D data visibility, anytime and anywhere, to the right person in a suitable format. No solution is without limitations: the architecture is implemented in a prototype for experimental validation, which also shows the difference between theory and practice.