E-Tutoren-Ausbildung: Lernerfahrungen reflektieren – Lehrhandlungskompetenzen dialogisch aufbauen
(2014)
Data is ever increasing in the computing world. Due to the advancement of cloud technology, the volume of data and its capacity have grown within a short period of time and will keep increasing. Providing transparency, privacy, and security to cloud users becomes more and more challenging as the volume of data and the use of cloud services grow. We propose a new approach to address this challenge by recording user events in the cloud ecosystem into log files and applying the MAR principle: 1) Monitoring, 2) Analyzing, and 3) Reporting.
Logging information is precious because it records the execution of a system; it is produced by millions of events, from simple application logins to random system errors. Most security-related problems in the cloud ecosystem, such as intruder attacks, data loss, and denial of service, could be avoided if the Cloud Service Provider (CSP) or Cloud User (CU) analyzed the logging information. In this paper we introduce several challenges, namely the place of monitoring, the security, and the ownership of the logging information between the CSP and the CU.
We also propose a logging architecture for analyzing the behaviour of the cloud ecosystem in order to avoid data breaches and other security-related issues in the CSP space. We believe that our proposed architecture can provide maximum trust between the CU and the CSP.
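The Monitoring–Analyzing–Reporting cycle described in the abstract can be sketched as a minimal pipeline. This is an illustration only: the event fields, the failed-login heuristic, and the threshold are assumptions, not details from the paper.

```python
from collections import Counter

# Monitoring: user events from the cloud ecosystem, collected into a log.
log = [
    {"user": "alice",   "event": "login", "ok": True},
    {"user": "mallory", "event": "login", "ok": False},
    {"user": "mallory", "event": "login", "ok": False},
    {"user": "mallory", "event": "login", "ok": False},
    {"user": "bob",     "event": "read",  "ok": True},
]

def analyze(entries, max_failures=2):
    """Analyzing: count failed logins per user and flag suspicious ones."""
    failures = Counter(e["user"] for e in entries
                       if e["event"] == "login" and not e["ok"])
    return {user: n for user, n in failures.items() if n > max_failures}

def report(suspicious):
    """Reporting: produce human-readable alert lines for the CSP or CU."""
    return [f"ALERT: {user} had {n} failed logins"
            for user, n in suspicious.items()]

print(report(analyze(log)))  # -> ["ALERT: mallory had 3 failed logins"]
```

A real deployment would of course monitor far richer event types than logins; the point is only how the three MAR stages hand data to one another.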
Energy management in distribution grids is one of the key challenges that must be overcome to increase the share of fluctuating renewable energies. Current control systems for energy management mainly exhibit centralized or decentralized-hierarchical control structures. Very few systems manifest a fully decentralized, multiagent-based control structure. Multiagent-based control systems promise to be an advantageous approach for the future distributed energy supply system because no central control entity is necessary, which eases parameterization in case of grid topology changes, and because the agents are more robust against failures and changes of control topologies. Research is necessary to prove these benefits. In this study, we introduce a design of a multiagent-based voltage control system for low-voltage grids. In detail, we introduce cooperative decision-making processes and software solutions that allow the agents to perceive and control their environment, agent discovery and localization in different types of communication networks, agent-to-agent communication, and the integration of the multiagent system into existing grid-control infrastructures. Furthermore, the study proposes how different existing technologies can be combined into an applicable multiagent-based voltage control system: the Java/OSGi-based OpenMUC framework allows generic field-device interaction; peer-to-peer discovery and session-establishment functionalities are combined with the agent communication defined by the Foundation for Intelligent Physical Agents (FIPA). Ripple control-signal technology is applied as a fallback communication channel between the agents and a central grid-control center.
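The cooperative decision-making idea can be sketched in a few lines. This is a toy illustration, not the paper's design: the real system is Java/OSGi-based with FIPA messaging, and the voltage band limit and control action used here are assumed values.

```python
# Toy sketch: each agent perceives its local per-unit voltage; the agents
# pool their band violations and jointly pick the worst node to act on,
# without any central control entity.

V_MAX = 1.05  # assumed upper voltage band limit in per unit

class NodeAgent:
    def __init__(self, name, voltage_pu):
        self.name = name
        self.voltage_pu = voltage_pu  # local measurement (perception)

    def violation(self):
        """How far the local voltage exceeds the band, in per unit."""
        return max(0.0, self.voltage_pu - V_MAX)

def cooperative_decision(agents):
    """Agents share their violations; the worst node is selected and the
    required reduction is agreed on cooperatively."""
    worst = max(agents, key=lambda a: a.violation())
    if worst.violation() == 0.0:
        return None  # grid within limits, no action needed
    return {"act_on": worst.name, "reduce_pu": round(worst.violation(), 3)}

agents = [NodeAgent("n1", 1.02), NodeAgent("n2", 1.08), NodeAgent("n3", 1.04)]
print(cooperative_decision(agents))  # -> {'act_on': 'n2', 'reduce_pu': 0.03}
```

In the actual system the exchange of violations would travel over FIPA-style agent messages, with ripple control as the fallback channel.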
Smoothie: a solution for device and content independent applications including 3D imaging as content
(2014)
The network landscape of recent times comprises many different network technologies, a wide range of end devices with a large spread of capabilities and power, and an immense quantity of information represented in different data formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UIs) for mobile devices and will further increase their diversity and variety. A lot of effort is being made to establish open, scalable and seamless integration of various technologies and content presentation for different devices, including mobile ones, considering the individual situation of the end user. Research is ongoing in different parts of the world, but the task is not yet completed. The goal of this research work is to solve the problems stated above by investigating system architectures that provide unconstrained, continuous and personalized access to content and interactive applications everywhere and at any time with different devices. As a solution to the problem considered, a new architecture named “Smoothie” is proposed.
Nowadays, many applications, companies and parts of society are expected to be always available online. However, according to [Times, Oct 31, 2011], 73% of the world population do not use the internet and thus are not “online” at all. The most common reasons for not being “online” are expensive personal computer equipment and high costs for data connections, especially in developing countries, which comprise most of the world’s population (e.g. parts of Africa, Asia, Central and South America). However, it seems that these countries are leap-frogging the “PC and landline” age and moving directly to the “mobile” age. Decreasing prices for smartphones with internet connectivity and PC-like operating systems make it more affordable for these parts of the world population to join the “always-online” community. Storing learning content in a way that is accessible to everyone, including mobile and smartphone users, therefore seems beneficial. This way, learning content can be accessed by personal computers as well as by mobile and smartphones and is thus available to a wide range of devices and users. A new trend in Internet technologies is the move to “the cloud”. This paper discusses the changes, challenges and risks of storing learning content in the “cloud”. The experiences were gathered during the evaluation of the changes necessary to make our solutions and systems “cloud-ready”.
Improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays, it is possible to overlay synthetic information on real images, or even to play 3D online games on smartphones and other mobile devices. Hence, the use of 3D data for business and especially for education purposes is becoming ubiquitous. Because mobile phones are always at hand and always ready to use, they are considered the most promising communication devices. The number of mobile phone users is increasing all over the world every day, which makes mobile phones the most suitable device to reach a huge number of end clients, either for education or for business purposes. There are different standards, protocols and specifications for establishing communication among different communication devices, but no initiative has been taken so far to ensure that the data sent through this communication process can be understood and used by the destination device. Since not all devices are able to handle every kind of 3D data format, and it is also not realistic to maintain different versions of the same data for each destination device, a general solution is necessary. The architecture proposed in this paper provides device- and purpose-independent visibility of 3D data, any time and anywhere, to the right person in a suitable format. No solution is without limitations. The architecture is implemented in a prototype for an experimental validation, which also reveals the difference between theory and practice.
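The core of such device-independent delivery is a negotiation step: the server keeps one master asset and serves each device the richest representation it can handle. The sketch below is purely illustrative; the format names and the capability-set interface are assumptions, not the paper's API.

```python
# Illustrative content negotiation for 3D assets: preference order runs
# from full 3D formats down to a pre-rendered 2D image as universal fallback.

PREFERRED = ["gltf", "obj", "png_render"]  # assumed format names, richest first

def select_format(device_formats):
    """Pick the richest representation the requesting device supports,
    falling back to a pre-rendered 2D image if it supports no 3D format."""
    for fmt in PREFERRED:
        if fmt in device_formats:
            return fmt
    return "png_render"  # universal 2D fallback

print(select_format({"gltf", "obj"}))  # capable smartphone -> gltf
print(select_format(set()))            # feature phone -> png_render
```

A production system would transcode the master asset on demand rather than store every variant, which is exactly the duplication the abstract argues is unrealistic.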
Signal detection and bandwidth estimation, also known as channel segmentation or information channel estimation, is a perpetual topic in communication systems. In the field of radio monitoring this issue is extremely challenging, since unforeseeable effects like fading occur unpredictably. In addition, most radio monitoring devices scan a wide frequency range of several hundred MHz and have to detect a multitude of different signals varying in signal power, bandwidth and spectral shape. Since narrowband sensing techniques cannot be applied directly, most radio monitoring devices use Nyquist wideband sensing to cover the huge frequency range. In practice, sensing is normally conducted by an FFT sweep spectrum analyzer that delivers the power spectral density (PSD) values to the radio monitoring system. Based on these PSD values, channel segmentation is the initial step of a comprehensive signal analysis in a radio monitoring system. In this paper, a novel approach for channel segmentation is presented that is based on a quantization and a histogram evaluation of the measured PSD. It is shown that only the combination of both evaluations leads to a successful automatic channel segmentation. The performance of the proposed algorithm is demonstrated in a real radio monitoring scenario.
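The combination of quantization and histogram evaluation can be illustrated on synthetic PSD data: quantizing the PSD to coarse levels makes the noise floor the dominant histogram bin, and contiguous bins above that floor form channel candidates. The step size, margin, and noise-floor-as-mode heuristic below are assumed details for illustration; the paper's actual algorithm is more elaborate.

```python
from collections import Counter

def segment_channels(psd_db, step_db=2.0, margin_db=6.0):
    """Quantize the PSD, take the histogram mode as the noise floor, and
    return (start, stop) bin indices of contiguous regions lying above
    the floor plus a margin."""
    q = [round(v / step_db) * step_db for v in psd_db]   # quantization step
    noise_floor = Counter(q).most_common(1)[0][0]        # histogram mode
    occupied = [v > noise_floor + margin_db for v in q]  # occupancy mask
    segments, start = [], None
    for i, occ in enumerate(occupied):
        if occ and start is None:
            start = i                                    # segment begins
        elif not occ and start is not None:
            segments.append((start, i - 1))              # segment ends
            start = None
    if start is not None:
        segments.append((start, len(occupied) - 1))
    return segments

# Synthetic PSD in dB: flat noise floor at -100 with two signals on top.
psd = [-100.0] * 20
psd[5:8] = [-80.0, -78.0, -80.0]
psd[14:16] = [-85.0, -85.0]
print(segment_channels(psd))  # -> [(5, 7), (14, 15)]
```

Quantization alone would still leave fading dips ambiguous, and a histogram alone gives no frequency localization; only reading the occupancy mask derived from both yields the segment boundaries, which mirrors the paper's claim that the two evaluations must be combined.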