004 Informatik
Computer science courses in the Fakultät Medien und Informationswesen (abbreviated MI) mostly convey complex content, which is then deepened in accompanying lab courses, hands-on and with concrete examples. However, for an instructive lab and the independent development of correct solutions, students need some basic knowledge that they must bring along from the respective theory course. To additionally give students the opportunity to review the course material independently of place and time, and to work through didactically prepared exercises virtually, we designed and created web-based e-learning materials for the courses Software Engineering, Computer Networks, and Databases (http://mi-learning.mi.fh-offenburg.de). These materials allow learners to approach the subject matter in a self-directed way, at their own pace, and through different media. Such hybrid learning arrangements (blended learning) combine the advantages of different didactic methods and media.
Instant messaging systems allow users to interact in real time over the Internet. Hackers and criminals often use instant messenger programs for illicit purposes, and consequently the log files and any possible digital evidence from such programs are of forensic interest. This research project provides an accurate and reasonable description of where such evidence can be found and presents possible solutions to the issues involved.
Webserver-Log-Forensik
(2011)
In the course of the IT-forensic investigation of break-ins into one of the largest German Internet portals, a research project was started in the IT Security and Computer Forensics lab at Hochschule Offenburg that deals with the analysis of malware traces in log files. A program developed in the course of this research, the "Analyzer of Death", analyzes and interprets the traces that PHP-based backdoor programs leave behind in web server log files.
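As a minimal illustration of this kind of log analysis (not the actual "Analyzer of Death"; the patterns and sample log lines below are invented), a scan for backdoor traces in web server access logs might look like this:

```python
import re

# Hypothetical indicators that PHP web shells (e.g. c99, r57) tend to
# leave in access logs; illustrative only, not the tool's real rule set.
SUSPICIOUS = re.compile(
    r"(c99|r57|eval\(|base64_decode|cmd=|\.php\?[^ ]*(exec|passthru))",
    re.IGNORECASE,
)

# Minimal parser for Apache combined-log-format lines.
LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)"')

def find_backdoor_traces(lines):
    """Return (ip, timestamp, request) tuples for suspicious log lines."""
    hits = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m and SUSPICIOUS.search(m.group("req")):
            hits.append((m.group("ip"), m.group("ts"), m.group("req")))
    return hits

sample = [
    '203.0.113.7 - - [12/Mar/2011:13:55:36 +0100] "GET /index.php HTTP/1.1" 200 512',
    '198.51.100.9 - - [12/Mar/2011:13:56:01 +0100] "POST /upload/c99.php?cmd=ls HTTP/1.1" 200 2048',
]
print(find_backdoor_traces(sample))  # flags only the c99 web shell request
```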
Applied Information Technology opens Virtual Platform for the Legacy of Alexander von Humboldt
(2011)
The Humboldt Digital Library (HDL) is a project that aims to provide digital access to the legacy of Alexander von Humboldt. The HDL runs on an open-source library developed at Hochschule Offenburg and provides a virtual research environment in which researchers can work more effectively. This article presents the development made in the HDL to provide alternative ways of content dissemination through the OAI protocol. Through the implementation of the OAI-PMH data provider in the HDL, the library is accessible to many universities and research centers around the globe.
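To sketch what harvesting such a data provider involves, the following parses a (shortened, invented) OAI-PMH ListRecords response for Dublin Core titles; the record identifier and title are made up for illustration:

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# A minimal example of a ListRecords response in the oai_dc format,
# shaped like what an OAI-PMH data provider would return.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:hdl:avh-001</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Kosmos, Band 1</dc:title>
          <dc:creator>Alexander von Humboldt</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def list_titles(xml_text):
    """Extract (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.iter(OAI + "record"):
        ident = rec.find(f"{OAI}header/{OAI}identifier").text
        title = rec.find(f".//{DC}title").text
        out.append((ident, title))
    return out

print(list_titles(SAMPLE))  # [('oai:hdl:avh-001', 'Kosmos, Band 1')]
```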
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for users, it has not been designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting the resources of Python jobs, and containing Python processes, with a focus on deep learning applications running on GPU clusters.
The interaction between agents in multi-agent control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time series data over the peer-to-peer network in order to compute an aggregation function and thereby solve a problem cooperatively. We investigate the process of averaging the time series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether data is new and map data values according to their size into a histogram. This map message consists of the subintervals and of vectors for estimating the nodes joining and leaving each subinterval. At each time step, the nodes communicate with each other in synchronous rounds to exchange map messages until the network converges to a common map message. Each node then calculates the average of the time series data produced by all nodes in the network using the histogram algorithm. The relative error between the computed average of the time series data and the ground-truth average in the network decreases as the size of the network increases. Our simulations show that the approximate histogram method provides a reasonable approximation of time series data.
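A much-simplified stand-in for this kind of decentralized aggregation is plain pairwise gossip averaging, sketched below; the histogram and map-message machinery of the actual algorithm is omitted, and the measurement values are invented:

```python
import random

def gossip_average(values, rounds=50, seed=1):
    """Synchronous pairwise gossip: in each round every node averages its
    state with one randomly chosen peer. Because each pairwise exchange
    conserves the total sum, all states converge to the network mean
    without any central coordinator."""
    random.seed(seed)
    state = list(values)
    n = len(state)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)
            avg = (state[i] + state[j]) / 2.0
            state[i] = state[j] = avg
    return state

measurements = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]  # one reading per node
est = gossip_average(measurements)
true_mean = sum(measurements) / len(measurements)  # 18.0
print(max(abs(x - true_mean) for x in est))  # residual error after 50 rounds
```

After enough synchronous rounds, every node holds (approximately) the same value, mirroring the convergence to a common map message described above.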
The evolution of cellular networks from the first generation (1G) to the fourth generation (4G) was driven by the demand for user-centric downlink capacity, technically referred to as Mobile Broadband (MBB). With the fifth generation (5G), Machine Type Communication (MTC) has been added to the target use cases, and the upcoming generation of cellular networks is expected to support them. However, such support requires improvements to the existing technologies in terms of latency, reliability, energy efficiency, data rate, scalability, and capacity.
Originally, MTC was designed for low-bandwidth, high-latency applications such as environmental sensing or smart dustbins. Nowadays there is an additional demand for applications with low-latency requirements. Besides other well-known challenges for recent cellular networks, such as data rate, energy efficiency, and reliability, the latency is also not suitable for mission-critical applications such as real-time control of machines, autonomous driving, or the tactile Internet. Therefore, in the currently deployed cellular networks, there is a necessity to reduce the latency and increase the reliability offered by the networks to support use cases such as cooperative autonomous driving or factory automation, which are grouped under the denomination Ultra-Reliable Low-Latency Communication (URLLC).
This thesis is primarily concerned with the latency in the Universal Terrestrial Radio Access Network (UTRAN) of cellular networks. The overall work is divided into five parts. The first part presents the state of the art for cellular networks. The second part contains a detailed overview of URLLC use cases and the requirements that must be fulfilled by cellular networks to support them. The work in this thesis was done as part of a collaboration between the IRIMAS lab at Université de Haute-Alsace, France, and the Institute for Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany. The selected URLLC use cases are part of the research interests of both partner institutes. The third part presents a detailed study and evaluation of user- and control-plane latency mechanisms in the current generation of cellular networks. The evaluation and analysis of these latencies, performed with the open-source ns-3 simulator, were conducted by exploring a broad range of parameters including, among others, traffic models, channel access parameters, realistic propagation models, and a broad set of cellular network protocol stack parameters. These simulations were performed with low-power, low-cost, and wide-range devices, commonly called IoT devices, standardized for cellular networks. These devices use either LTE-M or Narrowband IoT (NB-IoT), technologies designed for connected things. They differ mainly in the provided bandwidth and in other characteristics such as coding scheme, device complexity, and so on.
The fourth part of this thesis presents a study, an implementation, and an evaluation of latency reduction techniques that target the different layers of the currently used Long Term Evolution (LTE) network protocol stack. These techniques, based on Transmission Time Interval (TTI) reduction and Semi-Persistent Scheduling (SPS), are implemented in the ns-3 simulator and evaluated through realistic simulations of a variety of low-latency use cases focused on industrial automation and vehicular networking. Since ns-3 does not support NB-IoT in its current release, an NB-IoT extension of the LTE module was developed to test the proposed latency reduction techniques in cellular networks. This makes it possible to explore deployment limitations and issues.
In the last part of this thesis, a flexible deployment framework for the proposed latency reduction techniques, called Hybrid Scheduling and Flexible TTI, is presented, implemented, and evaluated through realistic simulations. The simulation evaluation shows that the improved LTE network proposed and implemented in the simulator can support low-latency applications with low-cost, long-range, narrow-bandwidth devices. The work in this thesis points out potential improvement techniques and their deployment issues, and paves the way towards support for URLLC applications in upcoming cellular networks.
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is constantly growing. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack the intelligent data analysis needed for predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. The resulting method can deal with highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
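The isolation principle behind this approach can be illustrated with a toy re-implementation (not the model evaluated in the paper; the SMART-style feature values below are invented): anomalous points are separated from the rest by random axis-parallel splits after far fewer splits than normal points.

```python
import math
import random

def build_tree(points, depth, max_depth, rng):
    """Recursively partition points with random axis-parallel splits."""
    if depth >= max_depth or len(points) <= 1:
        return ("leaf", len(points))
    dim = rng.randrange(len(points[0]))
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    if lo == hi:
        return ("leaf", len(points))
    split = rng.uniform(lo, hi)
    left = [p for p in points if p[dim] < split]
    right = [p for p in points if p[dim] >= split]
    return ("node", dim, split,
            build_tree(left, depth + 1, max_depth, rng),
            build_tree(right, depth + 1, max_depth, rng))

def path_length(tree, p, depth=0):
    """Depth at which point p lands in a leaf of the isolation tree."""
    if tree[0] == "leaf":
        return depth
    _, dim, split, left, right = tree
    return path_length(left if p[dim] < split else right, p, depth + 1)

def avg_path_length(data, n_trees=100, seed=7):
    """Average isolation depth over an ensemble of random trees.
    Shorter average path -> easier to isolate -> more anomalous."""
    rng = random.Random(seed)
    max_depth = int(math.ceil(math.log2(len(data)))) + 2
    trees = [build_tree(data, 0, max_depth, rng) for _ in range(n_trees)]
    return [sum(path_length(t, p) for t in trees) / n_trees for p in data]

# Hypothetical SMART-style features (e.g. reallocated sectors, read errors):
healthy = [(float(i % 5), float(i % 3)) for i in range(40)]
failing = [(95.0, 80.0)]  # one drive with extreme attribute values
scores = avg_path_length(healthy + failing)
print(scores[-1] < min(scores[:-1]))  # the outlier isolates fastest
```

Since no failure labels are needed, this kind of anomaly scoring sidesteps the extreme class imbalance of disk failure data.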
In the work at hand, we argue that privacy and malleability of data are two highly desired aspects that are not easy to combine. On the one hand, we try to shape data to make them usable and editable in an intelligible way, namely without losing their initial information. On the other hand, we seek effective privacy for data, such that no external or unauthorized party can learn about their content. We thus face conflicting requirements by pursuing different goals; it is trivial to be malleable without being secure, and vice versa. We propose four "real-world" use cases, identified as scenarios where these two contradictory features are required, taking place in distinct environments. These backgrounds are, firstly, cloud security auditing, then the privacy of mobile network users, Industry 4.0 and, finally, the privacy of COVID-19 tracing app users. After presenting useful background material, we employ multiple approaches to design solutions for the use cases. We combine homomorphic encryption with searchable encryption and a private information retrieval protocol to build an effective construction for the cloud auditing use case. As a second step, we develop an algorithm that generates appropriate parameters for a somewhat homomorphic encryption scheme by considering the correctness, performance, and security of the respective application. Finally, we propose an alternative use of the Bloom filter data structure, adding an HMAC function to allow an outsourced third party to perform set relations in a private manner. By analyzing the overlapping bits occurring in Bloom filters while testing the inclusiveness or disjointness of the sets, we show how these functions maintain privacy and allow operations to be computed directly on the data structure. Then we show how these constructions can be applied to the four selected use cases.
Our solutions have been implemented, and we provide promising results that validate their efficiency and thus their relevance.
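A minimal sketch of the keyed Bloom filter idea: bit positions are derived with HMAC-SHA256 so that only key holders can insert or query items, while an outsourced party can still compare two filters bit-wise to test set relations. Parameter choices and item names are invented for illustration, and this is not the thesis' exact construction.

```python
import hashlib
import hmac

class KeyedBloomFilter:
    """Bloom filter whose k bit positions per item come from HMAC-SHA256,
    keeping the mapping from items to bits secret from non-key-holders."""

    def __init__(self, key, m=256, k=4):
        self.key, self.m, self.k = key, m, k
        self.bits = 0  # bit array packed into one integer

    def _positions(self, item):
        for i in range(self.k):
            mac = hmac.new(self.key, b"%d:%s" % (i, item), hashlib.sha256)
            yield int.from_bytes(mac.digest(), "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def possibly_disjoint(self, other):
        """No overlapping bits -> the underlying sets are disjoint.
        (Overlapping bits may still be hash collisions.)"""
        return self.bits & other.bits == 0

    def possibly_subset(self, other):
        """All of our bits set in `other` -> possibly a subset
        (false positives are possible, false negatives are not)."""
        return self.bits & other.bits == self.bits

key = b"shared-secret"
fa, fb = KeyedBloomFilter(key), KeyedBloomFilter(key)
for item in (b"alice", b"bob"):
    fa.add(item)
for item in (b"alice", b"bob", b"carol"):
    fb.add(item)
print(fa.possibly_subset(fb), fa.possibly_disjoint(fb))
```

Note that the set relations are computed purely from the bit overlap of the two filters, so the outsourced party never sees the items themselves.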