Refine
Year of publication
Document Type
- Conference Proceeding (113)
- Article (reviewed) (25)
- Master's Thesis (5)
- Part of a Book (4)
- Article (unreviewed) (4)
- Report (4)
- Book (2)
- Doctoral Thesis (2)
- Patent (2)
Conference Type
- Conference article (112)
- Conference proceedings (1)
Keywords
- Embedded system (8)
- Blockchain (6)
- Communication (4)
- blockchain (4)
- IIoT (3)
- IT security (3)
- Internet of Things (3)
- Internet of Things (3)
- IoT security (3)
- Security (3)
Institute
- ivESK - Institute of Reliable Embedded Systems and Communication Electronics (161)
Open Access
- Closed Access (81)
- Open Access (41)
- Closed (28)
- Gold (8)
- Bronze (7)
- Diamond (4)
IPv6 over resource-constrained devices (6Lo) has emerged as a de-facto standard for Internet of Things (IoT) applications, especially in home and building automation systems. We provide the results of an investigation of the applicability of 6LoWPAN with RPL mesh networks for home and building automation use cases. The proper selection of Trickle parameters and neighbor reachability time-outs is important in the RPL protocol suite to respond efficiently to any path failure. These parameters were analyzed in the context of energy consumption with respect to the number of control packets. The measurements were performed in an Automated Physical Testbed (APTB). The results match the recommendations of RFC 7733 for selecting the various parameters of the RPL protocol suite. This paper shows the relationship between various RPL parameters and the control traffic overhead during network rebuilds. Comparative measurements with Bluetooth Low Energy (BLE) showed that 6Lo with RPL outperformed BLE in this use case, with lower control traffic overhead.
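The Trickle timer (RFC 6206) is the mechanism behind the parameters discussed above. The following minimal Python sketch (with illustrative parameter values, not those from the measurements) shows how Imin, the number of doublings, and the redundancy constant k trade repair responsiveness against control traffic:

```python
import random

class TrickleTimer:
    """Minimal sketch of the Trickle algorithm (RFC 6206) governing
    RPL control-message (DIO) emission. Imin, the number of doublings,
    and the redundancy constant k are the tunable parameters."""

    def __init__(self, imin=0.008, doublings=8, k=10):
        self.imin = imin                     # minimum interval (seconds)
        self.imax = imin * (2 ** doublings)  # maximum interval
        self.k = k                           # redundancy constant
        self.i = imin                        # current interval
        self.c = 0                           # consistency counter

    def start_interval(self):
        self.c = 0
        # a transmission is scheduled at a random time in [I/2, I)
        self.t = random.uniform(self.i / 2, self.i)

    def hear_consistent(self):
        self.c += 1   # consistent messages heard allow suppression

    def interval_expired(self):
        sent = self.c < self.k               # transmit only if few peers already did
        self.i = min(self.i * 2, self.imax)  # exponential back-off of control traffic
        self.start_interval()
        return sent

    def inconsistency(self):
        # e.g. a path failure: reset to Imin for a fast network rebuild
        if self.i > self.imin:
            self.i = self.imin
            self.start_interval()
```

The suppression (`c < k`) and the interval doubling are exactly what keeps the control packet count, and hence the energy consumption, low in a stable network, while the reset to Imin governs how quickly the network rebuilds after a path failure.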
Internet of Things (IoT) applications have become progressively more in demand, most notably on embedded devices (EDs). However, these devices differ in their computational capabilities, memory usage, and energy resources when connecting to the Internet via Wireless Sensor Networks (WSNs). For this to be achievable, the WSNs that form the bulk of an IoT deployment require a new set of technologies and protocols tailored to their constraints. Thus, IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) was designed by the Internet Engineering Task Force (IETF) as a standard for EDs. Nevertheless, communication in 6LoWPAN networks requires appropriate routing protocols to achieve efficient Quality of Service (QoS). Among the routing protocols for 6LoWPAN networks, RPL is considered the best choice; however, its Energy Consumption (EC) and Routing Overhead (RO) are considerably high when it is deployed in a large network. Therefore, this paper proposes HRPL, an enhancement of the RPL protocol that reduces EC and RO. We present the performance of RPL and HRPL in terms of EC, Control Traffic Overhead (CTO), and latency, based on simulations of a 6LoWPAN network in a fixed environment using the COOJA simulator. The results show that HRPL achieves better performance in all tested topologies in terms of EC and CTO. However, the latency of HRPL improves over RPL only in the chain topology. We found that further research is required to study the relationship between latency and packet transmission load in order to optimize EC.
In the last decade, IPv6 over Low power Wireless Personal Area Networks, also known as 6LoWPAN, has evolved into a primary contender for short-range wireless communication and holds the promise of an Internet of Things that is completely based on the Internet Protocol. In the meantime, various 6LoWPAN implementations are available, be they open source or commercial. One of these implementations, developed by the authors' team, was tested on an Automated Physical Testbed for Wireless Systems at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, which allows the flexible setup and full control of arbitrary topologies. It also supports time-varying topologies and thus helps to measure the performance of the RPL implementation. The measurement results show very good stability and short-term and long-term performance, even under dynamic conditions. In addition, they confirm that the performance predictions from other papers are consistent with real-life implementations.
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented a cost-efficient and continuous signal monitoring of Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more traction for the seamless connectivity of embedded devices in the Internet of Things. It can be observed that most of the available solutions follow an open-source approach, which significantly accelerates the development of technologies and markets. Although the currently available implementations are in good shape, all of them come with some significant drawbacks. It was therefore decided to start the development of our own implementation, which takes the advantages of the existing solutions but tries to avoid their drawbacks. This paper discusses the reasoning behind this decision and describes the implementation and its characteristics, as well as the testing results. The given implementation is available as an open-source project under [15].
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
Uncontrollable manufacturing variations in electrical hardware circuits can be exploited as Physical Unclonable Functions (PUFs). Herein, we present a Printed Electronics (PE)-based PUF system architecture. Our proposed Differential Circuit PUF (DiffC-PUF) is a hybrid system combining silicon-based and PE-based electronic circuits. The novel aspect of the DiffC-PUF architecture is a specially designed real hardware system architecture that enables the automatic readout of interchangeable printed DiffC-PUF core circuits. The silicon-based addressing and evaluation circuit supplies and controls the printed PUF core and ensures seamless integration into silicon-based smart systems. The major target of our work is interconnected applications for the Internet of Things (IoT).
We propose secure multi-party computation techniques for the distributed computation of the average using a privacy-preserving extension of gossip algorithms. While there has recently been substantial research on gossip algorithms (GA) for data aggregation itself, to the best of our knowledge this research line does not take into consideration the privacy of the entities involved. More concretely, our objective is to not reveal a node's private input value to any other node in the network, while still computing the average in a fully decentralized fashion. Not revealing, in our setting, means that an attacker gains only a minor advantage when guessing a node's private input value. We precisely quantify an attacker's advantage when guessing, as a measure of the level of data privacy leakage of a node's contribution. Our results show that by perturbing the input values of each participating node with pseudo-random noise with appropriate statistical properties, (i) only a minor and configurable leakage of private information is revealed, while at the same time (ii) a good average approximation is provided at each node. Our approach can be applied to a decentralized prosumer market, in which participants act as energy consumers or producers or both, referred to as prosumers.
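The core idea, perturb first, then gossip, can be sketched in a few lines of Python; the Gaussian noise and the random pairwise-averaging update used here are simplified stand-ins for the statistically calibrated perturbation the paper analyzes:

```python
import random

def private_gossip_average(values, sigma=1.0, rounds=2000, seed=1):
    """Sketch of privacy-preserving gossip averaging: each node first
    perturbs its private input with zero-mean noise (so a neighbor only
    ever sees a noisy value), then repeated pairwise averaging converges
    to the average of the perturbed values, which approximates the true
    average for larger networks."""
    rng = random.Random(seed)
    # (i) perturb: the private input itself is never shared
    state = [v + rng.gauss(0.0, sigma) for v in values]
    n = len(state)
    # (ii) gossip: random pairwise averaging preserves the global sum
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            m = (state[i] + state[j]) / 2.0
            state[i] = state[j] = m
    return state  # every node ends near the perturbed global average
```

Since each pairwise step preserves the sum of the states, the consensus value deviates from the true average only by the mean of the injected noise, which shrinks on the order of sigma divided by the square root of the network size.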
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for an extended period of time. One of the most important requirements for security in many industrial IoT applications is the authentication of the devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, where challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol incurs a smaller computational, communication, storage, and hardware overhead compared to similar works.
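A generic challenge-response flow of this kind can be sketched as follows. This is not the paper's actual protocol: `DEVICE_SECRET`, the server-proof construction, and the key derivation are illustrative stand-ins, and a software HMAC stands in for the physical PUF.

```python
import hashlib, hmac, secrets

DEVICE_SECRET = b"device-unique-variations"  # stand-in for the physical PUF

def puf_response(challenge: bytes) -> bytes:
    """In hardware, the response stems from uncontrollable manufacturing
    variations; here an HMAC merely models that unique mapping."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# --- enrollment (secure environment): server collects CRPs -----------
crp_table = {}
for _ in range(4):
    c = secrets.token_bytes(16)
    crp_table[c] = puf_response(c)

# --- mutual authentication (illustrative flow) -----------------------
def authenticate():
    # server -> device: a fresh, never-reused challenge (replay protection)
    c, r_expected = crp_table.popitem()
    # device -> server: the PUF response proves the device's identity
    r = puf_response(c)
    assert hmac.compare_digest(r, r_expected)
    # server -> device: knowledge of the response authenticates the server
    server_proof = hashlib.sha256(b"srv" + r_expected).digest()
    assert hmac.compare_digest(server_proof, hashlib.sha256(b"srv" + r).digest())
    # both sides derive the same session key from the shared response
    return hashlib.sha256(b"key" + r).digest()
```

Each challenge is consumed exactly once, which is why such schemes enroll a pool of challenge-response pairs up front.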
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Legacy IT security protocols, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, the application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach, which can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully applied this rekeying mechanism to it.
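The general idea of carrying the rekeying state in-band can be sketched as follows, assuming a per-frame key-epoch byte and a truncated HMAC-SHA256 tag; this illustrates seamless key switchover in cyclic traffic, not the protocol developed in the paper:

```python
import hashlib, hmac

def protect(frame: bytes, key_table: dict, epoch: int) -> bytes:
    """Sender side: the cyclic frame carries its key epoch in-band, so a
    key switch needs neither a traffic stall nor dedicated control
    messages - the receiver simply indexes the key by the epoch."""
    tag = hmac.new(key_table[epoch], bytes([epoch]) + frame,
                   hashlib.sha256).digest()[:8]
    return bytes([epoch]) + frame + tag

def verify(packet: bytes, key_table: dict):
    """Receiver side: select the key by the epoch found in the frame."""
    epoch, frame, tag = packet[0], packet[1:-8], packet[-8:]
    expected = hmac.new(key_table[epoch], bytes([epoch]) + frame,
                        hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return epoch, frame
```

During a rollover both peers briefly hold the outgoing and the incoming key, so frames of either epoch verify and the cyclic exchange never stalls.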
When designing and installing Indoor Positioning Systems, several interrelated tasks have to be solved to find an optimal placement of the Access Points. For this purpose, a mathematical model for a predefined number of access points indoors is presented. Two iterative algorithms for minimizing the localization error of a mobile object are described. Both algorithms use a local search technique and signal level probabilities. Previously recorded signal strength maps were used in the computer simulation.
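A generic local-search loop of the kind described can be sketched as follows; the candidate grid and the error function are illustrative placeholders for the paper's model, in which the error estimate would come from the pre-recorded signal strength maps:

```python
import random

def local_search_placement(candidates, error_fn, n_aps=3, iters=200, seed=0):
    """Local-search sketch for access-point placement: start from a
    random placement and accept single-AP moves that reduce the
    localization-error estimate returned by error_fn."""
    rng = random.Random(seed)
    placement = rng.sample(candidates, n_aps)
    best = error_fn(placement)
    for _ in range(iters):
        trial = placement[:]
        trial[rng.randrange(n_aps)] = rng.choice(candidates)  # move one AP
        e = error_fn(trial)
        if e < best:                                          # greedy acceptance
            placement, best = trial, e
    return placement, best
```

Because only improving moves are accepted, the error estimate is monotonically non-increasing over the iterations.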
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses the security convergence in Time-Sensitive Networking (TSN) for industrial automation. We present the standard's security architecture and its goals to establish end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
The M-Bus protocol (EN 13757) is in widespread use for metering applications within home area and neighborhood area networks, but lacks a strict specification. This may lead to incompatibilities in real-life installations and to problems in the deployment of new M-Bus networks. This paper presents the development of a novel testbed to emulate physical Metering Bus (M-Bus) networks with different topologies and to allow the flexible verification of real M-Bus devices in real-world scenarios. The testbed is designed to support device manufacturers and service technicians in testing and analyzing their devices within a specific network before installation. The testbed is fully programmable, allowing flexible changes of network topologies, cable lengths, and cable types. It is easy to use, as only the master and slave devices have to be physically connected. This allows multiple tests, including automated regression tests, to be performed autonomously. The testbed is available to other researchers and developers. We invite companies and research institutions to use this M-Bus testbed to increase the common knowledge and real-world experience.
In recent years, physically unclonable functions (PUFs) have gained significant attention in IoT security applications, such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of different devices to generate unique fingerprints for security applications. When generating PUF-based secret keys, the reliability and entropy of the keys are vital factors. This study proposes a novel method for generating PUF-based keys from a set of measurements. Firstly, it formulates the group-based key generation problem as an optimization problem and solves it using integer linear programming (ILP), which guarantees finding the optimal solution. Then, a novel scheme for the extraction of keys from groups is proposed, which we call positioning syndrome coding (PSC). The use of ILP as well as the introduction of PSC facilitates the generation of high-entropy keys with low error correction costs. These new methods have been tested by applying them to the output of a capacitor network PUF. The results confirm the suitability of ILP and PSC for generating high-quality keys.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames as well as detect most communication errors, even in conditions of high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
A novel approach for a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features.
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of compression techniques such as pruning and quantization, it became possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors needed in robotics. Despite the achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices will open up further fields of application. Efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or the retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and the requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments as well as techniques that are also suitable for training on embedded devices, such as incremental learning. Finally, we discuss which problems and open questions still need to be solved in these areas.
Time-Sensitive Networking (TSN) is the most promising time-deterministic wired communication approach for industrial applications. To extend TSN to IEEE 802.11 wireless networks, two challenging problems must be solved: synchronization and scheduling. This paper focuses on the first one. Even though a few solutions already meet the required synchronization accuracies, they are built on expensive hardware that is not suited for mass-market products. While the next Wi-Fi generation might support the required functionalities, this paper proposes a novel method that makes high-precision wireless synchronization possible using commercial low-cost components. With the proposed solution, a standard deviation of the synchronization error of less than 500 ns can be achieved for many use cases and system loads on both CPU and network. This performance is comparable to modern wired real-time fieldbuses, which makes the developed method a significant contribution to the extension of TSN to the wireless domain.
We provide a privacy-friendly cloud-based smart metering storage architecture, which provides few-instance storage of encrypted measurements while at the same time allowing SQL queries on them. Our approach is flexible along two axes: on the one hand, it allows filtering rules to be applied to encrypted data with respect to various upcoming business cases; on the other hand, it provides means for the storage-efficient handling of encrypted measurements by applying server-side deduplication techniques to all incoming smart meter measurements. Although the work at hand is purely dedicated to a smart metering architecture, we believe our approach has value for a broader class of IoT cloud storage solutions. Moreover, it is an example of privacy by design supporting the positive-sum paradigm.
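One standard building block for server-side deduplication over encrypted data is convergent encryption, sketched below; the toy XOR keystream and the field names are purely illustrative, and the paper's actual scheme (which additionally supports SQL queries) is more involved:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # toy counter-mode keystream, for illustration only
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(measurement: bytes):
    """The key is derived from the plaintext itself, so identical
    measurements yield identical ciphertexts and can be deduplicated
    server-side, while the server never sees the plaintext."""
    key = hashlib.sha256(measurement).digest()
    ct = bytes(m ^ s for m, s in
               zip(measurement, _keystream(key, len(measurement))))
    return key, ct

def convergent_decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(c ^ s for c, s in zip(ct, _keystream(key, len(ct))))
```

The trade-off of this building block is well known: equal plaintexts become linkable via their equal ciphertexts, which is exactly why production schemes combine it with further measures.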
It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. Considering machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on the fault detection of bearings. In addition, the rise of machine learning concepts has intensified interest in this area. However, to date, there is no single one-fits-all solution for the predictive maintenance of bearings. Most research so far has only looked at individual bearing types at a time.
This paper gives an overview of the most important approaches for bearing-fault analysis in grinding machines. There are two main parts of the analysis presented in this paper. The first part presents the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the error (e.g. at the inner or at the outer ring of the bearing) and the severity, which detects the size of the fault. The second part presents the prediction of remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing the replacement costs and minimizing downtime.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of the IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with the IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or the blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in the IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey of the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
The Metering Bus, also known as M-Bus, is a European standard (EN 13757-3) for reading out metering devices, like electricity, water, gas, or heat meters. Although real-life M-Bus networks can reach a significant size and complexity, only very simple protocol analyzers are available to observe and maintain such networks. In order to provide developers and installers with the ability to analyze the real bus signals easily, a web-based monitoring tool for the M-Bus has been designed and implemented. Combined with a physical bus interface, it allows for measuring and recording the bus signals. For this, a circuit was first developed which transforms the voltage- and current-modulated M-Bus signals into a voltage signal that can be read by a standard ADC and processed by an MCU. The bus signals and packets are displayed using a web server, which analyzes and classifies the frame fragments. As an additional feature, an oscilloscope functionality is included in order to visualize the physical signal on the bus. This paper describes the development of the read-out circuit for the wired M-Bus and the data recovery.
The authentication of electronic devices based on the individual shapes of the correlograms of their internal electric noise is a well-known method. Specific physical differences in the components, for example caused by variations in production quality, cause specific electrical signals, i.e. electric noise, in the electronic device. It is possible to obtain this information and to identify the specific differences of the individual devices using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility to identify and authenticate electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. As a result of these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
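The template construction described above, normalized autocorrelation of the recorded noise, quantized to one bit per lag, can be sketched as follows; the lag count and the sign-based quantization are illustrative choices, and matching against an enrolled template would use a Hamming-distance threshold:

```python
def bit_template(noise, lags=64):
    """Derive a device fingerprint from the normalized autocorrelation
    of its recorded noise: one bit per lag, taken from the sign of the
    autocorrelation coefficient."""
    n = len(noise)
    mean = sum(noise) / n
    x = [v - mean for v in noise]
    var = sum(v * v for v in x)
    acf = [sum(x[i] * x[i + k] for i in range(n - k)) / var
           for k in range(1, lags + 1)]
    return [1 if a >= 0 else 0 for a in acf]

def hamming(a, b):
    """Distance between a fresh template and an enrolled one."""
    return sum(x != y for x, y in zip(a, b))
```

A device would be accepted when the Hamming distance between the fresh and the enrolled template stays below a calibrated threshold, and longer recordings stabilize the autocorrelation estimate, matching the reported accuracy gain with recording duration.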
Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or different storage schemes have been proposed in the literature. The use of cloud resources for blockchain storage has been extensively studied in recent years. Nonetheless, block selection remains a substantial challenge associated with cloud resources and blockchain integration. This paper proposes a deep reinforcement learning (DRL) approach to solving the block selection problem, which involves identifying the blocks to be transferred to the cloud. We convert the multi-objective optimization of block selection into a Markov decision process (MDP) and design a simulated blockchain environment for training and testing our proposed DRL approach. We utilize two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to solve the block selection problem and analyze their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer, respectively, compared to the full replication approach of conventional blockchain systems. The slower DRL algorithm, A2C, achieves a run-time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution in the time-sensitive blockchain-IIoT environment.
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, may allow the control of physical systems, and potentially open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence. Only recently have these techniques also been applied to IoT networks. In this paper, we present a survey of machine learning and deep learning methods for intrusion detection and investigate how previous works have used federated learning for IoT cybersecurity. For this, we present an overview of IoT protocols and potential security risks. We also report the techniques and datasets used in the studied works, discuss the challenges of using ML, DL, and FL for IoT cybersecurity, and provide future insights.
An Overview of Technologies for Improving Storage Efficiency in Blockchain-Based IIoT Applications
(2022)
Since the inception of blockchain-based cryptocurrencies, researchers have been fascinated with the idea of integrating blockchain technology into other fields, such as health and manufacturing. Despite the benefits of blockchain, which include immutability, transparency, and traceability, certain issues that limit its integration with IIoT still linger. One of these prominent problems is the storage inefficiency of the blockchain. Due to the append-only nature of the blockchain, the growth of the blockchain ledger inevitably leads to high storage requirements for blockchain peers. This poses a challenge for its integration with the IIoT, where high volumes of data are generated at a relatively faster rate than in applications such as financial systems. Therefore, there is a need for blockchain architectures that deal effectively with the rapid growth of the blockchain ledger. This paper discusses the problem of storage inefficiency in existing blockchain systems, how this affects their scalability, and the challenges that this poses to their integration with IIoT. This paper explores existing solutions for improving the storage efficiency of blockchain–IIoT systems, classifying these proposed solutions according to their approaches and providing insight into their effectiveness through a detailed comparative analysis and examination of their long-term sustainability. Potential directions for future research on the enhancement of storage efficiency in blockchain–IIoT systems are also discussed.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to the complex network topology and RF characteristics, requiring a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions. Additionally, the platform supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB testbed. The APTB testbed provides a reliable and efficient solution for testing wireless communication networks under various scenarios. The implementation and performance verification of the testbed demonstrate its effectiveness and usefulness for researchers and industry practitioners.
The Institute of Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany, has developed an automated testing environment, the Automated Physical TestBed (APTB), for analyzing the performance of wireless systems and their supporting protocols. Physical wireless networking nodes connect to the APTB, and their antenna outputs attach to RF waveguides. To model the RF environment, these waveguides establish wired connections among RF elements such as splitters, attenuators, and switches. In such a setup, the path characteristics can be varied by altering the attenuators and switches. The major advantage of the APTB is an isolated, well-controlled, repeatable test environment for various conditions, suitable for statistical analyses and even regression tests. This paper provides an overview of the design and implementation of the APTB and demonstrates its ability to automate test cases and its efficiency.
In the last decade, deep learning models for condition monitoring of mechanical systems increasingly gained importance. Most of the previous works use data of the same domain (e.g., bearing type) or of a large amount of (labeled) samples. This approach is not valid for many real-world scenarios from industrial use-cases where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32 % on small labeled datasets compared to the current state-of-the-art in the time-frequency domain.
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as for semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
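Maximum Mean Discrepancy, on which the proposed LMMD builds, can be illustrated with a minimal sketch. The Gaussian-kernel estimator below is a generic MMD implementation for scalar features; it does not reproduce the layer-wise extension described above.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel k(x, y) = exp(-(x - y)^2 / (2 * sigma^2)) for scalars."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, ys, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between
    two scalar samples xs (source domain) and ys (target domain)."""
    k_xx = sum(gaussian_kernel(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    k_yy = sum(gaussian_kernel(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    k_xy = sum(gaussian_kernel(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return k_xx + k_yy - 2 * k_xy

# Identical samples yield MMD^2 = 0; a shifted target yields a positive value,
# which is the quantity a transfer-learning loss would try to minimize.
same = mmd_squared([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])
shifted = mmd_squared([0.1, 0.2, 0.3], [2.1, 2.2, 2.3])
```

Minimizing such a term during training pulls the source and target feature distributions together, which is the core idea that LMMD applies per layer.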
In this paper, we study the runtime performance of symmetric cryptographic algorithms on an embedded ARM Cortex-M4 platform. Symmetric cryptographic algorithms can serve to protect the integrity and optionally, if supported by the algorithm, the confidentiality of data. A broad range of well-established algorithms exists, where the different algorithms typically have different properties and come with different computational complexity. On deeply embedded systems, the overhead imposed by cryptographic operations may be significant. We execute the algorithms AES-GCM, ChaCha20-Poly1305, HMAC-SHA256, KMAC, and SipHash on an STM32 embedded microcontroller and benchmark the execution times of the algorithms as a function of the input lengths.
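The benchmarking methodology (execution time as a function of input length) can be sketched in a few lines. The example below times HMAC-SHA256 from the Python standard library on a desktop host; it only illustrates the measurement loop, not the Cortex-M4 results of the paper.

```python
import hashlib
import hmac
import time

def benchmark_hmac_sha256(key: bytes, lengths, iterations=200):
    """Measure the average HMAC-SHA256 execution time per input length.
    Returns a dict {input_length_bytes: seconds_per_operation}."""
    results = {}
    for n in lengths:
        msg = b"\x00" * n  # representative payload of length n
        start = time.perf_counter()
        for _ in range(iterations):
            hmac.new(key, msg, hashlib.sha256).digest()
        results[n] = (time.perf_counter() - start) / iterations
    return results

# Sweep a few input lengths, as in the paper's benchmark setup.
timings = benchmark_hmac_sha256(b"\x01" * 32, [16, 256, 4096])
```

On an embedded target, the same loop structure would typically use a hardware cycle counter instead of `time.perf_counter()`.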
Blockchain interoperability: the state of heterogeneous blockchain-to-blockchain communication
(2023)
Blockchain technology has been increasingly adopted since the introduction of Bitcoin, with several blockchain architectures and solutions being proposed. Most proposed solutions have been developed in isolation, without a standard protocol or cryptographic structure to work with. This has led to the problem of interoperability, where solutions running on different blockchain platforms are unable to communicate, limiting the scope of use. With blockchains being adopted in a variety of fields such as the Internet of Things, it is expected that the problem of interoperability, if not addressed quickly, will stifle technology advancement. This paper presents the current state of interoperability solutions proposed for heterogeneous blockchain systems. A look is taken at interoperability solutions not only for cryptocurrencies but also for general data-based use cases. Current open issues in heterogeneous blockchain interoperability are presented. Additionally, some possible research directions are presented to enhance and extend the existing blockchain interoperability solutions. It was discovered that, though there are a number of proposed solutions in the literature, few have seen real-world implementation. The lack of blockchain-specific standards has slowed the progress of interoperability. It was also realized that most of the proposed solutions are developed targeting cryptocurrency-based applications.
The integration of Internet of Things devices onto the Blockchain implies an increase in the transactions that occur on the Blockchain, thus increasing the storage requirements.
A solution approach is to leverage cloud resources for storing blocks within the chain. The paper therefore proposes two solutions to this problem. The first is an improved hybrid architecture design which uses containerization to create a side chain on a fog node for the devices connected to it; the second is an Advanced Time-variant Multi-objective Particle Swarm Optimization Algorithm (AT-MOPSO) for determining the optimal number of blocks that should be transferred to the cloud for storage. This algorithm uses time-variant weights for the velocity of the particle swarm optimization and the non-dominated sorting and mutation schemes from NSGA-III. The proposed algorithm was compared with the original MOPSO algorithm, the Strength Pareto Evolutionary Algorithm (SPEA-II), the Pareto Envelope-based Selection Algorithm with region-based selection (PESA-II), and NSGA-III. The proposed AT-MOPSO showed better results than the aforementioned algorithms in cloud storage cost and query probability optimization. Importantly, AT-MOPSO achieved 52% energy efficiency compared to NSGA-III.
To show how this algorithm can be applied to a real‑world Blockchain system, the BISS industrial Blockchain architecture was adapted and modified to show how the AT‑MOPSO can be used with existing Blockchain systems and the benefits it provides.
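The time-variant weighting idea can be sketched as follows. A linearly decreasing inertia weight is a common time-variant PSO scheme; the exact AT-MOPSO weighting functions are not reproduced here.

```python
def time_variant_inertia(iteration, max_iterations, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: favours exploration early in the
    run and exploitation late (a common time-variant scheme; the exact
    AT-MOPSO weighting is an assumption not reproduced here)."""
    frac = iteration / max_iterations
    return w_start - (w_start - w_end) * frac

def update_velocity(v, x, p_best, g_best, iteration, max_iterations,
                    c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    """Standard PSO velocity update using the time-variant inertia weight.
    r1 and r2 are fixed here for reproducibility; in practice they are
    drawn uniformly at random per update."""
    w = time_variant_inertia(iteration, max_iterations)
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
```

The multi-objective machinery (non-dominated sorting, mutation from NSGA-III) would sit on top of this single-particle update.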
Covert and side-channels, as well as techniques to establish them in cloud computing, have been in the focus of research for quite some time. However, not many concrete mitigation methods have been developed, and even fewer have been adapted and concretely implemented by cloud providers. Thus, we recently conceptually proposed C3-Sched, a CPU-scheduling-based approach to mitigate L2 cache covert-channels. Instead of flushing the cache on every context switch, we schedule trusted virtual machines to create noise which prevents potential covert-channels. Additionally, our approach aims at preserving performance by utilizing existing instead of artificial workload while reducing covert-channel-related cache flushes to cases where not enough noise has been achieved. In this work, we evaluate the cache covert-channel mitigation and the performance impact of our integration of C3-Sched into the XEN credit scheduler. Moreover, we compare it to naive solutions and more competitive approaches.
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. The current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks use authentication based only on SIM cards, but as 5G is being applied in fields like the IIoT and automation, even in Non-Public Networks (NPNs), there is a need for a simpler method of authentication. Certificate-based authentication is one such mechanism: it is passwordless and works solely on the information present in the digital certificate that the user holds. This paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and carries out primary authentication with their help. In this work, we conduct a study of the presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. On the basis of this analysis, a series of proposed steps for certificate-based primary authentication is presented.
Real-Time Ethernet has become the major communication technology for modern automation and industrial control systems. On the one hand, this trend increases the need for an automation-friendly security solution, as such networks can no longer be considered sufficiently isolated. On the other hand, it shows that, despite diverging requirements, the domain of Operational Technology (OT) can derive advantage from high-volume technology of the Information Technology (IT) domain. Based on these two sides of the same coin, we study the challenges and prospects of approaches to communication security in real-time Ethernet automation systems. In order to capitalize on the expertise aggregated in decades of research and development, we put a special focus on the reuse of well-established security technology from the IT domain. We argue that enhancing such technology to become automation-friendly is likely to result in more robust and secure designs than greenfield designs. Because of its widespread deployment and the (to this date) nonexistence of a consistent security architecture, we use PROFINET as a showcase for our considerations. Security requirements for this technology are defined, and different well-known solutions are examined according to their suitability for PROFINET. Based on these findings, we elaborate the necessary adaptations for deployment on PROFINET.
The vision of the "Internet of Things" has shaped research and development around smart technologies and the networking of devices for years. In the future, the real world will be increasingly linked to the Internet, enabling numerous everyday objects ("things") to interact and to communicate both online and autonomously. Many industries such as medicine, automotive manufacturing, energy supply, and consumer electronics are equally affected, creating new economic potential despite the risks. In the "Connected Home" area, solutions already exist that use the intelligent networking of household appliances and sensors to increase the quality of life within one's own four walls. This work deals with the Thread protocol, a new technology for integrating multiple communication interfaces within one network. Furthermore, the implementation at the network layer is presented, along with curated information about the technologies used.
Conceptualization and implementation of automated optimization methods for private 5G networks
(2023)
Today's companies are adjusting to new connectivity realities. New applications require more bandwidth, lower latency, and higher reliability as industries become more distributed and autonomous. Private 5th Generation (5G) networks, known as 5G Non-Public Networks (5G-NPN), are 3rd Generation Partnership Project (3GPP)-based 5G networks that can deliver seamless and dedicated wireless access for a particular industrial use case by meeting that application's requirements. To meet these requirements, several radio-related aspects and network parameters must be considered. In many cases, the behavior of the link may vary based on wireless conditions, available network resources, and User Equipment (UE) requirements. Furthermore, optimizing these networks can be a complex task due to the large number of network parameters and KPIs that need to be considered. For these reasons, traditional solutions and static network configurations are not feasible or simply impossible. Although several papers in the literature address optimization methods for cellular networks in industrial scenarios, more insight into these existing but complex or little-known methods is needed.
In this thesis, a series of optimization methods was implemented to deliver an optimal configuration solution for a 5G private network. To facilitate this, a testing system was developed. This system enables remote control over the UE and the 5G network, the establishment of a test environment, the extraction of relevant KPI reports from both the UE and network sides, the assessment of test results and KPIs, and the effective utilization of the optimization and sampling techniques.
The research highlights the advantages of automated testing using OFAT, Simulated Annealing, and Random Forest Regressor methods. With OFAT, as a common sampling method, a sensitivity analysis was performed and the impact of each single parameter variation on the performance of the network was revealed. With Simulated Annealing, an optimal solution with an MSE of roughly 10 was found. The Random Forest Regressor presented a significant advantage over Simulated Annealing by providing substantial benefits in time efficiency due to its machine-learning capability. Additionally, it was observed that providing a larger dataset or using other machine-learning techniques might yield a more accurate solution.
Time-Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploying and integrating TSN in such networks. The IEEE 802.1Qcc standard proposes different configuration models to implement a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks; for wireless networks, they are still considered "work in progress". This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunications ultra-low energy (DECT ULE) wireless communication protocol.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of equipment positions on the map with simultaneous mobile-object trajectory localization (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
Covert and side-channels have been known for a long time due to their versatile forms of appearance. With nearly every technical improvement or change in technology, such channels have been (re-)created or known methods have been adapted. For example, the introduction of hyperthreading technology has opened new possibilities for covert communication between malicious processes, because they can now share the arithmetic logic unit (ALU) as well as the L1 and L2 caches, which enables establishing multiple covert channels. Even virtualization, which is known for its isolation of multiple machines, is prone to covert and side-channel attacks due to the sharing of resources. Therefore, it is not surprising that cloud computing is not immune to this kind of attack. Even more, cloud computing with multiple, possibly competing users or customers using the same shared resources may elevate the risk of unwanted communication. In such a setting, the "air gap" between physical servers and networks disappears, and only the means of isolation and virtual separation serve as a barrier between adversary and victim. In the work at hand, we provide a survey of weak spots that an adversary trying to exfiltrate private data from target virtual machines could exploit in a cloud environment. We evaluate the feasibility of example attacks and point out possible mitigation solutions where they exist.
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least inasmuch as real-time requirements in cyclic fieldbus communications are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulations. We study the use of both hardware and software-based implementations of AES-GCM. By simulations as well as measurements on an FPGA-based prototype setup we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains a maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components may actually be very lightweight.
In this work we describe the implementation details of a protocol suite for secure and reliable over-the-air reprogramming of wireless resource-restricted devices. Although forward error correction codes aiming at robust transmission over a noisy wireless medium have recently been discussed and evaluated extensively, we believe that the clear value of the contribution at hand is to share our experience with a meaningful combination and implementation of various multihop (broadcast) transmission protocols and custom-fit security building blocks. For robust and reliable data transmission, we make use of fountain codes, a.k.a. rateless erasure codes, and show how to combine such schemes with an underlying medium access control protocol, namely a distributed low-duty-cycle medium access control (DLDC-MAC). To handle the well-known packet-pollution problem of forward-error-correction approaches, where an attacker bogusly modifies or infiltrates some minor number of encoded packets and thus pollutes the whole data stream at the receiver side, we apply homomorphic message authentication codes (HomMAC). We discuss implementation details and the pros and cons of the two currently available HomMAC candidates for our setting. Both require a symmetric block cipher as the core cryptographic primitive, for which, as we will argue later, we have opted for the (exchangeable) PRESENT, PRIDE, and PRINCE ciphers in our implementation.
To demonstrate how deep learning can be applied to industrial applications with limited training data, deep learning methodologies are used in three different applications. In this paper, we perform unsupervised deep learning utilizing variational autoencoders and demonstrate that federated learning is a communication efficient concept for machine learning that protects data privacy. As an example, variational autoencoders are utilized to cluster and visualize data from a microelectromechanical systems foundry. Federated learning is used in a predictive maintenance scenario using the C-MAPSS dataset.
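The communication-efficient aggregation step of federated learning can be sketched as a FedAvg-style weighted parameter average; the function below is a generic illustration, not the paper's exact implementation.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: each client's model parameters are weighted
    by its local dataset size. Only parameters are exchanged, never raw data,
    which is what makes the scheme privacy-preserving and communication
    efficient."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = []
    for i in range(n_params):
        averaged.append(
            sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        )
    return averaged

# Two clients with equally sized datasets: the result is the plain mean.
avg = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 100])
```

In a full system, each round would consist of local training on each client followed by one such aggregation on the server.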
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, the identification of vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research [1] has shown how such detection can generally be enabled by deep learning methods, but appears to be very limited regarding the overall number of detected vulnerabilities. We analyse to what extent we can cover the identification of a larger variety of vulnerabilities. To this end, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardised LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability; these achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, our proposed technical approach and methodology enable an accurate detection of 23 (compared to 4 [1]) vulnerabilities.
The status quo of PROFINET, a commonly used industrial Ethernet standard, provides no inherent security in its communication protocols. In this thesis, an approach for protecting real-time PROFINET RTC messages against spoofing, tampering, and optionally information disclosure is specified and implemented in a real-world prototype setup. For this purpose, authenticated encryption is used, which relies on symmetric cipher schemes. In addition, a procedure to update the symmetric encryption key in a bumpless manner, i.e., without interrupting the real-time communication, is introduced and realized.
The concept for protecting the PROFINET RTC messages was developed in collaboration with a task group within the security working group of PROFINET International, of which the author of this thesis has been a member. This thesis contributes by proving the practicability of the concept in a real-world prototype setup, which consists of three FPGA-based development boards that communicate with each other to showcase bumpless key updates.
To enable a bumpless key update without disturbing the deterministic real-time traffic by dedicated messages, the key update annunciation and status are embedded into the header. By provisioning two key slots, of which only one is in use while the other is being prepared, a well-synchronized, coordinated switch between the receiver and the sender performs the key update.
The developed prototype setup allows testing the concept and builds the foundation for further research and implementation activities, e.g., investigating the impact of cryptographic operations on the processing time.
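The two-slot mechanism described above can be sketched as a minimal key store. Class and method names are hypothetical, and the header-based signalling between sender and receiver is omitted.

```python
class DualSlotKeyStore:
    """Minimal sketch of a two-slot key store for bumpless key updates:
    one slot is active while the other is prepared, and the switch is a
    single slot-index change, so traffic is never interrupted. The header
    signalling that coordinates sender and receiver is not modelled."""

    def __init__(self, initial_key: bytes):
        self.slots = [initial_key, None]
        self.active = 0

    def prepare(self, new_key: bytes):
        """Load the new key into the inactive slot; traffic is unaffected."""
        self.slots[1 - self.active] = new_key

    def switch(self):
        """Activate the prepared slot; the old key remains until the next
        prepare() overwrites it."""
        if self.slots[1 - self.active] is None:
            raise RuntimeError("no key prepared")
        self.active = 1 - self.active

    def current_key(self) -> bytes:
        return self.slots[self.active]

# Prepare the next key while the current one is still in use, then switch.
store = DualSlotKeyStore(b"key-v1")
store.prepare(b"key-v2")
store.switch()
```

The design choice of two slots means the switch itself is trivially cheap; all expensive work (key distribution, provisioning) happens before it.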
A physical unclonable function (PUF) is a hardware circuit that produces a random sequence based on its manufacturing-induced intrinsic characteristics. In the past decade, silicon-based PUFs have been extensively studied as a security primitive for identification and authentication. The emerging field of printed electronics (PE) enables novel application fields in the scope of the Internet of Things (IoT) and smart sensors. In this paper, we design and evaluate a printed differential circuit PUF (DiffC-PUF). The simulation data are verified by Monte Carlo analysis. Our design is highly scalable while consisting of a low number of printed transistors. Furthermore, we investigate the best operating point by varying the PUF challenge configuration and analyzing the PUF security metrics in order to achieve high robustness. At the best operating point, the results show a reliability of 98.37% and a uniqueness of 50.02%, respectively. This analysis also provides useful and comprehensive insights into the design of hybrid or fully printed PUF circuits. In addition, the proposed printed DiffC-PUF core has been fabricated with electrolyte-gated field-effect transistor technology to verify our design in hardware.
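The reported security metrics can be illustrated with their standard definitions: uniqueness as the average inter-device fractional Hamming distance (ideal: 50%) and reliability as 100% minus the average intra-device distance between a reference response and re-evaluations (ideal: 100%). The sketch below computes both from bit-string responses.

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of differing bit positions between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def uniqueness(responses):
    """Average pairwise inter-device fractional Hamming distance, in percent.
    An ideal PUF population yields 50%."""
    n = len(responses)
    bits = len(responses[0])
    dists = [hamming_distance(responses[i], responses[j]) / bits
             for i in range(n) for j in range(i + 1, n)]
    return 100.0 * sum(dists) / len(dists)

def reliability(reference, reevaluations):
    """100% minus the average intra-device fractional Hamming distance
    between a reference response and its re-evaluations (ideal: 100%)."""
    bits = len(reference)
    avg = sum(hamming_distance(reference, r) / bits
              for r in reevaluations) / len(reevaluations)
    return 100.0 * (1.0 - avg)
```

For example, two devices with responses `"0011"` and `"0101"` give a uniqueness of exactly 50%, the ideal value.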
Efficient, secure, and reliable communication is a major precondition for powerful applications in smart metering and the smart grid. This especially holds true for the so-called primary communication in the Local Metrological Network (LMN) between meter and data collector, as the LMN comes with the most stringent requirements with regard to cost, range, bandwidth, and energy efficiency. To this day, LMN field tests are operated all over the world. In these installations, however, energy-autarkic systems play a marginal role. This contribution describes the results of the Seventh Framework Programme (FP7) WiMBex project ("Remote wireless water meter reading solution based on the EN 13757 standard, providing high autonomy, interoperability and range"). In this project, an energy-autarkic water meter was developed and tested which follows the specification of the Wireless M-Bus protocol (EN 13757). The complete system development covers the PCB with the RF transceiver and the microcontroller, the energy converter and storage, and the software with the protocol. This contribution concentrates in particular on the design, development, and verification of the routing protocol. The routing protocol is based on the Q mode of EN 13757-5 (Wireless M-Bus) and was extended by an additional energy-state-related parameter. This extension is orthogonal to the existing protocol and considers both the charge level and the charge characteristics (rate of occurrence, intensity). The software was implemented in NesC under the operating system TinyOS. The system was verified in an automated test bed and in field tests in the UK and Ireland.
In recent years, Physical Unclonable Functions (PUFs) have gained significant attention in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach to designing PUFs is exploiting the intrinsic features of sensors and actuators, such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device: an Aluminum Nitride (AlN) piezoelectric cantilever. We show the variations of electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, as a result of uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design which magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, simulation results of the Monte Carlo method are provided. The results confirm the eligibility of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF is used as a security anchor for PUF-enabled device authentication as well as communication encryption.
Frequently occurring short-term orders of manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for bearings used in machines. Among others, there are hybrid solutions that rely on a physical model. For their usage, knowing the different degradation stages of bearings is essential. To provide this knowledge, this research analyzes the underlying failure mechanisms of these stages, both theoretically and in a practical example on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. In addition, it shows for which use cases low-frequency accelerometers are sufficient. The analysis shows that the degradation stages toward the end of the bearing life can also be detected with low-frequency accelerometers. Further, the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages is pointed out. These aspects have received little attention from industry and research until now, despite offering considerable cost-saving potential.
The Thread protocol is a recent development based on 6LoWPAN (IPv6 over IEEE 802.15.4), but with extensions toward a more media-independent approach, which additionally promises true interoperability. To evaluate and analyse the operation of a Thread network, a given open-source 6LoWPAN stack for embedded devices (emb::6) has been extended to comply with the Thread specification. The implementation covers Mesh Link Establishment (MLE) and network-layer functionality as well as the 6LoWPAN mesh-under routing mechanism based on MAC short addresses. The development has been verified on a virtualization platform and allows dynamic establishment of network topologies based on Thread's partitioning algorithm.
OPC UA (Open Platform Communications Unified Architecture) is already a well-known concept used widely in the automation industry. In the area of factory automation, OPC UA models the underlying field devices, such as sensors and actuators, in an OPC UA server, allowing connected OPC UA clients to access device-specific information via a standardized information model. One of the requirements for the OPC UA server to represent field device data in its information model is advance knowledge about the properties of the field devices in the form of device descriptions. The international standard IEC 61804 specifies EDDL (Electronic Device Description Language) as a generic language for describing the properties of field devices. In this paper, the authors describe a possibility to dynamically map and integrate field device descriptions based on EDDL into OPC UA.
The increasing number of transistors clocked at high frequencies in modern microprocessors leads to increasing power consumption, which calls for active dynamic thermal management. In a research project, a system environment has been developed which includes thermal modeling of the microprocessor in the board system, a software environment to control the characteristics of the system's timing behavior, and a modified Linux scheduler enhanced with a prediction controller. Measurement results are shown for a Freescale i.MX6Q quad-core microprocessor.
The Extensible Authentication Protocol (EAP) offers a flexible way to authenticate end devices and can be combined with TLS for certificate-based authentication. This work is motivated by a potential extension for PROFINET that is intended to employ these protocols.
A secure EAP-TLS protocol stack for embedded systems is to be developed in the Rust programming language. Rust's ownership system eliminates memory errors without sacrificing the advantages of native languages. Particular attention is paid to the use of common Rust libraries in the context of embedded systems, the influence of the memory model on the design, and the integration of C libraries for automated interoperability tests.
The EREMI project is a two-year project funded under the ERASMUS+ framework programme. Its team has developed and will validate an advanced higher-education program, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of physical infrastructure with little or no digitization. All of this will be achieved by applying IoT technologies toward efficient industrial systems and by building highly educated human capital on these economically, politically, and technically crucial topics, which are highly relevant for the rapidly developing industries and economies of intensively transforming countries: Bulgaria, North Macedonia, and Romania. Efficiency will be attained by utilizing the experience and expertise of the involved German partner organisation.
To deal with frequent power outages in developing countries, people turn to solutions like the uninterruptible power supply (UPS), which stores electric energy during normal operating hours and uses it to meet energy needs during rolling blackout intervals. Locally produced UPSs of poorer power quality are widely available in the marketplace, and they have a negative impact on power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power loss in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers that is provided by the distribution company. It does not require batteries; therefore, there is no power loss or harmonic distortion due to charging and discharging. Through load flow and harmonic analysis of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can help fix power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Embedded Analog Physical Unclonable Function System to Extract Reliable and Unique Security Keys
(2020)
Internet of Things (IoT)-enabled devices have become more and more pervasive in our everyday lives. Examples include wearables transmitting and processing personal data and smart labels interacting with customers. Due to the sensitive data involved, these devices need to be protected against attackers. In this context, hardware-based security primitives such as Physical Unclonable Functions (PUFs) provide a powerful solution to secure interconnected devices. The main benefit of PUFs, in combination with traditional cryptographic methods, is that security keys are derived from the random intrinsic variations of the underlying core circuit. In this work, we present a holistic analog-based PUF evaluation platform, enabling direct access to a scalable design that can be customized to fit the application requirements in terms of the number of required keys and the bit width. The proposed platform covers the full software and hardware implementations and allows for tracing the PUF response generation from the digital level back to the internal analog voltages that are directly involved in the response generation procedure. Our analysis is based on 30 fabricated PUF cores that we evaluated in terms of PUF security metrics and bit errors for various temperatures and biases. With an average reliability of 99.20% and a uniqueness of 48.84%, the proposed system shows values close to ideal.
Digital networked communications are the key to all Internet-of-Things applications, especially to smart metering systems and the smart grid. In order to ensure a safe operation of systems and the privacy of users, the transport layer security (TLS) protocol, a mature and well standardized solution for secure communications, may be used. We implemented the TLS protocol in its latest version in a way suitable for embedded and resource-constrained systems. This paper outlines the challenges and opportunities of deploying TLS in smart metering and smart grid applications and presents performance results of our TLS implementation. Our analysis shows that given an appropriate implementation and configuration, deploying TLS in constrained smart metering systems is possible with acceptable overhead.
Precisely synchronized communication is a major precondition for many industrial applications. At the same time, hardware cost and power consumption need to be kept as low as possible in the Internet of Things (IoT) paradigm. While many wired solutions on the market achieve these requirements, wireless alternatives are an interesting field for research and development. This article presents a novel IEEE802.11n/ac wireless solution, exhibiting several advantages over state-of-the-art competitors. It is based on a market-available wireless System on a Chip with modified low-level communication firmware combined with a low-cost field-programmable gate array. By achieving submicrosecond synchronization accuracy, our solution outperforms the precision of low-cost products by almost four orders of magnitude. Based on inexpensive hardware, the presented wireless module is up to 20 times cheaper than software-defined-radio solutions with comparable timing accuracy. Moreover, it consumes three to five times less power. To back up our claims, we report data that we collected with a high sampling rate (2000 samples per second) during an extended measurement campaign of more than 120 h, which makes our experimental results far more representative than others reported in the literature. Additional support is provided by the size of the testbed we used during the experiments, composed of a hybrid network with nine nodes divided into two independent wireless segments connected by a wired backbone. In conclusion, we believe that our novel Industrial IoT module architecture will have a significant impact on the future technological development of high-precision time-synchronized communication for the cost-sensitive industrial IoT market.
Due to climate change and the scarcity of water reservoirs, monitoring and control of irrigation systems is now becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as data-gathering technology in the domain of IoT and CPS. They are efficient for reliable monitoring, giving farmers an edge to take precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring, such as the large-scale irrigation network existing in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH and I-LEACH, and enhances the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator for simulation purposes, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN when compared to the existing protocols.
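The LEACH family of protocols mentioned above rotates the energy-hungry cluster-head role probabilistically among nodes. As background, a sketch of the classic LEACH election threshold (the baseline the proposed protocol improves upon; the new protocol's own rules are not detailed in this abstract):

```python
def leach_threshold(p, r, was_cluster_head_recently):
    """Classic LEACH cluster-head election threshold T(n).
    p: desired fraction of cluster heads per round, r: current round.
    Nodes that already served as cluster head within the current epoch
    (1/p rounds) are excluded until the epoch ends."""
    if was_cluster_head_recently:
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_draws, p, r, recent_heads):
    """Each node draws a uniform random number in [0, 1) and becomes a
    cluster head if the draw falls below its threshold."""
    return [i for i, u in enumerate(node_draws)
            if u < leach_threshold(p, r, i in recent_heads)]
```

The threshold grows over an epoch so that every node eventually takes its turn, which is what spreads the energy load and extends network lifetime.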
The latest generation of programmable logic devices offers, in addition to the configurable logic cells, one or more powerful microprocessors. This work shows how an existing two-chip system is migrated to a Xilinx Zynq 7000 with two ARM A9 cores. The system in question is the "GPS-supported gyro system ADMA" by the company GeneSys. The new solution improves the data exchange between the first microprocessor, used for digital signal processing, and the second processor, used for sequence control, by means of a shared memory. Numerous high-bit-rate interfaces are used for fast, real-time-capable data transfer.
This paper presents an overview of EREMI, a two-year project funded under ERASMUS+ KA203, and its results. The project team’s main objective was to develop and validate an advanced interdisciplinary higher education curriculum, which includes lifelong learning components. The curriculum focuses on enhancing resource efficiency in the manufacturing industry and optimising poorly or non-digitised industrial physical infrastructure systems. The paper also discusses the results of the project, highlighting the successful achievement of its goals. EREMI effectively supports the transition to Industry 5.0 by preparing a common European pool of future experts. Through comprehensive research and collaboration, the project team has designed a curriculum that equips students with the necessary skills and knowledge to thrive in the evolving manufacturing landscape. Furthermore, the paper explores the significance of EREMI’s contributions to the field, emphasising the importance of resource efficiency and system optimisation in industrial settings. By addressing the challenges posed by under-digitised infrastructure, the project aims to drive sustainable and innovative practices in manufacturing. All five project partner organisations have been actively engaged in offering relevant educational content and framework for decentralised sustainable economic development in regional and national contexts through capacity building at a local level. A crucial element of the added value is the new channel for obtaining feedback from students. The survey results, which are outlined in the paper, offer valuable insights gathered from students, contributing to the continuous improvement of the project.
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
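The core idea of generating message parsers from a declarative description can be illustrated in miniature. The sketch below uses a hypothetical field syntax far simpler than eTPL, and Python instead of the generated C++, purely to show the pattern of deriving a binary parser from a data description:

```python
import struct

def make_parser(fields):
    """Build a parser from a declarative field list of
    (name, struct_format) pairs, in the spirit of generating message
    parsers from a presentation-language description."""
    fmt = ">" + "".join(f for _, f in fields)   # ">" = network byte order
    names = [n for n, _ in fields]
    size = struct.calcsize(fmt)

    def parse(data):
        if len(data) < size:
            raise ValueError("truncated message")
        return dict(zip(names, struct.unpack(fmt, data[:size])))

    return parse

# A toy record header: one type byte, a 16-bit version, a 16-bit length
# (loosely modeled on a TLS record header).
parse_header = make_parser([("type", "B"), ("version", "H"), ("length", "H")])
```

An eTPL-style tool generalizes this pattern with variable-length fields, conditionals, and cross-field constraints, which fixed `struct` formats cannot express.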
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest of researchers. The main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module. In normal operation modes, most energy is spent waiting for incoming frames. A so-called Wake-On-Radio (WOR) technology helps to optimize trade-offs between energy consumption, communication range, implementation complexity, and response time. We previously proposed a new protocol called SmartMAC that makes use of such WOR technology. Furthermore, it offers the possibility to balance the energy consumption between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol would be significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol on a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against simulation results and shows that the measured values are very close to the estimated ones. We therefore believe that the proposed MAC algorithm outperforms all other Wake-on-Radio MACs.
Exploiting Dissent: Towards Fuzzing-based Differential Black Box Testing of TLS Implementations
(2017)
The Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the internet. Yet implementations of TLS keep suffering from bugs and security vulnerabilities. In large part, this is due to the protocol's complexity, which makes implementing and testing TLS notoriously difficult. In this paper, we present our work on using differential testing as an effective means to detect issues in black-box implementations of the TLS handshake protocol. We introduce a novel fuzzing algorithm for generating large and diverse corpora of mostly-valid TLS handshake messages. Stimulating TLS servers when they expect a ClientHello message, we find messages generated with our algorithm to induce more response discrepancies and to achieve a higher code coverage than those generated with American Fuzzy Lop, TLS-Attacker, or NEZHA. In particular, we apply our approach to OpenSSL, BoringSSL, WolfSSL, mbedTLS, and MatrixSSL, and find several real implementation bugs, among them a serious vulnerability in MatrixSSL 3.8.4. Our findings also point to imprecisions in the TLS specification. We see the approach presented in this paper as a first step towards fully interactive differential testing of black-box TLS protocol implementations. Our software tools are publicly available as open-source projects.
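The essence of differential black-box testing is to send the same stimulus to several implementations and flag any disagreement as a potential bug or specification ambiguity. A minimal sketch of that comparison loop, with stand-in responder functions instead of actual TLS stacks:

```python
def differential_test(stimulus, implementations):
    """Send the same input to every implementation (a dict mapping
    name -> callable) and report a discrepancy whenever their
    observable responses disagree."""
    responses = {name: impl(stimulus) for name, impl in implementations.items()}
    if len(set(responses.values())) > 1:
        return responses   # discrepancy: candidate bug or spec ambiguity
    return None            # consensus: no differential signal

# Stand-in "servers" that merely classify a message by length; real
# targets would be TLS stacks responding to a fuzzed ClientHello.
impls = {
    "server_a": lambda msg: "alert" if len(msg) < 4 else "ok",
    "server_b": lambda msg: "alert" if len(msg) < 2 else "ok",
}
```

In the paper's setting, the stimuli are the fuzzer-generated mostly-valid handshake messages, and the compared observables are the servers' response messages and alerts.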
Extended Performance Measurements of Scalable 6LoWPAN Networks in an Automated Physical Testbed
(2015)
IPv6 over Low power Wireless Personal Area Networks, also known as 6LoWPAN, is becoming more and more a de facto standard for communications in the Internet of Things, be it in the field of home and building automation, industrial and process automation, or smart metering and environmental monitoring. For all of these applications, scalability is a major precondition, as the complexity of the networks continuously increases. To support this growing number of connected nodes, various 6LoWPAN implementations are available. One of them was developed by the authors' team and was tested on an Automated Physical Testbed for Wireless Systems at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, which allows the flexible setup and full control of arbitrary topologies. It also supports time-varying topologies and thus helps to measure the performance of the RPL implementation. The results of the measurements prove an excellent stability and a very good short- and long-term performance, also under dynamic conditions. In all measurements, there is an advantage of at least 10% with regard to the average times, such as the global repair time; the advantage with regard to average values can reach up to 30%. Moreover, it can be shown that the performance predictions from other papers are consistent with the executed real-life implementations.
Wireless sensor networks have recently found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest of researchers. Such monitoring applications, in general, pose latency requirements that must be weighed against energy efficiency. A further challenge of this application class is the network topology, as the application should be deployable at very large scale. Nevertheless, low power consumption of the devices making up the network must remain in focus in order to maximize the lifetime of the whole system. These devices are usually battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-On-Radio (WOR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, several designs for the integration of WOR into IEEE 802.15.4 are discussed, providing an overview of the trade-offs in energy consumption when deploying WOR schemes in a monitoring system.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest of researchers. The main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed motes. These devices are mostly battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-On-Radio (WOR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, a novel design for the integration of WOR into IEEE802.15.4 is presented, which flexibly allows trade-offs in energy consumption between sender and receiver station, as well as between real-time capability and energy consumption. For identical behavior, the proposed scheme is significantly more efficient than other schemes proposed in recent publications, while preserving backward compatibility with standard IEEE802.15.4 transceivers.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
Maintenance processes in Industry 4.0 applications try to achieve a high degree of quality to reduce the downtime of machinery. The monitoring of executed maintenance activities is challenging as in complex production setups, multiple stakeholders are involved. So, full transparency of the different activities and of the state of the machine can only be supported, if these stakeholders trust each other. Therefore, distributed ledger technologies, like Blockchain, can be promising candidates for supporting such applications. The goal of this paper is a formal description of business and technical interactions between non-trustful stakeholders in the context of Industry 4.0 maintenance processes using distributed ledger technologies. It also covers the integration of smart contracts for automated triggering of activities.
Although short-range wireless communication explicitly targets local and very regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. In particular, the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies supporting Long Range (LoRa™), describes the outdoor setup at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results, and discusses the strengths and weaknesses of this technology.
The paper describes the hardware and software architecture of the developed multi MEMS sensor prototype module, consisting of ARM Cortex M4 STM32F446 microcontroller unit, five 9-axis inertial measurement units MPU9255 (3D accelerometer, 3D gyroscope, 3D magnetometer and temperature sensor) and a BMP280 barometer. The module is also equipped with WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid. Inertial sensors are mounted on a special basement at different angles to each other to eliminate hardware sensors drifts and to provide the capability for self-calibration. The module fuses information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field and air pressure) in order to calculate orientation and trajectory. It might be used as an Inertial Measurement Unit, Vertical Reference Unit or Attitude and Heading Reference System.
Novel manufacturing technologies, such as printed electronics, may enable future applications for the Internet of Everything like large-area sensor devices, disposable security, and identification tags. Printed physically unclonable functions (PUFs) are promising candidates to be embedded as hardware security keys into lightweight identification devices. We investigate hybrid PUFs based on a printed PUF core. The statistics on the intra- and inter-hamming distance distributions indicate a performance suitable for identification purposes. Our evaluations are based on statistical simulations of the PUF core circuit and the thereof generated challenge-response pairs. The analysis shows that hardware-intrinsic security features can be realized with printed lightweight devices.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far to solve this problem require rather expensive and highly specialized FPGA-based devices. With this work, we counter the trend by introducing a straightforward approach to synchronize a fairly cheap IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how we can reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal that we obtain after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not score new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale, low-cost automation solutions.
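The reported figures (mean-square error, worst-case absolute error, output jitter) are straightforward statistics over the measured clock-offset samples. A sketch of how such a summary is computed, with made-up sample data rather than the campaign's measurements:

```python
import math

def sync_statistics(offsets_ns):
    """Summarize clock-offset samples (in nanoseconds) as reported in
    synchronization campaigns: root-mean-square error, worst-case
    absolute error, and peak-to-peak jitter."""
    rms = math.sqrt(sum(o * o for o in offsets_ns) / len(offsets_ns))
    worst = max(abs(o) for o in offsets_ns)
    jitter = max(offsets_ns) - min(offsets_ns)
    return {"rms_ns": rms, "worst_ns": worst, "jitter_ns": jitter}
```

With 2000 samples per second over a 120 h campaign, as described above, such statistics are computed over hundreds of millions of offset samples, which is what makes the reported bounds representative.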
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. However, these devices unfortunately impose overhead on the IoT network. Thus, the Internet Engineering Task Force (IETF) introduced the IPv6 Low-Power Wireless Personal Area Network (6LoWPAN) to provide a solution to this constraint. 6LoWPAN is an Internet Protocol (IP) based communication standard that allows each device to connect to the Internet directly. As a result, the power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades the performance of the network in terms of Quality of Service (QoS), especially in a large network. Therefore, HRPL was developed to enhance the RPL protocol by minimizing the redundant retransmissions that cause the routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduced the routing overhead when implemented in 6LoSH. The observed difference between the experiments is 7.1% in Control Traffic Overhead (CTO) packets and 9.3% in convergence time. Further research is recommended for these metrics: latency, Packet Delivery Ratio (PDR), and throughput.
Hybrid low-voltage physical unclonable function based on inkjet-printed metal-oxide transistors
(2020)
Modern society is striving for digital connectivity that demands information security. As an emerging technology, printed electronics is a key enabler for novel device types with free form factors, customizability, and the potential for large-area fabrication while being seamlessly integrated into our everyday environment. At present, information security is mainly based on software algorithms that use pseudo random numbers. In this regard, hardware-intrinsic security primitives, such as physical unclonable functions, are very promising to provide inherent security features comparable to biometrical data. Device-specific, random intrinsic variations are exploited to generate unique secure identifiers. Here, we introduce a hybrid physical unclonable function, combining silicon and printed electronics technologies, based on metal oxide thin film devices. Our system exploits the inherent randomness of printed materials due to surface roughness, film morphology and the resulting electrical characteristics. The security primitive provides high intrinsic variation, is non-volatile, scalable and exhibits nearly ideal uniqueness.
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. LoRa/LoRaWAN in particular enjoys high visibility on today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for Industrial, Scientific and Medical applications. However, at the end of 2018, a LoRa-based solution in the 2.4 GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the Sub-GHz ISM bands and therefore also opens doors to many novel application fields. Also, due to higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols becomes very interesting, e.g. for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4 GHz LoRa components combining two aspects. On the one hand, we present a design and an implementation of a 2.4 GHz based LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using a Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attraction for the seamless connectivity of embedded devices for the Internet of Things (IoT). Whereas the lower layers (IEEE802.15.4 and 6LoWPAN) are already well defined and consolidated with regard to frame formats, header compression, routing protocols and commissioning procedures, there is still an abundant choice of possibilities on the application layer. Currently, various groups are working towards standardization of the application layer, i.e. the ETSI Technical Committee on M2M, the IP for Smart Objects (IPSO) Alliance, Lightweight M2M (LWM2M) protocol of the Open Mobile Alliance (OMA), and OneM2M. This multitude of approaches leaves the system developer with the agony of choice. This paper selects, presents and explains one of the promising solutions, discusses its strengths and weaknesses, and demonstrates its implementation.
During the day-to-day operation of localization systems in mines, the technical staff tends to incorrectly rearrange radio equipment: positions of devices may not be accurately marked on a map, or their marked positions may not correspond to the truth. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of the positions of the equipment on the map, using trajectories restored by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting the misplacement of access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
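A discrete Bayesian filter of the kind compared above maintains a belief (a probability per candidate position, here per graph node) and reweights it with each measurement. A minimal sketch of the update step, with hypothetical likelihood values rather than the paper's mine-specific measurement model:

```python
def bayes_update(prior, likelihood):
    """One discrete Bayesian filter step: multiply the prior belief over
    candidate positions by the measurement likelihood of each candidate,
    then renormalize so the posterior sums to one."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    if total == 0:
        raise ValueError("measurement inconsistent with all hypotheses")
    return [p / total for p in posterior]
```

Iterating this update along a trajectory concentrates the belief on the graph node that best explains the measurements, which is how a misplaced access point can be detected and relocated.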
Wireless Sensor Networks (WSN) have emerged as an interesting topic in the research community due to their manifold applications. One of the main challenges of this field is the energy consumption of the nodes, which typically is quite restricted due to the required lifetime of such WSNs. To solve this problem, several energy-saving MAC protocols have been developed. One of them, recently presented by the authors, is the so-called SmartMAC, an extension of the IEEE802.15.4 standard. In this paper, we present the implementation details of the port of the SmartMAC protocol to the discrete-event network simulator NS3. We developed this NS3 module to simulate the performance, multi-node execution, and multi-node configuration. Along with this model, we also present an energy model for the evaluation of the energy consumption. The current implementation in NS3 is based on LR-WPAN (Low-Rate Wireless Personal Area Networks) as specified by the IEEE802.15.4 (2006) standard. The simulation results show that SmartMAC, with its sleep and wake-up mechanisms for the transceivers, is significantly more efficient than the current NS3 MAC (Medium Access Control) scheme.
The Datagram Transport Layer Security (DTLS) protocol has been designed to provide end-to-end security over unreliable communication links. Where its connection establishment is concerned, DTLS copes with potential loss of protocol messages by implementing its own loss detection and retransmission scheme. However, the default scheme turns out to be suboptimal for links with high transmission error rates and low data rates, such as wireless links in electromagnetically harsh industrial environments. Therefore, in this paper, as a first step we provide an analysis of the standard DTLS handshake's performance under such adverse transmission conditions. Our studies are based on simulations that model message loss as the result of bit transmission errors. We consider several handshake variants, including endpoint authentication via pre-shared keys or certificates. As a second step, we propose and evaluate modifications to the way message loss is dealt with during the handshake, making DTLS deployable in situations which are prohibitive for default DTLS.
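The baseline the paper analyzes is the default DTLS loss-recovery scheme, whose retransmission timer starts at one second and doubles after each timeout, capped at 60 seconds. A sketch of that standard schedule (the paper's own modifications are not specified in this abstract; the attempt limit below is an illustrative parameter, as the standard leaves the give-up policy to the implementation):

```python
def dtls_retransmission_schedule(initial_s=1.0, max_s=60.0, max_attempts=7):
    """Default DTLS handshake retransmission timer: start at 1 s, double
    after every timeout, cap at 60 s. Returns the waiting time before
    each successive (re)transmission attempt until the sender gives up."""
    schedule, timeout = [], initial_s
    for _ in range(max_attempts):
        schedule.append(timeout)
        timeout = min(timeout * 2, max_s)
    return schedule
```

Under high bit-error rates and low data rates, these long back-off intervals dominate handshake latency, which is exactly the regime the paper's modified loss handling targets.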
Ultra-wideband (UWB) signals are well suited both for short-range wireless communication and for high-precision localization applications. Channel impulse response (CIR) analysis in UWB systems is a major element of localization estimation. In this paper, practical aspects of CIR are presented; in particular, a technique for the construction of the accumulated echo-gram of a multipath-delayed signal is proposed. Decawave hardware was used to demonstrate the technique for analyzing the fine structure of signals with sub-nanosecond resolution. Temporal stability, reliability, and two-way characteristics of such echo-grams are discussed as well. The results of using two EVK1000 radio modules as a radar installation to detect a target in indoor environments prove that low-cost UWB intrusion detection and through-the-wall vision systems can be developed using the proposed technique.
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this work, we investigate the performance of Bluetooth-enabled mesh networking to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the FruityMesh protocol implementation. Extensive test cases have been executed to measure the performance, reliability, power consumption, and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are considered useful for real deployments of Bluetooth mesh, not only in home and building automation but also in industrial automation.
Due to its potential in improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data driven Demand Side Management (DSM) mechanisms. The IoT enablement allows the data to be gathered and analyzed at requisite granularity. In addition to efficient use of energy resources and provisioning of power, developing countries face an additional challenge of temporal mismatch in generation capacity and load factors. This leads to widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through a combination of managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs “WiMODino” using LoRaWAN with a “Lite Gateway” and SQLite repository for data storage. Role based access to the system through an android application has also been demonstrated for monitoring and control.
IPv6 over LoRaWAN™
(2016)
Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.
In industry, the monitoring of industrial plants ensures that highly automated processes can run smoothly. Usually, the focus is on monitoring the machinery itself; the Ethernet-based communication lines for data exchange (e.g. Profinet) are currently not yet part of a continuous monitoring. Although the physical connections are checked as well, this often happens only at the time of commissioning, when the plant is not yet integrated into the overall system, or during a maintenance cycle, when the machine is taken out of operation for the duration of the maintenance. As a result, especially today, when Ethernet is increasingly used as the basis for industrial communication, machine failures due to missing cable monitoring are becoming ever more likely. To counteract this, a new measurement method was designed, implemented, and validated in the Ko2SiBus project, which can be integrated cost-effectively into new or existing systems. To demonstrate its suitability, the project results were implemented in prototypes and demonstrators that can serve both as stand-alone and as integrated solutions.
Continuous monitoring of Ethernet cables prevents machine failures in industry. However, suitable methods for carrying out this monitoring comprehensively are currently lacking. The Ko²SiBus project therefore developed a cost-effective method for the continuous monitoring of Ethernet cables.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant-bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the LTE module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare its performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of the resource scheduling method on uplink latency.
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, whereas fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low-Latency Communication (URLLC). However, it is interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement, and evaluate latency reduction techniques for latency-critical applications. The two elements we identified, namely the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay until received information is processed in both the control and data planes. We then analyze the potential of these latency reduction techniques for URLLC applications. To this end, we implement them in the Long Term Evolution (LTE) module of the ns-3 simulator and evaluate their performance in two different application fields: industrial automation and intelligent transportation systems. Our detailed simulation results indicate that LTE can satisfy the low-latency requirements for a large choice of use cases in each field.
Next-generation cellular networks are expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing, smart dustbins, etc., but there is additional demand for applications with low-latency requirements, like industrial automation, driverless cars, and so on. Improvements to 4G Long Term Evolution (LTE) networks are required on the way to next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low-Latency Communication (URLLC) networks.
The evolution of cellular networks from the first generation (1G) to the fourth generation (4G) was driven by the demand for user-centric downlink capacity, technically called Mobile Broadband (MBB). With the fifth generation (5G), Machine Type Communication (MTC) has been added to the target use cases, and the upcoming generation of cellular networks is expected to support them. However, such support requires improvements to the existing technologies in terms of latency, reliability, energy efficiency, data rate, scalability, and capacity.
Originally, MTC was designed for low-bandwidth, high-latency applications such as environmental sensing, smart dustbins, etc. Nowadays there is additional demand for applications with low-latency requirements. Besides other well-known challenges for recent cellular networks, such as data rate, energy efficiency, and reliability, the offered latency is not suitable for mission-critical applications such as real-time control of machines, autonomous driving, and the tactile Internet. Therefore, in the currently deployed cellular networks, there is a need to reduce the latency and increase the reliability offered by the networks to support use cases such as cooperative autonomous driving or factory automation, which are grouped under the denomination Ultra-Reliable Low-Latency Communication (URLLC).
This thesis is primarily concerned with the latency in the Universal Terrestrial Radio Access Network (UTRAN) of cellular networks. The overall work is divided into five parts. The first part presents the state of the art for cellular networks. The second part contains a detailed overview of URLLC use cases and the requirements that must be fulfilled by cellular networks to support them. The work in this thesis was done as part of a collaboration project between the IRIMAS lab at Université de Haute-Alsace, France and the Institute for Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany. The selected URLLC use cases are part of the research interests of both partner institutes. The third part presents a detailed study and evaluation of user-plane and control-plane latency mechanisms in the current generation of cellular networks. The evaluation and analysis of these latencies, performed with the open-source ns-3 simulator, were conducted by exploring a broad range of parameters including, among others, traffic models, channel access parameters, realistic propagation models, and a broad set of cellular network protocol stack parameters. These simulations were performed with low-power, low-cost, and wide-range devices, commonly called IoT devices, standardized for cellular networks. These devices use either LTE-M or Narrowband IoT (NB-IoT) technologies, which are designed for connected things. They differ mainly in the provided bandwidth and in additional characteristics such as coding scheme, device complexity, and so on.
The fourth part of this thesis presents a study, an implementation, and an evaluation of latency reduction techniques that target the different layers of the currently used Long Term Evolution (LTE) network protocol stack. These techniques, based on Transmission Time Interval (TTI) reduction and Semi-Persistent Scheduling (SPS), are implemented in the ns-3 simulator and evaluated through realistic simulations of a variety of low-latency use cases focused on industrial automation and vehicular networking. Since ns-3 does not support NB-IoT in its current release, an NB-IoT extension of the LTE module was developed for testing the proposed latency reduction techniques. This makes it possible to explore deployment limitations and issues.
In the last part of this thesis, a flexible deployment framework for the proposed latency reduction techniques, called Hybrid Scheduling and Flexible TTI, is presented, implemented, and evaluated through realistic simulations. The simulation results show that the improved LTE network proposed and implemented in the simulator can support low-latency applications with low-cost, long-range, and narrow-bandwidth devices. The work in this thesis points out the potential improvement techniques and their deployment issues, and paves the way towards support for URLLC applications in upcoming cellular networks.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles, and the tactile Internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular technology currently optimized for massive Machine Type Communication (mMTC). To reduce latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
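The intuition behind SPS and sTTI can be sketched with a toy uplink latency budget. All numbers below are assumptions chosen for illustration (a 10 ms scheduling-request period, a 4 ms grant delay, 3 ms of processing), not measurements from these papers: SPS removes the request/grant exchange, and a shorter TTI shrinks both the alignment wait and the transmission itself.

```python
# Toy uplink user-plane latency model. All millisecond values are
# illustrative assumptions, not results from the abstracts above.
def uplink_latency_ms(tti_ms, scheduling="dynamic",
                      sr_period_ms=10, grant_delay_ms=4, proc_ms=3):
    """Sum of (average) alignment wait, grant exchange, transmission,
    and processing delay for one uplink packet."""
    if scheduling == "dynamic":
        wait = sr_period_ms / 2     # wait for the next scheduling-request slot
        grant = grant_delay_ms      # request/grant round trip with the eNB
    else:                           # semi-persistent: resources pre-allocated
        wait = tti_ms / 2           # only align to the next reserved TTI
        grant = 0
    return wait + grant + tti_ms + proc_ms

for tti in (1.0, 0.5):              # legacy TTI vs. a shortened TTI
    for sched in ("dynamic", "sps"):
        print(f"TTI={tti} ms, {sched}: {uplink_latency_ms(tti, sched)} ms")
```

Even in this crude model, SPS removes a fixed grant-exchange cost while sTTI scales down the per-transmission cost, which is why the two techniques are complementary.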
Legacy industrial communication protocols have proved robust and functional. During the last decades, the industry has invented completely new or advanced versions of the legacy communication solutions. However, even with the high adoption rate of these new solutions, the majority of industrial applications still run on legacy, mostly fieldbus-related technologies. Profibus is one of those technologies that keeps growing in the market, albeit with slow market growth in recent years. A retrofit technology is fundamental that enables these technologies to connect to the Internet of Things and to utilize the ever-growing potential of data analysis, predictive maintenance, or cloud-based applications, while at the same time not changing a running system.
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low-Latency Communication (URLLC) applications with cellular networks, especially when safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point to migrate to Narrowband IoT (NB-IoT) or 5G solutions is given.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to supporting the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, and unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
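The core mechanism behind such a blockchain-backed birth certificate can be sketched as a hash-linked record chain. This is a minimal illustration, not the authors' implementation: the record fields and helper names are invented here, and a real deployment would anchor these records in an actual blockchain rather than a Python list.

```python
import hashlib
import json

def _digest(record):
    """Deterministic SHA-256 digest of a record's fields."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain, model_bytes, note):
    """Append a hash-linked entry describing a training or retraining event."""
    record = {
        "prev": chain[-1]["hash"] if chain else "0" * 64,
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "note": note,
    }
    record["hash"] = _digest(record)
    chain.append(record)
    return chain

def verify(chain):
    """Check that no entry was modified and that the links are intact."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or _digest(body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, b"weights-v1", "initial training (birth certificate)")
append_record(chain, b"weights-v2", "retraining on new data (family tree)")
print(verify(chain))  # True
```

Because each entry fixes both the model's fingerprint and the previous entry's hash, any unnoticed modification of a model or of its history breaks verification, which is exactly the identity and traceability property the abstract argues for.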
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bugfixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data allows not only to assess and improve implementations of TLS but also to identify underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area, and can help reveal implementation flaws and avoid regression. While it is presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful in other contexts as well. Source code and data are made available for fellow researchers in order to stimulate discussions and invite others to benefit from and advance our work.
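The idea of differential testing with a per-implementation suspiciousness score can be sketched in a few lines. This is a heavily simplified stand-in, not the paper's algorithm: the implementations here are hypothetical local functions rather than networked TLS servers, the responses are abstract strings rather than protocol messages, and suspiciousness is reduced to the fraction of stimuli on which an implementation deviates from the majority response.

```python
from collections import Counter

# Hypothetical stand-ins for black-box TLS servers: each maps a stimulus
# to an abstract response (think: an alert code or handshake outcome).
# A real harness would drive actual server implementations over the network.
impls = {
    "srv_a": lambda msg: "alert" if b"\x00" in msg else "ok",
    "srv_b": lambda msg: "alert" if b"\x00" in msg else "ok",
    "srv_c": lambda msg: "ok",   # deviant: never rejects malformed input
}

def suspiciousness(stimuli):
    """Fraction of stimuli on which an implementation deviates from the
    majority response -- a toy version of a per-implementation metric."""
    deviations = Counter()
    for msg in stimuli:
        responses = {name: fn(msg) for name, fn in impls.items()}
        majority, _ = Counter(responses.values()).most_common(1)[0]
        for name, resp in responses.items():
            if resp != majority:
                deviations[name] += 1
    return {name: deviations[name] / len(stimuli) for name in impls}

print(suspiciousness([b"\x00\x01", b"hello", b"\x00"]))
```

Any stimulus on which the implementations disagree is a behavioral discrepancy worth investigating; an implementation that disagrees with the majority unusually often accumulates a high score, mirroring the intuition that anomalous behavior correlates with bugs or underspecified corner cases.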