This article deals with the problem of wireless synchronization between onboard computing devices of small-sized unmanned aerial vehicles (SUAV) equipped with integrated wireless chips (IWC). Accurate synchronization between several devices requires precise timestamping of packet transmission and reception on each of them. The best precision is achieved by solutions where timestamping is performed at the PHY level, right after modulation/demodulation of the packet. Most currently produced IWCs are Systems-on-a-Chip (SoC) that include both PHY and MAC, implemented with one or several application processor cores. SoCs allow creating more cost- and energy-efficient wireless devices. At the same time, they limit the developers' direct access to internal signals and significantly complicate the precise timestamping of sent and received packets required for mutual synchronization of industrial devices. Some modern IEEE 802.11 IWCs have built-in functions that use the internal chip clock to register timestamps. However, the high jitter of the interfaces between the external device and the IWC degrades the comparison of timestamps from the internal clock with those registered by external devices. To solve this problem, the article proposes a novel approach to synchronization based on the analysis of the IWC receiver input potential. The benefit of this approach is that there is no need to demodulate and decode the received packets, thus allowing its implementation with low-cost IWCs. In this article, the Cypress CYW43438 was taken as an example for designing hardware and software solutions for synchronization between two SUAV onboard computing devices equipped with IWCs. The results of the performed experimental studies reveal that the mutual synchronization error of the proposed method does not exceed 10 μs.
The IEEE 1588 precision time protocol (PTP) is a time synchronization protocol with sub-microsecond precision primarily designed for wired networks. In this letter, we propose the wireless precision time protocol (WPTP) as an extension to PTP for multi-hop wireless networks. WPTP significantly reduces the convergence time and the number of packets required for synchronization without compromising synchronization accuracy.
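As background, the standard PTP delay-request/response exchange that WPTP extends estimates the slave's clock offset and the path delay from four timestamps. A minimal sketch of that textbook computation (illustrative only; this is not the WPTP protocol itself):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """Estimate offset and one-way delay from a PTP two-way exchange.

    t1: master sends Sync (master clock)
    t2: slave receives Sync (slave clock)
    t3: slave sends Delay_Req (slave clock)
    t4: master receives Delay_Req (master clock)
    Assumes a symmetric path delay, as standard PTP does.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation delay
    return offset, delay
```

With a true offset of 5 and delay of 2, the exchange timestamps (0, 7, 8, 5) recover exactly those values.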
The diversity of protocols, which must be considered at practically all levels of network communication, represents one of the great challenges in the ongoing automation of the smart home. Under the umbrella term Internet of Things, numerous new developments, standards, alliances, and so-called ecosystems are currently emerging. These aim at the horizontal integration of cross-domain applications, and almost all of them pursue the goal of simplifying the situation, accelerating development, and achieving market success. Unfortunately, this diversity currently makes the world rather more complex and thus carries the risk of achieving exactly the opposite of the original intentions. This contribution attempts to categorize the developments as systematically as possible and to describe possible solution approaches.
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows modifying the behavior of the virtual channels as well as the overall topology during runtime to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and the difficulty of creating traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes to create an ML Birth Certificate and ML Family Tree secured by blockchain technology. Important information about training and changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
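The tamper-evidence of such a certificate rests on chaining each training or retraining record to the hash of its predecessor. A minimal hash-chain sketch with illustrative field names (the paper's actual blockchain design is richer than this):

```python
import hashlib
import json

def make_record(prev_hash, model_bytes, metadata):
    """Append one training/retraining event to a hash-chained log.

    Each record binds the model weights (via their SHA-256 digest)
    and event metadata to the previous record's hash, so any later
    modification of an earlier entry breaks the chain verifiably.
    Field names are illustrative, not taken from the paper.
    """
    body = {
        "prev": prev_hash,
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "meta": metadata,
    }
    # Canonical serialization so the digest is deterministic.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {"hash": digest, **body}
```

A "birth certificate" would be the genesis record, e.g. `make_record("0" * 64, weights, {"event": "initial training"})`, with every retraining appended afterwards.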
The invention relates to a method for maximizing the entropy derived from an analog entropy source, the method comprising the following steps: providing input data to the analog entropy source (2); generating return values by the analog entropy source based on the input data (3); and grouping the return values, wherein grouping the return values comprises applying offsets to the return values (4).
The application of leaky feeder (radiating) cables is a common solution for the implementation of reliable radio communication in large industrial buildings, tunnels, and mining environments. This paper explores the possibilities of leaky feeders for 1D and 2D localization in wireless systems based on time-of-flight chirp spread spectrum technologies. The main focus of this paper is to present and analyse the results of time-of-flight and received signal strength measurements with leaky feeders in indoor and outdoor conditions. The authors carried out experiments to compare ranging accuracy and radio coverage area for a point-like monopole antenna and for a leaky feeder acting as a distributed antenna. In all experiments, RealTrac equipment based on the nanoLOC radio standard was used. The most probable path of a chirp signal going through a leaky feeder was estimated using a ray tracing approach. Typical non-line-of-sight error profiles are presented. The results show the possibility of using radiating cables in real-time location technologies based on the time-of-flight method.
In this work, we consider a duty-cycled wireless sensor network with the assumption that the on/off schedules are uncoordinated. In such networks, as not all nodes may be awake during the transmission of time synchronization messages, nodes need to re-transmit the synchronization messages. Ideally, a node should re-transmit for the maximum sleep duration to ensure that all nodes are synchronized. However, such an approach would immensely increase the energy consumption of the nodes. This situation demands an upper bound on the number of retransmissions. We refer to the time a node spends re-transmitting the control message as the broadcast duration. We ask the question: what should the broadcast duration be to ensure that a certain percentage of the available nodes is synchronized? The problem of estimating the broadcast duration is formulated so as to capture the probability threshold of the nodes being synchronized. Results show that the proposed analytical model can predict the broadcast duration within a given error margin under real-world conditions, thus demonstrating the efficiency of our solution.
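The trade-off between broadcast duration and synchronization probability can be illustrated with a deliberately simplified model, assuming (unlike the paper's full analysis) that each node wakes at a uniformly random phase of its sleep period, so a broadcast of duration d is heard with probability min(1, d/T):

```python
from math import comb

def prob_at_least(n, m, p):
    """P(at least m of n independent nodes are synchronized when each
    node is reached with probability p): a binomial tail sum."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def min_broadcast_duration(sleep_period, n, m, threshold, step=1e-3):
    """Smallest broadcast duration d (same unit as sleep_period) such
    that P(>= m of n nodes hear the broadcast) >= threshold, under the
    uniform-wakeup assumption p = min(1, d / sleep_period).

    A linear search sketch, not the paper's analytical model.
    """
    d = 0.0
    while d <= sleep_period:
        p = min(1.0, d / sleep_period)
        if prob_at_least(n, m, p) >= threshold:
            return d
        d += step
    return sleep_period  # broadcasting a full period reaches everyone
```

For a single node and a 90% target, the model unsurprisingly yields a broadcast duration of 90% of the sleep period; the binomial tail shows how the requirement relaxes when only a fraction of many nodes must be reached.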
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While this approach is applied to blockchain here, it can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
Deep learning approaches are becoming increasingly important for the estimation of the Remaining Useful Life (RUL) of mechanical elements such as bearings. This paper proposes and evaluates a novel transfer learning-based approach for RUL estimation of different bearing types with small datasets and low sampling rates. The approach is based on an intermediate domain that abstracts features of the bearings based on their fault frequencies. The features are processed by convolutional layers. Finally, the RUL estimation is performed using a Long Short-Term Memory (LSTM) network. The transfer learning relies on fixed-feature extraction. This novel deep learning approach successfully uses data of a low-frequency range, which is a precondition for using low-cost sensors. It is validated against the IEEE PHM 2012 Data Challenge, where it outperforms the winning approach. The results show its suitability for low-frequency sensor data and for efficient and effective transfer learning between different bearing types.
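The fault frequencies around which such an intermediate domain is built are the classic kinematic frequencies of a rolling-element bearing, which follow directly from its geometry. A sketch of these textbook formulas (the helper function and geometry values are illustrative, not prescribed by the paper):

```python
from math import cos, radians

def bearing_fault_frequencies(shaft_hz, n_balls, ball_d, pitch_d,
                              contact_deg=0.0):
    """Classic kinematic fault frequencies of a rolling-element bearing.

    shaft_hz: shaft rotation frequency in Hz
    n_balls: number of rolling elements
    ball_d, pitch_d: ball diameter and pitch diameter (same unit)
    contact_deg: contact angle in degrees
    """
    r = (ball_d / pitch_d) * cos(radians(contact_deg))
    bpfo = (n_balls / 2.0) * shaft_hz * (1.0 - r)        # outer-race defect
    bpfi = (n_balls / 2.0) * shaft_hz * (1.0 + r)        # inner-race defect
    bsf = (pitch_d / (2.0 * ball_d)) * shaft_hz * (1.0 - r * r)  # ball spin
    ftf = (shaft_hz / 2.0) * (1.0 - r)                   # cage frequency
    return {"BPFO": bpfo, "BPFI": bpfi, "BSF": bsf, "FTF": ftf}
```

Because these frequencies scale with geometry and shaft speed, features centered on them abstract away from the concrete bearing type, which is what makes the transfer between bearing types plausible.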
The last decades have seen the evolution of industrial production into more sophisticated processes. The development of specialized, high-end machines has increased the importance of predictive maintenance of mechanical systems to produce high-quality goods and avoid machine breakdowns. Predictive maintenance has two main objectives: to classify the current status of a machine component and to predict the maintenance interval by estimating its remaining useful life (RUL). Nowadays, both objectives are covered by machine learning and deep learning approaches, which require large training datasets that are often not available. One possible solution may be transfer learning, where the knowledge of a larger dataset is transferred to a smaller one. This thesis is primarily concerned with transfer learning for predictive maintenance, covering fault classification and RUL estimation. The first part presents the state-of-the-art machine learning techniques with a focus on techniques applicable to predictive maintenance tasks (Chapter 2). This is followed by a presentation of the machine tool background and current research that applies the previously explained machine learning techniques to predictive maintenance tasks (Chapter 3). One novelty of this thesis is that it introduces a new intermediate domain that represents data by focusing on the relevant information, allowing the data to be used across different domains without losing that information (Chapter 4). The proposed solution is optimized for rotating elements. Therefore, the presented intermediate domain creates different layers by focusing on the fault frequencies of the rotating elements. Another novelty of this thesis is its semi- and unsupervised transfer learning-based fault classification approach for different component types under different process conditions (Chapter 5). It is based on the intermediate domain utilized by a convolutional neural network (CNN).
In addition, a novel unsupervised transfer learning loss function is presented based on the maximum mean discrepancy (MMD), one of the state-of-the-art algorithms. It extends the MMD by considering the intermediate domain layers; therefore, it is called layered maximum mean discrepancy (LMMD). Another novelty is an RUL estimation transfer learning approach for different component types based on the data of accelerometers with low sampling rates (Chapter 6). It applies the feature extraction concepts of the classification approach: the presented intermediate domain and the convolutional layers. The features are then used as input for a long short-term memory (LSTM) network. The transfer learning is based on fixed feature extraction, where the trained convolutional layers are taken over. Only the LSTM network has to be trained again. The intermediate domain supports this transfer learning type, as it should be similar for different component types. In addition, it enables the practical usage of accelerometers with low sampling rates during transfer learning, which is an absolute novelty. All presented novelties are validated in detailed case studies using the example of bearings (Chapter 7). In doing so, their superiority over state-of-the-art approaches is demonstrated.
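As background, the plain MMD building block that LMMD extends can be sketched as follows (a biased estimator with an RBF kernel; the layered extension over intermediate-domain layers is the thesis's contribution and is not reproduced here):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between sample sets X and Y
    (rows are samples) under an RBF kernel, using the biased
    V-statistic estimator: MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    """
    def k(A, B):
        # Pairwise squared distances via broadcasting, then RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

A layered variant in the spirit of LMMD would evaluate this quantity per intermediate-domain layer and combine the per-layer terms, so that discrepancies at different fault-frequency layers contribute separately to the loss.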
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, the paper presents a modeling of a cyclic communication protocol using a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we test the protocol using the model checking verification technique.
The CAN bus is still an important fieldbus in various domains, e.g. for in-car communication or automation applications. To counter security threats and concerns in such scenarios, we design, implement, and evaluate an end-to-end security concept based on the Transport Layer Security (TLS) protocol. It is used to establish authenticated, integrity-checked, and confidential communication channels between field devices connected via CAN. Our performance measurements show that it is possible to use TLS at least for non-time-critical applications, as well as for generic embedded networks.
Wireless communication systems are more and more becoming part of our daily lives. Especially with the Internet of Things (IoT), overall connectivity increases rapidly, since everyday objects become part of the global network. For this purpose, several new wireless protocols have arisen, of which 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) can be seen as one of the most important within this sector. Originally designed on top of the IEEE 802.15.4 standard, it is subject to various adaptations that allow using 6LoWPAN over different technologies, e.g. DECT Ultra Low Energy (ULE). Although this high connectivity offers a lot of new possibilities, several requirements and pitfalls come along with such new systems. With an increasing number of connected devices, interoperability between different providers is one of the biggest challenges, which makes it necessary to verify the functionality and stability of the devices and the network. Therefore, testing becomes one of the key components that decides on the success or failure of such a system. Although several protocol implementations are commonly available, e.g. for IoT-based systems, there is still a lack of corresponding tools and environments for functional and conformance testing. This article describes the architecture and functioning of the proposed test framework, based on Testing and Test Control Notation Version 3 (TTCN-3), for 6LoWPAN over ULE networks.
Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny, embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train. A poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead for sorting the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, which is a typical use case for embedded applications. For three typical datasets (MNIST, DCASE2020, and CIFAR10), TinyProp is five times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
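The core of any such sparse update is applying only the top fraction of gradient entries by magnitude. A sketch of that selection step (the dynamic per-step adaptation of the ratio, which is TinyProp's actual contribution, is not reproduced; the learning rate is illustrative):

```python
import numpy as np

def sparse_grad_update(w, grad, ratio, lr=0.1):
    """Apply one sparse gradient-descent step: only the top `ratio`
    fraction of gradient entries (by absolute value) update their
    weights; all other weights are left untouched.

    `lr` is an illustrative learning rate, not a value from the paper.
    """
    k = max(1, int(ratio * grad.size))
    # Indices of the k largest-magnitude gradient entries.
    idx = np.argpartition(np.abs(grad).ravel(), -k)[-k:]
    mask = np.zeros(grad.size, dtype=bool)
    mask[idx] = True
    w_new = w.copy().ravel()
    w_new[mask] -= lr * grad.ravel()[mask]
    return w_new.reshape(w.shape)
```

The `argpartition` call is the "sorting overhead" mentioned in the abstract: selecting the top-k entries costs O(n) per step, which is small next to the savings from skipping most weight updates.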
Temperature regulation is an important component of modern high-performance single-core and multi-core processors. Especially high operating frequencies and architectures with an increasing number of monolithically integrated transistors result in high power dissipation and, since processor chips convert the consumed electrical energy into thermal energy, in high operating temperatures. High operating temperatures of processors can have drastic consequences regarding chip reliability, processor performance, and leakage currents. External components like fans or heat spreaders can help to reduce the processor temperature, with the disadvantage of additional costs and reduced reliability. Therefore, software-based algorithms for dynamic temperature management are an attractive alternative, well known as Dynamic Thermal Management (DTM). However, the existing approaches for DTM do not take into account the requirements of real-time embedded computing, which is the objective of the given project. The first steps are the profiling and the thermal modeling of the system, which is reported in this paper for a Freescale i.MX6Q quad-core microprocessor. An analytical model is developed and verified by an extensive set of measurement runs.
The Internet of Things (IoT), ubiquitous computing and ubiquitous connectivity, Cyber Physical Systems (CPS), ambient intelligence, Machine-to-Machine communication (M2M) or Car-to-Car (C2C)-communication, smart metering, smart grid, telematics, telecare, telehealth – there are many buzzwords around current developments related to the Internet.
This contribution gives an overview of such IoT applications, as they are already used today to improve the availability of information, increase efficiency, push system limits, and extend the value chain. At a closer look, the economic and technical development can be separated into different phases. Interestingly, we are currently at the threshold of a new phase, with decentralized and cooperative communication and control nodes as cornerstones. Thus, embedded systems and their connectivity are at the center of the scene.
This recent development is described along with some example projects from the author’s team which are used in industrial automation, energy supply and distribution (home automation and smart metering), traffic engineering (cooperative driver assistance systems), and in telehealth and telecare.
Taschenbuch Digitaltechnik
(2022)
Digital technology increasingly shapes our living environment. By representing all quantities exclusively through the discrete values 0 and 1, it offers an ideal basis for the storage, processing, and transmission of information as well as for the mass production of low-cost, high-performance circuits.
Digital technology, a complex and very broad field of knowledge, has its roots in mathematics, specifically Boolean algebra. It became technically usable to the extent known today through the introduction of integrated microelectronic circuits, so a complete presentation must include both aspects. Many application areas of digital technology, such as digital signal processing or digital communications, have meanwhile become so independent that comprehensive overviews are hard to find. This fragmented presentation, however, generally impedes access to a highly complex field such as digital technology.
The Taschenbuch Digitaltechnik eases this access and provides information in a compact and at the same time interdisciplinary form. It is aimed at students of universities and universities of applied sciences, at teachers and students of vocational and technical schools, at practicing engineers and technicians, and at anyone who needs a compact reference work on digital technology.
For the fourth edition, the book has been comprehensively updated and extended with new hardware architectures.
Spatially Distributed Wireless Networks (SDWN) are one of the basic technologies for Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. For many of these applications, SDWNs have strict requirements such as low cost, simple installation and operation, and high flexibility and mobility. Among the different Narrowband Wireless Wide Area Networking (NBWWAN) technologies introduced to address these categories of wireless networking requirements, Narrowband Internet of Things (NB-IoT) is gaining traction due to its attractive system parameters, its energy-saving mode of operation with low data rates and bandwidth, and its applicability in 5G use cases. Since several technologies are available, and because the underlying use cases come with various requirements, it is essential to perform a systematic comparative analysis of competing technologies to choose the right one. It is also important to perform testing during the different phases of the system development life cycle. This paper describes a systematic test environment for automated testing of radio communication and systematic measurements of the performance of NB-IoT.
Covert channels have been known for a long time because of their versatile forms of appearance. For nearly every technical improvement or change in technology, such channels have been (re-)created or known methods have been adapted. For example, the introduction of hyperthreading technology has introduced new possibilities for covert communication between malicious processes, because they can now share the arithmetic logic unit as well as the L1 and L2 caches, which enables establishing multiple covert channels. Even virtualization, which is known for its isolation of multiple machines, is prone to covert- and side-channel attacks because of the sharing of resources. Therefore, it is not surprising that cloud computing is not immune to this kind of attack. Moreover, cloud computing with multiple, possibly competing users or customers using the same shared resources may elevate the risk of illegitimate communication. In such a setting, the "air gap" between physical servers and networks disappears, and only the means of isolation and virtual separation serve as a barrier between adversary and victim. In the work at hand, we provide a survey of vulnerable spots that an adversary could exploit when trying to exfiltrate private data from target virtual machines through covert channels in a cloud environment. We evaluate the feasibility of example attacks and point out proposed mitigation solutions where they exist.
The WirelessHART protocol was specifically designed for real-time communication in wireless sensor networks for industrial process automation requirements. Whereas the major purpose of WirelessHART is the read-out of sensors with moderate real-time requirements, an increasing demand for the integration of actuator applications can be observed. Therefore, it must be verified that the WirelessHART protocol gives sufficient support to real-time industry requirements. As a result, the delay of especially burst and command messages from actuator and sensor nodes to the gateway and vice versa must be analyzed. In this paper, we implemented a WirelessHART network scenario in the WirelessHART simulator in NS-2 [8], simulated it, and analyzed its time characteristics under ideal and noisy conditions. We evaluated the performance of the implementation in order to verify whether the requirements of industrial process control can be met. This implementation offers an early alternative to expensive test beds for WirelessHART in real-time actuator applications.
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large-areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor alongside in-built additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits and a fluorescent particle-based coating as a tamper protection layer.
In recent years, both the Internet of Things (IoT) and blockchain technologies have been highly influential and revolutionary. IoT enables companies to embrace Industry 4.0, the Fourth Industrial Revolution, which benefits from communication and connectivity to reduce cost and to increase productivity through sensor-based autonomy. These automated systems can be further refined with smart contracts that are executed within a blockchain, thereby increasing transparency through continuous and indisputable logging. Ideally, the level of security for these IoT devices shall be very high, as they are specifically designed for this autonomous and networked environment. This paper discusses a use case of a company with legacy devices that wants to benefit from the features and functionality of blockchain technology. In particular, the implications of retrofit solutions are analyzed. The use of the BISS:4.0 platform is proposed as the underlying infrastructure. BISS:4.0 is intended to integrate blockchain technologies into existing enterprise environments. Furthermore, a security analysis of IoT and blockchain is given, in which present attacks and countermeasures are identified and applied to the mentioned use case.
One of the most important questions about smart metering systems for end users is data privacy and security. Indeed, smart metering systems provide a lot of advantages for distribution system operators (DSO), but the functionalities offered to users of existing smart meters are still limited, and society is becoming increasingly critical. Smart metering systems are accused of interfering with personal rights and privacy, and of providing unclear tariff regulations that do not sufficiently encourage households to manage their electricity consumption in advance. In the specific field of smart grids, data security appears to be a necessary condition for consumer confidence, without which consumers will not be able to give their consent to the collection and use of personal data concerning them.
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector. Here, autonomous cars can massively help to reduce delivery effort and time by actively supporting the courier. One important component, of course, is the autonomous vehicle itself. Nevertheless, besides the autonomous vehicle, a flexible and secure communication architecture is also a crucial key component impacting the overall performance of such a system, since it is required to allow continuous interactions between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between different system entities while keeping those communications secure. The architecture uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN) to achieve its goals.
Schlussbericht VanAssist
(2021)
Industrial companies can use blockchain to assist them in resolving their trust and security issues. In this research, we provide a fully distributed blockchain-based architecture for the industrial IoT, relying on trust management and reputation to enhance nodes' trustworthiness. The purpose of this contribution is to introduce our system architecture and to show how to secure network access for users with dynamic authorization management. All decisions in the system are made by the consensus of trustful nodes and are fully distributed. The remarkable feature of this system architecture is that the influence of the nodes' power is lowered depending on their Proof of Work (PoW) and Proof of Stake (PoS), and the nodes' significance and authority are determined by their behavior in the network.
This impact is based on game theory and an incentive mechanism for reputation between nodes. This system design can be used on legacy machines, which means that security and distributed systems can be put in place at low cost on industrial systems. While there are no numerical results yet, this work, based on the open questions regarding the majority problem and the proposed solutions based on a game-theoretic mechanism and a trust management system, points to how the industrial IoT and existing blockchain frameworks that focus only on the power of PoW and PoS can be secured more effectively.
With many advances in sensor technology and the Internet of Things, the Vehicle Ad Hoc Network (VANET) is entering a new generation. VANET's current technical challenges are deploying a decentralized architecture and protecting privacy. Because blockchains are decentralized and distributed and offer mass storage and tamper resistance, this paper designs a new decentralized architecture using blockchain technology, called blockchain-based VANET. Blockchain-based VANET can effectively resolve centralization problems and mutual distrust between VANET units. To achieve this, the blockchain needs to be made scalable enough to run for VANET. In this system, our focus is on the reliability of incoming messages on the network. Vehicles check the validity of received messages using the proposed Bayesian formula for the trust management system and information saved in the blockchain. Then, based on the validation result, the vehicle computes a rating for each message type and message source vehicle. Vehicles upload the computed ratings to Roadside Units (RSUs) in order to calculate the net reliability value. Finally, the RSUs generate blocks using a sharding consensus mechanism, including the net reliability value as a transaction. In this system, all RSUs collaboratively maintain the latest updated blockchain. Our experimental results show that the proposed system is effective, scalable, and dependable in the gathering, computing, organization, and retrieval of trust values in VANET.
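A common Bayesian trust formula for rating message sources of this kind, sketched here for illustration (the paper's exact rating scheme may differ), is the Beta-reputation estimate: the expected probability that the next message from a vehicle is valid, given its history of valid and invalid messages:

```python
def beta_trust(valid, invalid, prior_a=1.0, prior_b=1.0):
    """Beta-reputation trust value: the posterior mean of a Beta
    distribution after observing `valid` good and `invalid` bad
    messages, starting from a Beta(prior_a, prior_b) prior.

    With the default uniform prior, an unknown vehicle starts at 0.5
    and the estimate converges to the observed validity fraction.
    """
    return (valid + prior_a) / (valid + invalid + prior_a + prior_b)
```

An RSU aggregating such per-vehicle ratings into a net reliability value could, for instance, average the posterior means reported by different observers before writing the result into a block.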
This work discusses several use cases of post-mortem mobile device tracking in which privacy is required, e.g. due to client-confidentiality agreements and the sensitivity of data from government agencies as well as mobile telecommunication providers. We argue that our proposed Bloom-filter-based privacy approach is a valuable technical building block for the arising General Data Protection Regulation (GDPR) requirements in this area. In short, we apply a solution based on the Bloom filter data structure that allows a third party to perform privacy-preserving set relations on a mobile telco's access logfile, or another mobile access logfile from harvesting parties, without revealing any other mobile users in the proximity of a mobile base station, while still allowing perpetrators to be tracked.
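The privacy property rests on the Bloom filter's one-way membership encoding: a third party can test whether a known identifier appears in the encoded log, but cannot enumerate the other entries. A minimal sketch (filter parameters and the identifier strings are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership with a small false-positive
    rate and no false negatives. A log encoded this way supports
    membership queries without the raw log being revealed."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

An investigator holding a suspect's identifier can check it against the encoded logfile; bystanders' identifiers remain hidden, at the cost of a tunable false-positive probability that depends on the filter size and the number of entries.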
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low Latency Communications (URLLC) has always been a vital component of many industrial applications. The paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications, where low latency, low jitter, and high data exchange rates are mandatory. In the paper, we describe the communication protocol as well as the hardware structure of the network nodes that implement the required functionality. Several techniques enabling fast, reliable wireless transmissions are used: a short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC), and an ACK mechanism. Preliminary tests show that a reliable end-to-end latency down to 350 μs and a packet exchange rate up to 4 kHz can be reached (using quadruple MIMO and the standard IEEE 802.15.4 PHY at 250 kbit/s).
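As a rough sanity check of the quoted PHY figures, the airtime of a frame on the standard IEEE 802.15.4 PHY at 250 kbit/s can be computed directly; the 11-byte per-frame overhead below (synchronization header plus PHY/MAC framing and CRC) is an assumed value for illustration, not a RETIS parameter.

```python
# Sketch: frame airtime on the standard IEEE 802.15.4 PHY at 250 kbit/s
# (the rate quoted in the abstract). The 11-byte overhead is an assumption.

PHY_RATE_BPS = 250_000

def frame_airtime_us(payload_bytes, overhead_bytes=11):
    """Time on air for one frame, in microseconds."""
    return (payload_bytes + overhead_bytes) * 8 * 1e6 / PHY_RATE_BPS

def max_single_link_rate_hz(payload_bytes):
    """Upper bound on the exchange rate of a single sequential link for this
    frame size (illustrative; the actual RETIS framing differs)."""
    return 1e6 / frame_airtime_us(payload_bytes)

airtime = frame_airtime_us(16)  # 16-byte payload -> 864 us on air
```

Budgets like this make clear why short TTIs and parallel MIMO streams are needed to reach multi-kHz exchange rates on a 250 kbit/s PHY.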
With the increasing degree of interconnectivity in industrial factories, security is more and more becoming the most important stepping stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote given at the DESSERT 2020 conference. It highlights ongoing and open research activities on different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures for the full lifetime of devices and systems. It also includes an overview of the research activities at the authors' institute.
Environmental monitoring is an attractive application field for Wireless Sensor Networks (WSN). Water level monitoring helps to increase the efficiency of water distribution and management. In Pakistan, the world's largest irrigation system covers 90,000 km of channels, which need to be monitored and managed on different levels. Especially the sensor systems for the small distribution channels need to be low-energy and low-cost. This contribution presents a technical solution for a communication system developed in a research project co-funded by the German Academic Exchange Service (DAAD). The communication module is based on IEEE 802.15.4 transceivers, which are enhanced with Wake-On-Radio (WOR) to combine low-energy and real-time behavior. On higher layers, IPv6 (6LoWPAN) and corresponding routing protocols like the Routing Protocol for Low-power and Lossy Networks (RPL) can extend the range of the network. The data are stored in a database and can be viewed online via a web interface; automatic data analysis can be performed as well.
Remote code attestation protocols are an essential building block for offering reasonable system security for wireless embedded devices. In the work at hand, we investigate in detail the trustworthiness of a purely software-based remote code attestation inference mechanism over the wireless channel when running, e.g., the prominent protocol derivate SoftWare-based ATTestation for Embedded Devices (SWATT). Besides disclosing pitfalls of this protocol class, we also point out good parameter choices which allow at least a meaningful plausibility check with a balanced false positive and false negative ratio.
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2, and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of the Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
In the area of cloud computing, judging the fulfillment of service-level agreements on a technical level is gaining more and more importance. To support this, we introduce privacy-preserving set relations such as inclusiveness and disjointness based on Bloom filters. We propose to compose the filters in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content as well as how to provide privacy regarding the sets' cardinality. Indeed, our solution adds another layer of privacy on the set sizes. We are in particular interested in how the overlapping bits of a Bloom filter impact the privacy level of our approach. We concretely apply our solution to a use case of cloud security auditing on access control and present our results with real-world parameters.
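To illustrate the idea, here is a minimal sketch of a Bloom filter whose index functions are derived from a keyed hash (HMAC-SHA256), so that only holders of the shared key can compute bit positions. The parameters and the relation tests are illustrative, not the paper's construction.

```python
import hashlib
import hmac

class KeyedBloomFilter:
    """Bloom filter with HMAC-derived index functions (illustrative sketch)."""

    def __init__(self, key: bytes, m: int = 256, k: int = 4):
        self.key, self.m, self.k = key, m, k
        self.bits = [0] * m

    def _positions(self, item: str):
        # k positions per item, each from a keyed hash over (index, item)
        for i in range(self.k):
            digest = hmac.new(self.key, f"{i}|{item}".encode(),
                              hashlib.sha256).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def contains(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

    def is_subset_of(self, other: "KeyedBloomFilter") -> bool:
        # inclusiveness: every set bit of self is also set in other
        return all(b <= o for b, o in zip(self.bits, other.bits))

    def appears_disjoint_from(self, other: "KeyedBloomFilter") -> bool:
        # disjointness test; overlapping bits can cause false overlap reports
        return not any(b and o for b, o in zip(self.bits, other.bits))
```

Note that both relation tests are probabilistic: overlapping bits from unrelated items can produce false positives, which is exactly the effect the paper analyzes for its privacy level.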
In the work at hand, we combine a Private Information Retrieval (PIR) protocol with Somewhat Homomorphic Encryption (SHE) and use Searchable Encryption (SE) with the objective of providing security and confidentiality features for a third-party cloud security audit. During the auditing process, a third-party auditor acts on behalf of a cloud service user to validate the security requirements fulfilled by a cloud service provider. Our concrete contribution consists of developing a PIR protocol that operates directly on a log database of encrypted data and allows retrieving a sum or a product of multiple encrypted elements. Subsequently, we apply our new form of PIR protocol to a cloud audit use case in which searchable encryption is employed to address additional confidentiality requirements regarding the privacy of the user. As an example, we consider and evaluate an audit of client accesses to a controlled resource provided by a cloud service provider.
Physically Unclonable Functions (PUFs) are hardware-based security primitives which allow for inherent device fingerprinting. To this end, the intrinsic variation of imperfectly manufactured systems is exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit-wide identifiers. A further focus is on limiting factors of printed devices, such as increased parasitics due to novel materials, and on the required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core is fabricated and electrically characterized.
In a semi-autonomic cloud auditing architecture, we wove in privacy-enhancing mechanisms [15] by applying the public-key version of the Somewhat Homomorphic Encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use-case scenario by analyzing and taking into consideration the security, correctness, and performance of the scheme. To show the relevance of our proposed algorithm, we apply it to two predominant cloud auditing use cases.
Industrial communication was formerly characterized by relatively restricted, closed fieldbus systems. With the increasing opening of automation networks through horizontal and vertical integration in production plants, dangerous attack surfaces emerge that can lead to the theft of production secrets, to manipulation, or to a complete shutdown of production processes. This results in fundamentally new requirements for data security, which must be met with innovative solution approaches.
The goal of the research project "SecureField" was to investigate and demonstrate the feasibility and applicability of the "(D)TLS-over-Anything" approach, and to prepare a toolbox for defining and implementing corresponding security solutions. As a long-established standard in the IT domain, the (Datagram) Transport Layer Security ((D)TLS) protocol, in combination with an industry- and automation-compatible Public Key Infrastructure (PKI), proved to be a highly promising way to achieve data security in the OT domain as well. The project particularly addressed SMEs, for which in-house development work in this field is often too costly and both technically and economically too risky.
With "SecureField", results were achieved on several levels. First, a comprehensive and generic concept for the end-to-end protection of communication paths and protocols in the industrial environment was developed over the course of the project. This concept consists of a generic communication model and a generic authentication model.
In a first aspect, the invention relates to a device for the transcutaneous application of an electrical stimulation stimulus to an ear. The device comprises a circuit carrier, at least two electrodes, and a control unit, where the control unit is configured to generate an electrical stimulation signal at the electrodes based on stimulation parameters. The device, in particular a surface of its circuit carrier, is adapted to the anatomical shape of an ear, so that electrodes applied to the surface of the circuit carrier contact selected regions of the ear. The device is characterized in that it further comprises a sensor for detecting at least one physiological parameter, and a control unit configured to adapt the stimulation parameters for the stimulation stimulus based on the at least one physiological parameter. In a further aspect, the invention relates to a method for manufacturing the device according to the invention.
Narrowband IoT (NB-IoT) as a radio access technology for the cellular Internet of Things (cIoT) is gaining traction due to attractive system parameters, new proposals in 3rd Generation Partnership Project (3GPP) Release 14 for reduced power consumption, and ongoing worldwide deployment. As per 3GPP, the low-power wide-area use cases in the 5G specification will be addressed by the early NB-IoT and Long-Term Evolution for Machines (LTE-M) based technologies. Since these cIoT networks will operate in a spatially distributed environment, various challenges have to be addressed for tests and measurements of these networks. To meet these requirements, unified emulated and field testbeds for NB-IoT networks were developed and used for extensive performance measurements. This paper analyses the results of these measurements with regard to RF coverage, signal quality, latency, and protocol consistency.
Due to its numerous application fields and benefits, virtualization has become an interesting and attractive topic in computer and mobile systems, as it promises advantages for security and cost efficiency. However, it may introduce additional performance overhead. Recently, CPU virtualization has become more popular for embedded platforms, where performance overhead is especially critical. In this article, we present measurements of the performance overhead of the two hypervisors Xen and Jailhouse on ARM processors under the heavy-load "Cpuburn-a8" application and compare them to a native Linux system running on the same ARM processors.
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms. It seems to be a prevalent assumption that the use of such asymmetric cryptographic algorithms is unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is well possible on resource-constrained devices if carefully chosen and well implemented. This allows the use of the well-proven TLS protocol also for applications from the (Industrial) Internet of Things, including fieldbus communication.
Climate change and the resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and the relatively new concept of the Internet of Things (IoT), embedded systems developers are now designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems, and it is extremely challenging to design a low-cost embedded system for the monitoring and control of waterways that can last for decades. In this paper, we present the hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power-added efficiency (PAE) of 24.7% and is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA), respectively. Similarly, the results show that communication at 434 MHz over more than 3 km can be supported, which is an important stepping stone for designing a complete coverage solution for large-scale waterways.
Recently, the demand for scalable, efficient, and accurate Indoor Positioning Systems (IPS) has seen a rising trend due to their utility in providing Location-Based Services (LBS). Visible Light Communication (VLC) based IPS designs, VLC-IPS, leverage the Light Emitting Diodes (LEDs) in indoor environments for localization. Among VLC-based designs, Time Difference of Arrival (TDOA) based techniques are shown to provide very low errors in the relative position of receivers. Our considered system consists of five LEDs that act as transmitters and a single receiver (a photodiode or the image sensor of a smartphone) whose position coordinates in an indoor environment are to be determined. As a performance criterion, the Cramer-Rao Lower Bound (CRLB) is derived for range estimation, and the impact of various factors, such as the LED transmission frequency, the position of the reference LED light, and the number of LED lights, on localization accuracy is studied. Simulation results show that, depending on the optimal values of these factors, a location estimation accuracy on the order of a few centimeters can realistically be achieved.
Physical unclonable functions (PUFs) are increasingly generating attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic and can act as the security anchor for authentication and cryptographic applications. Keeping in mind that PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping the PUF outputs in order to maximize the extracted entropy. To achieve this, offset data is introduced as helper data, which is used to relax the constraints considered for the grouping of PUF outputs, deriving more entropy while reducing the number of secret-key error bits. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP can maximize the achieved entropy while respecting the constraints imposed to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is practically shown that the proposed method outperforms other existing group-based PUF key generation methods.
As Industry 4.0 evolves, the previously separated Operational Technology (OT) and Information Technology (IT) are converging. Connecting devices in industrial settings to the Internet exposes these systems to a broader spectrum of cyber-attacks. Since OT does not have as many security measures as IT, it is more vulnerable from an attacker's perspective. Another factor contributing to the vulnerability of OT is that, when it comes to cybersecurity, industries have focused on protecting information technology while giving lower priority to control systems. The consequences of a security breach in an OT system can be more adverse, as it can lead to physical damage, industrial accidents, and physical harm to human beings. Hence, certificate-based authentication is implemented for OT networks. This involves stages of managing credentials at the communication endpoints. In previous work at ivESK, a solution for managing credentials was developed, comprising a CANopen-based physical demonstrator on which the certificate management processes were implemented. The extended feature set for certificate management will be based on this existing solution. The thesis aims to significantly improve the solution by addressing two key areas: enhancing functionality and optimizing real-time performance. Regarding the first goal, an analysis of the existing feature set shall be carried out, where correct functionality shall be guaranteed. The limitations of the previously implemented system will be addressed and, to ensure applicability to real-world scenarios, the system will be implemented and tested on the physical demonstrator. This lays a concrete foundation for using these certificate management processes in large-scale industrial networks.
Features such as a revocation mechanism for certificates, automated renewal of credentials, and authorization attribute checks will be implemented for certificate management. Regarding the second goal, the impact of the credential management processes on ongoing CANopen real-time traffic shall be studied. In real-life scenarios, mission-critical applications like industrial control systems, medical devices, and transportation networks rely on real-time communication for reliable operation; delays or disruptions caused by credential management processes can have severe consequences, so optimizing these processes is crucial for maintaining system integrity and safety. The disturbance that the credential management processes cause to the normal operation of the CANopen network shall be characterized and minimized. This shall comprise testing real-time parameters in the network such as CPU load, network load, and average delay. The results obtained from each of these tests will be studied.
In the last decade, IPv6 over Low-power Wireless Personal Area Networks (IEEE 802.15.4), also known as 6LoWPAN, has evolved into a primary contender for short-range wireless communications and holds the promise of an Internet of Things completely based on the Internet Protocol. The authors' team has developed a 6LoWPAN protocol stack in the C language; the stack does not require a specific design environment or operating system. It is highly flexible, modular, and portable and can be enhanced by several interesting modules, like a Wake-On-Radio (WOR) MAC layer or a TLS 1.2-based security sublayer. The stack is made available as open source at https://github.com/hso-esk/emb6. It was extensively tested on the Automated Physical Testbed (APTB) for Wireless Systems, which is available in the authors' lab and allows a flexible setup and full control of arbitrary topologies. The measurement results demonstrate very good stability and both short-term and long-term performance, also under dynamic conditions.
With the surge in global data consumption and the proliferation of the Internet of Things (IoT), remote monitoring and control is becoming increasingly popular, with a wide range of applications from emergency response in remote regions to the monitoring of environmental parameters. Mesh networks are being employed to alleviate a number of issues associated with single-hop communication, such as low area coverage, limited reliability and range, and high energy consumption. Low-power Wireless Personal Area Networks (LoWPANs) are being used to help realize and permeate the applicability of the IoT. In this paper, we present the design and test of IEEE 802.15.4-compliant smart IoT nodes with multi-hop routing. We first discuss the features of the software stack and the design choices in hardware that resulted in high RF output power, and then present field test results of different baseline network topologies in both rural and urban settings to demonstrate the deployability and scalability of our solution.
MPC-Workshop Juli 2015
(2015)
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding into a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing the tens of billions of interconnected devices expected in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demands lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched-capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least squares method is developed to describe the relationship between the random variations in the capacitors and the resulting output voltages. The model is used to filter out systematic variations in the circuit components to improve the quality of the extracted secrets.
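To illustrate the separation of systematic and random components, here is a minimal one-regressor ordinary least squares sketch with hypothetical data; the paper's actual model relating capacitor variations to output voltages is more elaborate.

```python
# Sketch: remove an OLS-fitted systematic trend and keep the residuals,
# which play the role of the device-specific entropy source. Data and the
# single-regressor model are illustrative assumptions.

def ols_line(x, y):
    """Fit y ~ a + b*x by ordinary least squares (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def random_component(x, y):
    """Residuals after removing the fitted systematic component."""
    a, b = ols_line(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]
```

With an intercept in the model, the residuals sum to zero by construction, so only the deviation of each device from the common trend survives as the secret-relevant signal.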
The book offers a well-founded introduction to the chronology of known attacks on and vulnerabilities of mobile systems over the last two decades, along with their conceptual classification. The reader thus gains a unique overview of the variety of demonstrably exploited attack vectors against the most diverse components of mobile wireless devices, as well as the partly inherently security-critical activities of modern mobile operating systems. A read that is equally captivating for laypersons and security architects, and one that is likely to strongly curtail trust in the security of mobile systems.
Contents
Vulnerability of 802.15.4: PiP injection
Vulnerability of WLAN: KRACK attack on WPA2
Vulnerability of Bluetooth: BlueBorne and co.
Vulnerabilities of NFC and through NFC
Attacks via the baseband
Android security architecture
Horizontal privilege escalation
Techniques for obfuscation and de-obfuscation of apps
Apps with elevated security requirements: banking apps
Position determination through swarm mapping
Side channels for bridging the 'air gap'
Outlook: 5G security architecture
Target groups: students of computer science, business information systems, electrical engineering, or related degree programs; practitioners, IT security officers, data protection officers, decision-makers, and users of wireless devices who are interested in a 'look under the hood'.
TSN, or Time-Sensitive Networking, is becoming an essential technology for converged networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run, and secure such TSN networks, monitoring functionality is a must. The TSN standards already include some provisions for such functionality, and there are different methods to choose from. We implemented different methods for measuring the time synchronization accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
The paper describes the methodology and experimental results for revealing similarities in the thermal dependencies of the biases of accelerometers and gyroscopes from 250 inertial MEMS chips (MPU-9250). Temperature profiles were measured on an experimental setup with a Peltier element for temperature control. The classification of the temperature curves was carried out with a machine learning approach.
A perfect sensor would not have any thermal dependency at all. Thus, only sensors inside the clusters with smaller dependency (smaller total temperature slopes) might be pre-selected for the production of high-accuracy inertial navigation modules. It was found that no unified thermal profile ("family" curve) exists for all sensors in a production batch. However, the sensors can be grouped according to their parameters, and temperature compensation profiles can then be regressed for each group. Twelve slope coefficients on 5-degree temperature intervals from 0 °C to +60 °C were used as the features for the k-means++ clustering algorithm.
The minimum number of clusters for all sensors to be well separated from each other by their bias thermal profiles is, in our case, 6. It was found by applying the elbow method. For each cluster, a regression curve can be obtained.
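The clustering step described above can be sketched as follows. This is a minimal Lloyd's-algorithm k-means with random initialization (the paper uses k-means++) plus the within-cluster sum of squares that the elbow method plots against k; the feature vectors would be the 12 slope coefficients per sensor.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of vectors."""
    return [sum(c) / len(pts) for c in zip(*pts)]

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's k-means (random init; the paper uses k-means++)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[nearest].append(p)
        centers = [mean(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def wcss(centers, clusters):
    """Within-cluster sum of squares, plotted against k for the elbow method."""
    return sum(dist2(p, c) for c, cl in zip(centers, clusters) for p in cl)
```

Running `kmeans` for increasing k and looking for the "elbow" in `wcss` is how a minimum cluster count such as the paper's k = 6 would be identified.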
The authors claim that location information of stationary ICT components can never be unclassified. They describe how swarm-mapping crowd-sourcing is used by Apple and Google to harvest geo-location information on wireless access points and mobile telecommunication base stations worldwide, building up gigantic databases with very exclusive access rights. After highlighting the known technical facts, in the speculative part of this article the authors argue how this may impact the cyber deterrence strategies of states and alliances that understand cyberspace as another domain of geostrategic relevance. The spectrum of activities of states and alliances, given the potential existence of such databases, may range from geopolitical negotiations by institutions that regard international affairs as their core business, over mitigation approaches at a technical level, to means of cyber deterrence-by-retaliation.
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for the proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections, but also random noise. The gain in accuracy from using multiple sensors depends strongly on the correlation between the errors of the different sensors. Although for sensor fusion it is usually assumed that the measurement errors of different sensors are uncorrelated, estimation theory shows that for the combination of sensors of the same type a negative correlation is actually more beneficial. Therefore, we describe some important and often neglected considerations for the combination of several sensors and also present some preliminary results with regard to the correlation of measurements from a simple multi-sensor setup.
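The benefit of negatively correlated errors can be checked numerically: for two sensors with unit-variance noise and correlation ρ, the variance of the averaged output is (1 + ρ)/2, so negative correlation beats the uncorrelated case (variance 1/2). A small simulation sketch; the sample size and seed are arbitrary choices.

```python
import math
import random
import statistics

def averaged_noise_variance(rho, n_samples=200_000, seed=1):
    """Empirical variance of the average of two sensors whose unit-variance
    Gaussian noise terms have correlation rho."""
    rng = random.Random(seed)
    avgs = []
    for _ in range(n_samples):
        z0, z1 = rng.gauss(0, 1), rng.gauss(0, 1)
        e1 = z0
        e2 = rho * z0 + math.sqrt(1 - rho * rho) * z1  # correlated with e1
        avgs.append((e1 + e2) / 2)
    return statistics.pvariance(avgs)

# Theory: Var[(e1 + e2) / 2] = (1 + rho) / 2, so rho < 0 pushes the fused
# noise variance below the uncorrelated value of 0.5.
```

For N sensors the same calculation gives Var = σ²(1 + (N-1)ρ)/N, which is the estimation-theoretic argument the abstract alludes to.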
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bug fixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution-guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data allows assessing and improving not only implementations of TLS but also identifying underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area and can help reveal implementation flaws and avoid regressions. While presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful in other contexts as well. Source code and data are made available to fellow researchers in order to stimulate discussion and to invite others to benefit from and advance our work.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems over production optimisation to support for the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, and unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about the training, and about changes to the model through retraining, can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
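A hash-chained "birth certificate" of the kind described can be sketched even without a full blockchain: each record commits to the model hash and to the previous record, so later tampering with any entry is detectable. The record fields and event names below are illustrative assumptions, not the paper's data model.

```python
import hashlib
import json

def chain_append(chain, model_bytes, event, meta):
    """Append a provenance record for a model snapshot; the record commits
    to the model's SHA-256 hash and to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "prev": prev_hash,
        "event": event,            # e.g. "trained", "retrained" (assumed names)
        "meta": meta,              # hypothetical free-form metadata
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def chain_valid(chain):
    """Recompute every link; any tampering with a record breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                          ).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```

Anchoring such records in a blockchain, as the abstract proposes, additionally prevents the chain itself from being silently rewritten by its keeper.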
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low-Latency Communications (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
Legacy industrial communication protocols have proved robust and functional. During the last decades, the industry has invented completely new or advanced versions of the legacy communication solutions. However, even with the high adoption rate of these new solutions, the majority of industrial applications still run on legacy, mostly fieldbus-related technologies. Profibus is one of those technologies that keeps growing in the market, albeit with slowing growth in recent years. A retrofit technology is therefore fundamental: one that enables these technologies to connect to the Internet of Things and to utilize the ever-growing potential of data analysis, predictive maintenance, and cloud-based applications, while at the same time not changing a running system.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles, and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
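To illustrate why SPS and sTTI attack different parts of the uplink delay, a back-of-the-envelope latency model can help: SPS removes the scheduling-request/grant cycle, while sTTI scales every remaining TTI-bound component. The constants below are rough textbook ballpark figures, not exact 3GPP timing values, and `uplink_latency_ms` is a hypothetical helper:

```python
def uplink_latency_ms(tti_ms: float, sps: bool,
                      sr_period_ms: float = 10.0) -> float:
    """Illustrative uplink user-plane latency model. All constants are
    rough ballpark values, not exact 3GPP timings."""
    align = tti_ms / 2                  # average frame-alignment wait
    grant_cycle = 0.0
    if not sps:                         # dynamic scheduling only:
        sr_wait = sr_period_ms / 2      # wait for an SR opportunity
        sr_tx = tti_ms                  # transmit the scheduling request
        enb_proc = 3 * tti_ms           # eNB decodes SR, builds the grant
        grant_rx = tti_ms               # UE receives the uplink grant
        ue_proc = 3 * tti_ms            # UE prepares the transport block
        grant_cycle = sr_wait + sr_tx + enb_proc + grant_rx + ue_proc
    data_tx = tti_ms                    # the data transmission itself
    enb_decode = 1.5 * tti_ms           # eNB-side decoding
    return align + grant_cycle + data_tx + enb_decode

legacy = uplink_latency_ms(1.0, sps=False)      # dynamic scheduling
sps_only = uplink_latency_ms(1.0, sps=True)     # SPS removes the grant cycle
sps_stti = uplink_latency_ms(1 / 7, sps=True)   # SPS plus a 2-symbol sTTI
```

Even under these crude assumptions, the model shows the two techniques compounding: SPS cuts the latency by several milliseconds, and sTTI shrinks what remains by the TTI ratio.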
The evolution of cellular networks from the first generation (1G) to the fourth generation (4G) was driven by the demand for user-centric downlink capacity, technically called Mobile Broadband (MBB). With the fifth generation (5G), Machine Type Communication (MTC) has been added to the target use cases, and the upcoming generation of cellular networks is expected to support them. However, such support requires improvements in the existing technologies in terms of latency, reliability, energy efficiency, data rate, scalability, and capacity.
Originally, MTC was designed for low-bandwidth, high-latency applications such as environmental sensing or smart dustbins. Nowadays there is additional demand around applications with low-latency requirements. Among the other well-known challenges for recent cellular networks, such as data rate, energy efficiency, and reliability, the latency is also unsuitable for mission-critical applications such as real-time control of machines, autonomous driving, or the tactile Internet. Therefore, there is a necessity to reduce the latency and increase the reliability offered by the currently deployed cellular networks to support use cases such as cooperative autonomous driving or factory automation, which are grouped under the denomination Ultra-Reliable Low-Latency Communication (URLLC).
This thesis is primarily concerned with the latency in the Universal Terrestrial Radio Access Network (UTRAN) of cellular networks. The overall work is divided into five parts. The first part presents the state of the art for cellular networks. The second part contains a detailed overview of URLLC use cases and the requirements that must be fulfilled by cellular networks to support them. The work in this thesis was done as part of a collaboration project between the IRIMAS lab at Université de Haute-Alsace, France, and the Institute for Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany. The selected URLLC use cases are part of the research interests of both partner institutes. The third part presents a detailed study and evaluation of user- and control-plane latency mechanisms in the current generation of cellular networks. The evaluation and analysis of these latencies, performed with the open-source ns-3 simulator, were conducted by exploring a broad range of parameters that include, among others, traffic models, channel access parameters, realistic propagation models, and a broad set of cellular network protocol stack parameters. These simulations were performed with low-power, low-cost, and wide-range devices, commonly called IoT devices, standardized for cellular networks. These devices use either LTE-M or Narrowband-IoT (NB-IoT) technologies, which are designed for connected things. They differ mainly in the provided bandwidth and other characteristics such as coding scheme, device complexity, and so on.
The fourth part of this thesis presents a study, an implementation, and an evaluation of latency reduction techniques that target the different layers of the currently used Long Term Evolution (LTE) network protocol stack. These techniques, based on Transmission Time Interval (TTI) reduction and Semi-Persistent Scheduling (SPS) methods, are implemented in the ns-3 simulator and evaluated through realistic simulations performed for a variety of low-latency use cases focused on industry automation and vehicular networking. Since ns-3 does not support NB-IoT in its current release, an NB-IoT extension of the LTE module was developed for testing the proposed latency reduction techniques. This makes it possible to explore deployment limitations and issues.
In the last part of this thesis, a flexible deployment framework for the proposed latency reduction techniques, called Hybrid Scheduling and Flexible TTI, is presented, implemented, and evaluated through realistic simulations. With the help of the simulation evaluation, it is shown that the improved LTE network proposed and implemented in the simulator can support low-latency applications with low-cost, wide-range, and narrow-bandwidth devices. The work in this thesis points out the potential improvement techniques and their deployment issues, and paves the way towards support for URLLC applications in upcoming cellular networks.
The next generation of cellular networks is expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart dustbins, but there is additional demand around applications with low-latency requirements, like industrial automation, driverless cars, and so on. Improvements are required in 4G Long Term Evolution (LTE) networks towards the development of next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to the latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra Reliable Low Latency Communication (URLLC) networks.
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, whereas fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low Latency Communication (URLLC). However, it is interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement, and evaluate latency reduction techniques for latency-critical applications. The two techniques we identified, namely the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay until received information is processed in both the control and data planes. We then analyze the potential of these latency reduction techniques for URLLC applications. To this end, we implement them in the long term evolution (LTE) module of the ns-3 simulator and evaluate the performance of the proposed techniques in two different application fields: industrial automation and intelligent transportation systems. Our detailed evaluation results from simulations indicate that LTE can satisfy the low-latency requirements for a large choice of use cases in each field.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of resource scheduling methods on uplink latency.
Continuous monitoring of Ethernet cables prevents machine failures in industry. However, suitable methods for performing this monitoring comprehensively are currently lacking. In the Ko²SiBus project, a cost-effective method for the continuous monitoring of Ethernet cables was therefore developed.
The monitoring of industrial plants ensures that highly automated processes can run smoothly. Usually, the focus is on monitoring the machinery itself; the Ethernet-based communication lines for data exchange (e.g. Profinet) are currently not yet part of any continuous monitoring. The physical connections are checked, but often only at the time of commissioning, when the plant is not yet integrated into the overall system, or during a maintenance cycle, when the machine is taken out of operation for the duration of the maintenance. As a result, especially today, when Ethernet is increasingly used as the basis for industrial communication, machine failures due to missing cable monitoring are becoming ever more likely. To counteract this, a new measurement method was designed, implemented, and validated in the Ko2SiBus project, which can be integrated cost-effectively into new or existing systems. To demonstrate its suitability, the project results were implemented in prototypes and demonstrators that can serve both as stand-alone and as integration solutions.
IPv6 over LoRaWAN™
(2016)
Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility of integrating the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. The IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load factors. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the “WiMODino” using LoRaWAN with a “Lite Gateway” and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
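The managed-curtailment idea behind the "Soft-UPS" can be sketched as a priority-based load-shedding rule: during a generation shortfall, low-priority loads are dropped until the remaining demand fits the available capacity. The function, load names, and priorities below are a hypothetical illustration, not the paper's actual DSM algorithm:

```python
def curtail(loads, capacity_w):
    """Greedy managed curtailment: keep loads in descending priority order
    as long as they fit the currently available generation capacity.
    loads: list of (name, watts, priority), higher priority = keep first."""
    kept, used = [], 0.0
    for name, watts, _prio in sorted(loads, key=lambda l: l[2], reverse=True):
        if used + watts <= capacity_w:
            kept.append(name)
            used += watts
    shed = [name for name, _, _ in loads if name not in kept]
    return kept, shed, used

# A blackout leaves 400 W of backup generation for a small household:
loads = [("fridge", 150, 3), ("lights", 60, 2), ("ac", 1200, 1)]
kept, shed, used = curtail(loads, 400)
```

Because the decision is data-driven, the same rule can be re-evaluated whenever metered load or generation changes, instead of sizing a conventional UPS for the worst case.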
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking in order to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the FruityMesh protocol implementation. Extensive test cases have been executed to measure the performance, the reliability, the power consumption, and the delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are considered useful for real implementations of Bluetooth mesh; not only for home and building automation, but also for industrial automation.
Ultra-wideband (UWB) signals are well suited both for short-range wireless communication and for high-precision localization applications. Channel impulse response (CIR) analysis in UWB systems is a major element in localization estimation. In this paper, practical aspects of CIR are presented; in particular, a technique for the construction of the accumulated echo-gram of a multipath-delayed signal is proposed. Decawave hardware was used to demonstrate the technique of analyzing the fine structure of signals with sub-nanosecond resolution. Temporal stability, reliability, and two-way characteristics of such echo-grams are discussed as well. The results of using two EVK1000 radio modules as a radar installation to detect a target in indoor environments prove that low-cost UWB intrusion detection and through-the-wall-vision systems might be developed using the proposed technique.
The Datagram Transport Layer Security (DTLS) protocol has been designed to provide end-to-end security over unreliable communication links. Where its connection establishment is concerned, DTLS copes with potential loss of protocol messages by implementing its own loss detection and retransmission scheme. However, the default scheme turns out to be suboptimal for links with high transmission error rates and low data rates, such as wireless links in electromagnetically harsh industrial environments. Therefore, in this paper, as a first step we provide an analysis of the standard DTLS handshake's performance under such adverse transmission conditions. Our studies are based on simulations that model message loss as the result of bit transmission errors. We consider several handshake variants, including endpoint authentication via pre-shared keys or certificates. As a second step, we propose and evaluate modifications to the way message loss is dealt with during the handshake, making DTLS deployable in situations which are prohibitive for default DTLS.
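The default loss-handling scheme analyzed above can be illustrated with a tiny simulation of the DTLS retransmission timer, which per RFC 6347 starts at a recommended 1 second and doubles on each expiry. The function below models only this default doubling behavior under independent message loss, not the modified schemes proposed in the paper:

```python
import random

def flight_delivery_time(loss_p: float, initial_to: float = 1.0,
                         max_to: float = 60.0,
                         rng: random.Random = random.Random(7)) -> float:
    """Seconds until one handshake flight survives an i.i.d.-lossy link,
    with the default DTLS scheme: retransmit on timeout, double the timer."""
    t, timeout = 0.0, initial_to
    while True:
        if rng.random() >= loss_p:     # flight (and its reply) got through
            return t
        t += timeout                   # retransmission timer fired
        timeout = min(2 * timeout, max_to)

# Mean handshake-flight delay at 30 % message loss (illustrative):
avg = sum(flight_delivery_time(0.3) for _ in range(2000)) / 2000
```

The exponential growth of the timeout is exactly what hurts on links with high bit-error rates: a single lost flight can stall the handshake for many seconds, which motivates tuning the retransmission strategy for such channels.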
Wireless Sensor Networks (WSN) have emerged as an interesting topic in the research community due to their manifold applications. One of the main challenges in this field is the energy consumption of the nodes, which typically is quite restricted due to the required lifetime of such WSNs. To solve this problem, several energy-saving MAC protocols have been developed so far. One of them, recently presented by the authors, is the so-called SmartMAC, an extension to the IEEE802.15.4 standard. In this paper, we present the implementation details of porting the SmartMAC protocol to the discrete-event network simulator NS3. We developed this NS3 module to simulate the performance, multi-node execution, and multi-node configuration. Along with this model, we also present an energy model for the evaluation of the energy consumption. The current implementation in NS3 is based on LR-WPAN (Low-Rate Wireless Personal Area Networks) as specified by the IEEE802.15.4 (2006) standard. The simulation results show that the SmartMAC, with its sleep and wake-up mechanisms for the transceivers, is significantly more efficient than the current NS3 MAC (Medium Access Control) scheme.
During the day-to-day exploitation of localization systems in mines, the technical staff tends to incorrectly rearrange radio equipment: positions of devices may not be accurately marked on a map, or their marked positions may not correspond to the truth. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of the positions of the equipment on the map using trajectories restored by inertial measurement units mounted on mobile objects, like pedestrians and vehicles. As a basis, a predefined map of the mine represented as an undirected weighted graph was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting the misplacement of access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
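The discrete Bayesian filter can be sketched as a simple re-weighting of candidate access-point positions against ranging observations made along a restored trajectory: candidates that explain the measurement well gain probability mass. The function and scenario below are illustrative assumptions, not the paper's implementation:

```python
import math

def bayes_update(prior, candidates, traj_point, measured_d, sigma=2.0):
    """One discrete Bayes step: re-weight candidate access-point positions
    by how well each explains a range measured from a trajectory point."""
    post = []
    for p, (x, y) in zip(prior, candidates):
        d = math.hypot(x - traj_point[0], y - traj_point[1])
        # Gaussian measurement likelihood around the predicted distance:
        likelihood = math.exp(-0.5 * ((d - measured_d) / sigma) ** 2)
        post.append(p * likelihood)
    norm = sum(post)
    return [p / norm for p in post]

# Three candidate map nodes for one access point; a pedestrian's restored
# trajectory passes (9, 1) and measures a 2 m range to the access point.
candidates = [(0, 0), (10, 0), (20, 0)]
belief = bayes_update([1/3, 1/3, 1/3], candidates, (9, 1), 2.0)
```

Repeating this update along many trajectory points concentrates the belief on the access point's true map node, which is how a misplaced device can be detected and corrected.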
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attention for the seamless connectivity of embedded devices in the Internet of Things (IoT). Whereas the lower layers (IEEE802.15.4 and 6LoWPAN) are already well defined and consolidated with regard to frame formats, header compression, routing protocols, and commissioning procedures, there is still an abundant choice of possibilities on the application layer. Currently, various groups are working towards standardization of the application layer, i.e. the ETSI Technical Committee on M2M, the IP for Smart Objects (IPSO) Alliance, the Lightweight M2M (LWM2M) protocol of the Open Mobile Alliance (OMA), and OneM2M. This multitude of approaches leaves the system developer with the agony of choice. This paper selects, presents, and explains one of the promising solutions, discusses its strengths and weaknesses, and demonstrates its implementation.
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. Especially LoRa/LoRaWAN enjoys high visibility in today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for industrial, scientific, and medical applications. However, at the end of 2018, a LoRa-based solution in the 2.4GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the ISM bands and therefore also opens doors to many novel application fields. Also, due to higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols becomes very interesting, e.g. for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4GHz LoRa components combining two aspects. On the one hand, we present a design and an implementation of a 2.4GHz-based LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using a Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
Hybrid low-voltage physical unclonable function based on inkjet-printed metal-oxide transistors
(2020)
Modern society is striving for digital connectivity that demands information security. As an emerging technology, printed electronics is a key enabler for novel device types with free form factors, customizability, and the potential for large-area fabrication while being seamlessly integrated into our everyday environment. At present, information security is mainly based on software algorithms that use pseudo random numbers. In this regard, hardware-intrinsic security primitives, such as physical unclonable functions, are very promising to provide inherent security features comparable to biometrical data. Device-specific, random intrinsic variations are exploited to generate unique secure identifiers. Here, we introduce a hybrid physical unclonable function, combining silicon and printed electronics technologies, based on metal oxide thin film devices. Our system exploits the inherent randomness of printed materials due to surface roughness, film morphology and the resulting electrical characteristics. The security primitive provides high intrinsic variation, is non-volatile, scalable and exhibits nearly ideal uniqueness.
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. Unfortunately, however, these devices impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) to provide a solution to this constraint. 6LoWPAN is an Internet Protocol (IP) based communication scheme that allows each device to connect to the Internet directly; as a result, the power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) leads to routing overhead and consequently degrades the performance of the network in terms of Quality of Service (QoS), especially in a large network. Therefore, HRPL was developed to enhance the RPL protocol and minimize the redundant retransmissions that cause the routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and real-world scenarios (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduced the routing overhead when implemented in 6LoSH. The observed Control Traffic Overhead (CTO) packet difference between the experiments is 7.1%, and the convergence time difference is 9.3%. Further research is recommended for these metrics: latency, Packet Delivery Ratio (PDR), and throughput.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far to solve this problem require rather expensive and highly specialized FPGA-based devices. With this work, we counter the trend by introducing a straightforward approach to synchronizing a fairly cheap IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how to reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external, inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal obtained after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not score new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale low-cost automation solutions.
Novel manufacturing technologies, such as printed electronics, may enable future applications for the Internet of Everything like large-area sensor devices, disposable security, and identification tags. Printed physically unclonable functions (PUFs) are promising candidates to be embedded as hardware security keys into lightweight identification devices. We investigate hybrid PUFs based on a printed PUF core. The statistics on the intra- and inter-hamming distance distributions indicate a performance suitable for identification purposes. Our evaluations are based on statistical simulations of the PUF core circuit and the thereof generated challenge-response pairs. The analysis shows that hardware-intrinsic security features can be realized with printed lightweight devices.
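The intra- and inter-Hamming-distance statistics mentioned above are typically computed as fractional Hamming distances between response bit strings: repeated readouts of one device should lie near 0, while readouts from different devices should lie near 0.5. A small sketch with made-up 64-bit responses (the values below are hypothetical, not measured PUF data):

```python
def hamming(a: int, b: int, bits: int = 64) -> float:
    """Fractional Hamming distance between two fixed-width PUF responses."""
    return bin((a ^ b) & ((1 << bits) - 1)).count("1") / bits

# Made-up 64-bit responses: two noisy readouts of device 1, one of device 2.
dev1_read1 = 0x1234_5678_9ABC_DEF0
dev1_read2 = 0x1234_5678_9ABC_DEF1   # one bit flipped by readout noise
dev2_read1 = 0x0F0F_0F0F_0F0F_0F0F

intra = hamming(dev1_read1, dev1_read2)  # reliability: ideally near 0
inter = hamming(dev1_read1, dev2_read1)  # uniqueness: ideally near 0.5
```

Plotting the two distance distributions over many challenge-response pairs, as the evaluation above does statistically, shows whether they are separated well enough for reliable identification.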
The paper describes the hardware and software architecture of the developed multi-MEMS-sensor prototype module, consisting of an ARM Cortex M4 STM32F446 microcontroller unit, five 9-axis inertial measurement units MPU9255 (3D accelerometer, 3D gyroscope, 3D magnetometer, and temperature sensor), and a BMP280 barometer. The module is also equipped with a WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid. The inertial sensors are mounted on a special base at different angles to each other to eliminate hardware sensor drift and to provide the capability for self-calibration. The module fuses the information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field, and air pressure) in order to calculate orientation and trajectory. It might be used as an Inertial Measurement Unit, Vertical Reference Unit, or Attitude and Heading Reference System.
Although short-range wireless communication explicitly targets local and regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. Especially the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™), describes the outdoor setup at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results, and discusses the strengths and weaknesses of this technology.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
Maintenance processes in Industry 4.0 applications try to achieve a high degree of quality to reduce the downtime of machinery. The monitoring of executed maintenance activities is challenging as in complex production setups, multiple stakeholders are involved. So, full transparency of the different activities and of the state of the machine can only be supported, if these stakeholders trust each other. Therefore, distributed ledger technologies, like Blockchain, can be promising candidates for supporting such applications. The goal of this paper is a formal description of business and technical interactions between non-trustful stakeholders in the context of Industry 4.0 maintenance processes using distributed ledger technologies. It also covers the integration of smart contracts for automated triggering of activities.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest of researchers. The main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed motes. These devices are mostly battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-On-Radio (WOR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, a novel design for the integration of WOR into IEEE802.15.4 is presented, which flexibly allows trade-offs between the energy consumption of sender and receiver stations as well as between real-time capability and energy consumption. For identical behavior, the proposed scheme is significantly more efficient than other schemes proposed in recent publications, while preserving backward compatibility with standard IEEE802.15.4 transceivers.
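The energy argument behind Wake-On-Radio can be made concrete with a simple duty-cycle calculation: the average receiver current falls roughly linearly with the fraction of time the radio actually listens. The current figures below are illustrative assumptions, not measurements from the paper:

```python
def avg_current_ma(i_rx_ma: float, i_sleep_ma: float,
                   listen_ms: float, period_ms: float) -> float:
    """Average current of a duty-cycled WOR listener that wakes every
    `period_ms` and sniffs the channel for `listen_ms`."""
    duty = listen_ms / period_ms
    return duty * i_rx_ma + (1 - duty) * i_sleep_ma

always_on = avg_current_ma(15.0, 0.002, 1.0, 1.0)    # radio always in RX
wor = avg_current_ma(15.0, 0.002, 1.0, 500.0)        # 1 ms sniff per 500 ms
```

With these numbers the WOR listener draws a few hundred times less average current than an always-on receiver; the price is the wake-up period, which bounds the worst-case response time. This is exactly the trade-off the proposed design lets the application tune.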
Wireless sensor networks have recently found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest of researchers. Such monitoring applications, in general, have relaxed latency requirements compared to their energy-efficiency requirements. A further challenge of this application class is the network topology, as the application should be deployable at very large scale. Nevertheless, low power consumption of the devices making up the network must be in focus in order to maximize the lifetime of the whole system. These devices are usually battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-On-Radio (WoR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, some designs for the integration of WoR into IEEE 802.15.4 are discussed, providing an overview of trade-offs in energy consumption while deploying the WoR schemes in a monitoring system.
Extended Performance Measurements of Scalable 6LoWPAN Networks in an Automated Physical Testbed
(2015)
IPv6 over Low power Wireless Personal Area Networks, also known as 6LoWPAN, is becoming more and more a de facto standard for communications in the Internet of Things, be it in the field of home and building automation, of industrial and process automation, or of smart metering and environmental monitoring. For all of these applications, scalability is a major precondition, as the complexity of the networks continuously increases. To support this growing number of connected nodes, various 6LoWPAN implementations are available. One of them was developed by the authors' team and was tested on an Automated Physical Testbed for Wireless Systems at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, which allows the flexible setup and full control of arbitrary topologies. It also supports time-varying topologies and thus helps to measure the performance of the RPL implementation. The results of the measurements prove an excellent stability and a very good short- and long-term performance, also under dynamic conditions. In all measurements, there is an advantage of at least 10% with regard to the average times, like global repair time; the advantage with regard to average values can reach up to 30%. Moreover, it can be shown that the performance predictions from other papers are consistent with the executed real-life implementations.
Exploiting Dissent: Towards Fuzzing-based Differential Black Box Testing of TLS Implementations
(2017)
The Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the internet. Yet, implementations of TLS keep suffering from bugs and security vulnerabilities. In large part, this is due to the protocol's complexity, which makes implementing and testing TLS notoriously difficult. In this paper, we present our work on using differential testing as an effective means to detect issues in black-box implementations of the TLS handshake protocol. We introduce a novel fuzzing algorithm for generating large and diverse corpuses of mostly-valid TLS handshake messages. Stimulating TLS servers while they expect a ClientHello message, we find messages generated with our algorithm to induce more response discrepancies and to achieve a higher code coverage than those generated with American Fuzzy Lop, TLS-Attacker, or NEZHA. In particular, we apply our approach to OpenSSL, BoringSSL, WolfSSL, mbedTLS, and MatrixSSL, and find several real implementation bugs; among them a serious vulnerability in MatrixSSL 3.8.4. Besides, our findings point to imprecision in the TLS specification. We see our approach as presented in this paper as a first step towards fully interactive differential testing of black-box TLS protocol implementations. Our software tools are publicly available as open source projects.
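The core idea of differential black-box testing can be sketched as follows. The two server stubs and their response strings are hypothetical stand-ins invented for illustration; in the real setup each would send the fuzzed message to a live TLS server and capture its response:

```python
import random

# Hypothetical stand-ins for real TLS server implementations. In practice,
# each function would write `msg` to a server socket and return the
# (possibly empty) response, e.g. a ServerHello or an alert record.
def server_a(msg):
    return "alert:decode_error" if len(msg) < 4 else "server_hello"

def server_b(msg):
    return "alert:decode_error" if len(msg) < 8 else "server_hello"

def differential_test(servers, corpus):
    """Return the inputs on which the implementations disagree."""
    discrepancies = []
    for msg in corpus:
        responses = {name: fn(msg) for name, fn in servers.items()}
        if len(set(responses.values())) > 1:  # any disagreement is a finding
            discrepancies.append((msg, responses))
    return discrepancies

# Naive fuzzed corpus: random byte strings of varying length. A mostly-valid
# corpus as in the paper would mutate structured ClientHello messages instead.
random.seed(1)
corpus = [bytes(random.randrange(256) for _ in range(random.randrange(1, 12)))
          for _ in range(100)]

findings = differential_test({"A": server_a, "B": server_b}, corpus)
print(f"{len(findings)} discrepancy-inducing inputs found")
```

Each discrepancy points either at a bug in one implementation or at an ambiguity in the specification, which is exactly the signal the approach exploits.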
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest of researchers. Main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module. In normal operation modes, most energy is spent waiting for incoming frames. A so-called Wake-On-Radio (WOR) technology helps to optimize trade-offs between energy consumption, communication range, complexity of the implementation, and response time. We already proposed a new protocol called SmartMAC that makes use of such WOR technology. Furthermore, it gives the possibility to balance the energy consumption between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol was significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE 802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol for a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against simulation results and proves that the measured values are very close to the estimated values. Thus, we believe that the proposed MAC algorithm outperforms all other Wake-on-Radio MACs.
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
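The idea of generating parsers from a declarative message description can be sketched in miniature. The field-spec notation below is invented for illustration and is not actual TPL or eTPL syntax; etpl-tool emits C++ parsers rather than Python closures:

```python
import struct

# A toy declarative message description in the spirit of TPL/eTPL: each field
# is (name, format), where the format characters follow Python's struct
# module, read in big-endian network byte order.
RECORD_HEADER = [
    ("content_type", "B"),  # 1 byte
    ("version",      "H"),  # 2 bytes
    ("length",       "H"),  # 2 bytes
]

def generate_parser(spec):
    """'Compile' a field spec into a parser function."""
    fmt = ">" + "".join(f for _, f in spec)
    names = [n for n, _ in spec]
    size = struct.calcsize(fmt)

    def parse(data):
        if len(data) < size:
            raise ValueError("truncated message")
        return dict(zip(names, struct.unpack(fmt, data[:size])))
    return parse

parse_record = generate_parser(RECORD_HEADER)
# 0x16 = handshake, version 0x0303 = TLS 1.2, length 5
hdr = parse_record(bytes([0x16, 0x03, 0x03, 0x00, 0x05]))
print(hdr)
```

A real TPL/eTPL description additionally captures variable-length vectors, enums, and select-style branching, which is precisely the information a generator needs to emit a complete parser.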
This paper presents an overview of EREMI, a two-year project funded under ERASMUS+ KA203, and its results. The project team’s main objective was to develop and validate an advanced interdisciplinary higher education curriculum, which includes lifelong learning components. The curriculum focuses on enhancing resource efficiency in the manufacturing industry and optimising poorly or non-digitised industrial physical infrastructure systems. The paper also discusses the results of the project, highlighting the successful achievement of its goals. EREMI effectively supports the transition to Industry 5.0 by preparing a common European pool of future experts. Through comprehensive research and collaboration, the project team has designed a curriculum that equips students with the necessary skills and knowledge to thrive in the evolving manufacturing landscape. Furthermore, the paper explores the significance of EREMI’s contributions to the field, emphasising the importance of resource efficiency and system optimisation in industrial settings. By addressing the challenges posed by under-digitised infrastructure, the project aims to drive sustainable and innovative practices in manufacturing. All five project partner organisations have been actively engaged in offering relevant educational content and framework for decentralised sustainable economic development in regional and national contexts through capacity building at a local level. A crucial element of the added value is the new channel for obtaining feedback from students. The survey results, which are outlined in the paper, offer valuable insights gathered from students, contributing to the continuous improvement of the project.
The latest generation of programmable logic devices features, in addition to the configurable logic cells, one or more powerful microprocessors. This work shows how an existing two-chip system is migrated to a Xilinx Zynq 7000 with two ARM A9 cores. The system in question is the GPS-aided gyro system ADMA of the company GeneSys. The new solution improves the data exchange between the first microprocessor, used for digital signal processing, and the second processor, used for sequence control, by means of a shared memory. Numerous high-bit-rate interfaces are used for fast, real-time-capable data transfer.
Due to climate change and the scarcity of water reservoirs, monitoring and control of irrigation systems is now becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as the data-gathering technology in the domain of IoT and CPS. They are efficient for reliable monitoring, giving farmers an edge to take precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring such as the large-scale irrigation network existing in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH and I-LEACH, and enhances the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator for simulation purposes, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN when compared to the existing protocols.
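For context, the cluster-head election of the classic LEACH baseline mentioned above can be sketched as follows. This shows the well-known LEACH threshold formula, not the hierarchical protocol proposed in the paper:

```python
import random

def leach_threshold(p, round_no):
    """Classic LEACH cluster-head election threshold T(n):
    p is the desired fraction of cluster heads, round_no the current round.
    Nodes that already served as cluster head in the current epoch get T = 0,
    which is modeled here via the `eligible` set."""
    return p / (1.0 - p * (round_no % int(round(1.0 / p))))

def elect_cluster_heads(node_ids, p, round_no, eligible, rng):
    """Each eligible node independently becomes cluster head with
    probability T(n)."""
    t = leach_threshold(p, round_no)
    return [n for n in node_ids if n in eligible and rng.random() < t]

rng = random.Random(42)
nodes = list(range(100))
heads = elect_cluster_heads(nodes, p=0.05, round_no=0,
                            eligible=set(nodes), rng=rng)
print(f"round 0: {len(heads)} cluster heads elected")
```

As the epoch progresses, T(n) grows for the remaining eligible nodes, so that on average a fraction p of the nodes serves as cluster head each round; variants like M-LEACH and I-LEACH modify this election to account for residual energy and node position.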
Precisely synchronized communication is a major precondition for many industrial applications. At the same time, hardware cost and power consumption need to be kept as low as possible in the Internet of Things (IoT) paradigm. While many wired solutions on the market achieve these requirements, wireless alternatives are an interesting field for research and development. This article presents a novel IEEE802.11n/ac wireless solution, exhibiting several advantages over state-of-the-art competitors. It is based on a market-available wireless System on a Chip with modified low-level communication firmware combined with a low-cost field-programmable gate array. By achieving submicrosecond synchronization accuracy, our solution outperforms the precision of low-cost products by almost four orders of magnitude. Based on inexpensive hardware, the presented wireless module is up to 20 times cheaper than software-defined-radio solutions with comparable timing accuracy. Moreover, it consumes three to five times less power. To back up our claims, we report data that we collected with a high sampling rate (2000 samples per second) during an extended measurement campaign of more than 120 h, which makes our experimental results far more representative than others reported in the literature. Additional support is provided by the size of the testbed we used during the experiments, composed of a hybrid network with nine nodes divided into two independent wireless segments connected by a wired backbone. In conclusion, we believe that our novel Industrial IoT module architecture will have a significant impact on the future technological development of high-precision time-synchronized communication for the cost-sensitive industrial IoT market.
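The offset estimation underlying such time-synchronized communication typically follows the standard two-way timestamp exchange (PTP/NTP style); the sketch and its microsecond figures below are illustrative and are not the article's firmware-level method:

```python
def offset_and_delay(t1, t2, t3, t4):
    """Two-way timestamp exchange:
    t1: request sent (master clock), t2: request received (slave clock),
    t3: reply sent (slave clock),   t4: reply received (master clock).
    Assumes a symmetric propagation delay in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation delay
    return offset, delay

# Illustrative numbers (microseconds): true offset +3 us, true delay 5 us.
t1 = 1000.0
t2 = t1 + 5.0 + 3.0  # arrives after 5 us delay, read on a clock 3 us ahead
t3 = t2 + 10.0       # 10 us turnaround on the slave
t4 = t3 - 3.0 + 5.0  # back on the master clock after another 5 us

off, d = offset_and_delay(t1, t2, t3, t4)
print(f"estimated offset: {off:+.1f} us, delay: {d:.1f} us")
```

The achievable accuracy is dominated by the timestamping jitter at t1..t4, which is why taking timestamps close to the PHY, as the presented module does, matters far more than the arithmetic itself.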