The number of use cases for autonomous vehicles is increasing steadily, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector. Here, autonomous cars can massively reduce delivery effort and time by actively supporting the courier. One key component, of course, is the autonomous vehicle itself. Beyond the vehicle, however, a flexible and secure communication architecture is also a crucial component impacting the overall performance of such a system, since it must allow continuous interactions between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between the different system entities while keeping that communication secure. It uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN) to achieve its goals.
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for an extended period of time. One of the most important requirements for security in many industrial IoT applications is the authentication of the devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, where challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol benefits from a smaller computational, communication, storage, and hardware overhead compared to similar works.
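As a rough illustration of how challenge-response authentication with a derived session key can work in principle, the following Python sketch models the PUF as a keyed hash (a real PUF response is measured from physical hardware) and omits the nonces and message framing a real protocol needs. It is not the paper's protocol; all names and parameters are hypothetical.

```python
import hmac, hashlib, os

DEVICE_SECRET = os.urandom(16)  # stand-in for the physical PUF instance

def puf_response(challenge: bytes) -> bytes:
    # On real hardware this value would be measured from the circuit.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment (one-time, in a secure environment): the server stores CRPs.
crp_db = {}
for _ in range(4):
    c = os.urandom(16)
    crp_db[c] = puf_response(c)

# Authentication: each challenge-response pair is used only once (replay resistance).
challenge, expected = crp_db.popitem()
# Server proves knowledge of the response; the device verifies it locally.
server_proof = hashlib.sha256(expected + b"server").digest()
assert server_proof == hashlib.sha256(puf_response(challenge) + b"server").digest()
# Device proves possession of the PUF; the server verifies against its stored CRP.
device_proof = hashlib.sha256(puf_response(challenge) + b"device").digest()
assert device_proof == hashlib.sha256(expected + b"device").digest()
# Both ends derive the same session key from the shared response.
session_key = hashlib.sha256(expected + challenge).digest()[:16]
```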
The invention relates to a method for maximizing the entropy derived from an analog entropy source, the method comprising the following steps: providing input data to the analog entropy source (2); generating return values by the analog entropy source based on the input data (3); and grouping the return values, wherein grouping the return values comprises applying offsets to the return values (4).
In recent years, Physical Unclonable Functions (PUFs) have gained significant traction in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach for designing PUFs is exploiting the intrinsic features of sensors and actuators such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device, the Aluminum Nitride (AlN) piezoelectric cantilever. We show the variations of electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, as a result of uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design which magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, Monte Carlo simulation results are provided. The results confirm the suitability of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF is used as a security anchor for PUF-enabled device authentication as well as communication encryption.
In recent years, physically unclonable functions (PUFs) have gained significant traction in IoT security applications, such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of different devices to generate unique fingerprints for security applications. When generating PUF-based secret keys, the reliability and entropy of the keys are vital factors. This study proposes a novel method for generating PUF-based keys from a set of measurements. Firstly, it formulates the group-based key generation problem as an optimization problem and solves it using integer linear programming (ILP), which guarantees finding the optimum solution. Then, a novel scheme for the extraction of keys from groups is proposed, which we call positioning syndrome coding (PSC). The use of ILP as well as the introduction of PSC facilitates the generation of high-entropy keys with low error correction costs. These new methods have been tested by applying them to the output of a capacitor network PUF. The results confirm the suitability of ILP and PSC for generating high-quality keys.
Physical unclonable functions (PUFs) are attracting increasing attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic and can act as the security anchor for authentication and cryptographic applications. Keeping in mind that PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping the PUF outputs in order to maximize the extracted entropy. To achieve this, offset data is introduced as helper data, which is used to relax the constraints considered for the grouping of PUF outputs and to derive more entropy, while reducing the number of secret key error bits. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP can maximize the achieved entropy while respecting the constraints imposed to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is shown in practice that the proposed method outperforms other existing group-based PUF key generation methods.
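To make the offset idea concrete, here is a minimal, hypothetical sketch of offset-based helper data: an offset shifts each enrolled measurement to the center of a quantization bin, so later noisy re-measurements still decode to the same key bit. The actual POP method additionally groups outputs and maximizes entropy under reliability constraints, which this toy omits; STEP is an arbitrary illustrative parameter.

```python
STEP = 8.0  # illustrative quantization step

def enroll(value: float):
    symbol = round(value / STEP)      # quantized key symbol
    offset = symbol * STEP - value    # helper data: shift to the bin center
    return symbol % 2, offset         # one key bit plus the public offset

def reconstruct(noisy_value: float, offset: float) -> int:
    return round((noisy_value + offset) / STEP) % 2

bit, offset = enroll(42.7)
# Any noise smaller than STEP/2 decodes to the same bit.
assert reconstruct(42.7 + 2.5, offset) == bit
```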
To deal with frequent power outages in developing countries, people turn to solutions like the uninterruptible power supply (UPS), which stores electric energy during normal operating hours and uses it to meet energy needs during rolling blackout intervals. Locally produced UPSs of poor power quality are widely available on the market, and they have a negative impact on power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power loss in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers that is provided by the distribution company. It does not require batteries; therefore, there is no power loss or harmonic distortion due to the corresponding charging and discharging. Through load flow and harmonic analysis of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can help fix power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. The IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load factors. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the “WiMODino” using LoRaWAN with a “Lite Gateway” and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
With the surge in global data consumption driven by the proliferation of the Internet of Things (IoT), remote monitoring and control is becoming increasingly popular, with a wide range of applications from emergency response in remote regions to the monitoring of environmental parameters. Mesh networks are being employed to alleviate a number of issues associated with single-hop communication, such as low area coverage, limited reliability and range, and high energy consumption. Low-power Wireless Personal Area Networks (LoWPANs) are being used to help realize and permeate the applicability of IoT. In this paper, we present the design and test of IEEE 802.15.4-compliant smart IoT nodes with multi-hop routing. We first discuss the features of the software stack and the design choices in hardware that resulted in high RF output power, and then present field test results of different baseline network topologies in both rural and urban settings to demonstrate the deployability and scalability of our solution.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding to a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demands lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least squares method is developed to describe the relationship between random variations in capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
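The following sketch illustrates the general idea of separating systematic from random variation with ordinary least squares on synthetic data (the paper's actual model and circuit data differ): a linear trend across the capacitor positions is fitted and subtracted, and key bits are taken from the sign of the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = np.arange(64)                                # capacitor position index
systematic = 0.02 * pos + 1.1                      # gradient across the die
voltages = systematic + rng.normal(0, 0.005, 64)   # random process variation

X = np.column_stack([np.ones_like(pos), pos])      # linear model: a + b * pos
coeff, *_ = np.linalg.lstsq(X, voltages, rcond=None)
residual = voltages - X @ coeff                    # systematic part removed
bits = (residual > 0).astype(int)                  # simple sign-based key bits
print(bits)
```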
Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or different storage schemes have been proposed in the literature. The use of cloud resources for blockchain storage has been extensively studied in recent years. Nonetheless, block selection, i.e., identifying the blocks to be transferred to the cloud, remains a substantial challenge in integrating cloud resources with the blockchain. This paper proposes a deep reinforcement learning (DRL) approach to the block selection problem, converting its multi-objective optimization into a Markov decision process (MDP). We design a simulated blockchain environment for training and testing our proposed DRL approach. We utilize two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to solve the block selection problem and analyze their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer compared to the full replication approach of conventional blockchain systems. The slower DRL algorithm, A2C, achieves a run-time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution in the time-sensitive blockchain-IIoT environment.
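For illustration only, the skeleton below shows how such a block-selection MDP can be framed in the gym style; the state, block attributes, reward weighting, and termination rule are simplified stand-ins, not the paper's environment.

```python
import random

class BlockSelectionEnv:
    """Toy MDP: repeatedly pick a block to offload to the cloud."""

    def __init__(self, n_blocks=20, budget=60):
        self.n_blocks, self.budget = n_blocks, budget
        self.reset()

    def reset(self):
        self.size = [random.randint(1, 10) for _ in range(self.n_blocks)]
        self.freq = [random.random() for _ in range(self.n_blocks)]  # access frequency
        self.local = set(range(self.n_blocks))  # blocks still stored on the peer
        return tuple(sorted(self.local))

    def step(self, block_id):
        self.local.discard(block_id)
        used = sum(self.size[i] for i in self.local)
        # Reward freed space, but penalize offloading frequently accessed blocks.
        reward = self.size[block_id] - 5.0 * self.freq[block_id]
        done = used <= self.budget
        return tuple(sorted(self.local)), reward, done

env = BlockSelectionEnv()
state = env.reset()
while True:
    action = random.choice(sorted(env.local))  # a trained PPO/A2C policy goes here
    state, reward, done = env.step(action)
    if done:
        break
```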
An Overview of Technologies for Improving Storage Efficiency in Blockchain-Based IIoT Applications
(2022)
Since the inception of blockchain-based cryptocurrencies, researchers have been fascinated with the idea of integrating blockchain technology into other fields, such as health and manufacturing. Despite the benefits of blockchain, which include immutability, transparency, and traceability, certain issues that limit its integration with IIoT still linger. One of these prominent problems is the storage inefficiency of the blockchain. Due to the append-only nature of the blockchain, the growth of the blockchain ledger inevitably leads to high storage requirements for blockchain peers. This poses a challenge for its integration with the IIoT, where high volumes of data are generated at a relatively faster rate than in applications such as financial systems. Therefore, there is a need for blockchain architectures that deal effectively with the rapid growth of the blockchain ledger. This paper discusses the problem of storage inefficiency in existing blockchain systems, how this affects their scalability, and the challenges that this poses to their integration with IIoT. This paper explores existing solutions for improving the storage efficiency of blockchain–IIoT systems, classifying these proposed solutions according to their approaches and providing insight into their effectiveness through a detailed comparative analysis and examination of their long-term sustainability. Potential directions for future research on the enhancement of storage efficiency in blockchain–IIoT systems are also discussed.
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, whereas fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low Latency Communication (URLLC). However, it is interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement, and evaluate latency reduction techniques for latency-critical applications. The two elements we identified, namely the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay until received information is processed in both the control and user planes. We then analyze the potential of these latency reduction techniques for URLLC applications. To this end, we implement them in the Long Term Evolution (LTE) module of the ns-3 simulator and evaluate the performance of the proposed techniques in two different application fields: industrial automation and intelligent transportation systems. Our detailed simulation results indicate that LTE can satisfy the low-latency requirements for a large choice of use cases in each field.
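As a back-of-the-envelope illustration of where these two techniques save time, the following sketch sums a simplified uplink latency budget. The numbers are order-of-magnitude figures chosen for illustration, not results from the paper's simulations.

```python
def uplink_latency_ms(sps=False, short_tti=False):
    grant = 0.0 if sps else 9.0        # scheduling request + grant, skipped with SPS
    tti = 0.143 if short_tti else 1.0  # 2-symbol short TTI vs. 1 ms subframe
    processing = 3.0                   # UE/eNB processing, assumed constant here
    return grant + tti + processing

for sps in (False, True):
    for stti in (False, True):
        print(f"SPS={sps!s:5} sTTI={stti!s:5} -> {uplink_latency_ms(sps, stti):5.2f} ms")
```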
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator 3 (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the LTE module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of the resource scheduling method on uplink latency.
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low Latency Communications (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
Next-generation cellular networks are expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart waste bins, but there is additional demand for applications with low latency requirements, like industrial automation, driverless cars, and so on. Improvements are required in 4G Long Term Evolution (LTE) networks on the way towards next-generation cellular networks in order to provide very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low Latency Communication (URLLC) networks.
Integration of BACNET OPC UA-Devices Using a JAVA OPC UA SDK Server with BACNET Open Source Library
(2014)
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of the IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinders its integration with the IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or reducing blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in the IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on the peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effectively achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey of state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
Although short-range wireless communication explicitly targets local and very regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. In particular, the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits in range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies that support Long Range (LoRa™), describes the outdoor setup at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results, and discusses the strengths and weaknesses of this technology.
In this paper, we study the runtime performance of symmetric cryptographic algorithms on an embedded ARM Cortex-M4 platform. Symmetric cryptographic algorithms can serve to protect the integrity and optionally, if supported by the algorithm, the confidentiality of data. A broad range of well-established algorithms exists, where the different algorithms typically have different properties and come with different computational complexity. On deeply embedded systems, the overhead imposed by cryptographic operations may be significant. We execute the algorithms AES-GCM, ChaCha20-Poly1305, HMAC-SHA256, KMAC, and SipHash on an STM32 embedded microcontroller and benchmark the execution times of the algorithms as a function of the input lengths.
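A host-side analogue of this benchmarking method is sketched below: it times HMAC-SHA256 from the Python standard library over growing input lengths. The paper's measurements are of C implementations on an STM32, so absolute numbers differ entirely; only the method (execution time as a function of input length) carries over.

```python
import hmac, hashlib, time

key = b"\x00" * 32
for n in (16, 64, 256, 1024, 4096):
    msg = b"\xaa" * n
    t0 = time.perf_counter()
    for _ in range(1000):
        hmac.new(key, msg, hashlib.sha256).digest()
    dt = (time.perf_counter() - t0) / 1000          # average per operation
    print(f"{n:5d} bytes: {dt * 1e6:8.1f} us per MAC")
```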
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Protocols of legacy IT security, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach, which can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully use this rekeying mechanism.
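Conceptually, embedding rekeying into the cyclic traffic can be as simple as tagging every frame with a key index, so the receiver switches keys per frame without any dedicated control message. The hypothetical sketch below shows that idea with truncated HMAC tags; the paper's actual mechanism and framing are more elaborate.

```python
import hmac, hashlib

keys = {0: b"old-key-0000000", 1: b"new-key-1111111"}  # current and next key

def protect(frame: bytes, key_id: int) -> bytes:
    tag = hmac.new(keys[key_id], frame, hashlib.sha256).digest()[:8]
    return bytes([key_id]) + frame + tag

def verify(packet: bytes) -> bytes:
    key_id, frame, tag = packet[0], packet[1:-8], packet[-8:]
    expect = hmac.new(keys[key_id], frame, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("bad tag")
    return frame

# The sender switches from key 0 to key 1 mid-stream; the receiver follows per frame.
for key_id in (0, 0, 1, 1):
    assert verify(protect(b"cyclic process data", key_id)) == b"cyclic process data"
```

This way the key change rides along with the process data, so traffic never stalls while frames protected with the old key are still in flight.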
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, the paper presents a model of a cyclic communication protocol using a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we verify the protocol using the model checking technique.
Narrowband IoT (NB-IoT) as a radio access technology for the cellular Internet of Things (cIoT) is gaining traction due to attractive system parameters, new proposals in the 3rd Generation Partnership Project (3GPP) Release 14 for reduced power consumption, and ongoing worldwide deployment. As per 3GPP, the low-power and wide-area use cases in the 5G specification will be addressed by the early NB-IoT and Long-Term Evolution for Machines (LTE-M) based technologies. Since these cIoT networks will operate in a spatially distributed environment, various challenges have to be addressed for tests and measurements of these networks. To meet these requirements, unified emulated and field testbeds for NB-IoT networks were developed and used for extensive performance measurements. This paper analyses the results of these measurements with regard to RF coverage, signal quality, latency, and protocol consistency.
Wireless communication technologies play a major role in enabling megatrends like the Internet of Things (IoT) and Industry 4.0. The Narrowband Wireless WAN (NBWWAN) was introduced to meet the long-range and low-power requirements of spatially distributed wireless communication use cases. These networks introduce additional challenges in testing, because the network topology and RF characteristics become particularly complex, and thus a multitude of different scenarios must be tested. This paper describes an infrastructure for the automated testing of radio communication and for systematic measurements of the network performance of NBWWAN.
Spatially Distributed Wireless Networks (SDWN) are one of the basic technologies for Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. For many of these applications, SDWNs have strict requirements such as low cost, simple installation and operation, and high potential flexibility and mobility. Among the different Narrowband Wireless Wide Area Networking (NBWWAN) technologies introduced to address these categories of wireless networking requirements, Narrowband Internet of Things (NB-IoT) is gaining traction due to attractive system parameters, an energy-saving mode of operation with low data rates and bandwidth, and its applicability in 5G use cases. Since several technologies are available and the underlying use cases come with various requirements, it is essential to perform a systematic comparative analysis of competing technologies to choose the right one. It is also important to perform testing during the different phases of the system development life cycle. This paper describes a systematic test environment for the automated testing of radio communication and for systematic measurements of the performance of NB-IoT.
One of the main requirements of spatially distributed Internet of Things (IoT) solutions is to have networks with wide coverage that connect many low-power devices. Low-Power Wide-Area Networks (LPWAN) and Cellular IoT (cIoT) networks are promising candidates in this space. LPWAN approaches such as LoRaWAN, SigFox, and MIOTY are based on enhanced physical layer (PHY) implementations to achieve long range. Narrowband versions of cellular networks, such as Narrowband IoT (NB-IoT) and Long-Term Evolution for Machines (LTE-M), offer reduced bandwidth and simplified node and network management mechanisms. Since the underlying use cases come with various requirements, it is essential to perform a comparative analysis of competing technologies. This article provides a systematic performance measurement and comparison of LPWAN and NB-IoT technologies in a unified testbed and also discusses the necessity of future fifth-generation (5G) LPWAN solutions.
IPv6 over resource-constrained devices (6Lo) has emerged as a de-facto standard for Internet of Things (IoT) applications, especially in home and building automation systems. We provide the results of an investigation of the applicability of 6LoWPAN with RPL mesh networks for home and building automation use cases. The proper selection of Trickle parameters and neighbor reachable timeouts is important in the RPL protocol suite to respond efficiently to any path failure. These parameters were analyzed in the context of energy consumption with respect to the number of control packets. The measurements were performed in an Automated Physical Testbed (APTB). The results match the recommendations of RFC 7733 for selecting the various parameters of the RPL protocol suite. This paper shows the relationship between various RPL parameters and the control traffic overhead during network rebuild. Comparative measurements with Bluetooth Low Energy (BLE) in this work showed that 6Lo with RPL outperformed BLE in this use case, with lower control traffic overhead.
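For reference, the Trickle timer at the heart of these parameters works roughly as sketched below (after RFC 6206, simplified, with the transmission decision left as a comment): the interval doubles from Imin towards Imax while the network is consistent, which is exactly why the parameter choice drives the control-traffic overhead.

```python
import random

class Trickle:
    def __init__(self, imin=1.0, imax_doublings=8, k=3):
        self.imin = imin
        self.imax = imin * 2 ** imax_doublings
        self.k = k                    # redundancy constant
        self.interval = imin
        self._new_interval()

    def _new_interval(self):
        self.counter = 0
        self.t = random.uniform(self.interval / 2, self.interval)
        # at time t: transmit only if self.counter < self.k (suppression)

    def hear_consistent(self):
        self.counter += 1

    def interval_expired(self, inconsistent=False):
        # An inconsistency (e.g., a path failure) resets to the fast rate.
        self.interval = self.imin if inconsistent else min(2 * self.interval, self.imax)
        self._new_interval()

t = Trickle()
t.hear_consistent()
t.interval_expired()                    # consistent round: interval doubles
t.interval_expired(inconsistent=True)   # inconsistency: back to Imin
```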
The EREMI project is a 2-year project funded under the ERASMUS+ framework programme. Its team has developed and will validate an advanced higher education program, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of low- or non-digitized physical infrastructure. All of this will be achieved by applying IoT technologies towards efficient industrial systems and by building highly educated human capital on these economically, politically, and technically crucial topics, which are highly relevant for the rapidly developing industries and economies of countries undergoing intensive economic and industrial transformation: Bulgaria, North Macedonia, and Romania. Efficiency will be attained by utilizing the experience and expertise of the involved German partner organisation.
With the many advances in sensor technology and the Internet of Things, the Vehicle Ad Hoc Network (VANET) is entering a new generation. VANET's current technical challenges are deploying a decentralized architecture and protecting privacy. Because blockchain offers decentralization, distribution, mass storage, and tamper resistance, this paper designs a new decentralized architecture using blockchain technology, called blockchain-based VANET. Blockchain-based VANET can effectively resolve centralization problems and mutual distrust between VANET units. To achieve this, the blockchain needs to be scalable enough to serve VANET. In this system, our focus is on the reliability of incoming messages on the network. Vehicles check the validity of received messages using the proposed Bayesian formula for the trust management system and information saved in the blockchain. Then, based on the validation result, the vehicle computes a rating for each message type and message source vehicle. Vehicles upload the computed ratings to Roadside Units (RSUs) in order to calculate the net reliability value. Finally, the RSUs, using a sharding consensus mechanism, generate blocks that include the net reliability value as a transaction. In this system, all RSUs collaboratively maintain the latest updated blockchain. Our experimental results show that the proposed system is effective, scalable, and dependable in the gathering, computation, organization, and retrieval of trust values in VANET.
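A common way to realize such a Bayesian rating is a Beta-distribution update, sketched below for illustration; the paper's exact formula and the aggregation at the RSUs may differ.

```python
class BetaTrust:
    """Beta(alpha, beta) reputation: trust = expected probability of validity."""

    def __init__(self, alpha=1.0, beta=1.0):   # uniform prior
        self.alpha, self.beta = alpha, beta

    def update(self, message_was_valid: bool):
        if message_was_valid:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)

rating = BetaTrust()
for valid in (True, True, False):
    rating.update(valid)
print(round(rating.trust, 2))  # 0.6 after two valid and one invalid message
```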
The importance of machine learning has been increasing dramatically for years. From assistance systems to production optimisation to support for the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, and the unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
Industrial companies can use blockchain to assist them in resolving their trust and security issues. In this research, we provide a fully distributed blockchain-based architecture for the industrial IoT, relying on trust management and reputation to enhance the trustworthiness of nodes. The purpose of this contribution is to introduce our system architecture and to show how to secure network access for users with dynamic authorization management. All decisions in the system are made by the consensus of trusted nodes and are fully distributed. The remarkable feature of this system architecture is that the influence of a node's power is lowered depending on its Proof of Work (PoW) and Proof of Stake (PoS), and a node's significance and authority are determined by its behavior in the network. This impact is based on game theory and an incentive mechanism for reputation between nodes. The system design can be used on legacy machines, which means that security and distributed systems can be put in place at low cost on industrial systems. While there are no numerical results yet, this work, based on the open questions regarding the majority problem and the proposed solutions based on a game-theoretic mechanism and a trust management system, points to what and how industrial IoT and existing blockchain frameworks that focus only on the power of PoW and PoS can be secured more effectively.
The invention relates to a method for synchronizing a network device for wireless communication, in particular a network end device, in a wireless network, wherein the network device comprises an integrated circuit for wireless communication (IWC), a synchronization event detector (SED) for detecting synchronization events, a controllable clock generator (CCG) for generating a synchronized time signal T_CCG, and a synchronization control device (SCD) for controlling the synchronization process of the network device. During a synchronization phase, the following method steps are performed in the network device: First, a synchronization frame is received and a synchronization timestamp T_AP is detected. Subsequently, a timestamp T_B, which defines the reception time of the synchronization frame, is generated by means of an IWC clock contained in the IWC. In a further step, a potential change representing a synchronization event is generated at a port of the IWC. Furthermore, a timestamp T_SE, which defines the time of the synchronization event, is generated by means of the IWC clock. The SED detects the synchronization event by evaluating the temporal length of the potential change at the port of the IWC and generates a timestamp T_S using the synchronized time signal T_CCG, wherein the timestamp T_S defines the same point in time of the synchronization event as the timestamp T_SE. The timestamps T_AP, T_B, T_SE, and T_S, determined by processing one or more synchronization event frames according to steps (a) to (d), are then used to synchronize the synchronized time signal T_CCG generated by the CCG to the master time signal.
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms. It seems to be a prevalent assumption that the use of such asymmetric cryptographic algorithms is unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is entirely feasible on resource-constrained devices, if carefully chosen and well implemented. This allows the use of the well-proven TLS protocol also for applications from the (Industrial) Internet of Things, including fieldbus communication.
Android is an operating system which was developed for use in smart mobile phones and is the current leader in this market. A lot of effort is being spent to make Android available to the embedded world as well. Many embedded systems do not have a local GUI and are therefore called headless devices. This paper presents the results of an analysis of the general suitability of Android for headless embedded systems and weighs the advantages and disadvantages. It focuses on hardware-related issues, i.e., to what extent Android supports the hardware peripherals normally used in embedded systems.
In a first aspect, the invention relates to a device for the transcutaneous application of an electrical stimulation stimulus to an ear. The device comprises a circuit carrier, at least two electrodes, and a control unit, wherein the control unit is configured to generate an electrical stimulation signal at the electrodes based on stimulation parameters. The device, in particular a surface of its circuit carrier, is adapted to the anatomical shape of an ear, so that the electrodes applied to the surface of the circuit carrier contact selected regions of the ear. The device is characterized in that it further comprises a sensor for detecting at least one physiological parameter, and a control unit configured to adapt the stimulation parameters for the stimulation stimulus based on the at least one physiological parameter. In a further aspect, the invention relates to a method for manufacturing the device according to the invention.
The WirelessHART protocol was specifically designed for real-time communication in the wireless sensor network domain to meet the requirements of industrial process automation. Whereas the major purpose of WirelessHART is the read-out of sensors with moderate real-time requirements, an increasing demand for the integration of actuator applications can be observed. Therefore, it must be verified that the WirelessHART protocol provides sufficient support for real-time industrial requirements. To this end, the delay of especially burst and command messages from actuator and sensor nodes to the gateway, and vice versa, must be analyzed. In this paper, we implemented a WirelessHART network scenario in the WirelessHART simulator for NS-2 [8] and simulated and analyzed its timing characteristics under ideal and noisy conditions. We evaluated the performance of the implementation in order to verify whether the requirements of industrial process control can be met. This implementation offers an early alternative to expensive testbeds for WirelessHART in real-time actuator applications.
Temperature regulation is an important component of modern high-performance single-core and multi-core processors. In particular, high operating frequencies and architectures with an increasing number of monolithically integrated transistors result in high power dissipation and, since processor chips convert the consumed electrical energy into thermal energy, in high operating temperatures. High operating temperatures of processors can have drastic consequences regarding chip reliability, processor performance, and leakage currents. External components like fans or heat spreaders can help to reduce the processor temperature, with the disadvantage of additional costs and reduced reliability. Therefore, software-based algorithms for dynamic temperature management are an attractive alternative and are well known as Dynamic Thermal Management (DTM). However, the existing approaches to DTM do not take into account the requirements of real-time embedded computing, which is the objective of the given project. The first steps are the profiling and thermal modeling of the system, which is reported in this paper for a Freescale i.MX6Q quad-core microprocessor. An analytical model is developed and verified by an extensive set of measurement runs.
Experiences with a telecare platform integration of ZigBee sensors into a middleware platform
(2012)
The increasing number of transistors with ever smaller feature sizes leads to increasing power consumption in modern processors. This applies in particular to high-end processors operated at high clock frequencies. The consumed power is converted into heat, which results in a temperature increase of the processors. High operating temperatures cause, among other things, reduced computing performance, a shorter processor lifetime, and higher leakage currents. For these reasons, active, dynamic thermal management is becoming increasingly important. This contribution presents an extension to the standard Linux scheduler in kernel version 3.0 for embedded systems: a PID controller that performs dynamic frequency and voltage scaling based on a given target temperature. The experiments on the Freescale i.MX6 quad-core processor show that the PID controller can regulate the operating temperature of the processor to the target temperature. It is the basis for a predictive control scheme to be developed in the future.
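The control loop can be pictured as in the following sketch: a PID controller maps the temperature error onto a discrete table of operating frequencies. Gains, setpoint, and frequency table are illustrative values, not those used on the i.MX6.

```python
FREQS_MHZ = [396, 792, 996, 1200]  # allowed operating points (illustrative)

class ThermalPID:
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, setpoint=70.0):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral, self.prev_err = 0.0, 0.0

    def next_freq(self, temp_c: float, dt: float = 1.0) -> int:
        err = self.setpoint - temp_c          # > 0: thermal headroom left
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Clamp the controller output onto the discrete frequency table.
        idx = int(max(0, min(len(FREQS_MHZ) - 1, round(out))))
        return FREQS_MHZ[idx]

pid = ThermalPID()
print(pid.next_freq(85.0))  # too hot -> lowest frequency
print(pid.next_freq(60.0))  # headroom -> higher frequency
```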
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented a cost-efficient, continuous signal monitoring solution for Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if an interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
The M-Bus protocol (EN13757) is in widespread use for metering applications within home area and neighborhood area networks, but lacks a strict specification. This may lead to incompatibilities in real-life installations and to problems in the deployment of new M-Bus networks. This paper presents the development of a novel testbed to emulate physical Metering Bus (M-Bus) networks with different topologies and to allow the flexible verification of real M-Bus devices in real-world scenarios. The testbed is designed to support device manufacturers and service technicians in the test and analysis of their devices within a specific network before installation. The testbed is fully programmable, allowing flexible changes of network topologies, cable lengths, and cable types. It is easy to use, as only the master and slave devices have to be physically connected. This allows multiple tests, including automated regression tests, to be performed autonomously. The testbed is available to other researchers and developers. We invite companies and research institutions to use this M-Bus testbed to increase the common knowledge and real-world experience.
Machine-to-machine communication is continuously extending to new application fields. Especially smart metering has the potential to become the first truly large-scale M2M application. Although in the future distributed meter devices will mainly be connected via dedicated primary communication protocols, like ZigBee, Wireless M-Bus, or similar, a major percentage of all meters will be connected via point-to-point communication using GPRS or UMTS platforms. Thus, such meter devices have to be extremely cost- and energy-efficient, especially if the devices are battery-based and powered for several years by a single battery. This paper presents the development of an automated measurement unit for power and time, so that energy characteristics can be recorded. The measurement unit includes a hardware platform for the device under test (DUT) and a database-backed software environment for the smooth execution and analysis of the measurements.
Blockchain interoperability: the state of heterogenous blockchain-to-blockchain communication
(2023)
Blockchain technology has been increasingly adopted over the past few years since the introduction of Bitcoin, with several blockchain architectures and solutions being proposed. Most proposed solutions have been developed in isolation, without a standard protocol or cryptographic structure to work with. This has led to the problem of interoperability, where solutions running on different blockchain platforms are unable to communicate, limiting the scope of use. With blockchains being adopted in a variety of fields such as the Internet of Things, it is expected that the problem of interoperability, if not addressed quickly, will stifle technological advancement. This paper presents the current state of interoperability solutions proposed for heterogeneous blockchain systems. A look is taken at interoperability solutions not only for cryptocurrencies, but also for general data-based use cases. Current open issues in heterogeneous blockchain interoperability are presented. Additionally, some possible research directions are presented to enhance and extend the existing blockchain interoperability solutions. It was discovered that, though there are a number of proposed solutions in the literature, few have seen real-world implementation. The lack of blockchain-specific standards has slowed the progress of interoperability. It was also observed that most of the proposed solutions are developed targeting cryptocurrency-based applications.
Active safety systems for advanced driver assistance act within a complex, dynamic traffic environment, featuring various sensor systems which detect the vehicles' surroundings and interior. This paper describes recent progress towards a performance evaluation of car-to-car (C2C) communication for active safety systems, in particular for crash constellation prediction. The methodology introduced in this work is designed to evaluate the impact of different sensors on the accuracy of a crash constellation prediction algorithm. The benefit of C2C communication (viewed as a virtual sensor) within a sensor data fusion architecture for pre-crash collision prediction is explored. To this end, a simulation environment for accident scenario analysis, reproducing real-world sensor behaviour, is designed and implemented. Performance evaluation results show that C2C increases confidence in the estimated position of the oncoming vehicle. With C2C enhancement, the given accuracy in time-to-collision (TTC) estimation is achievable about 110 ms earlier for moderate velocities in the TTC range of [0.5 s .. 0.2 s]. The uncertainty in the vehicle position prediction at the time of collision can be reduced by about half by integrating C2C communication into the sensor data fusion.
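The basic quantity behind these results is straightforward; the fusion architecture in the paper refines the inputs to it. As a minimal illustration:

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC = remaining gap / closing speed; infinite if the gap is opening."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

# Two vehicles closing at 80 km/h (22.2 m/s) with an 11.1 m gap -> TTC = 0.5 s.
print(time_to_collision(11.1, 22.2))
```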
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: positions of devices may not be accurately marked on a map, or the marked positions may not correspond to the truth. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of the positions of the equipment on the map, using trajectories restored by inertial measurement units mounted on mobile objects, like pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting misplaced access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, but requires more computational power.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of equipment on a map with simultaneous mobile object trajectory localization (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more traction for the seamless connectivity of embedded devices in the Internet of Things (IoT). Whereas the lower layers (IEEE 802.15.4 and 6LoWPAN) are already well defined and consolidated with regard to frame formats, header compression, routing protocols, and commissioning procedures, there is still an abundant choice of possibilities on the application layer. Currently, various groups are working towards standardization of the application layer, namely the ETSI Technical Committee on M2M, the IP for Smart Objects (IPSO) Alliance, the Lightweight M2M (LWM2M) protocol of the Open Mobile Alliance (OMA), and OneM2M. This multitude of approaches leaves the system developer with the agony of choice. This paper selects, presents, and explains one of the promising solutions, discusses its strengths and weaknesses, and demonstrates its implementation.
The variety of protocols, which has to be considered at practically all levels of network communication, represents one of the great challenges in the progressive automation of the smart home. Under the umbrella term Internet of Things, numerous new developments, standards, alliances, and so-called ecosystems are currently emerging. These aim at the horizontal integration of cross-domain applications, and almost all of them pursue the goal of simplifying the situation, accelerating developments, and achieving market success. Unfortunately, this variety currently makes the world even more complex and thus carries the risk of achieving exactly the opposite of the original intentions. This contribution attempts to categorize the developments as systematically as possible and to describe possible solution approaches.
The Metering Bus, also known as M-Bus, is a European standard (EN13757-3) for reading out metering devices, like electricity, water, gas, or heat meters. Although real-life M-Bus networks can reach a significant size and complexity, only very simple protocol analyzers are available to observe and maintain such networks. In order to provide developers and installers with the ability to analyze the real bus signals easily, a web-based monitoring tool for the M-Bus has been designed and implemented. Combined with a physical bus interface, it allows for measuring and recording the bus signals. To this end, a circuit has first been developed which transforms the voltage- and current-modulated M-Bus signals into a voltage signal that can be read by a standard ADC and processed by an MCU. The bus signals and packets are displayed using a web server, which analyzes and classifies the frame fragments. As an additional feature, an oscilloscope functionality is included in order to visualize the physical signal on the bus. This paper describes the development of the read-out circuit for the wired M-Bus and the data recovery.
Wireless Sensor Networks (WSNs) have emerged as an interesting topic in the research community due to their manifold applications. One of the main challenges of this field is the energy consumption of the nodes, which typically is quite restricted due to the required lifetime of such WSNs. To solve this problem, several energy-saving MAC protocols have been developed so far. One of them, recently presented by the authors, is the so-called SmartMAC, an extension to the IEEE 802.15.4 standard. In this paper, we present the implementation details of the porting of the SmartMAC protocol to the discrete-event network simulator NS3. We develop this module for NS3 to simulate the performance, multi-node execution, and multi-node configuration. Along with this model, we also present an energy model for the evaluation of the energy consumption. The current implementation in NS3 is based on LR-WPAN (Low-Rate Wireless Personal Area Networks) as specified by the IEEE 802.15.4 (2006) standard. The simulation results show that SmartMAC, with its sleep and wake-up mechanisms for the transceivers, is significantly more efficient than the current NS3 MAC (Medium Access Control) scheme.
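The payoff of such sleep/wake-up schemes follows directly from a duty-cycle energy estimate, sketched here with typical IEEE 802.15.4 transceiver currents (illustrative figures, not measurements from the paper):

```python
I_RX, I_SLEEP = 18e-3, 1e-6   # receive and sleep currents in A (typical values)
V = 3.0                       # supply voltage in V

def avg_power_w(duty_cycle: float) -> float:
    """Average power for a given fraction of time the radio is awake."""
    return V * (duty_cycle * I_RX + (1 - duty_cycle) * I_SLEEP)

print(f"always-on:     {avg_power_w(1.0) * 1e3:.2f} mW")
print(f"1% duty cycle: {avg_power_w(0.01) * 1e3:.3f} mW")
```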
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, as they may allow the control of physical systems, and as they potentially also open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence. It is only recently that these techniques have also been applied to IoT networks. In this paper, we present a survey of machine learning and deep learning methods for intrusion detection, and we investigate how previous works have used federated learning for IoT cybersecurity. To this end, we present an overview of IoT protocols and potential security risks. We also report the techniques and datasets used in the studied works, discuss the challenges of using ML, DL, and FL for IoT cybersecurity, and provide future insights.
A Localization System Using Inertial Measurement Units from Wireless Commercial Handheld Devices
(2013)
This paper describes a newly developed technology for the calculation of trajectories of mobile objects, which is based on commercially available sensors integrated into modern mobile phones and other gadgets. First, a step counting technique was implemented. Second, a novel step length estimator is proposed. These two algorithms utilize data from the accelerometer only. Third, the heading information was obtained using a gyroscope with a complementary filter in quaternion form. The combined algorithm was implemented on a low-power ARM processor to provide the trajectory points relative to an initial point. The proposed technique was tested by 10 subjects, in different shoes and with different paces. The dependence of the performance of the technology on the attachment point of the mobile device is weak. The proposed algorithms have better balance and estimation accuracy and depend to a lesser degree on the variation in people's physical parameters in comparison with existing techniques. In the experiments, inertial measurement units were mounted in different places, i.e., in the hand, or in trouser or T-shirt pockets. The return position error did not exceed 5% of the total travelled distance in all performed tests.
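As a flavour of the accelerometer-only part of the pipeline, the sketch below counts steps by thresholded peaks in the acceleration magnitude. The thresholds are illustrative, and the published system adds step-length estimation and the quaternion complementary filter for heading, which this sketch omits.

```python
def count_steps(acc_norm, threshold=11.0, min_gap=15):
    """acc_norm: accelerometer magnitude samples (m/s^2) at a fixed sampling rate.

    A step is a local maximum above `threshold`, at least `min_gap` samples
    after the previous detected step (debouncing).
    """
    steps, last = 0, -min_gap
    for i in range(1, len(acc_norm) - 1):
        is_peak = acc_norm[i - 1] < acc_norm[i] > acc_norm[i + 1]
        if is_peak and acc_norm[i] > threshold and i - last >= min_gap:
            steps, last = steps + 1, i
    return steps
```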
Ranging errors are inevitable in all local positioning systems, including those based on the Time-of-Flight (ToF) technique. Experimental results show that the major cause of these errors is signal degradation due to multipath propagation. This effect is especially critical under Non-Line-of-Sight (NLOS) conditions. This paper describes the causes of ranging errors for the nanoLOC™ ToF technology and presents estimations of the probability density functions of such errors under different NLOS conditions. The provided estimations allow improving the localization accuracy through the subsequent mitigation of the ranging errors in the measurements. Additionally, it is proposed to extend the set of considered NLOS conditions to further improve the accuracy.
On the possibility to use leaky feeders for positioning in chirp spread spectrum technologies
(2014)
Real-Time Localization Systems using electromagnetic waves have evolved significantly during the last years. They might also be used in industrial and mining environments, where topologies may include tunnels in which it is difficult to ensure field coverage. Leaky feeder cables are a common solution for normal radio communication in such settings. In this paper, we study the possibility of using leaky feeders also for Time-of-Flight-based real-time localization in such linear topologies as tunnels, and possibly also for 2D localization. The theoretical analysis is verified with real-life measurements performed using Chirp Spread Spectrum technologies.
Ultra-wideband (UWB) signals are well suited both for short-range wireless communication and for high-precision localization applications. Channel impulse response (CIR) analysis in UWB systems is a major element of localization estimation. In this paper, practical aspects of CIR are presented. In particular, a technique for the construction of the accumulated echo-gram of a multipath-delayed signal is proposed. Decawave hardware was used to demonstrate the technique for analyzing the fine structure of signals with sub-nanosecond resolution. Temporal stability, reliability, and two-way characteristics of such echo-grams are discussed as well. The results of using two EVK1000 radio modules as a radar installation to detect a target in indoor environments show that low-cost UWB intrusion detection and through-the-wall vision systems might be developed using the proposed technique.
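The core idea of an accumulated echo-gram can be sketched as follows: CIR snapshots are aligned at their first-path index and averaged, so static multipath components reinforce while noise averages out. This is a hedged sketch of that principle only; the alignment details and the Decawave register interface of the actual system are not reproduced.

```python
# Sketch: accumulate aligned |CIR| snapshots into an echo-gram.
# Function name and the fixed accumulation window are illustrative assumptions.
import numpy as np

def accumulate_echogram(cirs: list[np.ndarray], first_path_idx: list[int],
                        window: int = 128) -> np.ndarray:
    """Average |CIR| over many snapshots, aligned at the first path."""
    acc = np.zeros(window)
    for cir, fp in zip(cirs, first_path_idx):
        segment = np.abs(cir[fp:fp + window])   # align each snapshot at its first path
        acc[:len(segment)] += segment
    return acc / max(len(cirs), 1)
```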
Real-Time Ethernet has become the major communication technology for modern automation and industrial control systems. On the one hand, this trend increases the need for an automation-friendly security solution, as such networks can no longer be considered sufficiently isolated. On the other hand, it shows that, despite diverging requirements, the domain of Operational Technology (OT) can derive advantage from high-volume technology of the Information Technology (IT) domain. Based on these two sides of the same coin, we study the challenges and prospects of approaches to communication security in real-time Ethernet automation systems. In order to capitalize on the expertise aggregated in decades of research and development, we put a special focus on the reuse of well-established security technology from the IT domain. We argue that enhancing such technology to become automation-friendly is likely to result in more robust and secure designs than greenfield approaches. Because of its widespread deployment and the (to this date) nonexistence of a consistent security architecture, we use PROFINET as a showcase for our considerations. Security requirements for this technology are defined, and different well-known solutions are examined according to their suitability for PROFINET. Based on these findings, we elaborate the necessary adaptations for deployment on PROFINET.
Recently, the demand for scalable, efficient, and accurate Indoor Positioning Systems (IPS) has seen a rising trend due to their utility in providing Location Based Services (LBS). Visible Light Communication (VLC) based IPS designs, VLC-IPS, leverage Light Emitting Diodes (LEDs) in indoor environments for localization. Among VLC-based designs, Time Difference of Arrival (TDOA) based techniques have been shown to provide very low errors in the relative position of receivers. The considered system consists of five LEDs that act as transmitters and a single receiver (a photodiode or an image sensor in a smartphone) whose position coordinates in an indoor environment are to be determined. As a performance criterion, the Cramer-Rao Lower Bound (CRLB) is derived for range estimation, and the impact of various factors, such as the LED transmission frequency, the position of the reference LED light, and the number of LED lights, on the localization accuracy is studied. Simulation results show that, given optimal values of these factors, a location estimation accuracy on the order of a few centimeters can realistically be achieved.
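For reference, the classical CRLB for time-delay (and hence range) estimation takes the following generic form; the paper derives a VLC/TDOA-specific bound, which is not reproduced here.

```latex
% Classical CRLB for time-delay estimation, with effective signal
% bandwidth \beta and signal-to-noise ratio SNR; c is the speed of light.
\operatorname{var}(\hat{\tau}) \;\ge\; \frac{1}{8\pi^{2}\beta^{2}\,\mathrm{SNR}},
\qquad
\sigma_{\hat{d}} \;=\; c\,\sigma_{\hat{\tau}} \;\ge\; \frac{c}{2\sqrt{2}\,\pi\beta\sqrt{\mathrm{SNR}}}
```

A higher LED modulation frequency increases the effective bandwidth β and thus tightens the bound, which is consistent with the factors studied above.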
TSN, or Time-Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run, and secure such TSN networks, monitoring functionality is a must. The TSN standards already include some provisions for such functionality, and there are different methods to choose from. We implemented different methods to measure the time synchronization accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
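One common way to assess synchronization between two devices is the classic two-way timestamp exchange used by PTP-style protocols; whether the C library mentioned above uses exactly this method is not stated, so the following is background only. With t1 the request transmit time at the probing node, t2 its receive time at the peer, t3 the reply transmit time, and t4 its receive time back at the probing node, and assuming a symmetric path:

```latex
% Two-way time transfer: clock offset \theta and one-way path delay \delta
\theta = \frac{(t_2 - t_1) - (t_4 - t_3)}{2},
\qquad
\delta = \frac{(t_2 - t_1) + (t_4 - t_3)}{2}
```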
The integration of Internet of Things devices onto the Blockchain implies an increase in the transactions that occur on the Blockchain, thus increasing the storage requirements.
A solution approach is to leverage cloud resources for storing blocks within the chain. The paper therefore proposes two solutions to this problem. The first is an improved hybrid architecture design which uses containerization to create a side chain on a fog node for the devices connected to it, together with an Advanced Time-variant Multi-objective Particle Swarm Optimization Algorithm (AT-MOPSO) for determining the optimal number of blocks that should be transferred to the cloud for storage. This algorithm uses time-variant weights for the velocity of the particle swarm optimization along with the non-dominated sorting and mutation schemes from NSGA-III. The proposed algorithm was compared with the original MOPSO algorithm, the Strength Pareto Evolutionary Algorithm (SPEA-II), the Pareto Envelope-based Selection Algorithm with region-based selection (PESA-II), and NSGA-III. The proposed AT-MOPSO showed better results than these algorithms in terms of cloud storage cost and query probability optimization. Importantly, AT-MOPSO achieved 52% better energy efficiency compared to NSGA-III.
To demonstrate how this algorithm can be applied to a real-world Blockchain system, the BISS industrial Blockchain architecture was adapted and modified, showing how AT-MOPSO can be used with existing Blockchain systems and the benefits it provides.
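The "time-variant weights" idea named above can be sketched as follows: the inertia weight of the particle swarm decays over the iterations, so the swarm explores early and exploits late. This is a hedged sketch of that mechanism only; the paper's concrete weighting function, objectives, and NSGA-III-style sorting are not reproduced, and all parameter values are assumptions.

```python
# Illustrative time-variant inertia weight for a PSO velocity update.
import numpy as np

def time_variant_weight(t: int, t_max: int,
                        w_start: float = 0.9, w_end: float = 0.4) -> float:
    """Linearly decaying inertia weight for iteration t of t_max (assumed form)."""
    return w_start - (w_start - w_end) * t / t_max

def pso_step(pos, vel, pbest, gbest, t, t_max, c1=2.0, c2=2.0):
    """One velocity/position update with a time-variant inertia weight."""
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = (time_variant_weight(t, t_max) * vel
           + c1 * r1 * (pbest - pos)      # pull toward personal best
           + c2 * r2 * (gbest - pos))     # pull toward global best
    return pos + vel, vel
```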
Wireless sensor networks have recently found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest from researchers. Such monitoring applications generally have relaxed latency requirements compared to their strict energy-efficiency requirements. A further challenge for this application class is the network topology, since deployments must scale to very large networks. At the same time, the power consumption of the devices making up the network must remain in focus in order to maximize the lifetime of the whole system. These devices are usually battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-on-Radio (WoR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, several designs for the integration of WoR into IEEE 802.15.4 are discussed, providing an overview of the trade-offs in energy consumption when deploying WoR schemes in a monitoring system.
Environmental monitoring is an attractive application field for Wireless Sensor Networks (WSN). Water level monitoring helps to increase the efficiency of water distribution and management. In Pakistan, the world's largest irrigation system covers 90,000 km of channels, which need to be monitored and managed on different levels. Especially the sensor systems for the small distribution channels need to be low-energy and low-cost. This contribution presents a technical solution for a communication system developed in a research project co-funded by the German Academic Exchange Service (DAAD). The communication module is based on IEEE 802.15.4 transceivers which are enhanced through Wake-on-Radio (WoR) to combine low-energy and real-time behavior. On higher layers, IPv6 (6LoWPAN) and corresponding routing protocols like the Routing Protocol for Low-power and Lossy Networks (RPL) can extend the range of the network. The data are stored in a database and can be viewed online via a web interface. Of course, automatic data analysis can also be performed.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest from researchers. The main challenges for these applications are the scalability of the network size and the energy efficiency of the spatially distributed motes. These devices are mostly battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-on-Radio (WoR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, a novel design for the integration of WoR into IEEE 802.15.4 is presented, which flexibly allows trade-offs in energy consumption between sender and receiver station as well as between real-time capability and energy consumption. For identical behavior, the proposed scheme is significantly more efficient than other schemes proposed in recent publications, while preserving backward compatibility with standard IEEE 802.15.4 transceivers.
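The WoR trade-off discussed in the entries above can be made concrete with a back-of-the-envelope duty-cycling model: longer sleep intervals between channel samplings lower the average current but increase the worst-case response time. All numbers below are placeholder assumptions, not measured values from these papers.

```python
# Back-of-the-envelope Wake-on-Radio energy model (all values assumed).
def wor_average_current_ma(i_sleep=0.002, i_listen=12.0, i_rx=14.0,
                           t_listen_ms=2.0, interval_ms=500.0,
                           duty_rx_ms=0.0):
    """Average current (mA) for periodic channel sampling every interval_ms."""
    active = t_listen_ms + duty_rx_ms
    return (i_listen * t_listen_ms + i_rx * duty_rx_ms
            + i_sleep * (interval_ms - active)) / interval_ms

# Longer intervals save energy but delay frame reception by up to one interval:
for interval in (100, 500, 2000):
    print(interval, "ms ->", round(wor_average_current_ma(interval_ms=interval), 4), "mA")
```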
Time-Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploying and integrating TSN in such networks. The IEEE 802.1Qcc standard proposes different configuration models to implement a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks; for wireless networks they are still considered work in progress. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations so as to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunications ultra-low energy (DECT ULE) wireless communication protocol.
Low-latency communication is essential to enable mission-critical machine-type communication use cases in cellular networks. Factory and process automation are major areas that require such low-latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in realistic industrial automation scenarios. The performance gain of adopting SPS is analyzed, and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles, and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine-Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
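The intuition behind the SPS gain examined in the two entries above can be captured in a toy latency model: with dynamic scheduling each uplink packet pays a scheduling-request/grant round trip, while with SPS the device only waits for its next pre-allocated slot. The timing constants below are illustrative assumptions, not the simulated NB-IoT values of these papers.

```python
# Toy uplink latency model: dynamic scheduling vs. semi-persistent scheduling.
import random

def dynamic_latency_ms(sr_period=10.0, grant_delay=8.0, tx_time=4.0):
    wait_for_sr = random.uniform(0, sr_period)     # wait for next SR opportunity
    return wait_for_sr + grant_delay + tx_time

def sps_latency_ms(sps_period=10.0, tx_time=4.0):
    wait_for_slot = random.uniform(0, sps_period)  # wait for pre-allocated slot
    return wait_for_slot + tx_time

samples = 10_000
print("dynamic:", sum(dynamic_latency_ms() for _ in range(samples)) / samples, "ms")
print("SPS:    ", sum(sps_latency_ms() for _ in range(samples)) / samples, "ms")
```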
Digital networked communications are the key to all Internet-of-Things applications, especially to smart metering systems and the smart grid. In order to ensure safe operation of systems and the privacy of users, the Transport Layer Security (TLS) protocol, a mature and well-standardized solution for secure communications, may be used. We implemented the TLS protocol in its latest version in a way suitable for embedded and resource-constrained systems. This paper outlines the challenges and opportunities of deploying TLS in smart metering and smart grid applications and presents performance results of our TLS implementation. Our analysis shows that, given an appropriate implementation and configuration, deploying TLS in constrained smart metering systems is possible with acceptable overhead.
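As a flavor of the kind of configuration such a deployment involves, here is a minimal TLS 1.3 client setup using Python's standard ssl module; this is analogous in spirit only, since the paper's implementation targets embedded C environments. The host name is a placeholder.

```python
# Minimal TLS 1.3 client with certificate verification (Python stdlib).
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_3    # pin the latest TLS version
context.load_default_certs()

HOST = "meter-gateway.example"                       # placeholder host name
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version(), tls.cipher())
```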
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking in order to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the FruityMesh protocol implementation. Extensive test cases have been executed to measure the performance, reliability, power consumption, and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are useful for real implementations of Bluetooth mesh, not only for home and building automation but also for industrial automation.
The paper describes the methodology and experimental results for revealing similarities in the thermal dependencies of the biases of accelerometers and gyroscopes from 250 inertial MEMS chips (MPU-9250). Temperature profiles were measured on an experimental setup with a Peltier element for temperature control. The classification of the temperature curves was carried out with a machine learning approach.
A perfect sensor should have no thermal dependency at all. Thus, only sensors inside the clusters with smaller dependency (smaller total temperature slopes) might be pre-selected for the production of high-accuracy inertial navigation modules. It was found that no unified thermal profile ("family" curve) exists for all sensors in a production batch. However, the sensors can be grouped according to their parameters, and a temperature compensation profile can then be regressed for each group. Twelve slope coefficients over 5 °C temperature intervals from 0 °C to +60 °C were used as the features for the k-means++ clustering algorithm.
The minimum number of clusters for all sensors to be well separated from each other by their bias thermal profiles was found to be six in our case, determined by applying the elbow method. For each cluster, a regression curve can be obtained.
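The described workflow, twelve slope features per sensor, k-means++ grouping, and cluster count chosen at the elbow of the inertia curve, can be sketched as follows. The data here is synthetic; the measured profiles of the 250 chips are not included.

```python
# Sketch of the clustering workflow with k-means++ and the elbow method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 12))        # placeholder: 250 sensors x 12 slope features

inertias = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)      # within-cluster sum of squares

# The "elbow" is where adding clusters stops reducing inertia appreciably;
# per the text above, k = 6 separated the measured sensors well.
labels = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0).fit_predict(X)
```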
One of the most important questions about smart metering systems for end users is their data privacy and security. Indeed, smart metering systems provide many advantages for distribution system operators (DSO), but the functionalities offered to users of existing smart meters are still limited, and society is becoming increasingly critical. Smart metering systems are accused of interfering with personal rights and privacy, and of providing unclear tariff regulations which do not sufficiently encourage households to manage their electricity consumption in advance. In the specific field of smart grids, data security appears to be a necessary condition for consumer confidence, without which consumers will not give their consent to the collection and use of personal data concerning them.
The efficient support of Hardware-in-the-Loop (HIL) in the design process of hardware-software co-designed systems is an ongoing challenge. This paper presents a network-based integration of hardware elements into the software-based image processing tool „ADTF“, based on a high-performance Gigabit Ethernet MAC and a highly efficient TCP/IP stack. The MAC has been designed in VHDL. It was verified in a SystemC simulation environment and tested on several Altera FPGAs.
Precisely synchronized communication is a major precondition for many industrial applications. At the same time, hardware cost and power consumption need to be kept as low as possible in the Internet of Things (IoT) paradigm. While many wired solutions on the market achieve these requirements, wireless alternatives are an interesting field for research and development. This article presents a novel IEEE802.11n/ac wireless solution, exhibiting several advantages over state-of-the-art competitors. It is based on a market-available wireless System on a Chip with modified low-level communication firmware combined with a low-cost field-programmable gate array. By achieving submicrosecond synchronization accuracy, our solution outperforms the precision of low-cost products by almost four orders of magnitude. Based on inexpensive hardware, the presented wireless module is up to 20 times cheaper than software-defined-radio solutions with comparable timing accuracy. Moreover, it consumes three to five times less power. To back up our claims, we report data that we collected with a high sampling rate (2000 samples per second) during an extended measurement campaign of more than 120 h, which makes our experimental results far more representative than others reported in the literature. Additional support is provided by the size of the testbed we used during the experiments, composed of a hybrid network with nine nodes divided into two independent wireless segments connected by a wired backbone. In conclusion, we believe that our novel Industrial IoT module architecture will have a significant impact on the future technological development of high-precision time-synchronized communication for the cost-sensitive industrial IoT market.
Time-Sensitive Networking (TSN) is the most promising time-deterministic wired communication approach for industrial applications. To extend TSN to IEEE 802.11 wireless networks, two challenging problems must be solved: synchronization and scheduling. This paper focuses on the first one. Even though a few solutions already meet the required synchronization accuracies, they are built on expensive hardware that is not suited for mass-market products. While the next Wi-Fi generation might support the required functionalities, this paper proposes a novel method that makes high-precision wireless synchronization possible using commercial low-cost components. With the proposed solution, a standard deviation of the synchronization error of less than 500 ns can be achieved for many use cases and system loads on both CPU and network. This performance is comparable to modern wired real-time fieldbuses, which makes the developed method a significant contribution to the extension of the TSN protocol to the wireless domain.
This article deals with the problem of wireless synchronization between the onboard computing devices of small-sized unmanned aerial vehicles (SUAV) equipped with integrated wireless chips (IWC). Accurate synchronization between several devices requires precise timestamping of frame transmission and reception on each of them. The best precision is demonstrated by solutions where timestamping is performed at the PHY level, right after modulation/demodulation of the frame. Nowadays, most currently produced IWCs are Systems-on-a-Chip (SoC) that include both PHY and MAC, implemented with one or several application processor cores. SoCs allow the creation of more cost- and energy-efficient wireless devices. At the same time, they limit the developers' direct access to the internal signals and significantly complicate the precise timestamping of sent and received frames required for the mutual synchronization of industrial devices. Some modern IEEE 802.11 IWCs have built-in functions that use the internal chip clock to register timestamps. However, the high jitter of the interfaces between the external device and the IWC degrades the comparison of the timestamps from the internal clock with those registered by external devices. To solve this problem, the article proposes a novel approach to synchronization based on the analysis of the IWC receiver input potential. The benefit of this approach is that there is no need to demodulate and decode the received frames, thus allowing its implementation with low-cost IWCs. In this article, the Cypress CYW43438 was taken as an example for designing hardware and software solutions for the synchronization between two SUAV onboard computing devices equipped with IWCs. The results of the performed experimental studies reveal that the mutual synchronization error of the proposed method does not exceed 10 μs.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far require rather expensive and highly specialized FPGA-based devices. With this work, we counter the trend by introducing a straightforward approach to synchronizing a fairly cheap IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how to reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external, inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal obtained after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not set new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale, low-cost automation solutions.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of the system hardware for ultra-precise synchronization between wireless stations. At the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames as well as detect most communication errors, even under conditions of high interference. The probability of undetected errors was also investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train. A poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper, we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead for sorting the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, which is a typical use case for embedded applications. On three typical datasets, MNIST, DCASE2020, and CIFAR10, TinyProp is five times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
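The core mechanism, applying only the top-k gradient entries per step and adapting k over time, can be sketched conceptually as follows. The actual TinyProp adaptation rule is not reproduced; the ratio update and the learning rate below are placeholders.

```python
# Conceptual sketch of dynamic sparse backpropagation in the spirit of TinyProp.
import numpy as np

def sparse_update(weights, grads, ratio, lr=0.01):
    """Apply only the largest `ratio` fraction of gradient entries (by magnitude)."""
    k = max(1, int(ratio * grads.size))
    flat = np.abs(grads).ravel()
    threshold = np.partition(flat, -k)[-k]          # k-th largest magnitude
    mask = np.abs(grads) >= threshold               # keep only dominant entries
    return weights - lr * grads * mask

def adapt_ratio(ratio, loss_delta, lo=0.05, hi=0.5):
    """Placeholder adaptation rule: backpropagate more when the loss stalls."""
    ratio *= 1.1 if loss_delta > -1e-4 else 0.9
    return float(np.clip(ratio, lo, hi))
```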
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of various compression techniques such as pruning and quantization, it has become possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors needed in robotics. Despite these achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices will open up further fields of application. Efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or the retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and the requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments as well as techniques that are also suitable for training on embedded devices, such as incremental learning. Finally, we discuss which problems and open questions still need to be solved in these areas.
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. Especially LoRa/LoRaWAN enjoys high visibility in today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for industrial, scientific, and medical applications. At the end of 2018, however, a LoRa-based solution in the 2.4 GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the Sub-GHz ISM bands and therefore also opens doors to many novel application fields. Due to the higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols also becomes very interesting, e.g., for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4 GHz LoRa components combining two aspects. On the one hand, we present the design and implementation of a 2.4 GHz based LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using a Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE 802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
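For background, the TSCH channel-hopping rule of IEEE 802.15.4-2015 derives the frequency used in a cell from the absolute slot number (ASN) and the cell's channel offset. The sketch below shows that rule; the hopping sequence is a placeholder, and how the 2.4 GHz LoRa deployment maps these indices onto its own channel plan is not reproduced here.

```python
# IEEE 802.15.4-2015 TSCH hopping rule:
# channel = sequence[(ASN + channelOffset) mod sequence length]
HOPPING_SEQUENCE = [11, 16, 21, 26, 12, 17, 22, 25]   # example channel list

def tsch_channel(asn: int, channel_offset: int) -> int:
    return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

# Two cells with different offsets use different channels within the same slot:
print([tsch_channel(asn, 0) for asn in range(8)])
print([tsch_channel(asn, 3) for asn in range(8)])
```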
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attention for the seamless connectivity of embedded devices in the Internet of Things. It can be observed that most of the available solutions follow an open-source approach, which contributes significantly to the fast development of technologies and markets. Although the currently available implementations are in fairly good shape, all of them come with some significant drawbacks. It was therefore decided to start the development of our own implementation, which takes the advantages of the existing solutions but tries to avoid their drawbacks. This paper discusses the reasoning behind this decision and describes the implementation and its characteristics as well as the testing results. The implementation is available as an open-source project under [15].
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor along with built-in additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits, and a fluorescent particle-based coating as a tamper protection layer.
Hybrid low-voltage physical unclonable function based on inkjet-printed metal-oxide transistors
(2020)
Modern society is striving for digital connectivity that demands information security. As an emerging technology, printed electronics is a key enabler for novel device types with free form factors, customizability, and the potential for large-area fabrication while being seamlessly integrated into our everyday environment. At present, information security is mainly based on software algorithms that use pseudo-random numbers. In this regard, hardware-intrinsic security primitives, such as physical unclonable functions, are very promising to provide inherent security features comparable to biometrical data. Device-specific, random intrinsic variations are exploited to generate unique secure identifiers. Here, we introduce a hybrid physical unclonable function, combining silicon and printed electronics technologies, based on metal oxide thin film devices. Our system exploits the inherent randomness of printed materials due to surface roughness, film morphology and the resulting electrical characteristics. The security primitive provides high intrinsic variation, is non-volatile, scalable and exhibits nearly ideal uniqueness.
Embedded Analog Physical Unclonable Function System to Extract Reliable and Unique Security Keys
(2020)
Internet of Things (IoT) enabled devices have become more and more pervasive in our everyday lives. Examples include wearables transmitting and processing personal data and smart labels interacting with customers. Due to the sensitive data involved, these devices need to be protected against attackers. In this context, hardware-based security primitives such as Physical Unclonable Functions (PUFs) provide a powerful solution to secure interconnected devices. The main benefit of PUFs, in combination with traditional cryptographic methods, is that security keys are derived from the random intrinsic variations of the underlying core circuit. In this work, we present a holistic analog-based PUF evaluation platform, enabling direct access to a scalable design that can be customized to fit the application requirements in terms of the number of required keys and bit width. The proposed platform covers the full software and hardware implementations and allows for tracing the PUF response generation from the digital level back to the internal analog voltages that are directly involved in the response generation procedure. Our analysis is based on 30 fabricated PUF cores that we evaluated in terms of PUF security metrics and bit errors for various temperatures and biases. With an average reliability of 99.20% and a uniqueness of 48.84%, the proposed system shows values close to ideal.
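The reliability and uniqueness figures cited above follow the standard PUF metric definitions: reliability from the intra-device Hamming distance across repeated measurements, uniqueness from the pairwise inter-device Hamming distance. The sketch below computes these metrics on random placeholder responses, not on data from the fabricated PUF cores.

```python
# Standard PUF quality metrics on placeholder data.
import itertools
import numpy as np

def reliability(reference: np.ndarray, remeasurements: np.ndarray) -> float:
    """100% minus the mean intra-chip Hamming distance; ideal value is 100%."""
    hd = np.mean([np.mean(reference != r) for r in remeasurements])
    return 100.0 * (1.0 - hd)

def uniqueness(responses: np.ndarray) -> float:
    """Mean pairwise inter-chip Hamming distance; ideal value is 50%."""
    pairs = itertools.combinations(range(len(responses)), 2)
    return 100.0 * np.mean([np.mean(responses[i] != responses[j]) for i, j in pairs])

rng = np.random.default_rng(1)
chips = rng.integers(0, 2, size=(30, 128))          # 30 devices, 128-bit responses
noisy = chips[0] ^ (rng.random((5, 128)) < 0.01)    # remeasurements, ~1% bit errors
print("uniqueness: ", round(uniqueness(chips), 2), "%")
print("reliability:", round(reliability(chips[0], noisy), 2), "%")
```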
Physical Unclonable Functions (PUFs) are hardware-based security primitives which allow for inherent device fingerprinting. To this end, the intrinsic variation of imperfectly manufactured systems is exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit wide identifiers. A further focus is on limiting factors of printed devices, such as increased parasitics due to novel materials, and on the required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core was fabricated and electrically characterized.