Refine
Document Type
- Conference Proceeding (54)
- Article (reviewed) (20)
- Part of a Book (5)
- Patent (3)
- Report (2)
- Article (unreviewed) (1)
Conference Type
- Conference article (54)
Is part of the Bibliography
- yes (85)
Keywords
- Blockchain (6)
- blockchain (4)
- IIoT (3)
- Internet of Things (3)
- IoT security (3)
- Security (3)
- cryptography (3)
- Bearings (2)
- Blockchains (2)
- NB-IoT (2)
Institute
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019) (85)
Open Access
- Closed Access (37)
- Open Access (24)
- Closed (22)
- Gold (8)
- Diamond (4)
- Bronze (2)
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low Latency Communications (URLLC) has always been a vital component of many industrial applications. The paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications, where low latency, low jitter, and high data exchange rates are mandatory. In the paper, we describe the communication protocol as well as the hardware structure of the network nodes for implementing the required functionality. Many techniques enabling fast, reliable wireless transmissions are used: short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC), and an ACK mechanism. Preliminary tests show that a reliable end-to-end latency down to 350 μs and a packet exchange rate up to 4 kHz can be reached (using quadruple MIMO and the standard IEEE 802.15.4 PHY at 250 kbit/s).
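As a rough plausibility sketch of the figures quoted above, one can relate the 250 kbit/s PHY rate, an assumed short frame size, and the 4 kHz exchange rate. The 32-byte frame length is purely an illustrative assumption, not a RETIS parameter:

```python
# Plausibility check of the quoted RETIS figures. The 32-byte frame size is an
# illustrative assumption; PHY rate and exchange rate are taken from the abstract.
PHY_RATE_BPS = 250_000        # IEEE 802.15.4 PHY bit rate
FRAME_BYTES = 32              # assumed short frame incl. headers and FEC

frame_airtime_s = FRAME_BYTES * 8 / PHY_RATE_BPS   # airtime of one frame
cycle_s = 1 / 4000                                 # one exchange at 4 kHz

# A single 250 kbit/s stream cannot fit such a frame into one 250 us cycle,
# which hints at why parallel MIMO streams and short TTIs are needed:
streams_needed = frame_airtime_s / cycle_s
print(f"{frame_airtime_s * 1e6:.0f} us airtime, {streams_needed:.1f} parallel streams")
```

Under these assumptions the airtime works out to roughly four parallel streams, which is at least consistent with the quadruple-MIMO setup mentioned in the abstract.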
Novel manufacturing technologies, such as printed electronics, may enable future applications for the Internet of Everything, like large-area sensor devices and disposable security and identification tags. Printed physically unclonable functions (PUFs) are promising candidates to be embedded as hardware security keys into lightweight identification devices. We investigate hybrid PUFs based on a printed PUF core. The statistics on the intra- and inter-Hamming-distance distributions indicate a performance suitable for identification purposes. Our evaluations are based on statistical simulations of the PUF core circuit and the challenge-response pairs generated from it. The analysis shows that hardware-intrinsic security features can be realized with printed lightweight devices.
A physical unclonable function (PUF) is a hardware circuit that produces a random sequence based on its manufacturing-induced intrinsic characteristics. In the past decade, silicon-based PUFs have been extensively studied as a security primitive for identification and authentication. The emerging field of printed electronics (PE) enables novel application fields in the scope of the Internet of Things (IoT) and smart sensors. In this paper, we design and evaluate a printed differential circuit PUF (DiffC-PUF). The simulation data are verified by Monte Carlo analysis. Our design is highly scalable while consisting of a low number of printed transistors. Furthermore, we investigate the best operating point by varying the PUF challenge configuration and analyzing the PUF security metrics in order to achieve high robustness. At the best operating point, the results show a reliability of 98.37% and a uniqueness of 50.02%. This analysis also provides useful and comprehensive insights into the design of hybrid or fully printed PUF circuits. In addition, the proposed printed DiffC-PUF core has been fabricated with electrolyte-gated field-effect transistor technology to verify our design in hardware.
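The reliability and uniqueness metrics mentioned above are conventionally derived from intra- and inter-device Hamming distances. The following sketch with toy 8-bit responses illustrates the standard definitions; the data are invented, not taken from the paper:

```python
# Sketch of the usual PUF security metrics: reliability from the intra-device
# Hamming distance, uniqueness from the inter-device Hamming distance.
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def reliability(reference: str, reevaluations: list[str]) -> float:
    # 100 * (1 - average intra-HD / bit length); ideal: 100%
    n = len(reference)
    avg = sum(hamming(reference, r) for r in reevaluations) / len(reevaluations)
    return 100 * (1 - avg / n)

def uniqueness(responses: list[str]) -> float:
    # 100 * average pairwise inter-HD / bit length; ideal: 50%
    n = len(responses[0])
    pairs = [(i, j) for i in range(len(responses)) for j in range(i + 1, len(responses))]
    avg = sum(hamming(responses[i], responses[j]) for i, j in pairs) / len(pairs)
    return 100 * avg / n

# toy 8-bit responses from hypothetical PUF instances
print(reliability("10110010", ["10110010", "10110011"]))  # one flipped bit in one re-read
print(uniqueness(["10110010", "01001101", "11100001"]))
```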
Narrowband Internet-of-Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) standardized cellular technology, adopted for 5G and optimized for massive Machine Type Communication (mMTC). Applications are anticipated around infrastructure monitoring, asset management, smart city, and smart energy scenarios. In this paper, we evaluate the suitability of NB-IoT for private (campus) networks in industrial environments, including complex cloud-based applications around process automation. An end-to-end system has been developed, comprising a sensor unit connected to an NB-IoT modem, a base station (gNodeB) equipped with a beamforming array, and a local (private) network architecture with a sensor management system in the edge cloud. The experimental study includes field tests in realistic industrial environments with latency, reliability, and coverage measurements. The results show a good suitability of NB-IoT for process automation with high scalability, low power consumption, and moderate latency requirements.
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. Unfortunately, these devices impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced the IPv6 Low-Power Wireless Personal Area Network (6LoWPAN) to address this constraint. 6LoWPAN is an Internet Protocol (IP)-based communication standard that allows each device to connect to the Internet directly; as a result, power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades the performance of the network in terms of Quality of Service (QoS), especially in large networks. Therefore, HRPL was developed to enhance the RPL protocol and minimize the redundant retransmissions that cause the routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduces the routing overhead when implemented in 6LoSH. The observed Control Traffic Overhead (CTO) packet difference between the experiments is 7.1%, and the convergence time difference is 9.3%. Further research is recommended for these metrics: latency, Packet Delivery Ratio (PDR), and throughput.
Internet of Things (IoT) applications have become progressively in demand, most notably for embedded devices (ED). However, devices differ in their computational capabilities, memory usage, and energy resources when connecting to the Internet via Wireless Sensor Networks (WSNs). To make this achievable, the WSNs that form the bulk of IoT implementations require a new set of technologies and protocols tailored to their specific constraints. Thus, the IPv6 Low-Power Wireless Personal Area Network (6LoWPAN) was designed by the Internet Engineering Task Force (IETF) as a standard network for ED. Nevertheless, the communication between ED and 6LoWPAN requires appropriate routing protocols to achieve efficient Quality of Service (QoS). Among the protocols of the 6LoWPAN network, RPL is considered the best protocol; however, its Energy Consumption (EC) and Routing Overhead (RO) are considerably high when it is implemented in a large network. Therefore, this paper proposes HRPL to enhance the RPL protocol by reducing EC and RO. In this study, we present the performance of RPL and HRPL in terms of EC, Control Traffic Overhead (CTO), and latency, based on simulations of the 6LoWPAN network in a fixed environment using the COOJA simulator. The results show that the HRPL protocol achieves better performance in all tested topologies in terms of EC and CTO. However, the latency of HRPL improves only in the chain topology compared with RPL. We found that further research is required to study the relationship between latency and packet transmission load in order to optimize EC.
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2, and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce an integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
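The disruption-free key refresh described above can be illustrated with a minimal sketch: during a rollover window the channel accepts traffic under both the outgoing and the incoming key, so no frame is rejected mid-rekeying. The `Channel` class and its method names are hypothetical, not part of the PROFINET specification:

```python
# Hypothetical sketch of disruption-free key rollover: keep the previous key
# valid until the new one is established on both ends.
class Channel:
    def __init__(self, key):
        self.current = key      # key in active use
        self.previous = None    # still accepted during a rollover window

    def start_rekey(self, new_key):
        # new key becomes current; old key remains accepted for a grace period
        self.previous, self.current = self.current, new_key

    def finish_rekey(self):
        # rollover window over: retire the old key
        self.previous = None

    def accepts(self, key_id):
        return key_id in {self.current, self.previous}

ch = Channel("k1")
ch.start_rekey("k2")
print(ch.accepts("k1"), ch.accepts("k2"))  # both accepted mid-rollover
ch.finish_rekey()
print(ch.accepts("k1"))                    # old key retired
```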
When designing and installing Indoor Positioning Systems, several interrelated tasks have to be solved to find an optimal placement of the Access Points. For this purpose, a mathematical model for a predefined number of indoor access points is presented. Two iterative algorithms for minimizing the localization error of a mobile object are described. Both algorithms use a local search technique and signal level probabilities. Previously recorded signal strength maps were used in the computer simulations.
In recent years, both the Internet of Things (IoT) and blockchain technologies have been highly influential and revolutionary. IoT enables companies to embrace Industry 4.0, the Fourth Industrial Revolution, which benefits from communication and connectivity to reduce cost and to increase productivity through sensor-based autonomy. These automated systems can be further refined with smart contracts that are executed within a blockchain, thereby increasing transparency through continuous and indisputable logging. Ideally, the level of security for these IoT devices should be very high, as they are specifically designed for this autonomous and networked environment. This paper discusses a use case of a company with legacy devices that wants to benefit from the features and functionality of blockchain technology. In particular, the implications of retrofit solutions are analyzed. The use of the BISS:4.0 platform is proposed as the underlying infrastructure; BISS:4.0 is intended to integrate blockchain technologies into existing enterprise environments. Furthermore, a security analysis of IoT and blockchain is presented, and the identified attacks and countermeasures are applied to the mentioned use case.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
Maintenance processes in Industry 4.0 applications try to achieve a high degree of quality to reduce the downtime of machinery. The monitoring of executed maintenance activities is challenging, as multiple stakeholders are involved in complex production setups. Full transparency of the different activities and of the state of the machine can only be supported if these stakeholders trust each other. Therefore, distributed ledger technologies like blockchain are promising candidates for supporting such applications. The goal of this paper is a formal description of the business and technical interactions between non-trustful stakeholders in the context of Industry 4.0 maintenance processes using distributed ledger technologies. It also covers the integration of smart contracts for the automated triggering of activities.
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While this approach is applied to blockchain here, it can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least inasmuch as real-time requirements in cyclic fieldbus communications are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulation. We study the use of both hardware- and software-based implementations of AES-GCM. By simulations as well as measurements on an FPGA-based prototype setup, we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains a maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components may actually be very lightweight.
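A back-of-the-envelope version of the paper's worst-case argument can be sketched as follows. The engine's cycles-per-block figure and clock rate are assumptions for illustration, not the prototype's measured values:

```python
# Illustrative worst-case load estimate (assumed figures, not the paper's):
# can one AES-GCM engine keep up with a 100 Mbps full-duplex link?
LINK_BPS = 100e6
FULL_DUPLEX = 2                 # encrypt TX and decrypt RX concurrently
AES_BLOCK_BITS = 128

# worst-case AES blocks per second the engine must process
blocks_per_s = LINK_BPS * FULL_DUPLEX / AES_BLOCK_BITS

# assumed low-footprint engine: one block per 11 clock cycles at 125 MHz
CYCLES_PER_BLOCK = 11
CLOCK_HZ = 125e6
engine_blocks_per_s = CLOCK_HZ / CYCLES_PER_BLOCK

print(f"link demands {blocks_per_s:.3e} blocks/s, "
      f"engine delivers {engine_blocks_per_s:.3e} blocks/s, "
      f"sufficient: {engine_blocks_per_s > blocks_per_s}")
```

With these assumed numbers the engine has several-fold headroom, which mirrors the paper's conclusion that a single low-footprint engine suffices for the worst case.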
Industrial communication used to be characterized by relatively limited, closed fieldbus systems. With the increasing opening of automation networks through horizontal and vertical integration in production plants, dangerous attack surfaces emerge that can lead to the theft of production secrets, to manipulation, or to the complete shutdown of production processes. This results in fundamentally new requirements for data security, which must be met with innovative solution approaches.
The goal of the research project "SecureField" was to investigate and demonstrate the feasibility and applicability of the "(D)TLS-over-Anything" approach, and to prepare a toolbox for defining and implementing corresponding security solutions. As a long-established standard in the IT environment, the (Datagram) Transport Layer Security ((D)TLS) protocol, in combination with an industry- and automation-compatible Public Key Infrastructure (PKI), proved to be a highly promising way to achieve data security in the OT environment as well. The project particularly addressed SMEs, for which in-house development work in this field is often too costly and both technically and economically too risky.
"SecureField" achieved results on several levels. First, a comprehensive and generic concept for the end-to-end protection of communication paths and protocols in the industrial environment was developed over the course of the project. This concept consists of a generic communication model and a generic authentication model.
With the increasing degree of interconnectivity in industrial factories, security becomes more and more the most important stepping-stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of one keynote of DESSERT2020 conference. It highlights the ongoing and open research activities on the different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures for the full lifetime of devices and systems. It includes an overview of the research activities at the authors' institute.
This paper presents an overview of EREMI, a two-year project funded under ERASMUS+ KA203, and its results. The project team’s main objective was to develop and validate an advanced interdisciplinary higher education curriculum, which includes lifelong learning components. The curriculum focuses on enhancing resource efficiency in the manufacturing industry and optimising poorly or non-digitised industrial physical infrastructure systems. The paper also discusses the results of the project, highlighting the successful achievement of its goals. EREMI effectively supports the transition to Industry 5.0 by preparing a common European pool of future experts. Through comprehensive research and collaboration, the project team has designed a curriculum that equips students with the necessary skills and knowledge to thrive in the evolving manufacturing landscape. Furthermore, the paper explores the significance of EREMI’s contributions to the field, emphasising the importance of resource efficiency and system optimisation in industrial settings. By addressing the challenges posed by under-digitised infrastructure, the project aims to drive sustainable and innovative practices in manufacturing. All five project partner organisations have been actively engaged in offering relevant educational content and framework for decentralised sustainable economic development in regional and national contexts through capacity building at a local level. A crucial element of the added value is the new channel for obtaining feedback from students. The survey results, which are outlined in the paper, offer valuable insights gathered from students, contributing to the continuous improvement of the project.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to the complex network topology and RF characteristics, requiring a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions. Additionally, the platform supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB testbed. The APTB testbed provides a reliable and efficient solution for testing wireless communication networks under various scenarios. The implementation and performance verification of the testbed demonstrate its effectiveness and usefulness for researchers and industry practitioners.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest from researchers. The main challenges for these applications are the scalability of the network size and the energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module; in normal operation modes, most energy is spent waiting for incoming frames. A so-called Wake-On-Radio (WOR) technology helps to optimize the trade-offs between energy consumption, communication range, implementation complexity, and response time. We previously proposed a new protocol called SmartMAC that makes use of such WOR technology. Furthermore, it offers the possibility to balance the energy consumption between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol is significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE 802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol on a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against the simulation results and shows that the measured values are very close to the estimated ones. We therefore believe that the proposed MAC algorithm outperforms all other Wake-on-Radio MACs.
In industry, the monitoring of industrial plants ensures that highly automated processes can run smoothly. Usually the focus is on monitoring the machines themselves; the Ethernet-based communication lines for data exchange (e.g., Profinet) are currently not yet part of continuous monitoring. Although the physical connections are checked here as well, this often happens only at commissioning time, when the plant is not yet integrated into the overall system, or during a maintenance cycle, when the machine is taken out of operation for the duration of the maintenance. As a result, especially today, when Ethernet is increasingly used as the basis for industrial communication, machine failures due to missing cable monitoring are becoming ever more likely. To counteract this, the Ko2SiBus project designed, implemented, and validated a new measurement method that can be integrated cost-effectively into new or existing systems. To demonstrate its suitability, the project results were implemented in prototypes and demonstrators that can serve both as stand-alone and as integration solutions.
The authentication method for electronic devices based on the individual shapes of correlograms of their internal electric noise is well known. Specific physical differences in the components, for example caused by variations in production quality, cause specific electrical signals, i.e., electric noise, in the electronic device. It is possible to obtain this information and to identify the specific differences of the individual devices using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility to identify and authenticate electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. As a result of these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
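The described bit-template derivation can be sketched as follows; the toy signal and the median-thresholding rule are assumptions for illustration, not the authors' exact procedure:

```python
import math

# Sketch: derive a bit template from the normalized autocorrelation of a
# recorded "noise" sequence (toy data; thresholding scheme is an assumption).
def normalized_autocorr(x, max_lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    out = []
    for lag in range(1, max_lag + 1):
        c = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
        out.append(c / var)
    return out

def bit_template(autocorr):
    # one simple option: 1 where the coefficient exceeds the sequence median
    med = sorted(autocorr)[len(autocorr) // 2]
    return "".join("1" if a > med else "0" for a in autocorr)

# toy deterministic stand-in for a device's noise recording
noise = [math.sin(0.7 * i) + 0.1 * ((i * 37) % 11 - 5) for i in range(200)]
acf = normalized_autocorr(noise, 16)
print(bit_template(acf))
```

Comparing such templates, e.g. by Hamming distance against an enrolled reference, would then yield the identification decision.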
Sequenzielle Schaltungen
(2022)
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. The current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks use authentication based only on SIM cards, but as 5G is being applied in fields like the IIoT and automation, even in Non-Public Networks (NPNs), there is a need for a simpler method of authentication. Certificate-based authentication is one such mechanism: it is passwordless and works solely on the information present in the digital certificate that the user holds. The paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and intends to carry out primary authentication with their help. In this work, we conduct a study of presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as in virtual environments. Based on this analysis, a series of proposed steps for certificate-based primary authentication is presented.
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses the security convergence in Time-Sensitive Networks (TSN) for industrial automation. We present the standard's security architecture and its goals to establish end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
To demonstrate how deep learning can be applied to industrial applications with limited training data, deep learning methodologies are used in three different use cases. In this paper, we perform unsupervised deep learning utilizing variational autoencoders and demonstrate that federated learning is a communication-efficient concept for machine learning that protects data privacy. As an example, variational autoencoders are utilized to cluster and visualize data from a microelectromechanical systems foundry. Federated learning is used in a predictive maintenance scenario using the C-MAPSS dataset.
Digital transformation strengthens the interconnection of companies in order to develop optimized and better customized, cross-company business models. These models require secure, reliable, and traceable evidence and monitoring of contractually agreed information to gain trust between stakeholders. Blockchain technology using smart contracts allows the industry to establish trust and automate cross-company business processes without the risk of losing data control. A typical cross-company industry use case is equipment maintenance: machine manufacturers and service providers offer maintenance for their machines and tools in order to achieve high availability at low cost. The aim of this chapter is to demonstrate how maintenance use cases are addressed by utilizing Hyperledger Fabric to build a chain of trust through hardened evidence logging of the maintenance process, thereby achieving legal certainty. Contracts are digitized into smart contracts that automate business processes, which increases security and mitigates the error-proneness of those processes.
Deep learning approaches are becoming increasingly important for the estimation of the Remaining Useful Life (RUL) of mechanical elements such as bearings. This paper proposes and evaluates a novel transfer learning-based approach for RUL estimations of different bearing types with small datasets and low sampling rates. The approach is based on an intermediate domain that abstracts features of the bearings based on their fault frequencies. The features are processed by convolutional layers. Finally, the RUL estimation is performed using a Long Short-Term Memory (LSTM) network. The transfer learning relies on a fixed-feature extraction. This novel deep learning approach successfully uses data of a low-frequency range, which is a precondition to use low-cost sensors. It is validated against the IEEE PHM 2012 Data Challenge, where it outperforms the winning approach. The results show its suitability for low-frequency sensor data and for efficient and effective transfer learning between different bearing types.
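The intermediate domain mentioned above builds on the characteristic bearing fault frequencies, which follow from standard bearing-geometry formulas. The geometry values below are toy assumptions, not those of the evaluated bearings:

```python
import math

# Standard characteristic bearing fault frequencies (outer/inner race),
# as commonly used to abstract bearing defects across bearing types.
def fault_frequencies(n_balls, ball_d, pitch_d, contact_deg, shaft_hz):
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    bpfo = n_balls / 2 * shaft_hz * (1 - ratio)   # ball pass frequency, outer race
    bpfi = n_balls / 2 * shaft_hz * (1 + ratio)   # ball pass frequency, inner race
    return bpfo, bpfi

# toy geometry: 8 balls, 7.9 mm ball diameter, 34.5 mm pitch diameter,
# 0 degree contact angle, 30 Hz shaft speed
bpfo, bpfi = fault_frequencies(8, 7.9, 34.5, 0, 30)
print(f"BPFO ≈ {bpfo:.1f} Hz, BPFI ≈ {bpfi:.1f} Hz")
```

Because these frequencies depend only on geometry and shaft speed, features expressed relative to them can transfer between bearing types, which is the idea the intermediate domain exploits.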
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
The frequently occurring short-term orders of manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for bearings used in machines. There are, among others, hybrid solutions that rely on a physical model. For their usage, knowing the different degradation stages of bearings is essential. This research analyzes the underlying failure mechanisms of these stages, both theoretically and in a practical example based on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. In addition, it shows for which use cases the usage of low-frequency accelerometers is sufficient. The analysis shows that the degradation stages toward the end of the bearing life can also be detected with low-frequency accelerometers. Further, the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages is pointed out. These aspects have not received attention from industry and research until now, despite offering considerable cost-saving potential.
In the last decade, deep learning models for condition monitoring of mechanical systems increasingly gained importance. Most of the previous works use data of the same domain (e.g., bearing type) or of a large amount of (labeled) samples. This approach is not valid for many real-world scenarios from industrial use-cases where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32% on small labeled datasets compared to the current state-of-the-art in the time-frequency domain.
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as for semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
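The Maximum Mean Discrepancy (MMD) that LMMD builds on can be sketched with an RBF kernel as follows; the biased estimator and the toy 2-D samples are illustrative assumptions:

```python
import math

# Minimal sketch of the (squared) Maximum Mean Discrepancy with an RBF kernel,
# the building block that LMMD extends. Data and bandwidth are toy assumptions.
def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    # biased estimator: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

source      = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0)]
target_near = [(0.1, 0.0), (0.0, 0.2), (-0.2, 0.1)]
target_far  = [(3.0, 3.1), (2.8, 3.2), (3.1, 2.9)]

# domains that are closer in distribution yield a smaller MMD
print(mmd2(source, target_near) < mmd2(source, target_far))  # → True
```

Minimizing such a term between source- and target-domain feature distributions is what aligns the domains during transfer learning; LMMD additionally accounts for the layered structure of the intermediate domain.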
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. Considering machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on fault detection of bearings, and the rise of machine learning concepts has further intensified interest in this area. However, to date there is no single one-fits-all solution for predictive maintenance of bearings; most research so far has only looked at individual bearing types at a time.
This paper gives an overview of the most important approaches for bearing-fault analysis in grinding machines. There are two main parts of the analysis presented in this paper. The first part presents the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the error (e.g. at the inner or at the outer ring of the bearing) and the severity, which detects the size of the fault. The second part presents the prediction of remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing the replacement costs and minimizing downtime.
Embedded Analog Physical Unclonable Function System to Extract Reliable and Unique Security Keys
(2020)
Internet of Things (IoT) enabled devices have become more and more pervasive in our everyday lives. Examples include wearables transmitting and processing personal data and smart labels interacting with customers. Due to the sensitive data involved, these devices need to be protected against attackers. In this context, hardware-based security primitives such as Physical Unclonable Functions (PUFs) provide a powerful solution to secure interconnected devices. The main benefit of PUFs, in combination with traditional cryptographic methods, is that security keys are derived from the random intrinsic variations of the underlying core circuit. In this work, we present a holistic analog-based PUF evaluation platform, enabling direct access to a scalable design that can be customized to fit the application requirements in terms of the number of required keys and bit width. The proposed platform covers the full software and hardware implementations and allows for tracing the PUF response generation from the digital level back to the internal analog voltages that are directly involved in the response generation procedure. Our analysis is based on 30 fabricated PUF cores that we evaluated in terms of PUF security metrics and bit errors for various temperatures and biases. With an average reliability of 99.20% and a uniqueness of 48.84%, the proposed system shows values close to ideal.
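The reliability and uniqueness figures quoted above are conventionally derived from intra- and inter-chip Hamming distances over binary PUF responses. The sketch below shows one common way to compute them; the function names are illustrative and not taken from the paper's platform.

```python
import numpy as np
from itertools import combinations

def uniqueness(responses):
    # Average pairwise fractional inter-chip Hamming distance, in percent.
    # `responses` is a 2D array: one row per PUF instance. Ideal value: 50%.
    n_bits = responses.shape[1]
    dists = [np.count_nonzero(a != b) / n_bits
             for a, b in combinations(responses, 2)]
    return 100.0 * np.mean(dists)

def reliability(reference, repeats):
    # 100% minus the average intra-chip Hamming distance between a
    # reference response and repeated measurements. Ideal value: 100%.
    n_bits = reference.size
    err = np.mean([np.count_nonzero(reference != r) / n_bits
                   for r in repeats])
    return 100.0 * (1.0 - err)
```

Applied to the 30 fabricated PUF cores mentioned above, such metrics would be evaluated across repeated readouts at each temperature and bias point.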
Physically Unclonable Functions (PUFs) are hardware-based security primitives which allow for inherent device fingerprinting. To this end, the intrinsic variations of imperfectly manufactured systems are exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split-manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit wide identifiers. A further focus is on limiting factors of printed devices, such as increased parasitics due to novel materials, and on the required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core is fabricated and electrically characterized.
Hybrid low-voltage physical unclonable function based on inkjet-printed metal-oxide transistors
(2020)
Modern society is striving for digital connectivity that demands information security. As an emerging technology, printed electronics is a key enabler for novel device types with free form factors, customizability, and the potential for large-area fabrication while being seamlessly integrated into our everyday environment. At present, information security is mainly based on software algorithms that use pseudo-random numbers. In this regard, hardware-intrinsic security primitives, such as physical unclonable functions, are very promising for providing inherent security features comparable to biometric data. Device-specific, random intrinsic variations are exploited to generate unique secure identifiers. Here, we introduce a hybrid physical unclonable function, combining silicon and printed electronics technologies, based on metal oxide thin-film devices. Our system exploits the inherent randomness of printed materials due to surface roughness, film morphology and the resulting electrical characteristics. The security primitive provides high intrinsic variation, is non-volatile, scalable and exhibits nearly ideal uniqueness.
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor alongside built-in additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits and a fluorescent particle-based coating as a tamper protection layer.
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. LoRa/LoRaWAN in particular enjoys high visibility on today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for Industrial, Scientific and Medical applications. At the end of 2018, however, a LoRa-based solution in the 2.4 GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the ISM bands and therefore also opens the door to many novel application fields. Due to the higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols also becomes very interesting, e.g. for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4 GHz LoRa components combining two aspects. On the one hand, we present the design and implementation of a 2.4 GHz LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using a Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE 802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
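In TSCH, each node derives its transmission channel deterministically from the network-wide Absolute Slot Number (ASN) and a per-link channel offset, which yields the channel-hopping behavior mentioned above. A minimal sketch of the IEEE 802.15.4-2015 channel selection rule (variable names are ours):

```python
def tsch_channel(asn, channel_offset, hopping_sequence):
    # IEEE 802.15.4-2015 TSCH channel selection:
    #   channel = HS[(ASN + channelOffset) mod |HS|]
    # `asn` increments by one every timeslot across the whole network,
    # so a link with a fixed offset still hops over all channels.
    return hopping_sequence[(asn + channel_offset) % len(hopping_sequence)]
```

Because the ASN is shared by all synchronized nodes, two nodes that agree on a slot and a channel offset always compute the same channel without any extra signaling.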
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of various compression techniques, such as pruning and quantization, it became possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors needed in robotics. Despite the achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices will open up further fields of application. Efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or the retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments as well as techniques that are also suitable for training on embedded devices, such as incremental learning. Finally, we discuss which problems and open questions still need to be solved in these areas.
Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train. A poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper, we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, which is a typical use case for embedded applications. For three typical datasets, MNIST, DCASE2020 and CIFAR10, we are 5 times faster compared to non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
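The core step of a sparse backpropagation update is training only the weights with the largest gradient magnitudes. The sketch below illustrates a single such update with a fixed backpropagation ratio; TinyProp itself adapts this ratio dynamically per training step, which is not reproduced here, and the function name is our own.

```python
import numpy as np

def sparse_update(weights, grads, ratio, lr=0.01):
    # Apply a gradient-descent step to only the top-`ratio` fraction of
    # weights, ranked by gradient magnitude; all other weights are frozen.
    k = max(1, int(ratio * grads.size))
    flat = np.abs(grads).ravel()
    idx = np.argpartition(flat, -k)[-k:]   # indices of the k largest gradients
    mask = np.zeros(grads.size, dtype=bool)
    mask[idx] = True
    updated = weights.ravel().copy()
    updated[mask] -= lr * grads.ravel()[mask]
    return updated.reshape(weights.shape)
```

Skipping the frozen weights saves the corresponding multiply-accumulate operations in the backward pass, which is where the speedups on MCUs come from.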
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. At the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames as well as detect most communication errors, even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
Time-Sensitive Networking (TSN) is the most promising time-deterministic wired communication approach for industrial applications. To extend TSN to IEEE 802.11 wireless networks, two challenging problems must be solved: synchronization and scheduling. This paper focuses on the first one. Even though a few solutions already meet the required synchronization accuracies, they are built on expensive hardware that is not suited for mass-market products. While the next Wi-Fi generation might support the required functionalities, this paper proposes a novel method that makes high-precision wireless synchronization possible using commercial low-cost components. With the proposed solution, a standard deviation of the synchronization error of less than 500 ns can be achieved for many use cases and system loads on both CPU and network. This performance is comparable to modern wired real-time fieldbuses, which makes the developed method a significant contribution to the extension of the TSN protocol to the wireless domain.
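As background on how synchronization schemes of this kind estimate clock error: the classic approach is a two-way timestamp exchange, as used in IEEE 1588 (PTP). The snippet below shows that textbook computation only; it is not the specific method proposed in the paper.

```python
def ptp_offset_delay(t1, t2, t3, t4):
    # Two-way time transfer (IEEE 1588 style):
    #   t1: master sends sync message     (master clock)
    #   t2: slave receives it             (slave clock)
    #   t3: slave sends delay request     (slave clock)
    #   t4: master receives it            (master clock)
    # Assuming a symmetric path delay:
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay
```

The achievable accuracy is then limited mainly by how precisely t1..t4 can be captured, which is why low-level hardware timestamping matters so much in the solutions discussed here.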
This article deals with the problem of wireless synchronization between onboard computing devices of small-sized unmanned aerial vehicles (SUAV) equipped with integrated wireless chips (IWC). Accurate synchronization between several devices requires precise timestamping of the frames transmitted and received on each of them. The best precision is demonstrated by those solutions where timestamping is performed at the PHY level, right after modulation/demodulation of the frame. Nowadays, most currently produced IWCs are Systems-on-a-Chip (SoC) that include both PHY and MAC, implemented on one or several processor cores. SoCs allow the creation of more cost- and energy-efficient wireless devices. At the same time, they limit the developers' direct access to the internal signals and significantly complicate the precise timestamping of sent and received frames that is required for the mutual synchronization of industrial devices. Some modern IEEE 802.11 IWCs have built-in functions that use the internal chip clock to register timestamps. However, the high jitter of the interfaces between the external device and the IWC degrades the comparison of the timestamps from the internal clock with those registered by external devices. To solve this problem, the article proposes a novel approach to synchronization based on the analysis of the IWC receiver input potential. The benefit of this approach is that there is no need to demodulate and decode the received frames, thus allowing its implementation with low-cost IWCs. In this article, the Cypress CYW43438 was taken as an example for designing hardware and software solutions for synchronization between two SUAV onboard computing devices equipped with IWCs. The results of the performed experimental studies reveal that the mutual synchronization error of the proposed method does not exceed 10 μs.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far to solve this problem require rather expensive and highly specialized FPGA-based devices. With this work, we counter the trend by introducing a straightforward approach to synchronizing a fairly cheap IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how we can reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external, inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal obtained after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not set new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale, low-cost automation solutions.
Precisely synchronized communication is a major precondition for many industrial applications. At the same time, hardware cost and power consumption need to be kept as low as possible in the Internet of Things (IoT) paradigm. While many wired solutions on the market achieve these requirements, wireless alternatives are an interesting field for research and development. This article presents a novel IEEE 802.11n/ac wireless solution, exhibiting several advantages over state-of-the-art competitors. It is based on a market-available wireless System on a Chip with modified low-level communication firmware combined with a low-cost field-programmable gate array. By achieving submicrosecond synchronization accuracy, our solution outperforms the precision of low-cost products by almost four orders of magnitude. Based on inexpensive hardware, the presented wireless module is up to 20 times cheaper than software-defined-radio solutions with comparable timing accuracy. Moreover, it consumes three to five times less power. To back up our claims, we report data that we collected with a high sampling rate (2000 samples per second) during an extended measurement campaign of more than 120 h, which makes our experimental results far more representative than others reported in the literature. Additional support is provided by the size of the testbed we used during the experiments, composed of a hybrid network with nine nodes divided into two independent wireless segments connected by a wired backbone. In conclusion, we believe that our novel Industrial IoT module architecture will have a significant impact on the future technological development of high-precision time-synchronized communication for the cost-sensitive industrial IoT market.
One of the most important questions about smart metering systems for end users concerns data privacy and security. Smart metering systems provide many advantages for distribution system operators (DSO), but the functionalities offered to users of existing smart meters are still limited, and society is becoming increasingly critical. Smart metering systems are accused of interfering with personal rights and privacy, and of providing unclear tariff regulations which do not sufficiently encourage households to manage their electricity consumption in advance. In the specific field of smart grids, data security appears to be a necessary condition for consumer confidence, without which consumers will not be able to give their consent to the collection and use of personal data concerning them.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low data rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
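The latency gain of SPS comes from removing the scheduling-request/grant handshake that dynamic scheduling requires before every uplink transmission. The toy model below illustrates only this effect; all timing constants are illustrative placeholders, not 3GPP-exact values or results from the paper.

```python
def uplink_latency_ms(tti_ms, sched="dynamic",
                      sr_period_ms=10.0, grant_ms=4.0, harq_retx=0):
    # Rough uplink user-plane latency model. With dynamic scheduling the UE
    # first waits for a scheduling-request opportunity (on average half the
    # SR period) and then for the uplink grant; with SPS the resources are
    # pre-allocated, so both delays disappear. HARQ retransmissions add a
    # fixed round-trip per retransmission (8 ms assumed here).
    t = tti_ms  # the data transmission itself
    if sched == "dynamic":
        t += sr_period_ms / 2 + grant_ms
    return t + harq_retx * 8.0
```

Shortening the TTI (the sTTI technique) additionally reduces the `tti_ms` term itself, which is why the paper evaluates both techniques together.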
Low-latency communication is essential to enable mission-critical machine-type communication use cases in cellular networks. Factory and process automation are major areas that require such low-latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in real industrial automation scenarios. The performance gain of adopting SPS is analyzed and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
Time-Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. The configuration of these mechanisms is key to fully deploying and integrating TSN into these networks. The IEEE 802.1Qcc standard has proposed different configuration models for implementing a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks. However, they are still considered “work-in-progress” for wireless networks. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations so as to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunication ultra-low energy (DECT ULE) wireless communication protocol.
The integration of Internet of Things devices onto the Blockchain implies an increase in the transactions that occur on the Blockchain, thus increasing the storage requirements.
A solution approach is to leverage cloud resources for storing blocks within the chain. The paper therefore proposes two solutions to this problem. The first is an improved hybrid architecture design which uses containerization to create a side chain on a fog node for the devices connected to it, together with an Advanced Time-variant Multi-objective Particle Swarm Optimization Algorithm (AT-MOPSO) for determining the optimal number of blocks that should be transferred to the cloud for storage. This algorithm uses time-variant weights for the velocity of the particle swarm optimization as well as the non-dominated sorting and mutation schemes from NSGA-III. The proposed algorithm was compared with the original MOPSO algorithm, the Strength Pareto Evolutionary Algorithm (SPEA-II), the Pareto Envelope-based Selection Algorithm with region-based selection (PESA-II), and NSGA-III. The proposed AT-MOPSO showed better results than the aforementioned algorithms in cloud storage cost and query probability optimization. Importantly, AT-MOPSO achieved 52% energy efficiency compared to NSGA-III.
To demonstrate how this algorithm can be applied to a real-world Blockchain system, the BISS industrial Blockchain architecture was adapted and modified, showing how AT-MOPSO can be used with existing Blockchain systems and the benefits it provides.
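The "time-variant weights" mentioned above usually refer to an inertia weight and acceleration coefficients that change over the course of a PSO run. The sketch below shows a generic schedule of this kind; the exact schedule used by AT-MOPSO may differ, and the constants here are common textbook defaults, not values from the paper.

```python
def time_variant_weights(t, t_max, w_max=0.9, w_min=0.4,
                         c_init=2.5, c_final=0.5):
    # Linearly decreasing inertia weight plus time-varying acceleration
    # coefficients: exploration (cognitive term c1) dominates early,
    # exploitation (social term c2) dominates late in the run.
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac
    c1 = c_init - (c_init - c_final) * frac   # cognitive: decreases over time
    c2 = c_final + (c_init - c_final) * frac  # social: increases over time
    return w, c1, c2
```

In the velocity update `v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)`, these schedules shift the swarm from broad search toward convergence as the iteration count grows.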
TSN, or Time-Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run and secure such TSN networks, monitoring functionality is a must. The TSN standards already include some provisions for such functionality, and there are different methods to choose from. We implemented different methods to measure the time synchronisation accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.