It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. In machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on fault detection of bearings, and the rise of machine learning has further intensified interest in this area. However, to date, there is no one-size-fits-all solution for predictive maintenance of bearings; most research so far has only looked at individual bearing types at a time.
This paper gives an overview of the most important approaches for bearing-fault analysis in grinding machines. The analysis presented in this paper has two main parts. The first part covers the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the fault (e.g. at the inner or outer ring of the bearing), and its severity, i.e. the size of the fault. The second part covers the prediction of the remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing replacement costs, and minimizing downtime.
In the last decade, deep learning models for condition monitoring of mechanical systems have increasingly gained importance. Most previous works rely on data from the same domain (e.g., the same bearing type) or on a large number of (labeled) samples. This assumption does not hold for many real-world industrial use cases, where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an accuracy improvement of up to 32 % on small labeled datasets compared to the current state of the art in the time-frequency domain.
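The characteristic frequencies that such an intermediate domain builds on are the well-known kinematic bearing fault frequencies. The following sketch is illustrative only — it is not the paper's feature mapping, and the bearing geometry values are made up:

```python
from math import cos

def bearing_fault_frequencies(n_balls, shaft_hz, ball_d, pitch_d,
                              contact_angle=0.0):
    """Classical kinematic bearing fault frequencies (Hz).

    n_balls        -- number of rolling elements
    shaft_hz       -- shaft rotation frequency in Hz
    ball_d/pitch_d -- ball and pitch diameter (same unit)
    contact_angle  -- contact angle in radians
    """
    r = (ball_d / pitch_d) * cos(contact_angle)
    return {
        "BPFO": n_balls / 2 * shaft_hz * (1 - r),               # outer-ring fault
        "BPFI": n_balls / 2 * shaft_hz * (1 + r),               # inner-ring fault
        "BSF":  pitch_d / (2 * ball_d) * shaft_hz * (1 - r**2), # ball spin
        "FTF":  shaft_hz / 2 * (1 - r),                         # cage frequency
    }

# Hypothetical 8-ball bearing at 30 Hz shaft speed
freqs = bearing_fault_frequencies(8, 30.0, 7.92, 34.55)
```

A fault at a given position modulates the vibration signal at the corresponding frequency, which is what makes a frequency-related representation transferable across operating conditions.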
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
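LMMD itself is specific to the proposed intermediate domain, but the Maximum Mean Discrepancy it extends is a standard quantity for comparing source- and target-domain feature distributions. A minimal sketch of a (biased) Gaussian-kernel MMD² estimate, purely for illustration:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample sets a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of squared MMD between two feature batches."""
    kxx = rbf_kernel(source, source, gamma)
    kyy = rbf_kernel(target, target, gamma)
    kxy = rbf_kernel(source, target, gamma)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, (200, 4))  # "source domain" features
y = rng.normal(0.0, 1.0, (200, 4))  # target drawn from the same distribution
z = rng.normal(3.0, 1.0, (200, 4))  # target from a shifted distribution
```

In MMD-based transfer learning this term is added to the training loss so that the network learns features whose source and target distributions are close; the layered variant proposed here applies the idea per characteristic of the intermediate domain.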
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator 3 (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of the resource scheduling method on uplink latency.
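The scheduling difference evaluated here can be illustrated with a back-of-the-envelope latency model. Under dynamic scheduling the UE must first send a scheduling request (SR) and wait for a grant before transmitting; under semi-persistent scheduling the data only waits for the next pre-allocated resource. The component values below are illustrative assumptions, not the simulation results:

```python
def dynamic_uplink_latency(sr_period_ms, tti_ms=1.0,
                           ue_proc_ms=3.0, enb_proc_ms=3.0):
    """Toy average uplink latency with dynamic scheduling:
    SR opportunity wait + SR tx + grant turnaround + data tx."""
    sr_wait = sr_period_ms / 2        # average wait for an SR opportunity
    sr_tx = tti_ms                    # SR transmission
    grant = enb_proc_ms + tti_ms      # eNB processing + grant delivery
    data = ue_proc_ms + tti_ms        # UE processing + data transmission
    return sr_wait + sr_tx + grant + data

def sps_uplink_latency(sps_period_ms, tti_ms=1.0):
    """Toy average uplink latency with SPS: the data only waits for
    the next pre-allocated resource, no SR/grant exchange."""
    return sps_period_ms / 2 + tti_ms
```

With a 10 ms period in both cases, the model already shows why removing the SR/grant exchange pays off for periodic low-latency traffic.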
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, whereas fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low-Latency Communication (URLLC). However, it is interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement, and evaluate latency reduction techniques for latency-critical applications. The two techniques we identified, namely the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay until received information is processed in both the control and data planes. We then analyze the potential of latency reduction techniques for URLLC applications. To this end, we implement these techniques in the Long Term Evolution (LTE) module of the ns-3 simulator and evaluate their performance in two different application fields: industrial automation and intelligent transportation systems. Our detailed simulation results indicate that LTE can satisfy the low-latency requirements for a large choice of use cases in each field.
Next-generation cellular networks are expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing and smart dustbins, but there is additional demand for applications with low latency requirements, such as industrial automation and driverless cars. Improvements to 4G Long Term Evolution (LTE) networks are required on the way to next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to the latency of 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low-Latency Communication (URLLC) networks.
The evolution of cellular networks from the first generation (1G) to the fourth generation (4G) was driven by the demand for user-centric downlink capacity, technically referred to as Mobile Broadband (MBB). With the fifth generation (5G), Machine Type Communication (MTC) has been added to the target use cases, and the upcoming generation of cellular networks is expected to support it. However, such support requires improvements of the existing technologies in terms of latency, reliability, energy efficiency, data rate, scalability, and capacity.
Originally, MTC was designed for low-bandwidth, high-latency applications such as environmental sensing and smart dustbins. Nowadays, there is additional demand for applications with low-latency requirements. Besides other well-known challenges for recent cellular networks, such as data rate, energy efficiency, and reliability, the latency is also not suitable for mission-critical applications such as real-time control of machines, autonomous driving, and the tactile Internet. Therefore, in the currently deployed cellular networks, there is a need to reduce the latency and increase the reliability offered by the networks to support use cases such as cooperative autonomous driving or factory automation, which are grouped under the denomination Ultra-Reliable Low-Latency Communication (URLLC).
This thesis is primarily concerned with the latency in the Universal Terrestrial Radio Access Network (UTRAN) of cellular networks. The overall work is divided into five parts. The first part presents the state of the art for cellular networks. The second part contains a detailed overview of URLLC use cases and the requirements that cellular networks must fulfill to support them. The work in this thesis was done as part of a collaboration between the IRIMAS lab at the Université de Haute-Alsace, France, and the Institute for Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany. The selected URLLC use cases are part of the research interests of both partner institutes. The third part presents a detailed study and evaluation of user- and control-plane latency mechanisms in the current generation of cellular networks. The evaluation and analysis of these latencies, performed with the open-source ns-3 simulator, were conducted by exploring a broad range of parameters including, among others, traffic models, channel access parameters, realistic propagation models, and a broad set of cellular network protocol stack parameters. These simulations were performed with low-power, low-cost, wide-range devices, commonly called IoT devices, as standardized for cellular networks. These devices use either LTE-M or Narrowband IoT (NB-IoT), technologies that are designed for connected things. They differ mainly in the provided bandwidth and in additional characteristics such as coding scheme, device complexity, and so on.
The fourth part of this thesis presents a study, an implementation, and an evaluation of latency reduction techniques that target the different layers of the currently used Long Term Evolution (LTE) network protocol stack. These techniques, based on Transmission Time Interval (TTI) reduction and Semi-Persistent Scheduling (SPS), are implemented in the ns-3 simulator and evaluated through realistic simulations performed for a variety of low-latency use cases focused on industrial automation and vehicular networking. Since ns-3 does not support NB-IoT in its current release, an NB-IoT extension of the LTE module was developed to test the proposed latency reduction techniques in cellular networks. This makes it possible to explore deployment limitations and issues.
In the last part of this thesis, a flexible deployment framework for the proposed latency reduction techniques, called Hybrid Scheduling and Flexible TTI, is presented, implemented, and evaluated through realistic simulations. The simulation evaluation shows that the improved LTE network proposed and implemented in the simulator can support low-latency applications with low-cost, long-range, narrowband devices. The work in this thesis points out potential improvement techniques and their deployment issues, and paves the way towards support for URLLC applications in upcoming cellular networks.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles, and the tactile Internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular technology currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low data rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
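The sTTI technique shortens the legacy 1 ms LTE TTI (14 OFDM symbols with a normal cyclic prefix) to slots of 7 or even 2 symbols, shrinking every latency component that scales with the TTI. A small sketch of this scaling; the 3-TTI processing budget is an illustrative assumption, not a 3GPP figure:

```python
SYMBOLS_PER_SUBFRAME = 14  # normal cyclic prefix, 1 ms subframe

def tti_duration_ms(n_symbols):
    """Duration of a (short) TTI with the given number of OFDM symbols."""
    return n_symbols / SYMBOLS_PER_SUBFRAME

def one_way_latency_ms(n_symbols, processing_ttis=3):
    """Toy one-way latency: 0.5 TTI frame alignment + 1 TTI transmission
    + a processing budget assumed proportional to the TTI length."""
    return (0.5 + 1 + processing_ttis) * tti_duration_ms(n_symbols)

for n in (14, 7, 2):
    print(f"{n:2d}-symbol TTI: {tti_duration_ms(n):.3f} ms "
          f"-> toy one-way latency {one_way_latency_ms(n):.3f} ms")
```

Under this toy model, moving from the 14-symbol TTI to a 2-symbol sTTI divides the TTI-proportional latency by seven, which is why sTTI combines well with SPS for narrowband URLLC traffic.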
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides Mobile Broadband (MBB), there is a need to support Ultra-Reliable Low-Latency Communication (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many V2X applications, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
Low-latency communication is essential to enable mission-critical machine-type communication use cases in cellular networks. Factory and process automation are major areas that require such low-latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in realistic industrial automation scenarios. The performance gain of adopting SPS is analyzed and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
One of the main requirements of spatially distributed Internet of Things (IoT) solutions is networks with wide coverage that connect many low-power devices. Low-Power Wide-Area Networks (LPWAN) and Cellular IoT (cIoT) networks are promising candidates in this space. LPWAN approaches such as LoRaWAN, Sigfox, and MIOTY are based on enhanced physical layer (PHY) implementations to achieve long range. Narrowband versions of cellular networks, such as Narrowband IoT (NB-IoT) and Long-Term Evolution for Machines (LTE-M), offer reduced bandwidth and simplified node and network management mechanisms. Since the underlying use cases come with various requirements, it is essential to perform a comparative analysis of the competing technologies. This article provides a systematic performance measurement and comparison of LPWAN and NB-IoT technologies in a unified testbed and also discusses the necessity of future fifth-generation (5G) LPWAN solutions.