One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependency on weather changes. Demand-side energy flexibility contributes considerably to mitigating the energy supply/demand imbalances resulting from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises present a high demand-side flexibility potential from their production processes and on-site energy assets. In this direction, methods are needed that focus on enabling energy flexibility and ensuring the active participation of such enterprises in electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise on the material, media, and energy flow levels with their operative constraints. A case study of a plastics manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared and the behavior of the model is assessed. The results show that the model provides an optimal scheduling of the manufacturing system according to variations in electricity prices and ensures optimal control of the utilities and energy systems needed for production.
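The price-driven scheduling behaviour described above can be illustrated with a deliberately tiny sketch (not the paper's model library): a single shiftable production job is placed in the cheapest contiguous price window by brute-force search. All prices and parameters are invented for the example.

```python
# Toy illustration: schedule a shiftable 3-hour production job over a day of
# variable electricity prices so that total energy cost is minimised.

def cheapest_start(prices, duration, power_kw):
    """Brute-force search over feasible start hours; returns (start, cost)."""
    best = None
    for start in range(len(prices) - duration + 1):
        cost = power_kw * sum(prices[start:start + duration])
        if best is None or cost < best[1]:
            best = (start, cost)
    return best

# Hypothetical day-ahead prices in EUR/kWh for 8 hours.
prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.32, 0.35]
start, cost = cheapest_start(prices, duration=3, power_kw=50.0)
print(start, round(cost, 2))  # the job is shifted into the cheap price window
```

A real model of this kind would add production constraints (due dates, machine coupling, storage) and solve the resulting problem with an optimal-control or MILP solver rather than enumeration.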
Solar energy plays a central role in the energy transition. Clouds cause large local fluctuations in the generation output of photovoltaic systems, which is a major problem for energy systems such as microgrids, among others. For an optimal design of a power system, this work analyzed this variability using a spatially distributed sensor network at Stuttgart Airport. It has been shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power that takes spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
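The spatial-smoothing effect can be demonstrated with synthetic data (not the airport measurements): the mean of several distributed irradiance time series fluctuates less than any single site, roughly by a factor of the square root of the number of independent sites.

```python
# Minimal sketch: averaging over spatially distributed sites damps the
# cloud-induced fluctuations of a single site. Series are synthetic.
import random
import statistics

random.seed(42)
base = [600 + 200 * (t % 4 == 0) for t in range(100)]  # shared irradiance signal
sites = []
for _ in range(10):
    # each site sees the common signal plus independent cloud-induced noise
    sites.append([b + random.gauss(0, 150) for b in base])

# aggregate PV proxy: spatial mean across all sites at each time step
aggregate = [sum(vals) / len(sites) for vals in zip(*sites)]

single_sd = statistics.pstdev([x - b for x, b in zip(sites[0], base)])
agg_sd = statistics.pstdev([x - b for x, b in zip(aggregate, base)])
print(agg_sd < single_sd)  # the aggregate fluctuates less than one site
```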
In recent years, lightweight cryptography has received a lot of attention, and many primitives suitable for resource-restricted hardware platforms have been proposed. In this paper, we present a cryptanalysis of the new stream cipher A2U2 presented at IEEE RFID 2011 [9], which has a key length of 56 bits. We start by disproving and then repairing an extremely efficient attack presented by Chai et al. [8], showing that A2U2 can be broken in less than a second in the chosen-plaintext case. We then turn our attention to the more challenging known-plaintext case and propose a number of attacks. A guess-and-determine approach combined with algebraic cryptanalysis yields an attack that requires about 2^49 internal guesses. We also show how to determine the 5-bit counter key and how to reconstruct the 56-bit key in about 2^38 steps if the attacker can freely choose the IV. Furthermore, we investigate the possibility of exploiting the knowledge of a "noisy keystream" by solving a Max-PoSSo problem. We conclude that the cipher needs to be repaired and point out a number of simple measures that would prevent the above attacks.
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector, where autonomous cars can massively reduce delivery effort and time by actively supporting the courier. One important component, of course, is the autonomous vehicle itself. Nevertheless, besides the autonomous vehicle, a flexible and secure communication architecture is also a crucial key component impacting the overall performance of such a system, since it is required to allow continuous interactions between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between the different system entities while keeping those communications secure. It uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN) to achieve its goals.
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for extended periods of time. One of the most important security requirements in many industrial IoT applications is the authentication of devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, in which challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol incurs smaller computational, communication, storage, and hardware overhead compared to similar works.
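The basic challenge-response flow behind such PUF-based authentication can be sketched as follows. This is an illustrative toy only, not the paper's protocol: the "PUF" is simulated with a keyed hash, whereas a real PUF derives its response from device physics, and the actual protocol adds mutual authentication of the server and session-key derivation.

```python
# Hedged sketch of CRP-based device authentication with a simulated PUF.
import hashlib
import secrets

def puf(device_secret: bytes, challenge: bytes) -> bytes:
    # stand-in for the physical function: deterministic per device
    return hashlib.sha256(device_secret + challenge).digest()

device_secret = b"intrinsic-device-variations"  # never stored on the server
server_db = {}                                  # enrolled CRPs

# Enrollment phase: the server stores a few challenge-response pairs.
for _ in range(4):
    c = secrets.token_bytes(16)
    server_db[c] = puf(device_secret, c)

# Authentication phase: server issues an enrolled challenge, device answers.
challenge = next(iter(server_db))
response = puf(device_secret, challenge)
authenticated = secrets.compare_digest(response, server_db[challenge])
print(authenticated)
```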
In recent years, Physical Unclonable Functions (PUFs) have gained significant attention in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach to designing PUFs is exploiting the intrinsic features of sensors and actuators, such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device, the aluminum nitride (AlN) piezoelectric cantilever. We show the variations of electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, that result from uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design that magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, Monte Carlo simulation results are provided. The results confirm the suitability of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF is used as a security anchor for PUF-enabled device authentication as well as communication encryption.
Physical unclonable functions (PUFs) are attracting increasing attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic and can act as a security anchor for authentication and cryptographic applications. Keeping in mind that PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping PUF outputs in order to maximize the extracted entropy. To achieve this, offset values are introduced as helper data and used to relax the constraints on the grouping of PUF outputs, deriving more entropy while reducing the number of secret key error bits. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP maximizes the achieved entropy while respecting the constraints imposed to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is practically shown that the proposed method outperforms other existing group-based PUF key generation methods.
In recent years, physically unclonable functions (PUFs) have gained significant attention in IoT security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of different devices to generate unique fingerprints for security applications. When generating PUF-based secret keys, the reliability and entropy of the keys are vital factors. This study proposes a novel method for generating PUF-based keys from a set of measurements. Firstly, it formulates the group-based key generation problem as an optimization problem and solves it using integer linear programming (ILP), which guarantees finding the optimal solution. Then, a novel scheme for the extraction of keys from groups is proposed, which we call positioning syndrome coding (PSC). The use of ILP as well as the introduction of PSC facilitates the generation of high-entropy keys with low error correction costs. These new methods have been tested by applying them to the output of a capacitor network PUF. The results confirm the suitability of ILP and PSC for generating high-quality keys.
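To give a feel for the kind of combinatorial problem the ILP formulation solves, here is a hedged toy version with an invented objective: pair PUF cells so that each pair's value gap (which would decide the derived bit) is as large as possible, maximising the minimum gap. On this tiny instance exhaustive search finds the optimum; the paper instead solves its (different, larger) group-formation problem optimally with ILP.

```python
# Toy group-formation problem solved by exhaustive search instead of ILP.
# Values and objective are illustrative, not taken from the paper.
from itertools import permutations

values = [0.12, 0.95, 0.50, 0.48, 0.10, 0.90]  # hypothetical PUF cell outputs

def min_gap(pairing):
    """Smallest within-pair value gap; larger gaps mean more reliable bits."""
    return min(abs(values[a] - values[b]) for a, b in pairing)

best = None
for perm in permutations(range(len(values))):
    pairing = [(perm[i], perm[i + 1]) for i in range(0, len(values), 2)]
    if best is None or min_gap(pairing) > min_gap(best):
        best = pairing

print(round(min_gap(best), 2))  # optimum min-gap for this instance
```

An ILP solver reaches the same optimum on instances far too large for enumeration, which is the point of the paper's formulation.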
Steroid hormones (SHs) are of rising concern: due to their high bioactivity, ubiquitous nature, and persistence as micropollutants in water, they pose a potential risk to both human health and the environment, even at low concentrations. Estrogens, progesterone, and testosterone, three important types of steroids essential for human development and for maintaining multi-organ balance, are the focus of this concern. These steroid hormones originate from various sources, including human and livestock excretions, veterinary medications, agricultural runoff, and pharmaceuticals, which contribute to their presence in the environment. According to the WHO recommendation, the guidance value for estradiol (E2) is 1 ng/L. Several approaches to removing SH micropollutants with conventional water and wastewater technologies have been attempted and are still under research. Among the various methods, the electrochemical membrane reactor (EMR) is one of the emerging technologies that can address the insufficient removal of SHs from the aquatic environment by conventional treatment. The degradation of SHs in an EMR can be significantly influenced by various factors.
In this project, the removal of SHs with a carbon nanotube EMR (CNT-EMR) and the main removal mechanisms are studied, and the efficiency of the CNT-EMR in treating SH micropollutants is identified. The experiments are carried out with a PES-CNT ultrafiltration membrane while varying different parameters. The study examines SH removal as a function of limiting factors such as cell voltage, flux, temperature, concentration, and the type of SH.
This thesis focuses on the development and implementation of a Datagram Transport Layer Security (DTLS) communication framework within the ns-3 network simulator, specifically targeting the LoRaWAN model network. The primary aim is to analyse the behaviour and performance of DTLS protocols across different network conditions within a LoRaWAN context. The key aspects of this work include the following.
Utilization of ns-3: This thesis leverages ns-3’s capabilities as a powerful discrete event network simulator. This platform enables the emulation of diverse network environments, characterized by varying levels of latency, packet loss, and bandwidth constraints.
Emulation of Network Challenges: The framework specifically addresses unique challenges posed by certain network configurations, such as duty cycle limitations. These constraints, which limit the time allocated for data transmission by each device, are crucial in understanding the real-world performance of DTLS protocols.
Testing in Multi-client-server Scenarios: A significant feature of this framework is its ability to test DTLS performance in complex scenarios involving multiple clients and servers. This is vital for assessing the behaviour of a protocol under realistic network conditions.
Realistic Environment Simulation: By simulating challenging network conditions, such as congestion, limited bandwidth, and resource constraints, the framework provides a realistic environment for thorough evaluation. This allows for a comprehensive analysis of DTLS in terms of security, performance, and scalability.
Overall, this thesis contributes to a deeper understanding of DTLS protocols by providing a robust tool for their evaluation under various and challenging network conditions.
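The duty-cycle constraint mentioned above can be made concrete with a back-of-the-envelope calculation: under a 1% duty cycle (typical for the EU868 LoRaWAN band), each transmission enforces a proportional off-period before the device may transmit again. The numbers below are illustrative, not taken from the thesis.

```python
# Small helper: minimum enforced silence after a LoRaWAN transmission
# under a duty-cycle restriction.

def min_wait_after_tx(airtime_s: float, duty_cycle: float) -> float:
    """Minimum silent time (seconds) after a transmission of airtime_s."""
    return airtime_s * (1.0 / duty_cycle - 1.0)

# A hypothetical DTLS handshake fragment with ~1.5 s airtime at a slow
# spreading factor, under a 1% duty cycle:
wait = min_wait_after_tx(1.5, 0.01)
print(round(wait, 1))  # roughly 148.5 s of enforced silence per fragment
```

This is why multi-message handshakes such as DTLS are so sensitive to duty-cycle limits: every additional flight can add minutes of waiting time.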
Global energy demand has continued to increase over the last decade, with a considerable impact on climate change due to the intensive use of conventional fossil-fuel power plants to cover this demand. In 2015, world leaders met and produced the Paris Agreement, stating that countries will adopt more responsible and effective behaviour toward global warming and climate change. Many studies have discussed what the future energy system will look like while respecting countries' targets and limits for greenhouse gases and CO2 emissions. However, these studies rarely discuss the industrial sector in detail, even though it is one of the major players in the energy sector. Moreover, many studies have simulated and modelled the energy system with large interval jumps in terms of years and environmental goals. In the first part of this study, a model of the German electrical grid with high spatial and temporal resolution will be developed, and different scenarios will be analysed meticulously over shorter periods (annual optimization), with different flexibilities, technologies, and degrees of innovation within each scenario. Moreover, the challenge in this research is to adequately map the diverse characteristics of the medium-sized industrial sector. As a first step in assessing the relevance of the German industrial sector for climate protection goals, the sector will be mapped in PyPSA-Eur (an open-source model data set of the European energy system at the level of the transmission network) by detailing the demand of different industry types and assigning flexibilities to them. Synthetically generated load profiles of various industrial types are available. Flexibilities in the industrial sector are described by the project partner Fraunhofer IPA in the GaIN project and can be used.
Using a scenario analysis, the development of the industrial sector and the use of flexibilities are then to be assessed quantitatively.
This paper introduces the open-source model MyPyPSA-Ger, a myopic optimization model developed to represent the German energy system with a detailed mapping of the electricity sector on a highly disaggregated level, spatially and temporally, with regional differences and investment limitations. Furthermore, this paper gives a new outlook on the German federal government's goal of a greenhouse-gas-neutral electricity sector by 2050 by proposing new CO2 allowance strategies. Moreover, the regional differences in Germany are discussed, including their role and impact on the energy transition and which regions and states will drive renewable energy utilization forward.
Following a scenario-based analysis, the results point out the major keystones of the energy transition path from 2020 to 2050. Solar, onshore wind, and gas-fired power plants will play a fundamental role in future electricity systems. Biomass, run-of-river, and offshore wind will serve as base-load generation technologies. Solar and onshore wind will be installed almost everywhere in Germany. However, due to Germany's weather and geographical features, the southern and northern regions will play a more important role in the energy transition.
Higher CO2 allowance costs will help achieve the 1.5-degree target for the electricity system and will allow for a rapid transition. Moreover, the higher the CO2 tax and the earlier it is applied to the system, the less the energy transition will cost and the more emissions will be saved throughout the transition period. An earlier phase-out of coal power plants is not necessary with high CO2 taxes, due to the change in power plants' unit commitment, which prioritizes gas over coal. A moderate-to-low CO2 allowance cost or no clear transition policy will be more expensive, and the CO2 budget will be exceeded. Nonetheless, even with no policy, renewables still dominate the energy mix of the future.
However, maintaining the maximum historical installation rates at both national and regional levels, with the current emission reduction strategy, will not be enough to reach a climate-neutral electricity system. Therefore, the national and regional installation rates required to achieve the federal government's emission reduction goals are determined. Energy strategists and decision makers will have to resolve great challenges in order to stay in line with the 1.5-degree target.
Most recently, the federal government in Germany published new climate goals in order to reach climate neutrality by 2045. This paper demonstrates a path to a cost-optimal energy supply system for the German power grid until the year 2050. With special regard to regionality, the system is based on yearly myopic optimization with the required energy system transformation measures and the associated system costs. The results point out that energy storage systems (ESS) are fundamental for the integration of renewables and a feasible energy transition. Moreover, investment in storage technologies increased the usage of solar and wind technologies: solar energy investments were strongly accompanied by the installation of short-term battery storage, while longer-term storage technologies, such as hydrogen, were accompanied by high installations of wind technologies. The results also point out that hydrogen investments are expected to overtake short-term batteries if their cost continues to decrease sharply. Moreover, with a strong presence of ESS in the energy system, biomass energy is expected to be ruled out of the energy mix completely. With the current emission reduction strategy and without a strong presence of large-scale ESS in the system, it is unlikely that the Paris Agreement's 2 °C target will be achieved by 2050, let alone the 1.5 °C target.
With recent developments in the Ukrainian-Russian conflict, many are discussing Germany's dependency on fossil fuel imports in its energy system and how the country can reduce that dependency. Among its wide-ranging consumption sectors, the electricity sector is the natural place to start. Recent reports show that the German federal government already intends to have fully renewable electricity by 2035 while exploiting all possible clean power options. This was published in the federal government's climate emergency program (Easter Package) in early 2022. The aim of this package is to initiate a rapid transition and decarbonization of the electricity sector. The Easter Package expects enormous growth of renewable energies to a completely new level, with at least 80% of gross electricity consumption from renewables and extensive, broad deployment of different generation technologies at various scales. This paper discusses this ambitious plan, outlines some insights into this large and rapidly approaching step, and shows how much Germany will need in order to achieve this milestone towards a fully green electricity supply. Different scenarios and shares of renewables are investigated in order to elaborate on the moved-up goal of a climate-neutral electricity sector by 2035. The results point out some promising aspects of achieving 100% renewable power, with massive investments in both generation and storage technologies.
An import ban on Russian energy sources to Germany is currently being discussed more and more. We want to support the discussion by showing how the electricity system in Germany can manage low energy imports in the short term and which measures are necessary to still meet the climate protection targets. In this paper, we examine the impact of a complete stop of Russian fossil fuel imports on the electricity sector in Germany and how this will affect the climate goals of an earlier coal phase-out and climate neutrality by 2045.
Following a scenario-based analysis, the results indicate how much would be needed to rely completely on the scarce non-renewable energy resources in Germany. Huge investments would be needed to ensure a secure supply of electricity, in both renewable energy sources (RES) and energy storage systems (ESS). The key findings are that a rapid expansion of renewables and storage technologies will significantly reduce the dependence of the German electricity system on energy imports. The large-scale integration of renewable energy does not entail any significant imports of natural gas, hard coal, or mineral oil, even in the long term. The results show that a ban on fossil fuel imports from Russia presents a huge opportunity to go beyond the German government's climate targets, with the 1.5-degree target being achieved in the electricity system.
Method and system for extracting metal and oxygen from powdered metal oxides (EP000004170066A2)
(2023)
A method for extracting metal and oxygen from powdered metal oxides in an electrolytic cell is proposed, the electrolytic cell comprising a container, a cathode, an anode, and an oxygen-ion-conducting membrane, the method comprising providing a solid oxygen-ion-conducting electrolyte powder into a container, providing a feedstock comprising at least one metal oxide in powdered form into the container, and applying an electric potential across the cathode and the anode, the cathode being in communication with the electrolyte powder and the anode being in communication with the membrane in communication with the electrolyte powder, such that at least one respective metallic species of the at least one metal oxide is reduced at the cathode and oxygen is oxidized at the anode to form molecular oxygen, wherein the potential across the cathode and the anode is greater than the dissociation potential of the at least one metal oxide and less than the dissociation potential of the solid electrolyte powder and the membrane.
To deal with frequent power outages in developing countries, people turn to solutions like the uninterruptible power supply (UPS), which stores electric energy during normal operating hours and uses it to meet energy needs during rolling blackouts. Locally produced UPSs of poor power quality are widely available in the marketplace, and they have a negative impact on power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power loss in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers, provided by the distribution company. It does not require batteries; therefore, there is no power loss or harmonic distortion due to charging and discharging. Through load flow and harmonic analysis of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can help fix power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. IoT enablement allows data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed "Soft-UPS" allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the "WiMODino" using LoRaWAN with a "Lite Gateway" and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
Following their success in visual recognition tasks, Vision Transformers (ViTs) are increasingly being employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether this improved adversarial robustness extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of a Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Our analysis indicates that, contrary to what is advocated for ViTs in image classification works, these models are highly susceptible to adversarial attacks. We attempt to find an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, the results for the other networks are less promising. Interestingly, we find that the design choices in NAFNet and the Baseline network, which were based on i.i.d. performance rather than robust generalization, seem to be at odds with model robustness.
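The core PGD loop used in such evaluations is simple to state: repeatedly take a signed gradient-ascent step on the loss, then project back into an L-infinity ball of radius eps around the clean input. The sketch below runs it on a toy scalar loss with an analytic gradient; real attacks run the same loop on image batches through the network.

```python
# Bare-bones PGD sketch on a toy quadratic "loss" L(x) = sum(x_i^2).

def pgd(x0, grad_fn, eps, alpha, steps):
    x = list(x0)
    for _ in range(steps):
        g = grad_fn(x)
        # signed gradient ascent step (increase the loss)
        x = [xi + alpha * (1 if gi > 0 else -1) for xi, gi in zip(x, g)]
        # project each coordinate back into [x0_i - eps, x0_i + eps]
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

grad = lambda x: [2 * xi for xi in x]  # gradient of the toy loss
x_adv = pgd([0.2, -0.1], grad, eps=0.05, alpha=0.02, steps=10)
print(x_adv)  # each coordinate is pushed to the boundary of the eps-ball
```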
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts that are invoked during downsampling, and diverse cures have been proposed that help improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed far less. This is significant, as upsampling and downsampling approaches face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling, so blurring is not an option and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that allows for a combination of small kernels for fine details and large kernels for artifact removal and increased context yields the best results.
With the surge in global data consumption and the proliferation of the Internet of Things (IoT), remote monitoring and control is becoming increasingly popular, with a wide range of applications from emergency response in remote regions to monitoring of environmental parameters. Mesh networks are being employed to alleviate a number of issues associated with single-hop communication, such as low area coverage, poor reliability, limited range, and high energy consumption. Low-power Wireless Personal Area Networks (LoWPANs) are being used to help realize and spread the applicability of the IoT. In this paper, we present the design and test of IEEE 802.15.4-compliant smart IoT nodes with multi-hop routing. We first discuss the features of the software stack and the design choices in hardware that resulted in high RF output power, and then present field test results for different baseline network topologies in both rural and urban settings to demonstrate the deployability and scalability of our solution.
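Why multi-hop extends coverage can be shown with the simplest possible routing idea: breadth-first search over a link graph finds a relay path between nodes that are out of each other's direct radio range. The topology below is invented and unrelated to the paper's field tests, which use a real 802.15.4 routing stack.

```python
# Toy mesh: node E is out of direct range of A and only reachable via relays.
from collections import deque

links = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C", "E"],
    "E": ["D"],
}

def shortest_path(src, dst):
    """BFS over the link graph; returns the hop-minimal relay path."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("A", "E"))  # -> ['A', 'B', 'C', 'D', 'E']
```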
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding into a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing the tens of billions of interconnected devices expected in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demands lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched-capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least squares method is developed to describe the relationship between random variations in the capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
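The filtering idea can be sketched in miniature (synthetic data, not the paper's circuit model): fit an ordinary-least-squares trend across the capacitor array and keep only the residuals, so a systematic, gradient-like process variation is removed and the random, device-unique part remains as the entropy source.

```python
# Hedged sketch: OLS removal of a systematic trend from per-capacitor values.
import random

random.seed(1)
n = 16
# measured values = systematic linear gradient + random per-device variation
systematic = [5.0 + 0.3 * i for i in range(n)]
randoms = [random.gauss(0, 0.05) for _ in range(n)]
measured = [s + r for s, r in zip(systematic, randoms)]

# simple-linear-regression OLS fit of measured value against array index
xm = sum(range(n)) / n
ym = sum(measured) / n
slope = (sum((i - xm) * (y - ym) for i, y in enumerate(measured))
         / sum((i - xm) ** 2 for i in range(n)))
intercept = ym - slope * xm

# residuals carry only the random (device-unique) component
residuals = [y - (intercept + slope * i) for i, y in enumerate(measured)]
bits = [1 if r > 0 else 0 for r in residuals]  # entropy source for a key
print(bits)
```

The fitted slope recovers the systematic gradient, and by construction the OLS residuals sum to zero, leaving only the random component to derive key bits from.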
BACKGROUND
Various neutral and alkaline peptidases are commercially available for use in protein hydrolysis under neutral to alkaline conditions. However, the hydrolysis of proteins under acidic conditions by applying fungal aspartic peptidases (FAPs) has not been investigated in depth so far. The aim of this study, thus, was to purify a FAP from the commercial enzyme preparation, ROHALASE® BXL, determine its biochemical characteristics, and investigate its application for the hydrolysis of food and animal feed proteins under acidic conditions.
RESULTS
A Trichoderma reesei derived FAP, with an apparent molecular mass of 45.8 kDa (sodium dodecyl sulfate–polyacrylamide gel electrophoresis; SDS-PAGE) was purified 13.8-fold with a yield of 37% from ROHALASE® BXL. The FAP was identified as an aspartate protease (UniProt ID: G0R8T0) by inhibition and nano-LC-ESI-MS/MS studies. The FAP showed the highest activity at 50°C and pH 4.0. Monovalent cations, organic solvents, and reducing agents were tolerated well by the FAP. The FAP underwent an apparent competitive product inhibition by soy protein hydrolysate and whey protein hydrolysate with apparent Ki values of 1.75 and 30.2 mg·mL⁻¹, respectively. The FAP showed promising results in food (soy protein isolate and whey protein isolate) and animal feed protein hydrolyses. For the latter, an increase in the soluble protein content of 109% was noted after 30 min.
CONCLUSION
Our results demonstrate the applicability of fungal aspartic endopeptidases in the food and animal feed industry. Efficient protein hydrolysis of industrially relevant substrates such as acidic whey or animal feed proteins could be conducted by applying fungal aspartic peptidases. © 2022 Society of Chemical Industry.
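The competitive product inhibition reported for the FAP can be illustrated with the standard Michaelis–Menten rate law, in which the apparent Km rises by a factor (1 + [I]/Ki) while Vmax is unchanged. The Ki values below are the apparent ones from the abstract; Vmax, Km, and the substrate/inhibitor concentrations are hypothetical placeholders for illustration only.

```python
# Competitive product inhibition (Michaelis-Menten): the apparent Km rises
# by a factor (1 + [I]/Ki) while Vmax stays unchanged. Vmax and Km below are
# hypothetical; the Ki values are the apparent ones reported for the FAP.
def rate(s, i, vmax=100.0, km=5.0, ki=1.75):
    """Reaction rate in arbitrary units; s, i in mg/mL."""
    return vmax * s / (km * (1.0 + i / ki) + s)

v_uninhibited = rate(s=10.0, i=0.0)
v_soy = rate(s=10.0, i=5.0, ki=1.75)    # soy protein hydrolysate (stronger inhibition)
v_whey = rate(s=10.0, i=5.0, ki=30.2)   # whey protein hydrolysate (weaker inhibition)
```

Because the soy hydrolysate has the lower apparent Ki, the same inhibitor concentration depresses the rate far more for soy than for whey.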
A novel peptidyl-lys metalloendopeptidase (Tc-LysN) from Trametes coccinea was recombinantly expressed in Komagataella phaffii using the native pro-protein sequence. The peptidase was secreted into the culture broth as zymogen (~38 kDa) and mature enzyme (~19.8 kDa) simultaneously. The mature Tc-LysN was purified to homogeneity with a single-step anion-exchange chromatography at pH 7.2. N-terminal sequencing using TMTpro Zero and mass spectrometry of the mature Tc-LysN indicated that the pro-peptide was cleaved between the amino acid positions 184 and 185 at the Kex2 cleavage site present in the native pro-protein sequence. The pH optimum of Tc-LysN was determined to be 5.0, while it maintained ≥60% activity between pH values 4.5–7.5 and ≥30% activity between pH values 8.5–10.0, indicating its broad applicability. The temperature maximum of Tc-LysN was determined to be 60 °C. After 18 h of incubation at 80 °C, Tc-LysN still retained ~20% activity. Organic solvents such as methanol and acetonitrile, at concentrations as high as 40% (v/v), were found to enhance Tc-LysN's activity by up to ~100% and ~50%, respectively. Tc-LysN's thermostability, ability to withstand up to 8 M urea, tolerance to high concentrations of organic solvents, and acidic pH optimum make it a viable candidate for proteomics workflows in which alkaline conditions might pose a challenge. The nano-LC-MS/MS analysis revealed a bovine serum albumin (BSA) sequence coverage of 84% using Tc-LysN, which was comparable to the sequence coverage of 90% obtained with trypsin peptides.
A systematic toxicological analysis procedure using high-performance thin layer chromatography in combination with fibre-optical scanning densitometry for the identification of drugs in biological samples is presented. Two examples illustrate the practicability of the technique: first, the identification of a multiple intake of analgesics (codeine, propyphenazone, tramadol, flupirtine and lidocaine), and second, the detection of the sedative diphenhydramine. In both cases, authentic urine specimens were used. The identifications were carried out by an automatic measurement and computer-based comparison of in situ UV spectra with data from a compiled library of reference spectra using the cross-correlation function. The technique allowed a parallel recording of chromatograms and in situ UV spectra in the range of 197–612 nm. Unlike conventional densitometry, no dependency of the UV spectra on the substance concentration was observed in the range of 250–1000 ng/spot.
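The library-matching step can be sketched as follows. The spectra, peak positions, and substance assignments below are synthetic placeholders, not measured reference data; the point is only to show how a normalized cross-correlation score selects the best library match for an unknown in situ UV spectrum.

```python
import numpy as np

# Identification by cross-correlation: an unknown in situ UV spectrum is
# compared against a library of reference spectra; the reference with the
# highest normalized correlation coefficient is reported as the match.
wl = np.linspace(197, 612, 416)  # wavelength grid in nm (1 nm spacing)

def gaussian_peak(center, width=20.0):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Synthetic single-peak stand-ins for real reference spectra.
library = {
    "codeine": gaussian_peak(285),
    "tramadol": gaussian_peak(272),
    "diphenhydramine": gaussian_peak(258),
}

def identify(spectrum, library):
    """Return (best_name, score) using the normalized cross-correlation."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: corr(spectrum, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

rng = np.random.default_rng(1)
unknown = gaussian_peak(272) * 0.8 + rng.normal(0, 0.01, wl.size)  # noisy "tramadol-like" spectrum
name, score = identify(unknown, library)
```

Because the normalized correlation is invariant to the overall intensity of the spectrum, the match is robust to the concentration-independent recording noted above.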
This paper describes the Sweaty II adult-size humanoid robot, which is trying to qualify for the RoboCup 2018 adult-size humanoid competition. Sweaty came second in the RoboCup 2017 adult-size league. The main characteristics of Sweaty are described in the 2017 Team Description Paper. The improvements that have been made, or are planned to be implemented, for RoboCup 2018 are described in this paper.
The soccer simulation league is one of the founding leagues of RoboCup. In this paper, we discuss its past, present, and planned future achievements and changes. We also summarize the connections and inter-league achievements of this league and provide an overview of the community contributions that have made it successful.
Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or different storage schemes have been proposed in the literature. The use of cloud resources for blockchain storage has been extensively studied in recent years. Nonetheless, block selection remains a substantial challenge in integrating cloud resources with the blockchain. This paper proposes a deep reinforcement learning (DRL) approach to the block selection problem, which involves identifying the blocks to be transferred to the cloud. We solve the problem by converting the multi-objective optimization of block selection into a Markov decision process (MDP), and we design a simulated blockchain environment for training and testing our approach. We utilize two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to solve the block selection problem and analyze their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer, respectively, compared to the full replication approach of conventional blockchain systems. The slower DRL algorithm, A2C, still achieves a run-time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution to the time-sensitive blockchain-IIoT environment.
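The MDP formulation can be sketched with a toy environment. Everything below is an illustrative placeholder, not the paper's simulator: the state, the reward weights, and the block attributes are invented, and a simple greedy policy stands in for the trained PPO/A2C agent. The point is only to show the shape of the decision problem: offloading a block to the cloud saves local storage but incurs an expected cloud-access penalty.

```python
import random

# Toy sketch of block selection as a Markov decision process (MDP):
# at each step the agent decides whether to offload a block to the cloud.
class BlockSelectionEnv:
    def __init__(self, n_blocks=50, seed=0):
        self.rng = random.Random(seed)
        # Each block: (size in MB, probability it will be accessed again)
        self.blocks = [(self.rng.uniform(0.5, 2.0), self.rng.random())
                       for _ in range(n_blocks)]
        self.local = set(range(n_blocks))

    def state(self):
        local_storage = sum(self.blocks[i][0] for i in self.local)
        return (len(self.local), local_storage)

    def step(self, block_id, offload):
        size, p_access = self.blocks[block_id]
        if offload and block_id in self.local:
            self.local.remove(block_id)
            # Reward storage savings, penalize expected cloud-access latency.
            reward = 1.0 * size - 5.0 * p_access
        else:
            reward = -0.1 * size  # storage cost of keeping the block locally
        return self.state(), reward

# Greedy policy as a stand-in for the trained DRL agent: offload blocks
# whose storage saving outweighs their expected access penalty.
env = BlockSelectionEnv()
total_reward = 0.0
for bid, (size, p) in enumerate(env.blocks):
    offload = 1.0 * size - 5.0 * p > 0
    _, r = env.step(bid, offload)
    total_reward += r

n_local, storage = env.state()
```

A DRL agent replaces the hand-written rule with a learned policy, which matters once the reward couples decisions across blocks and time.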
An Overview of Technologies for Improving Storage Efficiency in Blockchain-Based IIoT Applications
(2022)
Since the inception of blockchain-based cryptocurrencies, researchers have been fascinated with the idea of integrating blockchain technology into other fields, such as health and manufacturing. Despite the benefits of blockchain, which include immutability, transparency, and traceability, certain issues that limit its integration with IIoT still linger. One of these prominent problems is the storage inefficiency of the blockchain. Due to the append-only nature of the blockchain, the growth of the blockchain ledger inevitably leads to high storage requirements for blockchain peers. This poses a challenge for its integration with the IIoT, where high volumes of data are generated at a relatively faster rate than in applications such as financial systems. Therefore, there is a need for blockchain architectures that deal effectively with the rapid growth of the blockchain ledger. This paper discusses the problem of storage inefficiency in existing blockchain systems, how this affects their scalability, and the challenges that this poses to their integration with IIoT. This paper explores existing solutions for improving the storage efficiency of blockchain–IIoT systems, classifying these proposed solutions according to their approaches and providing insight into their effectiveness through a detailed comparative analysis and examination of their long-term sustainability. Potential directions for future research on the enhancement of storage efficiency in blockchain–IIoT systems are also discussed.
Linux and Linux-based operating systems have been gaining popularity among general users and developers alike. Many large enterprises use Linux for the servers that host their websites, and some even require their developers to be familiar with the Linux OS. Linux-based operating systems also run many embedded systems. Given this popularity, there is a clear need to secure such systems, whether to protect the data they store, the integrity of the system itself, or the availability of the services they offer. Many researchers and Linux enthusiasts have proposed various ways to secure Linux, yet new vulnerabilities and bugs are constantly discovered by malicious attackers with every update or change, which calls for additional ways to secure these systems.
This thesis explores the possibility and feasibility of another way to secure Linux, specifically its terminal, by altering the terminal's commands: obstructing attackers who have gained terminal access and delaying them, thereby giving response and forensics teams more time to stop the attack, minimize the damage, restore operations, and identify, collect, and store evidence of the cyber-attack. The thesis discusses the advantages and disadvantages of various security measures and compares and contrasts them with the method suggested here.
This research is significant because it paints a clearer picture of the state of the art in Linux and Linux-based operating system security and addresses the concerns of security practitioners, while exploring a largely uncharted area of security that has been regarded as an insignificant part of protecting operating systems because of the limitations and problems it entails. The thesis addresses these concerns and explores ways to resolve them, and it identifies the areas and situations in which the proposed method is useful, as well as those in which it would be more of a burden than a help.
Soiling is an important issue in the renewable energy sector, since it can result in significant yield losses, especially in regions with higher pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling affects the efficiency and productivity of the modules and how to measure and predict soiling using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. Deviations between the real and expected soiling loss values were observed for some projects in southern Spain; thus, the main goal of this work is to develop machine learning models that predict soiling more accurately. The developed models have a low mean squared error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a powerful application to predict both the soiling and the number of cleaning cycles.
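The modelling step can be sketched as a small regression exercise. The features, coefficients, and data below are entirely synthetic stand-ins (the study used real plant data and presumably richer models); a plain least-squares fit illustrates how a soiling-rate predictor is trained and scored with the MSE.

```python
import numpy as np

# Synthetic illustration of the ML pipeline: predict a daily soiling rate
# from weather features with a least-squares linear model and report MSE.
rng = np.random.default_rng(0)
n = 365
days_since_rain = rng.integers(0, 30, n).astype(float)
dust_index = rng.uniform(0.0, 1.0, n)
humidity = rng.uniform(0.2, 0.9, n)

# Synthetic ground truth: soiling grows with dry spells and dust,
# and is partially washed off under humid conditions.
soiling_rate = (0.002 * days_since_rain + 0.01 * dust_index
                - 0.005 * humidity + rng.normal(0, 0.001, n))

X = np.column_stack([np.ones(n), days_since_rain, dust_index, humidity])
coef, *_ = np.linalg.lstsq(X, soiling_rate, rcond=None)
pred = X @ coef
mse = float(np.mean((soiling_rate - pred) ** 2))
```

In practice the same train-and-score loop applies with nonlinear models (e.g. tree ensembles or neural networks) and with measured soiling-station data in place of the synthetic series.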
Passive solar elements, for both direct and indirect gains, are systems used to maintain a comfortable living environment while saving energy, especially in the building energy retrofit and adaptation process. Sunspaces, thermal mass, and glazing area and orientation have often been used in the past to guarantee adequate indoor conditions when mechanical devices were not available. After a period of neglect, they are nowadays again considered appropriate systems to help face environmental issues in the building sector, and both international and national legislation takes into consideration the possibility of including them in building planning tools, also providing economic incentives. Their proper design requires dynamic simulation, which is often difficult to perform and time-consuming. Moreover, the results generally suffer from several uncertainties, so quasi steady-state procedures are often used in everyday practice with good results, although some corrections are still needed. In this paper, a comparative analysis of different solutions for the construction of verandas in an existing building is presented, following the procedure provided by the slightly modified and improved Standard EN ISO 13790:2008. Advantages and disadvantages of different configurations considering thermal insulation, window typology, and mechanical ventilation systems are discussed, and a general intervention strategy is proposed. The aim is to highlight the possibility of using sunspaces to increase the efficiency of the existing building stock, considering ease of construction and economic viability.
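The quasi steady-state procedure of EN ISO 13790 balances monthly heat transfer against solar and internal gains, weighting the gains by a utilisation factor that depends on the gain/loss ratio and the building time constant. The sketch below implements that monthly balance; the numeric inputs (heat transfer, gains, time constant) are illustrative placeholders, not values from the paper's case study.

```python
# Quasi-steady-state monthly heating balance per EN ISO 13790: the useful
# fraction of solar and internal gains (e.g. from a sunspace/veranda) is
# weighted by a gain-utilisation factor eta that depends on the gain/loss
# ratio gamma and the building time constant tau.
def heating_need(q_ht, q_gn, tau_h=30.0, a0=1.0, tau0=15.0):
    """Monthly heating need in kWh; q_ht = heat transfer, q_gn = gains."""
    a = a0 + tau_h / tau0                    # numerical parameter of the method
    gamma = q_gn / q_ht
    if gamma == 1.0:
        eta = a / (a + 1.0)
    else:
        eta = (1.0 - gamma ** a) / (1.0 - gamma ** (a + 1.0))
    return q_ht - eta * q_gn

# Illustrative comparison: a veranda raises the solar gains for the same
# monthly heat transfer, lowering the residual heating need.
q_without = heating_need(q_ht=2000.0, q_gn=600.0)
q_with = heating_need(q_ht=2000.0, q_gn=900.0)
```

The utilisation factor is what keeps the quasi steady-state method honest: extra gains are only partially useful, so the benefit of a larger sunspace saturates instead of growing linearly.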
Germany was long considered the world's export champion, until it was overtaken by China in 2009. Both nations provide officially supported export credits to national exporting organizations, but the two systems operate differently. German export credit guarantees serve as a substitute when the private market is unable to assume the risks of exporting companies; the German export credit agency Euler Hermes is responsible for processing applications on behalf of the Federal Government. China is among the largest providers of export finance, with the institutions China EXIM and Sinosure. While Germany is bound by the OECD consensus, which defines the level playing field, Chinese export credit agencies have greater flexibility, as they are not bound by international rules or agreements.
Femtosecond (fs) time-resolved magneto-optics is applied to investigate the laser-excited ultrafast dynamics of one-dimensional nickel gratings on fused silica and silicon substrates for a wide range of periodicities Λ = 400–1500 nm. Multiple surface acoustic modes with frequencies up to a few tens of GHz are generated. Nanoscale acoustic wavelengths Λ/n have been identified as the nth spatial harmonics of the Rayleigh surface acoustic wave (SAW) and the surface skimming longitudinal wave (SSLW), with acoustic frequencies and lifetimes in agreement with theoretical calculations. Resonant magnetoelastic excitation of the ferromagnetic resonance (FMR) by the SAW's third spatial harmonic and, most interestingly, fingerprints of a parametric resonance at half the SAW frequency have been observed. Numerical solutions of the Landau–Lifshitz–Gilbert (LLG) equation, magnetoelastically driven by complex polychromatic acoustic fields, quantitatively reproduce all resonances at once. Thus, our results provide a solid experimental and theoretical basis for a quantitative understanding of ultrafast fs-laser-driven magnetoacoustics and for tailoring magnetic-grating-based metasurfaces at the nanoscale.
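The magnetization dynamics referred to above follow the Landau–Lifshitz–Gilbert equation; one standard form, with the magnetoelastic drive entering through the effective field (a common way to include the coupling, assumed here for illustration), reads:

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma \, \mathbf{m} \times \mu_0 \mathbf{H}_{\mathrm{eff}}
  + \alpha \, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t},
\qquad
\mathbf{H}_{\mathrm{eff}} = \mathbf{H}_{\mathrm{ext}} + \mathbf{H}_{\mathrm{me}}\big(\varepsilon(t)\big),
```

where \(\mathbf{m}\) is the unit magnetization, \(\gamma\) the gyromagnetic ratio, \(\alpha\) the Gilbert damping, and \(\mathbf{H}_{\mathrm{me}}\) the magnetoelastic field generated by the time-dependent strain \(\varepsilon(t)\) of the polychromatic acoustic modes. Driving the FMR with a strain field at (or at twice) its frequency is what produces the direct and parametric resonances reported above.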
AI-based Ground Penetrating Radar Signal Processing for Thickness Estimation of Subsurface Layers
(2023)
This thesis focuses on the estimation of subsurface layer thickness using Ground Penetrating Radar (GPR) A-scan and B-scan data through the application of neural networks. The objective is to develop accurate models capable of estimating the thickness of up to two subsurface layers.
Two different approaches are explored for processing the A-scan data. In the first approach, A-scans are compressed using Principal Component Analysis (PCA), and a regression feedforward neural network is employed to estimate the layers’ thicknesses. The second approach utilizes a regression one-dimensional Convolutional Neural Network (1-D CNN) for the same purpose. Comparative analysis reveals that the second approach yields superior results in terms of accuracy.
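The first approach can be sketched on synthetic data. Everything below is an illustrative stand-in: the A-scans are simulated as a surface echo plus a delayed second echo, PCA is computed via the SVD, and a plain least-squares head replaces the regression feedforward network described above.

```python
import numpy as np

# Sketch of the first approach: compress A-scans with PCA, then regress the
# layer thickness from the principal components.
rng = np.random.default_rng(0)
n_scans, n_samples = 500, 256
t = np.arange(n_samples)

# Synthetic A-scans: a surface echo plus a second echo whose delay is
# proportional to a (hypothetical) layer thickness.
thickness = rng.uniform(5.0, 50.0, n_scans)
scans = np.zeros((n_scans, n_samples))
for k in range(n_scans):
    delay = 20 + 3.0 * thickness[k]          # two-way travel time to the interface
    scans[k] = (np.exp(-0.5 * ((t - 20) / 3.0) ** 2)
                + 0.6 * np.exp(-0.5 * ((t - delay) / 3.0) ** 2))
scans += rng.normal(0, 0.01, scans.shape)

# PCA via SVD: keep the leading components as compressed features.
mean = scans.mean(axis=0)
U, S, Vt = np.linalg.svd(scans - mean, full_matrices=False)
n_comp = 20
features = (scans - mean) @ Vt[:n_comp].T

# Least-squares regression head on the PCA features (stand-in for the
# feedforward network of the first approach).
A = np.column_stack([np.ones(n_scans), features])
coef, *_ = np.linalg.lstsq(A, thickness, rcond=None)
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - thickness) ** 2)))
```

The 1-D CNN of the second approach replaces the fixed PCA basis with learned convolutional filters, which is one plausible reason for its higher accuracy on raw A-scans.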
Subsequently, the proposed 1-D CNN architecture is adapted and evaluated for Step Frequency Continuous Wave (SFCW) radar, expanding its applicability to this type of radar system. The effectiveness of the proposed network in estimating subsurface layer thickness for SFCW radar is demonstrated.
Furthermore, the thesis investigates the utilization of GPR B-scan images as input data for subsurface layer thickness estimation. A regression CNN is employed for this purpose, although the results achieved are not as promising as those obtained with the 1-D CNN using A-scan data. This disparity is attributed to the limited availability of B-scan data, as B-scan generation is a resource-intensive process.
In this paper, a concept for an anthropomorphic replacement hand cast with silicone with an integrated sensory feedback system is presented. In order to construct the personalized replacement hand, a 3D scan of a healthy hand was used to create a 3D-printed mold using computer-aided design (CAD). To allow for movement of the index and middle fingers, a motorized orthosis was used. Information about the applied force for grasping and the degree of flexion of the fingers is registered using two pressure sensors and one bending sensor in each movable finger. To integrate the sensors and additional cavities for increased flexibility, the fingers were cast in three parts, separately from the rest of the hand. A silicone adhesive (Silpuran 4200) was examined to combine the individual parts afterwards. For this, tests with different geometries were carried out. Furthermore, different test series for the secure integration of the sensors were performed, including measurements of the registered information of the sensors. Based on these findings, skin-toned individual fingers and a replacement hand with integrated sensors were created. Using Silpuran 4200, it was possible to integrate the needed cavities and to place the sensors securely into the hand while retaining full flexion using a motorized orthosis. The measurements during different loadings and while grasping various objects proved that it is possible to realize such a sensory feedback system in a replacement hand. As a result, it can be stated that the cost-effective realization of a personalized, anthropomorphic replacement hand with an integrated sensory feedback system is possible using 3D scanning and 3D printing. By integrating smaller sensors, the risk of damaging the sensors through movement could be decreased.
Linear acceleration is a key performance determinant and major training component of many sports. Although extensive research about lower limb kinetics and kinematics is available, consistent definitions of distinctive key body positions, the underlying mechanisms and their related movement strategies are lacking. The aim of this ‘Method and Theoretical Perspective’ article is to introduce a conceptual framework which classifies the sagittal plane ‘shin roll’ motion during accelerated sprinting. By emphasising the importance of the shin segment’s orientation in space, four distinctive key positions are presented (‘shin block’, ‘touchdown’, ‘heel lock’ and ‘propulsion pose’), which are linked by a progressive ‘shin roll’ motion during swing-stance transition. The shin’s downward tilt is driven by three different movement strategies (‘shin alignment’, ‘horizontal ankle rocker’ and ‘shin drop’). The tilt’s optimal amount and timing will contribute to a mechanically efficient acceleration via timely staggered proximal-to-distal power output. Empirical data obtained from athletes of different performance levels and sporting backgrounds are required to verify the feasibility of this concept. The framework presented here should facilitate future biomechanical analyses and may enable coaches and practitioners to develop specific training programs and feedback strategies to provide athletes with a more efficient acceleration technique.
The central purpose of this paper is to present a novel framework supporting the specification and the implementation of media streaming services using XML and the Java Media Framework (JMF). It provides an integrated service development environment comprising a streaming service model, a service specification language, and several implementation and retrieval tools. Our approach is based on a clear separation between a streaming service specification and its implementation by a distributed JMF application, and it can be used for different streaming paradigms, e.g. push and pull services.
The central purpose of this paper is to present a novel framework supporting the specification, implementation, and retrieval of media streaming services. It provides an integrated service development environment comprising a streaming service model, a service specification language, and several implementation and retrieval tools. Our approach is based on a clear separation between a streaming service specification and its implementation by a distributed application, and it can be used for different streaming paradigms, e.g. push and pull services.
Purpose
Although start-ups have gained increasing scholarly attention, we lack sufficient understanding of their entrepreneurial strategic posture (ESP) in emerging economies. The purpose of this study is to examine the processes of ESP of new technology venture start-ups (NTVs) in an emerging market context.
Design/methodology/approach
In line with grounded theory guidelines and the inductive research traditions, the authors adopted a qualitative approach involving 42 in-depth semi-structured interviews with Ghanaian NTV entrepreneurs to gain a comprehensive analysis at the micro-level on the entrepreneurs' strategic posturing. A systematic procedure for data analysis was adopted.
Findings
From the authors' analysis of Ghanaian NTVs, the authors derived a three-stage model to elucidate the nature and process of ESP: Phase I, spotting and exploiting market opportunities; Phase II, identifying initial advantages; and Phase III, ascertaining and responding to change.
Originality/value
The study contributes to advancing research on ESP by explicating the process through which informal ties and networks are utilised by NTVs and their founders to overcome extreme resource constraints and information vacuums in contexts of institutional voids. The authors depart from past studies in demonstrating how such ties can be harnessed by NTVs in spotting and exploiting market opportunities. On this basis, the paper makes original contributions to ESP theory and practice.
One of the main problems with seal tests is how time-consuming and expensive they are. Up to now, there have been few attempts to digitalise a test from which the seals' behaviour can be determined.
This work aims to digitally reproduce a seal test in order to extract the seals' behaviour under different operating conditions and to assess their impact on the pump's efficiency. In this thesis, owing to the Lomakin effect, the leakage and the forces applied on the stator form the basis of the analysis.
First of all, from all the literature available on very different kinds of seals and inner patterns, the most appropriate and precise data have been chosen: "Test Results for Liquid Damper Seals Using a Round-Hole Roughness Pattern for the Stator" by Fayolle, P., and "Static and Rotordynamic Characteristics of Liquid Annular Seals with Circumferentially-Grooved Stator and Smooth Rotor Using Three Levels of Circumferential Inlet-Fluid" by Torres, J.M.
From the literature, the dimensions of the test rig and the seals will be extracted in order to model them in 3D CAD software. With the 3D CAD digitalisation, the fluid volumes for a rotor-centred position (i.e., without eccentricity) will be extracted and used. The following components have been modelled:
- Smooth Annular Liquid Seal (Grooved Rotor)
- Grooved Annular Liquid Seal (Smooth Rotor)
- Round-Hole Pattern Annular Liquid Seal (𝐻𝑑=2 𝑚𝑚) (Smooth Rotor)
- Straight Honeycomb Annular Liquid Seal (Smooth Rotor)
- Convergent Honeycomb Annular Liquid Seal (Smooth Rotor)
- Smooth Rotor / Smooth Annular Liquid Seal (Smooth Rotor)
As there is just one test rig, all the components have been adapted to the different dimensions of the seals by referencing some measurements. This makes it possible to test any seal with the same test rig.
Afterwards, a CFD simulation will be used to obtain the leakage and the stator forces. The parameters to be varied are the rotational speed (2000 rpm, 4000 rpm, and 6000 rpm) and the pressure drop (2.068 bar, 4.137 bar, 6.205 bar, and 8.274 bar).
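The quoted speeds and pressure drops define a 3 × 4 matrix of operating points; a small sketch like the following (an illustrative way to batch the CFD runs, not part of the thesis workflow) enumerates them so that leakage and stator-force results can be collected per combination.

```python
from itertools import product

# Enumerate every combination of rotational speed and pressure drop, so the
# CFD runs can be batched and their results keyed by operating point.
speeds_rpm = [2000, 4000, 6000]
pressure_drops_bar = [2.068, 4.137, 6.205, 8.274]

operating_points = [
    {"speed_rpm": n, "dp_bar": dp}
    for n, dp in product(speeds_rpm, pressure_drops_bar)
]
```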
Those results will be compared with the ones from the literature, which will determine whether the digitalisation can be validated. Although the relative error is higher than 5%, the tendency is the same, and it is thought that by adjusting some parameters the test results can be brought even closer to those in the literature.
The evolution of cellular networks from the first generation (1G) to the fourth generation (4G) was driven by the demand for user-centric downlink capacity, technically referred to as Mobile Broadband (MBB). With the fifth generation (5G), Machine Type Communication (MTC) has been added to the target use cases, and the upcoming generation of cellular networks is expected to support it. However, such support requires improvements in the existing technologies in terms of latency, reliability, energy efficiency, data rate, scalability, and capacity.
Originally, MTC was designed for low-bandwidth, high-latency applications such as environmental sensing and smart dustbins. Nowadays, there is additional demand for applications with low-latency requirements. Besides other well-known challenges for recent cellular networks, such as data rate, energy efficiency, and reliability, the latency is also not suitable for mission-critical applications such as real-time control of machines, autonomous driving, and the tactile Internet. Therefore, in the currently deployed cellular networks, there is a need to reduce the latency and increase the reliability offered by the networks to support use cases such as cooperative autonomous driving or factory automation, which are grouped under the denomination Ultra-Reliable Low-Latency Communication (URLLC).
This thesis is primarily concerned with the latency in the Universal Terrestrial Radio Access Network (UTRAN) of cellular networks. The overall work is divided into five parts. The first part presents the state of the art for cellular networks. The second part contains a detailed overview of URLLC use cases and the requirements that must be fulfilled by cellular networks to support them. The work in this thesis is done as part of a collaboration project between the IRIMAS lab at the Université de Haute-Alsace, France, and the Institute for Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany; the selected URLLC use cases are part of the research interests of both partner institutes. The third part presents a detailed study and evaluation of user-plane and control-plane latency mechanisms in the current generation of cellular networks. The evaluation and analysis of these latencies were conducted with the open-source ns-3 simulator by exploring a broad range of parameters, including traffic models, channel access parameters, realistic propagation models, and a broad set of cellular network protocol stack parameters. These simulations were performed with low-power, low-cost, and wide-range devices, commonly called IoT devices, as standardized for cellular networks. These devices use either LTE-M or Narrowband-IoT (NB-IoT) technologies, which are designed for connected things and differ mainly in the provided bandwidth and other characteristics such as coding scheme and device complexity.
The fourth part of this thesis presents a study, an implementation, and an evaluation of latency reduction techniques that target the different layers of the currently used Long Term Evolution (LTE) network protocol stack. These techniques, based on Transmission Time Interval (TTI) reduction and Semi-Persistent Scheduling (SPS), are implemented in the ns-3 simulator and evaluated through realistic simulations for a variety of low-latency use cases focused on industrial automation and vehicular networking. Since ns-3 does not support NB-IoT in its current release, an NB-IoT extension of the LTE module was developed to test the proposed latency reduction techniques in cellular networks. This makes it possible to explore deployment limitations and issues.
In the last part of this thesis, a flexible deployment framework for the proposed latency reduction techniques, called Hybrid Scheduling and Flexible TTI, is presented, implemented, and evaluated through realistic simulations. The simulation-based evaluation shows that the improved LTE network proposed and implemented in the simulator can support low-latency applications with low-cost, long-range, and narrowband devices. The work in this thesis points out potential improvement techniques and their deployment issues, and it paves the way towards the support of URLLC applications in upcoming cellular networks.
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, whereas fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low Latency Communication (URLLC). However, it is interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement, and evaluate latency reduction techniques for latency-critical applications. The two techniques we identified, namely the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay in processing received information in both the control and data planes. We then analyze the potential of these latency reduction techniques for URLLC applications. To this end, we implement them in the Long Term Evolution (LTE) module of the ns-3 simulator and evaluate their performance in two different application fields: industrial automation and intelligent transportation systems. Our detailed simulation results indicate that LTE can satisfy the low-latency requirements for a large choice of use cases in each field.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator 3 (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler in its Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of the resource scheduling method on uplink latency.
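The difference between the two scheduling methods can be illustrated with a back-of-the-envelope uplink latency budget. All component values below are simplified, textbook-style assumptions (not results from the NS3 evaluation): dynamic scheduling pays for the scheduling-request/grant exchange, while SPS transmits directly in a pre-allocated periodic grant.

```python
# Illustrative LTE uplink latency budget (in ms). The component values are
# simplified assumptions for comparison purposes only.
def dynamic_scheduling_latency(sr_period=10.0, tti=1.0):
    wait_sr = sr_period / 2.0        # average wait for a scheduling-request slot
    sr_tx = tti                      # transmit the scheduling request
    grant = 3.0 * tti                # eNB processing + grant reception
    data_tx = tti                    # uplink data transmission
    proc = 4.0 * tti                 # decoding/processing at the eNB
    return wait_sr + sr_tx + grant + data_tx + proc

def sps_latency(sps_period=10.0, tti=1.0):
    wait_grant = sps_period / 2.0    # average wait for the pre-allocated grant
    data_tx = tti
    proc = 4.0 * tti
    return wait_grant + data_tx + proc   # no SR/grant exchange needed

lat_dyn = dynamic_scheduling_latency()
lat_sps = sps_latency()
lat_sps_short_tti = sps_latency(sps_period=2.0, tti=0.5)  # SPS combined with a short TTI
```

Even in this crude model, SPS removes the request/grant round trip, and combining it with a shorter TTI and a denser grant period compounds the gain, which is the intuition behind the combined techniques evaluated in this line of work.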
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low Latency Communication (URLLC) applications on cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, such as collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. We evaluate it through simulations to further detail future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper establish the limits of different parameter sets with regard to the requirements of V2X-based applications, providing a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions.
Next-generation cellular networks are expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart waste bins, but there is additional demand from applications with low-latency requirements, such as industrial automation and driverless cars. Improvements to 4G Long Term Evolution (LTE) networks are required on the way to next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator ns-3 for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low Latency Communication (URLLC) networks.
On a regular basis, we hear of well-known online services that have been abused or compromised as a result of data theft. Because insecure applications jeopardize users' privacy as well as the reputation of corporations and organizations, they must be effectively secured from the outset of the development process. The limited expertise and experience of the parties involved, such as web developers, is frequently cited as a cause of insecure programs. Consequently, these parties rarely have a full picture of the security-related decisions that must be made, nor do they accurately understand how these decisions affect the implementation.
Selecting the tools and procedures best suited to a given situation is a critical decision when protecting an application against vulnerabilities. Even when security standards are adhered to, poor choices here inadvertently result in web applications that are insufficiently secured. JavaScript is heavily relied on as a mainstream programming language for web applications, with several new JavaScript frameworks being released every year.
JavaScript is used both on the server side in web application development and on the client side in web browsers.
However, JavaScript web programming is based on a style in which the application developer can, and frequently must, integrate various pieces of third-party code. This potent combination has resulted in a situation today where security issues are frequently exploited; left unchecked, such vulnerabilities can compromise an entire server. Even though there are numerous ad hoc security solutions for web browsers, client-side attacks remain popular. The issue is significantly worse on the server side, where security technologies for server-side JavaScript application frameworks are nearly non-existent.
Consequently, this thesis focuses on the server-side aspect of JavaScript; the development and evaluation of robust server-side security technologies for JavaScript web applications. There is a clear need for robust security technologies and security best practices in server-side JavaScript that allow fine-grained security.
More than ever, however, there is a requirement to reduce the associated risks without hindering the functionality of the web application.
This is the problem tackled in this thesis: the development of sound security practices and robust security technologies for JavaScript web applications, specifically on the server side, that offer adequate security guarantees without putting too many constraints on their functionality.
Integration of BACnet OPC UA Devices Using a Java OPC UA SDK Server with a BACnet Open-Source Library
(2014)
When people with hearing loss are provided with different devices in each ear, these devices usually have different processing latencies. This leads to static temporal offsets between both ears in the order of several milliseconds. This thesis measured effects of such offsets in stimulation timing on mechanisms of binaural hearing, such as sound localization and speech understanding in noise in hearing-impaired and normal-hearing listeners.
Subjects utilizing a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral ear suffer from mismatches in stimulation timing due to different processing latencies of both devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for this auditory nerve stimulation mismatch by compensating for the device delay mismatch can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated whether this fitting parameter can be readily used in clinical settings and determined the effects of familiarization to a compensated device delay mismatch over a period of 3–4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without a compensation of the device delay mismatch. The results showed that sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error improved by 18%, although this improvement did not reach statistical significance. The effects were acute and did not further improve after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
Users of a cochlear implant (CI) in one ear, who are provided with a hearing aid (HA) in the contralateral ear, so-called bimodal listeners, are typically affected by a constant and relatively large interaural time delay offset due to differences in signal processing and differences in stimulation. For HA stimulation, the cochlear travelling wave delay is added to the processing delay, while for CI stimulation, the auditory nerve fibers are stimulated directly. In the case of MED-EL CI systems in combination with different HA types, the CI stimulation precedes the acoustic HA stimulation by 3 to 10 ms. A self-designed, battery-powered, portable, and programmable delay line was applied to the CI to reduce the device delay mismatch in nine bimodal listeners. We used an A-B-B-A test design and determined whether sound source localization improves when the device delay mismatch is reduced by delaying the CI stimulation by the HA processing delay (τ_HA). Results revealed that every subject in our group of nine bimodal listeners benefited from the approach. The root-mean-square error of sound localization improved significantly from 52.6° to 37.9°. The signed bias also improved significantly from 25.2° to 10.5°, with positive values indicating a bias toward the CI. Furthermore, two other delay values (τ_HA − 1 ms and τ_HA + 1 ms) were applied, and with the latter value, the signed bias was further reduced in some test subjects. We conclude that sound source localization accuracy in bimodal listeners improves instantaneously and sustainably when the device delay mismatch is reduced.
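The two reported metrics, root-mean-square error and signed bias, can be computed directly from target and response azimuths. A minimal sketch with hypothetical localization data (the values are illustrative, not from the study):

```python
import math

def rms_error(target_deg, response_deg):
    """Root-mean-square localization error in degrees."""
    return math.sqrt(sum((r - t) ** 2 for t, r in zip(target_deg, response_deg))
                     / len(target_deg))

def signed_bias(target_deg, response_deg):
    """Mean signed error; a positive value indicates responses shifted
    toward positive azimuths (in the study: toward the CI side)."""
    return sum(r - t for t, r in zip(target_deg, response_deg)) / len(target_deg)

targets   = [-60, -30, 0, 30, 60]
responses = [-20, 10, 25, 55, 80]   # hypothetical responses pulled toward one side
print(rms_error(targets, responses), signed_bias(targets, responses))
```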
In asymmetric treatment of hearing loss, processing latencies of the modalities typically differ. This often alters the reference interaural time difference (ITD) (i.e., the ITD at 0° azimuth) by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), in normal-hearing subjects was explored. Speech reception thresholds (SRT) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25 and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separation of target and masker were manipulated to measure the effect of a reference ITD on unmasking by A) ITDs and interaural level differences (ILDs), B) ITDs only and C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions A) and B) when the reference ITD was increased: in condition A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, in condition B) from 5.5 dB to 1.1 dB. In condition C) no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural processing latency differences should be considered in asymmetric treatment of hearing loss.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinders integration with IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey of state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
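The bandwidth inefficiency of flooding over an RNS-built topology can be sketched in a few lines: each peer forwards a new block to all its neighbors, so every edge beyond a spanning tree produces a duplicate delivery. A toy simulation under assumed parameters (50 peers, 4 random neighbors each; not a model of any specific blockchain client):

```python
import random
from collections import deque

def flood(adj, origin):
    """Propagate a block by flooding; count transmissions and duplicates."""
    received = {origin}
    queue = deque([origin])
    transmissions = 0
    duplicates = 0
    while queue:
        node = queue.popleft()
        for peer in adj[node]:
            transmissions += 1
            if peer in received:
                duplicates += 1      # peer already has the block: wasted bandwidth
            else:
                received.add(peer)
                queue.append(peer)
    return transmissions, duplicates

def random_neighbor_graph(n, k, seed=1):
    """Each peer picks k random neighbors (RNS); links are bidirectional."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for peer in rng.sample([j for j in range(n) if j != i], k):
            adj[i].add(peer)
            adj[peer].add(i)
    return adj

adj = random_neighbor_graph(50, 4)
tx, dup = flood(adj, origin=0)
print(f"{tx} transmissions, {dup} duplicates for 49 useful deliveries")
```

Only 49 transmissions carry new information; everything else is duplicate traffic, which is the overhead that topology-aware optimization aims to cut.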
This paper proposes a suitable thermal model for the cooling-down process of a single-piston, air-cooled reciprocating compressor. To this end, a thermographic camera is used to record the temperature at different measuring points across different operating conditions. The data are then analyzed with statistical tools and graphical visualization, and the thermal phenomena involved are characterized with respect to the compressor's geometry. Finally, based on this analysis and the identified thermal phenomena, the optimal thermal model is selected. This paper is part of a larger project whose final step is to simulate the compressor and assess the accuracy of the proposed model.
Although short-range wireless communication explicitly targets local and very regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. In particular, the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits in range, robustness, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies behind Long Range (LoRa™), describes the outdoor setup at the Laboratory for Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results, and discusses the strengths and weaknesses of this technology.
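The relationship between link budget and range can be sketched numerically. The figures below (TX power, antenna gains, receiver sensitivity) are generic LoRa-like placeholders, not values from this contribution, and free-space path loss gives an optimistic line-of-sight upper bound rather than a realistic outdoor range:

```python
import math

def link_budget_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, sensitivity_dbm):
    """Maximum tolerable path loss between transmitter and receiver."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - sensitivity_dbm

def fspl_range_km(max_path_loss_db, freq_mhz):
    """Invert the free-space path loss formula
    FSPL[dB] = 32.44 + 20*log10(d[km]) + 20*log10(f[MHz])."""
    return 10 ** ((max_path_loss_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# Hypothetical figures: +14 dBm TX, 2 dBi antennas on both sides,
# -137 dBm sensitivity (slow spreading factor), 868 MHz band.
budget = link_budget_db(14, 2, 2, -137)
print(budget, fspl_range_km(budget, 868))   # free-space bound; real terrain is far worse
```

Because sensitivity improves with slower modulation, the same transceiver trades data rate for link budget, which is exactly the LoRa design point.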
Ripple: Overview and Outlook
(2015)
Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second-highest market capitalization after Bitcoin, there are surprisingly no studies that analyze the provisions of Ripple.
In this paper, we study the current deployment of the Ripple payment system. For that purpose, we give an overview of the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. To the best of our knowledge, this is the first contribution that sheds light on the current deployment of the Ripple system.
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
The progress in machine learning has led to advanced deep neural networks that are widely used in computer vision tasks and safety-critical applications. The automotive industry in particular has been transformed by the integration of deep learning techniques, which contribute to the realization of autonomous driving systems. Object detection is a crucial element of autonomous driving: it allows vehicles to perceive and identify their surroundings, detecting objects such as pedestrians, vehicles, road signs, and obstacles, and thereby contributes to vehicular safety and operational efficiency. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies, enabling vehicles to make real-time decisions based on their understanding of the environment. However, the increasing reliance on deep neural networks for object detection and autonomous driving has exposed potential vulnerabilities in these systems. Recent research has highlighted their susceptibility to adversarial attacks: carefully designed inputs that exploit weaknesses in the deep learning models underlying object detection. Successful attacks can cause misclassifications and critical errors, significantly compromising the reliability and safety of autonomous vehicles. In this study, we analyze adversarial attacks on state-of-the-art object detection models, creating adversarial examples to test the models' robustness.
We also check whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to determine how effective they are at protecting deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies, highlighting how they can be implemented in real-world situations.
In this paper, we integrate the ideas of network coding and relays into an existing practical network architecture used in a wireless network scenario. Specifically, we use the COPE architecture to test our ideas. Since previous work has focused on the communication aspect at the physical layer, we take it one step further by including the MAC layer. Our approach builds on information-theoretic concepts developed by Shannon in order to reliably apply network coding and increase the net throughput.
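The core throughput gain of COPE-style network coding comes from a relay XOR-ing packets it has overheard and broadcasting the combination once, instead of forwarding each packet separately. A minimal sketch of the two-node exchange through a relay:

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob exchange packets via relay R. Instead of two separate
# forwards, R broadcasts a single XOR-coded packet.
p_alice = b"hello from A"
p_bob   = b"hello from B"

coded = xor_bytes(p_alice, p_bob)        # one broadcast by the relay

# Each side decodes using the packet it already knows (its own).
decoded_at_bob   = xor_bytes(coded, p_bob)    # recovers Alice's packet
decoded_at_alice = xor_bytes(coded, p_alice)  # recovers Bob's packet
print(decoded_at_bob, decoded_at_alice)
```

One transmission thus delivers useful information to both receivers, which is where the net throughput increase originates.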
Supporting the COVID-19 response in Asia and the Pacific—The role of the Asian Development Bank.
(2020)
The COVID-19 pandemic has affected all countries of the Asia-Pacific region over the last few months, with far-reaching economic, health, and social consequences. To counter the impact, governments have accelerated their health spending and announced large macroeconomic stabilization and stimulus policy packages. As with past disasters and crises in the region, the Asian Development Bank has reacted with a number of targeted support interventions since the very early stages of the outbreak. In mid-April 2020, the Bank then put forward a comprehensive COVID-19 response package totalling $20 billion to support its member countries, resting on four pillars.
The last few months have proven that multilateral development banks like the Asian Development Bank have the ability to respond quickly and to mobilize significant resources for a global emergency like COVID-19. While this financial support is urgently needed at this point, attention will need to be paid to how debt sustainability for low- and middle-income countries can be ensured in the coming years. Given the unprecedented scale of, and uncertainty around, the COVID-19 pandemic, it may offer a window of opportunity to redesign the way development finance is coordinated and delivered. This also includes a chance to "build back better" and to focus on a sustainable, resilient, and green recovery.
Experimental Investigation of the Air Exchange Effectiveness of Push-Pull Ventilation Devices
(2020)
The increasing installation numbers of ventilation units in residential buildings are driven by legal objectives to improve their energy efficiency. The dimensioning of a ventilation system for nearly zero-energy buildings is usually based on the air flow rate desired by the clients or requested by technical regulations. However, this does not necessarily lead to a system actually able to renew the air volume of the living space effectively. In recent years, decentralised systems with an alternating operation mode and fairly good energy efficiency have entered the market, raising the following question: does this operation mode allow an efficient air renewal? This question can be answered experimentally by performing a tracer gas analysis. In the presented study, a total of 15 preliminary tests were carried out in a climatic chamber representing a single room equipped with two push-pull devices. The tests include summer, winter, and isothermal supply air conditions, since this parameter variation has so far been missing for push-pull devices. Further investigations are dedicated to the effect of thermal convection due to human heat dissipation on the room air flow. Depending on these boundary conditions, the determined air exchange efficiency varies, falling short of the expected range 0.5 < ε_a < 1 in almost all cases and indicating insufficient air exchange, including short-circuiting. Local air exchange values suggest inhomogeneous air renewal depending on the distance to the indoor apertures as well as on the temperature gradient between indoors and outdoors. The tested measurement set-up is applicable to field measurements.
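For reference, the air exchange efficiency ε_a is commonly defined as the nominal time constant divided by twice the room mean age of air, with ε_a = 0.5 for complete mixing and ε_a = 1 for ideal piston flow, and the mean age of air is obtained from the tracer gas measurement. A sketch with a synthetic tracer-gas decay for a perfectly mixed room (so ε_a should come out near 0.5; the data are simulated, not from the study):

```python
import math

def local_mean_age(times_h, conc):
    """Mean age of air from a tracer-gas step-down (decay) measurement:
    tau = integral of C(t) dt / C(0), evaluated with the trapezoidal rule."""
    area = sum((conc[i] + conc[i + 1]) / 2 * (times_h[i + 1] - times_h[i])
               for i in range(len(times_h) - 1))
    return area / conc[0]

def air_exchange_efficiency(tau_nominal_h, tau_room_mean_h):
    """epsilon_a = tau_n / (2 * <tau>); 0.5 = fully mixed, 1.0 = piston flow."""
    return tau_nominal_h / (2 * tau_room_mean_h)

# Synthetic decay: exponential with a 2 h time constant; nominal time
# constant also 2 h (air change rate 0.5 1/h) -> fully mixed room.
t = [0.1 * i for i in range(200)]
c = [math.exp(-ti / 2.0) for ti in t]
tau = local_mean_age(t, c)
print(tau, air_exchange_efficiency(2.0, tau))
```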
We present a 3D simulation approach utilising the diffuse interface representation of the phase-field method combined with a heat transfer equation to analyse the thermal conductivity in air-filled aluminium foams with complex cellular structures of different porosity. Algorithmic methods are introduced to create synthetic open-cell foam structures and to compute the thermal conductivity by means of phase-field modelling. A material law for the effective thermal conductivity is derived by determining the appropriate exponent depending on the relative density in the system. The results are compared with the thermal conductivity in massive aluminium and in pure air.
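A material law of the form k_eff/k_solid = ρ_rel^n can be recovered from simulated conductivity data by a log-log least-squares fit of the exponent. A sketch with synthetic data (the exponent 1.8 is a placeholder chosen for illustration, not the paper's result):

```python
import math

def fit_exponent(rel_density, k_ratio):
    """Least-squares slope of log(k_eff/k_solid) versus log(rho_rel),
    i.e. the exponent n in the power law k_eff/k_solid = rho_rel**n."""
    xs = [math.log(r) for r in rel_density]
    ys = [math.log(k) for k in k_ratio]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))

# Synthetic data generated with exponent 1.8; the fit recovers it exactly.
rho = [0.05, 0.1, 0.2, 0.4]
k = [r ** 1.8 for r in rho]
print(fit_exponent(rho, k))
```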
Much of the research in the field of audio-based machine learning has focused on recreating human speech via feature extraction and imitation, known as deepfakes. The current state of affairs has prompted a look into other areas, such as the recognition of recording devices, and potentially speakers, by analysing sound files alone. Segregation and feature extraction are at the core of this approach.
This research focuses on determining whether a recorded sound can reveal the recording device with which it was captured. Each specific microphone manufacturer and model, among other characteristics and imperfections, can have subtle but compounding effects on the results, whether it be differences in noise, or the recording tempo and sensitivity of the microphone while recording. By studying these slight perturbations, it was found to be possible to distinguish between microphones based on the sounds they recorded.
After the recording, pre-processing, and feature extraction phases were completed, the prepared data was fed into several different machine learning algorithms, with results ranging from 70% to 100% accuracy and showing Multi-Layer Perceptron and Logistic Regression to be the most effective for this type of task.
This was further extended to distinguishing between two microphones of the same make and model. The successful identification of identical microphone models suggests that the small deviations in their manufacturing process are enough to uniquely distinguish them and potentially to target the individuals using them. This, however, does not take into account any form of compression applied to the sound files, as compression may alter or degrade some or most of the distinguishing features necessary for this experiment.
Building on prior research in the area, such as the work by Das et al., in which different acoustic features were explored and assessed for their ability to uniquely fingerprint smartphones, more concrete results, along with the methodology by which they were achieved, are published in this project's publicly accessible code repository.
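The pipeline described above, feature extraction followed by a classifier, can be sketched end to end. The two features and the nearest-centroid classifier below are deliberately simplified stand-ins for the spectral features and MLP/Logistic Regression models used in the study, and the two "microphones" are simulated as different noise levels:

```python
import math, random

def features(signal):
    """Toy per-recording features: RMS level and zero-crossing rate,
    standing in for the richer spectral features used for fingerprinting."""
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)
    return (rms, zcr)

def nearest_centroid(train, labels, sample):
    """Classify by the closest per-class mean feature vector."""
    centroids = {}
    for lab in set(labels):
        feats = [f for f, l in zip(train, labels) if l == lab]
        centroids[lab] = tuple(sum(c) / len(feats) for c in zip(*feats))
    return min(centroids, key=lambda lab: math.dist(centroids[lab], sample))

# Two hypothetical microphones: mic B adds more noise (higher RMS and ZCR).
rng = random.Random(0)
def record(mic_noise):
    return [math.sin(0.1 * i) + rng.gauss(0, mic_noise) for i in range(1000)]

X = [features(record(0.05)) for _ in range(5)] + [features(record(0.5)) for _ in range(5)]
y = ["mic_A"] * 5 + ["mic_B"] * 5
print(nearest_centroid(X, y, features(record(0.5))))
```

The point of the sketch is only that device-specific perturbations show up as systematic shifts in feature space; the real experiment relies on far subtler differences.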
Bud-type carbon nanohorns (CNHs) are composed of carbon and have a closed conical tip at one end protruding from an aggregate structure. By employing a simple oxidation process in a CO2 atmosphere, it is possible to open the CNH tips, which increases their specific surface area fourfold. These tip-opened CNHs combine the microporous nature of activated carbons with the crystalline mesoporous character of carbon nanotubes. The results for high-pressure CO2 gas adsorption on tip-opened CNHs are reported herein for the first time and are found to be superior to traditional CO2 adsorbents such as zeolites. The modified CNHs are also found to be promising materials for lithium-ion batteries, with performance on a par with carbon nanotubes and carbon nanofibers.
Gas adsorption studies of CO2 and N2 in spatially aligned double-walled carbon nanotube arrays
(2013)
Gas adsorption studies (CO2 and N2) over a wide pressure range on vertically, highly aligned dense double-walled carbon nanotube arrays of high purity and high specific surface area are reported. At high pressures, the adsorption capacity of these materials was found to be comparable to those of metal-organic frameworks and mesoporous molecular sieves. These highly aligned CNT arrays were chemically modified by treating with oxygen plasma and structurally modified by decreasing the diameter of the individual carbon nanotubes. Oxygen plasma treatment led to grafting of a large number of C–O functional groups onto the CNT surface, which further increased the gas adsorption capacity. It was found that gas adsorption depends on tube diameter and increases as the diameter of the individual CNTs in the bundles decreases. As a result of our studies, we found that at lower pressures plasma-functionalized carbon nanotubes exhibit better adsorption characteristics, whereas at higher pressures smaller-diameter carbon nanotube structures exhibit better gas adsorption characteristics.
Many different methods, such as screen printing, gravure, flexography, and inkjet, have been employed to print electronic devices. Depending on the type and performance of the devices, processing is done at low or high temperature using precursor- or particle-based inks. As a result of the processing details, devices can be fabricated on flexible or non-flexible substrates, depending on their temperature stability. Furthermore, in order to reduce the operating voltage, printed devices rely on high-capacitance electrolytes rather than on dielectrics. Printing resolution and speed are two of the major challenging parameters for printed electronics. High-resolution printing produces small printed devices and high integration densities with minimum materials consumption. However, most printing methods have resolutions between 20 and 50 μm. Printing resolutions close to 1 μm have also been achieved with optimized process conditions and better printing technology.
The final physical dimensions of the devices pose severe limitations on their performance. For example, the channel lengths being of this dimension affect the operating frequency of the thin-film transistors (TFTs), which is inversely proportional to the square of channel length. Consequently, short channels are favorable not only for high-frequency applications but also for high-density integration. The need to reduce this dimension to substantially smaller sizes than those possible with today’s printers can be fulfilled either by developing alternative printing or stamping techniques, or alternative transistor geometries. The development of a polymer pen lithography technique allows scaling up parallel printing of a large number of devices in one step, including the successive printing of different materials. The introduction of an alternative transistor geometry, namely the vertical Field Effect Transistor (vFET), is based on the idea to use the film thickness as the channel length, instead of the lateral dimensions of the printed structure, thus reducing the channel length by orders of magnitude. The improvements in printing technologies and the possibilities offered by nanotechnological approaches can result in unprecedented opportunities for the Internet of Things (IoT) and many other applications. The vision of printing functional materials, and not only colors as in conventional paper printing, is attractive to many researchers and industries because of the added opportunities when using flexible substrates such as polymers and textiles. Additionally, the reduction of costs opens new markets. The range of processing techniques covers laterally-structured and large-area printing technologies, thermal, laser and UV-annealing, as well as bonding techniques, etc. 
Materials such as conducting, semiconducting, dielectric, and sensing materials, rigid and flexible substrates, protective coatings, organic, inorganic, and polymeric substances, and energy conversion and energy storage materials pose an enormous challenge for integration into complex devices.
The suffix-free-prefix-free hash function construction and its indifferentiability security analysis
(2012)
In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value (IV) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) strengthening in the padding functionality of the hash functions. We propose a generic n-bit-iterated hash function framework based on an n-bit compression function called suffix-free-prefix-free (SFPF) that works for arbitrary IVs and does not possess MD strengthening. We formally prove that SFPF is indifferentiable from a random oracle (RO) when the compression function is viewed as a fixed input-length random oracle (FIL-RO). We show that some hash function constructions proposed in the literature fit in the SFPF framework while others that do not fit in this framework are not indifferentiable from a RO. We also show that the SFPF hash function framework with the provision of MD strengthening generalizes any n-bit-iterated hash function based on an n-bit compression function and with an n-bit chaining value that is proven indifferentiable from a RO.
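For context, a plain n-bit iterated (Merkle–Damgård) hash with MD strengthening, the baseline construction that SFPF generalizes, looks as follows. The compression function here is an illustrative stand-in (truncated SHA-256) for the fixed input-length random oracle of the analysis, and the actual SFPF padding differs from this plain scheme:

```python
import hashlib

def compression(chaining, block):
    """Fixed-input-length compression function, modeled with SHA-256
    (an illustrative stand-in for the paper's ideal FIL-RO)."""
    return hashlib.sha256(chaining + block).digest()

def md_hash(message, iv=b"\x00" * 32, block_size=64):
    """Plain Merkle-Damgard iteration with MD strengthening: pad with 0x80,
    zeros, and the 8-byte message length, then chain the compression
    function over the blocks starting from the IV."""
    length = len(message).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % block_size) + length
    h = iv
    for i in range(0, len(padded), block_size):
        h = compression(h, padded[i:i + block_size])
    return h

print(md_hash(b"abc").hex())
```

The indifferentiability results discussed above concern exactly which padding/iteration variants of this pattern behave like a random oracle when the IV is not fixed.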
This paper presents a system that uses a multi-stage AI analysis method for determining the condition and status of bicycle paths using machine learning methods. The approach for analyzing bicycle paths includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage of the analysis. This approach employs a low-threshold data collection method using smartphone-generated video data for image recognition, in order to automatically capture and classify surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven to be effective in image recognition tasks and are particularly well-suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and reliably classify them.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
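The pattern-detection ability of CNNs mentioned above rests on three building blocks: convolution, a nonlinearity, and pooling. A dependency-free sketch on a hypothetical grayscale surface patch (the kernel and values are illustrative and unrelated to the trained model):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation), the core CNN operation
    for detecting local patterns such as edges or crack boundaries."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Nonlinearity: keep positive responses, zero out the rest."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling: keeps the strongest response, halves the resolution."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A vertical-edge kernel responds strongly where the "surface" changes,
# e.g. at a crack boundary in a hypothetical 6x6 grayscale patch.
patch = [[0, 0, 0, 9, 9, 9]] * 6
edge_kernel = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
print(max_pool2(relu(conv2d(patch, edge_kernel))))
```

A trained CNN learns many such kernels from labeled surface images instead of hand-coding them; the stacked layers are what make the classification reliable.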
Since the dawn of space exploration, space communications have been among the strongest driving applications for the development of error-correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is somewhat different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised by an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
NEXCODE is a project promoted by the European Space Agency aimed at the research, design, development, and demonstration of a receiver chain for telecommand links in space missions, including the presence of new short low-density parity-check codes for error correction. These codes offer excellent error-rate performance but also pose new challenges with regard to synchronization and implementation. In this paper, after a short review of the results obtained through numerical simulations, we present an overview of the breadboard designed for practical testing and the test plan proposed for the verification of the breadboard and the validation of the new codes and novel synchronization techniques under relevant operating conditions.
The three lines of defense model (TLoD) aims to provide a simple and effective way to improve coordination and enhance communication on risk management and control by clarifying the essential roles and duties of different governance functions. Without effective coordination of these governance functions, work can be duplicated or key risks may be missed or misjudged. To address these challenges, professional standards recommend that the chief audit executive (CAE) coordinate activities with other internal and external governance stakeholders (assurance providers). We consider survey responses from 415 CAEs from Austria, Germany, and Switzerland to analyze determinants that help to implement the TLoD without any challenges and to explore the extent of (coordination) challenges between the internal audit function and the respective governance stakeholders. Our results show a great variance in the extent of coordination challenges depending on different determinants and the respective governance stakeholder.
The invention relates to the field of transporting flat substrates such as silicon substrates. In particular, the invention relates to particularly protective and continuous transport of such substrates. The method according to the invention is used to transport a vertically aligned flat substrate (1) comprising two flat sides in a transport direction inside a transport channel (2) that is at least partially filled with a liquid medium (F), wherein said liquid medium (F) flows against at least one of the flat sides of the substrate (1) and has a supporting component, which lifts the sum of the weight and buoyancy force of the substrate (1), and an advancing component, which is directed in the transport direction, so that the substrate (1) is supported and transported without mechanical aids. The device according to the invention comprises a transport channel (2) for accommodating a liquid medium (F) and a substrate (1) to be guided in vertical alignment within said medium (F), wherein the transport channel (2) has inflow openings (5) in the walls (3, 4).
A two-dimensional single-phase model is developed for the steady-state and transient analysis of polymer electrolyte membrane fuel cells (PEMFC). Based on diluted and concentrated solution theories, viscous flow is introduced into a phenomenological multi-component modeling framework in the membrane. Characteristic variables related to the water uptake are discussed. A Butler–Volmer formulation of the current-overpotential relationship is developed based on an elementary mechanism of electrochemical oxygen reduction. Validated against published V–I experiments, the model is then used to analyze the effects of operating conditions on current output and water management, especially the net water transport coefficient along the channel. For a power-generating PEMFC, the long-channel configuration, operated in counterflow mode with proper gas flow rate and humidity, is helpful for internal humidification and anode water removal. In the time domain, a typical transient process with a closed anode is also investigated.
State-of-the-art electrochemical impedance spectroscopy (EIS) calculations have not yet been based on fully multi-dimensional models. For a polymer electrolyte membrane fuel cell (PEMFC) with a long flow channel, the impedance plot shows a multi-arc characteristic, and some impedance arcs can merge. By using a step excitation/Fourier transform algorithm, an EIS simulation is implemented for the first time based on the full 2D PEMFC model presented in the first part of this work. All the dominant transient behaviors can be captured. A novel methodology called 'configuration of system dynamics', which is suitable for any electrochemical system, is then developed to resolve the physical meaning of the impedance spectra. In addition to the high-frequency arc due to charge transfer, the Nyquist plots contain additional medium/low-frequency arcs due to mass transfer in the diffusion layers and along the channel, as well as a low-frequency arc resulting from water transport in the membrane. In some cases, the impedance spectra appear partly inductive due to water transport, which demonstrates the complexity of the water management of PEMFCs and the necessity of physics-based calculations.
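The step excitation/Fourier transform idea can be sketched as follows, assuming (purely for illustration) a simple Randles-type equivalent circuit rather than the full 2D PEMFC model: apply a small current step, record the voltage response, differentiate both signals so that the step becomes an impulse with a flat spectrum, and divide the spectra to obtain the impedance.

```python
import numpy as np

# Illustrative equivalent-circuit parameters (not from this work):
# ohmic resistance, charge-transfer resistance, double-layer capacitance.
R_ohm, R_ct, C_dl = 0.01, 0.05, 2.0      # ohm, ohm, F
dt, n = 1e-3, 2 ** 16
t = np.arange(n) * dt

# Small current step and the corresponding analytic voltage response
i_step = 0.1 * np.ones(n)                                  # A
v_resp = i_step * (R_ohm + R_ct * (1 - np.exp(-t / (R_ct * C_dl))))

# Differentiating turns the step into an impulse whose spectrum is flat,
# so the ratio of the spectra is the impedance transfer function.
di = np.diff(i_step, prepend=0.0)
dv = np.diff(v_resp, prepend=0.0)
Z = np.fft.rfft(dv) / np.fft.rfft(di)
freqs = np.fft.rfftfreq(n, dt)
```

For this circuit the recovered spectrum shows the expected behavior: the DC limit approaches R_ohm + R_ct and the imaginary part is negative (capacitive) over the charge-transfer arc.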
Photovoltaic-heat pump (PV-HP) combinations with battery and energy management systems are becoming increasingly popular due to their ability to increase self-sufficiency (autarky) and the utilization of self-generated PV electricity. This trend is driven by the ongoing electrification of the heating sector and the growing disparity between rising electricity costs and falling feed-in tariffs in Germany. Smart control strategies can be employed to control and optimize the heat pump operation to achieve higher self-consumption of PV electricity. This work presents the evaluation results of a smart-grid-ready controlled PV-HP-battery system in a single-family household in Germany, using high-resolution 1-minute field measurement data. Within the 12-month evaluation period, a self-consumption of 43% was determined. The solar fraction of the HP amounts to 36%, enabled in part by higher set temperatures for space heating and domestic hot water production. Accordingly, the SPF decreases by 4.0% in the space heating and by 5.7% in the domestic hot water mode. The combined seasonal performance factor for the heat pump system increases from 4.2 to 6.7 when only considering the electricity taken from the grid and disregarding the locally generated electricity supplied from the photovoltaic and battery units.
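For reference, the two headline metrics can be computed from synchronized power profiles roughly as below; the 1-minute profiles here are synthetic placeholders, and the sketch deliberately ignores the battery and the grid feed-in accounting used in the actual evaluation.

```python
import numpy as np

# Synthetic one-day, 1-minute power profiles in kW (illustrative only)
rng = np.random.default_rng(0)
pv = np.clip(rng.normal(2.0, 1.0, 1440), 0.0, None)       # PV generation
load = np.clip(rng.normal(1.0, 0.5, 1440), 0.05, None)    # household + HP load

direct_use = np.minimum(pv, load)   # PV power consumed on site each minute
self_consumption = direct_use.sum() / pv.sum()   # share of PV used on site
solar_fraction = direct_use.sum() / load.sum()   # share of load covered by PV
```

Including a battery would shift surplus PV energy into later intervals and raise both ratios, which is precisely the effect the smart control strategy exploits.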
Synthesizing voice with the help of machine learning techniques has made rapid progress over recent years [1]. Given the current increase in using conferencing tools for online teaching, we question just how easy (i.e. needed data, hardware, skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and align the identified as well as our own optimization techniques in the context of two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on avg. only 37 percent can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to a disbelief by default.
Background: This paper presents a novel approach for a hand prosthesis consisting of a flexible, anthropomorphic, 3D-printed replacement hand combined with a commercially available motorized orthosis that allows gripping.
Methods: A 3D light scanner was used to produce a personalized replacement hand. The wrist of the replacement hand was printed from rigid material; the rest of the hand was printed from flexible material. A standard arm liner was used to connect the user's arm stump to the replacement hand. With computer-aided design, two different concepts were developed for the scanned hand model: in the first concept, the replacement hand was attached to the arm liner with a screw. The second concept involved attachment via a commercially available fastening system; furthermore, a skeleton was designed that was located within the flexible part of the replacement hand.
Results: 3D multi-material printing of the two different hands was unproblematic and inexpensive. The printed hands weighed approximately the same as the real hand. When testing the replacement hands with the orthosis, convincing everyday functionality could be demonstrated. For example, it was possible to grip and lift a 1-L water bottle. In addition, a pen could be held, making writing possible.
Conclusions: This first proof-of-concept study encourages further testing with users.
Total Cost of Ownership (TCO) is a key tool for gaining a complete understanding of the costs associated with an investment, as it covers not only the initial acquisition costs but also the long-term costs related to operation, maintenance, depreciation, and other factors. In the context of the cement industry, TCO is especially important due to the complexity of the production processes and the wide variety of components and machinery involved.
For this reason, a TCO analysis for the cement industry has been conducted in this study, with the objective of showing the different components of the cost of production. This analysis allows readers to gain knowledge about these costs and enables industrial decision-makers to make informed choices on the adoption of technologies and practices that reduce costs in the long run and improve operational efficiency.
In particular, this study seeks to give visibility to technologies and practices that enable the reduction of carbon emissions in cement production, thus contributing to the sustainability of the industry and the protection of the environment. By being at the forefront of sustainability issues, the cement industry can contribute to the adoption of environmentally friendly technologies and enable the development of people and industry.
The Oxyfuel technology has been selected as a carbon capture solution for the cement industry due to its practical applicability, low costs, and straightforward adaptation of existing non-capture processes. The adoption of this technology allows for a significant reduction in CO2 emissions, which is a crucial factor in achieving sustainability in the cement manufacturing process.
Carbon capture and storage technologies represent a high investment. Although they increase the cost of production, Oxyfuel technology is among the most economically viable options, being the cheapest technology per unit of CO2 captured according to the comparison. Moreover, this cost increase comes with a technical advantage: the carbon capture efficiency of this technology reaches 90%. This level of efficiency leads to a decrease in taxes on CO2 emissions, helping to make the cement manufacturing process sustainable.
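The cost structure described above can be sketched as a simple discounted TCO comparison. All figures below are hypothetical placeholders (not values from this study), and the flat CO2 tax term is a simplification of real emission-trading schemes.

```python
def total_cost_of_ownership(capex, annual_opex, annual_maintenance,
                            annual_co2_t, co2_tax_per_t, years,
                            discount_rate=0.05):
    """Discounted TCO over the plant lifetime, including CO2 taxes."""
    tco = capex
    for year in range(1, years + 1):
        annual = annual_opex + annual_maintenance + annual_co2_t * co2_tax_per_t
        tco += annual / (1 + discount_rate) ** year
    return tco

# Hypothetical comparison: conventional line vs. Oxyfuel retrofit that
# captures 90% of the CO2 but raises capex and operating cost.
base = total_cost_of_ownership(100e6, 40e6, 5e6, 800_000, 80, 20)
oxyfuel = total_cost_of_ownership(160e6, 46e6, 6e6, 80_000, 80, 20)
```

With these placeholder numbers the avoided emission taxes outweigh the extra capital and operating cost over the 20-year horizon, which is the mechanism the study points to; the real break-even depends entirely on the actual tax level and plant data.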
Schluckspecht project
(2022)
This work provides a series of methane adsorption isotherms and breakthrough curves on one 5A zeolite and one activated carbon. Breakthrough curves of CH4 were obtained from dynamic column measurements at different temperature and pressure conditions for concentrations of 4.4–17.3 mol% in H2/CH4 mixtures. A simple model was developed to simulate the curves using measured and calculated data inputs. The results show that the model predictions agree very well with the experiments.
The separation of nitrogen and methane from hydrogen-rich mixtures is systematically investigated on a recently developed binder-free zeolite 5A. For this adsorbent, the present work provides a series of experimental data on adsorption isotherms and breakthrough curves of nitrogen and methane, as well as their mixtures in hydrogen. Isotherms were measured at temperatures of 283–313 K and pressures of up to 1.0 MPa. Breakthrough curves of CH4, N2, and CH4/N2 in H2 were obtained at temperatures of 300–305 K and pressures ranging from 0.1 to 6.05 MPa with different feed concentrations. An LDF-based model was developed to predict breakthrough curves using measured and calculated data as inputs. The number of parameters and the use of correlations were restricted to focus on the importance of measured values. For the given assumptions, the results show that the model predictions agree satisfactorily with the experiments under the different operating conditions applied.
Regarding the importance of adsorptive removal of carbon monoxide from hydrogen-rich mixtures for novel applications (e.g. fuel cells), this work provides a series of experimental data on adsorption isotherms and breakthrough curves of carbon monoxide. Three recently developed 5A zeolites and one commercial activated carbon were used as adsorbents. Isotherms were measured gravimetrically at temperatures of 278–313 K and pressures up to 0.85 MPa. Breakthrough curves of CO were obtained from dynamic column measurements at temperatures of 298–301 K, pressures ranging from 0.1 MPa to ca. 6 MPa, and concentrations of CO in H2/CO mixtures of 5–17.5 mol%. A simple mathematical model was developed to simulate breakthrough curves on adsorbent beds using measured and calculated data as inputs. The number of parameters and the use of correlations to evaluate them were restricted in order to focus on the importance of measured values. For the given assumptions and simplifications, the results show that the model predictions agree satisfactorily with the experimental data at the different operating conditions applied.
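The type of breakthrough model referred to in the abstracts above can be sketched as an isothermal plug-flow column with a linear-driving-force (LDF) uptake term and a Langmuir isotherm, discretized with a simple upwind scheme. All parameters below are hypothetical placeholders, not the measured or fitted values from this work.

```python
import numpy as np

# Hypothetical column and adsorbent parameters (illustrative only)
n_cells = 50              # axial discretization of the bed
u = 0.1                   # interstitial gas velocity [m/s]
L = 0.2                   # bed length [m]
dz = L / n_cells
eps = 0.4                 # bed voidage
rho_b = 700.0             # adsorbent bulk density [kg/m^3]
k_ldf = 0.005             # LDF mass-transfer coefficient [1/s]
q_max, b = 0.25, 5.0      # Langmuir parameters [mol/kg], [m^3/mol]
c_feed = 1.0              # feed concentration [mol/m^3]

def q_eq(c):
    """Langmuir equilibrium loading [mol/kg]."""
    return q_max * b * c / (1.0 + b * c)

c = np.zeros(n_cells)     # gas-phase concentration per cell
q = np.zeros(n_cells)     # adsorbed loading per cell
dt, t_end = 0.02, 1200.0
outlet = []
for _ in range(int(t_end / dt)):
    dq = k_ldf * (q_eq(c) - q)                     # LDF uptake rate
    c_in = np.concatenate(([c_feed], c[:-1]))      # upwind advection
    dc = u * (c_in - c) / dz - rho_b * (1 - eps) / eps * dq
    c = np.maximum(c + dt * dc, 0.0)
    q += dt * dq
    outlet.append(c[-1] / c_feed)                  # breakthrough curve
```

The recorded outlet history reproduces the qualitative S-shaped breakthrough behavior: near-zero outlet concentration before the mass-transfer zone reaches the column end, then a rise toward the feed concentration.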
As a basis for the evaluation of hydrogen storage by physisorption, adsorption isotherms of H2 were experimentally determined for several porous materials at 77 K and 298 K at pressures up to 15 MPa. Activated carbons and MOFs were studied as the most promising materials for this purpose. A notable focus was placed on how to determine whether a material is feasible for hydrogen storage or not, covering an assessment method and the pitfalls and problems of determining viability. For a quantitative evaluation of the feasibility of sorptive hydrogen storage in a general analysis, it is suggested to compare the stored amount in a theoretical tank filled with adsorbents to the amount of hydrogen stored in the same tank without adsorbents. According to our results, an "ideal" sorbent for hydrogen storage at 77 K is calculated to exhibit a specific surface area of >2580 m2 g−1 and a micropore volume of >1.58 cm3 g−1.
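The suggested feasibility comparison can be sketched as follows; the packing density, void fraction, gas density, and excess uptake are illustrative inputs (not measured values from this work), since the real evaluation requires real-gas densities and measured isotherms.

```python
def stored_with_adsorbent(v_tank, rho_packing, excess_uptake,
                          void_fraction, rho_gas):
    """Total H2 inventory [kg] of a packed tank:
    excess adsorbed amount plus compressed gas in the void space."""
    m_sorbent = rho_packing * v_tank
    return m_sorbent * excess_uptake + void_fraction * v_tank * rho_gas

def stored_empty(v_tank, rho_gas):
    """H2 inventory [kg] of the same tank without adsorbent."""
    return v_tank * rho_gas

# Hypothetical example: 0.1 m^3 tank, 500 kg/m^3 packing, 5 wt-% excess
# uptake, 50% void fraction, and an assumed gas density of 30 kg/m^3.
with_ads = stored_with_adsorbent(0.1, 500.0, 0.05, 0.5, 30.0)
without = stored_empty(0.1, 30.0)
# The adsorbent is only worthwhile if with_ads exceeds without.
```

This is exactly the comparison proposed above: the sorbent pays off only when the adsorbed amount more than compensates for the gas volume displaced by the solid.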
As part of a master's thesis, an existing system-on-chip design that processes incoming ECG data signals was extended so that it can be fully controlled and read out via the standardized SPI bus.
The aim of this paper is to identify indicators at country level that could prove useful in improving the effectiveness of fraud detection in European Structural and Investment Funds. We analyse data for 454 funds, belonging to the period 2014-2020, from the 28 countries that were members of the European Union in 2014. Explanatory results suggest the convenience of tracking funds, especially in countries with higher GDP and higher transparency levels, and the lesser relevance of the number of irregularities for countries with higher GDP and those receiving larger funds. Fraud and fraud detection rates in individual funds vary significantly across states. Federal states, such as the Federal Republic of Germany, are comparatively successful in detecting fraud in EU funds.
In this TDP we describe a new tool created for testing the strategy layer of our soccer playing agents. It is a complete 2D simulator that simulates the games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before due to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a measure to run simulations of game series much faster than with the 3D simulator. This way, the impact of different play strategies could be evaluated much faster than before.
The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI’s trustworthiness, fairness, interpretability, and accountability. In order to foster trust in AI, it is important to consider the potential of interactive visualization, and how such visualizations help build trust in AI systems. This manifesto discusses the relevance of interactive visualizations and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
The objective of this thesis is the conceptual design of a battery management system for the first prototype of the UWC (University of the Western Cape) Modular Battery System. The battery system is a lithium-ion battery intended for use in renewable energy systems and in niche electric vehicles such as golf carts.
The concept introduced in this thesis comprises parameter monitoring and safety management, with its main focus on accurate state of charge estimation.
Another battery system that has already been implemented is used as a base for the parameter monitoring and the safety management of the new battery management system. In contrast, the concept for the state of charge estimation had to be developed completely from scratch.
Different methods for state of charge estimation based on the measured voltage, current, and temperature are discussed and evaluated, and the chosen method is conceived in this thesis. The method used for state of charge estimation differs depending on whether the battery is active or inactive: during charge and discharge, Coulomb counting is used, and when the cell is inactive, voltage versus state of charge lookup tables are used to update the estimate.
To obtain an accurate estimate even when the cell is inactive only for a short time, a model of the voltage relaxation is used to predict the voltage the cells will reach at equilibrium. This allows the algorithm to frequently reset the state of charge estimated by Coulomb counting, which tends to accumulate error over time.
To evaluate the accuracy of the voltage prediction, cell tests were executed in which the voltage relaxation was sampled. The recursive least squares method for predicting the end voltage was tested with a MATLAB programme. With the help of the voltage versus state of charge lookup tables, it was possible to determine the state of charge accuracy from the accuracy of the voltage prediction.
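The two-mode estimation scheme described above can be sketched as follows. The cell capacity and the voltage-versus-state-of-charge table are hypothetical placeholders, not the values identified in the thesis, and the voltage-relaxation prediction step is represented simply by passing a rested (equilibrium) voltage to the lookup.

```python
import numpy as np

CAPACITY_AH = 2.5  # hypothetical cell capacity

# Hypothetical open-circuit voltage (V) vs. state of charge table
OCV_V   = np.array([3.0, 3.3, 3.5, 3.6, 3.7, 3.8, 3.95, 4.2])
OCV_SOC = np.array([0.0, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])

def soc_from_ocv(v_rest):
    """Reset the estimate from a rested/predicted equilibrium voltage."""
    return float(np.interp(v_rest, OCV_V, OCV_SOC))

def coulomb_count(soc, current_a, dt_s):
    """Integrate current while the battery is active (positive = charge)."""
    soc += current_a * dt_s / (CAPACITY_AH * 3600.0)
    return min(max(soc, 0.0), 1.0)

# Active phase: discharge at 1 A for 900 s starting from 80% SoC
soc = 0.8
for _ in range(900):
    soc = coulomb_count(soc, -1.0, 1.0)

# Inactive phase: reset the drifting Coulomb-counting estimate from the
# (predicted) relaxed voltage, here assumed to be 3.7 V
soc = soc_from_ocv(3.7)
```

The periodic OCV-based reset is what bounds the otherwise growing integration error of the Coulomb counter, as described above.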
In this paper, the J-integral is derived for temperature-dependent elastic–plastic materials described by incremental plasticity. It is implemented using the equivalent domain integral method for the assessment of three-dimensional cracks based on results of finite-element calculations. The J-integral considers contributions from inhomogeneous temperature fields and temperature-dependent elastic and plastic material properties, as well as from gradients in the plastic strains and the hardening variables. Different energy densities are considered, the Helmholtz free energy and the stress-working density, providing a physical meaning of the J-integral as a fracture criterion for crack growth. Results obtained for a plate with two different crack configurations, each loaded by a cool-down thermal shock, show domain independence of the incremental J-integral for different energy densities, even for high temperature gradients and significant temperature dependence of the yield stress and the hardening exponent in the presence of large-scale yielding. Hence, the derived J-integral is an appropriate parameter for the assessment of cracks in thermomechanically loaded components.
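As a rough illustration of the domain-integral form referred to above (sign conventions and the exact correction terms vary between formulations, so this generic sketch is not the paper's exact expression):

```latex
J \;=\; \int_{V} \left( \sigma_{ij}\,\frac{\partial u_j}{\partial x_1}
        \;-\; W\,\delta_{1i} \right) \frac{\partial q}{\partial x_i}\,\mathrm{d}V
  \;-\; \int_{V} \left( \frac{\partial W}{\partial x_1} \right)_{\mathrm{expl}}
        q \,\mathrm{d}V ,
```

where $W$ is the chosen energy density (Helmholtz free energy or stress-working density), $q$ is a weighting function equal to 1 at the crack front and 0 on the outer boundary of the integration domain, and the explicit derivative term collects the contributions from the inhomogeneous temperature field, temperature-dependent material properties, plastic-strain gradients, and hardening variables; for three-dimensional cracks the integral is normalized by the virtually advanced crack-front area.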