One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependence on the weather. Demand-side energy flexibility contributes considerably to mitigating the supply/demand imbalances that result from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises offer a high demand-side flexibility potential from their production processes and on-site energy assets. Methods are therefore needed that enable this energy flexibility and ensure the active participation of such enterprises in electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise on the material, media, and energy flow levels, together with their operative constraints. A case study of a plastics manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared so that the behavior of the model can be assessed. The results show that the model provides an optimal schedule for the manufacturing system according to the variations in electricity prices and ensures optimal control of the utilities and energy systems needed for production.
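The core idea of price-driven scheduling can be illustrated with a minimal sketch. This is not the paper's model library or its optimal-control formulation; it is a hypothetical greedy example that simply places a fixed number of production hours into the cheapest slots of a variable-price day.

```python
# Toy illustration (not the paper's model library): place a fixed number of
# production hours into the cheapest slots of a variable-price day.
def schedule_production(prices, hours_needed):
    """Return the hour indices with the lowest electricity prices, sorted."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

def energy_cost(prices, schedule, load_kw=100.0):
    """Total cost of running a constant load during the scheduled hours."""
    return sum(prices[h] * load_kw for h in schedule)

prices = [0.30, 0.28, 0.25, 0.22, 0.20, 0.24, 0.31, 0.35]  # EUR/kWh, hypothetical
plan = schedule_production(prices, hours_needed=3)
print(plan)                                 # the three cheapest hours
print(round(energy_cost(prices, plan), 2))  # cost of the shifted load
```

A real formulation would add the operative constraints the paper mentions (material and media flows, minimum run times, storage), which turn this greedy pick into a mixed-integer optimal-control problem.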
Solar energy plays a central role in the energy transition. Clouds cause large local fluctuations in the output of photovoltaic systems, which is a major problem for energy systems such as microgrids. For the optimal design of a power system, this work analyzed this variability using a spatially distributed sensor network at Stuttgart Airport. It was shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems from irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach yields a more realistic representation of aggregated PV power by taking spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
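The spatial-smoothing effect can be demonstrated numerically: when several sensors share a common trend but have independent local fluctuations, the aggregate time series is less variable than any single one. The numbers below are synthetic, not the Stuttgart Airport data.

```python
import random
import statistics

# Synthetic illustration of spatial smoothing (hypothetical data, not the
# paper's measurements): independent local fluctuations partially cancel
# when irradiance series from distributed sensors are averaged.
random.seed(1)
n_sensors, n_steps = 10, 500
base = [600 + 300 * (t % 50) / 50 for t in range(n_steps)]  # shared diurnal trend, W/m^2
series = [[b + random.gauss(0, 150) for b in base] for _ in range(n_sensors)]

aggregate = [sum(s[t] for s in series) / n_sensors for t in range(n_steps)]

mean_single_std = statistics.mean(statistics.pstdev(s) for s in series)
aggregate_std = statistics.pstdev(aggregate)
print(aggregate_std < mean_single_std)  # True: only the common trend remains
```

Only the uncorrelated part of the variability shrinks (roughly by the square root of the number of sites); the common weather trend is untouched, which matches the "partial" reduction reported above.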
In recent years, lightweight cryptography has received a lot of attention, and many primitives suitable for resource-restricted hardware platforms have been proposed. In this paper, we present a cryptanalysis of the new stream cipher A2U2 presented at IEEE RFID 2011 [9], which has a key length of 56 bits. We start by disproving and then repairing an extremely efficient attack presented by Chai et al. [8], showing that A2U2 can be broken in less than a second in the chosen-plaintext case. We then turn our attention to the more challenging known-plaintext case and propose a number of attacks. A guess-and-determine approach combined with algebraic cryptanalysis yields an attack that requires about 2^49 internal guesses. We also show how to determine the 5-bit counter key and how to reconstruct the 56-bit key in about 2^38 steps if the attacker can freely choose the IV. Furthermore, we investigate the possibility of exploiting the knowledge of a "noisy keystream" by solving a Max-PoSSo problem. We conclude that the cipher needs to be repaired and point out a number of simple measures that would prevent the above attacks.
With the expansion of IoT devices into many aspects of our lives, the security of such systems has become an important challenge. Unlike conventional computer systems, any IoT security solution must consider the constraints of these systems, such as limits on computational capability, memory, connectivity, and power consumption. Physical Unclonable Functions (PUFs), with their special characteristics, were introduced to satisfy these security needs while respecting the mentioned constraints. They exploit the uncontrollable yet reproducible variations of the underlying components for security applications such as identification, authentication, and communication security. Since IoT devices are typically low-cost, it is important to reuse existing elements of their hardware (for instance, sensors or ADCs) instead of adding extra cost for dedicated PUF hardware. Micro-electromechanical system (MEMS) devices are widely used in IoT systems as sensors and actuators. In this thesis, a comprehensive study of the potential application of MEMS devices as PUF primitives is provided. A MEMS PUF leverages the uncontrollable variations in the parameters of MEMS elements to derive secure keys for cryptographic applications. Experimental and simulation results show that our proposed MEMS PUFs are capable of generating enough entropy for complex key generation, while their responses show low fluctuation under different environmental conditions.
Keeping in mind that PUF responses are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In the second part of this thesis, we elaborate on different key generation schemes and their advantages and drawbacks. We propose the PUF output positioning (POP) and integer linear programming (ILP) methods, which are novel methods for grouping the PUF outputs in order to maximize the extracted entropy. To implement these methods, the key enrollment and key generation algorithms are presented. The proposed methods are then evaluated by applying them to the responses of the MEMS PUF, where it is shown in practice that they outperform other existing PUF key generation methods.
The final part of this thesis is dedicated to the application of the MEMS PUF as a security solution for IoT systems. We select the mutual authentication of IoT devices and their backend system and propose two lightweight authentication protocols based on MEMS PUFs. The presented protocols undergo a comprehensive security analysis to show their suitability for use in IoT systems. As a result, the output of this thesis is a lightweight security solution based on MEMS PUFs that introduces very little hardware-cost overhead.
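The basic problem of noisy PUF responses can be sketched with a much simpler stabilization scheme than the thesis's POP/ILP methods: majority voting over repeated readouts, followed by hashing the stabilized bits into a fixed-length key. This is only an illustration of the problem, not the proposed solution.

```python
import hashlib

# Simplified sketch (not the thesis's POP/ILP schemes): stabilize noisy
# binary PUF responses by majority voting over repeated readouts, then
# hash the stabilized bits into a fixed-length key.
def majority_vote(readouts):
    """readouts: list of equal-length 0/1 lists measured from the same PUF."""
    n = len(readouts)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*readouts)]

def derive_key(stable_bits):
    """Compress the stabilized bits into a 256-bit key (hex string)."""
    return hashlib.sha256(bytes(stable_bits)).hexdigest()

readouts = [
    [1, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],  # one bit flipped by noise
    [1, 0, 1, 1, 0, 0, 1, 0],
]
stable = majority_vote(readouts)
print(stable)                  # the flipped bit is voted out
print(derive_key(stable)[:16]) # prefix of the derived key
```

Schemes like POP go further by grouping outputs so that more entropy survives the stabilization step; majority voting alone discards all sub-bit reliability information.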
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles is parcel delivery, where autonomous cars can massively reduce delivery effort and time by actively supporting the courier. One key component is, of course, the autonomous vehicle itself. Beyond the vehicle, however, a flexible and secure communication architecture is also a crucial component affecting the overall performance of such a system, since it must allow continuous interaction between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between the different system entities while keeping that communication secure. It uses different technologies, such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN), to achieve its goals.
The desire to connect more and more devices and to make them more intelligent and reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber attacks, since they are interconnected, spatially distributed, and operational for extended periods of time. One of the most important security requirements in many industrial IoT applications is device authentication. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, in which challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. It is also shown that the protocol incurs smaller computational, communication, storage, and hardware overheads than similar works.
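The challenge-response idea behind such protocols can be sketched as follows. This is a hypothetical toy flow, not the paper's protocol: the device's PUF is emulated by a keyed hash purely for demonstration, and the server holds challenge-response pairs (CRPs) recorded during a secure enrollment phase.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of CRP-based mutual authentication (not the paper's
# exact protocol). A real device would answer from its physical PUF; here
# the PUF is emulated by a keyed hash for demonstration only.
DEVICE_SECRET = b"emulated-puf"  # stands in for the physical PUF

def puf_response(challenge: bytes) -> bytes:
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# --- enrollment: the server records CRPs in a secure phase ---
crp_db = {}
for _ in range(4):
    c = secrets.token_bytes(16)
    crp_db[c] = puf_response(c)

# --- authentication: server challenges, device answers, both derive a key ---
challenge = next(iter(crp_db))
device_answer = puf_response(challenge)
server_ok = hmac.compare_digest(device_answer, crp_db[challenge])
session_key = hashlib.sha256(challenge + device_answer).digest()
print(server_ok)  # True only if the device holds the genuine PUF
```

The paper's protocol additionally authenticates the server to the device and defends against replay and modeling attacks, which this sketch does not attempt.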
In recent years, Physical Unclonable Functions (PUFs) have gained significant attention in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach for designing PUFs is to exploit the intrinsic features of sensors and actuators, such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device: the Aluminum Nitride (AlN) piezoelectric cantilever. We show that electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, vary as a result of uncontrollable manufacturing process variations. These variations, together with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design that magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, Monte Carlo simulation results are provided; they confirm the suitability of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF serves as a security anchor for PUF-enabled device authentication as well as communication encryption.
Physical unclonable functions (PUFs) are increasingly attracting attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic, and it can act as a security anchor for authentication and cryptographic applications. Keeping in mind that PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping the PUF outputs in order to maximize the extracted entropy. To achieve this, an offset is introduced as helper data; it relaxes the constraints on the grouping of PUF outputs, extracting more entropy while reducing the number of error bits in the secret key. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP maximizes the achieved entropy while respecting the constraints imposed to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is shown in practice that the proposed method outperforms other existing group-based PUF key generation methods.
With recent developments in the Ukrainian-Russian conflict, many are discussing Germany's dependency on fossil fuel imports in its energy system and how the country can proceed with reducing that dependency. Among its wide-ranging consumption sectors, the electricity sector is the natural place to start. Recent reports show that the German federal government already intends to achieve fully renewable electricity by 2035 while exploiting all possible clean power options. This was announced in the federal government's climate emergency program (the "Easter Package") in early 2022. The aim of this package is to initiate a rapid transition and decarbonization of the electricity sector. The Easter Package projects an enormous growth of renewable energies to a completely new level, with a renewable share of at least 80% of gross electricity consumption, through the extensive and broad deployment of different generation technologies at various scales. This paper discusses this ambitious plan, offers some insights into this large and rapidly approaching step, and shows how much Germany will need in order to achieve this milestone towards a fully green electricity supply. Different scenarios and renewable shares are investigated in order to elaborate on the advanced climate-neutrality goal for the electricity sector by 2035. The results point out some promising aspects of achieving 100% renewable power, with massive investments in both generation and storage technologies.
To deal with frequent power outages in developing countries, people turn to solutions such as uninterruptible power supplies (UPS), which store electric energy during normal operating hours and use it to meet energy needs during rolling blackouts. Locally produced UPSs of poor power quality are widely available in the marketplace, and they have a negative impact on grid power quality. The charging and discharging of the batteries in these UPSs generate significant power losses in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers, provided by the distribution company. It does not require batteries, so there is no power loss or harmonic distortion from charging and discharging. Through load-flow and harmonic analyses of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can help fix power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform for conceiving and implementing new data-driven Demand Side Management (DSM) mechanisms. IoT enablement allows data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed "Soft-UPS" allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and enables a data-driven approach to a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the "WiMODino" using LoRaWAN with a "Lite Gateway" and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
Following their success in visual recognition tasks, Vision Transformers (ViTs) are increasingly being employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether this improved adversarial robustness extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Our analysis indicates that, contrary to what is advocated in ViT image classification works, these models are highly susceptible to adversarial attacks. We attempt an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, the results on the other networks are less promising. Interestingly, we find that the design choices in NAFNet and the Baselines, which were based on i.i.d. performance rather than robust generalization, seem to be at odds with model robustness.
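The PGD attack used for the evaluation can be illustrated on a toy scalar "model" (the paper attacks deep restoration networks; this is only the core iteration, not its setup): take signed gradient steps that increase the loss, and project the perturbed input back into an L-infinity ball of radius epsilon around the clean input.

```python
# Minimal PGD sketch on a toy scalar model f(x) = (w*x - y)^2 (hypothetical
# example, not the paper's networks). We *maximize* the loss by signed
# gradient ascent, projecting back into the eps-ball around the clean input.
def pgd_attack(x0, w, y, eps=0.3, alpha=0.1, steps=10):
    x = x0
    for _ in range(steps):
        grad = 2 * (w * x - y) * w           # d/dx of the squared error
        x = x + alpha * (1 if grad > 0 else -1)  # signed ascent step
        x = max(x0 - eps, min(x0 + eps, x))      # project onto the eps-ball
    return x

x0, w, y = 1.0, 2.0, 2.0                     # clean input: loss is 0 at x0
x_adv = pgd_attack(x0, w, y)
loss = (w * x_adv - y) ** 2
print(round(x_adv, 2), round(loss, 2))       # perturbation saturates at the ball's edge
```

For images, `x` is a tensor, the gradient comes from backpropagation, and the projection clips every pixel independently; the structure of the loop is identical.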
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts invoked during downsampling, and diverse cures have been proposed that help improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed far less. This matters because upsampling and downsampling face fundamentally different challenges. While during downsampling aliasing and artifacts can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling; blurring is therefore not an option, and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that larger kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results.
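Why the upsampling kernel size matters can be seen in one dimension. In a transposed convolution with stride 2, a kernel shorter than the stride leaves output positions that depend on no input at all, while a larger kernel lets every output position draw on overlapping context. This is a generic sketch of the operation, not the paper's specific block design.

```python
# 1-D transposed convolution (stride 2) illustrating the role of kernel size
# in upsampling. Generic sketch, not the paper's architecture.
def transposed_conv1d(x, kernel, stride=2):
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):
        for j, k in enumerate(kernel):
            out[i * stride + j] += v * k     # each input "stamps" the kernel
    return out

x = [1.0, 2.0, 3.0]
print(transposed_conv1d(x, [1.0]))            # kernel 1: zeros between samples
print(transposed_conv1d(x, [0.5, 1.0, 0.5]))  # kernel 3: overlaps interpolate
```

With kernel size 1 the output is `[1, 0, 2, 0, 3]`: the inserted positions carry no information, a classic source of checkerboard-style artifacts. With kernel size 3 the overlapping stamps produce a smooth interpolation, and even larger kernels would blend in still more context per output position.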
With the surge in global data consumption and the proliferation of the Internet of Things (IoT), remote monitoring and control is becoming increasingly popular, with applications ranging from emergency response in remote regions to the monitoring of environmental parameters. Mesh networks are being employed to alleviate a number of issues associated with single-hop communication, such as limited coverage, reliability, and range, as well as high energy consumption. Low-power Wireless Personal Area Networks (LoWPANs) are being used to help realize and spread the applicability of the IoT. In this paper, we present the design and test of IEEE 802.15.4-compliant smart IoT nodes with multi-hop routing. We first discuss the features of the software stack and the hardware design choices that resulted in high RF output power, and then present field test results for different baseline network topologies in both rural and urban settings to demonstrate the deployability and scalability of our solution.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding to a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demand lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least square method is developed to describe the relationship between random variations in capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
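The ordinary-least-squares idea described above can be sketched in miniature: fit a linear model to the measured outputs, treat the fitted trend as the systematic component, and keep the residuals as the device-specific random part. This is a generic 1-D OLS illustration with made-up numbers, not the paper's circuit model.

```python
# Sketch of the OLS idea (hypothetical data, not the paper's circuit model):
# fit a linear trend, then treat the residuals as the random, device-specific
# component once the systematic variation is removed.
def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0, 1, 2, 3, 4]
ys = [0.1, 1.9, 4.1, 5.9, 8.1]       # ~2x trend plus small "random" deviations
slope, intercept = ols_fit(xs, ys)
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
print(round(slope, 3), [round(r, 2) for r in residuals])
```

In the PUF context the residuals are the useful part: what remains after subtracting the systematic component is the entropy attributable to random capacitor variations.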
BACKGROUND
Various neutral and alkaline peptidases are commercially available for protein hydrolysis under neutral to alkaline conditions. However, the hydrolysis of proteins under acidic conditions using fungal aspartic peptidases (FAPs) has not been investigated in depth so far. The aim of this study was therefore to purify a FAP from the commercial enzyme preparation ROHALASE® BXL, determine its biochemical characteristics, and investigate its application for the hydrolysis of food and animal feed proteins under acidic conditions.
RESULTS
A Trichoderma reesei-derived FAP with an apparent molecular mass of 45.8 kDa (by sodium dodecyl sulfate–polyacrylamide gel electrophoresis, SDS-PAGE) was purified 13.8-fold from ROHALASE® BXL with a yield of 37%. The FAP was identified as an aspartate protease (UniProt ID: G0R8T0) by inhibition and nano-LC-ESI-MS/MS studies. The FAP showed the highest activity at 50°C and pH 4.0. Monovalent cations, organic solvents, and reducing agents were tolerated well. The FAP underwent apparent competitive product inhibition by soy protein hydrolysate and whey protein hydrolysate, with apparent Ki values of 1.75 and 30.2 mg mL−1, respectively. The FAP showed promising results in food (soy protein isolate and whey protein isolate) and animal feed protein hydrolyses; for the latter, an increase in the soluble protein content of 109% was noted after 30 min.
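The apparent competitive product inhibition reported above follows the standard Michaelis-Menten rate law for a competitive inhibitor, here with the hydrolysate acting as the product inhibitor P:

```latex
v = \frac{V_{\max}\,[S]}{K_m\left(1 + \frac{[P]}{K_i}\right) + [S]}
```

At a product concentration equal to K_i, the apparent K_m doubles, which halves the rate at sub-saturating substrate concentrations; the much larger K_i of the whey hydrolysate (30.2 vs. 1.75 mg mL−1) thus corresponds to a far weaker inhibition.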
CONCLUSION
Our results demonstrate the applicability of fungal aspartic endopeptidases in the food and animal feed industry. Efficient protein hydrolysis of industrially relevant substrates such as acidic whey or animal feed proteins could be conducted by applying fungal aspartic peptidases. © 2022 Society of Chemical Industry.
A systematic toxicological analysis procedure using high-performance thin-layer chromatography in combination with fibre-optical scanning densitometry for the identification of drugs in biological samples is presented. Two examples illustrate the practicability of the technique: first, the identification of a multiple intake of analgesics (codeine, propyphenazone, tramadol, flupirtine, and lidocaine), and second, the detection of the sedative diphenhydramine. In both cases, authentic urine specimens were used. The identifications were carried out by automatic measurement and computer-based comparison of in situ UV spectra with a compiled library of reference spectra using the cross-correlation function. The technique allowed the parallel recording of chromatograms and in situ UV spectra in the range of 197–612 nm. Unlike in conventional densitometry, no dependence of the UV spectra on substance concentration was observed in the range of 250–1000 ng/spot.
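The library comparison described above scores how well a measured spectrum matches a reference by a correlation coefficient. A minimal sketch of that scoring, with short hypothetical spectra rather than real 197–612 nm data:

```python
import math

# Sketch of spectral library matching (hypothetical spectra, not real UV
# data): score a measured spectrum against a reference by the normalized
# correlation coefficient; 1.0 means an identical shape.
def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

measured  = [0.10, 0.40, 0.90, 0.40, 0.10]
reference = [0.20, 0.80, 1.80, 0.80, 0.20]   # same shape, different scale
unrelated = [0.90, 0.40, 0.10, 0.40, 0.90]
print(round(correlation(measured, reference), 3))  # shape match despite scaling
print(correlation(measured, unrelated) < 0)        # inverted shape: no match
```

Because the coefficient is normalized, a spectrum matches its reference regardless of absorbance scale, which is what makes the concentration independence noted above usable for identification.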
This paper describes the Sweaty II adult-size humanoid robot, which aims to qualify for the RoboCup 2018 adult-size humanoid competition. Sweaty finished second in the RoboCup 2017 adult-size league. The main characteristics of Sweaty are described in the 2017 Team Description Paper; the improvements that have been made, or are planned, for RoboCup 2018 are described here.
The soccer simulation league is one of the founding leagues of RoboCup. In this paper we discuss its past, present, and planned future achievements and changes. We also summarize the connections and inter-league achievements of this league and provide an overview of the community contributions that have made it successful.
Soiling is an important issue in the renewable energy sector, since it can result in significant yield losses, especially in regions with higher pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling affects the efficiency and productivity of the modules and how soiling can be measured and predicted using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. It was observed that the real soiling loss values deviated from the expected values for some projects in southern Spain; thus, the main goal of this work is to develop machine learning models that predict soiling more accurately. The developed models have a low mean squared error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a powerful application for predicting both the soiling and the number of cleaning cycles.
Passive solar elements for both direct and indirect gains are systems used to maintain a comfortable living environment while saving energy, especially in the building energy retrofit and adaptation process. Sunspaces, thermal mass, and glazing area and orientation were often used in the past to guarantee adequate indoor conditions when mechanical devices were not available. After a period of neglect, they are nowadays again considered appropriate systems for facing environmental issues in the building sector, and both international and national legislation takes into consideration the possibility of including them in building planning tools, also providing economic incentives. Their proper design requires dynamic simulation, which is often difficult and time-consuming to perform. Moreover, the results generally suffer from several uncertainties, so quasi steady-state procedures are often used in everyday practice with good results, although some corrections are still needed. In this paper, a comparative analysis of different solutions for the construction of verandas in an existing building is presented, following a slightly modified and improved version of the procedure provided by Standard EN ISO 13790:2008. The advantages and disadvantages of different configurations, considering thermal insulation, window typology, and mechanical ventilation systems, are discussed, and a general intervention strategy is proposed. The aim is to highlight the possibility of using sunspaces to increase the efficiency of the existing building stock, considering ease of construction and economic viability.
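The quasi steady-state procedure mentioned above hinges on the heat-gain utilisation factor of EN ISO 13790. A minimal sketch of that factor for the monthly method (assuming the standard's published form, with the default parameters a0 = 1 and tau0 = 15 h; the paper's modified procedure may differ):

```python
# Sketch of the EN ISO 13790 monthly gain-utilisation factor:
#   eta_gn = (1 - gamma^a) / (1 - gamma^(a+1)),  gamma = Q_gain / Q_loss,
#   a = a0 + tau/tau0 (a0 = 1, tau0 = 15 h for the monthly method).
# Illustrative only; the paper applies a slightly modified procedure.
def gain_utilisation_factor(q_gain, q_loss, tau_hours, a0=1.0, tau0=15.0):
    gamma = q_gain / q_loss            # gain/loss ratio for the month
    a = a0 + tau_hours / tau0          # building-inertia parameter
    if abs(gamma - 1.0) < 1e-12:       # limit case gamma -> 1
        return a / (a + 1.0)
    return (1.0 - gamma ** a) / (1.0 - gamma ** (a + 1.0))

# A heavier building (larger time constant) uses the same gains more fully:
light = gain_utilisation_factor(q_gain=800.0, q_loss=1000.0, tau_hours=15.0)
heavy = gain_utilisation_factor(q_gain=800.0, q_loss=1000.0, tau_hours=90.0)
print(round(light, 3), round(heavy, 3))
print(heavy > light)  # thermal mass raises the usable share of solar gains
```

This is why thermal mass appears alongside glazing in the list of passive measures: it raises the fraction of the solar gains that actually offsets heating demand rather than causing overheating.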
Femtosecond (fs) time-resolved magneto-optics is applied to investigate laser-excited ultrafast dynamics of one-dimensional nickel gratings on fused silica and silicon substrates for a wide range of periodicities Λ = 400–1500 nm. Multiple surface acoustic modes with frequencies up to a few tens of GHz are generated. Nanoscale acoustic wavelengths Λ/n have been identified as nth spatial harmonics of the Rayleigh surface acoustic wave (SAW) and the surface skimming longitudinal wave (SSLW), with acoustic frequencies and lifetimes in agreement with theoretical calculations. Resonant magnetoelastic excitation of the ferromagnetic resonance (FMR) by the SAW's third spatial harmonic and, most interestingly, fingerprints of a parametric resonance at half the SAW frequency have been observed. Numerical solutions of the Landau–Lifshitz–Gilbert (LLG) equation, magnetoelastically driven by complex polychromatic acoustic fields, quantitatively reproduce all resonances at once. Thus, our results provide a solid experimental and theoretical basis for a quantitative understanding of ultrafast fs-laser-driven magnetoacoustics and for tailoring magnetic-grating-based metasurfaces at the nanoscale.
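For reference, the magnetization dynamics modeled above follow the Landau–Lifshitz–Gilbert equation in its standard form, with the magnetoelastic drive entering through the effective field (the precise form of the magnetoelastic term depends on the material's coupling constants and the strain field of the acoustic modes, which are specific to the paper's calculation):

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t},
\qquad
\mathbf{H}_{\mathrm{eff}} = \mathbf{H}_{\mathrm{ext}}
  + \mathbf{H}_{\mathrm{me}}\!\big(\varepsilon(t)\big) + \ldots
```

Here γ is the gyromagnetic ratio, α the Gilbert damping, M_s the saturation magnetization, and H_me(ε) the strain-induced magnetoelastic field; a polychromatic strain ε(t) in H_eff is what allows both the direct FMR resonance and the parametric response at half the drive frequency to emerge from the same equation.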
Linear acceleration is a key performance determinant and major training component of many sports. Although extensive research about lower limb kinetics and kinematics is available, consistent definitions of distinctive key body positions, the underlying mechanisms and their related movement strategies are lacking. The aim of this ‘Method and Theoretical Perspective’ article is to introduce a conceptual framework which classifies the sagittal plane ‘shin roll’ motion during accelerated sprinting. By emphasising the importance of the shin segment’s orientation in space, four distinctive key positions are presented (‘shin block’, ‘touchdown’, ‘heel lock’ and ‘propulsion pose’), which are linked by a progressive ‘shin roll’ motion during swing-stance transition. The shin’s downward tilt is driven by three different movement strategies (‘shin alignment’, ‘horizontal ankle rocker’ and ‘shin drop’). The tilt’s optimal amount and timing will contribute to a mechanically efficient acceleration via timely staggered proximal-to-distal power output. Empirical data obtained from athletes of different performance levels and sporting backgrounds are required to verify the feasibility of this concept. The framework presented here should facilitate future biomechanical analyses and may enable coaches and practitioners to develop specific training programs and feedback strategies to provide athletes with a more efficient acceleration technique.
The central purpose of this paper is to present a novel framework supporting the specification and implementation of media streaming services using XML and the Java Media Framework (JMF). It provides an integrated service development environment comprising a streaming service model, a service specification language, and several implementation and retrieval tools. Our approach is based on a clear separation between a streaming service specification and its implementation by a distributed JMF application, and it can be used for different streaming paradigms, e.g. push and pull services.
The central purpose of this paper is to present a novel framework supporting the specification, the implementation and retrieval of media streaming services. It provides an integrated service development environment comprising a streaming service model, a service specification language and several implementation and retrieval tools. Our approach is based on a clear separation between a streaming service specification and its implementation by a distributed application, and can be used for different streaming paradigms, e.g. push and pull services.
Purpose
Although start-ups have gained increasing scholarly attention, we lack sufficient understanding of their entrepreneurial strategic posture (ESP) in emerging economies. The purpose of this study is to examine the processes of ESP of new technology venture start-ups (NTVs) in an emerging market context.
Design/methodology/approach
In line with grounded theory guidelines and the inductive research traditions, the authors adopted a qualitative approach involving 42 in-depth semi-structured interviews with Ghanaian NTV entrepreneurs to gain a comprehensive micro-level analysis of the entrepreneurs' strategic posturing. A systematic procedure for data analysis was adopted.
Findings
From the authors' analysis of Ghanaian NTVs, the authors derived a three-stage model to elucidate the nature and process of ESP: Phase I, spotting and exploiting market opportunities; Phase II, identifying initial advantages; and Phase III, ascertaining and responding to change.
Originality/value
The study contributes to advancing research on ESP by explicating the process through which informal ties and networks are utilised by NTVs and NTVs' founders to overcome extreme resource constraints and information vacuums in contexts of institutional voids. The authors depart from past studies in demonstrating how such ties can be harnessed in spotting and exploiting market opportunities by NTVs. On this basis, the paper makes original contributions to ESP theory and practice.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator 3 (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of resource scheduling methods on uplink latency.
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra Reliable Low Latency Communications (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
The next generation of cellular networks is expected to improve reliability, energy efficiency, data rate, capacity and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart dustbins, but there is additional demand for applications with low latency requirements, like industrial automation and driverless cars. Improvements are required in 4G Long Term Evolution (LTE) networks towards the development of next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to the latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator NS3 for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra Reliable Low Latency Communication (URLLC) networks.
Integration of BACnet OPC UA Devices Using a Java OPC UA SDK Server with a BACnet Open Source Library
(2014)
Although short-range wireless communication explicitly targets local and very regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. In particular, the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies supporting Long Range (LoRa™), describes the outdoor setup at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results and discusses the strengths and weaknesses of this technology.
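The dependence of range on the link budget can be made concrete with a simple first-order calculation. The sketch below uses a log-distance path-loss model with illustrative parameters (transmit power, antenna gains, sensitivities and path-loss exponent are assumptions in the style of SX127x-class devices, not the measured values from the campaign described above):

```python
import math

# Hypothetical link-budget range estimate under a log-distance path-loss
# model; all numbers are illustrative assumptions.
def max_range_m(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, sensitivity_dbm,
                freq_hz, path_loss_exponent=2.7, d0=1.0):
    """Maximum range: link budget = Ptx + Gtx + Grx - sensitivity;
    PL(d) = FSPL(d0) + 10*n*log10(d/d0)."""
    c = 299_792_458.0
    link_budget_db = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - sensitivity_dbm
    fspl_d0 = 20 * math.log10(4 * math.pi * d0 * freq_hz / c)
    return d0 * 10 ** ((link_budget_db - fspl_d0) / (10 * path_loss_exponent))

# Robust spreading factor (SF12-like, -137 dBm sensitivity) versus a
# faster, less robust one (SF7-like, -123 dBm) at 868 MHz, +14 dBm TX:
r_sf12 = max_range_m(14, 0, 0, -137, 868e6)
r_sf7 = max_range_m(14, 0, 0, -123, 868e6)
print(f"robust: {r_sf12/1000:.1f} km, fast: {r_sf7/1000:.1f} km")
```

The calculation illustrates the point made above: each extra decibel of link budget gained from a more robust modulation/coding choice translates directly into additional range.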
Ripple: Overview and Outlook
(2015)
Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second-highest market cap after Bitcoin, there are surprisingly no studies which analyze the provisions of Ripple.
In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system.
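The fork condition can be illustrated with a toy calculation (a simplified sketch, not the paper's exact formalism; the 80% quorum and the list sizes are assumptions): with a validation quorum of 80%, two validators can reach quorum on different ledgers only if the overlap of their Unique Node Lists (UNLs) does not exceed the combined 20% slack of both lists.

```python
# Toy fork-possibility check between two validators i and j.
def fork_possible(unl_i, unl_j, quorum=0.8):
    """A fork is possible iff the UNL overlap does not exceed the
    combined slack (1 - quorum) of both lists: overlapping nodes each
    vote only one way, so a small overlap lets both sides reach quorum
    on conflicting ledgers."""
    overlap = len(set(unl_i) & set(unl_j))
    slack = (1 - quorum) * (len(unl_i) + len(unl_j))
    return overlap <= slack

# 100-node UNLs: a 20% overlap still admits a fork, while an overlap
# above 40% of the list size prevents it (for equal-sized lists).
a = list(range(0, 100))
b20 = list(range(80, 180))   # 20 shared nodes
b41 = list(range(59, 159))   # 41 shared nodes
print(fork_possible(a, b20), fork_possible(a, b41))
```

Under these assumptions the condition for fork-freedom is an overlap strictly greater than (1 - quorum) times the combined list sizes, which for equal-sized lists and an 80% quorum is more than 40% overlap.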
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
In this paper we integrate the ideas of network coding and relays into an existing practical network architecture used in a wireless network scenario. Specifically, we use the COPE architecture to test our ideas. Since previous works have focused on the communication aspect at the physical layer level, we attempt to take it one step further by including the MAC layer. Our idea is based on information theoretic concepts developed by Shannon in order to reliably apply network coding to increase the net throughput.
Experimental Investigation of the Air Exchange Effectiveness of Push-Pull Ventilation Devices
(2020)
The increasing installation numbers of ventilation units in residential buildings are driven by legal objectives to improve their energy efficiency. The dimensioning of a ventilation system for nearly zero energy buildings is usually based on the air flow rate desired by the clients or requested by technical regulations. However, this does not necessarily lead to a system actually able to renew the air volume of the living space effectively. In recent years, decentralised systems with an alternating operation mode and fairly good energy efficiencies entered the market, and the following question was raised: “Does this operation mode allow an efficient air renewal?” This question can be answered experimentally by performing a tracer gas analysis. In the presented study, a total of 15 preliminary tests are carried out in a climatic chamber representing a single room equipped with two push-pull devices. The tests include summer, winter and isothermal supply air conditions, since this parameter variation has so far been missing for push-pull devices. Further investigations are dedicated to the effect of thermal convection due to human heat dissipation on the room air flow. Depending on these boundary conditions, the determined air exchange efficiency varies, lagging behind the expected range 0.5 < εa < 1 in almost all cases, indicating insufficient air exchange including short-circuiting. Local air exchange values suggest inhomogeneous air renewal depending on the distance to the indoor apertures as well as on the temperature gradients between indoors and outdoors. The tested measurement set-up is applicable for field measurements.
We present a 3D simulation approach utilising the diffuse interface representation of the phase-field method combined with a heat transfer equation to analyse the thermal conductivity in air-filled aluminium foams with complex cellular structures of different porosity. Algorithmic methods are introduced to create synthetic open-cell foam structures and to compute the thermal conductivity by means of phase-field modelling. A material law for the effective thermal conductivity is derived by determining the appropriate exponent depending on the relative density in the system. The results are compared with the thermal conductivity in massive aluminium and in pure air.
Bud-type carbon nanohorns (CNHs) are composed of carbon and have a closed conical tip at one end protruding from an aggregate structure. By employing a simple oxidation process in a CO2 atmosphere, it is possible to open the CNH tips, which increases their specific surface area fourfold. These tip-opened CNHs combine the microporous nature of activated carbons and the crystalline mesoporous character of carbon nanotubes. The results for the high-pressure CO2 gas adsorption of tip-opened CNHs are reported herein for the first time and are found to be superior to traditional CO2 adsorbents like zeolites. The modified CNHs are also found to be promising materials for lithium ion batteries, and their performance is found to be on a par with carbon nanotubes and carbon nanofibers.
Gas adsorption studies of CO2 and N2 in spatially aligned double-walled carbon nanotube arrays
(2013)
Gas adsorption studies (CO2 and N2) over a wide pressure range on vertically aligned, dense double-walled carbon nanotube arrays of high purity and high specific surface area are reported. At high pressures, the adsorption capacity of these materials was found to be comparable to those of metal organic frameworks and mesoporous molecular sieves. These highly aligned CNT arrays were chemically modified by treating with oxygen plasma and structurally modified by decreasing the diameter of individual carbon nanotubes. Oxygen plasma treatment led to grafting of a large number of C–O functional groups onto the CNT surface, which further increased the gas adsorption capacity. It was found that gas adsorption depends on tube diameter and increases with decreasing diameter of the individual CNTs in the bundles. Our studies show that in lower pressure regimes, plasma-functionalized carbon nanotubes exhibit better adsorption characteristics, whereas at higher pressures, smaller-diameter carbon nanotube structures exhibit better gas adsorption characteristics.
Many different methods, such as screen printing, gravure, flexography and inkjet, have been employed to print electronic devices. Depending on the type and performance of the devices, processing is done at low or high temperature using precursor- or particle-based inks. As a result of the processing details, devices can be fabricated on flexible or non-flexible substrates, depending on their temperature stability. Furthermore, in order to reduce the operating voltage, printed devices rely on high-capacitance electrolytes rather than on dielectrics. The printing resolution and speed are two of the most challenging parameters for printed electronics. High-resolution printing produces small printed devices and high integration densities with minimum materials consumption. However, most printing methods have resolutions between 20 and 50 μm. Printing resolutions close to 1 μm have also been achieved with optimized process conditions and better printing technology.
The final physical dimensions of the devices pose severe limitations on their performance. For example, the channel lengths being of this dimension affect the operating frequency of the thin-film transistors (TFTs), which is inversely proportional to the square of the channel length. Consequently, short channels are favorable not only for high-frequency applications but also for high-density integration. The need to reduce this dimension to substantially smaller sizes than those possible with today’s printers can be fulfilled either by developing alternative printing or stamping techniques, or alternative transistor geometries. The development of a polymer pen lithography technique allows scaling up parallel printing of a large number of devices in one step, including the successive printing of different materials. The introduction of an alternative transistor geometry, namely the vertical Field Effect Transistor (vFET), is based on the idea of using the film thickness as the channel length, instead of the lateral dimensions of the printed structure, thus reducing the channel length by orders of magnitude. The improvements in printing technologies and the possibilities offered by nanotechnological approaches can result in unprecedented opportunities for the Internet of Things (IoT) and many other applications. The vision of printing functional materials, and not only colors as in conventional paper printing, is attractive to many researchers and industries because of the added opportunities when using flexible substrates such as polymers and textiles. Additionally, the reduction of costs opens new markets. The range of processing techniques covers laterally-structured and large-area printing technologies, thermal, laser and UV-annealing, as well as bonding techniques, etc.
The integration of materials such as conducting, semiconducting, dielectric and sensing materials, rigid and flexible substrates, protective coatings, organic, inorganic and polymeric substances, and energy conversion and energy storage materials into complex devices constitutes an enormous challenge.
The suffix-free-prefix-free hash function construction and its indifferentiability security analysis
(2012)
In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value (IV) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) strengthening in the padding functionality of the hash functions. We propose a generic n-bit-iterated hash function framework based on an n-bit compression function called suffix-free-prefix-free (SFPF) that works for arbitrary IVs and does not possess MD strengthening. We formally prove that SFPF is indifferentiable from a random oracle (RO) when the compression function is viewed as a fixed input-length random oracle (FIL-RO). We show that some hash function constructions proposed in the literature fit in the SFPF framework while others that do not fit in this framework are not indifferentiable from a RO. We also show that the SFPF hash function framework with the provision of MD strengthening generalizes any n-bit-iterated hash function based on an n-bit compression function and with an n-bit chaining value that is proven indifferentiable from a RO.
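The plain iterated construction with MD strengthening that the SFPF framework generalizes can be illustrated with a toy implementation (a minimal sketch; SHA-256 merely stands in for the fixed-input-length compression function that the indifferentiability analysis models as a FIL-RO, and the block size is an assumption):

```python
import hashlib

BLOCK = 32  # toy block size in bytes

def compress(chaining, block):
    """Stand-in fixed-input-length compression function (toy; the
    analysis treats this as a fixed input-length random oracle)."""
    return hashlib.sha256(chaining + block).digest()

def md_strengthen_pad(msg):
    """MD strengthening: pad with 0x80, zeros, and an 8-byte length
    field, which makes the padded encoding suffix-free."""
    length = len(msg).to_bytes(8, "big")
    pad_len = (-(len(msg) + 1 + 8)) % BLOCK
    return msg + b"\x80" + b"\x00" * pad_len + length

def iterated_hash(msg, iv=b"\x00" * 32):
    """n-bit iterated hash with an arbitrary IV, as in the framework."""
    padded = md_strengthen_pad(msg)
    h = iv
    for i in range(0, len(padded), BLOCK):
        h = compress(h, padded[i:i + BLOCK])
    return h

d1 = iterated_hash(b"hello")
d2 = iterated_hash(b"hello\x00")  # strengthening separates these inputs
```

The length field in the padding is what distinguishes inputs that differ only in trailing zero bytes; SFPF additionally engineers prefix-freeness so that the indifferentiability proof goes through for arbitrary IVs.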
This paper presents a system that uses a multi-stage AI analysis method for determining the condition and status of bicycle paths using machine learning methods. The approach for analyzing bicycle paths includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage of the analysis. This approach employs a low-threshold data collection method using smartphone-generated video data for image recognition, in order to automatically capture and classify surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven to be effective in image recognition tasks and are particularly well-suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and reliably classify them.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
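The convolution-plus-activation building block on which such CNNs rest can be shown with a toy example (pure Python, illustrative patch and kernel; the real pipeline uses a trained deep network on video frames):

```python
# Single convolution + ReLU detecting a crack-like vertical edge in a
# toy grayscale patch.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0.0, s))  # ReLU activation
        out.append(row)
    return out

# 5x5 patch: bright pavement (1) with a dark vertical crack (0)
patch = [[1, 1, 0, 1, 1]] * 5
# Vertical-edge kernel responds where intensity dips column-wise
kernel = [[1, -2, 1]] * 3
feature_map = conv2d(patch, kernel)
print(feature_map[0])  # strongest response centred on the crack
```

In a trained CNN, many such kernels are learned from labelled surface images rather than hand-designed, and their stacked responses feed the final surface-condition classifier.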
Since their dawning, space communications have been among the strongest driving applications for the development of error correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is a bit different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised because of an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
NEXCODE is a project promoted by the European Space Agency aimed at the research, design, development and demonstration of a receiver chain for telecommand links in space missions, including the presence of new short low-density parity-check codes for error correction. These codes have excellent performance from the error-rate viewpoint but also pose new challenges regarding synchronization issues and implementation. In this paper, after a short review of the results obtained through numerical simulations, we present an overview of the breadboard designed for practical testing and the test plan proposed for the verification of the breadboard and the validation of the new codes and novel synchronization techniques under relevant operating conditions.
The invention relates to the field of transporting flat substrates such as silicon substrates. In particular, the invention relates to particularly protective and continuous transport of such substrates. The method according to the invention is used to transport a vertically aligned flat substrate (1) comprising two flat sides in a transport direction inside a transport channel (2) that is at least partially filled with a liquid medium (F), wherein said liquid medium (F) flows against at least one of the flat sides of the substrate (1) and has a supporting component, which lifts the sum of the weight and buoyancy force of the substrate (1), and an advancing component, which is directed in the transport direction, so that the substrate (1) is supported and transported without mechanical aids. The device according to the invention comprises a transport channel (2) for accommodating a liquid medium (F) and a substrate (1) to be guided in vertical alignment within said medium (F), wherein the transport channel (2) has inflow openings (5) in the walls (3, 4).
A two-dimensional single-phase model is developed for the steady-state and transient analysis of polymer electrolyte membrane fuel cells (PEMFC). Based on diluted and concentrated solution theories, viscous flow is introduced into a phenomenological multi-component modeling framework in the membrane. Characteristic variables related to the water uptake are discussed. A Butler–Volmer formulation of the current-overpotential relationship is developed based on an elementary mechanism of electrochemical oxygen reduction. Validated against published V–I experiments, the model is then used to analyze the effects of operating conditions on current output and water management, especially the net water transport coefficient along the channel. For a power PEMFC, a long-channel configuration operated in counterflow mode with proper gas flow rate and humidity is helpful for internal humidification and anode water removal. In the time domain, a typical transient process with a closed anode is also investigated.
The state-of-the-art electrochemical impedance spectroscopy (EIS) calculations have not yet started from fully multi-dimensional modeling. For a polymer electrolyte membrane fuel cell (PEMFC) with a long flow channel, the impedance plot shows a multi-arc characteristic and some impedance arcs can merge. By using a step excitation/Fourier transform algorithm, an EIS simulation is implemented for the first time based on the full 2D PEMFC model presented in the first part of this work. All the dominant transient behaviors can be captured. A novel methodology called ‘configuration of system dynamics’, which is suitable for any electrochemical system, is then developed to resolve the physical meaning of the impedance spectra. In addition to the high-frequency arc due to charge transfer, the Nyquist plots contain additional medium/low-frequency arcs due to mass transfer in the diffusion layers and along the channel, as well as a low-frequency arc resulting from water transport in the membrane. In some cases, the impedance spectra appear partly inductive due to water transport, which demonstrates the complexity of the water management of PEMFCs and the necessity of physics-based calculations.
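The step excitation/Fourier transform idea can be sketched on a lumped surrogate instead of the full 2D model (a minimal sketch; the Rs + (Rct ∥ Cdl) circuit and its parameter values are assumptions chosen to produce a single charge-transfer arc):

```python
import cmath
import math

# Surrogate cell: series resistance Rs plus charge-transfer resistance
# Rct in parallel with double-layer capacitance Cdl (illustrative values).
Rs, Rct, Cdl = 0.01, 0.05, 2.0       # ohm, ohm, farad
tau = Rct * Cdl
dI = 0.1                             # small current step (A)

def v_response(t):
    """Voltage response to the current step dI applied at t = 0."""
    return dI * (Rs + Rct * (1 - math.exp(-t / tau)))

def impedance(omega, t_end=2.0, n=20000):
    """Z(w) = F{dv/dt} / F{di/dt}: the step in i contributes dI, the
    instantaneous jump in v contributes dI*Rs, and the smooth part of
    dv/dt is Fourier-transformed numerically (midpoint rule)."""
    dt = t_end / n
    integral = 0j
    for k in range(n):
        t = (k + 0.5) * dt
        dv = (v_response(t + dt / 2) - v_response(t - dt / 2)) / dt
        integral += dv * cmath.exp(-1j * omega * t) * dt
    return Rs + integral / dI

z = impedance(1.0 / tau)             # evaluate at the corner frequency
z_exact = Rs + Rct / (1 + 1j)        # analytic value at w = 1/tau
```

Sweeping omega traces out the Nyquist arc; applied to the full 2D model, the same transform recovers the additional mass-transfer and membrane arcs discussed above.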
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years [1]. Given the current increase in using conferencing tools for online teaching, we question just how easy (i.e. needed data, hardware, skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and align the identified as well as our own optimization techniques in the context of two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on avg. only 37 percent can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to a disbelief by default.
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years. Given the current increase in using conferencing tools for online teaching, we question just how easy (i.e. needed data, hardware, skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and align the identified as well as our own optimization techniques in the context of two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on avg. only 37% can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to a disbelief by default.
This work provides a series of methane adsorption isotherms and breakthrough curves on one 5A zeolite and one activated carbon. Breakthrough curves of CH4 were obtained from dynamic column measurements at different temperature and pressure conditions for concentrations of 4.4–17.3 mol% in H2/CH4 mixtures. A simple model was developed to simulate the curves using measured and calculated data as inputs. The results show that the model predictions agree very well with the experiments.
The separation of nitrogen and methane from hydrogen-rich mixtures is systematically investigated on a recently developed binder-free zeolite 5A. For this adsorbent, the present work provides a series of experimental data on adsorption isotherms and breakthrough curves of nitrogen and methane, as well as their mixtures in hydrogen. Isotherms were measured at temperatures of 283–313 K and pressures of up to 1.0 MPa. Breakthrough curves of CH4, N2, and CH4/N2 in H2 were obtained at temperatures of 300–305 K and pressures ranging from 0.1 to 6.05 MPa with different feed concentrations. An LDF-based model was developed to predict breakthrough curves using measured and calculated data as inputs. The number of parameters and the use of correlations were restricted to focus on the importance of measured values. For the given assumptions, the results show that the model predictions agree satisfactorily with the experiments under the different operating conditions applied.
Regarding the importance of adsorptive removal of carbon monoxide from hydrogen-rich mixtures for novel applications (e.g. fuel cells), this work provides a series of experimental data on adsorption isotherms and breakthrough curves of carbon monoxide. Three recently developed 5A zeolites and one commercial activated carbon were used as adsorbents. Isotherms were measured gravimetrically at temperatures of 278–313 K and pressures up to 0.85 MPa. Breakthrough curves of CO were obtained from dynamic column measurements at temperatures of 298–301 K, pressures ranging from 0.1 MPa to ca. 6 MPa and concentrations of CO in H2/CO mixtures of 5–17.5 mol%. A simple mathematical model was developed to simulate breakthrough curves on adsorbent beds using measured and calculated data as inputs. The number of parameters and the use of correlations to evaluate them were restricted in order to focus on the importance of measured values. For the given assumptions and simplifications, the results show that the model predictions agree satisfactorily with the experimental data at the different operating conditions applied.
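The structure of such an LDF-based breakthrough model can be sketched as follows (a minimal illustration with made-up parameters and a linear isotherm, not the measured isotherm data or the paper's fitted model): the column is discretized into well-mixed cells, and the solid-phase uptake in each cell follows the linear driving force law dq/dt = k(q* − q).

```python
# Toy LDF breakthrough simulation; all parameters are illustrative.
def breakthrough(n_cells=20, K=2.0, k_ldf=0.5, t_end=20.0, dt=0.01):
    Q, V_cell, m_cell = 1.0, 0.05, 0.1   # flow, fluid volume, sorbent mass per cell
    c_feed = 1.0                          # feed concentration (arbitrary units)
    c = [0.0] * n_cells                   # fluid-phase concentration per cell
    q = [0.0] * n_cells                   # adsorbed loading per cell
    outlet = []
    for s in range(int(t_end / dt)):
        c_in = c_feed
        for i in range(n_cells):
            dq = k_ldf * (K * c[i] - q[i])                   # LDF uptake
            dc = (Q * (c_in - c[i]) - m_cell * dq) / V_cell  # cell balance
            q[i] += dq * dt
            c[i] += dc * dt
            c_in = c[i]
        if s % int(1.0 / dt) == 0:        # sample outlet once per time unit
            outlet.append(c[-1])
    return outlet

curve = breakthrough()  # outlet concentration rises from 0 towards c_feed
```

The S-shaped rise of the outlet concentration is the breakthrough curve; in the actual model the isotherm, the LDF coefficient and the bed properties are taken from the measurements rather than assumed.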
As a basis for the evaluation of hydrogen storage by physisorption, adsorption isotherms of H2 were experimentally determined for several porous materials at 77 K and 298 K at pressures up to 15 MPa. Activated carbons and MOFs were studied as the most promising materials for this purpose. A notable focus was placed on how to determine whether a material is feasible for hydrogen storage or not, dealing with an assessment method and the pitfalls and problems of determining the viability. For a quantitative evaluation of the feasibility of sorptive hydrogen storage in a general analysis, it is suggested to compare the stored amount in a theoretical tank filled with adsorbent to the amount of hydrogen stored in the same tank without adsorbent. According to our results, an “ideal” sorbent for hydrogen storage at 77 K is calculated to exhibit a specific surface area of >2580 m2 g−1 and a micropore volume of >1.58 cm3 g−1.
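The suggested tank comparison can be sketched numerically (a simplified illustration: the ideal-gas law stands in for real H2 properties, and the sorbent figures are hypothetical, not the measured isotherm data):

```python
# Compare H2 stored in an empty tank vs the same tank filled with a
# hypothetical sorbent; ideal-gas simplification throughout.
R, M_H2 = 8.314, 2.016e-3  # J/(mol K), kg/mol

def gas_density(p_pa, T_K):
    """Ideal-gas density (real H2 deviates noticeably at high pressure)."""
    return p_pa * M_H2 / (R * T_K)

def tank_capacity_kg(V_tank, p, T, sorbent=None):
    """H2 in a tank of volume V_tank (m^3); sorbent given as
    (bulk_density kg/m^3, excess_uptake kg_H2/kg, void_fraction)."""
    if sorbent is None:
        return gas_density(p, T) * V_tank
    rho_bulk, uptake, void = sorbent
    adsorbed = rho_bulk * V_tank * uptake
    gas_phase = gas_density(p, T) * V_tank * void  # gas in remaining voids
    return adsorbed + gas_phase

# Hypothetical MOF-like sorbent at 77 K, 5 MPa: 500 kg/m^3 bulk density,
# 5 wt% excess uptake, 60% void fraction.
V, p, T = 0.1, 5e6, 77.0
empty = tank_capacity_kg(V, p, T)
filled = tank_capacity_kg(V, p, T, sorbent=(500.0, 0.05, 0.6))
```

A sorbent is worthwhile only when `filled` exceeds `empty` at the same pressure and temperature; at high pressures the sorbent's displaced void volume can make the filled tank lose this comparison.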
In this TDP we describe a new tool created for testing the strategy layer of our soccer playing agents. It is a complete 2D simulator that simulates the games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before due to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a means to run simulations of game series much faster than with the 3D simulator. This way, the impact of different play strategies could be evaluated much faster than before.
The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI’s trustworthiness, fairness, interpretability, and accountability. In order to foster trust in AI, it is important to consider the potential of interactive visualization, and how such visualizations help build trust in AI systems. This manifesto discusses the relevance of interactive visualizations and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
In this paper, the J-integral is derived for temperature-dependent elastic–plastic materials described by incremental plasticity. It is implemented using the equivalent domain integral method for assessment of three-dimensional cracks based on results of finite-element calculations. The J-integral considers contributions from inhomogeneous temperature fields and temperature-dependent elastic and plastic material properties as well as from gradients in the plastic strains and the hardening variables. Different energy densities are considered, the Helmholtz free energy and the stress-working density, providing a physical meaning of the J-integral as a fracture criterion for crack growth. Results obtained for a plate with two different crack configurations each loaded by a cool-down thermal shock show domain-independence of the incremental J-integral for different energy densities even for high temperature gradients and significant temperature-dependence of the yield stress and the hardening exponent in the presence of large scale yielding. Hence, the derived J-integral is an appropriate parameter for the assessment of cracks in thermomechanically loaded components.
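For orientation, the equivalent domain integral is commonly written in the following textbook form (standard notation; the paper's exact expression, including its specific correction terms, may differ):

```latex
J = \int_V \left( \sigma_{ij}\,\frac{\partial u_i}{\partial x_1}
      - W\,\delta_{1j} \right) \frac{\partial q}{\partial x_j}\,\mathrm{d}V
  \;-\; \int_V \left( \frac{\partial W}{\partial x_1} \right)_{\mathrm{expl}} q \,\mathrm{d}V
```

Here $q$ is the weighting function of the virtual crack extension, $W$ the chosen energy density, and the explicit derivative of $W$ collects exactly the contributions listed above: the inhomogeneous temperature field, the temperature dependence of the material properties, and the gradients of the plastic strains and hardening variables.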
Background: Increasing awareness of the importance of evidence-based medicine is demonstrated not only by an increasing number of articles addressing it but also by a specialty-wide evidence-based medicine initiative. The authors critically analyzed the quality of reporting of randomized controlled trials published in this Journal over a 21-year period (1990 to 2010).
Methods: A hand search was conducted, including all issues of Plastic and Reconstructive Surgery from January of 1990 to December of 2010. All randomized controlled trials published during this time period were identified with the Cochrane decision tree for identification of randomized controlled trials. To assess the quality of reporting, a modification of the checklist of the Consolidated Standards of Reporting Trials (CONSORT) Statement was used.
Results: Of 7121 original articles published from 1990 to 2010 in the Journal, 159 (2.23 percent) met the Cochrane criteria. A significant increase in the absolute number of randomized controlled trials was seen over the study period (p < 0.0001). The median quality of these trials from 1990 to 2010 was "fair," with a trend toward improved quality of reporting over time (p = 0.127).
Conclusions: A favorable trend is seen with respect to an increased number of published randomized controlled trials in Plastic and Reconstructive Surgery. Adherence to standard reporting guidelines is recommended, however, to further improve the quality of reporting. Consideration may be given to providing information regarding the quality of reporting in addition to the "level of evidence pyramid," thus facilitating critical appraisal.
It is the purpose of this paper to address ethical issues concerning the development and application of Assistive Technology at Workplaces (ATW). We shall give a concrete technical concept of how such technology might be constructed and propose eight technical functions it should adopt in order to serve its purpose. Then, we discuss the normative questions of why one should use ATW, and by what means. We argue that ATW is good to the extent that it ensures social inclusion, and we consider four normative domains in which its worth might consist. In addition, we insist that ATW must satisfy two requirements of good workplaces, which we specify as (a) an exploitation restraint and (b) a duty of care.
Accelerated transformation of society and industry through digitalization, artificial intelligence and other emerging technologies has intensified the need for university graduates who are capable of rapidly finding breakthrough solutions to complex problems and can successfully implement innovation concepts. However, only few universities make significant efforts to comprehensively incorporate the creative and systematic tools of TRIZ (theory of inventive problem solving) and KBI (knowledge-based innovation) into their degree structure. Engineering curricula offer little room for enhancing creativity and inventiveness by means of discipline-specific subjects. Moreover, many educators mistakenly believe that students are either inherently creative or will inevitably obtain adequate problem-solving skills as a result of their university study. This paper discusses challenges of the intelligent integration of TRIZ and KBI into university curricula. It advocates the need for the development of standard guidelines and best-practice recommendations in order to facilitate the sustainable education of ambitious, talented, and inventive specialists. Reflections of educators who teach TRIZ and KBI to students of mechanical, electrical, and process engineering as well as business administration are presented.
This paper presents the results of the idea generation experiment that repeats the study originally conducted at RMIT. In order to establish the influence that the experimental treatments make on the number and the breadth of solution ideas proposed by problem solvers with different knowledge levels, students from different years of study were recruited. Ninety students from the Offenburg University of Applied Sciences, Germany were divided into three groups. All students were asked to generate ideas on cleaning lime deposits from the inside of a water pipe and were given 16 minutes to record their individual ideas. Students in the two experimental groups were shown sets of words for two minutes each. The Su-Field group was exposed to the eight fields of MATCEMIB. The Random Word group was shown eight random words every two minutes. The Su-Field group outperformed both the Control group and the Random Word group in the number of ideas generated. It was also found that the students from the Su-Field group proposed significantly broader solutions than the students from the Control and Random Word groups. The overall results of the experiment support the conclusions made by the RMIT researchers that simple ideation techniques can significantly improve idea generation and that the systematised Substance-Field Analysis is a suitable heuristic for engineering students.
Structured Innovation with TRIZ in Science and Industry - Creating Value for Customers and Society
(2016)
The design of control systems of concentrator photovoltaic power plants will be more challenging in the future. Reasons are cost pressure, the increasing size of power plants, and new applications for operation, monitoring and maintenance required by grid operators, manufacturers and plant operators. Concepts and products for fixed-mounted photovoltaics can only partly be adapted, since control systems for concentrator photovoltaics are considerably more complex due to the required highly accurate sun-tracking. In order to assure reliable operation during a lifetime of more than 20 years, robustness of the control system is a crucial design criterion. This work considers common engineering techniques for robustness, safety and security. Potential failures of the control system are identified and their effects are analyzed. Different attack scenarios are investigated. Outcomes are design criteria that address both failures of system components and malicious attacks on the control system of future concentrator photovoltaic power plants. Such design criteria are a transparent state management through all system layers, self-tests and update capabilities for security concerns. The findings enable future research to develop a more robust and secure control system for concentrator photovoltaics when implementing new functionalities in the next generation.
The communication system of a large-scale concentrator photovoltaic power plant is very challenging. Manufacturers are building power plants having thousands of sun tracking systems equipped with communication and distributed over a wide area. Research is necessary to build a scalable communication system enabling modern control strategies. This poster abstract describes the ongoing work on the development of a simulation model of such power plants in OMNeT++. The model uses the INET Framework to build a communication network based on Ethernet. First results and problems of timing and data transmission experiments are outlined. The model enables research on new communication and control approaches to improve functionality and efficiency of power plants based on concentrator photovoltaic technology.
The design of control systems in large-scale CPV power plants will be more challenging in the future. Reasons are the increasing size of power plants, the requirements of grid operators, new functions, and new technological trends in industrial automation or communication technology. Concepts and products from fixed-mounted PV can only partly be adopted since control systems for sun-tracking installations are considerably more complex due to the higher quantity of controllable entities. The objective of this paper is to deliver design considerations for next-generation control systems. Therefore, the work identifies new applications of future control systems categorized into operation, monitoring and maintenance domains. The key requirements of the technical system and the application layer are identified. In the results section, new strategies such as a more decentralized architecture are proposed and design criteria are derived. The contribution of this paper should allow manufacturers and research institutes to consider the design criteria in current development and to target further research on new functions and control strategies precisely.
In the present study, in vitro toxicity as well as biopersistence and photopersistence of four artificial sweeteners (acesulfame, cyclamate, saccharine, and sucralose) and five antibiotics (levofloxacin, lincomycin, linezolid, marbofloxacin, and sarafloxacin) and of their phototransformation products (PTPs) were investigated. Furthermore, antibiotic activity was evaluated after UV irradiation and after exposure to inocula of a sewage treatment plant. The study reveals that most of the tested compounds and their PTPs were neither readily nor inherently biodegradable in the Organisation for Economic Co-operation and Development (OECD)-biodegradability tests. The study further demonstrates that PTPs are formed upon irradiation with an Hg lamp (UV light) and, to a lesser extent, upon irradiation with a Xe lamp (mimics sunlight). Comparing the nonirradiated with the corresponding irradiated solutions, a higher chronic toxicity against bacteria was found for the irradiated solutions of linezolid. Neither cytotoxicity nor genotoxicity was found in human cervical (HeLa) and liver (Hep-G2) cells for any of the investigated compounds or their PTPs. Antimicrobial activity of the tested fluoroquinolones was reduced after UV treatment, but it was not reduced after a 28-day exposure to inocula of a sewage treatment plant. This comparative study shows that PTPs can be formed as a result of UV treatment. The study further demonstrated that UV irradiation can be effective in reducing the antimicrobial activity of antibiotics, and consequently may help to reduce antimicrobial resistance in wastewaters. Nevertheless, the study also highlights that some PTPs may exhibit a higher ecotoxicity than the respective parent compounds. Consequently, UV treatment does not transform all micropollutants into harmless compounds and may not be a large-scale effluent treatment option.
The formation and analysis of ten microporous triazolyl isophthalate based MOFs, including nine isomorphous and one isostructural compound, is presented. The compounds 1M to 3M with the general formula [M(R¹-R²-trz-ia)]³∞·xH₂O (M²⁺ = Co²⁺, Cu²⁺, Zn²⁺, Cd²⁺; R¹ = H, Me; R² = 2py, 2pym, prz (2py = 2-pyridinyl; 2pym = 2-pyrimidinyl; prz = pyrazinyl)) crystallize with rtl topology. They are available as single crystals and also easily accessible on a multi-gram scale via refluxing the metal salts and the protonated ligands in a solvent. Their isomorphous structures facilitate the synthesis of heteronuclear MOFs; in the case of 2M, Co²⁺ ions could be gradually substituted by Cu²⁺ ions. The Co²⁺:Cu²⁺ ratios were determined by ICP-OES spectroscopy, and the distribution of Co²⁺ and Cu²⁺ in the crystalline samples was investigated by SEM-EDX analysis, leading to the conclusions that Cu²⁺ is more favorably incorporated into the framework than Co²⁺ and, moreover, that the distribution of the two metal ions between the crystals and within the crystals is inhomogeneous if the crystals were grown slowly. The various compositions of the heteronuclear materials lead to different colors, and the sorption properties for CO₂ and N₂ depend on the incorporated metal ions.
The invention relates to a method and to a device for determining the state of charge (SOC) of a rechargeable battery (106) of a specified battery type or a parameter physically related thereto, in particular a remaining charge amount Q contained in the battery, the method operating by means of a voltage-controlled battery model (102), which is parameterized for the battery (106) in question or a corresponding battery type. It is merely necessary to measure the battery voltage U_mess and to provide said battery voltage to the battery model (102) as an input variable. The invention further relates to a method and to a device for determining the state of health (SOH) of a battery (106), wherein the battery model (102) also used to determine the SOC provides a modeled battery current I_mod. Modeled charge amounts during charging and discharging phases of the battery (106) can be determined from said modeled battery current and can be compared with measured charge amounts, which are determined from the measured battery current I_mess. Because the battery model (102) does not age, the SOH of the battery can thereby be determined.
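The charge-comparison idea behind the SOH determination can be illustrated with a minimal sketch. The OCV table, capacity, and all numbers below are invented for illustration and are not the parameterized battery model of the patent:

```python
# Illustrative only: invented open-circuit-voltage (OCV) table for a
# hypothetical Li-ion cell; the patent's model is far more elaborate.
OCV_TABLE = [(3.0, 0.0), (3.3, 0.2), (3.6, 0.5), (3.9, 0.8), (4.2, 1.0)]

def soc_from_voltage(u):
    """Piecewise-linear inverse OCV lookup (SOC as a fraction 0..1)."""
    if u <= OCV_TABLE[0][0]:
        return OCV_TABLE[0][1]
    if u >= OCV_TABLE[-1][0]:
        return OCV_TABLE[-1][1]
    for (u0, s0), (u1, s1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if u0 <= u <= u1:
            return s0 + (s1 - s0) * (u - u0) / (u1 - u0)

def modeled_current(u_series, dt_s, capacity_ah):
    """Differentiate the model SOC to obtain a modeled current I_mod in A."""
    socs = [soc_from_voltage(u) for u in u_series]
    return [(s1 - s0) * capacity_ah * 3600.0 / dt_s
            for s0, s1 in zip(socs, socs[1:])]

def soh(q_measured_ah, q_modeled_ah):
    """The model does not age, so the ratio of measured to modeled charge
    throughput over a charging phase indicates the state of health."""
    return q_measured_ah / q_modeled_ah
```

An aged cell moves less charge for the same voltage swing than the non-aging model predicts, so the ratio drops below 1.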
Passive hybridization refers to a parallel connection of photovoltaic and battery cells on the direct current level without any active controllers or inverters. We present the first study of a lithium-ion battery cell connected in parallel to a string of four or five serially-connected photovoltaic cells. Experimental investigations were performed using a modified commercial photovoltaic module and a lithium titanate battery pouch cell, representing an overall 41.7 W-peak (photovoltaic)/36.8 W-hour (battery) passive hybrid system. Systematic and detailed monitoring of this system over periods of several days with different load scenarios was carried out. A scaled dynamic synthetic load representing a typical profile of a single-family house was successfully supplied with 100 % self-sufficiency over a period of two days. The system shows dynamic, fully passive self-regulation without maximum power point tracking and without battery management system. The feasibility of a photovoltaic/lithium-ion battery passive hybrid system could therefore be demonstrated.
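The passive self-regulation can be illustrated by solving the node equation of an idealized parallel connection. The single-diode PV parameters and battery values below are arbitrary illustration values, not those of the tested 41.7 Wp/36.8 Wh system:

```python
import math

def pv_current(v, n_cells=4, i_ph=2.5, i_0=1e-9, vt=0.026):
    """Single-diode model of a string of series PV cells (invented values):
    I = Iph - I0 * (exp(V / (n * Vt)) - 1)."""
    return i_ph - i_0 * (math.exp(v / (n_cells * vt)) - 1.0)

def operating_point(i_load, ocv=2.0, r_int=0.05, lo=0.0, hi=3.0):
    """Find the common DC-bus voltage of the passive hybrid by bisection.
    The battery is modeled as an OCV behind an internal resistance, so the
    node equation reads: I_pv(V) = I_load + (V - OCV) / R_int."""
    def residual(v):
        i_batt = (v - ocv) / r_int      # > 0 means the battery is charging
        return pv_current(v) - i_load - i_batt
    for _ in range(60):                 # residual is monotone decreasing in v
        mid = (lo + hi) / 2.0
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With PV surplus, the bus settles slightly above the battery OCV and the excess current charges the cell; under deficit, the bus sags below the OCV and the battery discharges, all without any active controller.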
Simulation-based degradation assessment of lithium-ion batteries in a hybrid electric vehicle
(2017)
Covert and side-channels have been known for a long time due to their versatile forms of appearance. For nearly every technical improvement or change in technology, such channels have been (re-)created or known methods have been adapted. For example, the introduction of hyperthreading technology has introduced new possibilities for covert communication between malicious processes because they can now share the arithmetic logical unit (ALU) as well as the L1 and L2 caches, which enables establishing multiple covert channels. Even virtualization, which is known for its isolation of multiple machines, is prone to covert- and side-channel attacks due to the sharing of resources. Therefore it is not surprising that cloud computing is not immune to this kind of attack. Moreover, cloud computing with multiple, possibly competing users or customers using the same shared resources may elevate the risk of unwanted communication. In such a setting the "air gap" between physical servers and networks disappears and only the means of isolation and virtual separation serve as a barrier between adversary and victim. In the work at hand we provide a survey on weak spots that an adversary trying to exfiltrate private data from target virtual machines could exploit in a cloud environment. We evaluate the feasibility of example attacks and point out possible mitigation solutions if they exist.
Several cloud schedulers have been proposed in the literature with different optimization goals such as reducing power consumption, reducing the overall operational costs or decreasing response times. A less common goal is to enhance the system security by applying specific scheduling decisions. The security risk of covert channels has been known for quite some time, but is now back in the focus of research because of the multitenant nature of cloud computing and the co-residency of several per-tenant virtual machines on the same physical machine. Especially several cache covert channels have been identified that aim to bypass a cloud infrastructure's sandboxing mechanism. For instance, cache covert channels like the one proposed by Xu et al. use the idealistic scenario with two alternately running colluding processes in different VMs accessing the cache to transfer bits by measuring cache access time. Therefore, in this paper we present a cascaded cloud scheduler coined C³-Sched aiming at mitigating the threat of a leakage of customers' data via cache covert channels by preventing processes from accessing cache lines alternately. At the same time we aim at maintaining the cloud performance and minimizing the global scheduling overhead.
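A toy sketch of the underlying scheduling idea, keeping processes of different tenants from alternating on a shared cache, might look as follows. This is an invented simplification for illustration, not the cascaded C³-Sched algorithm itself:

```python
from collections import defaultdict

def tenant_block_schedule(procs, n_cores):
    """Toy cache-aware placement: all processes of one tenant are packed
    contiguously onto one core's run queue, so processes of different
    tenants never alternate on that core's cache. Whole tenants are spread
    over cores largest-first onto the currently least-loaded core."""
    by_tenant = defaultdict(list)
    for pid, tenant in procs:
        by_tenant[tenant].append(pid)
    queues = {core: [] for core in range(n_cores)}
    for tenant, pids in sorted(by_tenant.items(), key=lambda kv: -len(kv[1])):
        core = min(queues, key=lambda c: len(queues[c]))
        queues[core].extend((pid, tenant) for pid in pids)
    return queues
```

The invariant a covert-channel-aware scheduler cares about here is that, within any core's queue, each tenant occupies one contiguous block, which removes the alternating access pattern such cache channels rely on.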
Covert channels have been known for a long time because of their versatile forms of appearance. For nearly every technical improvement or change in technology, such channels have been (re-)created or known methods have been adapted. For example, the introduction of hyperthreading technology has introduced new possibilities for covert communication between malicious processes because they can now share the arithmetic logical unit as well as the L1 and L2 caches, which enable establishing multiple covert channels. Even virtualization, which is known for its isolation of multiple machines, is prone to covert- and side-channel attacks because of the sharing of resources. Therefore, it is not surprising that cloud computing is not immune to this kind of attack. Moreover, cloud computing with multiple, possibly competing users or customers using the same shared resources may elevate the risk of illegitimate communication. In such a setting, the "air gap" between physical servers and networks disappears, and only the means of isolation and virtual separation serve as a barrier between adversary and victim. In the work at hand, we provide a survey on vulnerable spots that an adversary could exploit trying to exfiltrate private data from target virtual machines through covert channels in a cloud environment. We evaluate the feasibility of example attacks and point out proposed mitigation solutions in case they exist.
Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained much popularity lately. The reduction of the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications like leakage detection in silicone breast implants require low-power-consuming small-size electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) is presented. Owing to the low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figure-of-merits compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693/NFC (near field communication) Type 5 standard for radio frequency identification (RFID), corresponding to the frequency range of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and very few other external components, like an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block of GAPSIC includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute the GAPSIC internal memory. For the fabrication of GAPSIC, a one-poly, six-metal 0.18 µm CMOS process is used.
The design of GAPSIC includes two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the rest of the functional blocks were integrated with the blocks of the RFID/NFC frontend chip for the final integration of GAPSIC. To reduce the power consumption, conventional low-power design techniques, such as multiple power supplies and the operation of complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region, were applied extensively, complemented by further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method where three stacked transistor pairs shunt the extra voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units provide the optimal choice in terms of voltage drop and the area required.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes the circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the smallest modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figure-of-merits compared to the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip, as well as temperature and pressure sensors, demonstrates the full passive operational capability of the frontend chip. An NFC reader device using a custom-built Android-based application software reads out the sensor data from the passive tag.
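The demodulation principle, an envelope detector followed by a comparator with a preset offset, can be sketched behaviorally in a few lines. The coefficients, offset, and test signal below are invented, and the real implementation is analog circuitry, not software:

```python
def demodulate_ask(samples, alpha=0.05, avg_rate=0.01, offset=0.02):
    """Behavioral sketch of ASK slicing (invented parameters): a
    fast-attack/slow-decay envelope follower is compared against a slowly
    tracking reference minus a preset offset, mimicking a comparator with
    a built-in offset voltage that still resolves a 10% modulation dip."""
    env = 0.0   # envelope follower output
    avg = 0.0   # very slow average used as the comparator reference
    bits = []
    for s in samples:
        env = max(abs(s), env * (1.0 - alpha))   # rectify + hold the peak
        avg += avg_rate * (env - avg)            # reference drifts slowly
        bits.append(1 if env > avg - offset else 0)
    return bits
```

Because the reference tracks slowly, a shallow 10% amplitude dip pulls the envelope below the reference-minus-offset threshold long before the reference itself has settled, which is the point of the preset offset.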
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs with a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, where a firmware code removes the offset voltage from the measured sensor signal. A 12-bit successive approximation register (SAR) type analog-to-digital-converter (ADC) based on a charge redistribution architecture converts the measured sensor data to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and 12-bit SAR logic.
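The SAR conversion loop itself is a plain binary search and can be sketched as follows. This is a behavioral model only; the actual converter implements the DAC with a charge-redistribution capacitor array:

```python
def sar_adc(v_in, v_ref=1.0, bits=12):
    """Behavioral successive-approximation register (SAR) ADC: test bits
    from MSB to LSB, keeping each bit whose trial DAC level does not
    exceed the input voltage."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        # comparator decision: DAC output at the trial code vs. input
        if trial / (1 << bits) * v_ref <= v_in:
            code = trial
    return code
```

Each of the 12 iterations halves the remaining search interval, so the converter needs exactly one comparator decision per output bit.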
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved the link efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core compared to an air core on the antenna coupling factor was investigated. With the ferrite core, the coupling factor increased fourfold.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
A new RFID/NFC (ISO 15693 standard) based inductively powered passive SoC (system-on-chip) for biomedical applications is presented here. The proposed SoC consists of an integrated 32-bit microcontroller, an RFID/NFC frontend, a sensor interface circuit, an analog-to-digital converter and some peripherals such as a timer, an SPI interface and memory devices. An energy harvesting unit supplies the power required for the entire system for completely passive operation. The complete chip is realized in a 0.18 μm CMOS technology with a chip area of 1.5 mm × 3.0 mm.
In this paper, a complete passive transponder device is discussed which is meant to monitor leakage in silicone breast implants. The passive tag operates in the HF frequency range at 13.56 MHz using the RFID ISO 15693 standard. The complete system consists of the transponder, a reader and a PC. This paper focuses on the development of such a state-of-the-art passive RFID transponder to periodically monitor the condition of silicone breast implants in order to detect leakage. Keywords: RFID (radio frequency identification), EM (electromagnetic) field, passive transponder, silicone breast implants.
Team description papers of magmaOffenburg are incremental in the sense that each year we address a different topic of our team and the tools around our team. In this year’s team description paper we focus on the architecture of the software. It is a main factor for being able to keep the code maintainable even after 15 years of development. We also describe how we make sure that the code follows this architecture.
In the past two decades much has been published on whiplash injury, yet both the confusion regarding the condition, and the medicolegal discussion about it have increased. In this paper, functional imaging research results are summarized using MRIcroGL3D visualization software and assembled in an image comprising regions of cerebral activation and deactivation.
The invention relates to a device for the biological methanation of CO and/or CO2 by means of methanogenic microorganisms through the conversion of H2 and CO and/or CO2, comprising a gassing column and a degassing column, each with a bottom side and an upper side opposite the bottom side; a medium containing methanogenic microorganisms provided in the gassing column and the degassing column; a feed device for feeding a gas containing H2 into the medium of the gassing column; a discharge device for discharging a gas containing CH4 from the degassing column; a connecting line between the gassing column and the degassing column in the region of the bottom sides; a pump for transferring medium via the connecting line from the gassing column into the degassing column; and a return line between the gassing column and the degassing column in the region of the upper sides for returning medium from the degassing column into the gassing column. The invention also relates to a method for the biological methanation of CO and/or CO2 in a device by means of methanogenic microorganisms forming part of a medium provided in the device, wherein the medium is circulated through a gassing column and a degassing column, the columns being connected to each other via a connecting line in the region of their bottom sides and via a return line in the region of the upper sides opposite the bottom sides, wherein the medium moves downward in the gassing column and upward in the degassing column, and wherein a gas containing H2 is fed into the medium in the gassing column.
The majority of anterior cruciate ligament (ACL) injuries in team sports are non-contact injuries, with cutting maneuvers identified as high-risk tasks. Young female handball players have been shown to be at greater risk for ACL injuries than males. One risk factor for ACL injuries is the magnitude of the knee abduction moment (KAM). Cutting technique variables on foot placement, overall approach and knee kinematics have been shown to influence the KAM. Since injury risk is believed to increase with increasing task complexity, the purpose of the study was to test the effect of task complexity on technique variables that influence the KAM in female handball players during fake-and-cut tasks.
Landing heel first has been associated with elevated external knee abduction moments (KAM), thereby potentially increasing the risk of sustaining a non-contact ACL injury. Apart from the foot strike angle, knee valgus angle (VAL) and vertical center of mass velocity at initial ground contact (IC) have been associated with increased KAM in females across different sidestep cuts. While real-time biofeedback training has been proven effective for gait retraining [4], the highly dynamic, non-cyclical nature of cutting maneuvers makes real-time feedback unsuitable and alternative approaches necessary. This study aimed at assessing the efficacy of immediate software-aided feedback on cutting technique in reducing KAM during handball-specific cutting maneuvers.
The purpose of this study was to 1) compare knee joint kinematics and kinetics of fake-and-cut tasks of varying complexity in 51 female handball players and 2) present a case study of one athlete who ruptured her ACL three weeks post data collection. External knee joint moments and knee joint angles in all planes at the instance of the peak external knee abduction moment (KAM) as well as moment and angle time curves were analyzed. Peak KAMs and knee internal rotation moments were substantially higher than published values obtained during simple change-of-direction tasks and, along with flexion angles, differed significantly between the tasks. Introducing a ball reception and a static defender increased joint loads, while they partially decreased again when anticipation was lacking. Our results suggest using game-specific assessments of injury risk, although higher complexity levels do not directly increase knee loading. Extreme values of several risk factors for the athlete injured after testing highlight the need for and usefulness of appropriate screenings.
This chapter portrays the historical and mathematical background of dynamic and procedural content generation (PCG). We portray and compare various PCG methods and analyze which mathematical approach is suited for typical applications in game design. In the next step, a structural overview of games applying PCG as well as types of PCG is presented. As abundant PCG content can be overwhelming, we discuss context-aware adaptation as a way to adapt the challenge to individual players’ requirements. Finally, we take a brief look at the future of PCG.
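One classic PCG technique of the kind such overviews cover, cellular-automata cave smoothing, can be sketched as follows. The fill ratio, rule threshold, and step count are arbitrary illustration values, not taken from the chapter:

```python
import random

def generate_cave(width, height, fill=0.45, steps=4, seed=42):
    """Cellular-automata cave generation: start from seeded random noise,
    then repeatedly apply a majority-of-neighbours rule so that noise
    coalesces into cavern-like wall structures. Deterministic per seed."""
    rng = random.Random(seed)
    wall = [[rng.random() < fill for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        nxt = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # count wall neighbours; out-of-bounds counts as wall,
                # which naturally seals the map border
                n = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == dx == 0:
                            continue
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < height and 0 <= nx < width) or wall[ny][nx]:
                            n += 1
                nxt[y][x] = n >= 5   # majority rule smooths the noise
        wall = nxt
    return wall
```

Seeding the generator makes the content reproducible, which is how such techniques support both endless variety and shareable, replayable levels; context-aware adaptation would then tune parameters like `fill` to the player.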
HPTLC on amino plates, with simple heating of the plates for derivatization, has been used for quantification of glucosamine in nutritional supplements. On heating the plate, glucosamine reacts to form a compound which strongly absorbs light between 305 and 330 nm, with weak fluorescence. The reaction product can be detected sensitively either by absorption of light or by fluorescence detection. The detection limit in absorption mode is approximately 25 ng per spot. In fluorescence mode a detection limit of 15 ng is achievable. A calibration plot for absorption detection is linear in the range 25 to 4000 ng glucosamine. The derivative formed from glucosamine by heating is stable for months, and the relative standard deviation is 1.64% for 600 ng glucosamine. The amounts of glucosamine found in nutritional supplements were in agreement with the label declarations.
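The linear calibration described above amounts to an ordinary least-squares fit over the 25 to 4000 ng range. The absorbance readings below are invented for illustration; only the amount range comes from the abstract:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of a single-analyte calibration line,
    returning (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical calibration standards across the reported linear range
amounts = [25, 250, 1000, 2000, 4000]      # ng glucosamine per spot
signal = [0.05, 0.50, 2.00, 4.00, 8.00]    # arbitrary absorbance units
slope, intercept = linear_fit(amounts, signal)
```

An unknown spot's amount is then recovered as `(reading - intercept) / slope`, which is valid only inside the calibrated range.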
In thin-layer chromatography, fiber-bundle arrays have been introduced for spectral absorption measurements in the UV region. With all-silica fiber bundles, the excitation light re-emitted from the plate is detected with a fiber-optic spectrometer. Fluorescence light can also be detected but is masked by the re-emitted light, so it is helpful to separate the absorption and fluorescence measurements on the TLC plate. A modified three-array assembly has been developed: one array is used for detection, and the other two for excitation with broadband deuterium light and with UV-LEDs matched to the substances under test. As an example, the quantification of glucosamine in nutritional supplements and spinach leaf extract is described. Using simple heating of the amino plate for derivatization, the reaction product of glucosamine can be detected sensitively either by light absorption or by fluorescence with the new fiber-optic assembly. In addition, the properties of the new three-row fiber-optic array and of commercially available UV-LEDs are presented for the wavelength region of interest for fluorescence excitation, 260 nm to 360 nm. The squint angle, which influences coupling efficiency and spatial resolution, is measured with the inverse far-field method. Some properties of UV-LEDs for analytical applications are also described and discussed.
Background
To assess the in-field walking mechanics during downhill hiking of patients with total knee arthroplasty five to 14 months after surgery and of an age-matched healthy control group, and to relate them to knee flexor and extensor muscle strength.
Methods
Participants walked on a predetermined hiking trail at a self-selected, comfortable pace while wearing an inertial sensor system that recorded whole-body 3D kinematics. Sagittal-plane hip, knee, and ankle joint angles were evaluated over the gait cycle during level walking and on two different negative slopes. Concentric and eccentric muscle strength of the knee flexors and extensors was measured isokinetically at 50 and 120°/s.
Findings
Lower knee flexion angles during stance were measured in the operated limb of patients compared to healthy controls in all conditions (level walking, moderate downhill, steep downhill), and the differences increased with steepness. Muscle strength was lower in patients for both muscle groups in all measured conditions. In patients, the functional hamstrings-to-quadriceps ratio at 120°/s correlated with the knee angle during level walking and downhill walking on the moderate slope, with higher ratios associated with lower peak knee flexion angles.
Interpretation
The study shows that even when rehabilitation had been completed successfully and without complications, the muscular condition five to 14 months after surgery was still insufficient to produce a normal gait pattern during downhill hiking. The balance between the quadriceps and hamstring muscles appears to be related to the persistence of a stiff-knee gait pattern after knee arthroplasty. LoE: III.
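As a hedged illustration of the measure referred to in the Findings (not code from the study): the functional hamstrings-to-quadriceps ratio is conventionally computed as eccentric hamstring peak torque divided by concentric quadriceps peak torque at the same angular velocity. A minimal sketch, with hypothetical torque values:

```python
def functional_hq_ratio(ecc_hamstring_nm, con_quad_nm):
    """Functional H/Q ratio: eccentric hamstring peak torque (Nm)
    divided by concentric quadriceps peak torque (Nm), both taken
    at the same isokinetic angular velocity (e.g. 120 deg/s)."""
    if con_quad_nm <= 0:
        raise ValueError("quadriceps torque must be positive")
    return ecc_hamstring_nm / con_quad_nm

# Hypothetical peak torques at 120 deg/s, for illustration only:
ratio = functional_hq_ratio(95.0, 110.0)  # about 0.86
```

Ratios closer to or above 1.0 indicate that the hamstrings can eccentrically match the concentric quadriceps output, which is the balance aspect the Interpretation relates to the stiff-knee gait pattern.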