Refine
Year of publication
- 2023 (488)
Document Type
- Conference Proceeding (115)
- Bachelor Thesis (104)
- Article (reviewed) (84)
- Master's Thesis (65)
- Article (unreviewed) (38)
- Part of a Book (24)
- Other (13)
- Patent (13)
- Book (9)
- Doctoral Thesis (6)
Conference Type
- Conference Paper (88)
- Conference Abstract (20)
- Conference Poster (2)
- Conference Proceedings (2)
- Other (2)
Keywords
- Biomechanics (19)
- Deep Learning (16)
- Artificial Intelligence (12)
- IT Security (11)
- Marketing (10)
- Heat Pump (8)
- Medical Engineering (7)
- Social Media (7)
- Machine Learning (6)
Institute
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (151)
- Fakultät Medien (M) (since 22.04.2021) (136)
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (since 04/2019) (132)
- Fakultät Wirtschaft (W) (69)
- INES - Institut für nachhaltige Energiesysteme (38)
- IMLA - Institute for Machine Learning and Analytics (27)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (23)
- IBMS - Institute for Advanced Biomechanics and Motion Studies (since 16.11.2022) (21)
- POIM - Peter Osypka Institute of Medical Engineering (12)
- IfTI - Institute for Trade and Innovation (9)
Open Access
- Closed (302)
- Open Access (168)
- Bronze (59)
- Diamond (47)
- Gold (37)
- Hybrid (20)
- Closed Access (18)
- Green (5)
The invention relates to a method for maximizing the entropy derived from an analog entropy source, the method comprising the following steps:
- providing input data to the analog entropy source (2);
- generating return values by the analog entropy source based on the input data (3); and
- grouping the return values, wherein the grouping of the return values comprises applying offsets to the return values (4).
With recent developments in the Ukrainian-Russian conflict, many are discussing Germany's dependency on fossil fuel imports in its energy system and how the country can proceed with reducing that dependency. With its wide-ranging consumption sectors, the electricity sector is the natural place to start. Recent reports show that the German federal government already intends to achieve a fully renewable electricity supply by 2035 while exploiting all available clean power options. This was published in the federal government's climate emergency program (Easter Package) in early 2022. The aim of this package is to initiate a rapid transition and decarbonization of the electricity sector. The Easter Package projects enormous growth of renewable energies to a completely new level, with at least 80% of gross electricity consumption covered by renewables and extensive, broad deployment of different generation technologies at various scales. This paper discusses this ambitious plan, outlines some insights into this large and rapidly accelerating step, and shows how much Germany will need in order to achieve this milestone towards a fully green supply of the electricity sector. Different scenarios and renewable shares are investigated in order to elaborate on the advanced climate-neutrality goal for the electricity sector by 2035. The results point out some promising aspects of achieving 100% renewable power, given massive investments in both generation and storage technologies.
Method and system for extracting metal and oxygen from powdered metal oxides (EP000004170066A2)
(2023)
A method for extracting metal and oxygen from powdered metal oxides in an electrolytic cell is proposed, the electrolytic cell comprising a container, a cathode, an anode and an oxygen-ion-conducting membrane, the method comprising providing a solid oxygen-ion-conducting electrolyte powder into a container, providing a feedstock comprising at least one metal oxide in powdered form into the container, and applying an electric potential across the cathode and the anode, the cathode being in communication with the electrolyte powder and the anode being in communication with the membrane in communication with the electrolyte powder, such that at least one respective metallic species of the at least one metal oxide is reduced at the cathode and oxygen is oxidized at the anode to form molecular oxygen, wherein the potential across the cathode and the anode is greater than the dissociation potential of the at least one metal oxide and less than the dissociation potential of the solid electrolyte powder and the membrane.
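The potential window stated in the claim (above the oxide's dissociation potential, below that of the electrolyte powder and the membrane) reduces to a simple inequality; the voltages in this sketch are invented placeholders, not values from the patent:

```python
# Sketch of the operating-window condition in the claim: the applied cell
# potential must exceed the feedstock oxide's dissociation potential while
# staying below that of the electrolyte powder and the membrane.
# All voltages below are illustrative placeholders.
def potential_in_window(applied_v, oxide_diss_v, electrolyte_diss_v):
    return oxide_diss_v < applied_v < electrolyte_diss_v

ok = potential_in_window(applied_v=2.0, oxide_diss_v=1.7, electrolyte_diss_v=2.4)
too_high = potential_in_window(applied_v=2.6, oxide_diss_v=1.7, electrolyte_diss_v=2.4)
```

Outside this window, either the oxide is not reduced at all or the electrolyte and membrane themselves begin to dissociate.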
Following their success in visual recognition tasks, Vision Transformers (ViTs) are increasingly being employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether the improved adversarial robustness of ViTs extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of a Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Contrary to what is advocated in works on ViTs for image classification, our analysis indicates that these models are highly susceptible to adversarial attacks. We attempt to find an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, results on other networks are less promising. Interestingly, we find that the design choices in NAFNet and the Baseline network, which were based on i.i.d. performance rather than robust generalization, seem to be at odds with model robustness.
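For readers unfamiliar with the attack, here is a minimal sketch of PGD under an L-infinity constraint on a toy scalar problem; the paper itself attacks deep restoration networks on GoPro images, so everything below (the quadratic loss, the step sizes) is purely illustrative:

```python
# Minimal PGD sketch on a toy scalar "model" under an L-infinity ball.
def pgd_attack(x0, loss_grad, eps=0.1, alpha=0.02, steps=10):
    """Ascend the loss along the gradient sign, projecting back onto the ball."""
    x = x0
    for _ in range(steps):
        g = loss_grad(x)                      # gradient of the loss w.r.t. input
        x = x + alpha * (1 if g > 0 else -1)  # signed gradient-ascent step
        x = max(x0 - eps, min(x0 + eps, x))   # project onto the eps-ball around x0
    return x

# Toy objective: the "model" w*x should match target y; the attack maximizes loss.
w, y = 2.0, 4.0
loss = lambda x: (w * x - y) ** 2
grad = lambda x: 2 * w * (w * x - y)

x_clean = 2.0                  # loss is exactly 0 at the clean input
x_adv = pgd_attack(x_clean, grad)
```

The adversarial input stays within the epsilon ball around the clean input yet strictly increases the loss, which is the essence of the evaluation.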
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts that are introduced during downsampling, and diverse remedies have been proposed that help improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed less. This is significant, as upsampling and downsampling approaches face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling. Blurring is therefore not an option, and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that allows for a combination of small kernels for fine details and large kernels for artifact removal and increased context yields the best results.
Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or different storage schemes have been proposed in the literature, and the use of cloud resources for blockchain storage has been studied extensively in recent years. Nonetheless, block selection, i.e., identifying the blocks to be transferred to the cloud, remains a substantial challenge in integrating cloud resources with a blockchain. This paper proposes a deep reinforcement learning (DRL) approach to the block selection problem, converting the multi-objective optimization of block selection into a Markov decision process (MDP). We design a simulated blockchain environment for training and testing our proposed DRL approach. We utilize two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to solve the block selection problem and analyze their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer compared to the full replication approach of conventional blockchain systems. Even the slowest DRL algorithm, A2C, achieves a run-time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution for the time-sensitive blockchain-IIoT environment.
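The block-selection setting can be sketched as a toy environment with a simple greedy baseline; this is not the paper's A2C/PPO setup, and the block sizes and capacity below are invented for illustration:

```python
# Toy block-selection environment: a policy picks blocks to offload to the
# cloud until the peer's local storage fits its capacity.
class BlockSelectionEnv:
    def __init__(self, block_sizes, capacity):
        self.block_sizes = block_sizes
        self.capacity = capacity
        self.local = set(range(len(block_sizes)))  # blocks still on the peer

    def local_usage(self):
        return sum(self.block_sizes[i] for i in self.local)

    def step(self, block_id):
        """Action: move one block to the cloud; reward is the space freed."""
        self.local.discard(block_id)
        reward = self.block_sizes[block_id]
        done = self.local_usage() <= self.capacity
        return reward, done

# Greedy baseline policy: offload the largest remaining block first.
def greedy_offload(env):
    offloaded = []
    done = env.local_usage() <= env.capacity
    while not done:
        block = max(env.local, key=lambda i: env.block_sizes[i])
        _, done = env.step(block)
        offloaded.append(block)
    return offloaded

env = BlockSelectionEnv(block_sizes=[5, 1, 3, 8, 2], capacity=6)
moved = greedy_offload(env)
```

A DRL agent replaces the greedy rule with a learned policy that also weighs retrieval latency and access frequency, which is what turns this into a multi-objective MDP.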
Linux and Linux-based operating systems have been gaining popularity among general users and developers alike. Many big enterprises and large companies use Linux for the servers that host their websites, and some even require their developers to be familiar with the Linux OS. Linux-based operating systems also run many embedded systems. With this increasing popularity comes the need to secure a system that so many people rely on, whether to protect the data it stores, the integrity of the system itself, or the availability of the services it offers. Many researchers and Linux enthusiasts have devised various ways to secure the Linux OS; however, malicious attackers keep finding new vulnerabilities and bugs with every update or change, which calls for additional ways to secure these systems.
This thesis explores the possibility and feasibility of another way to secure the Linux OS, specifically its terminal, by altering the terminal's commands: getting in the way of attackers who have gained terminal access and delaying them, giving response and forensics teams more time to stop the attack, minimize the damage, restore operations, and identify, collect, and store evidence of the cyber-attack. This research discusses the advantages and disadvantages of various security measures and compares and contrasts them with the method suggested here.
This research is significant because it paints a better picture of the state of the art in Linux and Linux-based operating system security and addresses the concerns of security enthusiasts, while exploring an uncharted area of security that has been regarded as an insignificant part of protecting these OSes because of the various limitations and problems it entails. This research addresses these concerns while exploring ways to solve them, and it also identifies the areas and situations in which the proposed method is most useful, as well as those in which it would be more of a burden than a help.
Soiling is an important issue in the renewable energy sector, since it can result in significant yield losses, especially in regions with high pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling affects the efficiency and productivity of the modules and how to measure and predict soiling using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. Deviations between the real soiling loss values and the expected values were observed for some projects in southern Spain; thus, the main goal of this work is to develop machine learning models that predict soiling more accurately. The developed models have a low mean squared error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a powerful application to predict both the soiling and the number of cleaning cycles.
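As a hedged illustration of the prediction task, the sketch below fits a simple least-squares line to invented soiling-loss observations (soiling typically accumulates roughly linearly between cleanings) and reports its mean squared error; the thesis itself uses richer ML models on real plant data:

```python
# Least-squares fit of soiling loss vs. days since cleaning, plus MSE.
# The observations below are invented for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def mse(ys, preds):
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

days = [0, 7, 14, 21, 28]             # days since last cleaning
loss_pct = [0.0, 1.1, 2.0, 3.1, 3.9]  # observed soiling loss in %
a, b = fit_line(days, loss_pct)
preds = [a * d + b for d in days]
error = mse(loss_pct, preds)          # low MSE indicates a good fit
```

The fitted slope plays the role of a soiling rate; in the study, such rates feed into the cleaning-cycle predictions.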
Femtosecond (fs) time-resolved magneto-optics is applied to investigate laser-excited ultrafast dynamics of one-dimensional nickel gratings on fused silica and silicon substrates for a wide range of periodicities Λ = 400–1500 nm. Multiple surface acoustic modes with frequencies up to a few tens of GHz are generated. Nanoscale acoustic wavelengths Λ/n have been identified as nth spatial harmonics of the Rayleigh surface acoustic wave (SAW) and the surface skimming longitudinal wave (SSLW), with acoustic frequencies and lifetimes in agreement with theoretical calculations. Resonant magnetoelastic excitation of the ferromagnetic resonance (FMR) by the SAW's third spatial harmonic and, most interestingly, fingerprints of the parametric resonance at 1/2 the SAW frequency have been observed. Numerical solutions of the Landau–Lifshitz–Gilbert (LLG) equation magnetoelastically driven by complex polychromatic acoustic fields quantitatively reproduce all resonances at once. Thus, our results provide a solid experimental and theoretical base for a quantitative understanding of ultrafast fs-laser-driven magnetoacoustics and for tailoring magnetic-grating-based metasurfaces at the nanoscale.
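As a back-of-the-envelope check, the n-th spatial harmonic of a surface wave on a grating of period Λ has wavelength Λ/n and frequency n·v/Λ; the Rayleigh velocity used below is an assumed illustrative value, not one taken from the paper:

```python
# Frequency of the n-th spatial harmonic of a surface acoustic wave on a
# grating of period Λ: f = v / (Λ / n). The Rayleigh velocity v_rayleigh is
# an assumed illustrative value, not a number from the paper.
def saw_frequency_ghz(period_nm, harmonic=1, v_rayleigh=3400.0):
    wavelength_m = period_nm * 1e-9 / harmonic  # nanoscale wavelength Λ/n
    return v_rayleigh / wavelength_m / 1e9      # frequency in GHz

f1 = saw_frequency_ghz(400)                 # fundamental, smallest period
f3 = saw_frequency_ghz(1500, harmonic=3)    # third harmonic, largest period
```

Both values land in the GHz range, consistent with the "up to a few tens of GHz" modes reported for this span of periodicities.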
AI-based Ground Penetrating Radar Signal Processing for Thickness Estimation of Subsurface Layers
(2023)
This thesis focuses on the estimation of subsurface layer thickness using Ground Penetrating Radar (GPR) A-scan and B-scan data through the application of neural networks. The objective is to develop accurate models capable of estimating the thickness of up to two subsurface layers.
Two different approaches are explored for processing the A-scan data. In the first approach, A-scans are compressed using Principal Component Analysis (PCA), and a regression feedforward neural network is employed to estimate the layers’ thicknesses. The second approach utilizes a regression one-dimensional Convolutional Neural Network (1-D CNN) for the same purpose. Comparative analysis reveals that the second approach yields superior results in terms of accuracy.
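The compression step of the first approach can be sketched with a bare-bones PCA via power iteration; the three-sample "A-scans" below are invented toy vectors, not GPR traces:

```python
# PCA sketch: find the top principal component by power iteration and use the
# projections onto it as a compressed 1-D representation of each sample.
def top_principal_component(X, iters=200):
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, means)] for row in X]   # centered data
    v = [1.0] * d
    for _ in range(iters):
        # multiply v by the (unscaled) covariance matrix C^T C
        scores = [sum(ci * vi for ci, vi in zip(row, v)) for row in C]
        v = [sum(scores[i] * C[i][j] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    scores = [sum((x - m) * vi for x, m, vi in zip(row, means, v)) for row in X]
    return v, scores  # principal direction and compressed representation

X = [[1.0, 2.0, 3.0], [2.0, 4.1, 6.0], [3.0, 5.9, 9.1]]  # toy "A-scans"
v, compressed = top_principal_component(X)
```

In the thesis, such compressed representations (with many components, not one) feed the regression feedforward network, whereas the 1-D CNN consumes the raw A-scans directly.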
Subsequently, the proposed 1-D CNN architecture is adapted and evaluated for Step Frequency Continuous Wave (SFCW) radar, expanding its applicability to this type of radar system. The effectiveness of the proposed network in estimating subsurface layer thickness for SFCW radar is demonstrated.
Furthermore, the thesis investigates the utilization of GPR B-scan images as input data for subsurface layer thickness estimation. A regression CNN is employed for this purpose, although the results achieved are not as promising as those obtained with the 1-D CNN using A-scan data. This disparity is attributed to the limited availability of B-scan data, as B-scan generation is a resource-intensive process.
Purpose
Although start-ups have gained increasing scholarly attention, we lack sufficient understanding of their entrepreneurial strategic posture (ESP) in emerging economies. The purpose of this study is to examine the processes of ESP of new technology venture start-ups (NTVs) in an emerging market context.
Design/methodology/approach
In line with grounded theory guidelines and the inductive research traditions, the authors adopted a qualitative approach involving 42 in-depth semi-structured interviews with Ghanaian NTV entrepreneurs to gain a comprehensive analysis at the micro-level on the entrepreneurs' strategic posturing. A systematic procedure for data analysis was adopted.
Findings
From the authors' analysis of Ghanaian NTVs, the authors derived a three-stage model to elucidate the nature and process of ESP: Phase I, spotting and exploiting market opportunities; Phase II, identifying initial advantages; and Phase III, ascertaining and responding to change.
Originality/value
The study contributes to advancing research on ESP by explicating the process through which informal ties and networks are utilised by NTVs and NTVs' founders to overcome extreme resource constraints and information vacuums in contexts of institutional voids. The authors depart from past studies in demonstrating how such ties can be harnessed in spotting and exploiting market opportunities by NTVs. On this basis, the paper makes original contributions to ESP theory and practice.
One of the main problems with seal tests is how time-consuming and expensive they are. Up to now, there have been few attempts to digitalise a test from which the seals' behaviour can be determined.
This work aims to digitally reproduce a seal test in order to extract the seals' behaviour under different operating conditions and assess their impact on the pump's efficiency. Owing to the Lomakin effect, the leakage and the forces applied on the stator form the basis of the analysis in this thesis.
First, from all the literature available on very different kinds of seals and inner patterns, the most appropriate and precise data were chosen: "Test results for liquid Damper Seals using a Round-Hole Roughness Pattern for the Stator" by Fayolle, P., and "Static and Rotordynamic Characteristics of Liquid Annular Seals with Circumferentially-Grooved Stator and Smooth Rotor using three levels of circumferential Inlet-Fluid" by Torres, J.M.
From this literature, the dimensions of the test rig and the seals are extracted and modelled in 3D CAD software. From the 3D CAD digitalisation, the fluid volumes for a rotor-centred position, i.e. without eccentricity, are extracted and used. The following components have been modelled:
- Smooth Annular Liquid Seal (Grooved Rotor)
- Grooved Annular Liquid Seal (Smooth Rotor)
- Round-Hole Pattern Annular Liquid Seal (Hd = 2 mm) (Smooth Rotor)
- Straight Honeycomb Annular Liquid Seal (Smooth Rotor)
- Convergent Honeycomb Annular Liquid Seal (Smooth Rotor)
- Smooth Annular Liquid Seal (Smooth Rotor)
As there is just one test rig, all the components have been adapted to the different dimensions of the seals by referencing certain measurements. This makes it possible to test any seal on the same test rig.
Afterwards, a CFD simulation is used to obtain the leakage and the stator forces. The parameters that are varied are the rotational velocity (2000 rpm, 4000 rpm, and 6000 rpm) and the pressure drop (2.068 bar, 4.137 bar, 6.205 bar, and 8.274 bar).
These results are compared with those from the literature, and they determine whether the digitalisation can be validated. Although the relative error is higher than 5%, the tendency is the same, and it is expected that by adjusting some parameters the test results can be brought even closer to the literature values.
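The validation criterion boils down to a relative-error comparison; the leakage values below are hypothetical, chosen only to illustrate a case just above the 5% threshold:

```python
# Relative error between a simulated quantity and its literature reference.
# The leakage values are invented placeholders, not results from this work.
def relative_error_pct(simulated, reference):
    return abs(simulated - reference) / abs(reference) * 100.0

leak_sim, leak_lit = 0.93, 0.88           # kg/s, hypothetical values
err = relative_error_pct(leak_sim, leak_lit)
within_tolerance = err <= 5.0             # the 5 % validation threshold
```

A run like this one fails the strict 5% criterion while still reproducing the correct tendency, which is exactly the situation described above.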
Subjects utilizing a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral ear suffer from mismatches in stimulation timing due to different processing latencies of both devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for this auditory nerve stimulation mismatch by compensating for the device delay mismatch can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated if this fitting parameter can be readily used in clinical settings and determined the effects of familiarization to a compensated device delay mismatch over a period of 3–4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without a compensation of the device delay mismatch. The results showed that sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error was improved by 18% with this improvement not reaching statistical significance. The effects were acute and did not further improve after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
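The underlying timing problem can be illustrated by estimating the delay between two pulse trains via cross-correlation; the discrete toy signals below are not CI/HA recordings (real device latencies are measured in milliseconds), and this is not the fitting software's method:

```python
# Estimate the lag of one pulse train relative to another by maximizing
# their cross-correlation; compensating the device delay mismatch amounts
# to delaying the faster path by this lag.
def best_lag(a, b, max_lag):
    """Lag of b relative to a that maximizes their correlation."""
    def corr(lag):
        return sum(a[i] * b[i - lag] for i in range(len(a))
                   if 0 <= i - lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

pulse = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]    # stimulus as delivered by the HA
delayed = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]  # same pulse, 3 samples later (CI)
mismatch = best_lag(delayed, pulse, max_lag=5)
```

Delaying the HA path by `mismatch` samples would align the two stimulation patterns, which is the timing cue the localization improvement relies on.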
Social Media Content - Effects on Fear of Missing Out and the Self-Esteem of Young Users
(2023)
Social media marketing is a key building block of a successful content strategy. Younger target groups in particular can be found on social media, often for many hours a day. Alongside the benefits that social media offers its users, however, there are also downsides. Two negative aspects, the so-called Fear of Missing Out and reduced self-esteem, were examined in spring 2022 in an empirical survey of 1,338 people between 14 and 30 years of age. Data on general social media usage behaviour were also collected. This chapter presents the central findings derived from the study and places them in context with regard to their relevance for content marketing.
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
The objective of this project is to enhance the operations of a micro-enterprise that deals with food ingredients. The emphasis is on streamlining procedures and executing effective tactics. By utilizing tools like SWOT analysis, evaluations, and strategy development, the company's strengths, weaknesses, opportunities, and threats were assessed. The company developed business-level and functional-level strategies to expedite growth and attain objectives based on the findings. Moreover, precise suggestions were given to minimize the quantity of SKUs and optimize operations. The work highlighted the significance of developing a process map for streamlining operations, boosting efficiency, and elevating customer contentment. Through the implementation of said recommendations and strategies, the company can strategically position itself for success within the highly competitive food ingredients industry.
The progress in machine learning has led to advanced deep neural networks, which are widely used in computer vision tasks and safety-critical applications. The automotive industry in particular has experienced a significant transformation through the integration of deep learning techniques and neural networks, an integration that contributes to the realization of autonomous driving systems. Object detection is a crucial element of autonomous driving: it contributes to vehicular safety and operational efficiency by allowing vehicles to perceive and identify their surroundings, detecting objects such as pedestrians, vehicles, road signs, and obstacles. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies. These advancements enable vehicles to make real-time decisions based on their understanding of the environment, improving safety and the driving experience. However, the increasing reliance on deep neural networks for object detection and autonomous driving has drawn attention to potential vulnerabilities within these systems. Recent research has highlighted their susceptibility to adversarial attacks: well-designed inputs that exploit weaknesses in the underlying deep learning models. Successful attacks manipulate inputs to deceive the target system and can cause misclassifications and critical errors, significantly compromising the reliability and safety of autonomous vehicles. In this study, we focus on analyzing adversarial attacks on state-of-the-art object detection models and create adversarial examples to test the models' robustness.
We also check if the attacks work on a different object detection model meant for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to see how effective they are in protecting deep neural networks (DNNs) from adversarial attacks and provide a comprehensive overview of the most commonly used defense strategies against adversarial attacks, highlighting how they can be implemented practically in real-world situations.
Much of the research in the field of audio-based machine learning has focused on recreating human speech via feature extraction and imitation, known as deepfakes. The current state of affairs has prompted a look into other areas, such as the recognition of recording devices, and potentially speakers, by only analysing sound files. Segregation and feature extraction are at the core of this approach.
This research focuses on determining whether a recorded sound can reveal the recording device with which it was captured. Each specific microphone manufacturer and model, among other characteristics and imperfections, can have subtle but compounding effects on the results, whether it be differences in noise, or the recording tempo and sensitivity of the microphone while recording. By studying these slight perturbations, it was found to be possible to distinguish between microphones based on the sounds they recorded.
After the recording, pre-processing, and feature extraction phases were completed, the prepared data was fed into several different machine learning algorithms, with results ranging from 70% to 100% accuracy and showing Multi-Layer Perceptron and Logistic Regression to be the most effective for this type of task.
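As a hedged sketch of the classification step, the snippet below trains a logistic-regression classifier by gradient descent on a single invented "noise floor" feature separating two microphones; the actual study uses richer acoustic features and several algorithms:

```python
import math

# Logistic regression by stochastic gradient ascent on the log-likelihood.
# The 1-D "noise floor" features below are invented for illustration.
def train_logreg(xs, ys, lr=0.5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w += lr * (y - p) * x                     # gradient step
            b += lr * (y - p)
    return w, b

def predict(w, b, x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

feats = [-1.2, -0.9, -1.1, 0.8, 1.1, 0.95]  # mic A clusters low, mic B high
labels = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(feats, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(feats, labels)) / len(feats)
```

On linearly separable features like these the classifier reaches perfect training accuracy, mirroring the upper end of the 70-100% range reported above.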
This was further extended to distinguishing between two microphones of the same make and model. Achieving the identification of identical models of a microphone suggests that the small deviations in their manufacturing process are enough to uniquely distinguish them and potentially target the individuals using them. This, however, does not take into account any form of compression applied to the sound files, as compression may alter or degrade some or most of the distinguishing features that are necessary for this experiment.
Building on top of prior research in the area, such as that by Das et al., in which different acoustic features were explored and assessed for their ability to uniquely fingerprint smartphones, more concrete results, along with the methodology by which they were achieved, are published in this project's publicly accessible code repository.
This paper presents a system that uses a multi-stage AI analysis method for determining the condition and status of bicycle paths using machine learning methods. The approach for analyzing bicycle paths includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage of the analysis. This approach employs a low-threshold data collection method using smartphone-generated video data for image recognition, in order to automatically capture and classify surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven to be effective in image recognition tasks and are particularly well-suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and reliably classify them.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
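The convolution at the heart of such a CNN can be sketched directly; below, a fixed vertical-edge (Sobel-style) filter is slid over a tiny invented "surface patch", whereas a real network learns many such filters from data:

```python
# Plain 2-D convolution (valid padding, stride 1) of a grayscale patch with
# a single kernel; a CNN stacks many learned kernels of this kind.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

# Invented patch with a sharp left/right contrast (e.g. an asphalt edge),
# filtered with a Sobel-style vertical-edge detector.
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
features = conv2d(patch, sobel_x)
```

The uniformly large responses mark the vertical boundary; it is feature maps like this, learned rather than hand-crafted, that the surface classifier aggregates.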
Total Cost of Ownership (TCO) is a key tool for gaining a complete understanding of the costs associated with an investment, as it considers not only the initial acquisition costs but also the long-term costs related to operation, maintenance, depreciation, and other factors. In the context of the cement industry, TCO is especially important due to the complexity of the production processes and the wide variety of components and machinery involved.
For this reason, a TCO analysis for the cement industry has been conducted in this study, with the objective of showing the different components of the cost of production. This analysis gives the reader knowledge of these costs so that, in an industrial setting, informed decisions can be made on the adoption of technologies and practices that reduce costs in the long run and improve operational efficiency.
In particular, this study seeks to give visibility to technologies and practices that enable the reduction of carbon emissions in cement production, thus contributing to the sustainability of the industry and the protection of the environment. By being at the forefront of sustainability issues, the cement industry can contribute to the advancement of environmentally friendly technologies and enable the development of people and industry.
Oxyfuel technology has been selected as the carbon capture solution for the cement industry due to its practical applicability, low costs, and straightforward adaptation of existing non-capture processes. The adoption of this technology allows for a significant reduction in CO2 emissions, which is a crucial factor in achieving sustainability in the cement manufacturing process.
Carbon capture and storage technologies represent a high investment, and although they increase the cost of production, Oxyfuel technology is among the most economically viable, being the cheapest per unit of CO2 captured according to the comparison. Moreover, this price increase comes with a technical advantage, as the carbon capture efficiency of this technology reaches 90%. This level of efficiency reduces the taxes incurred for CO2 emissions, making the cement manufacturing process more sustainable.
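The efficiency argument can be made concrete with simple cost arithmetic: with 90% capture, most of the CO2 tax is avoided at the price of a per-tonne capture cost. All prices below are assumptions for illustration, not figures from the study:

```python
# Net CO2-related cost: residual emissions pay the tax, captured emissions
# pay the capture cost. All prices are illustrative assumptions.
def net_co2_cost(emissions_t, tax_per_t, capture_eff, capture_cost_per_t):
    captured = emissions_t * capture_eff
    residual = emissions_t - captured
    return residual * tax_per_t + captured * capture_cost_per_t

baseline = net_co2_cost(1000, tax_per_t=90.0, capture_eff=0.0,
                        capture_cost_per_t=0.0)       # no capture: tax only
with_oxyfuel = net_co2_cost(1000, tax_per_t=90.0, capture_eff=0.9,
                            capture_cost_per_t=40.0)  # 90 % capture
```

Whenever the assumed capture cost per tonne is below the emissions tax, high capture efficiency lowers the total cost despite the added investment, which is the economic case sketched above.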
In times of great change, cooperatively organized SMEs have the opportunity to respond to complex challenges with collaborative approaches, especially when the strength and creativity of the community are harnessed. True to the motto of cooperative pioneer Friedrich Wilhelm Raiffeisen, "What one cannot achieve alone, many can", joint entrepreneurial action creates identity and motivation, from which a self-reinforcing momentum can in turn emerge. In this contribution, Prof. Dr. Tobias Popovic and Prof. Dr. Thomas Baumgärtler show how SMEs, politics, and society benefit from this.
In this paper, the J-integral is derived for temperature-dependent elastic–plastic materials described by incremental plasticity. It is implemented using the equivalent domain integral method for the assessment of three-dimensional cracks based on the results of finite-element calculations. The J-integral considers contributions from inhomogeneous temperature fields and temperature-dependent elastic and plastic material properties, as well as from gradients in the plastic strains and the hardening variables. Two energy densities are considered, the Helmholtz free energy and the stress-working density, each providing a physical meaning of the J-integral as a fracture criterion for crack growth. Results obtained for a plate with two different crack configurations, each loaded by a cool-down thermal shock, show domain-independence of the incremental J-integral for both energy densities, even for high temperature gradients and significant temperature dependence of the yield stress and the hardening exponent in the presence of large-scale yielding. Hence, the derived J-integral is an appropriate parameter for the assessment of cracks in thermomechanically loaded components.
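For orientation, the standard isothermal, homogeneous equivalent domain integral on which such three-dimensional formulations build can be sketched as follows (the notation is assumed for illustration and not taken from the paper):

```latex
J = \int_{A} \left( \sigma_{ij}\,\frac{\partial u_j}{\partial x_1}
      - w\,\delta_{1i} \right) \frac{\partial q}{\partial x_i}\,\mathrm{d}A
```

Here $w$ is the chosen energy density (e.g. the Helmholtz free energy or the stress-working density), $q$ a weighting function equal to one at the crack front and zero on the domain boundary, and $x_1$ the local crack-growth direction. The formulation derived in the paper adds further domain terms accounting for temperature gradients, temperature-dependent material properties, and gradients of the plastic strains and hardening variables.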
Exoskeletons are increasingly being used to support industrial workers ergonomically. However, studies on the effect and influence of exoskeletons on the body are rare. This thesis therefore investigates the effect of the back exoskeleton BionicBack by the German exoskeleton manufacturer hTRIUS on spinal curvature during industrial lifting tasks. Specifically, spinal curvature during repalletising from three different lifting heights (91 cm, 59 cm, 15 cm) is examined using a marker-based 3D motion capture system. To keep the experimental setup realistic and close to everyday work, this pilot study was conducted in cooperation with the company Zehnder at its Lahr site, which provided both the participants and the experimental setup. Four healthy male participants with a mean age of 39.5 years (SD = 6.5), a mean body weight of 72.75 kg (SD = 7.1), and a mean height of 175 cm (SD = 2.6) were divided into two shifts. Measurements were taken with the participants before and after the shift on two consecutive days, with the BionicBack worn during work and measurement on one day and not on the other. During a measurement, the participants picked up a package weighing 21.1 kg three times from each lifting height from one pallet and placed it on another. The spinal curvature at the lowest point of the lifting movement was then analysed, with the total curvature in this position represented by the sum of three representative segment angles. The deviation of these total curvatures in the deepest flexion position from each participant's individual neutral standing spinal posture yields the values compared between the experimental conditions. The results show that, compared with lifting without the BionicBack, the BionicBack can reduce the deviation from the neutral position, i.e. the total curvature of the back, by up to -11.5° (median: -11.5° (SD = 5.2); mean: -8.4° (SD = 6.4)), corresponding to -30%, before the shift and by up to -5.6° (median: -5.6° (SD = 3.5); mean: -4.1° (SD = 5.4)), corresponding to -17%, after the shift. Examination of the individual segment angles shows that the reduction of the deviation from the neutral position occurs mainly in the lumbar region. Comparing spinal curvature before and after the shift without the BionicBack shows that, with the exception of the lowest lifting height, spinal curvature deviates more from the neutral position after the shift than before. The comparison with the BionicBack shows that, with the exception of the lowest lifting height, spinal curvature after the shift deviates no more, or less, from the neutral position than before the shift. Based on these results, it is assumed that the BionicBack can reduce the risk of injury by supporting a more neutral back posture. It is further assumed that muscle fatigue during a work shift influences spinal curvature during lifting, and that this influence can be reduced by the BionicBack. However, the limitations of this pilot study must not be ignored, be it the small number of participants, which permits neither generalisation nor effective statistical analysis, or systematic errors that may arise from model simplification and the methodology. Further investigations are required to validate the results. This thesis is intended to form the basis for further studies with a refined methodology and a larger number of participants.
A balcony photovoltaic (PV) system, also known as a micro-PV system, is a small PV system consisting of one or two solar modules with an output of 100–600 Wp and a corresponding inverter that uses standard plugs to feed the renewable energy into the house grid. In the present study we demonstrate the integration of a commercial lithium-ion battery into a commercial micro-PV system. We first present simulations over one year with one-second time resolution, which we use to assess the influence of battery and PV size on self-consumption, self-sufficiency, and the annual cost savings. We then develop and operate experimental setups using two different architectures for integrating the battery into the micro-PV system. In the passive hybrid architecture, the battery is connected electrically in parallel to the PV module. In the active hybrid architecture, an additional DC-DC converter is used. Both architectures include measures to prevent maximum power point tracking of the battery by the module inverter. The resulting PV/battery/inverter systems with 300 Wp PV and a 555 Wh battery were tested in continuous operation over three days under real solar irradiance conditions. Both architectures maintained stable operation and demonstrated the shift of PV energy from the day into the night. System efficiencies comparable to those of a reference system without a battery were observed. This study therefore demonstrates the feasibility of both active and passive coupling architectures.
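The influence of battery size on self-consumption and self-sufficiency assessed in the simulations can be illustrated with a minimal greedy dispatch model. This is an illustrative sketch, not the simulation code used in the study; the function name, time step, and charging efficiency are assumptions.

```python
# Illustrative greedy dispatch model (not the study's simulation code):
# pv and load are power series in W, capacity_wh is the battery size in Wh.

def simulate(pv, load, capacity_wh, dt_h=1.0, eff=0.95):
    """Return (self_consumption, self_sufficiency) for a simple PV/battery system."""
    soc = 0.0            # battery state of charge in Wh
    used_pv = 0.0        # PV energy consumed on site (directly or via battery)
    covered = 0.0        # load energy covered by PV or battery
    for p, l in zip(pv, load):
        direct = min(p, l)                 # PV serving the load directly
        surplus = (p - direct) * dt_h      # excess PV energy this step
        deficit = (l - direct) * dt_h      # remaining load energy this step
        charge = min(surplus * eff, capacity_wh - soc)   # store what fits
        soc += charge
        discharge = min(deficit, soc)                    # cover load at night
        soc -= discharge
        used_pv += direct * dt_h + charge / eff
        covered += direct * dt_h + discharge
    return used_pv / (sum(pv) * dt_h), covered / (sum(load) * dt_h)
```

Sweeping `capacity_wh` and scaling the PV series then reproduces, qualitatively, the size trade-off that the study quantifies at one-second resolution.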
Images and films have become indispensable in modern life. An online presence in particular, be it a company website, a personal blog, or social media profiles, no longer works without the right visuals. Those with the necessary skills and equipment create the content they need themselves. Those without that option must rely on the works of others. However, simply downloading them from the Internet and using them is not permitted, since in most cases they are protected by copyright. Large companies and individuals with an appropriate budget therefore book photographers for such purposes or buy the content they want on popular stock platforms such as Shutterstock or Getty Images. This thesis is aimed at those who cannot: (working) students in media, design, or online marketing, as well as smaller companies and individuals who need images and films for private, educational, editorial, or commercial purposes.
The aim of this thesis is to present the many ways of finding images and films on the Internet for free and legally safe use. Various types of commercial and non-commercial use are taken into account so that as many readers as possible can benefit from this thesis. To this end, the legal situation in Germany is first clarified; then 7 platforms and 2 search engines for images and 7 platforms for films are examined and compared. For each platform, its typical features are presented and their relevance for research is shown. This is intended to serve as a guide to an efficient research workflow on the respective platform. Finally, the image and video selection of each platform is illustrated using sample search terms.
To date, many experiments have been performed to study how the internal geometry of annular liquid seals can reduce internal leakage and increase pump efficiency. These experiments can be time-consuming and expensive, as all rotordynamic coefficients must be determined in each case.
Accurate simulation methods for calculating the rotordynamic coefficients of annular seals are still rare. Therefore, new numerical methods must be designed and validated for annular seals.
The present study aims to contribute to this effort by providing a summary of the available test-rig and seal dimensions and of the experimental results obtained in the following experiments:
− Kaneko, S. et al., Experimental Study on Static and Dynamic Characteristics of Liquid Annular Convergent-Tapered Seals with Honeycomb Roughness Pattern (2003) [1]
− J. Alex Moreland, Influence of Pre-Swirl and Eccentricity in Smooth Stator/Grooved Rotor Liquid Annular Seals, Static and Rotordynamic Characteristics (2016) [2]
A 3D CAD model of the test rig used in J. Alex Moreland's experiment has been created with the Siemens NX software. The following annular liquid seals have also been modelled in 3D, together with their fluid volumes:
− Smooth Annular Liquid Seal (SS/GR) (J. Alex Moreland experiment)
− Grooved Annular Liquid Seal (GS/SR)
− Round-Hole Pattern Annular Liquid Seal (Hd = 2 mm) (GS/SR)
− Straight Honeycomb Annular Liquid Seal (GS/SR)
− Convergent Honeycomb Annular Liquid Seal (No. 3) (GS/SR)
− Smooth Annular Liquid Seal (SS/SR) (S. Kaneko experiment)
For the seals used in S. Kaneko's experiments, the test rig has been adapted to each seal by defining interpart expressions that can easily be modified.
Afterwards, a CFD simulation of the Smooth Annular Liquid Seal has been performed using the Ansys CFX software. For this first approximation, the fluid volume geometry has been simplified. Results have been compared for an eccentricity 𝜀0 = 0.00 over the following ranges of rotor speed and pressure difference:
− Δ𝑃= 2.07, 4.14, 6.21, and 8.27 bar,
− 𝜔= 2, 4, 6 and 8 krpm.
Although the results obtained follow the same trend as those reported in the literature, they cannot be validated, as the error is above 5%. It is also observed that the relative error decreases considerably as the pressure drop increases.
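The validation criterion applied here, a relative error of at most 5% against the literature values, can be written compactly as follows (illustrative helper functions, not part of the study's toolchain):

```python
def relative_error(simulated, reference):
    """Relative deviation of a simulated value from the literature value."""
    return abs(simulated - reference) / abs(reference)

def validated(simulated, reference, tol=0.05):
    """Validation criterion: relative error of at most 5%."""
    return relative_error(simulated, reference) <= tol
```

Applying `validated` to each (rotor speed, pressure difference) operating point yields the pass/fail pattern described above.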
The heel-rise test (HRT) is used in clinical and therapeutic settings to assess the functional capacity of the calf muscles. A reference value of 25 repetitions helps classify the musculature as normal or abnormal. However, this value is based on an older study that is no longer up to date. It is also questionable whether the absolute value achieved in an HRT can directly indicate the functional capacity of the plantar flexors.
The aim of this thesis is therefore to compare the HRT with a maximum strength measurement on an isokinetic dynamometer and to test for a possible relationship. This leads to the following research question: "Do the results of the HRT correlate positively with a maximum strength measurement on the isokinetic dynamometer?"
To answer the research question, a quantitative study of bilateral calf strength was conducted with 20 young, healthy participants. The leg with which the strength testing began was randomised. Since a linear relationship between the HRT results and the maximum strength measurement on the isokinetic dynamometer is assumed, it is tested using a Bravais-Pearson correlation analysis.
The comparison of the strength measurements shows that the HRT results correlate moderately to highly positively with the maximum strength values on the isokinetic dynamometer. Leg dominance and the testing order of the legs have no major influence on the results. However, when men and women are analysed separately, the positive correlation disappears, and little to no relationship between the HRT and maximum isokinetic strength can be found. It is also evident that men achieved higher values in all strength tests. Since the sample comprises only young, healthy, and active people, no conclusions can be drawn about patients, older people, or the influence of performance level.
The positive correlation of the HRT with the gold standard of strength diagnostics, isokinetics, appears to refute the criticism of the HRT. However, the results must be viewed with caution, since the positive correlation disappears when sex is taken into account. In view of these limitations, the result of this thesis should nevertheless encourage the continued use of the HRT for strength diagnostics of the calf muscles. The limited sample size, the lack of standardisation in HRT execution, and the many configuration options in isokinetics make it difficult to transfer the results of this thesis to other groups of people or measurement methods. Nevertheless, the study provides a first insight, supports the validity of the HRT, and thus helps improve its significance and quality for strength diagnostics.
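The Bravais-Pearson correlation coefficient used in the analysis can be computed directly from its definition. This is a generic sketch; the study itself presumably used standard statistics software.

```python
from math import sqrt

def pearson_r(x, y):
    """Bravais-Pearson correlation coefficient of two equally long samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # covariance term and the two standard-deviation terms (unnormalised)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 indicate the moderate to high positive correlation reported between HRT repetitions and maximum isokinetic strength.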
For the treatment of bone defects, biodegradable, compression-resistant biomaterials are needed as replacements that degrade as the bone regenerates. The problem with existing materials has been either their insufficient mechanical strength or the excessive difference between their elastic modulus and that of bone, leading to stress shielding and eventual failure. In this study, the compressive strength of CPC ceramics (with a layer thickness of more than 12 layers) was compared with sintered β-TCP ceramics. It was assumed that as the number of layers increased, the mechanical strength of 3D-printed scaffolds would increase toward the value of sintered ceramics. In addition, the influence of the needle inner diameter on the mechanical strength was investigated. Circular scaffolds with 20, 25, 30, and 45 layers were 3D printed using a 3D bioplotter, solidified in a water-saturated atmosphere for 3 days, and then tested for compressive strength together with a β-TCP sintered ceramic using a Zwick universal testing machine. The 3D-printed scaffolds had a compressive strength of 41.56 ± 7.12 MPa, which was significantly higher than that of the sintered ceramic (24.16 ± 4.44 MPa). The 3D-printed scaffolds with round geometry reached or exceeded the upper limit of the compressive strength of cancellous bone toward substantia compacta. In addition, CPC scaffolds exhibited more bone-like compressibility than the comparable β-TCP sintered ceramic, demonstrating that the mechanical properties of CPC scaffolds are more similar to bone than those of sintered β-TCP ceramics.
Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained much popularity lately. The reduction of the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications like leakage detection in silicone breast implants require low-power-consuming small-size electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) is presented. Owing to the low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figure-of-merits compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693/NFC (near field communication) 5 standard for radio frequency identification (RFID), corresponding to the frequency range of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and a few other external components, such as an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block of GAPSIC includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute the GAPSIC internal memory. For the fabrication of GAPSIC, a one-poly, six-metal 0.18 µm CMOS process is used.
The design of GAPSIC comprises two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the remaining functional blocks were integrated with the blocks of the RFID/NFC frontend chip for the final integration of GAPSIC. To reduce the power consumption, conventional low-power design techniques, such as multiple power supplies and the operation of complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region, were applied extensively, alongside further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method where three stacked transistor pairs shunt the extra voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units provide the optimal choice in terms of voltage drop and the area required.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes the circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the smallest modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and a power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figure-of-merit compared to the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip, together with temperature and pressure sensors, demonstrates the full passive operational capability of the frontend chip. An NFC reader device using custom-built Android-based application software reads out the sensor data from the passive tag.
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs with a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, where a firmware code removes the offset voltage from the measured sensor signal. A 12-bit successive approximation register (SAR) type analog-to-digital-converter (ADC) based on a charge redistribution architecture converts the measured sensor data to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and 12-bit SAR logic.
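The successive-approximation principle of the 12-bit SAR ADC can be illustrated with a small behavioral model. This is a generic sketch of the SAR binary search, not the GAPSIC charge-redistribution circuit; the reference voltage is an assumption.

```python
def sar_adc(sample, vref=3.3, bits=12):
    """Behavioral model of a successive-approximation ADC: a binary search
    from the MSB down, as performed by the SAR logic (generic sketch,
    not the GAPSIC charge-redistribution implementation)."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)          # tentatively set the next bit
        # comparator decision: keep the bit if the DAC level stays below the sample
        if trial * vref / (1 << bits) <= sample:
            code = trial
    return code
```

Each of the 12 iterations corresponds to one comparator decision driven by the SAR logic, so conversion takes a fixed number of clock cycles regardless of the input value.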
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core compared with an air core on the antenna-coupling factor was investigated. With the ferrite core, the coupling factor increased by a factor of four.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
Team description papers of magmaOffenburg are incremental in the sense that each year we address a different aspect of our team and the tools around it. In this year's team description paper we focus on the architecture of the software, a main factor in keeping the code maintainable even after 15 years of development. We also describe how we ensure that the code follows this architecture.
The invention relates to a device for the biological methanation of CO and/or CO2 by methanogenic microorganisms through the conversion of H2 and CO and/or CO2, comprising: a gassing column and a degassing column, each with a bottom side and a top side opposite the bottom side; a medium containing methanogenic microorganisms provided in the gassing column and the degassing column; a feed device for feeding a gas containing H2 into the medium of the gassing column; a discharge device for discharging a gas containing CH4 from the degassing column; a connecting line between the gassing column and the degassing column in the region of the bottom sides; a pump for transferring medium via the connecting line from the gassing column to the degassing column; and a return line between the gassing column and the degassing column in the region of the top sides for returning medium from the degassing column to the gassing column. The invention also relates to a method for the biological methanation of CO and/or CO2 in a device by methanogenic microorganisms as part of a medium provided in the device, wherein the medium is circulated through a gassing column and a degassing column, the columns being connected to each other via a connecting line in the region of their bottom sides and via a return line in the region of the top sides opposite the bottom sides, wherein the medium moves downwards in the gassing column and upwards in the degassing column, and wherein a gas containing H2 is fed into the medium in the gassing column.
The invention relates to a device for the biological methanation of CO and/or CO2 by methanogenic microorganisms through the conversion of H2 and CO and/or CO2, comprising: a gassing column and a degassing column, each with a bottom side and a top side opposite the bottom side; a medium containing methanogenic microorganisms provided in the gassing column and the degassing column; a feed device for feeding a gas containing H2 into the medium of the gassing column, the feed device being arranged in the region of the bottom side of the gassing column; a discharge device for discharging a gas containing CH4 from the degassing column; a connecting line between the gassing column and the degassing column in the region of the bottom sides; a pump for transferring medium via the connecting line from the gassing column to the degassing column; and a return line between the gassing column and the degassing column in the region of the top sides for returning medium from the degassing column to the gassing column. The invention also relates to a method for the biological methanation of CO and/or CO2 in a device by methanogenic microorganisms as part of a medium provided in the device, wherein the medium is circulated through a gassing column and a degassing column, the columns being connected to each other via a connecting line in the region of their bottom sides and via a return line in the region of the top sides opposite the bottom sides, wherein the medium moves downwards in the gassing column and upwards in the degassing column, and wherein a gas containing H2 is fed into the medium in the region of the bottom side of the gassing column.
Landing heel first has been associated with elevated external knee abduction moments (KAM), thereby potentially increasing the risk of sustaining a non-contact ACL injury. Apart from the foot strike angle, the knee valgus angle (VAL) and the vertical center-of-mass velocity at initial ground contact (IC) have been associated with increased KAM in females across different sidestep cuts. While real-time biofeedback training has proven effective for gait retraining [4], the highly dynamic, non-cyclical nature of cutting maneuvers makes real-time feedback unsuitable and alternative approaches necessary. This study aimed to assess the efficacy of immediate software-aided feedback on cutting technique in reducing KAM during handball-specific cutting maneuvers.
The aim of this bachelor thesis is the implementation and improvement of a non-model-based, pixel-wise calibration of industrial cameras in MATLAB. For this purpose, a homogeneous brightness regulation between monitor and camera is developed using edge detection, adjustment of the exposure time, and regulation of the monitor grey values, in order to compensate for systematic camera errors such as vignetting. The implementation is validated in several experiments. Within the scope of the thesis, it is found that the homogeneous brightness regulation does not substantially change the results when the camera is positioned orthogonally to the monitor. Above all, however, the calibration becomes more robust at larger angles. In addition to the implementation, a user interface is integrated, which is also intended to prevent user errors with respect to the linear guide rail.
Cloud computing is a combination of technologies, including grid computing and distributed computing, that use the Internet as a network for service delivery. Organizations can select the price and service models that best accommodate their demands and financial restrictions. Cloud service providers choose the pricing model for their cloud services, taking the size, usage, user, infrastructure, and service size into account. Thus, cloud computing’s economic and business advantages are driving firms to shift more applications to the cloud, boosting future development. It enlarges the possibilities of current IT systems.
Over the past several years, the "cloud computing" industry has exploded in popularity, going from a promising business concept to one of the fastest expanding areas of the IT sector. Most enterprises are hosting or installing web services in a cloud architecture for management simplicity and improved availability. Virtual environments are applied to accomplish multi-tenancy in the cloud. A vulnerability in a cloud computing environment poses a direct threat to the users' privacy and security. In our digital age, the user has many identities. At all levels, access rights and digital identities must be regulated and controlled.
Identity and access management (IAM) is the process of managing identities and regulating access privileges. It is considered the front line of IT security. The goal of identity and access management systems is to protect an organization's assets by limiting access to those who need it, and only in the appropriate cases. It is required for all businesses with thousands of users and is best practice for ensuring user access control. It identifies, authenticates, and authorizes people to access an organization's resources, which in turn enhances access management efficiency. Cloud-based web services have security issues in areas such as authentication, authorization, data protection, and accountability; all of these fall under identity and access management.
The implementation of identity and access management (IAM) is essential for any business. It is becoming more and more business-centric, so more than technical know-how is needed to succeed. Organizations that have developed sophisticated IAM capabilities can save money on identity management and, more importantly, become much nimbler in supporting new business initiatives. We used these features of identity and access management to validate the robustness of the cloud computing environment in comparison with traditional identity and access management.
In 2015, Google engineer Alexander Mordvintsev presented DeepDream, a technique for visualising the feature-analysis capabilities of deep neural networks trained on image classification tasks. For a brief moment, this technique enjoyed popularity among scientists, artists, and the general public because of its ability to create seemingly hallucinatory synthetic images. Soon after, however, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interacting with these models shifted away from direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the model's internal workings. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, the choice of audio datasets, and the method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio; these models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
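The core DeepDream mechanism, applied to a time-domain audio buffer as in the paper's setting, is gradient ascent on the input to amplify a layer's activations. The following is a minimal NumPy sketch with an untrained random linear "layer" standing in for a trained network; the layer sizes, step size, and iteration count are illustrative assumptions, not the paper's configuration:

```python
# DeepDream-style gradient ascent on a time-domain signal: repeatedly
# adjust the input so that the "feature detectors" (rows of W) fire harder.
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 16                                  # signal length, number of features
W = rng.standard_normal((K, N)) / np.sqrt(N)    # stand-in for a trained layer
x = rng.standard_normal(N) * 0.01               # the audio buffer being "dreamed" on

def activation_energy(signal):
    """Mean squared activation of the layer -- the quantity being maximised."""
    y = W @ signal
    return float(np.mean(y ** 2)), y

energy_before, _ = activation_energy(x)
for _ in range(100):
    _, y = activation_energy(x)
    grad = (2.0 / K) * W.T @ y                  # analytic gradient of mean(y^2) w.r.t. x
    x += 0.1 * grad                             # gradient *ascent* step
energy_after, _ = activation_energy(x)
```

A real setup would use a trained network and automatic differentiation, and would normalise or clip the signal between steps to keep it in the audible range.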
This bachelor's thesis consists of a screenplay: the pilot episode of a self-conceived series.
Abstract:
Anna, Tess, Felix, and Vincent are in their twenties and all start their first jobs at the same company at the same time. Besides the uncertainties and problems that come with a new job, the four must also deal with their private conflicts.
Virality on TikTok
(2023)
The social media platform TikTok has enjoyed an ever-growing community, at the latest since the coronavirus pandemic. The app now has more than 20 million users in Germany alone, and viral videos are springing up everywhere. This master's thesis examines which factors underlie virality and whether virality can be influenced significantly. It does so by means of theoretical foundations, a quantitative user survey, and expert interviews with successful German creators. Finally, videos for TikTok are conceived and analysed.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained, single-directory algorithms. Thanks to its fast JAX-based implementations, RL-X reaches speedups of up to 4.5x compared to well-known frameworks such as Stable-Baselines3.
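The "self-contained, single-directory algorithm" style mentioned above means each algorithm reads top to bottom as one short file. This is not RL-X code, but a hedged sketch of that style using tabular Q-learning on a toy two-state MDP; the MDP and all hyperparameters are illustrative assumptions:

```python
# A complete, self-contained RL algorithm in one file: tabular Q-learning
# with epsilon-greedy exploration on a tiny hand-built MDP.
import random

random.seed(0)
N_STATES, N_ACTIONS = 2, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy MDP: action 1 in state 0 yields reward 1; actions toggle the state."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return (state + action) % N_STATES, reward

state, alpha, gamma, eps = 0, 0.1, 0.9, 0.1
for _ in range(2000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update toward the bootstrapped target.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state
```

RL-X applies the same single-file philosophy to deep RL algorithms, where the JAX-compiled update steps account for the reported speedups.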
Today, the use of renewable energy sources for heating and cooling buildings offers the best opportunity to avoid fossil fuels and the associated climate-damaging emissions. Unlike fossil fuels, however, renewable energy sources such as solar radiation are not available at the push of a button; they occur uncontrollably, depending on the weather, the building's location, and the time of year. Although their use is free of charge, complex converters and systems usually have to be installed to exploit them, and these must be carefully planned and operated to avoid unnecessary costs and to generate the maximum possible yield. Regenerative energy systems are usually integrated into existing conventional systems. When designing the control and regulation equipment, it is crucial to automate the systems in such a way that renewable energy sources are used first and the share of fossil energy sources is minimised.
Automation devices or automation stations (AS) take on the task of controlling, regulating, monitoring and, if necessary, optimising building systems and their components (e.g. pumps, compressors, fans) based on recorded process variables. A wide range of control and regulation methods is used for this purpose, from simple on/off controllers through classic PID controllers to higher-order controllers such as adaptive, model-predictive, or knowledge-based controllers.
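The classic PID controller mentioned above can be sketched in a few lines. The gains, the setpoint, and the first-order heating-plant model below are illustrative assumptions for demonstration, not values from the chapter:

```python
# Discrete-time PID controller driving a simple first-order heating process
# toward a setpoint temperature. The plant loses heat to a 20 degC ambient.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                     # I term accumulates
        derivative = (error - self.prev_error) / self.dt     # D term damps
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.05, kd=0.05, dt=1.0)
temp, dt, setpoint = 20.0, 1.0, 55.0
for _ in range(400):
    power = max(0.0, pid.update(setpoint, temp))   # heater cannot cool
    temp += dt * (0.05 * power - 0.02 * (temp - 20.0))
```

In a real automation station the same loop runs cyclically against measured process variables, and the gains are tuned to the dynamics of the particular plant.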
Starting with a brief introduction to automation technology (Sect. 7.1), the chapter describes the structure and functionality of common compact controllers, using solar thermal systems and heat pump systems as application examples (Sect. 7.2). Finally, the integration of system automation into a higher-level building automation system and into the building management system is described using specific application examples (Sect. 7.3).
This central book chapter then details the automation of solar domestic hot water systems, solar-assisted building heating, rooms, solar cooling systems, heat pump heating systems, geothermal systems, and thermally activated building component systems. Hydraulic and automation diagrams are used to explain how the automation of these systems works. A detailed insight into the engineering and technical interrelationships involved in the use of these systems, together with the use of simulation tools, enables effective control and regulation. System characteristic curves and systematic procedures support the automation engineer in these tasks.
Renewable energy sources such as solar radiation, geothermal heat, and ambient heat are available for energy conversion. With the help of special converters, including solar collectors, geothermal probes, and chillers, these resources can be put to use: they collect the energy and raise it to a temperature level high enough for heating purposes. Among refrigeration machines, a distinction is made between electrically and thermally driven machines.
This guide was produced as part of the cross-cutting scientific project »LowEx-Bestand Analyse« within the thematic project network »LowEx-Konzepte für die Wärmeversorgung von Mehrfamilien-Bestandsgebäuden (LowEx-Bestand)«. In this network, the three research institutes Fraunhofer ISE, KIT, and the University of Freiburg (INATECH) collaborated with manufacturers of heating and ventilation technology and with companies from the housing industry. Together, they developed, analysed, and demonstrated solutions aimed at the efficient use of heat pumps, heat transfer systems, and ventilation systems in the energy-oriented modernisation of existing multi-family buildings.