The objective of this project is to enhance the operations of a micro-enterprise that deals in food ingredients, with an emphasis on streamlining procedures and executing effective strategies. Using tools such as SWOT analysis, evaluations, and strategy development, the company's strengths, weaknesses, opportunities, and threats were assessed. Based on the findings, the company developed business-level and functional-level strategies to accelerate growth and attain its objectives. In addition, specific recommendations were made to reduce the number of SKUs and optimize operations. The work highlighted the significance of developing a process map for streamlining operations, boosting efficiency, and improving customer satisfaction. By implementing these recommendations and strategies, the company can position itself for success in the highly competitive food ingredients industry.
This thesis explores the feasibility and optimization of a solar-thermal sorption system designed mainly to provide cooling but also capable of heating. Through the development of a black-box model in Python, the study delves into the system's performance under various operation modes. Simulation results reveal the effectiveness of adaptive control strategies and pre-heating stages in optimizing efficiency, particularly in cooling modes. In heating assessments, superior performance is observed when utilizing the outdoor coil as the heat source for the heat pump. Challenges related to operational temperature bands are addressed by proposing parallel connections of the heat pump and outdoor coil to enhance performance. Future research directions include refining the regression models and incorporating real-time measurement data for improved accuracy, as well as extending the simulation duration for comprehensive evaluations. This study contributes valuable insights into the system's capabilities and applications, laying the groundwork for advancements in heat-driven integrated sustainable energy systems.
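The black-box model mentioned in the abstract above is regression-based; as a minimal illustration of the idea (not the thesis' actual model), a least-squares surrogate mapping one operating variable to one output could look like this. The variable roles in the example call, driving temperature versus cooling capacity, are assumptions for illustration:

```python
def fit_linear(x, y):
    """Ordinary least squares for y ~ a*x + b, a minimal black-box surrogate.

    Hypothetical example: x as driving temperature (degC), y as cooling
    capacity (kW); neither pairing is taken from the thesis itself."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# fit on synthetic operating points
a, b = fit_linear([60.0, 70.0, 80.0], [4.0, 5.0, 6.0])
```

A real system model would use multivariate regression over measured operating points, but the fit-then-predict workflow is the same.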
Online grocery shopping (OGS) has risen significantly in recent years, driven by accelerated retail digitization and reshaped consumer shopping behaviors. Despite this trend, the German online grocery market lags behind its international counterparts. Notably, with almost half of the German population aged over 50 and the 55–64 age group emerging as the largest user segment in e-commerce, the over-50 demographic presents an attractive yet relatively overlooked audience for the expansion of the online grocery market. However, research on OGS behavior among German over-50s is scarce. This study addresses this gap, empirically investigating OGS adoption factors within this demographic through an online survey with 179 respondents. Our findings reveal that over a third of the over-50 demographic has embraced OGS, indicating a growing receptivity for OGS among the over-50s. Notably, home delivery, product variety, convenience, and curiosity emerged as primary drivers of OGS adoption in this demographic. Surprisingly, most adopters have not increased their online grocery orders since 2020, and a considerable proportion have even stopped buying groceries online again. For potential OGS adopters, regional product availability emerged as a motivator, signaling substantial growth potential and providing online grocers with strategic opportunities to target this demographic. In light of our research, we offer practical suggestions to online grocery retailers, aiming to overcome barriers and capitalize on the key drivers identified in our study for sustained growth in the over-50 market segment.
In a dynamic global landscape, the role of UK Export Finance (UKEF) and other export credit agencies (ECAs) has never been more important. Access to finance is critical for exporters as it enables them to invest in production, expand operations, manage cash flow and mitigate trade risks. However, businesses face challenges in securing export finance and trade credit insurance as geopolitical and trade megatrends lead to increased political, market and credit risks. Drawing on qualitative data from 35 semi-structured interviews and expert discussions and based on the Futures Triangle analytical framework, this white paper analyses the geopolitical and trade megatrends that UKEF and other ECAs will face in the coming years. It presents novel findings about the implications for ECA mandates, strategies, products and operations: The evolution of mandates towards a “growth promoter”, the need to further scale up operations, the use of big data and artificial intelligence for risk analysis and forecasting, and the need to balance multiple and conflicting priorities, including export growth, support for small and medium-sized exporters, inclusive trade, climate action, and positive impact in developing markets.
The rising reliance on online applications for a range of purposes, including e-commerce, social networking, and commercial activities, requires strong security measures to protect sensitive data and ensure continuous service. There have been multiple incidents of attackers acquiring access to information, holding providers hostage with distributed denial-of-service attacks, or entering a company's network by compromising the application.
The Bundesamt für Sicherheit in der Informationstechnik (BSI, Germany's Federal Office for Information Security) has published a comprehensive set of information security principles and standards that can serve as a solid basis for the development of a secure web application.
The purpose of this thesis is to design and build a secure web application that adheres to the requirements established in the BSI guideline, in order to address the growing concerns regarding web application security. We also evaluate the efficacy of the recommendations by conducting security tests on the prototype application and determining whether the vulnerabilities associated with an insecure web application have been mitigated.
The research employed HPTLC Pro System and other HPTLC instruments from CAMAG® to conduct various laboratory tests, aiming to compile a database for subsequent analyses. Utilizing MATLAB, distinct codes were developed to reveal patterns within analyzed biomasses and pyrolysis oils (sewage sludge, fermentation residue, paper sludge, and wood). Through meticulous visual and numerical analysis, shared characteristics among different biomasses and their respective pyrolysis oils were revealed, showcasing close similarities within each category. Notably, minimal disparity was observed in fermentation residue and wood biomasses with a similarity coefficient of 0.22. Similarly, for pyrolysis oils, the minimal disparity was found in fermentation residues 1 and 3, with a disparity coefficient of 1.41. Despite higher disparity coefficients in certain results, specific biomasses and pyrolysis oils, such as fermentation residue and sewage sludge, exhibited close similarities, with disparity coefficients of 0.18 and 0.55, respectively. The database, derived from triplicate experimentation, now serves as a valuable resource for rapid analysis of newly acquired raw materials. Additionally, the utility of HPTLC PRO as an investigation tool, enabling simultaneous analysis of up to five samples, was emphasized, although areas for improvement in derivatization methods were identified.
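The abstract above compares biomasses and pyrolysis oils via similarity and disparity coefficients computed in MATLAB; the exact metric is not given there. As a hedged sketch of one plausible choice, written in Python rather than MATLAB, a disparity could be the Euclidean distance between peak-normalised HPTLC intensity profiles:

```python
def disparity(profile_a, profile_b):
    # One plausible disparity measure between two HPTLC intensity profiles:
    # normalise each profile to its peak, then take the Euclidean distance.
    # This is an assumed stand-in; the thesis' actual MATLAB metric may differ.
    def norm(p):
        peak = max(p)
        return [v / peak for v in p]
    return sum((x - y) ** 2
               for x, y in zip(norm(profile_a), norm(profile_b))) ** 0.5

# two profiles with identical shape but different absolute intensity
same_shape = disparity([1.0, 2.0, 4.0], [2.0, 4.0, 8.0])
```

Because of the normalisation, profiles that differ only in overall intensity score a disparity of zero, which matches the idea of comparing the pattern rather than the amount of material.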
Though the basic concept of a ledger that anyone can view and verify has been around for quite some time, today's blockchains bring much more to the table, including ways to incentivize users. The coins awarded to the miner or validator were the first such incentive, ensuring they fulfilled their duties. This thesis draws inspiration from other peer efforts and uses the same incentive to achieve certain goals, primarily one where users are incentivized to discuss their opinions and find scientific or logical backing for their standpoints. While traditional chains form a consensus on a version of financial "truth", the same can be applied to ideological truths. To achieve this, this work explores a modified, or scaled, proof-of-stake consensus mechanism: a Reputation-Scaled Proof of Stake. This reputation can be built over time by consistently voting for the winning side or by sticking strongly to one's beliefs. The thesis hopes to bridge a gap in current consensus algorithms and incentivize critical reasoning.
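The core mechanism can be illustrated with a toy validator-selection routine. The weighting `stake * reputation` below is an assumed formula for illustration only, not the thesis' actual consensus rule:

```python
import random

def select_validator(validators, rng=random):
    # Toy reputation-scaled proof of stake: each validator's chance of being
    # chosen is proportional to stake * reputation (an assumed weighting).
    weights = [v["stake"] * v["reputation"] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

validators = [
    {"name": "a", "stake": 100, "reputation": 0.0},  # reputation lost entirely
    {"name": "b", "stake": 10,  "reputation": 0.9},
]
chosen = select_validator(validators)
```

The example shows the intended effect: a large stake alone is not enough, since a validator whose reputation has dropped to zero can never be selected.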
In a randomized controlled cross-over study, ten male runners (26.7 ± 4.9 years; recent 5-km time: 18:37 ± 1:07 min:s) performed an incremental treadmill test (ITT) and a 3-km time trial (3-km TT) on a treadmill while wearing either carbon fiber insoles with downwards curvature or insoles made of butyl rubber (control condition) in light road racing shoes (Saucony Fastwitch 9). Oxygen uptake, respiratory exchange ratio, heart rate, blood lactate concentration, stride frequency, stride length and time to exhaustion were assessed during the ITT. After the ITT, all runners rated their perceived exertion, perceived shoe comfort and perceived shoe performance. Running time, heart rate, blood lactate levels, stride frequency and stride length were recorded during, and shoe comfort and shoe performance after, the 3-km TT. All parameters obtained during or after the ITT did not differ between the two conditions [range: p = 0.188 to 0.948 (alpha value: 0.05); Cohen's d = 0.021 to 0.479], except for shoe comfort, which was rated better for the control insoles (p = 0.001; d = −1.646). All parameters during and after the 3-km TT likewise showed no differences (p = 0.200 to 1.000; d = 0.000 to 0.501) between the conditions, except for shoe comfort, which again scored better for the control insoles (p = 0.017; d = −0.919). Running with carbon fiber insoles with downwards curvature did not change running performance, any submaximal or maximal physiological or biomechanical parameter, or perceived exertion compared with the control condition, while shoe comfort was impaired. Wearing carbon fiber insoles with downwards curvature during treadmill running is therefore not beneficial compared with running with control insoles.
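The effect sizes reported above are Cohen's d values; for reference, the standard pooled-standard-deviation computation looks like this (the sample numbers in the example call are made up, not study data):

```python
from statistics import mean, stdev

def cohens_d(a, b):
    # Cohen's d: mean difference divided by the pooled standard deviation
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

d = cohens_d([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])  # illustrative samples only
```

A negative d, as for the comfort ratings above, simply indicates the first condition scored lower than the second.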
Garbage in, Garbage out: How does ambiguity in data affect state-of-the-art pedestrian detection?
(2024)
This thesis investigates the critical role of data quality in computer vision, particularly in pedestrian detection. The proliferation of deep learning methods has emphasised the importance of large datasets for model training, while the quality of these datasets is equally crucial. Ambiguity in annotations, arising from factors like mislabelling, inaccurate bounding box geometry and annotator disagreements, poses significant challenges to the reliability and robustness of pedestrian detection models and their evaluation. This work explores the effects of ambiguous data on model performance, focusing on identifying and separating ambiguous instances by employing an ambiguity measure built on annotator estimations of object visibility and identity. Through systematic experimentation and analysis, trade-offs emerged between data cleanliness and representativeness, and between noise removal and retention of valuable data, elucidating their impact on performance metrics like the log average miss rate, recall and precision. Furthermore, a strong correlation between ambiguity and occlusion was discovered, with higher ambiguity corresponding to greater occlusion prevalence. The EuroCity Persons dataset served as the primary dataset, revealing a significant proportion of ambiguous instances: approximately 8.6% ambiguity in the training set and 7.3% in the validation set. Results demonstrated that removing ambiguous data improves the log average miss rate, particularly by reducing false positive detections. Augmenting the training data with samples from neighbouring classes enhanced recall but diminished precision. Correction of erroneous false positive and false negative labels significantly impacts model evaluation results, as evidenced by shifts in the ECP leaderboard rankings.
By systematically addressing ambiguity, this thesis lays the foundation for enhancing the reliability of computer vision systems in real-world applications, motivating the prioritisation of developing robust strategies to identify, quantify and address ambiguity.
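The headline metric used above, the log average miss rate, is the Caltech/ECP-style geometric mean of miss rates sampled at nine log-spaced false-positives-per-image (FPPI) points. A reference sketch, assuming the operating points are sorted by ascending FPPI:

```python
import math

def log_average_miss_rate(fppi, miss_rate):
    """Log-average miss rate (LAMR), sketched for reference.

    Samples the miss rate at nine FPPI points log-spaced in [1e-2, 1e0] and
    returns their geometric mean; `fppi` is assumed sorted ascending, and a
    miss rate of 1.0 is used where no operating point exists yet."""
    refs = [10 ** (-2 + 0.25 * k) for k in range(9)]
    samples = []
    for r in refs:
        below = [m for f, m in zip(fppi, miss_rate) if f <= r]
        samples.append(below[-1] if below else 1.0)
    return math.exp(sum(math.log(max(s, 1e-10)) for s in samples)
                    / len(samples))
```

Because it is a geometric mean over the miss-rate curve, removing false positives shifts the curve left and lowers the LAMR, which is the improvement the thesis reports after filtering ambiguous instances.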
Scientists' interest in studying motion sequences has existed in the fields of sports science, clinical analysis and computer animation for quite some time. While in recent decades mainly marker-based motion capture systems have been used to evaluate movements, interest in markerless systems keeps growing. Nevertheless, in the field of clinical analysis, markerless methods have not yet proven their value, partly due to a lack of studies evaluating the quality of the obtained data. Therefore, this study aims to validate two markerless motion capture software products from Simi Reality Motion Systems: Simi Shape, a mixture of traditional image-based tracking supported by an artificial intelligence net (AI net), and Crush, which uses a completely AI-based method. For this purpose, all motion data was recorded with two in-house motion capture systems: one for recording the movements for a marker-based evaluation as the gold standard, and one for markerless tracking. Within a laboratory environment, eight cameras per system were mounted around the area of motion. By placing two cameras in the same position and using the same calibration, deviations in the image data between marker-based and markerless tracking were kept minimal. Based on this data, marker-based tracking was performed using the Simi Motion program, while markerless tracking was performed using the Simi Shape software and the latest software from Simi Reality Motion Systems, Crush. When comparing the markerless data with the marker-based data, an average root mean square error of 0.038 m was calculated for Simi Shape and a deviation of 0.037 m for Crush. In a direct comparison of the two markerless systems, a root mean square error of 0.019 m was obtained. Based on these data, conclusions could be drawn about the accuracies of the two markerless systems.
The obtained kinematic data of the tracking are in the range of high accuracy, which the literature limits to a deviation of less than 0.05 m.
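The accuracy figures above are root mean square errors; for reference, the standard definition over paired per-frame deviations (the numbers in the example are illustrative, not study data):

```python
def rmse(values, reference):
    # Root mean square error between paired measurements, e.g. per-frame
    # markerless joint positions against marker-based ones (in metres).
    assert len(values) == len(reference)
    return (sum((v - r) ** 2 for v, r in zip(values, reference))
            / len(values)) ** 0.5

# two frames with deviations of 3 cm and 4 cm from the gold standard
err = rmse([0.0, 0.0], [0.03, 0.04])
```

Because squaring weights large deviations more heavily, an RMSE of 0.038 m implies typical deviations below about 4 cm, consistent with the sub-0.05 m accuracy band cited from the literature.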
As Industry 4.0 evolves, the previously separated Operational Technology (OT) and Information Technology (IT) are converging. Connecting devices in industrial settings to the Internet exposes these systems to a broader spectrum of cyber-attacks. Because OT does not have as many security measures as IT, it is more vulnerable from an attacker's perspective. Another factor contributing to the vulnerability of OT is that, when it comes to cybersecurity, industries have focused on protecting information technology and given less priority to control systems. The consequences of a security breach in an OT system can be more severe, as it can lead to physical damage, industrial accidents and physical harm to human beings. Hence, certificate-based authentication is implemented for OT networks, which involves stages of managing credentials at the communication endpoints. In previous work at ivESK, a solution for managing credentials was developed around a CANopen-based physical demonstrator in which the certificate management processes were implemented. The extended feature set for certificate management will be based on this existing solution. The thesis aims to significantly improve the solution by addressing two key areas: enhancing functionality and optimizing real-time performance. Regarding the first goal, an analysis of the existing feature set shall be carried out to guarantee correct functionality. The limitations of the previously implemented system will be addressed, and to ensure applicability to real-world scenarios, the system will be implemented and tested in the physical demonstrator. This lays a concrete foundation for applying these certificate management processes to large-scale industrial networks.
Features such as a certificate revocation mechanism, automated credential renewal and authorization attribute checks for certificate management will be implemented. Regarding the second goal, the impact of credential management processes on ongoing CANopen real-time traffic shall be studied. Since mission-critical applications such as industrial control systems, medical devices and transportation networks rely on real-time communication for reliable operation, delays or disruptions caused by credential management processes can have severe consequences; optimizing these processes is therefore crucial for maintaining system integrity and safety. The disturbance that the credential management processes cause to the normal operation of the CANopen network shall be characterized and minimized. This shall comprise testing real-time parameters in the network such as CPU load, network load and average delay; the results obtained from each of these tests will be analyzed.
The last decades have seen the evolution of industrial production into more sophisticated processes. The development of specialized, high-end machines has increased the importance of predictive maintenance of mechanical systems to produce high-quality goods and avoid machine breakdowns. Predictive maintenance has two main objectives: to classify the current status of a machine component and to predict the maintenance interval by estimating its remaining useful life (RUL). Nowadays, both objectives are covered by machine learning and deep learning approaches and require large training datasets that are often not available. One possible solution may be transfer learning, where the knowledge of a larger dataset is transferred to a smaller one. This thesis is primarily concerned with transfer learning for predictive maintenance, for fault classification and RUL estimation. The first part presents the state-of-the-art machine learning techniques with a focus on techniques applicable to predictive maintenance tasks (Chapter 2). This is followed by a presentation of the machine tool background and current research that applies the previously explained machine learning techniques to predictive maintenance tasks (Chapter 3). One novelty of this thesis is that it introduces a new intermediate domain that represents data by focusing on the relevant information, allowing the data to be used across different domains without losing relevant information (Chapter 4). The proposed solution is optimized for rotating elements; therefore, the presented intermediate domain creates different layers by focusing on the fault frequencies of the rotating elements. Another novelty of this thesis is its semi-supervised and unsupervised transfer learning-based fault classification approach for different component types under different process conditions (Chapter 5). It is based on the intermediate domain utilized by a convolutional neural network (CNN).
In addition, a novel unsupervised transfer learning loss function is presented based on the maximum mean discrepancy (MMD), one of the state-of-the-art algorithms. It extends the MMD by considering the intermediate domain layers; therefore, it is called layered maximum mean discrepancy (LMMD). Another novelty is an RUL estimation transfer learning approach for different component types based on the data of accelerometers with low sampling rates (Chapter 6). It applies the feature extraction concepts of the classification approach: the presented intermediate domain and the convolutional layers. The features are then used as input for a long short-term memory (LSTM) network. The transfer learning is based on fixed feature extraction, where the trained convolutional layers are taken over. Only the LSTM network has to be trained again. The intermediate domain supports this transfer learning type, as it should be similar for different component types. In addition, it enables the practical usage of accelerometers with low sampling rates during transfer learning, which is an absolute novelty. All presented novelties are validated in detailed case studies using the example of bearings (Chapter 7). In doing so, their superiority over state-of-the-art approaches is demonstrated.
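The MMD term that the LMMD loss extends can be sketched as follows. The RBF kernel, the bandwidth, and the simple per-layer averaging in `lmmd` are assumptions for illustration; the thesis' actual layered definition may combine the layers differently:

```python
import math

def _rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel on scalars; real uses operate on feature vectors
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy (MMD)
    kxx = sum(_rbf(a, b, gamma) for a in X for b in X) / len(X) ** 2
    kyy = sum(_rbf(a, b, gamma) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(_rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

def lmmd(layers_src, layers_tgt, gamma=1.0):
    # Assumed layered variant: average the MMD over intermediate-domain layers
    return sum(mmd2(s, t, gamma)
               for s, t in zip(layers_src, layers_tgt)) / len(layers_src)
```

Adding such a term to the training loss penalizes the distance between source and target feature distributions, which is what pulls the domains together during unsupervised transfer.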
Increasing global energy demand and the need to transition to sustainable energy sources to mitigate climate change highlight the need for innovative approaches to improve the resilience and sustainability of power grids. This study addresses these challenges in the context of Morocco's evolving energy landscape, where increasing energy demand and efforts to integrate renewable energy require grid reinforcement strategies. Drawing on renewable energy sources such as photovoltaic systems together with energy storage technologies, this study aims to develop a methodology for strengthening rural community grids in Morocco.
Traditional reinforcement measures such as line and transformer upgrades will be investigated, as well as the integration of power generation from photovoltaic systems, which offers a promising way to utilise Morocco's abundant solar resources. In addition, energy storage systems will be analysed as potential solutions to the challenges of grid stability and resilience. Using comprehensive data analysis, scenario planning and simulation with the open-source software pandapower, this study assesses the impact of different grid reinforcement measures, including conventional methods, photovoltaic integration and the use of energy storage, on grid performance and sustainability. The results provide valuable insights into the challenges and opportunities of transitioning to a more resilient and sustainable energy future in Morocco.
Based on a rural medium-voltage grid in Souihla, Morocco, three scenarios were simulated to assess the impact of demand growth in 2030 and 2040. The first scenario focuses on conventional grid reinforcement measures, while the second incorporates energy from residential photovoltaic systems. The third analyses the integration of storage systems and their impact on grid reinforcement in 2030.
The simulations with energy from photovoltaic systems show a reduction in grid reinforcement measures compared to the scenario without solar energy. In addition, the introduction of a storage system in 2030 led to a significant reduction in the required installed transformer capacity and fewer congested lines. Furthermore, the results emphasized the role of storage in stabilizing grid voltage levels.
In summary, the results highlighted the potential benefits of integrating energy from photovoltaics and storage into the grid. This integration not only reduces the need for transformers and overall grid infrastructure but also promotes a more efficient and sustainable energy system.
With the expansion of IoT devices in many aspects of our life, the security of such systems has become an important challenge. Unlike conventional computer systems, any IoT security solution should consider the constraints of these systems such as computational capability, memory, connectivity, and power consumption limitations. Physical Unclonable Functions (PUFs) with their special characteristics were introduced to satisfy the security needs while respecting the mentioned constraints. They exploit the uncontrollable and reproducible variations of the underlying component for security applications such as identification, authentication, and communication security. Since IoT devices are typically low cost, it is important to reuse existing elements in their hardware (for instance sensors, ADCs, etc.) instead of adding extra costs for the PUF hardware. Micro-electromechanical system (MEMS) devices are widely used in IoT systems as sensors and actuators. In this thesis, a comprehensive study of the potential application of MEMS devices as PUF primitives is provided. MEMS PUF leverages the uncontrollable variations in the parameters of MEMS elements to derive secure keys for cryptographic applications. Experimental and simulation results show that our proposed MEMS PUFs are capable of generating enough entropy for a complex key generation, while their responses show low fluctuations in different environmental conditions.
Keeping in mind that PUF responses are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In the second part of this thesis, we elaborate on different key generation schemes and their advantages and drawbacks. We propose the PUF output positioning (POP) and integer linear programming (ILP) methods, which are novel methods for grouping the PUF outputs in order to maximize the extracted entropy. To implement these methods, the key enrollment and key generation algorithms are presented. The proposed methods are then evaluated by applying them to the responses of the MEMS PUF, showing in practice that they outperform other existing PUF key generation methods.
The final part of this thesis is dedicated to the application of the MEMS PUF as a security solution for IoT systems. We select the mutual authentication of IoT devices and their backend system, and propose two lightweight authentication protocols based on MEMS PUFs. The presented protocols undergo a comprehensive security analysis to show their eligibility for use in IoT systems. As a result, the output of this thesis is a lightweight security solution based on MEMS PUFs, which introduces very low overhead on the cost of the hardware.
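The noisy-response problem and threshold-based matching at the heart of PUF authentication can be illustrated with a simulated device. The bit counts, noise level, and Hamming threshold below are illustrative assumptions, not the thesis' parameters, and a real protocol would never transmit raw responses:

```python
import random

class SimulatedMemsPuf:
    """Stand-in for a MEMS PUF: a fixed per-device fingerprint whose
    read-outs flip a few random bits, mimicking measurement noise."""
    def __init__(self, n_bits=64, noise_bits=2, seed=0):
        self._rng = random.Random(seed)
        self._ref = [self._rng.randint(0, 1) for _ in range(n_bits)]
        self._noise_bits = noise_bits

    def read(self):
        bits = list(self._ref)
        for i in self._rng.sample(range(len(bits)), self._noise_bits):
            bits[i] ^= 1  # noisy bit flip on read-out
        return bits

def authenticate(enrolled, response, threshold=8):
    # Accept when the noisy response stays within a Hamming-distance threshold
    return sum(a != b for a, b in zip(enrolled, response)) <= threshold

device = SimulatedMemsPuf(seed=1)
enrolled = device.read()               # stored by the backend at enrollment
ok = authenticate(enrolled, device.read())
```

The threshold trades off false rejects of the genuine device against false accepts of impostors; in practice, error-correcting fuzzy extractors replace this plain Hamming comparison.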
This paper provides a comprehensive overview of approaches to the determination of isocontours and isosurfaces from given data sets. Different algorithms are reported in the literature for this purpose, which originate from various application areas, such as computer graphics or medical imaging procedures. In all these applications, the challenge is to extract surfaces with a specific isovalue from a given characteristic, so called isosurfaces. These different application areas have given rise to solution approaches that all solve the problem of isocontouring in their own way. Based on the literature, the following four dominant methods can be identified: the marching cubes algorithms, the tessellation-based algorithms, the surface nets algorithms and the ray tracing algorithms. With regard to their application, it can be seen that the methods are mainly used in the fields of medical imaging, computer graphics and the visualization of simulation results. In our work, we provide a broad and compact overview of the common methods that are currently used in terms of isocontouring with respect to certain criteria and their individual limitations. In this context, we discuss the individual methods and identify possible future research directions in the field of isocontouring.
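To give a flavour of the cell-by-cell approach that the marching cubes family takes, here is a minimal 2-D marching squares sketch, an assumed simplification for illustration: edges are linearly interpolated, and ambiguous saddle cells (four edge crossings) are deliberately skipped:

```python
def _interp(p, q, vp, vq, iso):
    # linear interpolation of the isovalue crossing along a cell edge
    t = (iso - vp) / (vq - vp)
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def marching_squares(grid, iso):
    """Extract isocontour line segments from a 2-D scalar grid.

    Minimal sketch: ambiguous (saddle) cells with four edge crossings are
    skipped; a full implementation must disambiguate them."""
    segments = []
    for i in range(len(grid) - 1):
        for j in range(len(grid[0]) - 1):
            corners = [(i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j)]
            values = [grid[x][y] for x, y in corners]
            inside = [v >= iso for v in values]
            crossings = []
            for k in range(4):
                a, b = k, (k + 1) % 4
                if inside[a] != inside[b]:
                    crossings.append(_interp(corners[a], corners[b],
                                             values[a], values[b], iso))
            if len(crossings) == 2:
                segments.append((crossings[0], crossings[1]))
    return segments

# one cell with values 0 below and 1 above: the iso-0.5 contour bisects it
segs = marching_squares([[0.0, 0.0], [1.0, 1.0]], 0.5)
```

The 3-D marching cubes algorithm follows the same recipe with 256 corner-sign cases per cell instead of 16.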
Background: Assistive robotic arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not applicable for severely disabled people, such as people with Locked-in Syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview.
Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that gained interest in recent years.
Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases. After the screening process, a snowball search was conducted.
Results: 39 articles and 6 reviews were included in this review. Characteristics related to the system and study design were extracted and presented in three groups based on the use of eye tracking.
Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye tracking based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
This report examines exporters’ challenges and possible solutions for public intervention to promote foreign trade. Based on fieldwork conducted in Georgia, we explore which policy approaches can help to stimulate Georgian exports further. Our outcomes show that exporters face substantial barriers such as navigating complex trade regulations, lack of knowledge about target markets, trade finance gaps, as well as new export promotion programs (EPPs) in competitor countries. Other upper-middle-income countries can learn from our results that exporters can significantly benefit from a comprehensive export promotion strategy combined with an ecosystem-based “team” approach. EPPs related to awareness and capacity building in Georgia should be part of this strategy, focusing on challenges such as a lack of knowledge about trade practices and international business skills. Other EPPs must help to mitigate related market failures, as information gathering is costly, and firms have no incentive to share this information with competitors. Furthermore, targeted marketing support and customer matchmaking can answer Georgian exporters’ challenges, such as lack of market access and low sector visibility. Our results also show that public intervention through financial support and risk mitigation is essential for firms with an international orientation. The high-quality, rich outcomes provide significant value for other upper-middle-income countries by exploring the example of Georgia’s contemporary circumstances in an in-depth manner based on extensive interviews and document analysis. Limitations include that our work primarily relies on qualitative data and further research could involve a quantitative study with a diverse range of sectors.
"Ad fontes!"
Francesco Petrarca (1304–1374)
In the beginning, there was an idea: the reconstruction of the first "Iron Hand" of the Franconian imperial knight Götz von Berlichingen (1480–1562). We found that with this historical prosthesis, simple actions for daily use, such as holding a wine glass, a mobile phone, a bicycle handlebar grip, a horse’s reins, or some grapes, are possible without effort. Controlling this passive artificial hand, however, requires the help of a healthy second hand.
The growing threat posed by multidrug-resistant (MDR) pathogens, such as Klebsiella pneumoniae (Kp), represents a significant challenge in modern medicine. Traditional antibiotic therapies are often ineffective against these pathogens, leading to high mortality rates. MDR Kp infections pose a novel challenge in military medical contexts, particularly in Medical Biodefense, as they can be deliberately spread, leading to resource-intensive care in military centres. Recognizing this issue, the European Defence Agency initiated a prioritised research project in 2023 (EDF Resilience PHAGE-SGA 2023). To address this challenge, the Bundeswehr Institute of Microbiology (IMB) leads BMBF- (Federal Ministry of Education and Research) and EU-funded projects on the use of bacteriophages as adjuvant therapy alongside antibiotics. Since 2017, the IMB has isolated and characterised Kp phages, collecting over 600 isolates and optimizing their production for therapy in compliance with the EMA (European Medicines Agency) guidelines. This involves in vitro phage genome packaging to minimize endotoxin load, reduce manufacturing costs, and shorten production times. The goal of this work was to establish MinION sequencing (Oxford Nanopore Technology) as a quick and reliable method for the initial identification and characterisation of phage genomes, especially as a quick screening method for phages isolated on Kp, prior to more precise but also more expensive and time-consuming sequencing methods like Illumina. This characterisation is crucial for developing a personalized pipeline aimed at producing magistral or Good Manufacturing Practice (GMP) quality medicinal phage solutions tailored individually for each patient. DNA extraction methods were compared to identify suitable input DNA for sequencing purposes.
Additionally, the quality of this DNA was assessed to determine its suitability for in vitro phage packaging, which was successfully performed, achieving a phage titer of 10³ and confirming that the DNA used for MinION sequencing could indeed be used for acellular packaging. The created genomes were annotated and compared with Illumina sequencing, revealing high similarity in all five individually tested cases. Between the generated sequences, a maximal difference of only 4% in genome size was observed, while the actual sequences showed high similarity. Throughout the course of this study, a total of 645.15 GB of sequencing data was generated. In total, 38 phages were successfully characterised, with 21 phage genomes assembled, annotated, and saved in the IMB database.
In 2015, Google engineer Alexander Mordvintsev presented DeepDream as a technique to visualise the feature analysis capabilities of deep neural networks trained on image classification tasks. For a brief moment, this technique enjoyed some popularity among scientists, artists, and the general public because of its capability to create seemingly hallucinatory synthetic images. But soon after, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interaction with these models have shifted away from direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the model's internal working. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research to be necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, choice of audio datasets, and method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio. These models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
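The core DeepDream operation, gradient ascent on the input so as to maximise a chosen feature activation, can be sketched in miniature. The snippet below uses a single random linear layer as a stand-in network; the weights, dimensions, and step size are purely illustrative and are not the audio models studied in the paper:

```python
import numpy as np

# Miniature sketch of the DeepDream loop: gradient ascent on the INPUT
# to maximise the response of one chosen feature unit. The "network" is
# a single random linear layer; all weights and sizes are illustrative.
rng = np.random.default_rng(3)
W = rng.normal(size=(4, 64))        # hypothetical layer weights
x = 0.01 * rng.normal(size=64)      # input signal, e.g. one audio frame
unit = 2                            # feature unit whose response we amplify
step = 0.01

a_before = (W @ x)[unit]
for _ in range(100):
    # For a linear layer, d(activation[unit]) / dx = W[unit],
    # so each step moves the input uphill along that gradient.
    x += step * W[unit]
a_after = (W @ x)[unit]
# The ascent strictly increases the chosen unit's activation.
```

In a real model the gradient would be obtained by backpropagation through many layers, but the principle of iteratively rewriting the input itself, rather than any model weights, stays the same.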
This paper describes the authors' first experiments in creating an artificial dancer whose movements are generated through a combination of algorithmic and interactive techniques with machine learning. This approach is inspired by the time-honoured practice of puppeteering. In puppeteering, an articulated but inanimate object seemingly comes to life through the combined effects of a human controlling select limbs of a puppet while the rest of the puppet's body moves according to gravity and mechanics. In the approach described here, the puppet is a machine-learning-based artificial character that has been trained on motion capture recordings of a human dancer. A single limb of this character is controlled either manually or algorithmically while the machine-learning system takes over the role of physics in controlling the remainder of the character's body. But rather than imitating physics, the machine-learning system generates body movements that are reminiscent of the particular style and technique of the dancer who was originally recorded for acquiring training data. More specifically, the machine-learning system operates by searching for body movements that are not only similar to the training material but that it also considers compatible with the externally controlled limb. As a result, the character playing the role of a puppet is no longer passively responding to the puppeteer but makes movement decisions on its own. This form of puppeteering establishes a dialogue between puppeteer and puppet in which both improvise together, and in which the puppet exhibits some of the creative idiosyncrasies of the original human dancer.
Generative machine learning models for creative purposes play an increasingly prominent role in the field of dance and technology. A particularly popular approach is the use of such models for generating synthetic motions. Such motions can either serve as a source of ideation for choreographers or control an artificial dancer that acts as an improvisation partner for human dancers. Several examples employ autoencoder-based deep-learning architectures that have been trained on motion capture recordings of human dancers. Synthetic motions are then generated by navigating the autoencoder's latent space. This paper proposes an alternative approach to using an autoencoder for creating synthetic motions. This approach controls the generation of synthetic motions on the level of the motion itself rather than its encoding. Two different methods are presented that follow this principle. Both methods are based on the interactive control of a single joint of an artificial dancer while the other joints remain under the control of the autoencoder. The first method combines the control of the orientation of a joint with iterative autoencoding. The second method combines the control of the target position of a joint with forward kinematics and the application of latent difference vectors. As an illustrative example of an artistic application, this latter method is used for an artificial dancer that plays a digital instrument. The paper presents the implementation of these two methods and provides some preliminary results.
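The first method, iterative autoencoding with one clamped joint, can be illustrated with a toy linear autoencoder. All weights and dimensions below are hypothetical stand-ins for a network trained on motion capture data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints, latent = 8, 3

# Stand-in linear autoencoder: encoder E, decoder D. In the real system
# these would be a deep network trained on motion capture recordings.
E = 0.3 * rng.normal(size=(latent, n_joints))
D = 0.3 * rng.normal(size=(n_joints, latent))

pose = rng.normal(size=n_joints)    # current full-body pose
controlled_joint, target = 0, 1.5   # externally controlled joint value

# Iterative autoencoding: clamp the controlled joint, re-project the
# pose through the autoencoder, and repeat, so the remaining joints
# settle toward poses the autoencoder considers plausible.
for _ in range(10):
    pose[controlled_joint] = target
    pose = D @ (E @ pose)
pose[controlled_joint] = target
```

The clamp-and-reproject loop is what lets a single externally driven joint "pull" the rest of the body along while the autoencoder keeps the overall pose on its learned manifold.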
Steroid hormones (SHs) are a rising concern: owing to their high bioactivity, ubiquitous nature, and prolonged persistence as micropollutants in water, they pose a potential risk to both human health and the environment, even at low concentrations. Estrogens, progesterone, and testosterone, the three important types of steroids essential for human development and for maintaining multi-organ balance, are the focus of this concern. These steroid hormones originate
from various sources, including human and livestock excretions, veterinary medications, agricultural runoff, and pharmaceuticals, contributing to their presence in the environment. According to the WHO recommendation, the guidance value for estradiol (E2) is 1 ng/L. Several methods to remove SH micropollutants with conventional water and wastewater technologies have been attempted and are still under research. Among these, the electrochemical membrane reactor (EMR) is one of the emerging technologies that can address the insufficient removal of SHs from the aquatic environment by conventional treatment. The degradation of SHs can be significantly influenced by various factors when treated with an EMR.
In this project, the removal of SHs and the main mechanisms of their removal using a carbon nanotube EMR (CNT-EMR) are studied, and the efficiency of the CNT-EMR in treating SH micropollutants is identified. The experiments are carried out with a PES-CNT ultrafiltration membrane while varying different parameters. SH removal is studied as a function of limiting factors such as cell voltage, flux, temperature, concentration, and type of SH.
Batteries typically consist of multiple individual cells connected in series. Here we demonstrate single-cell state of charge (SOC) and state of health (SOH) diagnosis in a 24 V class lithium-ion battery. To this goal, we introduce and apply a novel, highly efficient algorithm based on a voltage-controlled model (VCM). The battery, consisting of eight single cells, is cycled over a duration of five months under a simple cycling protocol between 20 % and 100 % SOC. The cell-to-cell standard deviations obtained with the novel algorithm were 1.25 SOC-% and 1.07 SOH-% at the beginning of cycling. A cell-averaged capacity loss of 9.9 % after five months of cycling was observed. While the accuracy of single-cell SOC estimation was limited (probably owing to the flat voltage characteristics of the lithium iron phosphate, LFP, chemistry investigated here), single-cell SOH estimation showed a high accuracy (2.09 SOH-% mean absolute error compared to laboratory reference tests). Because the algorithm does not require observers, filters, or neural networks, it is computationally very efficient (three seconds analysis time for the complete data set consisting of eight cells with approx. 780,000 measurement points per cell).
This thesis focuses on the development and implementation of a Datagram Transport Layer Security (DTLS) communication framework within the ns-3 network simulator, specifically targeting the LoRaWAN model network. The primary aim is to analyse the behaviour and performance of DTLS protocols across different network conditions within a LoRaWAN context. The key aspects of this work include the following.
Utilization of ns-3: This thesis leverages ns-3’s capabilities as a powerful discrete event network simulator. This platform enables the emulation of diverse network environments, characterized by varying levels of latency, packet loss, and bandwidth constraints.
Emulation of Network Challenges: The framework specifically addresses unique challenges posed by certain network configurations, such as duty cycle limitations. These constraints, which limit the time allocated for data transmission by each device, are crucial in understanding the real-world performance of DTLS protocols.
Testing in Multi-client-server Scenarios: A significant feature of this framework is its ability to test DTLS performance in complex scenarios involving multiple clients and servers. This is vital for assessing the behaviour of a protocol under realistic network conditions.
Realistic Environment Simulation: By simulating challenging network conditions, such as congestion, limited bandwidth, and resource constraints, the framework provides a realistic environment for thorough evaluation. This allows for a comprehensive analysis of DTLS in terms of security, performance, and scalability.
Overall, this thesis contributes to a deeper understanding of DTLS protocols by providing a robust tool for their evaluation under various and challenging network conditions.
Strings P
(2021)
Strings is an audiovisual performance for an acoustic violin and two generative instruments, one for creating synthetic sounds and one for creating synthetic imagery. The three instruments are related to each other conceptually, technically, and aesthetically by sharing the same physical principle, that of a vibrating string. This submission continues the work the authors have previously published at xCoAx 2020. The current submission briefly summarizes the previous publication and then describes the changes that have been made to Strings. The P in the title emphasizes that most of these changes have been informed by experiences collected during rehearsals (in German, Proben). These changes have helped Strings to progress from a predominantly technical framework to a work that is ready for performance.
In anisotropic media, the existence of leaky surface acoustic waves is a well-known phenomenon. Very recently, their analogs at the apex of an elastic silicon wedge have been found in experiments using laser-ultrasonics. In addition to a wedge-wave (WW) pulse with low speed, a pseudo-wedge wave (p-WW) pulse was found with a velocity higher than the velocity of shear bulk waves, propagating in the same direction. With a probe-beam-deflection technique, the propagation of the WW pulses was monitored on one of the faces of the wedge at variable distance from the apex. In this way, their depth structure and the leakage of the p-WW could be visualized directly. Calculations were carried out using a method based on a representation of the displacement field in Laguerre functions. This method has been validated by calculating the surface density of states in anisotropic media and comparing the results with those obtained from the surface Green's tensor. The approach has then been extended to the continuum of acoustic modes in infinite wedges with fixed wave-vector along the apex. These calculations confirmed the measured speeds of the WW and p-WW pulses.
Strings
(2020)
This article presents the currently ongoing development of an audiovisual performance work with the title Strings. This work provides an improvisation setting for a violinist, two laptop performers, and two generative systems. At the core of Strings lies an approach that establishes a strong correlation among all participants by means of a shared physical principle. The physical principle is that of a vibrating string. The article discusses how this principle is used in both natural and simulated forms as main interaction layer between all performers and as natural or generative principle for creating audio and video.
Implementation and Evaluation of an Assisting Fuzzer Harness Generation Tool for AUTOSAR Code
(2024)
The digitalization in vehicles tends to add more connectivity, such as over-the-air (OTA) updates. To achieve this digitalization, each ECU (Electronic Control Unit) becomes smarter and needs to support more and more externally available protocols such as TLS, which increases the attack surface for attackers. To ensure the security of a vehicle, fuzzing has proven to be an effective method to discover memory-related security vulnerabilities. Fuzzing the software running on an ECU is not an easy task and requires a harness written by a human. The harness author needs a deep understanding of the specific service and protocol, which is time-consuming. To reduce the time needed by a harness author, this thesis aims to develop FuzzAUTO, the first assisting harness generation tool targeting the AUTOSAR (AUTomotive Open System ARchitecture) BSW (Basic Software), to support manual harness generation.
Anisotropy has been found to play an important role for the existence of edge-localized acoustic modes as well as for nonlinear effects in rectangular edges. For a certain propagation geometry in silicon, the effective second-order nonlinearity for wedge waves was determined numerically from second-order and third-order elastic moduli and compared with the nonlinearity for Rayleigh waves propagating in the direction of the apex on one of the two surfaces forming the edge. In the presence of weak dispersion resulting from modifications of the wedge tip or coating of the adjacent surfaces, solitary pulses are predicted to exist and their shape was calculated.
The progress in machine learning has led to advanced deep neural networks, which are widely used in computer vision tasks and safety-critical applications. The automotive industry in particular has experienced a significant transformation with the integration of deep learning techniques and neural networks, contributing to the realization of autonomous driving systems. Object detection is a crucial element in autonomous driving: it allows vehicles to perceive and identify their surroundings, detecting objects like pedestrians, vehicles, road signs, and obstacles, and thereby contributes to vehicular safety and operational efficiency. Object detection has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies. These advancements enable vehicles to make real-time decisions based on their understanding of the environment, improving safety and the driving experience. However, the increasing reliance on deep neural networks for object detection and autonomous driving has brought attention to potential vulnerabilities within these systems. Recent research has highlighted their susceptibility to adversarial attacks: well-designed inputs that exploit weaknesses in the underlying deep learning models. Successful attacks can cause misclassifications and critical errors, significantly compromising the reliability and safety of autonomous vehicles. In this study, we focus on analyzing adversarial attacks on state-of-the-art object detection models and create adversarial examples to test the models' robustness.
We also test whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to assess how effective they are in protecting deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies, highlighting how they can be implemented practically in real-world situations.
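A canonical instance of such an attack is the Fast Gradient Sign Method (FGSM), which perturbs the input along the sign of the loss gradient. The sketch below applies it to a toy logistic classifier rather than an object detector; the model, weights, and epsilon are illustrative only:

```python
import numpy as np

# FGSM on a toy logistic classifier: perturb the input by epsilon times
# the sign of the loss gradient to push the prediction away from the
# true label. Weights and data are random stand-ins, not a real detector.
rng = np.random.default_rng(1)
w = rng.normal(size=16)            # hypothetical model weights
x = rng.normal(size=16)            # a "clean" input
y = 1.0                            # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression with cross-entropy loss, dL/dx = (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

eps = 0.25
x_adv = x + eps * np.sign(grad_x)  # one-step adversarial perturbation

p_clean = sigmoid(w @ x)
p_adv = sigmoid(w @ x_adv)         # confidence in the true class drops
```

Against a deep detector, the gradient comes from backpropagation and the perturbation is typically constrained to be imperceptible, but the one-step structure is the same.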
Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained much popularity lately. The reduction of the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications like leakage detection in silicone breast implants require low-power-consuming small-size electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) is presented. Owing to the low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figure-of-merits compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693 / NFC (near field communication) Type 5 standard for radio frequency identification (RFID), corresponding to the carrier frequency of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and very few other external components, such as an antenna and sensors. The passive tag antenna and the reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable, application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block of GAPSIC includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute GAPSIC's internal memory. GAPSIC is fabricated in a one-poly, six-metal 0.18 µm CMOS process.
The design of GAPSIC comprises two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the remaining functional blocks were integrated with the blocks of the RFID/NFC frontend chip into the final GAPSIC. To reduce power consumption, conventional low-power design techniques such as multiple power supplies and the operation of complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region were applied extensively, alongside further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method where three stacked transistor pairs shunt the extra voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units provide the optimal choice in terms of voltage drop and the area required.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load-modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and a power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figure of merit compared to the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag built from the standalone frontend chip together with temperature and pressure sensors demonstrates the chip's fully passive operational capability. An NFC reader device using custom-built Android-based application software reads out the sensor data from the passive tag.
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs, followed by a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, and firmware removes this offset from the measured sensor signal. A 12-bit successive-approximation-register (SAR) analog-to-digital converter (ADC) based on a charge-redistribution architecture converts the measured sensor data to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and the 12-bit SAR logic.
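The successive-approximation principle behind such an ADC is a binary search over the output code. The following behavioural sketch in Python assumes an ideal comparator and DAC; it illustrates the SAR logic only, not the charge-redistribution hardware described in the thesis:

```python
# Behavioural model of an N-bit SAR ADC: binary search from the MSB
# down, keeping each trial bit if the input still exceeds the DAC
# output for the trial code. Ideal comparator and DAC assumed.
def sar_convert(vin, vref, bits=12):
    """Return the digital code (0 .. 2**bits - 1) for vin in [0, vref)."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)              # tentatively set this bit
        if vin >= trial * vref / (1 << bits):  # comparator decision
            code = trial                       # keep the bit
    return code

# A mid-scale input with vref = 3.3 V lands on code 2048 of 4096.
code = sar_convert(1.65, 3.3)
```

A 12-bit conversion thus needs only twelve comparator decisions, which is why SAR converters pair well with low-power sensor readout.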
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link-efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core compared to an air core on the antenna coupling factor was investigated; with the ferrite core, the coupling factor increased fourfold.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
Privacy is the capacity to keep some things private despite their social repercussions. It relates to a person’s capacity to control the amount, time, and circumstances under which they disclose sensitive personal information, such as details of their physiology, psychology, or intelligence. In the age of data exploitation, privacy has become even more crucial. Our privacy is more threatened now than it was 20 years ago because of the way data and technology are used. Both the kinds and amounts of information about us and the methods for tracking and identifying us have grown considerably in recent years. It is a known security concern that human and machine systems face privacy threats. There are various disagreements over privacy and security; every person and group has a unique perspective on how the two are related. Even though 79% of the study’s results showed that legal or compliance issues were more important, 53% of the survey team thought that privacy and security were two separate things. Despite their distinctions, data security and data privacy are interconnected; each is necessary for the other to exist. Data may be physically kept anywhere, on our computers or in the cloud, but only humans have authority over it. Machine learning has been used to address this problem. Protecting data against attackers also protects privacy. Attackers commonly utilize both mechanical systems and social engineering techniques to enter a target network. The vulnerability of this form of attack rests not only in the technology but also in the human users, making it extremely difficult to defend against. The best option to secure privacy is to combine humans and machines in the form of a Human Firewall and a Machine Firewall.
A cryptographic route like Tor is a superior choice for discouraging attackers from trying to access our system and for protecting the privacy of our data. This thesis includes a case study of privacy and security issues. The problems and the different kinds of attacks on people and machines are then briefly discussed. We explain how Human Firewalls and machine learning on the Tor network protect our privacy from attacks such as social engineering and attacks on mechanical systems. As a real-world test, we use genomic data to carry out a privacy attack called the Membership Inference Attack (MIA). We present a Machine Firewall as a means of protection and then apply Differential Privacy (DP), as has been done in prior work. We used Lasso and convolutional neural networks (CNNs), both popular machine learning models, as the target models. Our findings demonstrate a logarithmic link between the desired model accuracy and the privacy budget.
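The accuracy-versus-privacy trade-off governed by the privacy budget can be illustrated with the Laplace mechanism, the textbook differential-privacy primitive. The parameters below are illustrative and are not the DP configuration used in the thesis:

```python
import numpy as np

# Laplace mechanism: release a query answer with Laplace noise of scale
# sensitivity / epsilon. Smaller epsilon means stronger privacy and
# more noise. All parameters here are illustrative.
def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(42)
true_count = 128                    # e.g. a counting query's exact answer

strong = [laplace_mechanism(true_count, 1.0, 0.1, rng) for _ in range(1000)]
weak = [laplace_mechanism(true_count, 1.0, 10.0, rng) for _ in range(1000)]

err_strong = float(np.mean(np.abs(np.array(strong) - true_count)))
err_weak = float(np.mean(np.abs(np.array(weak) - true_count)))
# Stronger privacy (epsilon = 0.1) costs far more accuracy on average.
```

The same tension appears when DP is applied to model training: lowering the privacy budget blunts membership inference attacks but degrades the target model's accuracy.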
This study investigates the impact of global payroll outsourcing on organizational efficiency and cost reduction, based on the analysis of diverse implications stemming from thirty-one (31) survey results. The findings reveal multifaceted challenges and benefits associated with outsourcing global payroll processing.
The research also unveils the main benefits of global payroll outsourcing. Notably, there is a consensus on the reduction in time to process payroll and in cost per payroll processed, and on an improved payroll accuracy rate. Outsourcing streamlines processes, enhances operational efficiency, and contributes to faster, more accurate financial reporting.
Despite these benefits and challenges, statistical analysis reveals weak correlations between outsourcing global payroll and cost reduction or improved efficiency across various parameters, indicating the lack of a significant relationship. Consequently, the results suggest no substantial correlation between global payroll outsourcing and enhanced efficiency or cost reduction based on this study's data.
Decarbonisation Strategies in Energy Systems Modelling: APV and e-tractors as Flexibility Assets
(2023)
This work presents an analysis of the impact of introducing Agrophotovoltaic technologies and electric tractors into Germany’s energy system. Agrophotovoltaics involves installing photovoltaic systems in agricultural areas, allowing for dual usage of the land for both energy generation and food production. Electric tractors, which are agricultural machinery powered by electric motors, can also function as energy storage units, providing flexibility to the grid. The analysis includes a sensitivity study to understand how the availability of agricultural land influences Agrophotovoltaic investments, followed by the examination of various scenarios that involve converting diesel tractors to electric tractors. These scenarios are based on the current CO2 emission reduction targets set by the German Government, aiming for a 65% reduction below 1990 levels by 2030 and achieving zero emissions by 2045. The results indicate that approximately 3% of available agricultural land is necessary to establish a viable energy mix in Germany. Furthermore, the expansion of electric tractors tends to reduce the overall system costs and enhances the energy-cost-efficiency of Agrophotovoltaic investments.
Previous studies on the hyphenation of gas-chromatographic separation with spectrophotometric detection in the ultraviolet wavelength range between 168 and 330 nm showed a high potential for applications requiring the analysis of complex samples. This paper describes the development of a state-of-the-art detection system for compounds in the vapour phase that improves on previous systems. To compete with established detection systems hyphenated with gas chromatography, the main components have to be designed for optimum performance and reliability of the spectrophotometric detector. A deuterium lamp was selected as a broadband light source for improved measurement stability. A new type of absorption cell based on fiber optics was developed, taking into account the dynamic range necessary to compete with existing techniques; in addition, the influence of the cell volume on the chromatogram needs to be analyzed. Tests were carried out to determine the performance of the absorption cell with respect to chemical and thermal influences. A new spectrophotometer with adequate spectral resolution in this wavelength range, offering improved stability and dynamic range for efficient use in this application, was developed. Furthermore, the influence of each component on the performance, reliability, and stability of the sensor system will be discussed, and an overview of and outlook on potential applications in the environmental, scientific, and medical fields will be given.
Bluetooth personal area networks (PANs) share the 2.4 GHz ISM spectrum with IEEE 802.11b wireless local area networks (WLANs). With the popularity of wireless devices, this ISM spectrum is becoming more and more crowded, and as a result of the interference between WLANs and PANs, the performance of each network is degraded. Current research has not significantly covered the degrading impact of an 802.11b interferer on Bluetooth voice transmission. Within this project, simulations were carried out to precisely study the impact of an 802.11b interferer on the performance of Bluetooth voice transmission at different ratios of Bluetooth power to WLAN power at the receiver side. Furthermore, the impact of SNR on Bluetooth voice performance and the benefit of using the SCORT packet type were analysed as well. Based on the results presented, network performance can be evaluated at the desired activity level.
In thin-layer chromatography, fiber-bundle arrays have been introduced for spectral absorption measurements in the UV region. Using all-silica fiber bundles, the exciting light is detected after re-emission from the plate with a fiber-optic spectrometer. In addition, fluorescence light can be detected, which is masked by the re-emitted light; it is therefore helpful to separate absorption and fluorescence on the TLC plate. A modified three-array assembly has been developed: one array is used for detection, while the other two are used for excitation with broadband deuterium light and with UV-LEDs matched to the substances under test. As an example, the quantification of glucosamine in nutritional supplements or spinach leaf extract is described. By simply heating the amino plate for derivatization, the reaction product of glucosamine can be detected sensitively by either light absorption or fluorescence using the new fiber-optic assembly. In addition, the properties of the new three-row fiber-optic array and of the commercially available UV-LEDs are shown in the wavelength region of interest for fluorescence excitation, from 260 nm to 360 nm. The squint angle, which influences coupling efficiency and spatial resolution, is measured with the inverse far-field method. Some properties of UV-LEDs for analytical applications are also described and discussed.
Most E-Learning projects tend to separate learning activities from everyday work. This paper presents an approach in which closer integration between learning and work is achieved by integrating multimedia services into manufacturing processes. The goal of integrating E-Learning services into manufacturing is, through the development of new multimedia solutions, to accelerate and enhance the ability of the manufacturing industry to capitalise on the emergence of a powerful global information infrastructure. In this paper we propose combining the areas of media streaming services and manufacturing processes by providing electronic learning offerings as collections of media streaming services. The key components of our approach are 1) an XML-based streaming service specification language, 2) automated configuration of distributed E-Learning streaming applications, and 3) Web Services for searching, registration, and creation of E-Learning streaming services.
Integrating voice/video communication into business processes can accelerate resolution times, reduce mistakes, and establish a full audit trail of the interactions. Some VoIP service providers offer website-based or plugin-based solutions, which are, however, difficult to integrate with other applications. A promising approach to overcoming these disadvantages is the development of appropriate Web Services that allow applications to interact with a VoIP system. We propose a generic framework for VoIP applications consisting of an XML-based service specification language and a set of reusable Web Service components. Service providers using the proposed service-oriented architecture can offer their customers a protocol-neutral Web Service interface, thus enabling the deployment of a general and integrated VoIP solution.
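The protocol-neutral interface idea can be sketched as follows: applications program against an abstract call surface while concrete backends hide the signalling protocol. All class and method names here are illustrative, not taken from the paper.

```python
from abc import ABC, abstractmethod

class CallService(ABC):
    """A protocol-neutral call interface in the spirit of the abstract:
    applications depend only on this surface; concrete backends
    (SIP, H.323, ...) encapsulate the actual signalling."""

    @abstractmethod
    def place_call(self, caller: str, callee: str) -> str:
        """Start a call and return an opaque call identifier."""

class SipCallService(CallService):
    """A stub backend; a real one would speak SIP (e.g. send INVITE)."""

    def __init__(self):
        self._next_id = 0

    def place_call(self, caller, callee):
        # Here we only mint a call id; the signalling is out of scope.
        self._next_id += 1
        return f"call-{self._next_id}"

svc: CallService = SipCallService()
call_id = svc.place_call("alice@example.org", "bob@example.org")
```

Swapping `SipCallService` for another backend leaves the application code untouched, which is exactly what makes the Web Service interface protocol-neutral.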
This paper presents a streaming-based E-Learning environment in which closer integration between learning and work is achieved by integrating multimedia services into manufacturing processes. It contains a comprehensive and detailed explanation of the proposed E-Learning streaming framework, especially the adaptation of streaming services to mobile environments. We first analyze several scenarios in which E-Learning streaming services can be integrated into manufacturing processes. To allow systematic and tailor-made integration, we develop a model and a specification language for E-Learning streaming services and apply the model to practical scenarios from real manufacturing processes. The adaptation of multimedia streaming services to mobile devices is discussed based on the Synchronized Multimedia Integration Language (SMIL). Finally, we comment on the benefits of using E-Learning streaming services as part of manufacturing processes and analyze the acceptance of the developed system. The key components of our E-Learning environment are 1) an XML-based streaming service specification language, 2) adaptation of multimedia E-Learning services to mobile environments, and 3) Web Services for searching, registration, and creation of E-Learning streaming services.
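SMIL-based adaptation of the kind the abstract mentions typically uses SMIL's `<switch>` element with a `systemBitrate` test attribute, letting the player pick the stream variant its bandwidth allows. The sketch below generates such a document; the URLs and bitrate thresholds are placeholder values, not taken from the paper.

```python
import xml.etree.ElementTree as ET

def build_smil(high_src, low_src):
    """Build a minimal SMIL body whose <switch> offers two stream
    variants; players take the first alternative whose systemBitrate
    requirement they can satisfy, which is SMIL's standard mechanism
    for adapting content to low-bandwidth mobile devices."""
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    switch = ET.SubElement(body, "switch")
    # Alternatives are listed best-first.
    ET.SubElement(switch, "video", src=high_src, systemBitrate="500000")
    ET.SubElement(switch, "video", src=low_src, systemBitrate="56000")
    return ET.tostring(smil, encoding="unicode")

doc = build_smil("rtsp://example.org/lecture-hi.mp4",
                 "rtsp://example.org/lecture-lo.3gp")
print(doc)
```

A desktop client with broadband picks the first `<video>`; a mobile client on a slow link falls through to the low-bitrate variant.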
In this paper we propose combining the areas of media streaming services, mobile devices, and manufacturing processes to support the monitoring, controlling, and supervising of production processes in order to achieve high levels of efficiency and environmentally friendly production. The paper contains a comprehensive and detailed explanation of the proposed E-Learning streaming framework, especially the adaptation of streaming services to mobile environments. The key components of our approach are 1) an XML-based streaming service specification language, 2) adaptation of multimedia E-Learning services to mobile environments, and 3) a media delivery platform for searching, registration, and creation of streaming services for mobile devices.
The growing need for suitable systems of record, as companies seek to maximize performance by harnessing the knowledge of their businesses, is discussed. Focused systems of record deliver a clear and consistent view even as they address a range of functions. Enterprise resource planning (ERP), as the financial system of record, embodies that view of manufacturing, inventory management, accounting, and order processing. Customer relationship management (CRM), as a system of record, taps not only into marketing, sales, and service, but also into product development.
The central purpose of this paper is to present a novel framework supporting the specification and implementation of media streaming services using XML and the Java Media Framework (JMF). It provides an integrated service development environment comprising a streaming service model, a service specification language, and several implementation and retrieval tools. Our approach is based on a clear separation between a streaming service specification and its implementation by a distributed JMF application, and it can be used for different streaming paradigms, e.g. push and pull services.
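The separation between specification and implementation can be illustrated by parsing an XML service description into a plain data object that an implementation layer would then act on. The element and attribute names below are a hypothetical shape for such a specification; the paper's actual schema is not reproduced here.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

# A hypothetical specification instance (illustrative schema only).
SPEC = """
<streamingService name="lecture-01" paradigm="pull">
  <source url="rtsp://media.example.org/lecture-01" format="mpeg"/>
  <schedule start="2004-06-01T10:00:00"/>
</streamingService>
"""

@dataclass
class StreamingService:
    name: str
    paradigm: str    # "push" or "pull", the two paradigms the paper names
    source_url: str

def parse_spec(xml_text):
    """Turn the declarative XML spec into a neutral in-memory model;
    a JMF-based implementation layer would consume this object."""
    root = ET.fromstring(xml_text)
    return StreamingService(
        name=root.get("name"),
        paradigm=root.get("paradigm"),
        source_url=root.find("source").get("url"),
    )

svc = parse_spec(SPEC)
```

Because the implementation only sees the parsed model, the same spec can drive either a push or a pull deployment.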
This paper presents an approach in which closer integration between learning and work is achieved by integrating multimedia services into manufacturing processes. The goal of integrating E-Learning services into manufacturing processes is, through the development of new multimedia services, to accelerate and enhance the ability of the manufacturing industry to capitalise on the emergence of a powerful global information infrastructure. In this paper we propose combining the areas of media streaming services and manufacturing processes by providing electronic learning offerings as collections of media streaming services. The key components of our approach are 1) an XML-based streaming service specification language, 2) automated configuration of distributed E-Learning streaming applications, and 3) Web Services for searching, registration, and creation of E-Learning streaming services.
We propose a new streaming media service development environment comprising a streaming media service model, an XML-based service specification language, and several implementation and configuration management tools. In our project, the described approach is used to integrate streaming-based eLearning services into the manufacturing processes of a subcontractor to the automotive industry. The key components of our approach are 1) an XML-based streaming service specification language, 2) a set of Web Services for searching, registration, and creation of streaming services, and 3) caching and replication policies based on timing information derived from the service specifications.
The goal of integrating eLearning services into manufacturing is, through the development of new multimedia solutions, to accelerate and enhance the ability of the manufacturing industry to capitalise on the emergence of a powerful global information infrastructure. The key components of our approach are: (1) an XML-based streaming service specification language; (2) automatic configuration of distributed eLearning streaming service implementations; (3) a set of Web Services for searching, registration, and creation of streaming services; and (4) caching and replication policies based on timing information derived from the service specifications. We also introduce a new concept for cache management at runtime, in which content is distributed to cache servers located at the edge of the network, close to the client.
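A timing-driven caching policy of the kind described can be sketched as deriving an edge cache's validity window from a service's scheduled start and duration: prefetch shortly before the session begins and evict once it ends. This is a minimal illustration in the spirit of the abstract; the prefetch lead time and the schedule values are assumptions, not figures from the paper.

```python
from datetime import datetime, timedelta

def cache_window(start_iso, duration_min, prefetch_min=10):
    """Derive an edge-cache validity window from a service's timing
    information: content is fetched to the edge server shortly before
    the scheduled start and evicted once the session is over."""
    start = datetime.fromisoformat(start_iso)
    fetch_at = start - timedelta(minutes=prefetch_min)
    evict_at = start + timedelta(minutes=duration_min)
    return fetch_at, evict_at

# A 90-minute session scheduled for 10:00 on an illustrative date.
fetch_at, evict_at = cache_window("2004-06-01T10:00:00", duration_min=90)
print(fetch_at, evict_at)
```

Tying the window to the specification's timing data is what lets the cache manager place and reclaim edge storage automatically, without manual invalidation.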
The central purpose of this paper is to present a novel framework supporting the specification, implementation, and retrieval of media streaming services. It provides an integrated service development environment comprising a streaming service model, a service specification language, and several implementation and retrieval tools. Our approach is based on a clear separation between a streaming service specification and its implementation by a distributed application, and it can be used for different streaming paradigms, e.g. push and pull services.