High-tech running shoes and spikes ("super-footwear") are currently being debated in sports. There is direct evidence that super distance running shoes improve running economy; however, it is not well established to what extent world-class performances are affected across the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressive improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
Recommender systems have gained increasing importance in recent years. These systems are mostly designed for e-commerce settings and often do not take the current context of the user into account. However, recommender systems are not limited to e-commerce; they can also be applied in healthcare. The goal of this bachelor's thesis is to develop a recommender system that better accounts for the user's current context (chat history, demographic data). To this end, the thesis covers the design and prototypical implementation of a context-aware recommender system for an existing healthcare chatbot. The recommender system designed and developed in this work is intended to relieve staff in the healthcare and social sectors and to provide them with helpful, thematically relevant information. Based on defined requirements, a concept for the recommender system was developed and partially implemented as a prototype. Finally, the prototype was evaluated against these requirements. In addition, a technical evaluation and a user evaluation were carried out, comparing the implemented prototype with existing systems. In the user evaluation, the text passages recommended by the prototype achieved a significantly higher thematic match with the chat data.
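The core idea of context-aware recommendation over chat data can be sketched as similarity scoring between the current chat history and candidate text snippets. The following is a minimal illustrative sketch, not the thesis's actual implementation; all snippet texts and function names are made up for the example.

```python
from collections import Counter
from math import sqrt

def tokenize(text: str) -> Counter:
    """Very simple bag-of-words representation (lowercased tokens)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(chat_history: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Rank candidate snippets by similarity to the current chat context."""
    context = tokenize(chat_history)
    ranked = sorted(snippets, key=lambda s: cosine(context, tokenize(s)), reverse=True)
    return ranked[:top_k]

snippets = [
    "Opening hours of the social services office",
    "How to apply for nursing care benefits",
    "Contact details for the youth welfare office",
]
print(recommend("I need help applying for care benefits for my mother", snippets, top_k=1))
# → ['How to apply for nursing care benefits']
```

A production system of the kind described in the thesis would replace the bag-of-words vectors with richer representations and additionally weight demographic context, but the ranking structure stays the same.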
This thesis evaluates and compares current Full-Stack JavaScript technologies. Through extensive research on the state of the art of JavaScript and its related frameworks, different aspects of Full-Stack Development are analysed to judge the popularity of the technologies.
The language JavaScript and the idea of Full-Stack Development are presented together with the functionality of different frameworks. The JavaScript runtime Node.js was examined and identified as the most influential JavaScript technology, one that has opened up many opportunities.
The technology stacks MERN, MEAN, and MEVN were investigated, all featuring the base technologies Node.js, MongoDB, and Express.js. It was discovered that the front-end framework has the most influence on which Full-Stack variant can be chosen. The comparison criteria for the technology stacks were learning curve, maintainability, modularity, and media integration. These criteria were extracted from research and a questionnaire conducted with students of the University of Applied Sciences Offenburg.
To test and experience a Full-Stack JavaScript application, the game RemArrow, based on the 1978 game Simon, was designed and implemented. The comparison against the predefined criteria shows that the MERN stack with React.js is the easiest to learn and promises the most potential. Emerging JavaScript technologies and their popularity depend strongly on the industry and the skill set of the developer.
In conclusion, it can be established that the concept of Full-Stack Development is currently highly relevant and more than just a trend. It has the potential to become a new kind of web development and part of the curriculum taught at universities. Expert knowledge is required, but there is high demand and much potential for Full-Stack JavaScript developers.
In modern industrial automation systems, IT security can no longer be ignored. Cryptographic protection measures are necessary to secure network traffic. A common protection measure is the use of digital certificates for authorization and authentication. However, deploying certificates to end devices in a secure and controlled manner requires a public key infrastructure (PKI). Such PKIs have so far received little attention in the context of industrial automation. The Institute of Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University offers a possible solution, based on a central unit called the Credentialing Entity. A demonstrator of this concept has already been implemented in the widely used systems programming languages C and C++.
This thesis examines the use of the modern, memory-safe programming language Rust for systems programming as an alternative to the dominant languages C/C++, using the implementation of the Credentialing Entity as an example. Aspects such as Rust's advantages, its ecosystem, and its interoperability with C/C++ are investigated.
During the coronavirus crisis, labs in mechanical engineering had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of the respective lab was revised with regard to its implementation as a remote laboratory. Fortunately, real-world labs could be fully replaced by remote labs. Student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the lab, as well as the layout, procedure, and results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features that address the tactile experience of working in real-world labs.
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
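The dictionary-passing idea can be illustrated with a small sketch: instead of resolving a method call through the receiver's runtime type, the caller passes an explicit dictionary of method implementations, and the call becomes a simple lookup. This is an illustrative analogy in Python, not the paper's formal translation into the untyped lambda-calculus; all names are made up.

```python
# Dictionary-passing sketch: a "dictionary" is a plain record of method
# implementations, passed explicitly instead of resolved by dynamic dispatch.

# Two concrete "struct" types with their own area implementations.
def area_square(s):      # s plays the role of a Square struct
    return s["side"] ** 2

def area_rect(r):        # r plays the role of a Rect struct
    return r["w"] * r["h"]

# Dictionaries witnessing that each type implements a Shape interface.
shape_dict_square = {"area": area_square}
shape_dict_rect = {"area": area_rect}

# A function generic over any Shape: it receives the dictionary as an
# extra parameter and resolves the method call by lookup, with no
# runtime type information needed on the receiver.
def describe(shape_dict, value):
    return f"area = {shape_dict['area'](value)}"

print(describe(shape_dict_square, {"side": 3}))   # → area = 9
print(describe(shape_dict_rect, {"w": 2, "h": 5}))  # → area = 10
```

In the actual translation, these dictionaries are produced by the type-directed elaboration of FGG's structural subtyping, much like the dictionary-passing elaboration of Haskell type classes.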
CNN-based deep learning models for disease detection have recently become popular. We compared eight prominent deep learning models: DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogLeNet, MobileNet, and ResNet18 on their binary classification performance on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images, a knowledge gap remains in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined data from Shenzhen, China (CH) and Montgomery, USA (MC). We trained each model for binary classification, calculated different performance metrics for the mentioned models, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found distinct performance differences among the models on the pulmonary chest X-ray dataset, with DenseNet169 achieving 89.38 percent and MobileNet 92.2 percent precision.
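The precision figures reported above are computed from the confusion counts of each binary classifier. A minimal sketch of the metric computation, using made-up counts rather than the study's data:

```python
def precision(tp: int, fp: int) -> float:
    """Precision = true positives / all positive predictions."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Recall = true positives / all actual positives."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical confusion counts for two models on the same test set.
results = {
    "model_a": {"tp": 85, "fp": 10, "fn": 5},
    "model_b": {"tp": 83, "fp": 7, "fn": 7},
}
for name, c in results.items():
    print(f"{name}: precision={precision(c['tp'], c['fp']):.3f}, "
          f"recall={recall(c['tp'], c['fn']):.3f}")
```

Holding the training parameters fixed across models, as the study does, ensures that differences in these metrics reflect the architectures rather than the training setup.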
Garbage in, Garbage out: How does ambiguity in data affect state-of-the-art pedestrian detection?
(2024)
This thesis investigates the critical role of data quality in computer vision, particularly in the realm of pedestrian detection. The proliferation of deep learning methods has emphasised the importance of large datasets for model training, but the quality of these datasets is equally crucial. Ambiguity in annotations, arising from factors such as mislabelling, inaccurate bounding box geometry, and annotator disagreements, poses significant challenges to the reliability and robustness of pedestrian detection models and their evaluation. This work explores the effects of ambiguous data on model performance, focusing on identifying and separating ambiguous instances by employing an ambiguity measure based on annotator estimations of object visibility and identity. Through careful experimentation and analysis, trade-offs emerged between data cleanliness and representativeness, and between noise removal and retention of valuable data, elucidating their impact on performance metrics such as the log-average miss rate, recall, and precision. Furthermore, a strong correlation between ambiguity and occlusion was discovered, with higher ambiguity corresponding to greater occlusion prevalence. The EuroCity Persons dataset served as the primary dataset, revealing a significant proportion of ambiguous instances: approximately 8.6% ambiguity in the training set and 7.3% in the validation set. Results demonstrated that removing ambiguous data improves the log-average miss rate, particularly by reducing false positive detections. Augmenting the training data with samples from neighbouring classes enhanced recall but diminished precision. Correcting erroneous false positive and false negative annotations significantly impacts model evaluation results, as evidenced by shifts in the ECP leaderboard rankings.
By systematically addressing ambiguity, this thesis lays the foundation for enhancing the reliability of computer vision systems in real-world applications, motivating the prioritisation of developing robust strategies to identify, quantify and address ambiguity.
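An ambiguity measure built from annotator estimations of identity and visibility, as described above, could look like the following sketch. The schema, thresholds, and field names are illustrative assumptions, not the thesis's actual definition.

```python
def ambiguity(annotations: list[dict]) -> float:
    """Fraction of annotators who disagree with the majority identity label.

    Each annotation is a dict with an 'is_pedestrian' vote and an
    estimated 'visibility' in [0, 1] (illustrative schema).
    """
    votes = [a["is_pedestrian"] for a in annotations]
    majority = max(set(votes), key=votes.count)
    return sum(1 for v in votes if v != majority) / len(votes)

def is_ambiguous(annotations: list[dict], identity_threshold: float = 0.25,
                 visibility_threshold: float = 0.2) -> bool:
    """Flag an instance as ambiguous if annotators disagree on its identity,
    or if its mean estimated visibility is very low (heavy occlusion)."""
    mean_visibility = sum(a["visibility"] for a in annotations) / len(annotations)
    return (ambiguity(annotations) > identity_threshold
            or mean_visibility < visibility_threshold)

anns = [
    {"is_pedestrian": True, "visibility": 0.6},
    {"is_pedestrian": True, "visibility": 0.5},
    {"is_pedestrian": False, "visibility": 0.4},
]
print(is_ambiguous(anns))  # → True (one of three annotators disagrees)
```

Coupling the identity check with visibility mirrors the correlation between ambiguity and occlusion that the thesis reports.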
This work addresses the conceptualization, design, and implementation of an Application Programming Interface (API) for the Common Security Advisory Framework (CSAF) 2.0, introducing a third method for distributing CSAF documents alongside the two existing ones. The existing methods do not allow flexible queries or filtering, which makes CSAF difficult to use for operators of software and hardware. An API is intended to simplify this process and thus advance CSAF's goal of automation.
First, it is evaluated whether the current standard allows the implementation of an API. Any conflicts are highlighted and suggestions for standard adaptations are made. Based on these results, the API is designed to meet the previously defined requirements. Subsequently, a proof of concept is successfully developed according to the design and extensively tested with specially prepared test data. Finally, the results and the necessary standard adjustments are summarized and justified.
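The kind of query-and-filter capability described above can be sketched as server-side filtering over advisory metadata. The route structure, field names, and documents below are hypothetical examples; the actual API design and its relation to the CSAF 2.0 standard are specified in the thesis.

```python
# Sketch of server-side filtering over CSAF advisory metadata.
# All field names and advisory entries here are hypothetical examples.

ADVISORIES = [
    {"id": "EX-2023-001", "severity": "critical", "vendor": "ExampleCorp", "year": 2023},
    {"id": "EX-2023-002", "severity": "moderate", "vendor": "ExampleCorp", "year": 2023},
    {"id": "EX-2024-001", "severity": "critical", "vendor": "OtherVendor", "year": 2024},
]

def query_advisories(filters: dict) -> list[dict]:
    """Return advisories matching all given field filters, as they might be
    parsed from query parameters like ?severity=critical&year=2023."""
    return [a for a in ADVISORIES
            if all(a.get(field) == value for field, value in filters.items())]

print(query_advisories({"severity": "critical", "year": 2023}))
```

This is precisely the capability the two existing distribution methods lack: consumers must otherwise download and filter full document sets themselves.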
The conceptual design and the implementation were completed successfully, although some routes of the proof of concept could not be fully implemented.