This thesis deals with Progressive Web Apps, covering both their development and their economic potential. It can serve as a guide when deciding whether a Progressive Web App should be used in a company.
G.R.E.C is an adventure game set in a dystopian industrial world, where you are a scavenger for hire. Explore the village of Vankhart Valley and grab everything valuable you can get your hands on.
Your trusty old jump boots will help you avoid the nasty and deadly spores that changed the world of G.R.E.C forever.
Schluckspecht project
(2022)
This thesis deals with the implementation of the SUBSCALE algorithm in the Python programming language. First, the current state of research and the needs of the target group are considered. Then, the choice of language is made based on these findings. The implementation is carried out on the basis of self-generated requirements.
Finally, the code is evaluated for accuracy, consistency, and execution time, as well as its applicability in practice.
Since the implementation produced in this work proved unconvincing, an approach is tested in which Python is used only as a front end.
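To illustrate the Python-as-front-end approach, here is a minimal sketch in which Python hands data to a compiled native core via ctypes. The library name libsubscale.so and the entry point subscale_run are illustrative assumptions, not names from the thesis:

```python
import ctypes
import numpy as np

# Load a hypothetical compiled SUBSCALE core; the library name and
# its entry point are assumptions for illustration only.
lib = ctypes.CDLL("./libsubscale.so")
lib.subscale_run.argtypes = [
    ctypes.POINTER(ctypes.c_double),  # flattened data matrix
    ctypes.c_int,                     # number of points
    ctypes.c_int,                     # number of dimensions
]
lib.subscale_run.restype = ctypes.c_int

def run_subscale(data: np.ndarray) -> int:
    """Hand a NumPy matrix to the native core and return its status code."""
    flat = np.ascontiguousarray(data, dtype=np.float64)
    ptr = flat.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    return lib.subscale_run(ptr, flat.shape[0], flat.shape[1])
```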
How can manufacturers or service companies provide better services with connected products without having a powerful IT infrastructure or the competences for software development?
Today, companies can turn to providers of remote IT infrastructure, which is called the Cloud.
Consequently, they do not have to manage security, updates, or infrastructure failures internally, as all of these are handled by the provider.
It is possible to outsource the development of the connected product's software to an external company. The question then becomes how quickly this company can move from one Cloud to another in order to fulfil its clients' wishes.
neverMind offers a solution based on a multi-protocol platform linking the different connected products to a multitude of Clouds without having to redesign the whole communication stack for each change of Cloud solution. This is the subject of my thesis.
The development follows the V-model; the first steps towards understanding the complexity of the project were drawing up the technical and architectural specifications of the product. The last step before implementation was to design in detail the workflow of every part of the platform.
The outcome of the requirements analysis led me to divide the project into two parts:
• a “General Interface” acting as a gateway between the client application and the “Cloud-modules”
• the “Cloud-modules” themselves.
So far, the specifications are drawn up; the General Interface and a client example are coded, as well as a first Cloud-module template.
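To make the gateway idea concrete, here is a minimal sketch of how a General Interface could dispatch to interchangeable Cloud-modules behind a common interface. All class and method names are hypothetical, not taken from the thesis:

```python
from abc import ABC, abstractmethod

class CloudModule(ABC):
    """Common contract every Cloud-module fulfils, so the
    General Interface never depends on a specific cloud's API."""

    @abstractmethod
    def publish(self, device_id: str, payload: dict) -> None: ...

class AwsModule(CloudModule):
    def publish(self, device_id: str, payload: dict) -> None:
        print(f"[AWS] {device_id}: {payload}")    # a real module would call the cloud SDK

class AzureModule(CloudModule):
    def publish(self, device_id: str, payload: dict) -> None:
        print(f"[Azure] {device_id}: {payload}")

class GeneralInterface:
    """Gateway between client applications and Cloud-modules:
    changing the cloud means swapping the module, nothing else."""

    def __init__(self, module: CloudModule) -> None:
        self.module = module

    def forward(self, device_id: str, payload: dict) -> None:
        self.module.publish(device_id, payload)

gateway = GeneralInterface(AwsModule())
gateway.forward("sensor-42", {"temp": 21.5})
```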
This paper describes a project carried out to increase the material flow through the LTCC production of the Bosch Anderson Plant in South Carolina, USA. To achieve this goal, the value stream under consideration is introduced first. The bottleneck limiting the material flow is identified and eliminated in order to increase the output of the machine and consequently improve the material flow through the whole value stream. The projects completed for this purpose result in a 13% increase. To control the material flow, the inventory sizes are determined. The inventories whose sizes are to be determined include climatization processes to dry the pastes applied in the previous process steps. Therefore, a separation of the parts into process climatization and buffering is necessary first. After that, the buffer can be eliminated and the inventory areas minimized. The results are smaller and controlled buffer sizes that free up part of the floor space. A welcome side effect is the solution to a production problem of warped parts caused by excessive climatization times. Observations over time show that the buffer limitations are just right to improve the material flow through the LTCC production.
In the field of network security, the detection of intrusions is an important task to prevent and analyse attacks.
In recent years, an increasing number of works have been published on this subject, which perform this detection based on machine learning techniques.
Not only the well-studied detection of intrusions but also real-time capability must be considered.
This thesis addresses the real-time functionality of machine learning based network intrusion detection.
For this purpose we introduce the network feature generator library PyNetFlowGen, which is designed to allow real-time processing of network data.
This library generates 83 statistical features based on reassembled data flows.
The performant Cython implementation processes individual packets within 4.58 microseconds.
Based on the generated features, machine learning models were examined with regard to their runtime and real-time capabilities.
The selected Decision-Tree-Classifier model created in Python was further optimised by transpiling it into C code, which reduced the prediction time of a single sample to 3.96 microseconds on average.
Based on the feature generator and the machine learning model, a basic IDS was implemented, which allows a data throughput between 63.7 Mbit/s and 2.5 Gbit/s.
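As an illustration of the pipeline described here, the following sketch trains a scikit-learn decision tree on flow features and times single-sample predictions. The data is a random placeholder for the 83 statistical features; PyNetFlowGen's actual API is not shown in this abstract:

```python
import time
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder for 83 statistical features per reassembled flow.
rng = np.random.default_rng(0)
X = rng.random((10_000, 83))
y = rng.integers(0, 2, 10_000)          # 0 = benign, 1 = intrusion

clf = DecisionTreeClassifier(max_depth=10).fit(X, y)

# Time single-sample predictions, as the real-time evaluation requires.
sample = X[:1]
start = time.perf_counter()
for _ in range(10_000):
    clf.predict(sample)
elapsed = (time.perf_counter() - start) / 10_000
print(f"avg prediction time: {elapsed * 1e6:.2f} µs per sample")
```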
When a patient with hearing aids needs to take part in audiometry procedures, they need to visit a specialist, which costs both time and money. Ideally, the patient should be able to conduct these tests alone, in their own time, and without additional costs. With this idea comes the question of whether this is possible and, if it is, how.
This thesis explores the throughput of Bluetooth Low Energy and whether it can be configured to provide a high enough data rate to send high-quality audio data with a lossless audio codec while communicating with a low-end device. Additionally, this thesis shows that using Rust to develop embedded software is possible and how it can make that process easier.
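To give a feel for the numbers involved, here is a back-of-the-envelope throughput estimate under assumed BLE link parameters; the packet count, payload size, and connection interval are illustrative assumptions, not the thesis' measured values:

```python
# Rough BLE throughput estimate under assumed link parameters.
conn_interval_s = 0.0075      # 7.5 ms connection interval (BLE minimum)
packets_per_event = 4         # assumed packets sent per connection event
payload_bytes = 244           # assumed ATT payload with data length extension

throughput_bps = packets_per_event * payload_bytes * 8 / conn_interval_s
print(f"link throughput: {throughput_bps / 1e6:.2f} Mbit/s")

# CD-quality PCM as an upper bound for the audio source rate:
pcm_bps = 44_100 * 16 * 2     # sample rate * bit depth * channels
print(f"uncompressed stereo PCM: {pcm_bps / 1e6:.2f} Mbit/s")
# A lossless codec typically roughly halves this, so the link budget is tight.
```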
The core logging and tracing facility in the Windows operating system is called Event Tracing for Windows (ETW).
Data sources providing events for ETW are instrumented all over the operating system.
That means most hard- and software assets in a Windows system are instrumented with ETW and so are able to contribute low-level information.
ETW can be used by developers and administrators to get low-level information about the operating system's activity.
We describe existing tools to interact with the ETW facility and evaluate them based on defined criteria.
Based on relevant application scenarios, we show the richness of informational content for debugging or detecting security incidents with ETW.
The widespread instrumentation of ETW in the operating system and its applications also results in security risks regarding confidentiality.
Based on common ETW providers, we show the impact on confidentiality of the information ETW offers an adversary.
Finally, we evaluate solutions and approaches for a customizable telemetry infrastructure using ETW in large-scale environments.
This thesis deals with the implementation of the character controls and combat system of the action adventure 'Scout 3D'. The game was developed with the game engine Unity 3D. In the first part, the architecture of a typical game engine is explained and its individual components are described step by step. Then, five well-known game engines are compared and evaluated. The next chapter gives a short overview of design and architecture patterns. Finally, the features of Unity used for the implementation, and Unity's animation system 'Mecanim', are described. The second part includes the requirement definitions for the game 'Scout COD', which define player input, the conditions that allow or disallow several activities, and the behaviour of enemies. With the help of these patterns, the architecture of the game is designed. Then, the implementation is explained by means of code snippets.
Implementation and Evaluation of an Assisting Fuzzer Harness Generation Tool for AUTOSAR Code
(2024)
Digitalization in vehicles tends to add more connectivity, such as over-the-air (OTA) updates. To achieve this, each ECU (Electronic Control Unit) becomes smarter and needs to support more and more externally available protocols such as TLS, which increases the attack surface. To ensure the security of a vehicle, fuzzing has proven to be an effective method to discover memory-related security vulnerabilities. Fuzzing the software running on an ECU is not an easy task and requires a harness written by a human. The author needs a deep understanding of the specific service and protocol, which is time-consuming. To reduce the time needed by a harness author, this thesis aims to develop FuzzAUTO, the first assisting harness generation tool targeting the AUTOSAR (AUTomotive Open System ARchitecture) BSW (Basic Software) to support manual harness generation.
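For readers unfamiliar with harnesses, here is a minimal illustration of the harness pattern using the Python fuzzer atheris as a stand-in. FuzzAUTO itself targets C code in the AUTOSAR BSW; the parsing function below is a made-up example, not code from the thesis:

```python
import sys
import atheris

@atheris.instrument_func
def parse_tls_record(data: bytes) -> None:
    """Hypothetical stand-in for the protocol code under test."""
    if len(data) < 5:
        return
    length = int.from_bytes(data[3:5], "big")
    _body = data[5:5 + length]

def TestOneInput(data: bytes) -> None:
    # The harness: feed fuzzer-generated bytes into the target function.
    parse_tls_record(data)

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```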
Garbage in, Garbage out: How does ambiguity in data affect state-of-the-art pedestrian detection?
(2024)
This thesis investigates the critical role of data quality in computer vision, particularly in the realm of pedestrian detection. The proliferation of deep learning methods has emphasised the importance of large datasets for model training, while the quality of these datasets is equally crucial. Ambiguity in annotations, arising from factors like mislabelling, inaccurate bounding box geometry and annotator disagreements, poses significant challenges to the reliability and robustness of the pedestrian detection models and their evaluation. This work aims to explore the effects of ambiguous data on model performance with a focus on identifying and separating ambiguous instances, employing an ambiguity measure utilizing annotator estimations of object visibility and identity. Through accurate experimentation and analysis, trade-offs between data cleanliness and representativeness, noise removal and retention of valuable data emerged, elucidating their impact on performance metrics like the log average miss-rate, recall and precision. Furthermore, a strong correlation between ambiguity and occlusion was discovered with higher ambiguity corresponding to greater occlusion prevalence. The EuroCity Persons dataset served as the primary dataset, revealing a significant proportion of ambiguous instances with approximately 8.6% ambiguity in the training dataset and 7.3% in the validation set. Results demonstrated that removing ambiguous data improves the log average miss-rate, particularly by reducing the false positive detections. Augmentation of the training data with samples from neighbouring classes enhanced the recall but diminished precision. Error correction of wrong false positives and false negatives significantly impacts model evaluation results, as evidenced by shifts in the ECP leaderboard rankings. By systematically addressing ambiguity, this thesis lays the foundation for enhancing the reliability of computer vision systems in real-world applications, motivating the prioritisation of developing robust strategies to identify, quantify and address ambiguity.
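As a sketch of how an ambiguity measure based on annotator disagreement over visibility and identity might be used to separate instances, consider the following; the scoring formula, threshold, and field names are assumptions for illustration, since the abstract does not spell out the exact measure:

```python
import statistics

def ambiguity_score(visibility_estimates, identity_votes):
    """Combine annotator disagreement on visibility (spread of
    estimates) with disagreement on identity (dissenting vote share)."""
    vis_spread = statistics.pstdev(visibility_estimates)
    majority = max(identity_votes.count(v) for v in set(identity_votes))
    id_disagreement = 1 - majority / len(identity_votes)
    return vis_spread + id_disagreement

annotations = [
    {"id": 1, "visibility": [0.9, 0.85, 0.9], "identity": ["ped", "ped", "ped"]},
    {"id": 2, "visibility": [0.3, 0.7, 0.5], "identity": ["ped", "rider", "ped"]},
]

THRESHOLD = 0.2   # assumed cut-off separating clean from ambiguous
clean = [a for a in annotations
         if ambiguity_score(a["visibility"], a["identity"]) < THRESHOLD]
print([a["id"] for a in clean])   # instance 2 is flagged as ambiguous
```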
The present document proposes a suitable thermal model for the cooling-down process of a single-piston, air-cooled reciprocating compressor. To this end, a thermographic camera is used to record the temperature of different measuring points under different operating conditions. This data is then analyzed with statistical tools and graphical visualization. The thermal phenomena present in the process are characterized according to the compressor's geometry. Finally, using the analysis and taking the thermal phenomena into consideration, the optimal thermal model is selected. This paper belongs to a bigger project whose last step is to simulate the compressor and verify the accuracy of the proposed model.
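As an illustration of the kind of model in question, the sketch below fits Newton's law of cooling, T(t) = T_amb + (T_0 - T_amb)·e^(-t/τ), to recorded temperatures. The measurement data here is invented; the thesis selects its model from the actual thermographic recordings:

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, T_amb, T0, tau):
    """Newton's law of cooling: exponential decay towards ambient."""
    return T_amb + (T0 - T_amb) * np.exp(-t / tau)

# Invented sample data standing in for the thermographic measurements.
t = np.array([0, 5, 10, 20, 40, 60], dtype=float)    # minutes
T = np.array([95.0, 78.0, 65.0, 48.0, 32.0, 26.5])   # °C

params, _ = curve_fit(cooling, t, T, p0=(25.0, 95.0, 15.0))
print(f"T_amb={params[0]:.1f} °C, T0={params[1]:.1f} °C, tau={params[2]:.1f} min")
```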
This thesis evaluates and compares current full-stack JavaScript technologies. Through extensive research on the state of the art of JavaScript and its related frameworks, different aspects of full-stack development are analysed to judge the popularity of technologies.
The JavaScript language and the idea of full-stack development are presented along with the functionality of different frameworks. The JavaScript runtime Node.js was examined and identified as the most influential JavaScript technology, one which opened up many opportunities.
The technology stacks MERN, MEAN, and MEVN were investigated, featuring the base technologies Node.js, MongoDB, and Express.js. It was discovered that front-end frameworks have the most influence on which variant of full stack can be chosen. Comparison criteria between the technology stacks were the learning curve, maintainability, modularity, and media integration. These criteria were extracted from research and a questionnaire conducted with students of the University of Applied Sciences Offenburg.
For the purposes of testing and experiencing a full-stack JavaScript application, the game RemArrow, based on the 1970s game Simon, was designed and implemented. The comparison with the predefined criteria shows that the MERN stack with React.js is the easiest to learn and promises the most potential. Emerging JavaScript technologies and their popularity depend heavily on the industry and the skill set of the developer.
In conclusion, it can be established that the concept of full-stack development is currently very interesting and more than just a trend. It has the potential to become a new kind of web development and part of the curriculum taught at universities. Expert knowledge is needed, but there is high demand and much potential for full-stack JavaScript developers.
This thesis deals with the creation of a cross-platform application using Xamarin.Forms. The application covers three platforms: Android, iOS, and UWP.
The application is a first concept of a possible feature for a companion application for LS telcom. The user can identify cell antennas using a map view and a camera view, making it an augmented reality application. Thus, the user can search for a specific cell and access various information that would otherwise be invisible, such as the frequency of the transmitting cells.
The cell data is generated from three different sources: Cartoradio, OpenCelliD, and the LS telcom databrowser. Eventually, the decision was made that the main source should be the LS telcom databrowser, which has multiple advantages over the other cell sources.
The cells on the map view are placed using the coordinates extracted from the source data. The cells on the camera view, however, are placed using calculations such as the Haversine formula, for the distance between the cell and the user, and the bearing formula, for the angle between them. Various settings allow users to personalize the application to their wishes.
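The two formulas mentioned are the standard great-circle calculations; a short sketch with illustrative coordinates:

```python
import math

R = 6_371_000  # mean Earth radius in metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres (Haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing from the user towards the cell, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

# Illustrative points: a user and a nearby cell tower.
print(f"{haversine_distance(48.473, 7.941, 48.476, 7.946):.0f} m")
print(f"{bearing(48.473, 7.941, 48.476, 7.946):.1f}°")
```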
This work addresses the conceptualization, design, and implementation of an Application Programming Interface (API) for the Common Security Advisory Framework (CSAF) 2.0, introducing another method for distributing CSAF documents in addition to the two already existing methods. These do not allow flexible queries or filtering, which makes it difficult for operators of software and hardware to use CSAF. An API is intended to simplify this process and thus advance the automation goal of CSAF.
First, it is evaluated whether the current standard allows the implementation of an API. Any conflicts are highlighted and suggestions for standard adaptations are made. Based on these results, the API is designed to meet the previously defined requirements. Subsequently, a proof of concept is successfully developed according to the design and extensively tested with specially prepared test data. Finally, the results and the necessary standard adjustments are summarized and justified.
The conceptual design and the implementation were successfully completed. However, during the implementation of the proof of concept, some routes could not be fully implemented.
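To illustrate what such an API route could look like, here is a minimal sketch with Flask; the endpoint, query parameters, and in-memory data store are hypothetical stand-ins, not the thesis' actual design:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store of CSAF document metadata.
ADVISORIES = [
    {"id": "CSAF-2023-0001", "vendor": "ExampleVendor", "severity": "high"},
    {"id": "CSAF-2023-0002", "vendor": "OtherVendor", "severity": "low"},
]

@app.route("/advisories")
def list_advisories():
    """Flexible filtering of the kind the two existing
    distribution methods do not offer."""
    vendor = request.args.get("vendor")
    severity = request.args.get("severity")
    result = [a for a in ADVISORIES
              if (vendor is None or a["vendor"] == vendor)
              and (severity is None or a["severity"] == severity)]
    return jsonify(result)

if __name__ == "__main__":
    app.run()   # e.g. GET /advisories?severity=high
```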
The objective of this thesis is the quantification and qualification of neonicotinoid insecticides using thin-layer chromatography (TLC). Neonicotinoids are a relatively new form of pesticides, which have been proven to be extremely lethal to the honey bee, Apis mellifera. In this paper, six neonicotinoid insecticides (i.e. Acetamiprid, Thiacloprid, Imidacloprid, Clothianidin, Thiamethoxam, and Nitenpyram) are analysed. The initial step is to find a suitable mobile-phase eluent, followed by the search for a reagent that causes the neonicotinoids to luminesce on a TLC plate. Subsequently, a calibration method is used to find the detection limit of this TLC experiment. The aim is, therefore, to achieve a standard method of quantifying and qualifying neonicotinoids via TLC. Whilst a suitable mobile phase has been established, an optimal fluorescent reagent has yet to be found, and more research on the subject must be carried out.
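The detection-limit step typically reduces to a linear calibration; the following sketch uses the common LOD = 3.3·σ/slope estimate. The calibration data here is invented, not taken from the thesis:

```python
import numpy as np

# Invented calibration data: concentration (µg/mL) vs. spot intensity.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([12.1, 23.8, 49.5, 97.0, 196.4])

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)      # standard error of the regression

lod = 3.3 * sigma / slope          # common IUPAC-style estimate
loq = 10 * sigma / slope
print(f"LOD ≈ {lod:.3f} µg/mL, LOQ ≈ {loq:.3f} µg/mL")
```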
The Timed-Up-and-Go (TUG) test aims to assess mobility, balance, walking ability, and fall risk during walking. The instrumentation of the TUG is already described in the literature and is beginning to be implemented in industry. The products proposed by Zhortech and Digitsole, namely connected insoles, together with additional sensors placed on the sternum and on the right and optionally the left femur, allow the instrumentation of the test.
An algorithm for detecting and evaluating the TUG was developed in two versions. The first (V1) simply calculates the total duration of the test. The second improves on V1 by segmenting the TUG into three sub-phases: sit-to-stand, walking, and stand-to-sit. Both algorithms exist in a variant with the five sensors mentioned and in one without the left femur sensor.
The performance of the algorithms was compared to manual labeling performed on video. The comparison includes a Bland-Altman plot and a correlation analysis for the total test duration as well as for the sub-phase durations in both variants.
The TUG duration shows very good results regarding the limits of agreement (lLoA = -0.33 s and uLoA = +0.6 s). The bias of 0.13 s indicates that the algorithm slightly overestimates the duration of the TUG. The results for the TUG sub-phases are less accurate. Although the correlation coefficient is between 0.76 and 0.96 for the different sub-phases, the limits of agreement are still very wide, between -0.71 s and -0.5 s for the lLoA and between +0.39 s and +0.58 s for the uLoA. These limits of agreement indicate that the sit-to-stand and stand-to-sit transitions are not yet detected accurately enough. The dispersion is high for a transition that can last between about one and six seconds. The two variants, with and without a sensor on the left femur, show similar results.
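The Bland-Altman quantities used above (bias and limits of agreement = bias ± 1.96·SD of the paired differences) are straightforward to compute; a sketch with invented paired durations:

```python
import numpy as np

# Invented paired TUG durations in seconds: algorithm vs. video reference.
algo = np.array([9.8, 12.1, 10.5, 14.3, 11.0])
video = np.array([9.6, 12.0, 10.2, 14.1, 10.8])

diff = algo - video
bias = diff.mean()
sd = diff.std(ddof=1)
lloa, uloa = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias={bias:.2f} s, lLoA={lloa:.2f} s, uLoA={uloa:.2f} s")
```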
The "Schluckspecht" project of the University of Offenburg participates in the European competition "Shell Eco-Marathon" (SEM), in which teams design and build, from scratch, a vehicle with the greatest possible energy efficiency. The University of Offenburg has taken part since 1998.
The Schluckspecht team is made up of around 30 students from the faculties of mechanical engineering, process engineering, electrical engineering, medical technology, and computer science, as well as the degree programme in Audiovisual Communication. The team was founded in 1998, and since then students have been developing and building high-efficiency vehicles to compete in the Shell Eco-Marathon.
In this project, students can put into practice the theoretical knowledge gained during their studies. They also learn to work in an interdisciplinary team, a skill that many companies require and seek.
The following topics are covered in the Schluckspecht project and are well suited to student work:
- Conception, construction, and production of high-efficiency vehicles.
- Computational design and manufacture of lightweight components and assemblies.
- Development of lightweight components and assemblies from renewable raw materials.
- Construction and development of special test benches, for example a motor test bench.
- Implementation and optimization of control strategies for autonomous driving.
- Mechanical and electrical integration of sensors for autonomous driving.
- Ergonomic studies and optimization of the driver's cabin.
The objective of the project is to develop and manufacture research vehicles that make individual mobility as energy-efficient as possible. To achieve this, current and future issues of the industry are addressed. The project investigates both the theory and practice of lightweight vehicle construction and friction reduction, a variety of propulsion concepts (electric drives, fuel cells, diesel/petrol engines, Stirling engines), and autonomous driving. The departments of the University of Offenburg join forces with external partners to make this wonderful project work.