The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator 3 (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of resource scheduling methods on uplink latency.
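The latency difference between the two schemes can be illustrated with a toy model (all timing values below are illustrative placeholders, not results from the paper): dynamic scheduling pays for a Scheduling Request/grant handshake on every transmission, while semi-persistent scheduling only waits for its next pre-configured allocation.

```python
import random

def dynamic_sched_latency(sr_period_ms=10, grant_delay_ms=8, tx_ms=4):
    # Wait for the next Scheduling Request opportunity (uniform over the
    # SR period), then the SR->grant round trip, then the transmission.
    sr_wait = random.uniform(0, sr_period_ms)
    return sr_wait + grant_delay_ms + tx_ms

def sps_latency(sps_period_ms=10, tx_ms=4):
    # With semi-persistent scheduling the grant is pre-configured: the UE
    # only waits for its next pre-allocated resource, no SR/grant handshake.
    wait = random.uniform(0, sps_period_ms)
    return wait + tx_ms

random.seed(1)
N = 10_000
dyn = sum(dynamic_sched_latency() for _ in range(N)) / N
sps = sum(sps_latency() for _ in range(N)) / N
print(f"mean uplink latency  dynamic: {dyn:.1f} ms   SPS: {sps:.1f} ms")
```

Even this crude model shows the structural saving: the SR/grant handshake is a fixed cost that semi-persistent scheduling removes entirely.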
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low Latency Communications (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The random access procedure is the bottleneck of medium access in 4G Long Term Evolution (LTE) networks; it is evaluated through simulations to further detail future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
Next-generation cellular networks are expected to improve reliability, energy efficiency, data rate, capacity and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart dustbins, but there is additional demand around applications with low latency requirements, like industrial automation, driverless cars, and so on. Improvements are required in 4G Long Term Evolution (LTE) networks towards the development of next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator NS3 for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low Latency Communication (URLLC) networks.
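As a sketch of such a latency breakdown (component names and values are illustrative assumptions, not the paper's measurements), the one-way uplink latency can be treated as a simple sum of its contributing delays, which also shows where reduction techniques would pay off most:

```python
# Illustrative uplink latency budget for LTE; all values are placeholders.
budget_ms = {
    "sr_wait": 5.0,        # average wait for a Scheduling Request opportunity
    "sr_to_grant": 8.0,    # SR processing + uplink grant delivery
    "ue_processing": 3.0,  # UE prepares the transport block after the grant
    "tx_alignment": 0.5,   # wait for the next subframe boundary
    "transmission": 1.0,   # actual TTI(s) on the air
}
total = sum(budget_ms.values())
print(f"total uplink latency: {total:.1f} ms")
for name, ms in budget_ms.items():
    print(f"  {name:14s} {ms:4.1f} ms ({100 * ms / total:.0f}%)")
```

In this toy budget the scheduling handshake dominates, which is consistent with the abstract's focus on resource scheduling as a latency lever.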
Although short-range wireless communication explicitly targets local and very regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. In particular, the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies that support Long Range (LoRa™), describes the outdoor setup at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results and discusses the strengths and weaknesses of this technology.
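The link-budget/range relationship mentioned above can be sketched as follows. The sensitivity and gain figures are illustrative assumptions (roughly in the range advertised for SX127x-class devices at high spreading factors; check the datasheet), and the path-loss exponent of the log-distance model is a free parameter:

```python
import math

def link_budget_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, rx_sensitivity_dbm):
    # Maximum tolerable path loss: everything the link can "spend" on the channel.
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rx_sensitivity_dbm

def max_range_m(mapl_db, freq_hz, path_loss_exp=2.7, d0=1.0):
    # Log-distance path-loss model: PL(d) = PL(d0) + 10 * n * log10(d / d0),
    # with free-space loss at the reference distance d0.
    c = 299_792_458.0
    pl_d0 = 20 * math.log10(4 * math.pi * d0 * freq_hz / c)
    return d0 * 10 ** ((mapl_db - pl_d0) / (10 * path_loss_exp))

# Illustrative settings: 14 dBm TX, 2 dBi antennas, SF12/125 kHz sensitivity.
mapl = link_budget_db(tx_power_dbm=14, tx_gain_dbi=2, rx_gain_dbi=2,
                      rx_sensitivity_dbm=-137)
print(f"link budget: {mapl} dB")
print(f"estimated range at 868 MHz: {max_range_m(mapl, 868e6) / 1000:.1f} km")
```

The point of the sketch is the sensitivity term: lowering the receiver sensitivity by choosing a slower modulation (higher spreading factor) adds directly to the budget and hence to range.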
Ripple: Overview and Outlook
(2015)
Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second-highest market cap after Bitcoin, there are surprisingly no studies analyzing the provisions of Ripple.
In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system.
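The kind of fork condition studied here can be sketched as a pairwise overlap check between validators' Unique Node Lists (UNLs). The threshold below is a hypothetical parameter for illustration only, not the necessary and sufficient bound derived in the paper:

```python
def can_fork(unl_a: set, unl_b: set, overlap_threshold: float) -> bool:
    # Two validator groups can diverge if their Unique Node Lists do not
    # overlap enough for one group's quorum decisions to constrain the other.
    overlap = len(unl_a & unl_b)
    smaller = min(len(unl_a), len(unl_b))
    return overlap / smaller < overlap_threshold

# Hypothetical validator sets with a single shared node (20% overlap):
a = {"v1", "v2", "v3", "v4", "v5"}
b = {"v5", "v6", "v7", "v8", "v9"}
print(can_fork(a, b, overlap_threshold=0.4))
```

The sketch only illustrates the shape of the argument: fork safety is a property of how much every pair of UNLs intersects, which is why a poor choice of parameters can leave forks possible.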
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
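Why a low operating frequency yields greater penetration depth can be sketched with the skin-depth formula. Note that this uses the good-conductor approximation, which only gives a rough trend for lossy soils at GPR frequencies, and the conductivity value is an illustrative assumption:

```python
import math

def skin_depth_m(freq_hz, conductivity_s_per_m, mu_r=1.0):
    # Skin depth in a conductive medium: delta = sqrt(2 / (omega * mu * sigma)).
    mu0 = 4 * math.pi * 1e-7
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * mu0 * conductivity_s_per_m))

# Moist soil, sigma ~ 0.01 S/m (illustrative):
for f in (100e6, 500e6, 1e9):
    print(f"{f / 1e6:6.0f} MHz -> skin depth {skin_depth_m(f, 0.01):5.2f} m")
```

The inverse square-root dependence on frequency is the design tension the abstract describes: halving the frequency buys penetration depth but forces a physically larger antenna.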
Many different methods, such as screen printing, gravure, flexography, inkjet etc., have been employed to print electronic devices. Depending on the type and performance of the devices, processing is done at low or high temperature using precursor- or particle-based inks. As a result of the processing details, devices can be fabricated on flexible or non-flexible substrates, depending on their temperature stability. Furthermore, in order to reduce the operating voltage, printed devices rely on high-capacitance electrolytes rather than on dielectrics. The printing resolution and speed are two of the major challenging parameters for printed electronics. High-resolution printing produces small-size printed devices and high-integration densities with minimum materials consumption. However, most printing methods have resolutions between 20 and 50 μm. Printing resolutions close to 1 μm have also been achieved with optimized process conditions and better printing technology.
The final physical dimensions of the devices pose severe limitations on their performance. For example, the channel lengths being of this dimension affect the operating frequency of the thin-film transistors (TFTs), which is inversely proportional to the square of channel length. Consequently, short channels are favorable not only for high-frequency applications but also for high-density integration. The need to reduce this dimension to substantially smaller sizes than those possible with today’s printers can be fulfilled either by developing alternative printing or stamping techniques, or alternative transistor geometries. The development of a polymer pen lithography technique allows scaling up parallel printing of a large number of devices in one step, including the successive printing of different materials. The introduction of an alternative transistor geometry, namely the vertical Field Effect Transistor (vFET), is based on the idea to use the film thickness as the channel length, instead of the lateral dimensions of the printed structure, thus reducing the channel length by orders of magnitude. The improvements in printing technologies and the possibilities offered by nanotechnological approaches can result in unprecedented opportunities for the Internet of Things (IoT) and many other applications. The vision of printing functional materials, and not only colors as in conventional paper printing, is attractive to many researchers and industries because of the added opportunities when using flexible substrates such as polymers and textiles. Additionally, the reduction of costs opens new markets. The range of processing techniques covers laterally-structured and large-area printing technologies, thermal, laser and UV-annealing, as well as bonding techniques, etc. 
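The inverse-square dependence of TFT operating frequency on channel length can be sketched with a first-order transit-frequency estimate (the mobility and effective-voltage values are illustrative assumptions for a printed TFT):

```python
import math

def transit_frequency_hz(mobility_cm2_vs, v_eff_v, channel_len_um):
    # First-order TFT transit frequency: f_T ~ mu * V / (2 * pi * L^2),
    # showing the inverse-square dependence on channel length L.
    mu = mobility_cm2_vs * 1e-4   # cm^2/Vs -> m^2/Vs
    length = channel_len_um * 1e-6  # um -> m
    return mu * v_eff_v / (2 * math.pi * length ** 2)

# Illustrative numbers: mobility 1 cm^2/Vs, 2 V effective gate overdrive.
for L in (50.0, 20.0, 1.0, 0.2):  # um; 0.2 um ~ a film thickness in a vFET
    print(f"L = {L:5.1f} um -> f_T ~ {transit_frequency_hz(1.0, 2.0, L):,.0f} Hz")
```

This is why the vertical FET geometry is attractive: replacing a printed lateral channel of tens of micrometres with a film thickness of a few hundred nanometres improves the frequency estimate by orders of magnitude.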
Materials, such as conducting, semiconducting, dielectric and sensing materials, rigid and flexible substrates, protective coating, organic, inorganic and polymeric substances, energy conversion and energy storage materials constitute an enormous challenge in their integration into complex devices.
This paper presents a system that uses a multi-stage AI analysis method for determining the condition and status of bicycle paths using machine learning methods. The approach for analyzing bicycle paths includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage of the analysis. This approach employs a low-threshold data collection method using smartphone-generated video data for image recognition, in order to automatically capture and classify surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven to be effective in image recognition tasks and are particularly well suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and classify them reliably.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
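The core CNN operation, 2D convolution over an image, can be sketched in pure Python on a toy "surface" patch, where an edge-detecting kernel responds strongly at a crack-like feature:

```python
def conv2d(image, kernel):
    # Valid-mode 2D convolution (strictly cross-correlation, as in most
    # deep-learning frameworks), implemented in pure Python on nested lists.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responds at the crack (the column of zeros):
surface = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
print(conv2d(surface, sobel_x))  # -> [[-4, 0, 4]]
```

A trained CNN learns many such kernels from labelled images instead of hand-crafting them, which is what makes it suitable for classifying surface conditions at scale.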
NEXCODE is a project promoted by the European Space Agency aimed at the research, design, development and demonstration of a receiver chain for telecommand links in space missions, including new short low-density parity-check codes for error correction. These codes have excellent performance from the error-rate viewpoint but also pose new challenges regarding synchronization and implementation. In this paper, after a short review of the results obtained through numerical simulations, we present an overview of the breadboard designed for practical testing and the test plan proposed for the verification of the breadboard and the validation of the new codes and novel synchronization techniques under relevant operating conditions.
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years. Given the current increase in the use of conferencing tools for online teaching, we ask just how easy (i.e., in terms of needed data, hardware and skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and align the identified as well as our own optimization techniques in the context of two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37% can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to disbelief by default.
As part of a master's thesis, an existing System-on-Chip design that processes incoming ECG data signals was extended so that it can be fully controlled and read out via the standardized SPI bus.
In this TDP we describe a new tool created for testing the strategy layer of our soccer playing agents. It is a complete 2D simulator that simulates the games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before due to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a measure to run simulations of game series much faster than with the 3D simulator. This way, the impact of different play strategies could be evaluated much faster than before.
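A minimal sketch of such a decision-driven 2D simulation loop (a toy stand-in, not the team's actual simulator; all class and policy details here are hypothetical) might look like this:

```python
import random

class Agent:
    # Hypothetical minimal agent: the real strategy layer would plug in here.
    def __init__(self, team, idx):
        self.team, self.idx = team, idx
        self.pos = [random.uniform(-50, 50), random.uniform(-30, 30)]

    def decide(self, ball_pos):
        # Toy policy: return a unit-step direction towards the ball.
        dx = ball_pos[0] - self.pos[0]
        dy = ball_pos[1] - self.pos[1]
        norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        return dx / norm, dy / norm

def step(agents, ball_pos, speed=0.5):
    # One simulation tick: query all 22 agents' decisions, then apply them.
    for agent in agents:
        mx, my = agent.decide(ball_pos)
        agent.pos[0] += speed * mx
        agent.pos[1] += speed * my

random.seed(0)
agents = [Agent(team, i) for team in ("home", "away") for i in range(11)]
ball = (0.0, 0.0)
for _ in range(100):
    step(agents, ball)
mean_dist = sum((a.pos[0] ** 2 + a.pos[1] ** 2) ** 0.5 for a in agents) / len(agents)
print(f"mean distance to ball after 100 ticks: {mean_dist:.1f} m")
```

Because the loop is deterministic given the agents' decisions, single-stepping, pausing, and injecting game states for debugging, the interaction methods the TDP describes, fit naturally into this structure.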
Accelerated transformation of society and industry through digitalization, artificial intelligence and other emerging technologies has intensified the need for university graduates who are capable of rapidly finding breakthrough solutions to complex problems and can successfully implement innovation concepts. However, only a few universities are making significant efforts to comprehensively incorporate the creative and systematic tools of TRIZ (theory of inventive problem solving) and KBI (knowledge-based innovation) into their degree structure. Engineering curricula offer little room for enhancing creativity and inventiveness by means of discipline-specific subjects. Moreover, many educators mistakenly believe that students are either inherently creative or will inevitably obtain adequate problem-solving skills as a result of their university study. This paper discusses the challenges of intelligently integrating TRIZ and KBI into university curricula. It advocates the development of standard guidelines and best-practice recommendations in order to facilitate the sustainable education of ambitious, talented, and inventive specialists. Reflections of educators who teach TRIZ and KBI to students from mechanical, electrical, process engineering, and business administration programmes are presented.
The design of control systems for concentrator photovoltaic power plants will become more challenging in the future. Reasons are cost pressure, the increasing size of power plants, and new applications for operation, monitoring and maintenance required by grid operators, manufacturers and plant operators. Concepts and products for fixed-mounted photovoltaics can only partly be adapted, since control systems for concentrator photovoltaics are considerably more complex due to the required highly accurate sun tracking. In order to assure reliable operation over a lifetime of more than 20 years, robustness of the control system is one crucial design criterion. This work considers common engineering techniques for robustness, safety and security. Potential failures of the control system are identified and their effects are analyzed. Different attack scenarios are investigated. The outcomes are design criteria that address both failures of system components and malicious attacks on the control system of future concentrator photovoltaic power plants. Such design criteria include transparent state management through all system layers, self-tests, and update capabilities for security concerns. The findings enable future research to develop a more robust and secure control system for concentrator photovoltaics when implementing new functionalities in the next generation.
The communication system of a large-scale concentrator photovoltaic power plant is very challenging. Manufacturers are building power plants with thousands of communication-equipped sun-tracking systems distributed over a wide area. Research is necessary to build a scalable communication system enabling modern control strategies. This poster abstract describes ongoing work on the development of a simulation model of such power plants in OMNeT++. The model uses the INET Framework to build a communication network based on Ethernet. First results and problems from timing and data transmission experiments are outlined. The model enables research on new communication and control approaches to improve the functionality and efficiency of power plants based on concentrator photovoltaic technology.