The growing complexity of RF front-ends, which must support carrier aggregation and a growing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the nonlinearity of SAW devices. In the present work, we investigate temperature-compensating (TC) SAW devices on Lithium Niobate-rot128YX. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
In the present work, nonlinearities in temperature-compensating (TC) SAW devices are investigated. The materials used are LiNbO₃-rot128YX as the substrate and copper electrodes covered with a SiO₂ layer as the compensating layer. In order to understand the role of these materials in the nonlinearities of such acoustic devices, a FEM simulation model in combination with a perturbation approach is applied. The nonlinear tensor data of the different materials involved in TC-SAW devices have been taken from the literature, but were partially modified to fit experimental data by introducing scaling factors. An effective nonlinearity constant is determined by comparing nonlinear P-matrix simulations to IMD3 measurements of test filters. By employing these constants in nonlinear periodic P-matrix simulations, a direct comparison to nonlinear periodic FEM simulations yields the scaling factors for the materials used. Thus, the contribution of the different materials to the nonlinear behavior of TC-SAW devices is obtained, and the role of the metal electrodes is discussed in detail.
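The scaling-factor step described above can be illustrated by a one-parameter least-squares fit; the function and variable names below are our own, not taken from the paper:

```python
import numpy as np

# Hypothetical illustration: fit a single scaling factor s such that
# s * (simulated IMD3 levels) best matches the measured IMD3 levels
# (both in linear units), in the least-squares sense.
def fit_scaling_factor(simulated, measured):
    simulated = np.asarray(simulated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # Least-squares solution of s * simulated ≈ measured:
    #   s = (sim · meas) / (sim · sim)
    return float(simulated @ measured / (simulated @ simulated))

# Perfectly proportional synthetic data recovers s = 2.0
s = fit_scaling_factor([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

In the paper the comparison runs between periodic P-matrix and periodic FEM simulations; the one-line normal-equation solution above is only the algebraic core of such a fit.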
Flashcards are a well-known and proven method for learning and memorising. This way of learning is perfectly suited for “learning on the way,” but carrying all the flashcards around could be awkward. In this scenario, a mobile device (mobile phone) is an adequate solution. The new mobile operating system Android from Google allows for writing multimedia-enriched applications.
The developed solution enables the presentation of animations and 3D virtual reality (VR) on mobile devices and is well suited for mobile learning, thus creating new possibilities in the area of e-learning worldwide. Difficult relations in physics as well as intricate experiments in optics can be visualised on mobile devices without the need for a personal computer.
“Today’s network landscape consists of quite different network technologies, wide range of end-devices with large scale of capabilities and power, and immense quantity of information and data represented in different formats” [9]. Considerable effort is being made to establish open, scalable and seamless integration of various technologies and content presentation for different devices, including mobile ones, while considering the individual situation of the end user. This is very difficult because various kinds of devices are used by different users, at different times or in parallel by the same user, which is not predictable and has to be recognized by the system in order to know the device capabilities. Not only the devices but also the content and user interfaces are big issues, because they can include different kinds of data formats such as text, image, audio, video, 3D virtual reality data and other upcoming formats. The Language Learning Game (LLG) is an example of such a device-independent application, where different kinds of devices and data formats, as the content of a flashcard, are used for collaborative learning. The idea of this game is to create a short story in a foreign language by using mobile devices. The story is developed by a group of participants who exchange sentences via a flashcard system. In this way the participants can learn from each other through knowledge sharing, without fear of making mistakes, because the group members are anonymous. Moreover, they do not need constant support from a teacher.
Structures for interconnecting active microwave semiconductor devices, e.g. FETs and MICs, with their electrical surroundings or with each other have to be designed more and more carefully as the desired upper frequency limit increases. Therefore, several connecting structures for device embedding have been examined, mainly with regard to their applicability in the frequency range from 10 GHz to 100 GHz. Additionally, different equivalent circuits were developed to approximately describe their behaviour for CAD applications.
The title expresses goals the Kansas Geological Survey (KGS) has been working toward for some time. This report extends concepts and objectives developed while working on an earlier effort for effective interactive digital maps on the Internet. That work was reported to the 1998 DMT Workshop in Champaign, Illinois (Ross, 1998). The current project goes beyond previous efforts that focused on methods for serving the contents of a geographic information system (GIS): the points, lines, and polygons representing features of the digital geologic map, and the data in the attribute tables of the GIS describing those features.
The paper will focus on the activities of the International Year of Light and Optical Technologies 2015 (IYL), their impact on life, science, art, culture, education and outreach, and their importance in promoting the objectives of sustainable development. It describes our activities carried out in the run-up to and during the IYL, and reports on the generic projects that led to the success of the IYL. The success of the IYL is illustrated by examples and statistics. Building on the potential and success of the IYL, the impact and genesis of the International Day of Light (IDL) are presented. Impressions from the opening ceremony of the IYL in Paris at UNESCO headquarters and from the Inaugural Ceremony of the IDL will then be covered. A second focus is placed on the interdisciplinary media projects realized by the students of our university and dedicated to these events. Finally, an analysis of the impact and legacy of the IYL and IDL will be presented.
The University for Children is a very successful event aiming to spark children's interest in science, in this particular lecture in optics and photonics. It is from brain research that we know about the significant dependence of successful learning on the fun factor. Researchers in this field have shown that knowledge acquired with fun is stored longer in long-term memory and can be used both more efficiently and more creatively [1], [2]. Such an opportunity to inspire the young generation for science must not be wasted. The world of photonics and optics provides us with a nearly inexhaustible source of opportunities of this kind.
The United Nations have declared 2015 the International Year of Light and Light-based Technologies (IYL2015) [1]. As a main result, public interest is focused on both the achievements and the new frontiers of optics and photonics. This opens up new perspectives in the teaching and training of optics and photonics. In the first part of the paper, the author presents the numerous anniversaries occurring in the International Year of Light 2015, together with their importance to the development of science and technology. In the second part, we report on an interactive video projection at the opening ceremony of the IYL2015 in Paris on January 19-20, 2015. Students of Offenburg University established an interactive video projection which visualizes Twitter and Facebook messages posted with the hashtag #iyl2015 using a mapping technique. Thus, the worldwide community could interactively take part in the opening ceremony. Finally, upcoming global community projects related to optics and astronomy events are presented.
Mobile learning (m-learning) can be considered a new paradigm of e-learning. The developed solution enables the presentation of animations and 3D virtual reality (VR) on mobile devices and is well suited for mobile learning. Difficult relations in physics as well as intricate experiments in optics can be visualised on mobile devices without the need for a personal computer. By outsourcing the computational power to a server, worldwide coverage is achieved.
Photonics meet digital art
(2014)
The paper focuses on the work of an interdisciplinary project between photonics and digital art. The result is a poster collection dedicated to the International Year of Light 2015. In addition, an internet platform was created that presents the project. It can be accessed at http://www.magic-of-light.org/iyl2015/index.htm. From the idea to the final realization, the milestones with their tasks and steps are presented in the paper. As an interdisciplinary project, it involved students from technological degree programs as well as art students. The 2015 anniversaries of Alhazen (1015), De Caus (1615), Fresnel (1815), Maxwell (1865), Einstein (1905), and Penzias, Wilson and Kao (1965), and their milestone contributions to optics and photonics, are highlighted.
Theoretical details about optics and photonics are not common knowledge nowadays. Physicists are keen to scientifically explain ‘light,’ which has a huge impact on our lives. It is necessary to examine it from multiple perspectives and to make the knowledge accessible to the public in an interdisciplinary, scientifically well-grounded and appealing media format. To allow an information exchange on a global scale, our project “Invisible Light” establishes a worldwide accessible platform. Its contents are not created by a single entity but are user-generated with the help of the global community. The article describes the infotainment portal “Invisible Light,” which stores scientific articles about light and photonics and makes them accessible worldwide. All articles are tagged with geo-coordinates, so they can be uniquely identified and localized. A smartphone application is used for visualization, transmitting the information to users in real time by means of augmented reality. Thus, scientific information is made accessible to a broad audience in an attractive manner.
The paper focuses on a numerical model which describes the radial temperature evolution in an optical fiber during the heating and cooling process according to the SP1 approximation. Based on this model, experimental methods for temperature measurement with optical fibers and for splice process optimization can be developed.
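A minimal numerical sketch of such a radial temperature model is given below. It assumes pure heat conduction with a fixed surface temperature; the paper's SP1 approximation additionally treats radiative transfer, which this conduction-only toy omits, and all parameter values are illustrative:

```python
import numpy as np

# Sketch (not the authors' SP1 model): explicit finite-difference solution of
# radial heat conduction in a fiber cross-section,
#   dT/dt = alpha * ( d2T/dr2 + (1/r) * dT/dr ),
# with a fixed (Dirichlet) surface temperature.
def radial_cooling(T0, T_surf, radius, alpha, n=30, steps=3000):
    r = np.linspace(0.0, radius, n)
    dr = r[1] - r[0]
    dt = 0.2 * dr**2 / alpha          # respect the explicit stability limit
    T = np.full(n, T0, dtype=float)
    T[-1] = T_surf
    for _ in range(steps):
        Tn = T.copy()
        # interior nodes: cylindrical Laplacian
        Tn[1:-1] = T[1:-1] + alpha * dt * (
            (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
            + (T[2:] - T[:-2]) / (2.0 * dr * r[1:-1])
        )
        # symmetry at r = 0 (dT/dr = 0): Laplacian -> 4*(T[1]-T[0])/dr^2
        Tn[0] = T[0] + alpha * dt * 4.0 * (T[1] - T[0]) / dr**2
        Tn[-1] = T_surf               # fixed surface temperature
        T = Tn
    return r, T

# Cooling of a hot silica fiber: 2000 K core, 300 K surface,
# 62.5 µm radius, thermal diffusivity ~8.5e-7 m²/s (assumed values)
r, T = radial_cooling(2000.0, 300.0, 62.5e-6, 8.5e-7)
```

The time step is chosen from the explicit-scheme stability limit so the iteration cannot diverge; the resulting profile decreases monotonically from the core to the clamped surface value.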
In the brain-cell microenvironment, diffusion plays an important role: apart from delivering glucose and oxygen from the vascular system to brain cells, it also moves informational substances between cells. The brain is an extremely complex structure of interwoven, intercommunicating cells, but recent theoretical and experimental works showed that the classical laws of diffusion, cast in the framework of porous media theory, can deliver an accurate quantitative description of the way molecules are transported through this tissue. The mathematical modeling and the numerical simulations are successfully applied in the investigation of diffusion processes in tissues, replacing the costly laboratory investigations. Nevertheless, modeling must rely on highly accurate information regarding the main parameters (tortuosity, volume fraction) which characterize the tissue, obtained by structural and functional imaging. The usual techniques to measure the diffusion mechanism in brain tissue are the radiotracer method, the real time iontophoretic method and integrative optical imaging using fluorescence microscopy. A promising technique for obtaining the values for characteristic parameters of the transport equation is the direct optical investigation using optical fibers. The analysis of these parameters also reveals how the local geometry of the brain changes with time or under pathological conditions. This paper presents a set of computations concerning the mass transport inside the brain tissue, for different types of cells. By measuring the time evolution of the concentration profile of an injected substance and using suitable fitting procedures, the main parameters characterizing the tissue can be determined. This type of analysis could be an important tool in understanding the functional mechanisms of effective drug delivery in complex structures such as the brain tissue. 
It also offers possibilities for realizing optical imaging methods for in vitro and in vivo measurements using optical fibers. The model may also help in radiotracer biomarker models for understanding the mechanism of action of new chemical entities.
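The fitting procedure mentioned above can be sketched as follows, assuming a Gaussian concentration profile for a point source diffusing in an effective medium; the parameter names and values are ours, chosen only for illustration:

```python
import numpy as np

# For a point source in an effective medium, the concentration profile at
# time t is Gaussian: C(r, t) ∝ exp(-r^2 / (4 * Deff * t)).
# Plotting ln C against r^2 gives a line of slope -1/(4 * Deff * t),
# so Deff follows from a linear fit, and the tortuosity is
# lambda = sqrt(Dfree / Deff).
def fit_effective_diffusion(r, C, t):
    slope, _ = np.polyfit(np.asarray(r)**2, np.log(np.asarray(C)), 1)
    return -1.0 / (4.0 * slope * t)

# Synthetic profile with Deff = 2e-10 m^2/s measured at t = 60 s
Deff_true, t = 2e-10, 60.0
r = np.linspace(1e-4, 1e-3, 20)
C = np.exp(-r**2 / (4.0 * Deff_true * t))

Deff = fit_effective_diffusion(r, C, t)
lam = np.sqrt(7.6e-10 / Deff)   # Dfree for a small molecule (assumed value)
```

On noisy measured profiles the same log-linear fit applies, with the fit quality indicating how well the porous-media model describes the tissue.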
The authors focus in this paper on the description of polarization with the help of the Jones calculus and on the application of polarization in photography. Furthermore, the effect of the circular polarization filter is described using the Jones calculus. Also, an enhancement of the artistic and creative possibilities in photography through quantization or parametrization by the Jones matrices is presented.
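The core of the Jones calculus can be sketched in a few lines: polarization states are complex 2-vectors, optical elements are 2×2 matrices applied by multiplication, and a circular polarizing filter is a linear polarizer followed by a quarter-wave plate at 45°. The code below is a generic illustration, not taken from the paper:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)                # horizontal linear state
pol_x = np.array([[1, 0], [0, 0]], dtype=complex)  # linear polarizer (x-axis)

def qwp(theta):
    """Quarter-wave plate, fast axis at angle theta (up to a global phase)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                # rotation into axis frame
    retarder = np.array([[1, 0], [0, 1j]], dtype=complex)
    return R @ retarder @ R.T

# Circular polarizing filter: linear polarizer, then QWP at 45 degrees
circ = qwp(np.pi / 4) @ pol_x
out = circ @ H
# 'out' is (up to normalization and a global phase) a circular state:
# equal amplitudes with a 90-degree relative phase between components.
```

Chaining further elements is just more matrix products, which is what makes the Jones calculus convenient for analyzing filter stacks on a camera lens.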
We present our twenty years of experience in the live broadcasting of astronomical events, with the main focus on total lunar eclipses. Our efforts were motivated by the great impact and high number of viewers of these events. Visitors from over a hundred countries watched our live broadcasts. Our viewer record was set on July 27, 2018, with the live transmission of the total lunar eclipse from the Feldberg, the highest mountain in the Black Forest, attracting nearly half a million viewers in five hours.
An especially challenging activity was the live observation of the Mercury transit on 9 May 2016, which we presented as ‘live astronomy’ with a hands-on telescope. The main goal of this event was to awaken our students' enthusiasm for optics and astronomy.
Furthermore, we report on our experiences with the photography of optical phenomena such as polar lights and green flash.
Art and Photonics
(2019)
In this paper we report on our continuous efforts to apply optics and photonics in art. This results in interdisciplinary projects which sometimes lead to concrete art installations.
We presented some of these projects at the UNESCO headquarters in Paris, at the opening ceremony of the International Year of Light and the inaugural ceremony of the International Day of Light.
Some newer projects, such as “A Maze: Ingenious Pipes” and “The Power of Your Eyes,” are also presented in this paper.
After the successful International Year of Light 2015, the idea of sustaining its momentum became increasingly prominent. After a preparatory year, the International Day of Light was launched for the first time on 16 May 2018. This event was marked with a public celebration at the UNESCO headquarters in Paris. In this paper we present our projects dedicated to the International Day of Light in Paris. Together with a group of students from our university, we had the special opportunity to be integrated into the program of the opening ceremony at UNESCO in Paris. With our interdisciplinary projects we have tried to build a bridge between optics, photonics, art and media installations.
Astronomical phenomena fascinate people from the very beginning of mankind up to today. In this paper the authors will present their experience with photography of astronomical events. The main focus will be on aurora borealis, comet Neowise, total lunar eclipses and how mobile devices open up new possibilities to observe the green flash. Our efforts were motivated by the great impact and high number of viewers of these events. Visitors from over a hundred countries watched our live broadcasts.
Furthermore, we report on our experiences with the photography of optical phenomena such as polar lights (Fig. 1), comet Neowise with a Delta Aquariids meteor (Fig. 11), and lunar eclipses (Fig. 12).
Teaching and learning concepts that are adapted to the constantly evolving requirements due to rapid technological progress are essential for teaching in media photonics technology. After the development of a concept for research-oriented education in optics and photonics, the next step will be a conceptual restructuring and redesign of the entire curriculum for education in media photonics technology. By including typical research activities as essential components of the learning process, a broad platform for practical projects and applied research can be created, offering a variety of new development opportunities.
Our university carries out various research projects. Among others, the project Schluckspecht is an interdisciplinary effort on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. In order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. By utilizing a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented by various additional information, whereby that information should appear at the correct position of the respective component. An engine, for example, could show the current rpm and consumption values; a battery could show its current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
A former remote area power supply was converted into a smart cogeneration subnet with combined heat and power in order to develop and validate a forecast-based energy management at the University of Applied Sciences in Offenburg, Germany. Locally processed weather forecasts and forecasted demand profiles are integrated to allow a precise reaction to changes in fluctuating power sources and in scheduled demand profiles, and to improve the energy efficiency of the supply. The management of the electrical and thermal storages is influenced by the forecasted energy contributions and the forecasted demand. Further approaches should improve the accuracy of the forecasting algorithms and integrate parameter models obtained from detailed monitoring to realize predictive controllers.
In this paper we report on the further success of our work to develop a multi-method energy optimization based on a digital twin concept. The twin concept serves to replicate the production processes of different kinds of production companies, including complex energy systems, and to test market interactions, which are then used for model-predictive optimization. The presented work reports on the performed flexibility assessment, which leads to a flexibility audit with a list of measures, and on the impact of the energy optimizations with respect to interactions with the local power grid, i.e. the exchange node of the low-voltage distribution grid. The analysis and continuous exploration of flexibilities, as well as the exchange with energy markets, require a “guide” towards continuous optimization: a further tool, the Flexibility Survey and Control Panel, supports decision-making processes on the day-ahead horizon for real production plants as well as investment planning to improve machinery, staff schedules and production infrastructure.
To meet the requirements of smart grids, local decentralized subnets will offer additional potential to stabilize and support the utility grid, mainly at the low-voltage level. In a quite complex configuration, these decentralized energy systems combine power, heat and cooling distribution. Depending on the regional and local availability of renewable energy sources, advanced energy management concepts should consider climatic conditions as well as the state of the interacting utility grid and the consumption profiles. The approach uses demonstration setups to develop a forecast-based energy management for trigeneration subnets, taking into account the running conditions of the local electrical and thermal energy conversion units. This should lead to the best coverage of the demand while supporting and stabilizing the utility grid at the same time. For the first of three demonstration projects, the priority of the subnet is the maximization of CHP operation in order to substitute a major part of the heating and cooling power otherwise delivered by electric heaters or compression chillers.
Sustainability aspects force a building manager to continuously observe the actual states and developments concerning building use and energy and media flows. In the presented approach, a communication structure was built up to use different software applications and tools in order to optimize the operation of the building.
The PHOTOPUR project aims to develop a photocatalytic process, a type of AOP (Advanced Oxidation Process), for the elimination of plant protection products (PPP) from the cleaning water used to wash sprayers. At INES, a PV-based energy supply for the photocatalytic cleaning system was developed within the framework of two bachelor theses and assembled as a demonstration unit. The system was then extended step by step with further process automation features and connected to a remote operating device. The final system is now available as a mobile unit mounted on a lab table. The latest step was the photocatalytic reactor module, which completed the first PHOTOPUR prototype. The system is currently undergoing an intensive testing phase with performance checks at the consortium partners. First results give an overview of the successful operation.
An energy-oriented design concept was developed within the research project PHOTOPUR, whose main focus is the development of a PV-powered water cleaning system. During a wine season, plant protection products (PPP) are sprayed on the plants several times to protect them from undesired insects and herbs or to avoid hazardous fungus types. A work package of the project partner INES in Offenburg led to a design that introduces energy profiling at the very beginning of a product design. The concept is based on three pillars, respecting first the requirements of the core process of filtering and cleaning, and secondly the aspects that run, support, maintain and monitor the system to secure availability and product reliability. The presented paper shows that the results of the design tools guided the developers in assembling a functional model of the water decontamination unit, which was manually tested with its concatenated steps of the water cleaning process.
Three real-lab trigeneration microgrids are investigated in non-residential environments (educational, office/administrational, companies/production) with a special focus on domain-specific load characteristics. For accurate load forecasting on such a local level, a priori information on scheduled events has been combined with statistical insight from historical load data (capturing information on not explicitly known consumer behavior). The load forecasts are then used as data input for (predictive) energy management systems implemented in the trigeneration microgrids. In real-world applications, these energy management systems must especially be able to carry out a number of safety and maintenance operations on components such as the battery (e.g. gassing) or the CHP unit (e.g. regular test runs). Therefore, energy management systems should combine heuristics with advanced predictive optimization methods. To reduce the effort in IT infrastructure, the main and safety-relevant management process steps are performed on site using a Smart & Local Energy Controller (SLEC), assisted by locally measured signals or operator-given information as defaults, and by external inputs for any advanced optimization. Heuristic aspects for the local fine adjustment of energy flows are presented.
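The heuristic layer mentioned above might, in a highly simplified form, look like the following toy dispatch rule. This is our own sketch for illustration, not the SLEC logic; all names and limits are invented:

```python
# Toy rule set: per time step, a heuristic decides how a battery absorbs
# forecasted surplus or covers forecasted deficit, within power and
# state-of-charge limits. Positive power = charging.
def dispatch(gen_kw, load_kw, soc_kwh, cap_kwh, dt_h=0.25, p_max_kw=10.0):
    """Return (battery_power_kw, new_soc_kwh) for one time step."""
    surplus = gen_kw - load_kw
    # clamp to the battery's power rating
    p = max(-p_max_kw, min(p_max_kw, surplus))
    if p > 0:                       # charging limited by free capacity
        p = min(p, (cap_kwh - soc_kwh) / dt_h)
    else:                           # discharging limited by stored energy
        p = max(p, -soc_kwh / dt_h)
    return p, soc_kwh + p * dt_h

# 5 kW forecasted generation vs. 2 kW load: charge with the 3 kW surplus
p1, soc1 = dispatch(5.0, 2.0, soc_kwh=0.0, cap_kwh=10.0)
# 5 kW deficit, but only 1 kWh stored: discharge is limited to 4 kW
p2, soc2 = dispatch(0.0, 5.0, soc_kwh=1.0, cap_kwh=10.0)
```

A predictive optimizer would replace this single-step rule with a look-ahead over the forecast horizon; the heuristic remains useful as a safe on-site fallback.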
The twin concept is increasingly used for optimization tasks in the context of Industry 4.0 and digitization. The twin concept can also help small and medium-sized enterprises (SME) to exploit their energy flexibility potential and to achieve added value by appropriate energy marketing. At the same time, this use of flexibility helps to realize a climate-neutral energy supply with high shares of renewable energies. The digital twin reflects real production, power flows and market influences as a computer model, which makes it possible to simulate and optimize on-site interventions and interactions with the energy market without disturbing the real production processes. This paper describes the development of a generic model library that maps flexibility-relevant components and processes of SME, thus simplifying the creation of a digital twin. The paper also includes the development of an experimental twin consisting of SME hardware components and a PLC-based SCADA system. The experimental twin provides a laboratory environment in which the digital twin can be tested, further developed and demonstrated on a laboratory scale. Concrete implementations of such a digital twin and experimental twin are described as examples.
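The idea of a generic model library can be hinted at with a toy component class; all names and numbers below are invented for illustration and do not come from the described library:

```python
from dataclasses import dataclass

# Toy illustration: each flexibility-relevant component exposes its
# electrical power as a function of its operating state, so components
# can be composed into a simple site-level twin.
@dataclass
class Machine:
    name: str
    p_on_kw: float     # active power when running
    p_idle_kw: float   # standby power

    def power(self, running: bool) -> float:
        return self.p_on_kw if running else self.p_idle_kw

def site_load(components, states):
    """Total power of the twin for one on/off schedule step, in kW."""
    return sum(c.power(s) for c, s in zip(components, states))

press = Machine("hydraulic press", 45.0, 2.0)
oven = Machine("curing oven", 30.0, 5.0)
# shifting the press out of this step changes the load at the grid node
load = site_load([press, oven], [True, False])   # 45 + 5 = 50 kW
```

A real model library would add thermal states, ramp constraints and cost terms per component; the compositional pattern, however, is the same.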
Cardiac resynchronization therapy (CRT) with hemodynamic optimized biventricular pacing is an established therapy for heart failure patients with sinus rhythm, reduced left ventricular ejection fraction and wide QRS complex. The aim of the study was to evaluate electrical right and left cardiac atrioventricular delay and left atrial delay in CRT responder and non-responder with sinus rhythm.
Methods: Heart failure patients with New York Heart Association class 3.0 ± 0.3, sinus rhythm and 27.7 ± 6.1% left ventricular ejection fraction were measured by surface ECG and transesophageal bipolar left atrial and left ventricular ECG before implantation of CRT devices. Electrical right cardiac atrioventricular delay was measured between onset of P wave and onset of QRS complex in the surface ECG, left cardiac atrioventricular delay between onset of left atrial signal and onset of left ventricular signal in the transesophageal ECG and left atrial delay between onset and offset of left atrial signal in the transesophageal ECG.
Results: The electrical atrioventricular and left atrial delays were 196.9 ± 38.7 ms (right cardiac atrioventricular delay), 194.5 ± 44.9 ms (left cardiac atrioventricular delay) and 47.7 ± 13.9 ms (left atrial delay). There was a positive correlation between right and left cardiac atrioventricular delay (r = 0.803, P < 0.001) and a negative correlation between left atrial delay and left ventricular ejection fraction (r = −0.694, P = 0.026), with 67% CRT responders.
Conclusions: Transesophageal electrical left cardiac atrioventricular delay and left atrial delay may be useful preoperative atrial desynchronization parameters to improve CRT optimization.
As part of the design education at Offenburg University, the teaching in technical documentation is continuously optimised. In this study, numerous mechanical engineering students, ages 19 to 29, are observed using eye tracking technology and a video camera while performing various design exercises. The aim of the study is to enhance the students’ ability to read, understand and analyse complex engineering drawings. In one experiment, the students are asked to perform the “cube perspective test” after Stumpf and Fay to assess their ability for mental rotation as part of spatial visualization ability. Furthermore, the students are asked to prepare and give micro presentations on a topic related to their studies. Students have a maximum of 100 s for these presentations. Thus, they can practise presenting important information in a short amount of time, show their rhetorical skills and demonstrate their acquisition of basic knowledge. During the presentation, the eye movement of a few selected students is recorded to analyse their information acquisition. In a further test, the students’ eye movements are analysed while reading an engineering drawing that consists of multiple views. All the spatial connections have to be inferred from the different component views. Using these and their acquired knowledge, the students are asked to identify the correct representation of a component view. Furthermore, the subjects describe the function of an assembly, a parallel gripper, and are then asked to mentally disassemble it to replace a damaged cylindrical pin. Simultaneously, they are filmed with a video camera to record which terms the students use for the individual technical components. The evaluation of the eye movements shows that the increasing digitalisation of society and the use of electronic devices in everyday life lead to fast and only selective perceptual behaviour, and that students feel insecure when dealing with technical drawings. The analysis of the videos shows a mostly non-technical and inaccurate manner of expression and a poor use of technical terms. The transferability of the achieved results to other technical tasks is part of further investigations.
Peer-to-peer energy trading and local electricity markets have been widely discussed as new options for the transformation of the energy system from the traditional centralized scheme to a novel decentralized one. Moreover, peer-to-peer trading has also been proposed as a more favourable alternative to expiring feed-in tariff policies that promote investment in renewable energy sources. Peer-to-peer energy trading is usually defined as the integration of several innovative technologies that enable both prosumers and consumers to trade electricity, without intermediaries, at a consented price. Furthermore, the techno-economic aspects go hand in hand with the socio-economic aspects, which in the end represent significant barriers that need to be tackled to reach a higher impact on current power systems. Applying a qualitative analysis, two scalable peer-to-peer concepts are presented in this study, together with the possible participants' entry probability into such concepts. Results show that consumers with a preference for environmental aspects generally have a higher willingness to participate in peer-to-peer energy trading. Moreover, battery storage systems are a key technology that could elevate the entry probability of prosumers into a peer-to-peer market.
Gaps in basic math knowledge are among the biggest obstacles to a successful start in university. Students starting their studies in STEM disciplines display significant diversity, “math anxiety” is a widespread phenomenon, and the transition to a self-determined way of studying presents a huge challenge. Universities offer support measures such as preparatory courses. Over the years, Offenburg University realized that with increased diversity, traditional ways of teaching in front of the class have become inefficient. The majority of the students remained inactive and just listened to the teachers’ explanations and the few active participants’ answers.
Since 2013, our course concept has fostered a shift from teaching to active learning on a large scale, involving several hundred participants in our on-site preparatory math courses. This switch to broad active practicing, however, must go hand in hand with individual support for an increasingly diverse student body. Since students now bring along their own mobile devices, the training app TeachMatics serves as a facilitator. The course concept has been very well received by both students and teachers.
The present work describes an extension of current slope estimation for parameter estimation of permanent magnet synchronous machines operated at inverters. The area of operation for current slope estimation in the individual switching states of the inverter is limited due to measurement noise, bandwidth limitation of the current sensors and the commutation processes of the inverter's switching operations. Therefore, a minimum duration of each switching state is necessary, limiting the final area of operation of a robust current slope estimation. This paper presents an extension of existing current slope estimation algorithms resulting in a greater area of operation and a more robust estimation result.
In this work, a method for estimating the current slopes induced by inverters operating interior permanent magnet synchronous machines is presented. After the derivation of the estimation algorithm, the requirements for a suitable sensor setup in terms of accuracy, dynamics and electromagnetic interference are discussed. The boundary conditions for the estimation algorithm are presented with respect to application within high-power traction systems. The estimation algorithm is implemented on a field-programmable gate array (FPGA). This moving least-squares algorithm offers the advantage that it does not depend on stored sample vectors, so not every measured value has to be kept: summing the measured values instead leads to a significant reduction of the required storage units and thus decreases the hardware requirements. The algorithm is designed to complete within the dead time of the inverter. Appropriate countermeasures for disturbances and hardware restrictions are implemented, and the results are discussed afterwards.
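The storage-free least-squares idea described above can be sketched as follows. This is a hypothetical illustration only, not the authors' FPGA implementation: the slope of a straight-line fit needs just a handful of running sums, so no sample vector has to be stored.

```python
def fit_slope(samples):
    """Estimate a slope di/dt from (time, current) samples using only
    five running sums, so no sample vector has to be stored
    (illustrative sketch of a moving least-squares estimator)."""
    n = s_t = s_i = s_tt = s_ti = 0.0
    for t, i in samples:
        # each new measurement only updates the accumulators
        n += 1.0
        s_t += t
        s_i += i
        s_tt += t * t
        s_ti += t * i
    # closed-form least-squares slope from the accumulated sums
    return (n * s_ti - s_t * s_i) / (n * s_tt - s_t * s_t)
```

In hardware, these accumulators map to a few registers that are reset at the start of each inverter switching state, which is what makes the approach attractive within the inverter dead time.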
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation of the parameters the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters are linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
A Novel Approach of High Dynamic Current Control of Interior Permanent Magnet Synchronous Machines
(2019)
Harmonic effects in permanent magnet synchronous machines with high power density can hardly be handled by traditional PI current controllers due to their limited bandwidth. As a consequence, ripples appear in the currents and ultimately in the torque. In this paper, a new deadbeat current controller architecture is presented which is capable of counteracting these harmonics. The new control algorithm, here named “Hybrid Deadbeat Controller”, combines the stability and low steady-state errors of common PI regulators with the high dynamics of deadbeat control. The proposed controller can either compensate the current harmonics to obtain smoother currents or track a varying reference value to achieve a smoother torque. The information needed to calculate the optimal reference currents is based on an online parameter estimation feeding an optimization algorithm for optimal torque output and will be investigated in future research. To ensure stability over the whole area of operation, even under effects that change the system's parameters, this work also focuses on the robustness of the hybrid deadbeat controller.
Due to the increasing aging of the population, the number of elderly people requiring care is growing in most European countries. However, the number of caregivers working in nursing homes and in daily care services is declining in countries like Germany and Italy, which limits the time for interpersonal communication. Furthermore, as a result of the Covid-19 pandemic, social distancing during contact restrictions became more important, causing an additional reduction of personal interaction. This social isolation can strongly increase emotional stress. Robotic assistance could contribute to addressing this challenge on three levels: (1) supporting caregivers in responding individually to the needs of patients and residents in nursing homes; (2) observing patients' health and emotional state; (3) complying with high hygiene standards and minimizing human contact if required. To further the research on emotional aspects and the acceptance of robotic assistance in care, we conducted two studies in which elderly participants interacted with the social robot Misa. Facial expression and voice analysis were used to identify and measure the emotional state of the participants during the interaction. While interpersonal contact plays a major role in elderly care, the findings reveal that robotic assistance generates added value for both caregivers and patients, and that patients show emotions while interacting with the robot.
Voice user interfaces (VUIs) offer an intuitive, fast and convenient way for humans to interact with machines and computers. Yet, whether they’ll be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users are facing in voice-driven human-computer interactions. The results of our questionnaire show that irritations appear in six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design and voice characteristics. Our findings can help identify key areas of work to optimize voice user experience in order to achieve greater adaptation of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
In order to attract new students, German universities must provide quick and easy access to relevant information. A chatbot can help increase the efficiency of academic advising for prospective students. In this study we evaluate the acceptance and effects of chatbots in German student-university communication. We conducted a qualitative UX study with the chatbot prototype of Offenburg University of Applied Sciences (HSO) in order to determine which features are particularly relevant and which requirements users have. The results show that acceptance increases if the chatbot offers quick and adequate assistance. Furthermore, our participants preferred an informal communication style and valued friendly and helpful personality traits in chatbots.
Existing approaches solving multi-vehicle pickup and delivery problems with soft time windows typically use common benchmark sets to verify their performance. However, there is a gap from these benchmark sets to real world problems with respect to instance size and problem complexity. In this paper we show that a combination of existing approaches together with improved heuristics is able to deal with the instance sizes and complexity of real world problems. The cost savings potential of the heuristics is compared to human dispatching plans generated from the data of a European carrier.
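As an illustration of the kind of building block such heuristics use, here is a minimal cheapest-insertion step for one pickup/delivery pair. This is a sketch only, with hypothetical node indices; the heuristics in the paper additionally handle time windows, capacities and multiple vehicles.

```python
def route_cost(route, dist):
    """Total travel cost of a route given a distance matrix."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def cheapest_insertion(route, pickup, delivery, dist):
    """Insert a pickup/delivery pair into an existing route at the
    positions with minimal total cost, keeping the pickup before the
    delivery (simplified: no time-window or capacity checks)."""
    best = None
    for i in range(1, len(route) + 1):          # pickup position
        for j in range(i + 1, len(route) + 2):  # delivery position
            cand = route[:i] + [pickup] + route[i:]
            cand = cand[:j] + [delivery] + cand[j:]
            cost = route_cost(cand, dist)
            if best is None or cost < best[0]:
                best = (cost, cand)
    return best[1]
```

Real-world instance sizes mainly affect how many such insertion candidates must be evaluated, which is why the paper combines this kind of local step with improved global heuristics.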
Polyarticulated active prostheses constitute a promising solution for upper limb amputees. The bottleneck for their adoption though, is the lack of intuitive control. In this context, machine learning algorithms based on pattern recognition from electromyographic (EMG) signals represent a great opportunity for naturally operating prosthetic devices, but their performance is strongly affected by the selection of input features. In this study, we investigated different combinations of 13 EMG-derived features obtained from EMG signals of healthy individuals performing upper limb movements and tested their performance for movement classification using an Artificial Neural Network. We found that input data (i.e., the set of input features) can be reduced by more than 50% without any loss in accuracy, while diminishing the computing time required to train the classifier. Our results indicate that input features must be properly selected in order to optimize prosthetic control.
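For readers unfamiliar with EMG-derived features, a few classic time-domain examples can be computed in a handful of lines. The abstract does not list the 13 features studied, so the four below (MAV, RMS, waveform length, zero crossings) are common examples chosen for illustration, not the paper's exact feature set.

```python
import math

def emg_features(signal):
    """Classic time-domain EMG features often used as classifier
    inputs: mean absolute value (MAV), root mean square (RMS),
    waveform length (WL) and zero crossings (ZC)."""
    n = len(signal)
    mav = sum(abs(x) for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    # WL: cumulative length of the waveform over the window
    wl = sum(abs(b - a) for a, b in zip(signal, signal[1:]))
    # ZC: number of sign changes between consecutive samples
    zc = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

Feature-selection studies like the one above compare classifier accuracy when such features are included or left out of the input vector.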
This paper describes the new Sweaty II humanoid adult size robot trying to qualify for the RoboCup 2016 adult size humanoid competition. Based on experiences during RoboCup 2014, the Sweaty robot has been completely redesigned to a new robot Sweaty II. A major change is the use of linear actuators for the legs. Another characteristic is its indirect actuation by means of rods. This allows a variable transmission ratio depending on the angle of a joint.
This paper describes the new Sweaty humanoid adult size robot trying to qualify for the RoboCup 2014 adult size humanoid competition. The robot is built from scratch to eventually allow it to run. One characteristic is that water evaporation is used for cooling to prevent the motors from overheating: the robot is literally sweating, which has given it its name. Another characteristic is that the motors are not directly connected to the frame but by means of beams. This allows a variable transmission ratio depending on the angle.
This paper describes the Sweaty II humanoid adult size robot trying to qualify for the RoboCup 2017 adult size humanoid competition. Sweaty came 2nd in RoboCup 2016 adult size league. The paper describes the main characteristics of Sweaty that made this success possible, and improvements that have been made or are planned to be implemented for RoboCup 2017.
Alexander von Humboldt, a German scientist and explorer of the 19th century, viewed the natural world holistically and described the harmony of nature among the diversity of the physical world as a conjoining between all physical disciplines. He noted in his diary: “Everything is interconnectedness.”
The main feature of Humboldt’s pioneering work was later named “Humboldtian science”, meaning the accurate study of interconnected real phenomena in order to find a definite law and a dynamic cause.
Following Humboldt's idea of nature, an Internet edition of his works must preserve the author’s original intention, retain an awareness of all relevant works, and still adhere to the requirements of a scholarly edition.
At present, however, the highly unconventional form of his publications has hindered awareness and comprehensive study of Humboldt’s works.
Digital libraries should supply dynamic links to sources, maps, images, graphs and relevant texts. New forms of interaction and synthesis between humanistic texts and scientific observation need to be created.
Information technology is the only way to do justice to the broad range of visions, descriptions and the idea of nature of Humboldt’s legacy. It finally leads to virtual research environments as an adequate concept to redesign our digital archives, not only for Humboldt’s documents, but for all interconnected data.
Technology and computer applications influence our daily lives, and questions arise concerning the role of artificial intelligence and decision-making algorithms. There are warning voices that computers can, in theory, emulate human intelligence and even exceed it. This paper points out that a replacement of humans by computers is unlikely, because human thinking is characterized by cognitive heuristics and emotions, which cannot simply be implemented in machines operating with algorithms, procedural data processing or artificial neural networks. However, we are going to share our responsibilities with superior computer systems, which track and survey all of our digital activities, while we have no idea of the decision-making processes inside the machines. It is shown that we need a new digital humanism defining rules of computer responsibility to avoid digital totalism and the comprehensive monitoring and controlling of individuals on planet Earth.
This article focuses on the methods of information technology in the Humboldt Portal, an ongoing research project to develop a virtual research environment on the Internet for the legacy of Alexander von Humboldt. Based on more than a decade of experience in developing and providing the Humboldt Digital Library (www.avhumboldt.net), we defined a working plan to create an Internet portal for comprehensive access to Humboldt’s writings, no matter whether documents are provided as PDF files, scan images or XML-TEI documents in external archives (Google Books, Internet Archive, Deutsches Textarchiv, Bibliothèque nationale de France). Going far beyond the services of a digital library, we will provide an information network with multimedia assets containing objects like terms, paragraphs, data tables, scan images, or illustrations, together with correlated properties like thematic links to other objects, relevant keywords with optional synonyms, and dynamic hyperlinks to related translations in different languages. The Humboldt Portal can thus contribute to the key question of how to present interconnected data in an appropriate form using information technologies on the Web.
More than 200 years ago, the scientist Alexander von Humboldt, fascinated by nature and the phenomena he observed, noted in his travel diaries that "everything is interconnectedness". Knowledge of phenomena and natural processes has since made the view of nature far more detailed, leading to the more precise view of nature shaped by Humboldt. Technological progress and the artificial intelligence of highly developed computer systems are upsetting this view and changing the established world view through a new, unprecedented interaction between man and machine. We therefore need digital axioms and comprehensive rules and laws for such autonomously acting systems, governing the interaction between cybernetic systems and biological individuals. This digital humanism should encompass our relationship to nature, our handling of the complexity and diversity of nature, and the technological influences on society, in order to avoid a technical colonialism of supercomputers.
In this study, various imaging algorithms for the localization of objects have been investigated. Therefore, an Ultra-Wideband (UWB) radar based experimental setup with a circular antenna array is designed as part of this work. This concept could be particularly useful in microwave medical imaging applications. In order to validate its applicability in microwave imaging, different imaging algorithms have been evaluated and compared by means of our experimental setup. Accurate imaging results have been achieved with our system under multiple test-scenarios.
In this study, an approach to a microwave-based radar system for the localization of objects is proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized, including the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed in multiple test scenarios, and it has proved capable of locating multiple objects as well as extended objects.
The energy system has been changing for some years in order to achieve the climate goals of the Paris Agreement, which aims to keep the increase in global temperature below 2 °C [1]. Decarbonisation of the energy system has become a big challenge for governments, and different strategies are being established. Germany has set greenhouse gas reduction limits for different years and tracks the progress made yearly. The expansion of renewable energy sources (RES) together with decarbonisation technologies is a key factor in accomplishing this objective.
This research analyses the effect of introducing biochar, a decarbonisation technology, and studies how it affects the energy system. Pyrolysis, the process from which biochar is obtained, is modelled in an open-source energy system model. A sensitivity analysis is performed to assess the effect of changing the biomass potential and the costs of pyrolysis.
The role of pyrolysis is analysed in different future scenarios for the year 2045 to evaluate its impact when the CO2 emission limit is zero. All scenarios are compared to a reference scenario in which pyrolysis is not considered.
Results show that biochar can be used to compensate the emissions of other conventional power plants and achieve an energy transition at lower cost. Furthermore, pyrolysis can also reduce the need for flexibility. The study also shows that the biomass potential and the pyrolysis costs can strongly affect the role of pyrolysis in the energy system.
The increase in households with grid-connected photovoltaic (PV) battery systems poses a challenge for the grid due to high PV feed-in resulting from the mismatch between energy production and load demand. The purpose of this paper is to show how a model predictive control (MPC) strategy can be applied to an existing grid-connected household with a PV battery system such that the use of the battery is maximized and, at the same time, peaks in PV energy and load demand are reduced. The benefit of this strategy is an increase in the PV hosting capacity and load hosting capacity of the grid without the need for external signals from the grid operator. The paper includes the formulation of the optimal control problem to achieve the peak-shaving goals, along with the experimental setup and preliminary results. The goals of the experiment were to verify the hardware and software interface implementing the MPC as well as the ability of the MPC to deal with deviations in the weather forecast. A prediction correction over a short time horizon of one hour has also been introduced within this MPC strategy to estimate the PV output power behavior.
This paper presents a model predictive control (MPC) based approach for the peak-shaving application of a battery in a photovoltaic (PV) battery system connected to a rural low-voltage grid. The goals of the MPC are to shave the peaks in the PV feed-in and in the grid power consumption and at the same time maximize the use of the battery. The prosumer benefits from the maximum use of self-produced electricity; the grid benefits from the reduced peaks in PV feed-in and grid power consumption, which would allow an increase in the PV hosting and load hosting capacity of the grid.
The paper presents the mathematical formulation of the optimal control problem along with a cost-benefit analysis. The MPC implementation scheme in the laboratory and the experimental results are also presented. The results show that the MPC is able to track deviations in the weather forecast and to operate the battery by solving the optimal control problem to handle these deviations.
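For intuition, the peak-shaving objective can be sketched with a simple rule-based dispatch. The MPC in the paper instead solves an optimal control problem over a forecast horizon; the power limit and battery capacity below are hypothetical values for illustration only.

```python
def peak_shave(pv, load, cap, p_lim):
    """Rule-based battery dispatch sketch for PV peak shaving:
    charge away surplus above the exchange limit p_lim, discharge
    to cover demand peaks. Returns the grid exchange profile
    (positive = feed-in) and the final state of charge."""
    soc, grid = 0.0, []
    for p, l in zip(pv, load):
        net = p - l                      # surplus (+) or deficit (-)
        if net > p_lim:                  # shave the feed-in peak
            charge = min(net - p_lim, cap - soc)
            soc += charge
            net -= charge
        elif net < -p_lim:               # shave the demand peak
            discharge = min(-net - p_lim, soc)
            soc -= discharge
            net += discharge
        grid.append(net)
    return grid, soc
```

Unlike this greedy rule, an MPC anticipates future PV and load from the forecast, so it can reserve battery capacity for peaks that have not occurred yet.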
In their famous work on prospect theory Kahneman and Tversky have presented a couple of examples where human decision making deviates from rational decision making as defined by decision theory. This paper describes the use of extended behavior networks to model human decision making in the sense of prospect theory. We show that the experimental findings of non-rational decision making described by Kahneman and Tversky can be reproduced using a slight variation of extended behavior networks.
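The deviations from rational choice that Kahneman and Tversky describe stem from their S-shaped value function, which can be stated directly. The parameter values below are their published empirical estimates; this snippet illustrates the theory being modeled, not the behavior-network implementation of the paper.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman & Tversky's prospect-theory value function:
    concave for gains, convex and steeper for losses
    (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

Because losses loom larger than gains (a loss of 10 feels worse than a gain of 10 feels good), decisions based on this function systematically deviate from expected-utility rationality.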
In this paper we show that a model-free approach to learn behaviors in joint space can be successfully used to utilize toes of a humanoid robot. Keeping the approach model-free makes it applicable to any kind of humanoid robot, or robot in general. Here we focus on the benefit on robots with toes which is otherwise more difficult to exploit. The task has been to learn different kick behaviors on simulated Nao robots with toes in the RoboCup 3D soccer simulator. As a result, the robot learned to step on its toe for a kick that performs 30% better than learning the same kick without toes.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2009. It focuses on two distinctive features of the team: decision making using extended behavior networks, and its software architecture and implementation in Java to open the simulation to the Java community.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2010. While last year’s TDP focused on decision making using extended behavior networks and on the software architecture and implementation, this year we describe the tool set that was created for RoboCup 3D. It contains a GUI for agent and world state visualization, for evaluation of localization algorithms and benchmarks in general, a visual editor for creating and debugging Extended Behavior Networks, a live movement tool to interact with the joints, and finally a tool for editing behavior motor files.
After having described many different aspects of our team software in previous years, in this paper we take the freedom to describe the magmaChallenge framework provided by the magmaOffenburg team. The framework is used as a benchmark tool to run different challenges like the running challenge in 2014 or the kick accuracy challenge in 2015. This description should serve as a documentation to simplify the maintenance by the community and to add new benchmarks in the future.
Due to the Covid-19 pandemic, the RoboCup WorldCup 2021 was held completely remotely. For this competition the Webots simulator (https://cyberbotics.com/) was used, so all teams needed to transfer their robots to the simulation. This paper describes our experiences during this process as well as a genetic learning approach to improve our walk engine, allowing more stable and faster movement in the simulation. We used a Docker setup in order to scale easily. The resulting movement was one of the outstanding features that finally led to the championship title.
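The genetic learning loop used for such parameter tuning can be sketched in a few lines. This is a toy version with a hypothetical fitness function; in the actual setup, fitness came from evaluating walk parameters in the Webots simulator inside Docker containers.

```python
import random

def evolve(fitness, dim, pop_size=20, gens=30, sigma=0.1, seed=0):
    """Minimal genetic/evolutionary loop of the kind used to tune
    walk-engine parameters: keep the best half of the population,
    refill with Gaussian-mutated copies of the survivors."""
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)   # higher fitness first
        elite = pop[: pop_size // 2]
        pop = elite + [
            [g + rnd.gauss(0, sigma) for g in rnd.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

# toy fitness: the best parameter vector is (0.5, 0.5, 0.5)
best = evolve(lambda p: -sum((g - 0.5) ** 2 for g in p), dim=3)
```

Because each fitness evaluation is an independent simulation run, this loop parallelizes naturally across Docker containers, which is what made scaling easy.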
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2012. While last year’s TDP focused on the tool set created for 3D simulation and the support for heterogeneous robot models, this year we focus on the different ways how robot behavior can be defined in the magmaOffenburg framework and how those behaviors can be improved by learning.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2013. While last year’s TDP focused on the different ways robot behavior can be defined in the magmaOffenburg framework, this year we focus on how we statistically evaluate new features on distributed systems. We also show some results gained through such analysis.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2011. While last year’s TDP focused on the tool set created for 3D simulation, this year we describe further improvements of these tools as well as some new features we implemented, focusing on the heterogeneous robot models which seem likely to be used in RoboCup 2012.
An additional tool was written to simplify the generation of situation-dependent strategies. Furthermore, some tools described last year are now integrated into a single GUI to ease their use.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). Now the work is focused on stabilizing the gait. Moreover, we would like to overcome the constraints of a ZMP-algorithm that has a horizontal footplate as precondition for the simplification of the equations. In addition we would like to switch between impedance and position control with a fuzzy-like algorithm that might help to minimize jerks when Sweaty’s feet touch the ground.
Sweaty has already participated four times in RoboCup soccer competitions (Adult Size) and came second three times. While 2016 Sweaty needed a lot of luck to be finalist, 2017 Sweaty was a serious adversary in the preliminary rounds. In 2018 Sweaty showed up in the final with some lack of experience and room for improvements, but not without any chance. This paper describes the intended improvements of the humanoid adult size robot Sweaty in order to qualify for the RoboCup 2019 adult size competition.
In previous work we [1] and other authors (e.g. [2]) have shown that agent-based systems are successful in optimizing the delivery plans of single logistics companies and are meanwhile successfully used in industry. In this paper we show that agent-based systems are particularly useful to optimize transport across logistics companies as well. In intercompany optimization, privacy is of major importance between the otherwise competing companies. Some data, like the cost model or the constraint model, has to be treated as strictly private. Other data, like order information, has to be shared; however, the amount of orders released to other companies typically also has to be limited. We show that our agent-based approach can easily be fine-tuned to trade off privacy against the benefit of cooperation.
Non-esterified plant oils are gaining ecological and economic importance, particularly in the EU, where the share of renewable energies is intended to increase. Plant oils do not require any chemical treatment and therefore do not cause secondary pollution. The importance of plant oil will increase in Germany for mobile and stationary applications. The co-generation of heat and power is subsidized by the German “Erneuerbares Energiegesetz” and the “Kraft-Wärme-Kopplungsgesetz” when renewable fuels such as plant oils are used.
Plant oils have a much higher viscosity than conventional gas oil. It is mandatory to decrease the oil viscosity by heating prior to injection to ensure proper injection and to avoid engine damage due to coke formation in the combustion chamber and at the injection nozzle. The German quality standard of Weihenstephan (RK-Qualitätsstandard 05/2000) for rapeseed oil should be followed when using it as a diesel fuel. The chemical composition of plant oils differs appreciably from that of diesel fuels derived from mineral oil, suggesting different emission behavior as well.
Particle and Gaseous Emissions of Diesel Engines Fuelled by Different Non-Esterified Plant Oils
(2007)
The particulate matter and gas emissions of several plant oils are analyzed in the hot exhaust gas under various engine conditions at different speeds and loads. The measurement data are compared to the emission values of conventional diesel fuel (gas oil). The investigation concentrates on a modern common rail TDI light-duty diesel engine with four cylinders for passenger cars. The differences in the gas and particulate matter emissions compared to conventional diesel fuel are remarkably low for a diesel engine that is properly adjusted for plant oils. Emission data of an old heavy-duty diesel engine are also shown for comparison and reveal large differences. Differences are found in the pressures of the indicator diagram, time-resolved over the crank angle: plant oils consistently exhibit a higher cylinder pressure. The TEM investigation confirms the differences found by the LPME (long path multi-wavelength extinction) on-line analysis.
Plant oils may be used as a sustainable, nearly CO2-neutral fuel for diesel engines. This work investigates experimentally the particulate and gaseous emissions of diesel engines fuelled with different non-esterified, pure plant oils. The data are collected from three engines: a) a common rail 1.7 litre passenger car engine from Opel AG, b) a 12.8 litre truck engine from VOLVO, and c) a truck engine from MAN AG.
The emissions of the MAN engine have been used to perform AMES tests to analyze possible health impacts of plant oil operation. Finally, all emission results with plant oils have been compared to traditional gas oils.
Non-Esterified Plant Oils as Fuel -Engine Characteristics, Emissions and Mutagenic effects of PM-
(2009)
Plant oils may be used as a sustainable, nearly CO2-neutral fuel for diesel engines. This work investigates experimentally the particulate and gaseous emissions of diesel engines fuelled with non-esterified, pure plant oils complying with the quality standard DIN V 51605 (Weihenstephan RK-Qualitätsstandard 05/2000). The data are collected from three engines:
Common rail passenger car engine from OPEL AG
Truck engine from VOLVO
Truck engine from MAN AG
All engines have been correctly adjusted to plant oil operation.
The OPEL and VOLVO engines served for the basic investigations. The emissions of the MAN engine have been used to perform AMES tests to analyze possible health impacts of plant oil operation.
The experimental data show a reduction of particulate matter of up to 50 % compared to traditional gas oil. The particulate matter shows the same primary particle sizes, but the agglomerates collected on TEM grids are different: the plant oil soot particles tend to form larger aggregates [4]. The gaseous emissions of CO and hydrocarbons (HC) are generally lower compared to operation with gas oil. However, the NOX emissions are slightly higher. This may be attributed to the measured higher combustion chamber pressures and temperatures when fuelled by plant oils.
Emission samples have been extracted from ESC cycles (13-step tests) to perform the AMES test, which gives an indication of carcinogenic substances. The AMES test results gave no indication of mutagenic effects exceeding the detection limits. No significant differences could be found between the emissions of plant oil and gas oil operation. Thus, it can be stated that the emissions from plant oil operation do not have a health impact different from that of traditional gas oil. This contrasts with some other publications; a closer look shows that those investigations did not properly modify the engine for plant oils. It is mandatory to modify the engine so that the plant oils are pre-warmed to approx. 90 °C prior to injection, and the engine's warm-up phase needs special care to avoid any coking of the injection system and combustion chamber surfaces. The publications claiming a higher health risk in the exhaust of plant oil fuels did not pre-warm the plant oils; cold plant oils were injected into the combustion chamber instead. This results in incomplete atomization and incomplete combustion with many hazardous emission species (see also [4, 11]). Such operation damages the engine after a relatively short time and is therefore not realistic.
The investigated fuels had some influence on the engine characteristics. Higher temperatures and pressures in the cylinder were detected for some plant oils compared to gas oil. This increase is explained by the higher oxygen content of the plant oils.
Not only is the number of new devices constantly increasing, but so are their application complexity and power. Most of their applications are in optics, photonics, acoustics and mobile devices. Working speed and functionality are achieved in most media devices by the strategic use of digital signal processors and microcontrollers of the new generation. Considering these dynamics of media development, the authors present how to integrate microcontrollers and digital signal processors into the curricula of media technology lectures by using adequate content. This also includes interdisciplinary content that applies the acquired knowledge in media software. These elements offer a deeper understanding of photonics, acoustics and media engineering.
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan, and act stages of the automation chain in order to handle safety-critical situations autonomously, which is currently not achieved with state-of-the-art approaches. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved up to its final year, year 3.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space, which leads to a number of instabilities. Among them, mode collapse stands out as one of the most daunting. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
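The link between mode collapse and sharp minima rests on the top Hessian eigenvalue of the loss surface. A minimal numpy sketch of the underlying power-iteration idea follows; in a real GAN the matrix-vector product would be computed with Hessian-vector products via double backpropagation rather than an explicit Hessian, so the function below is purely illustrative.

```python
import numpy as np

def top_eigenvalue(H, iters=100, seed=0):
    """Estimate the largest-magnitude eigenvalue of a symmetric Hessian H
    by power iteration. A large value indicates a sharp minimum."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(H.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = H @ v                      # in practice: a Hessian-vector product
        v = hv / np.linalg.norm(hv)     # re-normalize the direction
    return float(v @ H @ v)             # Rayleigh quotient at convergence
```

Monitoring this quantity during training is what allows sharpness (and hence impending mode collapse) to be detected without ever materializing the full Hessian.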
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and a convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence yields a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and on real data, and we compare the performance of the diffusion approach with standard algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door for future research integrating generative models into seismic workflows.
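For background, diffusion models of the kind applied here build on a forward (noising) process with a closed form q(x_t | x_0). A minimal sketch of that closed form, not the paper's implementation; `betas` stands for an assumed noise schedule:

```python
import numpy as np

def forward_diffusion(x0, t, betas, seed=0):
    """Sample x_t ~ q(x_t | x_0) of a standard DDPM in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)[t]        # cumulative product up to step t
    eps = np.random.default_rng(seed).standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
```

A denoising network is then trained to invert this process step by step; for multiple removal the model would learn to reconstruct clean traces from their corrupted counterparts.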
Generative adversarial networks are the state-of-the-art approach towards learned synthetic image generation. Although early successes were mostly unsupervised, this trend has gradually been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
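One way to obtain an unsupervised conditioning signal of the kind described is to cluster the learned latent representations and feed the cluster ids to the generator in place of human labels. A toy sketch under that assumption, using plain k-means; this is a stand-in for the idea, not necessarily the authors' exact procedure:

```python
import numpy as np

def pseudo_labels(embeddings, k=3, iters=50):
    """Cluster representation-learning embeddings (N, D) with plain k-means
    and return cluster ids that can replace human annotations as the
    conditioning signal of a conditional GAN."""
    # Deterministic init: k points spread evenly over the data array.
    idx = np.linspace(0, len(embeddings) - 1, k).astype(int)
    centers = embeddings[idx].copy()
    for _ in range(iters):
        # Assign each embedding to its nearest center.
        d = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned embeddings.
        for j in range(k):
            if (labels == j).any():
                centers[j] = embeddings[labels == j].mean(axis=0)
    return labels, centers
```

The pseudo-labels play the role the class label plays in a supervised conditional GAN, which is precisely the dependency the abstract describes breaking.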
Facial image manipulation is a generation task where the output face is shifted towards an intended target direction in terms of facial attributes and style. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer, or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfers and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as semantic mask labels, show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity and face verification. For reproducibility, we provide an interactive open-source tool to perform facial manipulations, and the PyTorch implementation of the model.
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions leads to a complex evaluation of the diversity of GAN distributions. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data. Additionally, we introduce several bounded measures for distribution shifts, which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different datasets and multiple state-of-the-art GAN architectures reveal large shifts between input and output distributions, indicating that existing theoretical guarantees on the convergence of output distributions do not appear to hold in practice.
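One classical example of a bounded, easy-to-interpret shift measure in the spirit described here is the total variation distance between feature histograms, which lies in [0, 1]. A minimal sketch for a scalar feature; this is illustrative only and not necessarily one of the paper's own measures:

```python
import numpy as np

def total_variation_shift(real_feats, fake_feats, bins=20):
    """Bounded distribution-shift measure in [0, 1]:
    0 = identical histograms, 1 = completely disjoint support."""
    lo = min(real_feats.min(), fake_feats.min())
    hi = max(real_feats.max(), fake_feats.max())
    pr, _ = np.histogram(real_feats, bins=bins, range=(lo, hi))
    pf, _ = np.histogram(fake_feats, bins=bins, range=(lo, hi))
    pr = pr / pr.sum()                  # normalize counts to probabilities
    pf = pf / pf.sum()
    return float(0.5 * np.abs(pr - pf).sum())
```

Because the value is bounded and has fixed endpoints, scores are directly comparable across datasets and architectures, which is the interpretability property the abstract emphasizes.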
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause such models to fail to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose adding a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors; a correct approximation of the frequency spectrum also has positive effects on the training stability and output quality of generative networks.
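A common way to compare the frequency content of real and generated images is the azimuthally averaged power spectrum of the 2D FFT; a spectral regularization term can then penalize the gap between the two radial profiles. A minimal numpy sketch of that idea; the exact loss used in the paper may differ:

```python
import numpy as np

def azimuthal_power_spectrum(img):
    """Radially averaged power spectrum of a grayscale image:
    2D FFT power, binned by integer distance from the spectrum center."""
    f = np.fft.fftshift(np.fft.fft2(img))
    psd = np.abs(f) ** 2
    cy, cx = psd.shape[0] // 2, psd.shape[1] // 2
    y, x = np.indices(psd.shape)
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)
    # Mean power per radius bin.
    return np.bincount(r.ravel(), weights=psd.ravel()) / np.bincount(r.ravel())

def spectral_loss(real, fake):
    """Hypothetical regularizer: L2 gap between DC-normalized radial spectra."""
    pr, pf = azimuthal_power_spectrum(real), azimuthal_power_spectrum(fake)
    n = min(len(pr), len(pf))
    pr, pf = pr[:n] / pr[0], pf[:n] / pf[0]
    return float(np.mean((pr - pf) ** 2))
```

The same radial profile is also what makes deepfake detection possible: up-convolutions leave a characteristic excess of high-frequency power that a simple classifier on this 1D profile can pick up.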
Recent deep learning based approaches have shown remarkable success on object segmentation tasks. However, there is still room for further improvement. Inspired by generative adversarial networks, we present a generic end-to-end adversarial approach, which can be combined with a wide range of existing semantic segmentation networks to improve their segmentation performance. The key element of our method is to replace the commonly used binary adversarial loss with a high resolution pixel-wise loss. In addition, we train our generator in a stochastic weight averaging fashion, which further enhances the predicted output label maps, leading to state-of-the-art results. We show that this combination of pixel-wise adversarial training and weight averaging leads to significant and consistent gains in segmentation performance, compared to the baseline models.
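The difference between the binary and the pixel-wise adversarial loss can be sketched simply: instead of emitting a single real/fake score per image, the discriminator emits a confidence map, and binary cross-entropy is applied at every pixel. A minimal illustration with numpy; names and shapes are hypothetical:

```python
import numpy as np

def pixelwise_adversarial_loss(disc_map, is_real):
    """Binary cross-entropy applied per pixel to a discriminator confidence
    map with values in (0, 1), then averaged over all spatial positions."""
    target = 1.0 if is_real else 0.0
    p = np.clip(disc_map, 1e-7, 1.0 - 1e-7)   # numerical stability
    bce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    return float(bce.mean())
```

Because the gradient signal is localized, the generator receives feedback about where a predicted label map looks unrealistic, not just that it does, which is what makes this loss useful for segmentation refinement.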
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually sound outputs.
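Formulating local attribute transfer as inpainting ultimately reduces to regenerating only the masked region and compositing it back over the untouched background. A minimal sketch of that compositing step, not the ATI-GAN architecture itself:

```python
import numpy as np

def compose_local_edit(source, generated, mask):
    """Blend a generated patch into the source image: only pixels inside
    the attribute mask are replaced, so the background stays untouched."""
    mask = mask.astype(float)
    if mask.ndim == 2:                  # broadcast an (H, W) mask over channels
        mask = mask[..., None]
    return source * (1.0 - mask) + generated * mask
```

Because pixels outside the mask are copied verbatim from the source, identity and background are preserved by construction; the generator only has to get the masked region right, which is the key advantage over regenerating the whole image.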