Security in IT systems, particularly in embedded devices like Cyber-Physical Systems (CPSs), has become an important matter of concern, as it is a prerequisite for ensuring privacy and safety. Among a multitude of existing security measures, the Transport Layer Security (TLS) protocol family offers mature and standardized means for establishing secure communication channels over insecure transport media. In the context of classical IT infrastructure, its security with regard to protocol and implementation attacks has been subject to extensive research. As TLS protocols find their way into embedded environments, we consider the security and robustness of implementations of these protocols specifically in the light of the peculiarities of embedded systems. We present an approach for systematically checking the security and robustness of such implementations using fuzzing techniques and differential testing. Despite its origin in testing TLS implementations, we expect our approach to be applicable with moderate effort to implementations of other cryptographic protocols as well.
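The differential-testing idea sketched above can be illustrated with a toy example (the two parsers and all names below are hypothetical stand-ins, not the paper's actual harness): mutated TLS records are fed to two implementations, and any divergence in their verdicts is logged as a candidate robustness finding.

```python
import random

# Hypothetical stand-ins for two TLS record parsers under test; in a real
# campaign these would wrap calls into two distinct TLS libraries.
def parse_impl_a(record: bytes) -> str:
    if len(record) < 5:
        return "error:truncated"
    if record[0] not in (0x14, 0x15, 0x16, 0x17):
        return "error:bad-content-type"
    length = int.from_bytes(record[3:5], "big")
    return "ok" if len(record) - 5 == length else "error:length-mismatch"

def parse_impl_b(record: bytes) -> str:
    # Deliberately lenient about trailing bytes -- exactly the kind of
    # robustness difference differential testing is meant to surface.
    if len(record) < 5:
        return "error:truncated"
    if record[0] not in (0x14, 0x15, 0x16, 0x17):
        return "error:bad-content-type"
    length = int.from_bytes(record[3:5], "big")
    return "ok" if len(record) - 5 >= length else "error:length-mismatch"

def mutate(seed: bytes, rng: random.Random) -> bytes:
    data = bytearray(seed)
    op = rng.choice(("flip", "truncate", "append"))
    if op == "flip" and data:
        data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
    elif op == "truncate" and data:
        del data[rng.randrange(len(data)):]
    else:
        data += bytes([rng.randrange(256)])
    return bytes(data)

def differential_fuzz(seed: bytes, iterations: int = 1000):
    rng = random.Random(42)  # fixed seed for reproducibility
    discrepancies = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        a, b = parse_impl_a(case), parse_impl_b(case)
        if a != b:
            discrepancies.append((case, a, b))
    return discrepancies

# A minimal well-formed record: type 0x16 (handshake), version 3.3, length 2.
seed = bytes([0x16, 0x03, 0x03, 0x00, 0x02, 0xAA, 0xBB])
found = differential_fuzz(seed)
print(f"{len(found)} discrepancies found")
```

Each discrepancy is a concrete input on which the two implementations disagree; triaging these is where the security analysis proper begins.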
This article sets the focus on methods of information technology in the Humboldt Portal, an ongoing research project to develop a virtual research environment on the Internet for the legacy of Alexander von Humboldt. Based on the experiences of developing and providing the Humboldt Digital Library (www.avhumboldt.net) for more than a decade, we defined a working plan to create an Internet portal for comprehensive access to Humboldt’s writings, no matter whether documents are provided as PDF files, scan images or XML-TEI documents on external archives (Google Books, Internet Archive, Deutsches Textarchiv, Bibliothèque nationale de France). Going far beyond the services of a digital library, we will provide an information network with multimedia assets, which contain objects like terms, paragraphs, data tables, scan images, or illustrations, together with correlated properties like thematic linkage to other objects, relevant keywords with optional synonyms and dynamic hyperlinks to related translations in different languages. Thus the Humboldt Portal can contribute to the key question of how to present interconnected data in an appropriate form using information technologies on the Web.
Alexander von Humboldt, a German scientist and explorer of the 19th century, viewed the natural world holistically and described the harmony of nature among the diversity of the physical world as a conjoining between all physical disciplines. He noted in his diary: “Everything is interconnectedness.”
The main feature of Humboldt’s pioneering work was later named “Humboldtian science”, meaning the accurate study of interconnected real phenomena in order to find a definite law and a dynamic cause.
Following Humboldt's idea of nature, an Internet edition of his works must preserve the author’s original intention, retain an awareness of all relevant works, and still adhere to the requirements of a scholarly edition.
At the present time, however, the highly unconventional form of his publications has hindered awareness and comprehensive study of Humboldt’s works.
Digital libraries should supply dynamic links to sources, maps, images, graphs and relevant texts. New forms of interaction and synthesis between humanistic texts and scientific observation need to be created.
Information technology is the only way to do justice to the broad range of visions, descriptions and the idea of nature of Humboldt’s legacy. It finally leads to virtual research environments as an adequate concept to redesign our digital archives, not only for Humboldt’s documents, but for all interconnected data.
Autonomous humanoid robots require lightweight, high-torque and high-speed actuators to be able to walk and run. For conventional gears with a fixed gear ratio, the product of torque and velocity is constant; desired motions, on the other hand, require both maximum torque and maximum speed. In this paper it is shown that with a variable gear ratio it is possible to vary the relation between torque and velocity. This is achieved by introducing systems of rods and levers to move the joints of our humanoid robot "Sweaty II". On the basis of a variable gear ratio, low speed and high torque can be achieved for those joint angles which require this motion mode, whereas high speed and low torque can be realized for those joint angles where it is favorable for the desired motion.
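The torque-velocity trade-off described above can be sketched numerically. The crank-and-lever geometry below is a hypothetical simplification for illustration, not Sweaty II's actual linkage:

```python
import math

def lever_gear_ratio(crank_len_m: float, lever_len_m: float, joint_angle_rad: float) -> float:
    """Effective gear ratio of a simple crank-and-lever linkage.

    Hypothetical geometry: the moment arm seen by the actuator changes with
    the joint angle, which is the property a variable-gear-ratio design exploits.
    """
    return lever_len_m / (crank_len_m * max(math.sin(joint_angle_rad), 1e-6))

def joint_output(motor_torque_nm, motor_speed_rad_s, ratio):
    # Gear identity: torque multiplies by the ratio, speed divides by it,
    # so torque * speed (mechanical power) is conserved (losses ignored).
    return motor_torque_nm * ratio, motor_speed_rad_s / ratio

# Near a stretched-out joint (small angle) the ratio is large: high torque, low speed.
t1, w1 = joint_output(1.0, 100.0, lever_gear_ratio(0.02, 0.10, math.radians(10)))
# Near mid-flexion the ratio is smaller: lower torque, higher speed.
t2, w2 = joint_output(1.0, 100.0, lever_gear_ratio(0.02, 0.10, math.radians(80)))
print(f"small angle: {t1:.1f} Nm at {w1:.1f} rad/s")
print(f"mid flexion: {t2:.1f} Nm at {w2:.1f} rad/s")
```

The same motor thus delivers either torque or speed depending on joint angle, while the power budget stays fixed.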
With major intellectual properties there is a long tradition of cross-media value chains -- usually starting with books and comics, then moving on to film and TV, and finally reaching interactive media like video games. In recent years the situation has changed: (1) smaller productions have started to establish cross-media value chains; (2) there is a trend from sequential towards parallel content production. In this work we describe how the production of a historic documentary takes a cross-media approach right from the start. We analyze how this impacts the content creation pipelines with respect to story, audience and realization. The focus of the case study is the impact on the production of a documentary game. In a second step we reflect on the experiences gained so far and derive recommendations for future small-scale cross-media productions.
Towards a gamification of industrial production: a comparative study in sheltered work environments
(2015)
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on work and on workers. We show that gamification leads to a speed-accuracy trade-off if no quality-related feedback is provided. Another finding is that there is a highly significant rise in acceptance if a straightforward visualization approach for gamification is used.
With projectors and depth cameras getting cheaper, assistive systems in industrial manufacturing are becoming increasingly ubiquitous. As these systems are able to continuously provide feedback using in-situ projection, they are perfectly suited for supporting impaired workers in assembling products. However, so far little research has been conducted to understand the effects of projected instructions on impaired workers. In this paper, we identify common visualizations used by assistive systems for impaired workers and introduce a simple contour visualization. Through a user study with 64 impaired participants, we compare the different visualizations to a control group using no visual feedback in a real-world assembly scenario, i.e. assembling a clamp. Furthermore, we introduce a simplified version of the NASA-TLX questionnaire designed for impaired participants. The results reveal that the contour visualization is significantly better regarding the participants' perceived mental load and perceived performance. Further, participants made fewer errors and were able to assemble the clamp faster using the contour visualization compared to a video visualization, a pictorial visualization and a control group using no visual feedback.
Design approaches for the gamification of production environments: a study focusing on acceptance
(2015)
Gamification is an ever more popular method to increase motivation and user experience in real-world settings. It is widely used in the areas of marketing, health and education. However, in production environments, it is a new concept. To be accepted in the industrial domain, it has to be seamlessly integrated in the regular work processes.
In this work we make the following contributions to the field of gamification in production: (1) we analyze the state of the art and introduce domain-specific requirements; (2) we present two implementations gamifying production based on alternative design approaches; (3) these are evaluated in a sheltered work organization. The comparative study focuses on acceptance, motivation and perceived happiness.
The results reveal that a pyramid design showing each work process as a step on the way towards a cup at the top is strongly preferred to a more abstract approach where the processes are represented by a single circle and two bars.
In this work we provide an overview of gamification, i.e. the application of methods from game design to enrich non-gaming processes. The contribution is divided into six subsections: an introduction focusing on the progression of gamification through the hype cycle in recent years (1), a brief introduction to gamification mechanics (2), and an overview of the state of the art in established areas (3). The focus is a discussion of more recent attempts at gamification in service and production (4). We also discuss the ethical implications (5) and the future perspectives (6) of gamified business processes. Gamification has been successfully applied in the domains of education (serious games) and health (exergames) and is spreading to other areas. In recent years there have been various attempts to “gamify” business processes. While the first efforts date back as far as the collection of miles in frequent flyer programs, we portray some of the more recent and comprehensive software-based approaches in the service industry, e.g. the gamification of processes in sales and marketing. We discuss their accomplishments as well as their social and ethical implications. Finally a very recent approach is presented: the application of gamification in the domain of industrial production. We discuss the special requirements in this domain and the effects on the business level and on the users. We conclude with a prognosis on the future development of gamification.
The Effect of Gamification on Emotions - The Potential of Facial Recognition in Work Environments
(2015)
Gamification means using video game elements to improve user experience and user engagement in non-game services and applications. This article describes the effects when gamification is used in work contexts. Here we focus on industrial production. We describe how facial recognition can be employed to measure and quantify the effect of gamification on the users’ emotions.
The quantitative results show that gamification significantly reduces both task completion time and error rate. However, the results concerning the effect on emotions are surprising. Without gamification there are not only more unhappy expressions (as expected) but, surprisingly, also more happy expressions. Both findings are statistically highly significant.
We think that repetitive production work generally involves more (negative) emotions. When there is no gamification, happy and unhappy expressions balance each other. In contrast, gamification seems to shift the spectrum of moods towards “relaxed”. Especially in work environments, such a calm attitude is a desirable effect on the users. Thus our findings support the use of gamification.
Video game developers continuously increase the degree of detail and realism in games to create more human-like characters. But increasing the human-likeness becomes a problem with regard to the Uncanny Valley phenomenon, which predicts negative feelings of people towards artificial entities. We developed an avatar creation system to examine preferences towards parametrized faces and to explore, with regard to the Uncanny Valley phenomenon, how people design faces that they like or reject. Based on the 3D model of the Caucasian average face, 420 participants generated 1341 faces of positively and negatively associated concepts of both genders. The results show that some characteristics associated with the Uncanny Valley are used to create villains or repulsive faces. Heroic faces get attractive features but are rarely and only slightly stylized. A voluntarily designed face is very similar to the heroine. This indicates a tendency of users to design feminine and attractive but still credible faces.
Wireless sensor networks have recently found their way into a wide range of applications, among which environmental monitoring has attracted increasing interest from researchers. Such monitoring applications generally pose relaxed latency requirements in favor of energy efficiency. A further challenge of this application is the network topology, as the system should be deployable at very large scale. Nevertheless, low power consumption of the devices making up the network must be in focus in order to maximize the lifetime of the whole system. These devices are usually battery-powered and spend most of their energy budget on the radio transceiver module. A so-called Wake-On-Radio (WoR) technique can be used to achieve a reasonable balance among power consumption, range, complexity and response time. In this paper, designs for the integration of WoR into IEEE 802.15.4 are discussed, providing an overview of the trade-offs in energy consumption when deploying WoR schemes in a monitoring system.
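The energy argument behind WoR can be made concrete with a first-order duty-cycle model (all current and timing figures below are illustrative assumptions, not measurements from the paper):

```python
def avg_current_ma(sleep_ma, listen_ma, rx_ma,
                   wakeup_interval_s, listen_s, rx_events_per_hour, rx_s):
    """Average current draw of a Wake-On-Radio duty-cycled node.

    First-order model: the radio sleeps, briefly samples the channel every
    wake-up interval, and stays in RX only when a wake-up signal is heard.
    """
    listen_duty = listen_s / wakeup_interval_s
    rx_duty = (rx_events_per_hour * rx_s) / 3600.0
    sleep_duty = 1.0 - listen_duty - rx_duty
    return sleep_ma * sleep_duty + listen_ma * listen_duty + rx_ma * rx_duty

def lifetime_days(battery_mah, avg_ma):
    return battery_mah / avg_ma / 24.0

# Typical-looking sub-GHz transceiver figures, chosen for illustration only.
i_avg = avg_current_ma(sleep_ma=0.001, listen_ma=16.0, rx_ma=16.0,
                       wakeup_interval_s=1.0, listen_s=0.001,
                       rx_events_per_hour=4, rx_s=0.05)
print(f"average current: {i_avg * 1000:.1f} uA")
print(f"lifetime on 2500 mAh: {lifetime_days(2500, i_avg):.0f} days")
```

Even this crude model shows the core trade-off: a longer wake-up interval cuts the listen duty cycle (and hence average current) but increases worst-case response time.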
Environmental monitoring is an attractive application field for Wireless Sensor Networks (WSNs). Water level monitoring helps to increase the efficiency of water distribution and management. In Pakistan, the world’s largest irrigation system covers 90,000 km of channels which need to be monitored and managed on different levels. Especially the sensor systems for the small distribution channels need to be low-energy and low-cost. This contribution presents a technical solution for a communication system which is developed in a research project co-funded by the German Academic Exchange Service (DAAD). The communication module is based on IEEE 802.15.4 transceivers which are enhanced through Wake-On-Radio (WoR) to combine low-energy and real-time behavior. On higher layers, IPv6 (6LoWPAN) and corresponding routing protocols like the Routing Protocol for Low power and Lossy Networks (RPL) can extend the range of the network. The data are stored in a database and can be viewed online via a web interface. Of course, automatic data analysis can also be performed.
We report the use of the Raman spectral information of the chemical compound toluene (C7H8) as a reference in the analysis of laboratory-prepared and commercially acquired gasoline-ethanol blends. The rate behavior of the characteristic Raman lines of toluene and gasoline has enabled the approximate quantification of this additive in commercial gasoline-ethanol mixtures. This rate behavior has been obtained from the Raman spectra of gasoline-ethanol blends with different proportions of toluene.
All these Raman spectra have been collected by using a self-designed, frequency precise and low-cost Fourier-transform Raman spectrometer (FT-Raman spectrometer) prototype. This FT-Raman prototype has helped to accurately confirm the frequency position of the main characteristic Raman lines of toluene present on the different gasoline-ethanol samples analyzed at smaller proportions than those commonly found in commercial gasoline-ethanol blends. The frequency accuracy validation has been performed by analyzing the same set of toluene samples with two additional state-of-the-art commercial FT-Raman devices. Additionally, the spectral information has been contrasted, with highly-correlated coefficients as a result, with the values of the standard Raman spectrum of toluene.
The application of leaky feeder (radiating) cables is a common solution for the implementation of reliable radio communication in huge industrial buildings, tunnels and mining environments. This paper explores the possibilities of leaky feeders for 1D and 2D localization in wireless systems based on time-of-flight chirp spread spectrum technologies. The main focus of this paper is to present and analyse the results of time-of-flight and received-signal-strength measurements with leaky feeders in indoor and outdoor conditions. The authors carried out experiments to compare ranging accuracy and radio coverage area for a point-like monopole antenna and for a leaky feeder acting as a distributed antenna. In all experiments, RealTrac equipment based on the nanoLOC radio standard was used. The estimation of the most probable path of a chirp signal going through a leaky feeder was calculated using the ray tracing approach. Typical non-line-of-sight error profiles are presented. The results show the possibility of using radiating cables in real-time location technologies based on the time-of-flight method.
We provide a privacy-friendly cloud-based smart metering storage architecture which offers few-instance storage of encrypted measurements while at the same time allowing SQL queries on them. Our approach is flexible along two axes: on the one hand, it allows applying filtering rules on encrypted data with respect to various upcoming business cases; on the other hand, it provides means for a storage-efficient handling of encrypted measurements by applying server-side deduplication techniques over all incoming smart meter measurements. Although the work at hand is purely dedicated to a smart metering architecture, we believe our approach to have value for a broader class of IoT cloud storage solutions. Moreover, it is an example of privacy by design supporting the positive-sum paradigm.
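One standard way to reconcile encryption with server-side deduplication is convergent encryption, sketched below. This is a generic illustration (with a simple SHA-256 keystream), not necessarily the paper's concrete scheme: the key is derived from the plaintext itself, so identical measurements encrypt to identical ciphertexts and can be deduplicated without the server learning the plaintext.

```python
import hashlib

def convergent_encrypt(measurement: bytes) -> bytes:
    # Key = hash of the plaintext; keystream = SHA-256 in counter mode.
    key = hashlib.sha256(measurement).digest()
    keystream = b""
    counter = 0
    while len(keystream) < len(measurement):
        keystream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(m ^ k for m, k in zip(measurement, keystream))

class DedupStore:
    """Few-instance storage: each distinct ciphertext is kept exactly once."""
    def __init__(self):
        self.blobs = {}
    def put(self, ciphertext: bytes) -> str:
        blob_id = hashlib.sha256(ciphertext).hexdigest()
        self.blobs.setdefault(blob_id, ciphertext)  # keep only the first instance
        return blob_id

store = DedupStore()
id1 = store.put(convergent_encrypt(b"meter=42;kWh=1.5"))
id2 = store.put(convergent_encrypt(b"meter=42;kWh=1.5"))  # duplicate reading
id3 = store.put(convergent_encrypt(b"meter=42;kWh=1.7"))
print(len(store.blobs))  # duplicates collapse to a single stored blob
```

Note that convergent encryption deliberately leaks plaintext equality, which is exactly what makes deduplication possible; a production design has to weigh that leakage against the storage savings.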
In value analysis, the TRIZ methodology (Theory of Inventive Problem Solving) has been known for many years as a tool for reducing costs or increasing the functionality of products. Since it first became known in Western Europe, TRIZ itself has continued to evolve. Methods for modeling systems have since been extended, and tools for rapid solution finding, failure prediction and product planning have been newly developed. On the other hand, worldwide scientific progress, the use of different languages and new literature have caused the terminology in use to grow and become ambiguous. The new VDI guideline 4521, the first part of which is now available as a draft, therefore aims at standardizing the terminology and providing a unified description of the methods. It is intended to facilitate the study of the methodology, simplify the use of the literature and make the contents of TRIZ easier to present clearly.
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attraction for the seamless connectivity of embedded devices for the Internet of Things. It can be observed that most of the available solutions follow an open-source approach, which significantly contributes to a fast development of technologies and markets. Although the currently available implementations are in pretty good shape, all of them come with some significant drawbacks. It was therefore decided to start the development of our own implementation, which takes the advantages of the existing solutions but tries to avoid the drawbacks. This paper discusses the reasoning behind this decision, describes the implementation and its characteristics, as well as the testing results. The given implementation is available as an open-source project under [15].
Distribution of esophageal interventricular conduction delays in CRT patients and healthy subjects
(2015)
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attraction for the seamless connectivity of embedded devices for the Internet of Things (IoT). Whereas the lower layers (IEEE802.15.4 and 6LoWPAN) are already well defined and consolidated with regard to frame formats, header compression, routing protocols and commissioning procedures, there is still an abundant choice of possibilities on the application layer. Currently, various groups are working towards standardization of the application layer, i.e. the ETSI Technical Committee on M2M, the IP for Smart Objects (IPSO) Alliance, Lightweight M2M (LWM2M) protocol of the Open Mobile Alliance (OMA), and OneM2M. This multitude of approaches leaves the system developer with the agony of choice. This paper selects, presents and explains one of the promising solutions, discusses its strengths and weaknesses, and demonstrates its implementation.
This paper presents a practice- and science-oriented education approach for freshman students of interdisciplinary bachelor engineering degree programs. This approach is meant to enhance the motivation and success of freshman students throughout their studies. The education approach is called Fit4PracSis (Fit for Practice and Sciences). It was started in order to develop, set up and establish an education approach that builds a relationship to the students' future profession and to scientific work during the introductory study phase. The freshman students are trained early in important skills that are necessary for successfully achieving the final degree and for handling future business and research activities.
In this work we describe the implementation details of a protocol suite for secure and reliable over-the-air reprogramming of wireless resource-restricted devices. Although forward error correction codes aiming at a robust transmission over a noisy wireless medium have recently been discussed and evaluated extensively, we believe that the clear value of the contribution at hand is to share our experience regarding a meaningful combination and implementation of various multihop (broadcast) transmission protocols and custom-fit security building blocks: For a robust and reliable data transmission we make use of fountain codes, a.k.a. rateless erasure codes, and show how to combine such schemes with an underlying medium access control protocol, namely a distributed low duty cycle medium access control (DLDC-MAC). To handle the well-known packet pollution problem of forward error correction approaches, where an attacker bogusly modifies or infiltrates some minor number of encoded packets and thus pollutes the whole data stream at the receiver side, we apply homomorphic message authentication codes (HomMACs). We discuss implementation details and the pros and cons of the two currently available HomMAC candidates for our setting. Both require a symmetric block cipher as the core cryptographic primitive, for which, as we argue later, we have opted for the (exchangeable) PRESENT, PRIDE and PRINCE ciphers in our implementation.
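The fountain-code idea can be illustrated with a minimal XOR-based example in the spirit of LT codes. In a real fountain code the index sets are drawn from a degree distribution (e.g. the robust soliton); here they are listed explicitly so the peeling steps are easy to follow, and packet formats and the HomMAC integrity layer are omitted.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_symbol(blocks, idxs):
    """An encoded symbol is the XOR of the source blocks at the given indices."""
    payload = bytes(len(blocks[0]))
    for i in idxs:
        payload = xor(payload, blocks[i])
    return (set(idxs), payload)

def decode(symbols, num_blocks):
    """Peeling decoder: repeatedly resolve symbols with one unknown block."""
    known, progress = {}, True
    while progress and len(known) < num_blocks:
        progress = False
        for idxs, payload in symbols:
            unknown = idxs - known.keys()
            if len(unknown) == 1:              # peel: exactly one unresolved block
                i = unknown.pop()
                for j in idxs - {i}:
                    payload = xor(payload, known[j])
                known[i], progress = payload, True
    return [known.get(i) for i in range(num_blocks)]

blocks = [b"firm", b"ware", b"img1", b"img2"]   # source blocks of a firmware image
received = [
    make_symbol(blocks, {0}),        # degree 1: reveals block 0 directly
    make_symbol(blocks, {0, 1}),     # degree 2: peels to block 1
    make_symbol(blocks, {1, 2}),     # peels to block 2
    make_symbol(blocks, {1, 2, 3}),  # peels to block 3
]
recovered = decode(received, len(blocks))
print(recovered == blocks)  # → True
```

The ratelessness is the key property for lossy multihop broadcast: the sender can keep emitting fresh symbols until every receiver has peeled the full image, with no per-receiver retransmission state.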
In online analytical processing (OLAP), filtering elements of a given dimensional attribute according to the value of a measure attribute is an essential operation, for example in top-k evaluation. Such filters can involve extremely large amounts of data to be processed, in particular when the filter condition includes “quantification” such as ANY or ALL, where large slices of an OLAP cube have to be computed and inspected. Due to the sparsity of OLAP cubes, the slices serving as input to the filter are usually sparse as well, presenting a challenge for GPU approaches which need to work with a limited amount of memory for holding intermediate results. Our CUDA solution involves a hashing scheme specifically designed for frequent and parallel updates, including several optimizations exploiting architectural features of Nvidia’s Fermi and Kepler GPUs.
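The quantified filter semantics can be sketched in plain Python over a toy sparse cube (the paper's actual contribution, the CUDA hashing scheme for parallel evaluation on the GPU, is not reproduced here):

```python
# Sparse OLAP slice stored as {(product, month): revenue}; absent cells count as 0.
cube = {
    ("p1", "jan"): 120, ("p1", "feb"): 80,
    ("p2", "jan"): 30,
    ("p3", "feb"): 200, ("p3", "mar"): 150,
}
months = ("jan", "feb", "mar")
products = ("p1", "p2", "p3")

def measure(product, month):
    return cube.get((product, month), 0)  # sparsity: missing cell -> 0

# Keep products whose revenue exceeds 100 in ANY month.
any_hit = [p for p in products if any(measure(p, m) > 100 for m in months)]

# Keep products whose revenue exceeds 100 in ALL months.
all_hit = [p for p in products if all(measure(p, m) > 100 for m in months)]

print(any_hit, all_hit)
```

The ALL case shows why sparsity matters: the empty cells that never materialize in the sparse representation still participate in the quantification, which is what makes naive GPU evaluation over large slices memory-hungry.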
In the bwLehrpool project, a distributed system for the flexible use of computer pools through desktop virtualization was developed. Based on a centrally booted Linux base system, arbitrary virtualizable operating systems for teaching and examination purposes can be provided centrally and selected locally on the machines. The various working environments no longer need to be installed on the PCs, thus allowing a multifunctional use of PCs and rooms for diverse teaching and learning scenarios as well as for electronic examinations. bwLehrpool abstracts from the local PC hardware and enables lecturers to design and manage their own software environments as a self-service. Beyond that, bwLehrpool promotes the exchange of course environments across universities.
Monitors are at the center of media productions and hold an important function as the main visual interface. Tablets and smartphones are becoming more and more important work tools in the media industry. As an extension to our lecture contents, an intensive discussion of different display technologies and their applications is now taking place. The established LCD (Liquid Crystal Display) technology and the promising OLED (Organic Light Emitting Diode) technology are in focus.
The classic LCD is currently the most important display technology. The paper presents how the students should develop a sense for display technologies beyond the theoretical scientific basics. The workshop focuses on the technical aspects of display technology and has the goal of deepening the students' understanding of its functionality by having them build simple Liquid Crystal Displays themselves.
The authors will present their experience in the field of display technologies. A mixture of theoretical and practical lectures has the goal of a deeper understanding in the field of digital color representation and display technologies. The design and development of a suitable learning environment with the required infrastructure is crucial. The main focus of this paper is on the hands-on optics workshop “Liquid Crystal Display in the do-it-yourself”.
Combined heat and power production (CHP) based on solid oxide fuel cells (SOFC) is a very promising technology to achieve high electrical efficiency to cover power demand by decentralized production. This paper presents a dynamic quasi 2D model of an SOFC system which consists of stack and balance of plant and includes thermal coupling between the single components. The model is implemented in Modelica® and validated with experimental data for the stack UI-characteristic and the thermal behavior. The good agreement between experimental and simulation results demonstrates the validity of the model. Different operating conditions and system configurations are tested, increasing the net electrical efficiency to 57% by implementing an anode offgas recycle rate of 65%. A sensitivity analysis of characteristic values of the system like fuel utilization, oxygen-to-carbon ratio and electrical efficiency for different natural gas compositions is carried out. The result shows that a control strategy adapted to variable natural gas composition and its energy content should be developed in order to optimize the operation of the system.
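The reported net electrical efficiency can be sanity-checked with a back-of-the-envelope calculation (the power and fuel-flow figures below are assumed for illustration, not taken from the paper):

```python
# Net electrical efficiency = net AC power / chemical power of the fuel (LHV basis).
LHV_CH4 = 50.0e6  # J/kg, approximate lower heating value of methane

def net_electrical_efficiency(p_net_w, fuel_kg_per_s, lhv_j_per_kg=LHV_CH4):
    return p_net_w / (fuel_kg_per_s * lhv_j_per_kg)

# Hypothetical 1.5 kW system consuming 52.6 mg/s of methane:
eta = net_electrical_efficiency(1500.0, 52.6e-6)
print(f"net electrical efficiency: {eta:.1%}")
```

Anode offgas recycling raises this figure by routing unspent fuel back to the stack inlet, increasing effective fuel utilization without changing the fuel feed.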
The transformation of the building energy sector into a highly efficient, clean, decentralised and intelligent system requires innovative technologies like microscale trigeneration and thermally activated building structures (TABS) to pave the way ahead. The combination of such technologies, however, presents a scientific and engineering challenge: a scientific challenge in terms of developing optimal thermo-electric load management strategies based on overall energy system analysis, and an engineering challenge in terms of implementing these strategies through process planning and control. Initial literature research has pointed out the need for a multiperspective analysis in a real-life laboratory environment. To this effect, an investigation is proposed wherein an analytical model of a microscale trigeneration system integrated with TABS will be developed and compared with a real-life test rig corresponding to building management systems. Data from the experimental analysis will be used to develop control algorithms using model predictive control for achieving the thermal comfort of occupants in the most energy-efficient and grid-reactive manner. The scope of this work encompasses adsorption-cooling-based microscale trigeneration systems and their deployment in residential and light commercial buildings.
In many scientific studies, lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures and different lens types. Therefore it is not possible for the students to conduct any of the experiments outside of the university’s laboratory. Simple optical software simulators enabling the students to virtually perform lens experiments already exist, but are mostly desktop or web-browser based.
Augmented Reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract or modify one’s perception of reality. As a result of the success and widespread availability of handheld mobile devices such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can easily be used to visualize a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare for the actual laboratory sessions and/or to recap the teaching content.
The authors will present their experience with handheld augmented reality applications and their possibilities for light and optic experiments without the needs for specialized optical hardware.
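At the core of such a simulated optical bench is the thin-lens equation, 1/f = 1/d_o + 1/d_i. A minimal sketch of the computation an AR lens application would perform for each configuration the student sets up:

```python
def thin_lens_image(focal_length_mm: float, object_dist_mm: float):
    """Return (image distance, lateral magnification) for a thin converging lens."""
    if object_dist_mm == focal_length_mm:
        return float("inf"), float("inf")  # rays emerge parallel: no real image
    # 1/f = 1/d_o + 1/d_i  =>  d_i = 1 / (1/f - 1/d_o)
    image_dist = 1.0 / (1.0 / focal_length_mm - 1.0 / object_dist_mm)
    magnification = -image_dist / object_dist_mm
    return image_dist, magnification

d_i, m = thin_lens_image(focal_length_mm=10.0, object_dist_mm=30.0)
print(f"image at {d_i:.1f} mm, magnification {m:.2f}")  # → image at 15.0 mm, magnification -0.50
```

A negative magnification indicates the familiar inverted real image, which the AR app can render directly on the virtual bench.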
The demand for wireless solutions in industrial applications has been increasing since the early nineties. This trend is not only ongoing, it is further pushed by developments in the area of software stacks like the latest Bluetooth Low Energy stack, as well as by new chip designs and powerful, highly integrated electronic hardware. The acceptance of wireless technologies as a possible solution for industrial applications has overcome the entry barrier [1]. The first step towards seeing wireless as a standard for many industrial applications is almost accomplished. Nevertheless, there is hardly any acceptance of wireless technology for safety applications. One highly challenging and demanding requirement is still unsolved: the aspect of safety and robustness. These topics have been addressed in many cases, but always in a similar manner. WirelessHART, as an example, addresses this topic with redundant, so-called multiple propagation paths and frequency hopping to cope with interference and the loss of network participants. So far, the pure peer-to-peer link is rarely investigated and few safety solutions are available. One product, called LoRa™, can be seen as one possible solution to address this lack of safety within wireless links. This paper focuses on the safety performance evaluation of a modem chip design. The use of diverse and redundant wireless technologies like LoRa can lead to an increased acceptance of wireless in safety applications. Many measurements in real industrial applications have been carried out in order to benchmark the new chip in terms of the safety aspects. These research results can help to raise the level of confidence in wireless. In this paper, the term “safety” is used for data transmission reliability.
In addition to traditional methods in product development, the increasing availability of two new 3D digital technologies, namely digital manufacturing (3D-printing) and the digitizing of surfaces (3D-scanning), offers new opportunities in product development processes today. With regard to the systematic integration of these technologies into the education of students in the field of product development, however, only a small number of approaches exist so far. This paper explores several ways in which 3D digital technologies can productively be used in design education. The innovative aspects include that the students assemble and install the 3D-printers themselves, and that they are introduced to an approach that combines 3D-scanning with subsequent 3D-printing.
This paper presents a new approach to teaching competence in additive manufacturing to engineering students in product development. Particularly new to this approach is the combination of the students' autonomous assembly and commissioning of a 3D-printer with their independent development of design guidelines for components produced with this new technology. In this way, the students gain first practical experience with data preparation, the additive manufacturing process itself, and the required post-treatment of the 3D-printed parts. To give the students a significantly deeper insight into the workings of 3D-printing, a new approach was developed for the Rapid Prototyping workshop, in which the students first assemble a 3D-printer from a construction kit themselves and then commission it. This enables the students to gain a better understanding of the functionality and configuration of additive manufacturing. In a next step, the students use the 3D-printers they constructed themselves to produce components taken from a database. Finally, the students' experiences during the workshop are evaluated to review the effectiveness of the new approach.
Application of Polymer Plaster Composites in Additive Manufacturing of High-Strength Components
(2015)
Today, 3D-printing with polymer plaster composites is a common method in additive manufacturing. This technique has proven especially suitable for the production of presentation models, due to the low cost of materials and the possibility of producing color models, but it currently requires refinishing through the manual application of a layer of resin. Even so, the strength of these printed components is very limited, as the applied resin only penetrates a thin edge layer on the surface. This paper develops a new infiltration technique that allows for a significant increase in the strength of the 3D-printed component. For this process, the components are first dehydrated in a controlled two-tier procedure before they are penetrated with high-strength resin. The infiltrate used in this process differs significantly from materials traditionally used for infiltration. The result is an almost complete penetration of the components with high-strength infiltrate. As the whole process is computer-integrated, the results are also easier to reproduce than with manual infiltration. On the basis of extensive material testing with different test specimens and testing methods, it can be demonstrated that a significant increase in strength and hardness is achieved. Finally, this paper also considers the cost and energy consumption of this new infiltration method. As a result of this new technology, the scope of applicability of 3D-printing can be extended to cases that require significantly more strength, such as the production of tools for the shaping of metals or the molding of plastics. Furthermore, both the process itself and the parameters used are monitored and can be optimized for individual requirements and different fields of application.