In recent years, lightweight cryptography has received a lot of attention. Many primitives suitable for resource-restricted hardware platforms have been proposed. In this paper, we present a cryptanalysis of the new stream cipher A2U2 presented at IEEE RFID 2011 [9], which has a key length of 56 bits. We start by disproving and then repairing an extremely efficient attack presented by Chai et al. [8], showing that A2U2 can be broken in less than a second in the chosen-plaintext case. We then turn our attention to the more challenging known-plaintext case and propose a number of attacks. A guess-and-determine approach combined with algebraic cryptanalysis yields an attack that requires about 2^49 internal guesses. We also show how to determine the 5-bit counter key and how to reconstruct the 56-bit key in about 2^38 steps if the attacker can freely choose the IV. Furthermore, we investigate the possibility of exploiting the knowledge of a “noisy keystream” by solving a Max-PoSSo problem. We conclude that the cipher needs to be repaired and point out a number of simple measures that would prevent the above attacks.
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector. Here, autonomous cars can massively help to reduce delivery effort and time by actively supporting the courier. One key component, of course, is the autonomous vehicle itself. Nevertheless, besides the autonomous vehicle, a flexible and secure communication architecture is also a crucial component impacting the overall performance of such a system, since it is required to allow continuous interaction between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between different system entities while keeping those communications secure. The architecture uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN) to achieve its goals.
Inventive Problem Solving with TRIZ: Goal Description, Problem Definition, and Solution Prioritization
(2017)
The theory of inventive problem solving, TRIZ, is a systematic framework of assumptions, rules, methods, and tools for the innovative improvement of systems, e.g., products, processes, services, or organizations. This guideline explains TRIZ tools and methods that are used in particular in the "goal description", "problem definition", and "solution prioritization" phases of the problem-solving process. The level of detail of the description allows an assessment of the tools and methods with respect to their purpose, results, and mode of operation. The description of each method and tool contains concrete statements about the objective and the result of its use.
Due to its potential to improve the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load factors. This leads to widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs “WiMODino” using LoRaWAN with a “Lite Gateway” and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding to a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demand lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least square method is developed to describe the relationship between random variations in capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
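As a rough illustration of the modelling idea described above, the sketch below simulates switched-capacitor cell voltages, fits an ordinary-least-squares model to the systematic trend, and keeps the mismatch residuals as key bits. All names, the linear-trend assumption, and every numeric value are hypothetical; the paper's actual circuit model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 64

# Hypothetical PUF array: each cell's output voltage is a systematic term
# (e.g. a gradient across the die) plus a random mismatch term (the entropy).
systematic = 0.02 * np.arange(n_cells) / n_cells   # assumed linear trend
mismatch = rng.normal(0.0, 0.01, n_cells)          # random process variation
v_out = 0.5 * (1.0 + systematic + mismatch)        # assumed linear mapping

# Ordinary least squares fit of the systematic component.
X = np.column_stack([np.ones(n_cells), np.arange(n_cells)])
beta, *_ = np.linalg.lstsq(X, v_out, rcond=None)

# Residuals after removing the systematic component carry the random
# capacitor variation; their signs give the device-unique key bits.
residual = v_out - X @ beta
key_bits = (residual > 0).astype(int)
print(key_bits)
```

Filtering out the fitted trend before thresholding mirrors the paper's goal of improving secret quality by separating systematic from random variation.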
A systematic toxicological analysis procedure using high-performance thin-layer chromatography in combination with fibre-optical scanning densitometry for the identification of drugs in biological samples is presented. Two examples illustrate the practicability of the technique: first, the identification of a multiple intake of analgesics (codeine, propyphenazone, tramadol, flupirtine, and lidocaine), and second, the detection of the sedative diphenhydramine. In both cases, authentic urine specimens were used. The identifications were carried out by an automatic measurement and computer-based comparison of in situ UV spectra with data from a compiled library of reference spectra using the cross-correlation function. The technique allowed parallel recording of chromatograms and in situ UV spectra in the range of 197–612 nm. Unlike in conventional densitometry, no dependence of the UV spectra on substance concentration was observed in the range of 250–1000 ng/spot.
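The library comparison via the cross-correlation function can be sketched as follows. The spectra here are toy Gaussians on the stated 197–612 nm grid, and the compound names merely echo the abstract; a real library would hold measured reference spectra.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    # Zero-lag normalized cross-correlation between two spectra.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

# Toy "UV spectra": Gaussian bands on a shared 197-612 nm wavelength grid.
wl = np.linspace(197, 612, 416)
def spectrum(center_nm, width_nm):
    return np.exp(-((wl - center_nm) / width_nm) ** 2)

library = {
    "codeine": spectrum(285, 20),          # band positions are illustrative
    "diphenhydramine": spectrum(258, 15),
}

# Measured spectrum: diphenhydramine band plus measurement noise.
measured = spectrum(258, 15) + np.random.default_rng(0).normal(0, 0.01, wl.size)

# Identify the best match by maximum cross-correlation against the library.
best = max(library, key=lambda name: normalized_cross_correlation(measured, library[name]))
print(best)  # diphenhydramine
```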
Soccer simulation league is one of the founding leagues of RoboCup. In this paper we discuss the past, present and planned future achievements and changes. Also we summarize the connections and inter-league achievements of this league and provide an overview of the community contributions that made this league successful.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of resource scheduling methods on uplink latency.
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra Reliable Low Latency Communications (URLLC) applications with cellular networks, especially when safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are finally detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing this, a starting point to migrate to Narrowband IoT (NB-IoT) or 5G solutions is given.
Next-generation cellular networks are expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing and smart dustbins, but there is additional demand for applications with low latency requirements, like industrial automation, driverless cars, and so on. Improvements are required in 4G Long Term Evolution (LTE) networks towards the development of next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra Reliable Low Latency Communication (URLLC) networks.
Ripple: Overview and Outlook
(2015)
Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second-highest market cap after Bitcoin, there are, surprisingly, no studies that analyze the provisions of Ripple.
In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system.
In this paper we integrate the ideas of network coding and relays into an existing practical network architecture used in a wireless network scenario. Specifically, we use the COPE architecture to test our ideas. Since previous works have focused on the communication aspect at the physical layer level, we attempt to take it one step further by including the MAC layer. Our idea is based on information theoretic concepts developed by Shannon in order to reliably apply network coding to increase the net throughput.
We present a 3D simulation approach utilising the diffuse interface representation of the phase-field method combined with a heat transfer equation to analyse the thermal conductivity in air-filled aluminium foams with complex cellular structures of different porosity. Algorithmic methods are introduced to create synthetic open-cell foam structures and to compute the thermal conductivity by means of phase-field modelling. A material law for the effective thermal conductivity is derived by determining the appropriate exponent depending on the relative density in the system. The results are compared with the thermal conductivity in massive aluminium and in pure air.
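Deriving a material law of the kind described above usually means fitting the exponent of a power law relating effective conductivity to relative density. The sketch below does this by linear regression in log-log space; the density values and the exponent are synthetic stand-ins, not the paper's simulation results.

```python
import numpy as np

# Synthetic data standing in for simulated foam conductivities: assume the
# effective-to-solid conductivity ratio follows a power law in relative density.
rho_rel = np.array([0.05, 0.10, 0.20, 0.30, 0.50])  # relative densities (assumed)
true_n = 1.8                                         # hypothetical exponent
k_ratio = rho_rel ** true_n                          # k_eff / k_solid

# A power law k_ratio = C * rho_rel**n is linear in log-log space:
# log(k_ratio) = n * log(rho_rel) + log(C), so the slope is the exponent.
n_fit, log_c = np.polyfit(np.log(rho_rel), np.log(k_ratio), 1)
print(round(n_fit, 3))  # 1.8
```

With noisy simulation data the same regression gives a least-squares estimate of the exponent rather than an exact recovery.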
Gas adsorption studies of CO2 and N2 in spatially aligned double-walled carbon nanotube arrays
(2013)
Gas adsorption studies (CO2 and N2) over a wide pressure range on vertically, highly aligned, dense double-walled carbon nanotube arrays of high purity and high specific surface area are reported. At high pressures, the adsorption capacity of these materials was found to be comparable to those of metal-organic frameworks and mesoporous molecular sieves. These highly aligned CNT arrays were chemically modified by treatment with oxygen plasma and structurally modified by decreasing the diameter of the individual carbon nanotubes. Oxygen plasma treatment led to the grafting of a large number of C–O functional groups onto the CNT surface, which further increased the gas adsorption capacity. Gas adsorption was found to depend on tube diameter, increasing as the diameter of the individual CNTs in the bundles decreases. Our studies show that in lower pressure regimes, plasma-functionalized carbon nanotubes exhibit better adsorption characteristics, whereas at higher pressures, lower-diameter carbon nanotube structures exhibit better gas adsorption characteristics.
Since their dawning, space communications have been among the strongest driving applications for the development of error correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is a bit different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised because of an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
NEXCODE is a project promoted by the European Space Agency aimed at the research, design, development, and demonstration of a receiver chain for telecommand links in space missions, including the presence of new short low-density parity-check codes for error correction. These codes have excellent performance from the error-rate viewpoint but also pose new challenges as regards synchronization issues and implementation. In this paper, after a short review of the results obtained through numerical simulations, we present an overview of the breadboard designed for practical testing and the test plan proposed for the verification of the breadboard and the validation of the new codes and novel synchronization techniques under relevant operating conditions.
This work provides a series of methane adsorption isotherms and breakthrough curves on one 5A zeolite and one activated carbon. Breakthrough curves of CH4 were obtained from dynamic column measurements at different temperature and pressure conditions for concentrations of 4.4–17.3 mol% in H2/CH4 mixtures. A simple model was developed to simulate the curves using measured and calculated data inputs. The results show that the model predictions agree very well with the experiments.
The separation of nitrogen and methane from hydrogen-rich mixtures is systematically investigated on a recently developed binder-free zeolite 5A. For this adsorbent, the present work provides a series of experimental data on adsorption isotherms and breakthrough curves of nitrogen and methane, as well as their mixtures in hydrogen. Isotherms were measured at temperatures of 283–313 K and pressures of up to 1.0 MPa. Breakthrough curves of CH4, N2, and CH4/N2 in H2 were obtained at temperatures of 300–305 K and pressures ranging from 0.1 to 6.05 MPa with different feed concentrations. An LDF-based model was developed to predict breakthrough curves using measured and calculated data as inputs. The number of parameters and the use of correlations were restricted to focus on the importance of measured values. For the given assumptions, the results show that the model predictions agree satisfactorily with the experiments under the different operating conditions applied.
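An LDF-based breakthrough model of the kind mentioned above can be sketched as a chain of well-mixed cells with a linear driving force uptake term. Everything below (Langmuir equilibrium, cell count, rate constants, time scales) is an illustrative assumption, not the parameter set of the paper's model.

```python
import numpy as np

# Minimal LDF breakthrough sketch: one adsorbable component in an inert
# carrier, the column discretized into a chain of well-mixed cells.
n_cells = 20
k_ldf = 0.5            # LDF rate constant (assumed), 1/time
qmax, b = 2.0, 5.0     # Langmuir parameters (assumed)
c_feed = 1.0           # feed concentration (normalized)
tau = 0.2              # residence time per cell (assumed)
rho_ratio = 1.0        # adsorbent mass per gas volume in a cell (assumed)

c = np.zeros(n_cells)  # gas-phase concentration per cell
q = np.zeros(n_cells)  # adsorbed loading per cell
dt, steps = 0.01, 20000
outlet = []
for _ in range(steps):
    q_eq = qmax * b * c / (1.0 + b * c)            # Langmuir equilibrium loading
    dq = k_ldf * (q_eq - q)                        # linear driving force uptake
    c_in = np.concatenate(([c_feed], c[:-1]))      # upstream cell feeds downstream
    c += dt * ((c_in - c) / tau - rho_ratio * dq)  # convection minus adsorption sink
    q += dt * dq
    c = np.clip(c, 0.0, None)
    outlet.append(c[-1])

# The outlet concentration stays near zero while the bed loads, then rises
# toward the feed value: a breakthrough curve.
print(outlet[-1])
```

Replacing the toy parameters with measured isotherm and column data is what turns such a sketch into a predictive model.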
As a basis for the evaluation of hydrogen storage by physisorption, adsorption isotherms of H2 were experimentally determined for several porous materials at 77 K and 298 K at pressures up to 15 MPa. Activated carbons and MOFs were studied as the most promising materials for this purpose. A particular focus was placed on how to determine whether a material is feasible for hydrogen storage, addressing an assessment method and the pitfalls and problems of determining viability. For a quantitative evaluation of the feasibility of sorptive hydrogen storage in a general analysis, it is suggested to compare the stored amount in a theoretical tank filled with adsorbent to the amount of hydrogen stored in the same tank without adsorbent. According to our results, an “ideal” sorbent for hydrogen storage at 77 K is calculated to exhibit a specific surface area of >2580 m2 g−1 and a micropore volume of >1.58 cm3 g−1.
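The suggested comparison can be sketched as a back-of-the-envelope calculation: hydrogen stored in a tank with adsorbent (adsorbed amount plus gas in the remaining void) versus the same tank holding compressed gas only. Every number below is an illustrative assumption, and an ideal-gas approximation stands in for a real equation of state.

```python
# Feasibility check sketch: tank with adsorbent vs. the same tank without.
R, T, p = 8.314, 77.0, 1.5e6      # J/(mol K), temperature in K, pressure in Pa
v_tank = 0.05                      # tank volume, m^3 (assumed)
m_ads = 20.0                       # adsorbent mass in the tank, kg (assumed)
uptake = 0.030                     # H2 uptake, kg per kg adsorbent (assumed)
void_fraction = 0.5                # gas-accessible volume fraction (assumed)
M_H2 = 2.016e-3                    # molar mass of H2, kg/mol

gas_density = p * M_H2 / (R * T)   # ideal-gas approximation, kg/m^3

# With adsorbent: adsorbed hydrogen plus compressed gas in the void volume.
with_ads = m_ads * uptake + gas_density * v_tank * void_fraction
# Without adsorbent: compressed gas filling the whole tank.
without_ads = gas_density * v_tank

print(with_ads > without_ads)  # True for these assumed numbers
```

A sorbent is only worthwhile when this comparison comes out positive at the operating pressure; at high pressures the displaced void volume can make the empty tank win.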
The coronavirus crisis has changed economic and social life worldwide in a previously unknown way. The already complex challenges in times of climate change have thus grown even further. Cooperative innovation ecosystems can provide solution approaches for the profound changes in the entrepreneurial, municipal, and societal environment.
Roots, Values, and Visions
(2018)
The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI’s trustworthiness, fairness, interpretability, and accountability. In order to foster trust in AI, it is important to consider the potential of interactive visualization, and how such visualizations help build trust in AI systems. This manifesto discusses the relevance of interactive visualizations and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
Background: Increasing awareness of the importance of evidence-based medicine is demonstrated not only by an increasing number of articles addressing it but also by a specialty-wide evidence-based medicine initiative. The authors critically analyzed the quality of reporting of randomized controlled trials published in this Journal over a 21-year period (1990 to 2010).
Methods: A hand search was conducted, including all issues of Plastic and Reconstructive Surgery from January of 1990 to December of 2010. All randomized controlled trials published during this time period were identified with the Cochrane decision tree for identification of randomized controlled trials. To assess the quality of reporting, a modification of the checklist of the Consolidated Standard of Reporting Trials Statement was used.
Results: Of 7121 original articles published from 1990 to 2010 in the Journal, 159 (2.23 percent) met the Cochrane criteria. A significant increase in the absolute number of randomized controlled trials was seen over the study period (p < 0.0001). The median quality of these trials from 1990 to 2010 was "fair," with a trend toward improved quality of reporting over time (p = 0.127).
Conclusions: A favorable trend is seen with respect to an increased number of published randomized controlled trials in Plastic and Reconstructive Surgery. Adherence to standard reporting guidelines is recommended, however, to further improve the quality of reporting. Consideration may be given to providing information regarding the quality of reporting in addition to the "level of evidence pyramid," thus facilitating critical appraisal.
The purpose of this paper is to address ethical issues concerning the development and application of Assistive Technology at Workplaces (ATW). We give a concrete technical concept of how such technology might be constructed and propose eight technical functions it should adopt in order to serve its purpose. We then discuss the normative questions of why one should use ATW, and by what means. We argue that ATW is good to the extent that it ensures social inclusion, and we consider four normative domains in which its worth might consist. In addition, we insist that ATW must satisfy two requirements of good workplaces, which we specify as (a) an exploitation restraint and (b) a duty of care.
The design of control systems for concentrator photovoltaic power plants will become more challenging in the future. Reasons are cost pressure, the increasing size of power plants, and new applications for operation, monitoring, and maintenance required by grid operators, manufacturers, and plant operators. Concepts and products for fixed-mounted photovoltaics can only partly be adapted, since control systems for concentrator photovoltaics are considerably more complex due to the required highly accurate sun tracking. In order to assure reliable operation over a lifetime of more than 20 years, robustness of the control system is one crucial design criterion. This work considers common engineering techniques for robustness, safety, and security. Potential failures of the control system are identified and their effects are analyzed. Different attack scenarios are investigated. Outcomes are design criteria that counter both failures of system components and malicious attacks on the control system of future concentrator photovoltaic power plants. Such design criteria are a transparent state management through all system layers, self-tests, and update capabilities for security concerns. The findings enable future research to develop a more robust and secure control system for concentrator photovoltaics when implementing new functionalities in the next generation.
The communication system of a large-scale concentrator photovoltaic power plant is very challenging. Manufacturers are building power plants comprising thousands of communication-equipped sun-tracking systems distributed over a wide area. Research is necessary to build a scalable communication system enabling modern control strategies. This poster abstract describes the ongoing work on the development of a simulation model of such power plants in OMNeT++. The model uses the INET Framework to build a communication network based on Ethernet. First results and problems of timing and data transmission experiments are outlined. The model enables research on new communication and control approaches to improve the functionality and efficiency of power plants based on concentrator photovoltaic technology.
In the present study, in vitro toxicity as well as biopersistence and photopersistence of four artificial sweeteners (acesulfame, cyclamate, saccharine, and sucralose) and five antibiotics (levofloxacin, lincomycin, linezolid, marbofloxacin, and sarafloxacin) and of their phototransformation products (PTPs) were investigated. Furthermore, antibiotic activity was evaluated after UV irradiation and after exposure to inocula of a sewage treatment plant. The study reveals that most of the tested compounds and their PTPs were neither readily nor inherently biodegradable in the Organisation for Economic Co-operation and Development (OECD)-biodegradability tests. The study further demonstrates that PTPs are formed upon irradiation with an Hg lamp (UV light) and, to a lesser extent, upon irradiation with a Xe lamp (mimics sunlight). Comparing the nonirradiated with the corresponding irradiated solutions, a higher chronic toxicity against bacteria was found for the irradiated solutions of linezolid. Neither cytotoxicity nor genotoxicity was found in human cervical (HeLa) and liver (Hep-G2) cells for any of the investigated compounds or their PTPs. Antimicrobial activity of the tested fluoroquinolones was reduced after UV treatment, but it was not reduced after a 28-day exposure to inocula of a sewage treatment plant. This comparative study shows that PTPs can be formed as a result of UV treatment. The study further demonstrated that UV irradiation can be effective in reducing the antimicrobial activity of antibiotics, and consequently may help to reduce antimicrobial resistance in wastewaters. Nevertheless, the study also highlights that some PTPs may exhibit a higher ecotoxicity than the respective parent compounds. Consequently, UV treatment does not transform all micropollutants into harmless compounds and may not be a large-scale effluent treatment option.
The formation and analysis of ten microporous triazolyl-isophthalate-based MOFs, comprising nine isomorphous and one isostructural compound, is presented. The compounds 1M–3M with the general formula ³∞[M(R1-R2-trz-ia)]·xH2O (M2+ = Co2+, Cu2+, Zn2+, Cd2+; R1 = H, Me; R2 = 2py, 2pym, prz; 2py = 2-pyridinyl, 2pym = 2-pyrimidinyl, prz = pyrazinyl) crystallize with rtl topology. They are available as single crystals and are also easily accessible on a multi-gram scale by refluxing the metal salts and the protonated ligands in a solvent. Their isomorphous structures facilitate the synthesis of heteronuclear MOFs; in the case of 2M, Co2+ ions could be gradually substituted by Cu2+ ions. The Co2+:Cu2+ ratios were determined by ICP-OES spectroscopy, and the distribution of Co2+ and Cu2+ in the crystalline samples was investigated by SEM-EDX analysis, leading to the conclusions that Cu2+ is incorporated into the framework more favorably than Co2+ and, moreover, that the distribution of the two metal ions between and within the crystals is inhomogeneous if the crystals were grown slowly. The various compositions of the heteronuclear materials lead to different colors, and the sorption properties for CO2 and N2 depend on the incorporated metal ions.
Several cloud schedulers have been proposed in the literature with different optimization goals, such as reducing power consumption, reducing the overall operational costs, or decreasing response times. A less common goal is to enhance system security by applying specific scheduling decisions. The security risk of covert channels has been known for quite some time, but it is now back in the focus of research because of the multi-tenant nature of cloud computing and the co-residency of several per-tenant virtual machines on the same physical machine. In particular, several cache covert channels have been identified that aim to bypass a cloud infrastructure's sandboxing mechanism. For instance, cache covert channels like the one proposed by Xu et al. use the idealistic scenario of two alternately running colluding processes in different VMs accessing the cache to transfer bits by measuring cache access time. Therefore, in this paper we present a cascaded cloud scheduler coined C3-Sched, aimed at mitigating the threat of leakage of customers' data via cache covert channels by preventing processes from accessing cache lines alternately. At the same time, we aim at maintaining cloud performance and minimizing the global scheduling overhead.
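The channel the scheduler defends against can be illustrated with a toy simulation: a "sender" evicts shared cache lines to signal a 1 and leaves them cached for a 0, and the "receiver" classifies its own access latency against a threshold. The latencies, noise model, and threshold below are simulated assumptions, not measurements of a real cache.

```python
import random

# Simulated cache-timing covert channel (illustrative values only):
# a cache hit is fast, an eviction-induced miss is slow.
HIT_NS, MISS_NS, THRESHOLD_NS = 4.0, 100.0, 40.0
rng = random.Random(42)

def access_latency(line_evicted: bool) -> float:
    # Receiver's probe latency for one cache line, with measurement noise.
    base = MISS_NS if line_evicted else HIT_NS
    return base + rng.gauss(0.0, 3.0)

# Sender encodes bits by evicting (1) or not evicting (0) the shared line;
# receiver decodes by thresholding its observed access time.
message = [1, 0, 1, 1, 0, 0, 1, 0]
received = [int(access_latency(bit == 1) > THRESHOLD_NS) for bit in message]
print(received == message)
```

Scheduling colluding processes so they never run alternately on the same cache, as C3-Sched aims to do, removes the timing contrast this decoding relies on.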
Covert channels have been known for a long time because of their versatile forms of appearance. For nearly every technical improvement or change in technology, such channels have been (re-)created or known methods have been adapted. For example, the introduction of hyperthreading technology has introduced new possibilities for covert communication between malicious processes, because they can now share the arithmetic logic unit as well as the L1 and L2 caches, which enables establishing multiple covert channels. Even virtualization, which is known for its isolation of multiple machines, is prone to covert- and side-channel attacks because of the sharing of resources. Therefore, it is not surprising that cloud computing is not immune to this kind of attack. Moreover, cloud computing with multiple, possibly competing users or customers using the same shared resources may elevate the risk of illegitimate communication. In such a setting, the “air gap” between physical servers and networks disappears, and only the means of isolation and virtual separation serve as a barrier between adversary and victim. In the work at hand, we provide a survey of vulnerable spots that an adversary could exploit when trying to exfiltrate private data from target virtual machines through covert channels in a cloud environment. We evaluate the feasibility of example attacks and point out proposed mitigation solutions where they exist.
This chapter portrays the historical and mathematical background of dynamic and procedural content generation (PCG). We portray and compare various PCG methods and analyze which mathematical approach is suited for typical applications in game design. In the next step, a structural overview of games applying PCG as well as types of PCG is presented. As abundant PCG content can be overwhelming, we discuss context-aware adaptation as a way to adapt the challenge to individual players’ requirements. Finally, we take a brief look at the future of PCG.