In recent years, physically unclonable functions (PUFs) have gained significant attention in IoT security applications, such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of different devices to generate unique fingerprints for security applications. When generating PUF-based secret keys, the reliability and entropy of the keys are vital factors. This study proposes a novel method for generating PUF-based keys from a set of measurements. First, it formulates the group-based key generation problem as an optimization problem and solves it using integer linear programming (ILP), which guarantees finding the optimal solution. Then, a novel scheme for extracting keys from groups is proposed, which we call positioning syndrome coding (PSC). The use of ILP and the introduction of PSC facilitate the generation of high-entropy keys with low error correction costs. These new methods have been tested by applying them to the output of a capacitor network PUF. The results confirm the suitability of ILP and PSC for generating high-quality keys.
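The general idea of extracting key material from the ordering of measurements within groups can be illustrated with a toy sketch. This is not the paper's ILP formulation or its PSC scheme; the function, the group layout, and the numbers below are invented for illustration only.

```python
# Toy sketch: deriving key symbols from the ordering of PUF measurements
# within fixed groups. This only illustrates the general idea of
# group-based key extraction; the paper's ILP grouping and positioning
# syndrome coding (PSC) are more elaborate.

def group_key(measurements, group_size):
    """Split measurements into groups and encode, per group, the
    position of the largest value as one key symbol."""
    symbols = []
    for i in range(0, len(measurements) - group_size + 1, group_size):
        group = measurements[i:i + group_size]
        # The index of the maximum is reproducible across noisy
        # re-measurements as long as the values are well separated.
        symbols.append(max(range(group_size), key=lambda j: group[j]))
    return symbols

# Example: 8 noisy capacitance readings, groups of 4 -> 2 symbols in [0, 3]
readings = [4.71, 3.95, 5.02, 4.33, 3.80, 4.95, 4.12, 4.60]
print(group_key(readings, 4))  # -> [2, 1]
```

Reliability then hinges on choosing groups whose values are far apart, which is exactly what an optimal (e.g. ILP-based) grouping can maximize.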
This paper introduces the open-source model MyPyPSA-Ger, a myopic optimization model developed to represent the German energy system with a detailed mapping of the electricity sector on a highly disaggregated level, spatially and temporally, with regional differences and investment limitations. Furthermore, it offers new perspectives on the German federal government's 2050 goal of a greenhouse-gas-neutral electricity sector by proposing new CO2 allowance strategies. Moreover, the regional differences in Germany are discussed: their role and impact on the energy transition, and which regions and states will drive renewable energy utilization forward.
Following a scenario-based analysis, the results highlight the major cornerstones of the energy transition path from 2020 to 2050. Solar, onshore wind, and gas-fired power plants will play a fundamental role in future electricity systems. Biomass, run-of-river, and offshore wind technologies will be utilized in the system as base-load generation technologies. Solar and onshore wind will be installed almost everywhere in Germany. However, due to Germany's weather and geographical features, the southern and northern regions will play a more important role in the energy transition.
Higher CO2 allowance costs will help achieve the 1.5-degree target for the electricity system and will allow for a rapid transition. Moreover, the higher the CO2 tax and the earlier it is applied to the system, the lower the cost of the energy transition and the more emissions are saved throughout the transition period. An earlier phase-out of coal power plants is not necessary with high CO2 taxes, because unit commitment changes: gas-fired plants are dispatched before coal-fired plants. Moderate to low CO2 allowance costs, or the absence of a clear transition policy, will be more expensive, and the CO2 budget will be exceeded. Nonetheless, even with no policy, renewables still dominate the energy mix of the future.
However, maintaining the maximum historical installation rates at both the national and regional level, under the current emissions reduction strategy, will not be enough to reach a climate-neutral electricity system. Therefore, the national and regional installation rates required to achieve the federal government's emission reduction goals are determined. Energy strategists and decision makers will have to resolve great challenges in order to stay in line with the 1.5-degree target.
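The myopic approach described above can be sketched as a simple rolling loop over planning years: each year is optimized on its own, and the resulting capacities are carried forward into the next year. This is a heavily simplified, hypothetical skeleton; `optimize_year` stands in for a full single-year optimization (e.g. a PyPSA linear optimal power flow), and all names and numbers are illustrative, not model output.

```python
# Minimal sketch of a myopic capacity-expansion loop, in the spirit of
# models like MyPyPSA-Ger: each planning year is optimized on its own,
# and the resulting capacities become the starting point (lower bound)
# for the next year. All values here are illustrative placeholders.

def optimize_year(year, installed, co2_price):
    """Placeholder for a single-year optimization. Here we just
    'build' a fixed 2 GW increment of each technology."""
    return {tech: cap + 2.0 for tech, cap in installed.items()}

installed = {"solar": 54.0, "onshore_wind": 56.0}  # GW, starting point
co2_price = 25.0  # EUR/t, illustrative

for year in range(2020, 2051, 5):
    installed = optimize_year(year, installed, co2_price)
    co2_price *= 1.5  # tighten the CO2 allowance cost each step

print(installed)
```

The myopic structure, unlike perfect-foresight optimization, means each year's decisions are made without knowledge of future prices, which is what makes early, high CO2 prices matter so much in the results above.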
Recently, the German federal government published new climate goals aiming at climate neutrality by 2045. This paper demonstrates a path to a cost-optimal energy supply system for the German power grid until the year 2050. With special regard to regionality, the analysis is based on yearly myopic optimization and determines the required energy system transformation measures and the associated system costs. The results point out that energy storage systems (ESS) are fundamental for renewables integration and a feasible energy transition. Moreover, investment in storage technologies increases the usage of solar and wind technologies. Solar energy investments were largely accompanied by the installation of short-term battery storage, while longer-term storage technologies, such as H2, were accompanied by high installations of wind technologies. The results also indicate that hydrogen investments are expected to displace short-term batteries if their cost continues to decrease sharply. Moreover, with a strong presence of ESS in the energy system, biomass energy is expected to be pushed out of the energy mix entirely. With the current emission reduction strategy and without a strong presence of large-scale ESS in the system, it is unlikely that the Paris Agreement 2 °C target will be achieved by 2050, let alone the 1.5 °C target.
A ban on imports of Russian energy sources to Germany is currently the subject of growing discussion. We want to support this discussion by showing how the electricity system in Germany can manage low energy imports in the short term and which measures are necessary to still meet the climate protection targets. In this paper, we examine the impact of a complete stop of Russian fossil fuel imports on the electricity sector in Germany, and how this will affect the climate goals of an earlier coal phase-out and climate neutrality by 2045.
Following a scenario-based analysis, the results indicate what would be required for Germany to rely entirely on its scarce domestic non-renewable energy resources. Huge investments would be needed to ensure a secure supply of electricity, in both renewable energy sources (RES) and energy storage systems (ESS). The key finding is that a rapid expansion of renewables and storage technologies will significantly reduce the dependence of the German electricity system on energy imports. Large-scale integration of renewable energy avoids any significant imports of natural gas, hard coal, and mineral oil, even in the long term. The results show that a ban on fossil fuel imports from Russia opens up substantial opportunities to go beyond the German government's climate targets, with the 1.5-degree target being achieved in the electricity system.
Method and system for extracting metal and oxygen from powdered metal oxides (EP000004170066A2)
(2023)
A method for extracting metal and oxygen from powdered metal oxides in an electrolytic cell is proposed, the electrolytic cell comprising a container, a cathode, an anode and an oxygen-ion-conducting membrane, the method comprising: providing a solid oxygen-ion-conducting electrolyte powder into a container; providing a feedstock comprising at least one metal oxide in powdered form into the container; and applying an electric potential across the cathode and the anode, the cathode being in communication with the electrolyte powder and the anode being in communication with the membrane in communication with the electrolyte powder, such that at least one respective metallic species of the at least one metal oxide is reduced at the cathode and oxygen is oxidized at the anode to form molecular oxygen, wherein the potential across the cathode and the anode is greater than the dissociation potential of the at least one metal oxide and less than the dissociation potential of the solid electrolyte powder and the membrane.
A novel peptidyl-Lys metalloendopeptidase (Tc-LysN) from Trametes coccinea was recombinantly expressed in Komagataella phaffii using the native pro-protein sequence. The peptidase was secreted into the culture broth simultaneously as a zymogen (~38 kDa) and as the mature enzyme (~19.8 kDa). The mature Tc-LysN was purified to homogeneity with single-step anion-exchange chromatography at pH 7.2. N-terminal sequencing using TMTpro Zero and mass spectrometry of the mature Tc-LysN indicated that the pro-peptide was cleaved between amino acid positions 184 and 185 at the Kex2 cleavage site present in the native pro-protein sequence. The pH optimum of Tc-LysN was determined to be 5.0, while it maintained ≥60% activity between pH 4.5 and 7.5 and ≥30% activity between pH 8.5 and 10.0, indicating its broad applicability. The temperature optimum of Tc-LysN was determined to be 60 °C. After 18 h of incubation at 80 °C, Tc-LysN still retained ~20% activity. Organic solvents such as methanol and acetonitrile, at concentrations as high as 40% (v/v), were found to enhance Tc-LysN's activity by up to ~100% and ~50%, respectively. Tc-LysN's thermostability, tolerance of up to 8 M urea and of high concentrations of organic solvents, and acidic pH optimum make it a viable candidate for proteomics workflows in which alkaline conditions might pose a challenge. Nano-LC-MS/MS analysis revealed a bovine serum albumin (BSA) sequence coverage of 84% using Tc-LysN peptides, comparable to the 90% coverage obtained with trypsin peptides.
Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or alternative storage schemes have been proposed in the literature, and the use of cloud resources for blockchain storage has been studied extensively in recent years. Nonetheless, block selection, i.e., identifying the blocks to be transferred to the cloud, remains a substantial challenge in integrating cloud resources with blockchain. This paper proposes a deep reinforcement learning (DRL) approach to the block selection problem, converting the multi-objective optimization of block selection into a Markov decision process (MDP). We design a simulated blockchain environment for training and testing, and apply two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to the block selection problem, analyzing their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer compared with the full replication approach of conventional blockchain systems. Even the slower of the two DRL algorithms, A2C, achieves a run-time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution for the time-sensitive blockchain-IIoT environment.
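The MDP framing can be illustrated with a minimal, gym-style environment skeleton. This is purely illustrative: the state, reward, and one-shot dynamics below are simplified stand-ins invented for this sketch, not the paper's simulated blockchain environment or its multi-objective reward.

```python
# Sketch of the block-selection task cast as a simple MDP environment.
# State, reward, and dynamics here are simplified stand-ins.

import random

class BlockSelectionEnv:
    """At each step the agent decides, per block, whether to keep it
    locally (0) or offload it to the cloud (1)."""

    def __init__(self, n_blocks=10, seed=0):
        self.rng = random.Random(seed)
        self.n_blocks = n_blocks

    def reset(self):
        # State: (block size, recent access frequency) per block.
        self.state = [(self.rng.uniform(0.1, 1.0), self.rng.random())
                      for _ in range(self.n_blocks)]
        return self.state

    def step(self, actions):
        # Reward trades off freed local storage against the access
        # penalty of offloading frequently used blocks.
        reward = sum(size if a == 1 else 0.0
                     for (size, _), a in zip(self.state, actions))
        reward -= sum(freq if a == 1 else 0.0
                      for (_, freq), a in zip(self.state, actions))
        done = True  # one-shot selection episode in this toy version
        return self.state, reward, done, {}

env = BlockSelectionEnv()
state = env.reset()
# Naive policy: offload every block.
_, reward, done, _ = env.step([1] * env.n_blocks)
print(round(reward, 3))
```

A DRL agent such as A2C or PPO would then learn a policy mapping the per-block state to keep/offload actions so as to maximize this kind of reward.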
An Overview of Technologies for Improving Storage Efficiency in Blockchain-Based IIoT Applications
(2022)
Since the inception of blockchain-based cryptocurrencies, researchers have been fascinated with the idea of integrating blockchain technology into other fields, such as health and manufacturing. Despite the benefits of blockchain, which include immutability, transparency, and traceability, certain issues that limit its integration with IIoT still linger. One of these prominent problems is the storage inefficiency of the blockchain. Due to the append-only nature of the blockchain, the growth of the blockchain ledger inevitably leads to high storage requirements for blockchain peers. This poses a challenge for its integration with the IIoT, where high volumes of data are generated at a relatively faster rate than in applications such as financial systems. Therefore, there is a need for blockchain architectures that deal effectively with the rapid growth of the blockchain ledger. This paper discusses the problem of storage inefficiency in existing blockchain systems, how this affects their scalability, and the challenges that this poses to their integration with IIoT. This paper explores existing solutions for improving the storage efficiency of blockchain–IIoT systems, classifying these proposed solutions according to their approaches and providing insight into their effectiveness through a detailed comparative analysis and examination of their long-term sustainability. Potential directions for future research on the enhancement of storage efficiency in blockchain–IIoT systems are also discussed.
Germany was long considered the world's export champion, until it was overtaken by China in 2009. Both nations provide officially supported export credits to national exporting organizations, but the two systems operate differently. German export credit guarantees serve as a substitute when the private market is unable to assume the risks of exporting companies; the German export credit agency Euler Hermes is responsible for processing applications on behalf of the Federal Government. China is among the largest providers of export finance, with the institutions China EXIM and Sinosure. While Germany is bound by the OECD Consensus, which defines the level playing field, Chinese export credit agencies have greater flexibility, as they are not bound by international rules or agreements.
German banks support a wide range of cross-border business. Loans and letters of credit in particular are the most common transaction types in which German banks act as a financing party jointly with other foreign financial institutions. In such transactions, foreign business partners frequently demand compliance with foreign sanctions regulations and anchor this in the contractual documentation. If financial institutions participate, for example as borrowers, compliance with foreign sanctions regulations is demanded of the institutions directly. If, however, financial institutions act as lenders, which is more often the case, they in turn require the borrower to comply with foreign sanctions law. The obligation to comply with foreign sanctions regulations conflicts with anti-boycott rules at the national and, where applicable, the European level.
In this paper, a concept for a silicone-cast anthropomorphic replacement hand with an integrated sensory feedback system is presented. To construct the personalized replacement hand, a 3D scan of a healthy hand was used to create a 3D-printed mold using computer-aided design (CAD). To allow for movement of the index and middle fingers, a motorized orthosis was used. Information about the force applied while grasping and the degree of flexion of the fingers is recorded using two pressure sensors and one bending sensor in each movable finger. To integrate the sensors and additional cavities for increased flexibility, the fingers were cast in three parts, separately from the rest of the hand. A silicone adhesive (Silpuran 4200) was evaluated for joining the individual parts afterwards, using tests with different geometries. Furthermore, different test series for the secure integration of the sensors were performed, including measurements of the information recorded by the sensors. Based on these findings, skin-toned individual fingers and a replacement hand with integrated sensors were created. Using Silpuran 4200, it was possible to integrate the needed cavities and to place the sensors securely in the hand while retaining full flexion using a motorized orthosis. The measurements during different loadings and while grasping various objects proved that it is possible to realize such a sensory feedback system in a replacement hand. As a result, it can be stated that the cost-effective realization of a personalized, anthropomorphic replacement hand with an integrated sensory feedback system is possible using 3D scanning and 3D printing. By integrating smaller sensors, the risk of damaging the sensors through movement could be decreased.
The evolution of cellular networks from the first generation (1G) to the fourth generation (4G) was driven by the demand for user-centric downlink capacity, technically referred to as Mobile Broadband (MBB). With the fifth generation (5G), Machine Type Communication (MTC) has been added to the target use cases, and the upcoming generation of cellular networks is expected to support it. However, such support requires improvements in the existing technologies in terms of latency, reliability, energy efficiency, data rate, scalability, and capacity.
Originally, MTC was designed for low-bandwidth, high-latency applications such as environmental sensing and smart waste bins. Nowadays there is additional demand for applications with low-latency requirements. Alongside other well-known challenges for recent cellular networks, such as data rate, energy efficiency, and reliability, latency is not yet suitable for mission-critical applications such as real-time control of machines, autonomous driving, and the tactile Internet. Therefore, in the currently deployed cellular networks, there is a need to reduce latency and increase reliability to support use cases such as cooperative autonomous driving or factory automation, which are grouped under the denomination Ultra-Reliable Low-Latency Communication (URLLC).
This thesis is primarily concerned with the latency of the Universal Terrestrial Radio Access Network (UTRAN) of cellular networks. The overall work is divided into five parts. The first part presents the state of the art for cellular networks. The second part contains a detailed overview of URLLC use cases and the requirements that must be fulfilled by cellular networks to support them. The work in this thesis was done as part of a collaboration between the IRIMAS lab at Université de Haute-Alsace, France, and the Institute for Reliable Embedded Systems and Communication Electronics (ivESK) at Offenburg University of Applied Sciences, Germany; the selected URLLC use cases reflect the research interests of both partner institutes. The third part presents a detailed study and evaluation of user-plane and control-plane latency mechanisms in the current generation of cellular networks. The evaluation and analysis of these latencies, performed with the open-source ns-3 simulator, were conducted by exploring a broad range of parameters including, among others, traffic models, channel access parameters, realistic propagation models, and a broad set of cellular network protocol stack parameters. These simulations were performed with low-power, low-cost, wide-range devices, commonly called IoT devices, that are standardized for cellular networks. These devices use either LTE-M or Narrowband IoT (NB-IoT) technologies, which are designed for connected things and differ mainly in the provided bandwidth and other characteristics such as coding scheme and device complexity.
The fourth part of this thesis presents a study, an implementation, and an evaluation of latency reduction techniques that target the different layers of the currently used Long Term Evolution (LTE) network protocol stack. These techniques, based on Transmission Time Interval (TTI) reduction and Semi-Persistent Scheduling (SPS), are implemented in the ns-3 simulator and evaluated through realistic simulations of a variety of low-latency use cases focused on industrial automation and vehicular networking. Since ns-3 does not support NB-IoT in its current release, an NB-IoT extension of the LTE module was developed to test the proposed latency reduction techniques, making it possible to explore deployment limitations and issues.
In the last part of this thesis, a flexible deployment framework for the proposed latency reduction techniques, called Hybrid Scheduling and Flexible TTI, is presented, implemented, and evaluated through realistic simulations. The simulation evaluation shows that the improved LTE network proposed and implemented in the simulator can support low-latency applications with low-cost, long-range, and narrow-bandwidth devices. The work in this thesis points out potential improvement techniques and their deployment issues, and paves the way towards support for URLLC applications in upcoming cellular networks.
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, since fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low-Latency Communication (URLLC). It is nevertheless interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement, and evaluate latency reduction techniques for latency-critical applications. The two mechanisms we identified, namely the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay until received information is processed in both the control and user planes. We then analyze the potential of latency reduction techniques for URLLC applications. To this end, we implement these techniques in the Long Term Evolution (LTE) module of the ns-3 simulator and evaluate the performance of the proposed techniques in two different application fields: industrial automation and intelligent transportation systems. Our detailed simulation results indicate that LTE can satisfy the low-latency requirements for a large choice of use cases in each field.
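Why these two mechanisms help can be seen from a back-of-the-envelope uplink latency budget. The component values below are typical textbook figures for LTE chosen for illustration, not measurements or results from the paper.

```python
# Back-of-the-envelope user-plane latency model for an LTE uplink
# transmission, illustrating why a shorter Transmission Time Interval
# (TTI) and semi-persistent scheduling (SPS) reduce latency. The
# component values are illustrative textbook figures, not measurements.

def uplink_latency_ms(tti_ms=1.0, sps=False):
    grant_ms = 0.0 if sps else 8.0      # SPS removes the SR/grant cycle
    alignment_ms = tti_ms / 2           # average wait for the next TTI
    transmission_ms = tti_ms            # one TTI on air
    processing_ms = 1.5 + 1.5           # UE + eNB processing (approx.)
    return grant_ms + alignment_ms + transmission_ms + processing_ms

print(uplink_latency_ms())                        # legacy: 1 ms TTI, dynamic grant -> 12.5
print(uplink_latency_ms(tti_ms=0.143, sps=True))  # short TTI (one OFDM symbol) + SPS
```

The dominant term in the legacy case is the scheduling-request/grant cycle, which is exactly what SPS eliminates; shortening the TTI then shrinks the remaining alignment and transmission terms.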
Electronic door signs for displaying information have become widespread, particularly in public buildings. They range from tablet-based door signs to PC-based door signs with external displays. Most of these systems are powered at 230 V. With a large number of door signs in public buildings, this can lead to significant energy consumption. This paper presents the development of an energy-self-sufficient door sign based on an e-paper display. The sign can be configured via a smartphone app over an NFC interface. Particular attention is paid to the low-power hardware design of the electronics and to energy aspects.
When people with hearing loss are provided with different devices in each ear, these devices usually have different processing latencies. This leads to static temporal offsets between both ears on the order of several milliseconds. This thesis measured the effects of such offsets in stimulation timing on mechanisms of binaural hearing, such as sound localization and speech understanding in noise, in hearing-impaired and normal-hearing listeners.
Subjects using a cochlear implant (CI) in one ear and a hearing aid (HA) in the contralateral ear suffer from mismatches in stimulation timing due to the different processing latencies of the two devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for the device delay mismatch, and thereby for the auditory nerve stimulation mismatch, can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated whether this fitting parameter can be readily used in clinical settings and determined the effects of familiarization with a compensated device delay mismatch over a period of 3-4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without compensation of the device delay mismatch. The results showed that the sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error improved by 18%, although this improvement did not reach statistical significance. The effects were acute and did not improve further after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
Users of a cochlear implant (CI) in one ear who are provided with a hearing aid (HA) in the contralateral ear, so-called bimodal listeners, are typically affected by a constant and relatively large interaural time delay offset due to differences in signal processing and stimulation. For HA stimulation, the cochlear travelling wave delay is added to the processing delay, while for CI stimulation the auditory nerve fibers are stimulated directly. For MED-EL CI systems in combination with different HA types, the CI stimulation precedes the acoustic HA stimulation by 3 to 10 ms. A self-designed, battery-powered, portable, and programmable delay line was applied to the CI to reduce the device delay mismatch in nine bimodal listeners. We used an A-B-B-A test design and determined whether sound source localization improves when the device delay mismatch is reduced by delaying the CI stimulation by the HA processing delay (τHA). Results revealed that every subject in our group of nine bimodal listeners benefited from the approach. The root-mean-square error of sound localization improved significantly from 52.6° to 37.9°. The signed bias also improved significantly, from 25.2° to 10.5°, with positive values indicating a bias toward the CI. Furthermore, two other delay values (τHA − 1 ms and τHA + 1 ms) were applied, and with the latter value the signed bias was further reduced in some test subjects. We conclude that sound source localization accuracy in bimodal listeners improves instantaneously and sustainably when the device delay mismatch is reduced.
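The two localization metrics reported here, root-mean-square error and signed bias, can be computed straightforwardly from per-trial target and response azimuths. The example numbers below are invented for illustration and are not data from the study.

```python
# Computing the two reported localization metrics from per-trial data:
# RMS error and signed bias between response and target azimuth
# (positive values = bias toward one side, e.g. the CI side).
# Example azimuths are made up for illustration.

import math

def rms_error(targets, responses):
    return math.sqrt(sum((r - t) ** 2 for t, r in zip(targets, responses))
                     / len(targets))

def signed_bias(targets, responses):
    return sum(r - t for t, r in zip(targets, responses)) / len(targets)

targets   = [-60, -30, 0, 30, 60]   # loudspeaker azimuths in degrees
responses = [-20, -10, 15, 45, 70]  # listener responses in degrees
print(round(rms_error(targets, responses), 1))   # -> 22.6
print(round(signed_bias(targets, responses), 1)) # -> 20.0
```

Note that the signed bias can shrink toward 0° while the RMS error stays large, which is why the study reports both.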
In asymmetric treatment of hearing loss, the processing latencies of the two modalities typically differ. This often alters the reference interaural time difference (ITD) (i.e., the ITD at 0° azimuth) by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one ear and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), was explored in normal-hearing subjects. Speech reception thresholds (SRTs) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25 and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separating target and masker were manipulated to measure the effect of a reference ITD on unmasking by (A) ITDs and interaural level differences (ILDs), (B) ITDs only, and (C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions (A) and (B) when the reference ITD was increased: in condition (A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, and in condition (B) from 5.5 dB to 1.1 dB. In condition (C) no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural processing latency differences should be considered in the asymmetric treatment of hearing loss.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder integration with IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or reducing blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey on the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes existing literature according to these layers, and provides a survey on the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
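The random neighbor selection (RNS) mentioned above can be sketched in a few lines. This is an illustrative stand-in, not any specific blockchain client's implementation; the peer names and the `k` parameter are invented for the example.

```python
# Sketch of random neighbor selection (RNS): each blockchain peer
# picks gossip partners uniformly at random from its known peers,
# ignoring topology, bandwidth, or proximity - which is why RNS can
# produce inefficient P2P topologies.

import random

def select_neighbors(peers, k, seed=None):
    """Pick k gossip partners uniformly at random, without replacement."""
    rng = random.Random(seed)
    return rng.sample(peers, k)

peers = [f"peer-{i}" for i in range(20)]
print(select_neighbors(peers, 3, seed=42))
```

Because the choice is uniform, nearby low-latency peers and distant high-latency peers are equally likely to be selected, which motivates the topology-aware alternatives surveyed in the paper.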
Supporting the COVID-19 response in Asia and the Pacific—The role of the Asian Development Bank.
(2020)
The COVID-19 pandemic has affected all countries of the Asia-Pacific region over the last few months, with far-reaching economic, health, and social consequences. To counter the impact, governments have accelerated their health spending and announced large macroeconomic stabilization and stimulus policy packages. As with past disasters and crises in the region, the Asian Development Bank has reacted with a number of targeted support interventions since the very early stages of the outbreak. In mid-April 2020, the Bank put forward a comprehensive COVID-19 response package totalling $20 billion to support its member countries, which rests on four pillars.
The last few months have proven that multilateral development banks like the Asian Development Bank have the ability to respond quickly and to mobilize significant resources for a global emergency like COVID-19. While this financial support is urgently needed at this point, attention will need to be paid to how debt sustainability for low- and middle-income countries can be ensured in the coming years. Given the unprecedented scale of and uncertainty around the COVID-19 pandemic, it may offer a window of opportunity to redesign the way development finance is coordinated and delivered. This also includes a chance to "build back better" and to focus on a sustainable, resilient, and green recovery.