COVID-19 is straining healthcare resources worldwide, with low- and middle-income countries particularly ill-equipped to mitigate the challenges it imposes, including the availability of sufficient infirmary/ICU hospital beds, ventilators, and medical supplies. Here, we use mathematical modelling to study the dynamics of COVID-19 in Bahia, a state in northeastern Brazil, accounting for the influence of asymptomatic/non-detected cases, hospitalizations, and mortality. The impact of policies on the transmission rate was also examined. Our results underscore the difficulty of maintaining a fully operational health infrastructure amidst the pandemic. Lowering the transmission rate is paramount to this objective, but current local efforts, which have produced a 36% decrease, remain insufficient to prevent systemic collapse at peak demand; periodic interventions could achieve this. Non-detected cases contribute to a ~55% increase in R0. Finally, we discuss our results in light of epidemiological data that became available after the initial analyses.
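The compartmental structure described can be sketched as a minimal SEIR-type model with separate symptomatic (detected) and asymptomatic (non-detected) infectious classes. The parameters below are illustrative placeholders, not the values fitted for Bahia, and the model omits the hospitalization and mortality compartments of the full study:

```python
def simulate(beta=0.25, sigma=1 / 5.1, gamma=0.1, p_sym=0.45, days=200, dt=0.1):
    """Forward-Euler integration of a toy SEIR variant; state variables are
    fractions of a normalized population. All parameter values are
    hypothetical, chosen only to illustrate the compartment structure."""
    S, E, Is, Ia, R = 1.0 - 1e-4, 0.0, 1e-4, 0.0, 0.0
    history = [(S, E, Is, Ia, R)]
    for _ in range(int(days / dt)):
        new_inf = beta * S * (Is + Ia)                # both classes transmit
        dS = -new_inf
        dE = new_inf - sigma * E
        dIs = p_sym * sigma * E - gamma * Is          # detected/symptomatic
        dIa = (1 - p_sym) * sigma * E - gamma * Ia    # non-detected
        dR = gamma * (Is + Ia)
        S, E, Is, Ia, R = (S + dS * dt, E + dE * dt, Is + dIs * dt,
                           Ia + dIa * dt, R + dR * dt)
        history.append((S, E, Is, Ia, R))
    return history
```

Because non-detected individuals (Ia) also transmit, removing them from the force-of-infection term understates R0, which is the mechanism behind the ~55% effect the abstract reports.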
Q-learning is an off-policy reinforcement learning technique whose main advantage is obtaining an optimal policy by interacting with an environment whose model is unknown. This paper proposes a parallel fixed-point Q-learning algorithm architecture implemented on a field-programmable gate array (FPGA), focusing on optimizing the system's processing time. Convergence results are presented, and the processing time and occupied area are analyzed for scenarios with different numbers of states and actions and for various fixed-point formats. Studies of the accuracy of the Q-learning response and of the resolution error associated with reducing the number of bits were also carried out for the hardware implementation. The architecture's implementation details are described. The entire project was developed using the Xilinx System Generator platform, with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA.
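A software sketch of the tabular algorithm being accelerated may help. The toy 5-state chain environment, the 8-bit fractional fixed-point format, and all hyperparameters below are assumptions for illustration, not the paper's configuration. Note the off-policy character: the agent behaves randomly while still learning the values of the greedy policy, and quantizing each update mimics the resolution error of a fixed-point datapath:

```python
import random

def to_fixed(x, frac_bits=8):
    """Quantize to a fixed-point grid with `frac_bits` fractional bits
    (an assumed format, standing in for the formats studied in hardware)."""
    return round(x * (1 << frac_bits)) / (1 << frac_bits)

def q_learning(n_states=5, n_actions=2, episodes=1000, alpha=0.1, gamma=0.9, seed=0):
    """Toy chain: action 1 moves right, action 0 moves left; reaching the
    last state yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.randrange(n_actions)  # random behaviour policy (off-policy)
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update, bootstrapping from the greedy value of s2;
            # the result is quantized to mimic a fixed-point datapath.
            Q[s][a] = to_fixed(Q[s][a] + alpha * (r + gamma * max(Q[s2]) - Q[s][a]))
            s = s2
    return Q
```

Shrinking `frac_bits` coarsens the grid until small temporal-difference corrections round to zero, which is one way the resolution error studied in the paper can manifest.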
In this paper we present the concept of Emergeables: mobile surfaces that can deform or 'morph' to provide fully-actuated, tangible controls. Our goal in this work is to combine the flexibility of graphical touchscreens with the affordance and tactile benefits offered by physical widgets. In contrast to previous research in the area of deformable displays, our work focuses on continuous controls (e.g., dials or sliders) and strives for fully-dynamic positioning, providing versatile widgets that can change shape and location depending on the user's needs. We describe the design and implementation of two prototype emergeables built to demonstrate the concept, and present an in-depth evaluation that compares both with a touchscreen alternative. The results show the strong potential of emergeables for on-demand, eyes-free control of continuous parameters, particularly when comparing the accuracy and usability of a high-resolution emergeable to a standard GUI approach. We conclude with a discussion of the level of resolution necessary for future emergeables, and suggest how high-resolution versions might be achieved.
Deep learning techniques have gained prominence in research in recent years; however, deep learning algorithms have a high computational cost, making them hard to apply in many commercial applications. Alternatives have therefore been studied, and methodologies for accelerating complex algorithms, including those based on reconfigurable hardware, have shown significant results. The objective of this paper is thus to propose a neural network hardware implementation for deep learning applications. The implementation was developed on a field-programmable gate array (FPGA) and supports deep neural networks (DNNs) trained with the stacked sparse autoencoder (SSAE) technique. To allow DNNs with several inputs and layers on the FPGA, the systolic array technique was used throughout the architecture. Details of the design are presented, as well as the hardware area occupation and processing time for two different implementations. The results show that both implementations achieve high throughput, enabling deep learning techniques to be applied to problems with large amounts of data.
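The inference path such an accelerator computes is, per layer, a matrix-vector multiply-accumulate followed by an activation: the same MAC pattern a systolic array streams through its processing elements. A minimal software sketch of that path, with hypothetical weights and a sigmoid activation assumed (the abstract does not specify the activation function):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dnn_forward(x, layers):
    """layers: list of (W, b), where W is a list of weight rows and b a list
    of biases. Each layer performs one multiply-accumulate per (neuron,
    input) pair -- the operation a systolic array pipelines in hardware."""
    for W, b in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x
```

In the SSAE setting, `layers` would hold the encoder weights learned layer-by-layer during pretraining; here they are placeholders.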
Genetic algorithms (GAs) are used to solve search and optimization problems in which an optimal solution can be found through an iterative process with probabilistic, non-deterministic transitions. However, depending on the nature of the problem, the time required to find a solution on sequential machines can be high due to the computational complexity of genetic algorithms. This work proposes a parallel implementation of a genetic algorithm on a field-programmable gate array (FPGA). Optimizing the system's processing time is the main goal of this project. Results for processing time and area occupancy (on the FPGA) are analyzed for various population sizes. Studies of the accuracy of the GA response when optimizing two-variable functions were also carried out for the hardware implementation. Moreover, the high-performance implementation proposed in this paper can handle more variables after some adjustments to the hardware architecture.
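As a software analogue of the evolutionary loop the hardware parallelizes, here is a minimal GA sketch for a two-variable function. The operator choices (binary tournament selection, blend crossover, Gaussian mutation, elitism) and the sphere test function are assumptions for illustration, not the paper's configuration:

```python
import random

def ga_minimize(pop_size=40, generations=100, mut_rate=0.2, seed=1):
    """Minimize a two-variable test function f(x, y) = x^2 + y^2."""
    rng = random.Random(seed)
    f = lambda ind: ind[0] ** 2 + ind[1] ** 2
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=f)

        def select():                      # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if f(a) < f(b) else b

        children = [elite[:]]              # elitism: keep the best individual
        while len(children) < pop_size:
            p1, p2 = select(), select()
            w = rng.random()               # blend (arithmetic) crossover
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            if rng.random() < mut_rate:    # Gaussian mutation on one gene
                child[rng.randrange(2)] += rng.gauss(0, 0.3)
            children.append(child)
        pop = children
    return min(pop, key=f)
```

Fitness evaluation, selection, crossover, and mutation are independent per individual, which is what makes the population loop a natural candidate for the parallel FPGA pipeline described above.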