Modeling of Resistive RAMs (RRAMs) is a herculean task due to their non-linearity. While the exigent need for a model has motivated research groups to formulate realistic models, the diversity in RRAMs' characteristics has created a gap between model developers and model users. This paper bridges the gap by proposing an algorithm by which the parameters of a model are tuned to specific RRAMs. To this end, a physics-based compact model was chosen for its flexibility, and the proposed algorithm was used to fit the model exactly to different RRAMs, which differed greatly in their material composition and switching behavior. Further, the model was extended to simulate multiple Low Resistance States (LRS), a vital focus of research for increasing memory density in RRAMs. The ability of the model to simulate the switching from a high resistance state to multiple LRS was verified by measurements on 1T-1R cells.
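The abstract does not spell out the fitting procedure, so the following is only a rough sketch of the general idea of tuning compact-model parameters to measured device data: two parameters of a placeholder I-V expression, I = a*sinh(b*V) (a form commonly used in RRAM compact models, but not necessarily the paper's model), are fitted to hypothetical measurements by a coarse grid search that minimizes the squared error. The model form, parameter ranges, and data values are all assumptions made for illustration.

```cpp
// Sketch only: tune parameters (a, b) of an assumed placeholder I-V expression
// I = a * sinh(b * V) to measured (V, I) samples by minimizing squared error.
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double v; double i; };  // one measured (voltage, current) sample

// Squared-error cost of parameters (a, b) against the measurements.
double cost(double a, double b, const std::vector<Point>& data) {
    double s = 0.0;
    for (const auto& p : data) {
        double err = a * std::sinh(b * p.v) - p.i;
        s += err * err;
    }
    return s;
}

int main() {
    // Hypothetical measured I-V samples of one device (placeholder values).
    std::vector<Point> data = {{0.1, 2.1e-6}, {0.2, 4.5e-6}, {0.3, 7.4e-6}, {0.4, 1.1e-5}};

    double bestA = 0.0, bestB = 0.0, bestCost = 1e300;
    // Coarse grid search over an assumed parameter range; a real fitter would
    // refine this with a proper optimizer, but the principle is the same:
    // pick the parameter set that best reproduces the measured curve.
    for (double a = 1e-6; a <= 5e-5; a += 1e-6) {
        for (double b = 0.5; b <= 5.0; b += 0.05) {
            double c = cost(a, b, data);
            if (c < bestCost) { bestCost = c; bestA = a; bestB = b; }
        }
    }
    std::printf("fitted a = %g, b = %g (cost %g)\n", bestA, bestB, bestCost);
    return 0;
}
```

A practical fitter would fit the full switching dynamics of the compact model rather than a single static curve, but the principle of minimizing the model-versus-measurement error per device carries over.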
Evolutionary algorithms are one of the most popular forms of optimization algorithms. They are comparatively easy to use and have been successfully employed for a wide variety of practical applications. However, it is frequently necessary to execute them in parallel in order to reduce the runtime. There are a number of different approaches to the parallelization of evolutionary algorithms, and various hardware platforms can be used for the parallel execution. However, not every platform is equally suitable for every kind of parallelization of evolutionary algorithms. In addition, which platform is best suited for the execution also depends on the properties of the concrete optimization problem to be solved and on the evolutionary algorithm used. The present work examines this in detail for two common forms of parallelization of evolutionary algorithms, the island model and global parallelization, and for four widely used parallel computing platforms: multi-core CPUs, clusters, graphics cards, and grids. Based on empirical and analytical investigations, it is determined under which circumstances one architecture is better suited than another for executing a parallel evolutionary algorithm (and vice versa). Guidelines are derived that support users of parallel evolutionary algorithms in choosing an appropriate platform.
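As an illustration of one of the two parallelization forms discussed, the following is a minimal island-model sketch: several subpopulations evolve independently, one thread per island, and periodically exchange their best individuals along a ring. The toy objective (sphere function), the mutation-only operators, and all parameter settings are assumptions chosen for brevity, not details from this work.

```cpp
// Minimal island-model sketch: subpopulations evolve independently in parallel
// (one thread per island) and periodically migrate their best individuals
// to the next island in a ring topology.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <random>
#include <thread>
#include <vector>

using Individual = std::vector<double>;

// Toy objective: sphere function (minimize the sum of squares).
double fitness(const Individual& x) {
    double s = 0.0;
    for (double v : x) s += v * v;
    return s;
}

// Evolve one island for a fixed number of generations with mutation-only
// (1+1)-style selection per individual; a full EA would also use crossover.
void evolveIsland(std::vector<Individual>& pop, int generations, unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> mut(0.0, 0.1);
    for (int g = 0; g < generations; ++g) {
        for (auto& ind : pop) {
            Individual child = ind;
            for (double& v : child) v += mut(rng);
            if (fitness(child) < fitness(ind)) ind = child;  // keep the better one
        }
    }
}

int main() {
    const int islands = 4, popSize = 20, dim = 10, epochs = 10, gensPerEpoch = 50;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(-5.0, 5.0);

    // Initialize one random subpopulation per island.
    std::vector<std::vector<Individual>> pops(islands,
        std::vector<Individual>(popSize, Individual(dim)));
    for (auto& pop : pops)
        for (auto& ind : pop)
            for (double& v : ind) v = init(rng);

    for (int e = 0; e < epochs; ++e) {
        // Parallel phase: each island evolves independently in its own thread.
        std::vector<std::thread> threads;
        for (int i = 0; i < islands; ++i)
            threads.emplace_back(evolveIsland, std::ref(pops[i]), gensPerEpoch, 1000u * e + i);
        for (auto& t : threads) t.join();

        // Migration phase: each island's best individual replaces the worst
        // individual of its ring neighbor.
        for (int i = 0; i < islands; ++i) {
            auto byFitness = [](const Individual& a, const Individual& b) {
                return fitness(a) < fitness(b);
            };
            Individual best = *std::min_element(pops[i].begin(), pops[i].end(), byFitness);
            auto& next = pops[(i + 1) % islands];
            *std::max_element(next.begin(), next.end(), byFitness) = best;
        }
    }

    // Report the best solution found across all islands.
    double best = 1e300;
    for (const auto& pop : pops)
        for (const auto& ind : pop) best = std::min(best, fitness(ind));
    std::printf("best fitness: %g\n", best);
    return 0;
}
```

Global parallelization, by contrast, keeps a single population and parallelizes only the fitness evaluations, which is one reason the two schemes favor different hardware platforms.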
The significant increase in complexity of Exascale platforms due to energy-constrained, billion-way parallelism, with major changes to processor and memory architecture, requires new energy-efficient and resilient programming techniques that are portable across multiple future generations of machines. We believe that guaranteeing adequate scalability, programmability, performance portability, resilience, and energy efficiency requires a fundamentally new approach, combined with a transition path for existing scientific applications, to fully explore the rewards of today's and tomorrow's systems. We present HPX, a parallel runtime system which extends the C++11/14 standard to facilitate distributed operations, enable fine-grained, constraint-based parallelism, and support runtime-adaptive resource management. This provides a widely accepted API enabling programmability, composability, and performance portability of user applications. By employing a global address space, we seamlessly augment the standard to apply to the distributed case. We present HPX's architecture, design decisions, and results selected from a diverse set of application runs showing superior performance, scalability, and efficiency over conventional practice.
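Since HPX deliberately follows the C++ standard's async/future interface, exposing the same facilities under the hpx:: namespace and extending them to distributed operation, the following sketch uses only plain std::async and std::future (not HPX itself) to illustrate the futures-based, fine-grained programming style the abstract refers to. The chunking scheme and workload are illustrative assumptions.

```cpp
// Sketch of futures-based, fine-grained task parallelism in the std::-conforming
// style that HPX builds on. This compiles with any C++11 compiler; HPX provides
// the same interface in the hpx:: namespace and extends it to distributed runs.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <future>
#include <vector>

// One small unit of work: sum a chunk of the input.
double partial_sum(const std::vector<double>& data, std::size_t begin, std::size_t end) {
    double s = 0.0;
    for (std::size_t i = begin; i < end; ++i) s += data[i];
    return s;
}

int main() {
    std::vector<double> data(1000000, 1.0);
    const std::size_t chunks = 16;
    const std::size_t chunk_size = data.size() / chunks;

    // Launch one asynchronous task per chunk; each call returns a future
    // immediately, so the computation is expressed as a set of dependencies
    // rather than explicitly managed threads.
    std::vector<std::future<double>> parts;
    for (std::size_t c = 0; c < chunks; ++c)
        parts.push_back(std::async(std::launch::async, partial_sum, std::cref(data),
                                   c * chunk_size, (c + 1) * chunk_size));

    // Synchronize only where the results are actually needed.
    double total = 0.0;
    for (auto& f : parts) total += f.get();
    std::printf("total = %f\n", total);
    return 0;
}
```

In HPX the same pattern applies, but the tasks are lightweight user-level threads scheduled by the runtime and, via the global address space, may execute on remote nodes.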