In-memory databases (IMDBs) have been the backbone of modern systems that demand high throughput and low latency. Because of the cost and volatility of DRAM, however, IMDBs fall short on workloads that require large data volumes and strict durability. The emergence of non-volatile memory (NVM) brings new opportunities for IMDBs to tackle this situation. However, building an NVM-based IMDB is non-trivial due to performance degradation, NVM programming complexity, and other challenges. In this paper, we present Tair-PMem, an NVM-based enterprise-strength database built atop Redis, the most popular IMDB. Tair-PMem adopts a well-controlled data layout and a log-as-user-data design to mitigate NVM overheads. It eases NVM programming complexity by providing a hybrid memory programming toolkit. To better leverage the enterprise-strength features and implementations of Redis, Tair-PMem retrofits Redis in a minimally intrusive way to achieve full compatibility and stability while retaining its advanced features. With all of the above techniques carefully implemented, Tair-PMem delivers full durability, high throughput, and low latency at the same time. Tair-PMem is now publicly available as a cloud service on Alibaba Cloud. To the best of our knowledge, Tair-PMem is the first cloud service that makes good use of the persistence capability of NVM.
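To make the log-as-user-data idea concrete, here is a minimal sketch of how an append to a persistent log on NVM could double as the stored object itself, so no second copy of the value is kept in DRAM. It assumes PMDK's libpmem for flushing; the record layout and the log_append helper are illustrative assumptions, not Tair-PMem's actual design.

```c
/* Minimal sketch of a "log-as-user-data" append on NVM, assuming PMDK's
 * libpmem. The struct log_entry layout and log_append() are hypothetical:
 * they only illustrate that the durable log record itself can serve as the
 * object the index points to. */
#include <libpmem.h>
#include <stdint.h>
#include <string.h>

struct log_entry {            /* hypothetical on-NVM record layout */
    uint32_t key_len;
    uint32_t val_len;
    uint8_t  committed;       /* set last, after the payload is durable */
    char     payload[];       /* key bytes followed by value bytes */
};

static char  *log_base;       /* NVM region mapped with pmem_map_file() */
static size_t log_off;        /* current append offset */

struct log_entry *log_append(const char *key, uint32_t klen,
                             const char *val, uint32_t vlen)
{
    struct log_entry *e = (struct log_entry *)(log_base + log_off);
    e->key_len = klen;
    e->val_len = vlen;
    e->committed = 0;
    memcpy(e->payload, key, klen);
    memcpy(e->payload + klen, val, vlen);
    pmem_persist(e, sizeof(*e) + klen + vlen);   /* flush record to NVM */
    e->committed = 1;
    pmem_persist(&e->committed, 1);              /* then publish it */
    log_off += sizeof(*e) + klen + vlen;
    return e;   /* the in-memory index can reference this entry directly */
}
```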
This paper proposes a mechanical parameter identification approach based on an integrated observer framework for the adaptive control of permanent magnet synchronous motors (PMSMs). First, an integrated observer framework consisting of an extended sliding mode observer (ESMO) and a Luenberger observer is established for mechanical parameter estimation. To minimize the influence of parameter coupling, the viscous friction coefficient and the moment of inertia are estimated by the ESMO, while the load torque is identified separately by the Luenberger observer. After the mechanical parameter estimates are obtained, the optimal proportional-integral (PI) parameters of the speed loop are determined according to the third-order optimum design method. As a result, the controller adjusts the PI parameters in real time as the mechanical parameters change, realizing adaptive control of the system. Meanwhile, the load disturbance is compensated based on the estimates. Finally, experiments were carried out on a simulation platform, and the results validate the reliability of the parameter identification and the effectiveness of the proposed adaptive control strategy.
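To illustrate the load-torque branch of such an observer framework, below is a minimal sketch of a standard discrete-time Luenberger load-torque observer built from the mechanical equation J*dω/dt = Te − TL − B*ω under the usual slowly-varying-load assumption. The gains, Euler discretization, and naming are illustrative choices, not the paper's exact design.

```c
/* Minimal sketch of a discrete-time Luenberger load-torque observer for a
 * PMSM. It assumes the mechanical model J*dw/dt = Te - TL - B*w with TL
 * treated as slowly varying. Gains l1, l2 are tuning choices; with this
 * sign convention, positive l1 and l2 give stable estimation-error dynamics. */
typedef struct {
    double J;        /* moment of inertia (kg*m^2), e.g. from the ESMO     */
    double B;        /* viscous friction coefficient (N*m*s/rad)           */
    double l1, l2;   /* observer gains (illustrative tuning parameters)    */
    double w_hat;    /* estimated mechanical speed (rad/s)                 */
    double TL_hat;   /* estimated load torque (N*m)                        */
} lo_observer;

/* One observer step: Te is the electromagnetic torque, w the measured
 * speed, dt the sampling period. */
void lo_step(lo_observer *o, double Te, double w, double dt)
{
    double err    = w - o->w_hat;                       /* speed error   */
    double w_dot  = (Te - o->TL_hat - o->B * o->w_hat) / o->J
                    + o->l1 * err;                      /* speed dynamics */
    double TL_dot = -o->l2 * err;                       /* load estimate  */
    o->w_hat  += dt * w_dot;                            /* Euler update   */
    o->TL_hat += dt * TL_dot;
}
```

The estimated TL_hat can then be fed forward as the disturbance compensation term, while J and B feed the speed-loop PI retuning.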
In-memory key-value stores (IMKVSes) serve many online applications because of their efficiency. To support data backup, popular industrial IMKVSes periodically take a point-in-time snapshot of the in-memory data with the system call fork. However, this mechanism can cause latency spikes for queries arriving during the snapshot period, because fork traps the engine into kernel mode, where it is out of service for queries. In contrast to existing research that focuses on optimizing snapshot algorithms, we optimize the fork operation itself to address the latency-spike problem at the operating system (OS) level, while keeping the data persistence mechanism in IMKVSes unchanged. Specifically, we first conduct an in-depth study to reveal the impact of the fork operation, as well as of existing optimization techniques, on query latency. Based on the findings of this study, we propose Async-fork, which offloads the work of copying the page table from the engine (the parent process) to the child process, since copying the page table dominates the execution time of fork. To keep data consistent between the parent and the child, we design a proactive synchronization strategy. Async-fork is implemented in the Linux kernel and deployed in the online Redis database in public clouds. Our experimental results show that, compared with the default fork in the OS, Async-fork reduces the tail latency of queries arriving during the snapshot period by 81.76% on an 8 GB instance and 99.84% on a 64 GB instance.
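For context, here is a minimal user-space sketch of the fork-based snapshot pattern that Async-fork targets: the engine forks, the child serializes its copy-on-write view of the data, and the parent keeps serving queries. The save_snapshot serializer is a hypothetical stand-in, and the kernel-level page-table offloading itself is not shown.

```c
/* Minimal user-space sketch of a fork-based snapshot in an IMKVS.
 * save_snapshot() is a hypothetical stand-in for the engine's serializer,
 * not Redis code. */
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in: dump the key-value data to a file. */
static void save_snapshot(const char *path)
{
    FILE *f = fopen(path, "w");
    if (f) {
        fputs("point-in-time image of the key-value data\n", f);
        fclose(f);
    }
}

int take_snapshot(const char *path)
{
    pid_t pid = fork();     /* the parent is blocked here while its page
                             * table is copied; this copy is what dominates
                             * fork time and what Async-fork moves off the
                             * parent's critical path */
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {         /* child: private copy-on-write view of the data */
        save_snapshot(path);
        _exit(0);
    }
    /* parent: return immediately and keep serving queries; the child is
     * reaped elsewhere (e.g., via SIGCHLD or a non-blocking waitpid). */
    return 0;
}
```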