Introduction. Streptococcus pneumoniae is an important pathogen with high morbidity and mortality rates. The aim of this study was to evaluate the distribution of common serotypes and the antimicrobial susceptibility of S. pneumoniae in Korea. Methods. A total of 378 pneumococcal isolates were collected from 2008 through 2014. We analyzed the serotype and antimicrobial susceptibility of both invasive and noninvasive isolates. Results. Over the 7 years, the most common serotypes/serogroups were 3 (13.5%), 35 (10.8%), 19A (9.0%), 19F (6.6%), 6A (6.1%), and 34 (5.6%). The vaccine coverage rates of PCV7, PCV10, PCV13, and PPSV23 over the entire period were 21.4%, 23.3%, 51.9%, and 62.4%, respectively. Between 2008–2010 and 2011–2014, the proportions of serotypes 19A and 19F decreased while those of nonvaccine serotypes increased. Of the 378 S. pneumoniae isolates, 131 (34.7%) were multidrug resistant (MDR), with serotypes 19A and 19F predominating among MDR isolates. The resistance rate to levofloxacin increased significantly (to 7.2%). Conclusion. We found changes in pneumococcal serotype distribution and antimicrobial susceptibility during the 7 years after the introduction of the first pneumococcal vaccine. It is important to continuously monitor pneumococcal serotypes and their susceptibilities.
Data deduplication has been widely adopted in contemporary backup storage systems. It not only saves storage space considerably but also shortens data backup time significantly. Since the major goal of the original data deduplication lies in saving storage space, its design has focused primarily on improving write performance by removing as much duplicate data as possible from incoming data streams. Although fast recovery from a system crash relies mainly on the read performance provided by deduplication storage, little investigation into read performance improvement has been made. In general, as the amount of deduplicated data increases, write performance improves accordingly, whereas the associated read performance becomes worse. In this paper, we propose a new deduplication scheme that assures the demanded read performance of each data stream while keeping its write performance at a reasonable level, and is thus able to guarantee a target system recovery time. For this, we first propose an indicator called the cache-aware Chunk Fragmentation Level (CFL), which estimates degraded read performance on the fly by taking into account both incoming chunk information and read cache effects. We also show a strong correlation between the CFL and read performance in backup datasets. In order to guarantee demanded read performance expressed in terms of a CFL value, we propose a read performance enhancement scheme called selective duplication, which is activated whenever the current CFL becomes worse than the demanded one. The key idea is to judiciously write non-unique (shared) chunks into storage together with unique chunks unless the shared chunks exhibit good enough spatial locality; we quantify this spatial locality using a selective duplication threshold value. Our experiments with actual backup datasets demonstrate that the proposed scheme achieves the demanded read performance in most cases at a reasonable cost in write performance.
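To make the decision rule concrete, the following is a minimal Python sketch of how a selective-duplication check of this kind might look. It is an illustration under stated assumptions, not the authors' implementation: the Chunk and Store classes, the write_stream function, and the cfl_demanded and locality_threshold parameters are hypothetical names introduced here, and estimate_cfl is a placeholder standing in for the paper's cache-aware CFL estimator.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Chunk:
    fingerprint: str
    data: bytes
    spatial_locality: float = 0.0  # assumed per-chunk locality measure (illustrative)

@dataclass
class Store:
    containers: List[bytes] = field(default_factory=list)
    index: Dict[str, int] = field(default_factory=dict)  # fingerprint -> container position

    def is_shared(self, chunk: Chunk) -> bool:
        # Chunk is "shared" if an identical copy already exists in the store.
        return chunk.fingerprint in self.index

    def append(self, chunk: Chunk) -> None:
        # Physically write the chunk and remember where it landed.
        self.containers.append(chunk.data)
        self.index[chunk.fingerprint] = len(self.containers) - 1

    def add_reference(self, chunk: Chunk) -> None:
        # Ordinary deduplication: record a pointer to the existing copy only.
        pass

    def estimate_cfl(self) -> float:
        # Stub: the paper computes this on the fly from incoming chunk
        # information and read cache effects; here it is a fixed placeholder.
        return 1.0

def write_stream(chunks: List[Chunk], store: Store,
                 cfl_demanded: float = 0.6,
                 locality_threshold: float = 0.5) -> None:
    for chunk in chunks:
        if not store.is_shared(chunk):
            store.append(chunk)            # unique chunks are always written
        elif (store.estimate_cfl() < cfl_demanded
              and chunk.spatial_locality < locality_threshold):
            store.append(chunk)            # selective duplication: rewrite the shared chunk
        else:
            store.add_reference(chunk)     # shared chunk with good locality: dedup as usual

The sketch only fixes the shape of the check: shared chunks are rewritten next to the stream's unique chunks when the current CFL has dropped below the demanded value and the chunk's spatial locality is below the selective duplication threshold; otherwise normal deduplication applies.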