The FHEW cryptosystem introduced the idea that an arbitrary function can be evaluated within the bootstrap procedure as a table lookup. The faster bootstraps of TFHE strengthened this approach, which was later named Functional Bootstrap (Boura et al., CSCML'19). Since then, however, little effort has been made toward defining efficient ways of using it to implement functions with high precision. In this paper, we introduce two methods that combine multiple functional bootstraps to accelerate the evaluation of reasonably large lookup tables and highly precise functions. We thoroughly analyze and experimentally validate the error propagation in both methods, as well as in the functional bootstrap itself. We leverage the multi-value bootstrap of Carpov et al. (CT-RSA'19) to accelerate (single) lookup table evaluation, and we improve it by lowering its error variance growth from quadratic to linear in the output base. Compared to previous literature using TFHE's functional bootstrap, our methods are up to 2.49 times faster than the lookup table evaluation of Carpov et al. (CT-RSA'19) and up to 3.19 times faster than the 32-bit integer comparison of Bourse et al. (CT-RSA'20). Compared to works using logic gates, we achieve speedups of up to 6.98, 8.74, and 3.55 times over 8-bit implementations of ReLU, addition, and maximum, respectively.
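The lookup mechanism itself can be illustrated without any cryptography. The sketch below is a minimal cleartext model with invented toy parameters (N, B); every cryptographic ingredient (LWE encryption, noise, blind-rotation keys) is stripped away, and only the lookup-by-rotation mechanism over the negacyclic ring Z[X]/(X^N + 1) remains:

```python
import numpy as np

# Minimal cleartext model of the functional bootstrap as a table lookup.
# All parameters are invented toys; the real scheme performs this
# rotation "blindly" on encrypted data.

N, B = 1024, 16                       # ring dimension, plaintext base

def negacyclic_rotate(poly, j):
    """Multiply poly by X^(-j) in Z[X]/(X^N + 1): coefficients shift down
    by j and pick up a minus sign when they wrap around."""
    j %= 2 * N
    sign = 1
    if j >= N:                        # X^(-N) = -1 in this ring
        j, sign = j - N, -1
    return sign * np.concatenate([poly[j:], -poly[:j]])

def make_test_vector(f):
    """Pack f into the test polynomial: each input x owns a block of N/B
    consecutive coefficients set to f(x), which is what gives the real
    bootstrap its tolerance to ciphertext noise."""
    tv = np.empty(N, dtype=np.int64)
    block = N // B
    for x in range(B):
        tv[x * block:(x + 1) * block] = f(x)
    return tv

def lookup(f, x):
    """One (noiseless) functional bootstrap: rotate by the phase encoding
    x, then read the constant coefficient. Mapping x into [0, N), i.e.
    only half of the 2N rotation positions, sidesteps the negacyclic
    restriction that otherwise forces f(x + B/2) = -f(x)."""
    phase = x * (N // B)
    return negacyclic_rotate(make_test_vector(f), phase)[0]

f = lambda x: (x * x + 3) % B         # an arbitrary lookup table on Z_B
assert all(lookup(f, x) == f(x) for x in range(B))
```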
Summary. This paper presents a new, enhanced version of the QcBits key encapsulation mechanism, which is a constant-time implementation of the Niederreiter cryptosystem using QC-MDPC codes. In this version, we updated the implementation parameters to meet the 128-bit quantum security level, replaced some of the core algorithms to avoid using slower instructions, vectorized the entire code using the AVX-512 instruction set extension, and applied several other techniques to achieve a competitive performance level. Our implementation takes 928, 259, and 5008 thousand Skylake cycles to perform batch key generation (cost per key), encryption, and uniform decryption, respectively. Compared with the current state-of-the-art implementation for QC-MDPC codes, BIKE, our code is 1.9 times faster when decrypting messages.
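At the heart of QC-MDPC decryption sits a bit-flipping decoder. The sketch below shows its structure with tiny invented parameters (r, d, t) in plain Python; QcBits-level parameters use circulant blocks tens of thousands of bits wide, and the constant-time, AVX-512-vectorized execution that is the paper's actual contribution is exactly what this didactic version omits:

```python
import numpy as np

# Toy bit-flipping decoder for a QC-MDPC code: compute the syndrome,
# count unsatisfied parity checks per position, flip the most suspicious
# bits, and repeat until the syndrome vanishes.

rng = np.random.default_rng(7)
r, d, t = 101, 9, 2            # circulant size, block row weight, error weight

def circulant(first_row):
    # Dense circulant block built from its first row; the real code never
    # materializes it, exploiting the quasi-cyclic structure instead.
    return np.stack([np.roll(first_row, i) for i in range(len(first_row))])

# Sparse parity-check matrix H = [H0 | H1] with circulant blocks of weight d.
h0 = np.zeros(r, dtype=np.int64); h0[rng.choice(r, d, replace=False)] = 1
h1 = np.zeros(r, dtype=np.int64); h1[rng.choice(r, d, replace=False)] = 1
H = np.concatenate([circulant(h0), circulant(h1)], axis=1)

# Error vector of weight t; decryption must recover it from its syndrome.
e = np.zeros(2 * r, dtype=np.int64)
e[rng.choice(2 * r, t, replace=False)] = 1

def bit_flip(H, syndrome, max_iters=50):
    guess = np.zeros(H.shape[1], dtype=np.int64)
    s = syndrome.copy()
    for _ in range(max_iters):
        if not s.any():                     # syndrome zero: decoding done
            break
        upc = H.T @ s                       # unsatisfied parity checks per bit
        guess ^= (upc == upc.max()).astype(np.int64)  # flip the worst bits
        s = (syndrome + H @ guess) % 2
    return guess

assert np.array_equal(bit_flip(H, H @ e % 2), e)
```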
Summary. Most applications in the seismology field rely on the processing of up to hundreds of terabytes of data, and their performance is strongly affected by IO operations. In this article, we analyze the main file structures currently used to store seismic data and propose a new intermediate data structure that improves IO performance while still complying with established standards. We show that, throughout a common workflow in seismic data analysis, the IO performance gain greatly surpasses the overhead of translating data to the intermediate structure. This approach enables a speedup of up to 208 times in reading time compared with classical standards (e.g., SEG-Y), and our intermediate structure is up to 1.8 times more efficient than modern formats (e.g., ASDF). For cache-friendly applications, our speedups over the direct use of SEG-Y reach 8000 times. We also performed a cost analysis on the AWS cloud showing that, in our approach, HDDs can be 1.25 times more cost-effective than SSDs.
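The gain comes from aligning the on-disk layout with the analysis access pattern. Below is a minimal sketch of the "translate once, read fast many times" idea, with invented survey sizes and in-memory numpy arrays standing in for files; the paper's actual intermediate structure is richer than the transposed copy shown here:

```python
import numpy as np

# Data arriving trace-sequential in a SEG-Y-like order is translated once
# into a query-aligned layout, after which each crossline section read is
# a single contiguous slice instead of a strided gather.

IL, XL, NS = 100, 120, 500                             # toy survey dimensions
segy_like = np.arange(IL * XL * NS, dtype=np.float32)  # traces in (IL, XL) order

# One-time translation: this copy is the overhead that the measured IO
# savings quickly amortize.
intermediate = np.ascontiguousarray(
    segy_like.reshape(IL, XL, NS).transpose(1, 0, 2)
)

xl = 42
section_strided = segy_like.reshape(IL, XL, NS)[:, xl, :]  # scattered gather
section_contig = intermediate[xl]                          # one contiguous read
assert np.array_equal(section_strided, section_contig)
```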
Applications in the seismology field rely on the processing of up to hundreds of terabytes of data, and their performance may be strongly affected by IO operations. In this paper, we generalize the main file structures currently used to store seismic data and evaluate their performance. We present a theoretical analysis of data loading operations and a benchmark on the AWS public cloud using three different storage technologies (HDD, SSD, and EFS). We show that an adequate choice of file structure for a typical use case enables a reduction of up to 193 times in the amount of data read and a speedup of up to 139 times in loading time. Our results also indicate that using more expensive cloud instances has a negligible effect on the performance of network storage, despite their enhanced network transmission capacity.

Resumo. Computational applications in seismology process data up to the order of hundreds of terabytes, and their performance can be strongly affected by read and write operations. This article generalizes the main file structures for storing seismic data and evaluates their performance. We present a theoretical analysis of loading data into memory and a performance analysis on the public cloud using different storage technologies (HDD, SSD, and EFS). From these, we find that an adequate choice of file structure for a typical use case enables a reduction of up to 193 times in the amount of data read. We also observe that the best evaluated structure performs up to 139 times faster than the structure adopted by the SEG-Y format, used as a standard by Brazil's National Petroleum Agency. Finally, the network storage experiments show that using more costly instances with higher data transmission capacity brings no significant benefit.
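A back-of-the-envelope model of the kind of theoretical analysis performed here (all numbers, including the 512 KiB read granularity, are invented for illustration): the same crossline query costs one seek in a matching layout but one seek per inline in acquisition order, and read-granularity amplification inflates the bytes fetched accordingly:

```python
# Cost model for one crossline query under two trace layouts.

IL, XL, NS = 400, 600, 1500        # inlines, crosslines, samples per trace
TRACE = NS * 4                     # bytes per float32 trace
BLOCK = 512 * 1024                 # assumed backend read granularity

def crossline_read_cost(layout):
    """Bytes fetched and seeks issued to read one full crossline section."""
    if layout == "inline-major":   # SEG-Y-like acquisition order
        seeks = IL                 # one far-apart trace per inline
        bytes_read = IL * max(TRACE, BLOCK)   # each seek pays the granularity
    else:                          # intermediate, crossline-major layout
        seeks = 1                  # the whole section is contiguous
        bytes_read = max(IL * TRACE, BLOCK)
    return bytes_read, seeks

for layout in ("inline-major", "crossline-major"):
    b, s = crossline_read_cost(layout)
    print(f"{layout:16}: {b / 2**20:8.1f} MiB fetched, {s:4} seeks")
```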
The objective of this article is to recount some of the development history of the most competitive industry of the second half of the last century, computers and their operating systems and applications, and to understand the role of Microsoft and how it decisively influenced the course of events, becoming the "software giant" with strong monopolistic tendencies that, along its trajectory, defeated every competitor that crossed its path, collecting powerful enemies in the process. Out of this discontent emerged Linux, free software backed by large companies, among them some former competitors defeated by Microsoft. Linux has been gaining enormous ground in the corporate world, is nearly mature enough technologically to establish its presence in the Intel-architecture market, and is preparing to challenge Windows's hold on low-end platforms and enter the world of personal computers. Has a heavyweight competitor finally arrived to confront the empire? Keywords: open source, Microsoft, Linux.