For any computing system to be secure, both hardware and software have to be trusted. If the hardware layer in a secure system is compromised, not only would it be possible to extract secret information about the software, but it would also be extremely hard for the software to detect that an attack is underway. In this work we detail a complete end-to-end fault attack on a microprocessor system and demonstrate in practice how hardware vulnerabilities can be exploited to target secure systems. We developed a theoretical attack on the RSA signature algorithm, and we realized it in practice against an FPGA implementation of the system under attack. To perpetrate the attack, we inject transient faults into the target machine by regulating the voltage supply of the system. Thus, our attack does not require access to the victim system's internal components, but simply proximity to it. The paper makes three important contributions: first, we develop a systematic fault-based attack on the modular exponentiation algorithm used in RSA. Second, we expose and exploit a severe flaw in the implementation of the RSA signature algorithm in OpenSSL, a widely used package for SSL encryption and authentication. Third, we report the first physical demonstration of a fault-based security attack on a complete microprocessor system running unmodified production software: we attack the original OpenSSL authentication library running on a SPARC Linux system implemented on FPGA, and extract the system's 1024-bit RSA private key in approximately 100 hours.
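To illustrate the fault-based key-recovery idea, the following is a minimal Python sketch, not the paper's exact procedure: it models fixed-window modular exponentiation in which a transient fault flips one bit of the multiplier in a single window multiplication, and an offline search over (window position, window value, flipped bit) finds the hypothesis that reproduces the faulty signature, revealing that window of the private exponent. The toy key size, 4-bit windows, and 8-bit fault search space are illustrative assumptions.

```python
# Sketch of fault-based recovery of private-exponent windows (toy parameters).
import random

W = 4                      # window width in bits (assumption)
p, q = 61, 53              # toy primes; a real key is 1024+ bits
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
m = 42                     # message representative (already padded/hashed)

def windows(d, w):
    """Split d into w-bit windows, most significant first."""
    bits = bin(d)[2:]
    bits = "0" * ((-len(bits)) % w) + bits
    return [int(bits[i:i + w], 2) for i in range(0, len(bits), w)]

def sign(m, d, n, fault_at=None, fault_bit=0):
    """Fixed-window exponentiation; optionally flip one bit of the
    multiplier used at window index `fault_at`."""
    acc = 1
    for i, w_i in enumerate(windows(d, W)):
        acc = pow(acc, 2 ** W, n)          # W squarings
        mult = pow(m, w_i, n)
        if i == fault_at:
            mult ^= 1 << fault_bit         # transient single-bit fault
        acc = (acc * mult) % n
    return acc

s = sign(m, d, n)                          # correct signature
ws = windows(d, W)
target = random.randrange(len(ws))
s_bad = sign(m, d, n, fault_at=target, fault_bit=random.randrange(8))

# Offline search: the multiplier of window i is squared W*(L-1-i) more
# times, so s contains the factor (m^{w_i})^c with c = 2^(W*(L-1-i)).
# A one-bit fault replaces that factor, giving
#   s_bad = s * (m^{w_i})^{-c} * (m^{w_i} XOR 2^b)^c  (mod n).
# Testing each (i, w, b) hypothesis against the observed s_bad reveals
# the faulted window's value. With a toy modulus, spurious matches are
# possible; a real attack accumulates constraints over many faults.
for i in range(len(ws)):
    c = 2 ** (W * (len(ws) - 1 - i))
    for w in range(2 ** W):
        good = pow(m, w, n)                # gcd(good, n) == 1 in this setup
        for b in range(8):
            bad = good ^ (1 << b)
            cand = s * pow(pow(good, -1, n), c, n) * pow(bad, c, n) % n
            if cand == s_bad:
                print(f"window {i} of d recovered: {w} (true: {ws[i]})")
```

The offline search uses only public values (n, m, the correct and faulty signatures), which is what makes a purely physical fault channel sufficient for key extraction.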
Extreme technology scaling in silicon devices drastically affects reliability, particularly because of runtime failures induced by transistor wearout. Current online testing mechanisms focus on testing all components in a microprocessor, including hardware that has not been exercised, and thus incur high performance penalties. We propose a hybrid hardware/software online testing solution in which components that are heavily utilized by the software application are tested more thoroughly and more frequently. Thus, our online testing approach focuses on the processor units that affect application correctness the most, and it achieves high coverage while incurring minimal performance overhead. We also introduce a new metric, Application-Aware Fault Coverage, which measures a test's capability to detect faults that might have corrupted the state or the output of an application. Test coverage is further improved through the insertion of observation points that augment the coverage of the testing system. Evaluating our technique on a Sun OpenSPARC T1, we show that our solution maintains high Application-Aware Fault Coverage while reducing the performance overhead of online testing by more than a factor of 2 compared to solutions oblivious to the application's behavior. Specifically, we found that our solution can achieve 95% fault coverage while maintaining a minimal performance overhead (1.3%) and area impact (0.4%).
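A minimal sketch of how such a metric could be computed from fault-injection outcomes, assuming Application-Aware Fault Coverage is the fraction of application-corrupting faults that the tests detect; the record format and field names are illustrative, not the paper's tooling:

```python
# Computing an Application-Aware Fault Coverage figure from injection records.
from dataclasses import dataclass

@dataclass
class FaultOutcome:
    corrupted_app: bool   # fault propagated to application state/output
    detected: bool        # online test flagged the fault

def application_aware_fault_coverage(outcomes):
    relevant = [o for o in outcomes if o.corrupted_app]
    if not relevant:
        return 1.0        # nothing to detect: vacuously covered
    caught = sum(o.detected for o in relevant)
    return caught / len(relevant)

# Example: 3 of 4 application-corrupting faults caught; masked faults
# (corrupted_app=False) do not count against coverage.
runs = [FaultOutcome(True, True), FaultOutcome(True, True),
        FaultOutcome(True, False), FaultOutcome(True, True),
        FaultOutcome(False, False)]
print(f"AAFC = {application_aware_fault_coverage(runs):.0%}")  # 75%
```

The point of the metric is visible in the example: faults that never affect the application are excluded from the denominator, so tests are not penalized for ignoring hardware the software does not exercise.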
The reliability of future processors is threatened by decreasing transistor robustness. Current architectures focus on delivering high performance at low cost; lifetime device reliability is a secondary concern. As the rate of permanent hardware faults increases, robustness will become a first-class constraint even for low-cost systems. Current research into reliable architectures has focused on ad-hoc solutions that improve designs without altering their centralized control logic. Unfortunately, this centralized control presents a single point of failure, which limits long-term robustness. To address this issue, we introduce Viper, an architecture built from a redundant collection of fine-grained hardware components. Instructions are treated as customers that require a sequence of services in order to execute properly. The hardware components vie to perform whatever services they can, dynamically forming virtual pipelines that avoid defective hardware. This is done using distributed control logic, which avoids a single point of failure by construction. Viper can tolerate a high number of permanent faults thanks to its inherent redundancy. As fault counts increase, its performance degrades more gracefully than that of traditional centralized-logic architectures. We estimate that fault rates higher than one permanent fault per 12 million transistors, on average, cause the throughput of a classic CMP design to fall below that of a Viper design of similar size.
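The following is a highly simplified Python sketch of the virtual-pipeline idea as we read it from the abstract, not Viper's actual arbitration protocol: a redundant pool of components each offers one pipeline service, and for every instruction a pipeline is formed from whichever healthy providers remain. The service names and pool sizes are invented for illustration.

```python
# Forming a virtual pipeline from redundant components, routing around faults.
SERVICES = ["fetch", "decode", "execute", "memory", "writeback"]

# Redundant pool: two providers per service; some are permanently faulty.
components = [
    {"name": f"{svc}{k}", "service": svc, "faulty": False}
    for svc in SERVICES for k in (0, 1)
]
components[4]["faulty"] = True   # one 'execute' unit has a permanent fault

def form_virtual_pipeline(components):
    """Return one healthy provider per service, or None if some service
    has no healthy provider left (no pipeline can be formed)."""
    pipeline = []
    for svc in SERVICES:
        healthy = [c for c in components
                   if c["service"] == svc and not c["faulty"]]
        if not healthy:
            return None
        pipeline.append(healthy[0]["name"])
    return pipeline

print(form_virtual_pipeline(components))
# ['fetch0', 'decode0', 'execute1', 'memory0', 'writeback0'] -- the
# defective execute unit is routed around, costing parallelism (fewer
# simultaneous pipelines) rather than correctness.
```

In hardware the provider choice would be made by distributed arbitration among the components themselves rather than by a central function, which is what removes the single point of failure.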
Current technology scaling is leading to increasingly fragile components, making hardware reliability a primary design consideration. Recently, researchers have proposed low-cost reliability solutions that detect hardware faults by monitoring software-level symptoms. SWAT (SoftWare Anomaly Treatment), one such solution, demonstrated through microarchitecture-level simulations that it can provide high fault coverage and a Silent Data Corruption (SDC) rate of under 0.5% for both permanent and transient hardware faults in all but one hardware component studied. More accurate evaluations of SWAT require tests on an industry-strength processor, a commercial operating system, unmodified applications, and accurate low-level fault models. In this paper, we propose an FPGA-based evaluation platform that provides the software, hardware, and fault-model accuracy needed to verify symptom-based fault detection schemes. Our platform targets an OpenSPARC T1 processor design running a commercial operating system, OpenSolaris, and leverages CrashTest, an accurate gate-level fault analysis framework, to model gate-level permanent faults. Furthermore, we modified the OpenSPARC core to support hardware checkpoint and restore, making a large volume of experiments feasible. With this platform we provide results for 30,620 fault injection experiments across the major components of the OpenSPARC T1 design, running five SPECInt 2000 benchmarks. With a conservative overall SDC rate of 0.94%, the results are similar to previous microarchitecture-level evaluations of SWAT and are encouraging for the effectiveness of symptom-based software detectors.
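For readers unfamiliar with the terminology, here is a minimal sketch of the bookkeeping behind an SDC rate, in the way symptom-based evaluations typically classify runs; the numbers and field names below are hypothetical, not the paper's data:

```python
# Classifying fault-injection runs: Masked if output matches the golden run,
# Detected if a software-level symptom fired (crash, hang, fatal trap), and
# Silent Data Corruption (SDC) if the output differs with no symptom.
from collections import Counter

def classify(output_matches_golden: bool, symptom_fired: bool) -> str:
    if symptom_fired:
        return "detected"
    return "masked" if output_matches_golden else "sdc"

# Hypothetical batch of injection results: (output_ok, symptom) pairs.
runs = [(True, False)] * 9500 + [(False, True)] * 400 + [(False, False)] * 100
tally = Counter(classify(ok, sym) for ok, sym in runs)
sdc_rate = tally["sdc"] / len(runs)
print(tally, f"SDC rate = {sdc_rate:.2%}")   # SDC rate = 1.00%
```

SDCs are the critical category: the hardware silently produced a wrong answer, which is exactly what a symptom-based detector like SWAT must keep rare.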
The KM3NeT Collaboration is constructing a km³-volume neutrino telescope in the Mediterranean Sea, called ARCA (Astroparticle Research with Cosmics in the Abyss), that will achieve an unprecedented sensitivity to high-energy cosmic neutrinos. This telescope will be able to reconstruct the arrival direction of neutrinos with a precision of 0.1°. The configuration of ARCA makes it sensitive to neutrinos over a wide energy range, from sub-TeV up to tens of PeV. Moreover, the detector has a large field of view and a very high duty cycle, allowing for full-sky (and all-flavour) searches. All these features make ARCA an excellent instrument for studying transient neutrino sources. Atmospheric muons and neutrinos, produced by primary cosmic rays, constitute the main background for ARCA. This background can be several orders of magnitude higher than the expected cosmic neutrino flux. In this work, we introduce an event selection that reduces the background to a negligible level inside the region of interest and within the search time window. The performance of ARCA in detecting a transient neutrino flux, including the effective area, sensitivity, and discovery potential, is provided for a given test source and for different time windows.
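As a rough illustration of how a time-window search trades background against discovery potential, here is a minimal Python sketch assuming simple Poisson counting statistics; the background rate and window durations are invented numbers, not ARCA's performance figures:

```python
# Toy discovery-potential calculation: the expected background in a search
# window scales with its duration, and the discovery potential is the number
# of observed events needed to exceed background at a given significance.
from math import exp, factorial

def poisson_sf(k, mu):
    """P(N >= k) for N ~ Poisson(mu)."""
    return 1.0 - sum(exp(-mu) * mu**i / factorial(i) for i in range(k))

def counts_for_discovery(mu_bkg, p_value=2.87e-7):   # ~5 sigma, one-sided
    k = 0
    while poisson_sf(k, mu_bkg) > p_value:
        k += 1
    return k

bkg_rate = 1e-4          # assumed background rate in the ROI [events/s]
for window in (10.0, 1e3, 1e5):                      # search windows [s]
    mu_b = bkg_rate * window
    print(f"window {window:>8.0f} s: <bkg> = {mu_b:.3g}, "
          f"need >= {counts_for_discovery(mu_b)} events for 5-sigma")
```

This captures the abstract's point qualitatively: once the event selection drives the in-window background to a negligible level, even a handful of coincident events can constitute a discovery.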