With the continuous scaling of CMOS technology, microelectronic circuits are increasingly susceptible to variations, such as variations in operating conditions. Such variations can cause delay uncertainty in microelectronic circuits, leading to timing errors. Circuit designers typically combat these errors with conservative guardbands in circuit and architectural design, which, however, can cause a significant loss of operational efficiency. In this paper, we propose TEVoT, a supervised learning model that predicts the timing errors of functional units (FUs) under different operating conditions, clock speeds, and input workloads. We perform dynamic timing analysis to characterize the delay variations of FUs under different conditions and collect training data from this analysis. We then extract useful features from the training data and apply supervised learning methods to build TEVoT. Across 100 operating conditions, 4 widely used FUs, 3 clock speeds, and 3 datasets, TEVoT achieves an average prediction accuracy of 98.25% and is 100X faster than gate-level simulation. We further use TEVoT to estimate application output quality under different operating conditions by exposing circuit-level timing errors to the application level. TEVoT achieves an average estimation accuracy of 97% for two image processing applications across 100 operating conditions.
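The abstract does not specify the learning algorithm or the exact feature set; the following is a minimal sketch of the described training flow, assuming scikit-learn and hypothetical features (voltage, temperature, clock period, input operand bits) with per-sample timing-error labels taken from dynamic timing analysis.

```python
# Sketch of a TEVoT-style supervised training flow (assumptions: scikit-learn,
# placeholder features and labels; the paper's actual features/model may differ).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data collected from dynamic timing analysis:
# row = [voltage, temperature, clock_period, operand_a_bits..., operand_b_bits...]
# label = 1 if the FU output had a timing error for that input, else 0.
rng = np.random.default_rng(0)
X = rng.random((10_000, 3 + 2 * 32))       # placeholder feature matrix
y = rng.integers(0, 2, size=10_000)        # placeholder timing-error labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("timing-error prediction accuracy:",
      accuracy_score(y_test, model.predict(X_test)))
```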
Brain-inspired hyperdimensional computing (HDC) is an emerging computational paradigm that mimics brain cognition and leverages hyperdimensional vectors with fully distributed holographic representation and (pseudo)randomness. Compared to other machine learning (ML) methods such as deep neural networks (DNNs), HDC offers several advantages, including high energy efficiency, low latency, and one-shot learning, making it a promising alternative for a wide range of applications. However, the reliability and robustness of HDC models have not yet been explored. In this paper, we design, implement, and evaluate HDTest, which tests HDC models by automatically exposing unexpected or incorrect behaviors under rare inputs. The core idea of HDTest is guided differential fuzz testing. Guided by the distance between the query hypervector and the reference hypervector in HDC, HDTest continuously mutates original inputs to generate new inputs that trigger incorrect behaviors of HDC models. Compared to traditional ML testing methods, HDTest does not require manually labeling the original inputs. Using handwritten digit classification as an example, we show that HDTest can generate thousands of adversarial inputs with negligible perturbations that successfully fool HDC models. On average, HDTest generates around 400 adversarial inputs within one minute on a commodity computer. Finally, by using the HDTest-generated inputs to retrain HDC models, we can strengthen their robustness. To the best of our knowledge, this paper presents the first effort to systematically test this emerging brain-inspired computational model.
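The distance-guided mutation loop described above can be illustrated with a short sketch. This is not HDTest's actual implementation; it assumes a bipolar HDC classifier exposed as an `encode()` function plus per-class reference hypervectors, and all names are illustrative.

```python
# Sketch of a distance-guided fuzzing loop for an HDC classifier
# (assumptions: encode() and class_hvs are provided by the HDC model under test).
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def predict(x, encode, class_hvs):
    """Return the predicted class and the similarity to every reference hypervector."""
    sims = np.array([cosine(encode(x), hv) for hv in class_hvs])
    return int(np.argmax(sims)), sims

def guided_fuzz(x, encode, class_hvs, max_iters=1000, eps=0.05, seed=0):
    """Mutate x with small perturbations, keeping a mutation only if it shrinks the
    similarity margin; stop when the predicted class flips (adversarial input)."""
    rng = np.random.default_rng(seed)
    orig_label, sims = predict(x, encode, class_hvs)
    best = x.copy()
    best_margin = sims[orig_label] - np.max(np.delete(sims, orig_label))
    for _ in range(max_iters):
        cand = np.clip(best + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        pred, sims = predict(cand, encode, class_hvs)
        if pred != orig_label:
            return cand, pred                   # incorrect behavior triggered
        m = sims[orig_label] - np.max(np.delete(sims, orig_label))
        if m < best_margin:                     # distance-based guidance
            best, best_margin = cand, m
    return None, orig_label                     # no adversarial input found
```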
As Moore's Law comes to an end and transistor scaling increasingly falls short in improving energy efficiency, alternative computing paradigms are direly needed. This need is further highlighted by the overwhelming increase in computing demand posed by emerging applications such as multimedia and data analysis. Fortunately, such driving workloads also present new opportunities: thanks to their inherent error tolerance, they do not require completely accurate computations. Thus, by trading off accuracy for better performance or improved efficiency, approximate computing promises tremendous growth for future computing. Various approximation methods demonstrate the effectiveness of voltage scaling in functional units (FUs) for exploring this energy-error trade-off. Yet, while an accurate error model is critical for assessing the error behavior of voltage-scaled FUs and its effects on application quality, existing error models of voltage-scaled FUs overlook the effects of input data and the disparity in error rates among different bits. To tackle this challenge, we propose LEVAX, an input-aware, learning-based error model of voltage-scaled FUs that predicts the timing error rate (TER) for each output bit. The model is trained using random forest methods, with input features and output labels extracted from gate-level simulations. To validate its effectiveness and demonstrate its prediction accuracy, we apply LEVAX to various FUs. Across all bit positions, voltage levels, and FUs, LEVAX achieves an average relative error of 1.20%. LEVAX also achieves an average per-voltage root mean square error (RMSE) of 1.03% and per-bit RMSE of 1.17%. By exposing this error rate up to the application level, LEVAX estimates the quality of four image processing applications under voltage scaling with an average accuracy of 97.9%. To the best of our knowledge, LEVAX is the first voltage-scaling error model of FUs that incorporates the effects of input data.
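A per-bit TER model of this kind can be sketched with a multi-output random forest. This is only an illustration under stated assumptions (scikit-learn, placeholder features derived from input operands and the voltage level); the paper's actual feature engineering and model configuration may differ.

```python
# Sketch of a LEVAX-style per-bit timing-error-rate (TER) regressor
# (assumptions: scikit-learn, placeholder data standing in for gate-level results).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

N_OUTPUT_BITS = 32

# Hypothetical dataset extracted from gate-level simulation:
# features = [voltage, operand statistics...], labels = per-bit TER in [0, 1].
rng = np.random.default_rng(0)
X = rng.random((5_000, 1 + 8))
Y = rng.random((5_000, N_OUTPUT_BITS))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# RandomForestRegressor natively handles multi-output targets, so a single
# model predicts the TER of every output bit jointly.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, Y_tr)

rmse = np.sqrt(mean_squared_error(Y_te, model.predict(X_te)))
print(f"average per-bit TER RMSE: {rmse:.4f}")
```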