Neuromorphic systems have recently gained increasing attention for their high computation efficiency. Many designs have been proposed and realized with traditional CMOS technology or emerging devices. In this work, we propose a spiking neuromorphic design built on resistive crossbar structures and implemented in IBM 130nm technology. Our design adopts a rate-coding scheme in which pre- and post-neuron signals are represented by digitized pulses. The weighting of pre-neuron signals is performed on the resistive crossbar in the analog domain, and the computed result is converted into digitized output spikes by an integrate-and-fire circuit (IFC) serving as the post-neuron. We calibrated the computation accuracy of the entire system through circuit simulations, and the results match our analytical model well. Furthermore, we implemented both feedforward and Hopfield networks using the proposed neuromorphic design, and studied system performance and robustness through extensive Monte Carlo simulations on a digital image recognition task. Compared with a previous crossbar-based computing engine that represents data with voltage amplitude, our design achieves more than 50% energy savings, while the average probability of failed recognition increases by only 1.46% and 5.99% in the feedforward and Hopfield implementations, respectively.
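As a rough illustration of the rate-coding and integrate-and-fire pipeline described above, the following minimal numpy sketch simulates one crossbar column driving an IFC. All names and device parameters (read voltage, conductances, membrane threshold) are placeholder assumptions for illustration, not the paper's calibrated circuit values.

```python
# Minimal sketch: rate-coded inputs, analog weighting on one crossbar column,
# and an integrate-and-fire circuit (IFC) producing digitized output spikes.
import numpy as np

rng = np.random.default_rng(0)

def crossbar_ifc(input_rates, G, v_read=0.3, c_mem=1.0, v_th=1.0, n_steps=200):
    """Simulate one crossbar column driving an IFC. `input_rates` are pre-neuron
    spike probabilities per time step; `G` are the column conductances (weights)."""
    v_mem = 0.0
    out_spikes = 0
    for _ in range(n_steps):
        # Rate-coded pre-neuron pulses: each input fires with its own probability.
        pulses = rng.random(input_rates.shape) < input_rates
        # Analog weighting on the crossbar: summed column current (Ohm/Kirchhoff).
        i_col = v_read * np.dot(G, pulses)
        # The IFC integrates the current, then fires and resets at the threshold.
        v_mem += i_col / c_mem
        if v_mem >= v_th:
            out_spikes += 1
            v_mem = 0.0
    return out_spikes / n_steps  # post-neuron output rate

# Example: 8 inputs with hypothetical rates and conductances in arbitrary units.
rates = rng.random(8)
weights = rng.random(8) * 0.05
print(crossbar_ifc(rates, weights))
```

The output spike rate approximates the weighted sum of the input rates, which is the quantity the analog crossbar computes in this scheme.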
Deepfake refers to a category of face-swapping attacks that leverage machine learning models such as autoencoders or generative adversarial networks (GANs). Although the concept of face swapping is not new, recent technical advances make fake content (e.g., images, videos) more realistic and imperceptible to humans. Various detection techniques for Deepfake attacks have been explored; these methods, however, are passive measures, mitigating attacks only after high-quality fake content has been generated. More importantly, we would like to stay ahead of attackers with robust defenses. This work takes an offensive measure to impede the generation of high-quality fake images or videos. Specifically, we propose novel transformation-aware adversarially perturbed faces as a defense against GAN-based Deepfake attacks. Unlike naïve adversarial faces, our approach leverages differentiable random image transformations during perturbation generation. We further propose an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants under the black-box setting. We show that training a Deepfake model on adversarial faces leads to a significant degradation in the quality of the synthesized faces. This degradation is twofold: the synthesized faces exhibit more visual artifacts and are thus more obviously fake or less convincing to human observers, and they can also be easily detected based on various metrics.
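The following numpy sketch shows the general shape of a transformation-aware perturbation loop: gradients are averaged over random differentiable transformations before each projected gradient step. The surrogate loss, the brightness-scale transformation, and all parameter values are illustrative assumptions; the actual defense differentiates through the target autoencoder/GAN and a richer set of image transformations.

```python
# Minimal sketch of a transformation-aware (EOT/PGD-style) perturbation loop.
import numpy as np

rng = np.random.default_rng(0)

def transform_and_grad(x, target):
    """One random differentiable transformation (a brightness scale s) applied to x,
    followed by a toy quadratic surrogate loss L = mean((s*x - target)^2).
    Returns dL/dx computed analytically via the chain rule."""
    s = 1.0 + 0.1 * rng.standard_normal()
    return 2.0 * s * (s * x - target) / x.size

def eot_pgd(x, target, eps=8 / 255, alpha=2 / 255, steps=10, n_transforms=8):
    """Projected gradient ascent on the surrogate loss, averaging gradients over
    random transformations so the perturbation survives the training pipeline."""
    adv = x.copy()
    for _ in range(steps):
        grads = [transform_and_grad(adv, target) for _ in range(n_transforms)]
        g = np.mean(grads, axis=0)
        adv = adv + alpha * np.sign(g)          # ascent step on the loss
        adv = np.clip(adv, x - eps, x + eps)    # stay within the L_inf budget
        adv = np.clip(adv, 0.0, 1.0)            # keep a valid image
    return adv

face = rng.random((64, 64, 3))                  # placeholder face image in [0, 1]
protected = eot_pgd(face, target=np.zeros_like(face))
```

Averaging gradients over the transformation distribution is what makes the perturbation robust to the preprocessing a Deepfake training pipeline applies, rather than being washed out by it.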
Emerging nonvolatile memory (NVM) technologies have demonstrated great potential to revolutionize the modern memory hierarchy thanks to their many promising properties: nanosecond read/write time, small cell area, non-volatility, and easy CMOS integration. NVM devices can also be leveraged to efficiently realize hardware security primitives such as physical unclonable functions (PUFs) and random number generators (RNGs). In this paper, we summarize two of our works on using NVM devices to implement these hardware security features and compare them with conventional designs.
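To illustrate the basic idea behind an NVM-based PUF, the numpy sketch below models process-induced device-to-device resistance variation and derives challenge-response bits by comparing pairs of cells. The distribution parameters and challenge format are illustrative assumptions, not measured device data or the designs summarized in the paper.

```python
# Minimal sketch: device-to-device resistance variation as a PUF source of entropy.
import numpy as np

def make_nvm_array(n_cells, seed):
    """Each fabricated chip gets its own random cell resistances; a lognormal spread
    around a nominal low-resistance state stands in for process variation."""
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=np.log(10e3), sigma=0.1, size=n_cells)

def puf_response(resistances, challenge_pairs):
    """One response bit per challenge: compare the resistances of the two addressed cells."""
    return np.array([int(resistances[i] > resistances[j]) for i, j in challenge_pairs])

chip_a = make_nvm_array(128, seed=1)   # two different chips ...
chip_b = make_nvm_array(128, seed=2)   # ... with independent variation
challenges = [(0, 1), (5, 9), (20, 77), (33, 64)]
print(puf_response(chip_a, challenges))
print(puf_response(chip_b, challenges))
```

Because the variation is fixed at fabrication but unpredictable across chips, the same challenges yield chip-specific responses, which is the property a PUF exploits.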