Abstract. A successful detection of the stealthy dopant-level circuit (trojan) proposed by Becker et al. at CHES 2013 [1] is reported. Contrary to an assumption made by Becker et al., dopant types in the active region are visible with either scanning electron microscopy (SEM) or focused ion beam (FIB) imaging. The successful measurement is explained by an LSI failure-analysis technique called passive voltage contrast [2]. The experiments are conducted on a dedicated chip. The chip uses the diffusion programmable device [3], an anti-reverse-engineering technique based on the same principle as the stealthy dopant-level trojan. The chip is delayered down to the contact layer, and images are taken with (1) an optical microscope, (2) SEM, and (3) FIB. As a result, the four possible dopant-well combinations, namely (i) p+/n-well, (ii) p+/p-well, (iii) n+/n-well, and (iv) n+/p-well, are distinguishable in the SEM images. Partial but sufficient detection is also achieved with FIB. Although the stealthy dopant-level circuits are visible, they can still make detection harder, because the contact layer must be imaged. We show that imaging the contact layer requires at most 16 times as many images as imaging a metal layer.
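The factor of 16 can be illustrated with a back-of-the-envelope image-count estimate. This is only a sketch under an assumed scaling, namely that the usable SEM field of view is limited by the smallest feature that must be resolved, so the number of image tiles covering a fixed die area grows quadratically as the field of view shrinks; the pitch and die-size values below are hypothetical and not taken from the paper.

# Illustrative image-count estimate (hypothetical numbers).
# Assumption: the number of images tiling a fixed die area scales as
# (die edge / field-of-view edge)^2, and the contact layer forces a
# field of view four times smaller than a metal layer does.
die_edge_um = 1000.0      # hypothetical 1 mm die edge
metal_fov_um = 40.0       # hypothetical field of view sufficient for a metal layer
contact_fov_um = 10.0     # hypothetical (4x finer) field of view for the contact layer

images_metal = (die_edge_um / metal_fov_um) ** 2
images_contact = (die_edge_um / contact_fov_um) ** 2
print(images_contact / images_metal)  # (40 / 10)^2 = 16, the worst-case factor quoted above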
Physical Unclonable Functions (PUFs) have been proposed to build tamper-resistant devices and to create unique identifiers for secure systems. A conventional basic arbiter PUF was fabricated in 0.18 μm CMOS technology, and the uniqueness of the generated multi-bit responses was evaluated. The uniqueness is lower than expected because some multi-bit responses are never generated. In this study, we propose a novel arbiter PUF that uses an RG-DTM (Response Generation according to Delay Time Measurement) scheme. Uniqueness is evaluated by the standard deviation of the Hamming-distance distribution between generated 256-bit responses. The standard deviation improves greatly, from 31 for the conventional PUFs to 8.45 for the proposed PUFs.
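The uniqueness metric itself is straightforward to compute. The following is a minimal sketch, assuming each device's response is available as a 256-bit vector; the random responses stand in for measured chip data and the device count is arbitrary.

import itertools
import numpy as np

# Uniqueness metric: standard deviation of the pairwise Hamming-distance
# distribution between 256-bit responses taken from different devices.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(20, 256), dtype=np.uint8)  # 20 hypothetical devices

distances = [np.count_nonzero(a != b)
             for a, b in itertools.combinations(responses, 2)]

print(np.mean(distances), np.std(distances))  # an ideal PUF gives a mean near 128 and a small spread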
Backdoor attacks are poisoning attacks and serious threats to deep neural networks. When an adversary mixes poison data into a training dataset, the training dataset is called a poison training dataset. A model trained on the poison training dataset becomes a backdoor model, which achieves high stealthiness and attack feasibility: it classifies only poison images into the adversarial target class and classifies other images correctly. We propose an additional procedure for our previously proposed countermeasure against backdoor attacks based on knowledge distillation. The procedure removes poison data from a poison training dataset and recovers the accuracy of the distillation model. Our countermeasure differs from previous ones in that it does not require detecting or identifying backdoor models, backdoor neurons, or poison data. A characteristic assumption in our defense scenario is that the defender can collect clean images without labels. The defender distills clean knowledge from a backdoor model (teacher model) into a distillation model (student model) with knowledge distillation. Subsequently, the defender removes poison-data candidates from the poison training dataset by comparing the predictions of the backdoor and distillation models, and then fine-tunes the distillation model on the detoxified training dataset to improve classification accuracy. We evaluated our countermeasure on two datasets. The backdoor is disabled by distillation, and fine-tuning further improves the classification accuracy of the distillation model. The fine-tuned model achieved accuracy comparable to a baseline model when the number of clean images in the distillation dataset was more than 13% of the training data. Our results indicate that our countermeasure can be applied to general image-classification tasks and that it works well whether or not the defender's received training dataset is poisoned.
CCS CONCEPTS: • Computing methodologies → Computer vision; • Security and privacy → Domain-specific security and privacy architectures.
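The poison-removal step described above compares the two models' predictions on every training sample. A minimal sketch of that step follows; the function name and the predict callables are hypothetical placeholders rather than the authors' code, and they assume both models return class-probability vectors.

import numpy as np

def filter_poison_candidates(train_images, train_labels, backdoor_predict, distilled_predict):
    # Keep a training sample only when the backdoor (teacher) model and the
    # distillation (student) model predict the same class. Disagreements are
    # treated as poison-data candidates, because the student was distilled from
    # clean, unlabeled images and should not reproduce the backdoor behaviour.
    keep = [i for i, x in enumerate(train_images)
            if np.argmax(backdoor_predict(x)) == np.argmax(distilled_predict(x))]
    detox_images = [train_images[i] for i in keep]
    detox_labels = [train_labels[i] for i in keep]
    return detox_images, detox_labels  # the distillation model is then fine-tuned on these

The detoxified dataset returned here is what the defender uses for the fine-tuning stage mentioned above.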