Transferability of adversarial examples is of central importance for attacking an unknown model, which facilitates adversarial attacks in more practical scenarios such as black-box attacks. Existing transferable attacks tend to craft adversarial examples by indiscriminately distorting features to degrade prediction accuracy in a source model, without awareness of the intrinsic features of objects in the images. We argue that such brute-force degradation introduces model-specific local optima into adversarial examples, thus limiting transferability. By contrast, we propose the Feature Importance-aware Attack (FIA), which disrupts the important object-aware features that consistently dominate model decisions. More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model computed on a batch of random transforms of the original clean image. These gradients are highly correlated with objects of interest, and this correlation is invariant across different models. Moreover, the random transforms preserve intrinsic object features while suppressing model-specific information. Finally, the feature importance guides the search for adversarial examples toward disrupting critical features, achieving stronger transferability. Extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed FIA, i.e., improving the success rate by 8.4% against normally trained models and 11.7% against defense models compared with state-of-the-art transferable attacks.
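The core of FIA is the aggregate gradient: gradients with respect to features, averaged over random transforms of the clean input. The following is a minimal stdlib-only sketch of that averaging step, with the source DNN replaced by a toy linear scoring function and gradients taken by finite differences; every name, weight, and parameter value here is an illustrative assumption, not the paper's implementation.

```python
import random

def model_score(features):
    """Toy stand-in for the source model's output on a feature vector.
    In FIA this would be a DNN, differentiated w.r.t. an intermediate
    feature map rather than the input."""
    weights = [0.9, -0.2, 0.7, 0.1]  # stand-in for learned parameters
    return sum(w * f for w, f in zip(weights, features))

def numeric_gradient(f, x, eps=1e-6):
    """Central finite-difference gradient of scalar f at point x."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

def aggregate_gradient(x, n_transforms=30, drop_prob=0.3, seed=0):
    """Average gradients over random masking transforms of the input,
    mimicking FIA's aggregate gradient as a feature-importance weighting."""
    rng = random.Random(seed)
    agg = [0.0] * len(x)
    for _ in range(n_transforms):
        # Random transform: randomly drop features (pixel masking analogue).
        masked = [xi if rng.random() > drop_prob else 0.0 for xi in x]
        g = numeric_gradient(model_score, masked)
        agg = [a + gi for a, gi in zip(agg, g)]
    # Normalize so the magnitudes read as relative importance.
    norm = sum(abs(a) for a in agg) or 1.0
    return [a / norm for a in agg]

importance = aggregate_gradient([1.0, 0.5, -0.3, 0.8])
```

In the full attack, this importance map would then weight a feature-space objective that the adversarial perturbation maximally disrupts.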
Collaborative inference (co-inference) accelerates deep neural network inference by extracting representations on the device and making predictions at the edge server; however, the uploaded representations might disclose sensitive information about private attributes of users (e.g., race). Although many privacy-preserving mechanisms for co-inference have been proposed to eliminate such concerns, leakage of sensitive attributes may still happen during inference. In this paper, we explore privacy leakage against privacy-preserving co-inference by decoding the uploaded representations into a vulnerable form. We propose a novel attack framework named AttrLeaks, which consists of a shadow model of the feature extractor (FE), a susceptibility reconstruction decoder, and a private attribute classifier. Based on our observation that values in the inner layers of the FE (the internal representation) are more susceptible to attack, the shadow model simulates the victim's FE in the black-box scenario and generates the internal representations. The susceptibility reconstruction decoder then transforms the victim's uploaded representations into the vulnerable form, which enables the malicious classifier to easily predict the private attributes. Extensive experimental results demonstrate that AttrLeaks outperforms the state of the art in terms of attack success rate.
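The attack pipeline above (intercept the uploaded representation, decode it into a vulnerable form, then classify the private attribute) can be sketched end-to-end with toy linear maps standing in for the DNN components. Every function name and coefficient below is an illustrative assumption; in the real AttrLeaks, the decoder would be trained against a shadow FE that mimics the victim.

```python
def victim_feature_extractor(x):
    """On-device FE: produces the representation uploaded to the edge server."""
    return [xi * 0.5 + 0.1 for xi in x]

def reconstruction_decoder(rep):
    """Attacker's decoder: maps uploaded representations into a 'vulnerable'
    form that the attribute classifier can exploit. Here it approximately
    inverts the toy FE; in practice it is learned from shadow-model outputs."""
    return [ri * 2.0 - 0.2 for ri in rep]

def attribute_classifier(decoded):
    """Predicts a binary private attribute from the decoded representation."""
    return 1 if sum(decoded) > 0 else 0

# End-to-end attack on one intercepted upload:
x_private = [0.3, -0.1, 0.4]
uploaded = victim_feature_extractor(x_private)
leaked_attr = attribute_classifier(reconstruction_decoder(uploaded))
```

Because the toy decoder exactly inverts the toy FE, the decoded vector equals the private input, making the attribute trivially predictable; the paper's point is that a learned decoder can approximate this inversion even against privacy-preserving FEs.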
Photoacoustic imaging has developed rapidly in recent years and can effectively detect tumors. This paper proposes a non-destructive photoacoustic imaging method for detecting uterine tumors based on a delayed-superposition (delay-and-sum) algorithm. A three-dimensional model of the uterus was designed in SolidWorks and fabricated on a 3D printer. Strong absorbers were placed in different parts of the model to simulate tumors at different locations; a pulsed laser was delivered to the uterus via hysteroscopy, an ultrasound probe received the ultrasound signals in vitro, and image reconstruction was carried out to verify the feasibility of subsequent experiments. A uterus phantom was then made by covering the printed model with fresh pork belly; the phantom was imaged, and the optical absorption parameters of tumor and normal uterine tissue were used to reconstruct the uterine image. To address image blurring caused by noise introduced during data acquisition, a noise-cancellation method was applied, which effectively solved this problem. Experiments show that the system can accurately detect the structural size, location, and depth of the simulated tumor.
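The delayed-superposition (delay-and-sum) reconstruction named above back-projects each sensor trace by its time of flight and sums the delayed samples at every image pixel. A minimal 2-D sketch, with all geometry, sampling rate, and sound-speed values chosen for illustration rather than taken from the paper:

```python
import math

def delay_and_sum(signals, sensor_xs, grid_xs, grid_ys, c=1500.0, fs=1e6):
    """Delay-and-sum reconstruction on a 2-D pixel grid.

    signals   : per-sensor sampled waveforms
    sensor_xs : x-positions of sensors on the y=0 line (metres)
    c         : assumed speed of sound (m/s); fs: sampling rate (Hz)
    """
    image = []
    for y in grid_ys:
        row = []
        for x in grid_xs:
            acc = 0.0
            for sig, sx in zip(signals, sensor_xs):
                dist = math.hypot(x - sx, y)      # pixel-to-sensor distance
                idx = int(round(dist / c * fs))   # time of flight in samples
                if 0 <= idx < len(sig):
                    acc += sig[idx]               # superpose delayed samples
            row.append(acc)
        image.append(row)
    return image

# Usage: a point absorber at (0.0, 0.02) m puts an impulse in each sensor
# trace at its time of flight; the reconstruction should peak at that pixel.
sensor_xs = [-0.01, 0.0, 0.01]
src = (0.0, 0.02)
signals = []
for sx in sensor_xs:
    trace = [0.0] * 64
    trace[int(round(math.hypot(src[0] - sx, src[1]) / 1500.0 * 1e6))] = 1.0
    signals.append(trace)
image = delay_and_sum(signals, sensor_xs, [-0.01, 0.0, 0.01], [0.01, 0.02, 0.03])
```

With three sensors, only the true source pixel collects all three impulses, which is why the summed image peaks at the absorber's location and depth.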