In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint on the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), deriving a new bound on the noise scale that preserves differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, we apply the HGM to inject Gaussian noise into the first hidden layer and then establish a tighter robustness bound. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks over baseline approaches under a variety of model attacks.
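The classical Gaussian Mechanism and the redistribution idea can be sketched as follows. This is a minimal illustration, not the paper's actual HGM: the classical noise-scale bound (valid only for a privacy budget in (0, 1]) is used, and the per-coordinate `weights` scheme is a hypothetical stand-in for the paper's noise redistribution.

```python
import numpy as np

def gaussian_mechanism(x, sensitivity, epsilon, delta):
    # Classical Gaussian Mechanism, valid only for epsilon in (0, 1]:
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
    return x + np.random.normal(0.0, sigma, size=np.shape(x))

def heterogeneous_noise(x, sensitivity, epsilon, delta, weights):
    # Hypothetical sketch of heterogeneous redistribution: per-coordinate
    # weights reshape the noise while keeping the same average variance.
    # The HGM's actual bound for epsilon in (0, inf) is given in the paper;
    # this only illustrates the redistribution idea.
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()  # normalize so the weights average to 1
    return x + np.random.normal(0.0, sigma, size=len(w)) * np.sqrt(w)
```

Redistributing variance toward less sensitive coordinates is what lets such a mechanism trade noise placement against utility at a fixed privacy loss.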
In Graph Neural Networks (GNNs), the graph structure is incorporated into the learning of node representations. This complex structure makes explaining GNNs' predictions much more challenging. In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs. Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction. Unlike existing explainers for GNNs, whose explanations are drawn from a set of linear functions of the explained features, PGM-Explainer can demonstrate the dependencies of explained features in the form of conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer includes the Markov blanket of the target prediction, i.e., all of its statistical information. We also show that the explanation returned by PGM-Explainer contains the same set of independence statements as the perfect map. Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers on many benchmark tasks.
34th Conference on Neural Information Processing Systems (NeurIPS 2020)
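The perturbation-and-dependence idea behind a model-agnostic PGM explainer can be sketched minimally. This is a hypothetical simplification, not the paper's algorithm: `predict` stands in for a black-box GNN queried on masked inputs, and the dependence score is a crude contrast of conditional probabilities rather than a fitted Bayesian network.

```python
import random

def sample_perturbations(predict, n_nodes, n_samples=400, p=0.5, seed=0):
    # Sketch of the sampling step: randomly keep (1) or perturb (0) each
    # node and record the resulting binary target prediction. Each row is
    # one realization of the random variables a PGM would be fit on.
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        mask = [1 if rng.random() < p else 0 for _ in range(n_nodes)]
        data.append(mask + [predict(mask)])
    return data

def dependency_score(data, node):
    # Crude dependence between one node variable and the target:
    # |P(target=1 | node kept) - P(target=1 | node perturbed)|.
    on = [row[-1] for row in data if row[node] == 1]
    off = [row[-1] for row in data if row[node] == 0]
    if not on or not off:
        return 0.0
    return abs(sum(on) / len(on) - sum(off) / len(off))
```

On a toy predictor that depends only on node 0 (`lambda mask: mask[0]`), node 0 scores 1.0 while irrelevant nodes score near 0; a real explainer would instead run statistical independence tests and learn a PGM structure over these samples.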
In the evaluation of magnetic field data, edge enhancement and detection techniques are important tools for interpreting geological structures. In a general geological sense, the contiguity of deep and shallow magnetic sources produces weak and intense anomalies that complicate the interpretation needed to disclose adjacent anomalous sources. Most existing edge-detection filters in magnetics have the disadvantage that they require a reduction-to-pole transformation as a pre-processing step, or that they cannot balance weak and intense anomalies and therefore fail to detect the edges of deep and shallow sources simultaneously. This study presents an improved edge-detection filter, LAS (logistic function of the analytical signal), based on the generalised logistic function configured by the ratio of derivatives of the analytical signal. This novel approach reduces the dependence on the direction of magnetization and balances anomalies from sources at different depths. The feasibility of the method is examined on both theoretical and real data, in comparison with other methods that utilize the analytical signal in their basis. The results demonstrate that the LAS method provides a more accurate estimation of edge localization.
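The building blocks of such a filter can be sketched numerically. This is a hedged illustration, not the paper's exact formulation: the amplitude of the analytical signal is standard, but the particular logistic-of-derivative-ratio form and the steepness parameter `k` below are assumptions.

```python
import numpy as np

def analytic_signal_amplitude(tx, ty, tz):
    # Amplitude of the 3-D analytical signal from the field derivatives
    # in x, y, z (e.g. computed from gridded magnetic data).
    return np.sqrt(tx**2 + ty**2 + tz**2)

def las_edge_filter(asx, asy, asz, k=10.0, eps=1e-12):
    # Hypothetical LAS-style filter: a logistic (sigmoid) of the ratio of
    # the vertical derivative of the analytical signal to its total
    # horizontal derivative. k is a steepness parameter of the logistic;
    # the exact generalised-logistic form used in the paper may differ.
    ratio = asz / (np.sqrt(asx**2 + asy**2) + eps)
    return 1.0 / (1.0 + np.exp(-k * ratio))
```

Because the ratio normalizes out the anomaly amplitude, the output is bounded in (0, 1) for both weak (deep) and intense (shallow) sources, which is what allows edges at different depth levels to be balanced in one map.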