Networks, such as social networks, biochemical networks, and protein-protein interaction networks, are ubiquitous in the real world. Network representation learning aims to embed the nodes of a network as low-dimensional, dense, real-valued vectors that facilitate downstream network analysis. Existing embedding methods commonly endeavor to capture the structural information of a network but lack consideration of subsequent tasks and the synergies between those tasks, which are of equal importance for learning desirable network representations. To address this issue, we propose a novel multi-task network representation learning (MTNRL) framework, which is end-to-end and more effective for the underlying tasks. The original network and an incomplete network share a unified embedding layer, followed by node classification and link prediction tasks that are performed simultaneously on the embedding vectors. By optimizing the multi-task loss function, our framework jointly learns task-oriented embedding representations for each node. Moreover, our framework is compatible with any network embedding method, and experimental results on several benchmark datasets demonstrate its effectiveness compared with state-of-the-art methods.
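For intuition, the following is a minimal, hypothetical sketch (in PyTorch) of the architecture the abstract describes: a shared embedding layer feeding both a node-classification head and a link-prediction scorer, trained with a weighted joint loss. All names, the dot-product link scorer, and the trade-off weight `alpha` are illustrative assumptions, not the authors' actual MTNRL implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskEmbedder(nn.Module):
    """Sketch: one shared embedding table serves two task heads."""

    def __init__(self, num_nodes: int, dim: int, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, dim)      # shared embedding layer
        self.classifier = nn.Linear(dim, num_classes)  # node-classification head

    def forward(self, nodes, edge_pairs):
        z = self.embed(nodes)                 # node representations
        class_logits = self.classifier(z)     # per-node class scores
        src, dst = edge_pairs                 # candidate links (index tensors)
        # Dot-product scorer for link prediction (an assumed choice here).
        link_logits = (self.embed(src) * self.embed(dst)).sum(dim=-1)
        return class_logits, link_logits

def multi_task_loss(class_logits, labels, link_logits, link_targets, alpha=0.5):
    """Joint objective: weighted sum of the two task losses.
    alpha is a hypothetical trade-off hyperparameter, not taken from the paper."""
    cls = F.cross_entropy(class_logits, labels)
    lnk = F.binary_cross_entropy_with_logits(link_logits, link_targets)
    return alpha * cls + (1 - alpha) * lnk
```

Because both heads backpropagate into the same embedding table, gradients from each task shape a single representation per node, which is the sense in which the learned embeddings are task-oriented.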
Recent years have witnessed remarkable successes of machine learning in various applications. However, machine learning models suffer from a potential risk of leaking private information contained in training data, which has attracted increasing research attention. As one of the mainstream privacy-preserving techniques, differential privacy provides a promising way to prevent the leakage of individual-level privacy in training data while preserving the quality of training data for model building. This work provides a comprehensive survey of existing works that incorporate differential privacy with machine learning, so-called differentially private machine learning, and categorizes them into two broad categories according to the differential privacy mechanism used: the Laplace/Gaussian/exponential mechanism and the output/objective perturbation mechanism. In the former, a calibrated amount of noise is added to the non-private model; in the latter, the output or the objective function is perturbed by random noise. In particular, the survey covers techniques for differentially private deep learning, addressing recent concerns about the privacy of big-data contributors. In addition, research challenges in terms of model utility, privacy level, and applications are discussed. To tackle these challenges, several potential future research directions for differentially private machine learning are pointed out.
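To make the first category concrete, the sketch below shows the classic Laplace mechanism: a query answer is released after adding noise whose scale is calibrated to the query's L1 sensitivity divided by the privacy budget epsilon. The function name, the example dataset, and the epsilon = 0.5 budget are illustrative assumptions, not drawn from any particular surveyed work.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release an epsilon-differentially-private estimate of true_value.

    Noise is drawn from Laplace(0, sensitivity / epsilon), which satisfies
    epsilon-DP for any query whose L1 sensitivity is at most `sensitivity`.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the mean of a dataset with values in [0, 1].
data = np.random.rand(1000)
# The mean of n values bounded in [0, 1] has L1 sensitivity 1/n.
sensitivity = 1.0 / len(data)
private_mean = laplace_mechanism(data.mean(), sensitivity, epsilon=0.5)
```

A smaller epsilon gives stronger privacy but noisier answers, which is the model-utility trade-off the survey highlights as an open challenge.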