Graph theory (GT) concepts have many potential applications in computer science (CS). Notable examples include clustering of web documents, cryptography, and the analysis of algorithm execution. GT concepts can also be employed in the simplification and analysis of electronic circuits. Recently, graphs have been used extensively in social networks (SNs) to model SN structures and operations, to analyze SN users, and for many related purposes. Given the widespread applications of GT in SNs, this article comprehensively surveys GT use in SNs. The goal of this survey is twofold. First, we briefly discuss the potential applications of GT in CS, along with practical examples. Second, we explain the uses of GT in SNs, with sufficient concepts and examples to demonstrate the significance of graphs in SN modeling and analysis.
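To make the SN-modeling idea concrete, the following is a minimal sketch (not from the surveyed paper) of representing a social network as an undirected graph and ranking users by degree centrality; the user names and friendship edges are illustrative assumptions.

```python
# Minimal sketch: a small social network as an undirected adjacency-list graph,
# with users ranked by degree centrality. All names/edges are illustrative.
from collections import defaultdict

def build_graph(edges):
    """Build an undirected adjacency-list graph from (user, user) pairs."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    return graph

friendships = [
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"),
]

graph = build_graph(friendships)

# Degree centrality: fraction of the other users each user is connected to.
n = len(graph)
centrality = {user: len(neigh) / (n - 1) for user, neigh in graph.items()}
print(centrality)  # carol, connected to all three others, scores 1.0
```

The same adjacency-list structure underlies most SN analyses mentioned in the survey (community detection, influence ranking, structure modeling); dedicated libraries such as NetworkX provide these operations at scale.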
Personally identifiable information (PII) affects individual privacy because combinations of PII items may yield unique identifications in published data. User PII such as age, race, gender, and zip code contains private information that may assist an adversary in determining the user to whom the information relates. Each item of user PII reveals identity to a different degree, and some types of PII are highly identity-vulnerable: they enable unique identification more easily, and their presence in published data increases privacy risks. Existing privacy models treat all types of PII equally from the standpoint of identity revelation and mainly focus on hiding user PII in a crowd of other users. Ignoring the identity vulnerability of each type of PII during anonymization does not protect user privacy in a fine-grained manner. This paper proposes a new anonymization scheme that accounts for the identity vulnerability of PII to protect user privacy effectively. Data generalization is performed adaptively based on both the identity vulnerability of PII and its diversity. This adaptive generalization yields anonymous data that guards against disclosures of user identity and private information while maximizing the utility of the data for analysis and for building classification models. Additionally, the proposed scheme has low computational overhead. Simulation results show the effectiveness of the scheme and verify these claims.
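The idea of vulnerability-driven adaptive generalization can be sketched as follows. This is a hypothetical illustration, not the paper's actual scheme: the vulnerability scores, thresholds, and bucket widths are assumed values chosen only to show how a more vulnerable attribute would receive coarser generalization.

```python
# Hypothetical sketch of vulnerability-aware generalization: attributes with
# higher identity-vulnerability scores get coarser generalization levels.
# Scores, thresholds, and widths below are illustrative assumptions.

def generalize_age(age, width):
    """Replace an exact age with an interval of the given width."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

# Assumed vulnerability scores in [0, 1]; higher means more identifying.
vulnerability = {"zip": 0.9, "age": 0.6, "gender": 0.2}

def age_bucket_width(vuln):
    # More vulnerable attributes get wider (coarser) buckets.
    if vuln >= 0.8:
        return 20
    if vuln >= 0.5:
        return 10
    return 5

record = {"age": 37, "zip": "11201", "gender": "F"}
width = age_bucket_width(vulnerability["age"])
print(generalize_age(record["age"], width))  # "30-39"
```

Generalizing each attribute only as much as its vulnerability warrants, rather than uniformly, is what lets such a scheme trade less utility for the same privacy guarantee.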
User attributes affect community privacy (a community being a group of people sharing some common properties or attributes) in data publishing, because some attributes may expose multiple users' identities and their associated sensitive information when published data are analyzed. Attributes such as gender, age, and race may allow an adversary to group users into communities based on attribute values and subsequently launch a sensitive-information inference attack. As a result, private information about a specific community of users can be explicitly disclosed even from privacy-preserved published data. Each user attribute impacts community privacy differently, and some types of attributes are highly susceptible: they enable unique identification of multiple users and inference of sensitive information more easily, and their presence in published data increases community privacy risks. Most existing privacy models ignore the impact of susceptible attributes on community privacy and focus mainly on preserving individual privacy in the released data. This paper presents a novel data anonymization algorithm that significantly improves users' community privacy without sacrificing the utility guarantees of the anonymous data. The proposed algorithm quantifies the susceptibility of each attribute in the users' dataset to preserve community privacy effectively. Data generalization is performed adaptively by considering both attribute susceptibility and entropy simultaneously. The algorithm also controls overgeneralization of the data to enhance the utility of the anonymous data for legitimate information consumers. Owing to the widespread use of social networks (SNs), we focus on publishing anonymous SN data that preserves users' community privacy while enhancing utility. Simulation results obtained from extensive experiments and comparisons with existing algorithms show the effectiveness of the proposed algorithm and verify these claims.
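One way to picture combining susceptibility with entropy is sketched below. This is an assumed illustration, not the paper's algorithm: the records, susceptibility scores, and the product-based priority are hypothetical, showing only how the two signals could jointly rank attributes for generalization.

```python
# Illustrative sketch: combining an assumed per-attribute susceptibility score
# with the Shannon entropy of its values to prioritize generalization.
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of an attribute's value distribution."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

records = [
    {"gender": "F", "age": 34}, {"gender": "M", "age": 34},
    {"gender": "F", "age": 51}, {"gender": "M", "age": 29},
]

# Assumed susceptibility scores; not taken from the paper.
susceptibility = {"gender": 0.3, "age": 0.7}

def priority(attr):
    # Higher susceptibility and higher entropy both raise the priority
    # of generalizing this attribute first.
    vals = [r[attr] for r in records]
    return susceptibility[attr] * entropy(vals)

ranked = sorted(susceptibility, key=priority, reverse=True)
print(ranked)  # attributes ordered by generalization priority
```

Here "age" ranks first because it is both more susceptible and more diverse in this toy dataset; an adaptive algorithm would generalize it more aggressively than "gender".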