Many anonymous communication networks (ACNs) with different privacy goals have been developed. However, there are no accepted formal definitions of privacy, and ACNs often define their goals and adversary models ad hoc. Yet a common foundation is needed to understand and compare the different flavors of privacy. In this paper, we introduce an analysis framework for ACNs that captures the notions and assumptions known from different analysis frameworks. To this end, we formalize privacy goals as notions and identify their building blocks. For any pair of notions we prove whether one is strictly stronger than the other and, if so, which. Hence, we are able to present a complete hierarchy. Further, we show how to add practical assumptions, e.g. regarding the protocol model or user corruption, as options to our notions. This way, we capture the notions and assumptions of, to the best of our knowledge, all existing analytical frameworks for ACNs and are able to resolve inconsistencies between them. Thus, our new framework provides a common ground and allows for sharper analysis, since new combinations of assumptions are possible and the relations between the notions are known.
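The pairwise-strength results described above can be sketched as a relation over notions whose transitive closure yields the full hierarchy. The following is a minimal illustration, not the paper's formalism: the notion names A, B, C are placeholders, not the notions the paper defines.

```python
# Illustrative sketch: pairwise "strictly stronger than" results between
# privacy notions, with the full hierarchy derived by transitive closure.
# Notion names are hypothetical placeholders.
from itertools import product

# Hypothetical pairwise proofs: A is strictly stronger than B, and B than C.
STRICTLY_STRONGER = {("A", "B"), ("B", "C")}

def hierarchy(pairs):
    """Transitive closure of the relation: if a > b and b > c, then a > c."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure
```

Because strict strength is transitive, proving all pairwise relations suffices to order the entire set of notions.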
Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences not to paralyze decision-making, but so that potential unintended harms can be considered more thoroughly in risk management strategies. The framework can also support preemptive planning: identifying vulnerable populations and insulating them from harm before it occurs. There are further opportunities to use the framework to coordinate risk management strategy across stakeholders in complex cyberphysical environments.
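The seven harm categories can be treated as a checklist applied to each proposed countermeasure. The sketch below encodes them that way; the prompt wording is a paraphrased assumption, not the authors' own phrasing.

```python
# Illustrative sketch: the framework's seven unintended-harm categories as a
# review checklist for a proposed countermeasure. Prompts are paraphrased.
HARM_CATEGORIES = {
    "displacement": "Does the countermeasure push the risky activity elsewhere?",
    "insecure norms": "Could it normalise insecure user behaviour?",
    "additional costs": "Does it impose new costs on users or operators?",
    "misuse": "Can the countermeasure itself be abused?",
    "misclassification": "Could it wrongly flag benign behaviour?",
    "amplification": "Could it make the original harm worse?",
    "disruption": "Does it disrupt other services or countermeasures?",
}

def review_countermeasure(answers):
    """Given {category: flagged?} answers, return the categories needing
    further risk-management planning."""
    return [category for category, flagged in answers.items() if flagged]
```

A review that flags, say, misclassification would then feed into planning for the affected (potentially vulnerable) user population.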
Privacy-respecting reputation systems have been constructed on top of anonymous payment systems in order to provide raters' anonymity. To the best of our knowledge, all these systems suffer from the problem of having a "final state", that is, a system state in which users no longer have an incentive to behave honestly because they have reached a maximum reputation or can no longer be rated. The reputation is thus, in fact, no longer lively. We propose a novel approach that addresses this liveliness problem by introducing negative ratings. We tie ratings to actual interactions to force users to also deposit their negative ratings at the reputation server. Additionally, we enhance raters' anonymity by limiting timing attacks through the use of transferable-eCash-based payment systems.
While performing pure e-business transactions such as purchasing software or music, customers can act anonymously, supported by, e.g., anonymous communication protocols and anonymous payment protocols. However, it is hard to establish trust relations among anonymously acting business partners. Anonymous reputation systems have been proposed to mitigate this problem. Schiffner et al. recently proved that there is a conflict between anonymity and reputation, and they established the non-existence of certain privacy-preserving reputation functions. In this paper we argue that this relationship is even more intricate. First, we present a reputation function that deanonymizes the user, yet provides strong anonymity (SA) according to their definitions. However, this reputation function has no utility, i.e., the submitted ratings have no influence on the resulting reputation values. Second, we show that for a reputation function to have utility, the system must, as a necessary condition for strong anonymity under the aforementioned definition, choose new pseudonyms, selected independently at random, for every user the function has utility for on each new rating. Since some persistence of pseudonyms is favorable, we present a more secure, but also more usable definition for anonymous reputation systems that allows persistence yet guarantees k-anonymity. We further present a definition of rating secrecy based on a threshold. Finally, we propose a practical reputation function for which we prove that it satisfies these definitions.
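One way to read the k-anonymity guarantee above is that no pseudonym should be uniquely identifiable by its displayed reputation value. The sketch below checks that condition; it is an assumption-laden illustration of the general k-anonymity idea, not the paper's actual construction.

```python
# Illustrative sketch: checking a k-anonymity condition on displayed
# reputation values -- every value must be shared by at least k pseudonyms,
# so no single pseudonym stands out by its score alone.
from collections import Counter

def is_k_anonymous(reputation_values, k):
    """True if every displayed reputation value occurs at least k times."""
    counts = Counter(reputation_values)
    return all(count >= k for count in counts.values())
```

Under such a check, persistent pseudonyms remain acceptable as long as each score bucket keeps at least k members, which is the trade-off the relaxed definition permits.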