“…These two properties together also ensure that no one can impersonate a sender to the receiver [359].…”
Section: Transparency Methods In the Literature (mentioning)
confidence: 99%
“…This cryptographically ensures that receivers cannot report to the moderator messages that are "forged" to appear as if they were from the sender; the moderator cannot be convinced any party sent a message they did not send. These schemes have two key accountability properties [151,359]:…”
Section: Transparency Methods In the Literature (mentioning)
confidence: 99%
“…Message franking [62,98,151,164,167,173,183,222,359,376] (total: 10) Reveal source, traceback, or popular messages [173,231,285,360] (total: 4) Other user reporting [26,86,128,192,207,214,237,245,248,377,384] (total: 11)…”
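The accountability properties quoted above (the moderator cannot be convinced a party sent a message it did not send, and receivers cannot report forged messages) can be illustrated with a minimal sketch of commitment-based message franking. All names here are illustrative, and the HMAC-based commitment is a simplification of the schemes in the cited literature: the sender commits to the plaintext under a fresh opening key, the platform franks the commitment without seeing the plaintext, and the moderator checks both on report.

```python
import hmac, hashlib, os

def commit(msg: bytes):
    # Sender picks a fresh opening key and commits to the message.
    k_f = os.urandom(32)
    tag = hmac.new(k_f, msg, hashlib.sha256).digest()
    return k_f, tag

def platform_frank(platform_key: bytes, tag: bytes, metadata: bytes) -> bytes:
    # The platform binds the commitment to sender/receiver metadata
    # without ever seeing the plaintext (it only sees the commitment).
    return hmac.new(platform_key, tag + metadata, hashlib.sha256).digest()

def moderator_verify(platform_key, msg, k_f, tag, metadata, frank):
    # Receiver binding: a report verifies only for the message actually sent;
    # sender binding: an honestly sent message always yields a valid report.
    ok_commit = hmac.compare_digest(
        hmac.new(k_f, msg, hashlib.sha256).digest(), tag)
    ok_frank = hmac.compare_digest(
        hmac.new(platform_key, tag + metadata, hashlib.sha256).digest(), frank)
    return ok_commit and ok_frank

# Usage sketch: a genuine report verifies, a forged one does not.
pk = os.urandom(32)
msg = b"abusive message"
k_f, tag = commit(msg)
frank = platform_frank(pk, tag, b"alice->bob")
```

A receiver who tries to report a different plaintext against the same commitment fails the `ok_commit` check, which is exactly the "no forged reports" guarantee the snippet describes.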
Popular messaging applications now enable end-to-end encryption (E2EE) by default, and E2EE data storage is becoming common. These important advances for security and privacy create new content moderation challenges for online services, because services can no longer directly access plaintext content. While ongoing public policy debates about E2EE and content moderation in the United States and European Union emphasize child sexual abuse material and misinformation in messaging and storage, we identify and synthesize a wealth of scholarship that goes far beyond those topics. We bridge literature that is diverse in both content moderation subject matter, such as malware, spam, hate speech, terrorist content, and enterprise policy compliance, as well as intended deployments, including not only privacy-preserving content moderation for messaging, email, and cloud storage, but also private introspection of encrypted web traffic by middleboxes. In this work, we systematize the study of content moderation in E2EE settings. We set out a process pipeline for content moderation, drawing on a broad interdisciplinary literature that is not specific to E2EE. We examine cryptography and policy design choices at all stages of this pipeline, and we suggest areas of future research to fill gaps in literature and better understand possible paths forward.
“…Tyagi, Grubbs, Len, Miers, and Ristenpart [23] introduced asymmetric message franking to achieve content moderation under the condition that the sender and receiver identities are hidden from the service providers. They provided a construction of an asymmetric message franking scheme by applying techniques from designated verifier signature schemes.…”
Message franking was introduced by Facebook in its end-to-end encrypted messaging service. It allows users to produce verifiable reports of malicious messages by including cryptographic proofs, called reporting tags, generated by Facebook. Grubbs et al. (CRYPTO '17) subsequently gave a formal study of message franking and introduced committing authenticated encryption with associated data (CAEAD) as a core primitive for realizing it. In this work, we aim to enhance the security of message franking by introducing forward security and updates of reporting tags. Forward security guarantees that past keys remain secure even if the current keys are exposed, and updatable reporting tags allow malicious messages to be reported after keys are updated. To this end, we first propose the notion of key-evolving message franking with updatable reporting tags, which includes additional key-update and reporting-tag-update algorithms. We then formalize five security requirements: confidentiality, ciphertext integrity, unforgeability, receiver binding, and sender binding. Finally, we give a construction of forward-secure message franking with updatable reporting tags based on CAEAD, a forward-secure pseudorandom generator, and an updatable message authentication code.
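The forward-security idea in the abstract above can be sketched with a hash-chain key ratchet standing in for the forward-secure PRG. This is a toy illustration, not the paper's construction: the `update_tag` step here simply re-authenticates a report under the new epoch key before the old key is erased, whereas the paper builds a proper updatable MAC.

```python
import hashlib, hmac, os

def ratchet(key: bytes) -> bytes:
    # One forward-secure PRG step: next key = H(label || current key).
    # After ratcheting, the old key is erased, so compromise of the
    # current key does not expose material from past epochs.
    return hashlib.sha256(b"fs-prg:" + key).digest()

def make_tag(key: bytes, report: bytes) -> bytes:
    # Reporting tag for one epoch (HMAC stands in for the MAC).
    return hmac.new(key, report, hashlib.sha256).digest()

def update_tag(old_key: bytes, new_key: bytes,
               report: bytes, old_tag: bytes) -> bytes:
    # Hypothetical update step: verify under the outgoing key, then
    # re-authenticate under the new epoch key, so the report remains
    # reportable after the key rolls over.
    assert hmac.compare_digest(make_tag(old_key, report), old_tag)
    return make_tag(new_key, report)

# Usage sketch: a tag made in epoch 0 survives the key update to epoch 1.
k0 = os.urandom(32)
report = b"reporting tag material for message m"
t0 = make_tag(k0, report)
k1 = ratchet(k0)   # k0 would now be erased in a real deployment
t1 = update_tag(k0, k1, report, t0)
```

The one-wayness of the hash gives the forward-security property: knowing `k1` does not reveal `k0`, so tags from the earlier epoch stay protected after exposure of the current key.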
“…Message Franking: The most common approach today for reporting malicious messages in encrypted messaging systems is message franking [11], [21], [38]. Message franking allows a recipient to prove the identity of the sender of a malicious message.…”
Recent years have seen a strong uptick in both the prevalence and real-world consequences of false information spread through online platforms. At the same time, encrypted messaging systems such as WhatsApp, Signal, and Telegram, are rapidly gaining popularity as users seek increased privacy in their digital lives. The challenge we address is how to combat the viral spread of misinformation without compromising privacy. Our FACTS system tracks user complaints on messages obliviously, only revealing the message's contents and originator once sufficiently many complaints have been lodged. Our system is private, meaning it does not reveal anything about the senders or contents of messages which have received few or no complaints; secure, meaning there is no way for a malicious user to evade the system or gain an outsized impact over the complaint system; and scalable, as we demonstrate excellent practical efficiency for up to millions of complaints per day. Our main technical contribution is a new collaborative counting Bloom filter, a simple construction with difficult probabilistic analysis, which may have independent interest as a privacy-preserving randomized count sketch data structure. Compared to prior work on message flagging and tracing in end-to-end encrypted messaging, our novel contribution is the addition of a high threshold of multiple complaints that are needed before a message is audited or flagged. We present and carefully analyze the probabilistic performance of our data structure, provide a precise security definition and proof, and then measure the accuracy and scalability of our scheme via experimentation.
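The threshold mechanism described above can be illustrated with a plain (non-oblivious) counting Bloom filter. This sketch omits the privacy machinery that makes FACTS's version collaborative and oblivious; it only shows the core idea that a message is audited or flagged after sufficiently many complaints, with false positives possible due to counter collisions. All parameter choices are illustrative.

```python
import hashlib

class CountingBloom:
    # Toy counting Bloom filter: each complaint about a message increments
    # k counters chosen by hashing; the message is flagged only once all
    # of its counters reach the threshold.
    def __init__(self, m: int = 1024, k: int = 4, threshold: int = 3):
        self.counts = [0] * m
        self.m, self.k, self.threshold = m, k, threshold

    def _indexes(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def complain(self, item: bytes) -> None:
        for idx in self._indexes(item):
            self.counts[idx] += 1

    def over_threshold(self, item: bytes) -> bool:
        return all(self.counts[idx] >= self.threshold
                   for idx in self._indexes(item))

# Usage sketch: two complaints keep the message private; the third
# pushes every counter to the threshold and triggers an audit.
cbf = CountingBloom()
cbf.complain(b"viral rumor")
cbf.complain(b"viral rumor")
below = cbf.over_threshold(b"viral rumor")   # still below threshold
cbf.complain(b"viral rumor")
flagged = cbf.over_threshold(b"viral rumor")
```

Counters (rather than bits) are what make the threshold possible, and the threshold is what distinguishes this design from prior flag-and-trace schemes: nothing about a message is revealed while its complaint count stays low.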
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.