2021
DOI: 10.48550/arxiv.2109.10104
Preprint

InvBERT: Text Reconstruction from Contextualized Embeddings used for Derived Text Formats of Literary Works

Cited by 1 publication (3 citation statements); References 0 publications.
“…KNN (Qu et al., 2021) selects the closest word in the embedding space as the real word. InvBert (Höhmann et al., 2021) trains an embedding inversion model that takes word representations as input and outputs each representation's original word one-to-one. MLC (Song and Raghunathan, 2020) is similar to InvBert; the difference is that MLC performs a multi-label classification task, predicting all the words that appeared without emphasizing their order.…”
Section: Baselines (confidence: 99%)
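The KNN baseline quoted above can be sketched as a nearest-neighbor lookup in embedding space. The vocabulary, embedding matrix, and perturbed inputs below are illustrative toy values of my own, not taken from any of the cited papers:

```python
import numpy as np

# Hypothetical toy vocabulary and embedding matrix (purely illustrative).
vocab = ["the", "cat", "sat"]
emb = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.7, 0.7]])

def knn_invert(representations, emb, vocab):
    """Recover each token by nearest neighbor in embedding space (KNN attack)."""
    # Euclidean distance from every representation to every vocabulary embedding.
    dists = np.linalg.norm(representations[:, None, :] - emb[None, :, :], axis=-1)
    # Pick the closest vocabulary word for each representation.
    return [vocab[i] for i in dists.argmin(axis=1)]

# Slightly perturbed embeddings of "cat the" still invert to the right words.
recovered = knn_invert(np.array([[0.1, 0.9], [0.9, 0.1]]), emb, vocab)
print(recovered)  # → ['cat', 'the']
```

InvBert replaces this distance lookup with a trained classifier over the vocabulary, and MLC drops the per-position correspondence, predicting only the bag of words present.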
“…For example, they can use these data to train a better model or extract users' private information, such as personal details and confidential business information, even where this is prohibited by law. Recent literature (Song and Shmatikov, 2019; Pan et al., 2020) shows that even uploading word representations can still leak privacy, as an embedding inversion attack (Höhmann et al., 2021) can restore word representations to their original words.…”
Section: Plaintext Privacy During Inference (confidence: 99%)