Long-Range Transformer Architectures for Document Understanding (2023)
DOI: 10.1007/978-3-031-41501-2_4

Cited by 4 publications (2 citation statements)
References: 19 publications
“…In addition, refs. [20,21] introduce the self-attention approach, which reduces the number of sequential computations by creating short paths between distant words across entire paragraphs. They conclude that these short paths are particularly useful for learning strong semantic feature extractors.…”
Section: Related Work
confidence: 99%
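
For context on the excerpt above, the following is a minimal, generic sketch of scaled dot-product self-attention (illustrative only, not code from the cited paper; the toy sequence length, dimensions, and random weights are assumptions). It shows why the short paths exist: every position attends to every other position in a single step, so distant words interact directly rather than through a chain of intermediate states.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head) projections (toy assumptions).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise scores between all positions, (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: each token attends to every token
    return weights @ V                             # one step mixes information across the whole sequence

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))                       # 6 tokens with 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)         # (6, 8)

In a purely recurrent model, by contrast, information between the first and last word must pass through every intermediate state, which is the sequential bottleneck the excerpt refers to.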
“…Adversaries could exploit overfitting in LLMs to infer details about individual training examples, further complicating the privacy landscape [38,49]. The necessity for ongoing research and development in privacy-preserving methods was highlighted as a critical component in safeguarding the future deployment of LLMs [50,51].…”
Section: Privacy Breaches in LLM Models
confidence: 99%