Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 2021
DOI: 10.18653/v1/2021.acl-short.26

A Span-based Dynamic Local Attention Model for Sequential Sentence Classification

Abstract: Sequential sentence classification aims to classify each sentence in a document based on the context in which the sentences appear. Most existing work addresses this problem with a hierarchical sequence labeling network, but ignores the latent segment structure of the document, in which contiguous sentences often share coherent semantics. In this paper, we propose a span-based dynamic local attention model that explicitly captures this structural information via the proposed supervised d…
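The abstract is cut off above, but the mechanism it names, attention restricted to contiguous sentence spans so that sentences in the same latent segment inform one another, can be sketched. Below is a minimal illustration in PyTorch; the span assignments, the mask construction, and every identifier are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def span_local_attention(sent_reprs, span_ids):
    """Attention over sentences, masked so that each sentence attends
    only to sentences in the same (predicted) span.

    sent_reprs: (num_sents, dim) sentence representations
    span_ids:   (num_sents,) integer span/segment id per sentence
    """
    # Scaled dot-product scores between all pairs of sentences.
    scores = sent_reprs @ sent_reprs.T / sent_reprs.size(-1) ** 0.5
    # Mask out pairs of sentences that fall in different spans.
    same_span = span_ids.unsqueeze(0) == span_ids.unsqueeze(1)
    scores = scores.masked_fill(~same_span, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ sent_reprs  # span-contextualised representations

# Toy usage: 5 sentences; the first two form span 0, the rest span 1.
reprs = torch.randn(5, 16)
spans = torch.tensor([0, 0, 1, 1, 1])
out = span_local_attention(reprs, spans)  # shape (5, 16)
```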

Cited by 7 publications (11 citation statements). References 19 publications.
“…Approaches for Abstracts: Deep learning has been the preferred approach for sentence classification in abstracts in recent years [15,19,28,34,65,75]. These approaches follow a common hierarchical sequence labelling architecture: (1) a word embedding layer encodes tokens of a sentence to word embeddings, (2) a sentence encoder transforms the word embeddings of a sentence to a sentence representation, (3) a context enrichment layer enriches all sentence representations of the abstract with context from surrounding sentences, and (4) an output layer predicts the label sequence.…”
Section: Sequential Sentence Classification in Scientific Text
confidence: 99%
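The four-layer hierarchy described in this citation statement maps naturally onto code. Here is a minimal sketch, assuming a Bi-LSTM with max-pooling as the sentence encoder and a Bi-LSTM for context enrichment (two of the options the citing papers mention); all names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class HierarchicalSentenceLabeller(nn.Module):
    """Sketch of the common four-layer hierarchy: word embeddings ->
    sentence encoder -> context enrichment -> per-sentence labels."""

    def __init__(self, vocab_size, emb_dim, hid_dim, num_labels):
        super().__init__()
        # (1) word embedding layer
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # (2) sentence encoder: Bi-LSTM over tokens, pooled to one vector
        self.sent_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                                bidirectional=True)
        # (3) context enrichment: Bi-LSTM over the sentence sequence
        self.ctx_enc = nn.LSTM(2 * hid_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        # (4) output layer: one label per sentence
        self.out = nn.Linear(2 * hid_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (num_sents, max_tokens) for one abstract
        emb = self.embed(token_ids)                     # (S, T, E)
        tok_states, _ = self.sent_enc(emb)              # (S, T, 2H)
        sent_reprs = tok_states.max(dim=1).values       # max-pool: (S, 2H)
        ctx, _ = self.ctx_enc(sent_reprs.unsqueeze(0))  # (1, S, 2H)
        return self.out(ctx.squeeze(0))                 # (S, num_labels)
```

In published systems the plain linear output layer is often replaced by a CRF so that the model scores the whole label sequence rather than each sentence independently.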
“…Global Vectors (GloVe) [55], Word2Vec [47], or SciBERT [6], which is BERT [20] pre-trained on scientific text. For sentence encoding, a bidirectional long short-term memory (Bi-LSTM) [31] or a convolutional neural network (CNN) with various pooling strategies is utilised, while Yamada et al [75] and Shang et al [65] use the classification token ([CLS]) of BERT or SciBERT. To enrich sentences with further context, a recurrent neural network such as a Bi-LSTM or bidirectional gated recurrent unit (Bi-GRU) [13] is used.…”
Section: Sequential Sentence Classification in Scientific Text
confidence: 99%
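To make the [CLS]-pooling option concrete: with the public SciBERT checkpoint on the Hugging Face hub, the hidden state of the first token of each encoded sentence serves as its representation. A brief sketch; the pooling choice and the variable names are assumptions, not any cited paper's exact code:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Public SciBERT release; [CLS] is always the first token after encoding.
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentences = ["We propose a span-based model.",
             "Results improve over the baseline."]
batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)
cls_reprs = hidden[:, 0, :]  # per-sentence [CLS] representations
```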