Computer Science & Information Technology (CS & IT) 2021
DOI: 10.5121/csit.2021.110811
Ensemble Model for Chunking

Abstract: Transformer models have taken over most natural language inference tasks and have recently beaten several benchmarks. Chunking means splitting sentences into tokens and then grouping the tokens in a meaningful way. Chunking has gradually moved from POS-tag-based statistical models to neural networks using language models such as LSTMs, bidirectional LSTMs, attention models, etc. Deep neural network models are deployed indirectly for classifying tokens as different tags defined under…
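The chunking task the abstract describes (grouping tagged tokens into phrases) is commonly formulated with BIO labels. A minimal sketch of that decoding step, assuming standard B-/I-/O tags; the function name and example are illustrative, not taken from the paper's ensemble model:

```python
# Minimal sketch of chunking from BIO tags (hypothetical helper, not the
# paper's model): B-X starts a chunk of type X, I-X continues it, O is outside.

def bio_chunks(tokens, tags):
    """Group tokens into (label, phrase) chunks based on BIO tags."""
    chunks = []
    current = None  # (label, [tokens]) for the chunk being built
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):                       # begin a new chunk
            if current:
                chunks.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)                   # continue current chunk
        else:                                          # "O" or tag mismatch
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(words)) for label, words in chunks]

tokens = ["He", "reckons", "the", "current", "account", "deficit"]
tags   = ["B-NP", "B-VP", "B-NP", "I-NP", "I-NP", "I-NP"]
print(bio_chunks(tokens, tags))
# → [('NP', 'He'), ('VP', 'reckons'), ('NP', 'the current account deficit')]
```

A tagger (statistical or neural, as the abstract surveys) predicts the per-token tags; this grouping step is the same regardless of which model produced them.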

Cited by 3 publications (1 citation statement) · References 13 publications
“…Tokenization divides sentences into vocabulary meaning so-called token [19]. In HTML tokenization, XML scripts, special characters, and punctuation marks in a document do not affect in the performance of the algorithm used [20].…”
Section: Tokenization
confidence: 99%
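The citing statement above describes tokenization as dividing sentences into vocabulary units. A minimal sketch of such a tokenizer, assuming a simple regex split on words and punctuation (a hypothetical illustration, not the algorithm used in the cited work):

```python
import re

def tokenize(sentence):
    """Naive regex tokenizer: words and punctuation become separate tokens."""
    return re.findall(r"\w+|[^\w\s]", sentence)

print(tokenize("Chunking splits sentences, then groups tokens."))
# → ['Chunking', 'splits', 'sentences', ',', 'then', 'groups', 'tokens', '.']
```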