2020 IEEE International Conference on Big Data and Smart Computing (BigComp)
DOI: 10.1109/bigcomp48618.2020.000-2
Multi-label Patent Classification using Attention-Aware Deep Learning Model

Cited by 16 publications (8 citation statements). References 7 publications.
“…Methods for the second task captured temporal dependencies from the patent citation sequences by a point process [5] or an attention mechanism [6]. Another branch aimed to make patent analysis more automatic, covering patent classification [10], [11], patent retrieval [12], and patent text generation [13]. Specifically, these tasks mostly focused on capturing semantic relationships from patent descriptions with text representation learning methods from the Natural Language Processing (NLP) field.…”
Section: Related Work
confidence: 99%
“…Specifically, these tasks mostly focused on capturing semantic relationships from patent descriptions with text representation learning methods from the Natural Language Processing (NLP) field. For example, [10] utilized the pre-trained BERT [14] model to capture semantic dependencies in patent documents, while [11] combined a GCN with an attention mechanism to embed patent representations for classification.…”
Section: Related Work
confidence: 99%
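The statement above describes combining an attention mechanism with per-label classification. A minimal sketch of that general idea (attention-weighted pooling of token embeddings followed by independent sigmoid scores per label) is shown below in plain Python; all vectors, weights, and function names are illustrative toys, not the parameters or API of the cited models.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(token_vecs, query):
    """Score each token embedding against a (here hand-picked) query vector,
    softmax the scores into weights, and return the weighted sum: a single
    document vector that emphasises the highest-scoring tokens."""
    scores = [sum(q * t for q, t in zip(query, vec)) for vec in token_vecs]
    weights = softmax(scores)
    dim = len(token_vecs[0])
    return [sum(w * vec[d] for w, vec in zip(weights, token_vecs))
            for d in range(dim)]

def multi_label_scores(doc_vec, label_weights):
    """Independent sigmoid score per label (multi-label, not softmax)."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(ws, doc_vec))))
            for ws in label_weights]

# Toy data: two 2-d "token embeddings" and two candidate labels.
tokens = [[1.0, 0.0], [0.0, 1.0]]
query = [1.0, 0.0]                      # attends more to the first token
doc = attention_pool(tokens, query)
scores = multi_label_scores(doc, [[2.0, 0.0], [0.0, 2.0]])
```

In a real model the query and label weights are learned and the token vectors come from an encoder; the point here is only the structure: attention pooling produces one document vector, and each label gets its own independent probability.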
“…The dynamic word vectors pre-trained by ALBERT replace traditional static word vectors, and a BiGRU neural network is used for training, which preserves the semantic association between long-distance words in the patent text to the greatest extent. Roudsari et al. [23] studied the effect of applying the DistilBERT pre-trained model and fine-tuned it to complete the important task of multi-label patent classification.…”
confidence: 99%
“…Following this trend, Lee et al. [15] leveraged and fine-tuned the BERT-Base model and applied it to patent classification, achieving better accuracy than other recent DL approaches. The method was evaluated on a multi-label classification task and achieved a recall@1 of 54.33. Similarly, Roudsari et al. [16] presented a short work applying and fine-tuning the DistilBERT model for patent classification.…”
Section: Prior Work
confidence: 99%
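The recall@1 figure quoted above is a common multi-label ranking metric: for each document, take the top-ranked predicted label and measure what fraction of that document's true labels it recovers, then average over documents. Definitions vary slightly across papers; the sketch below uses one common form (|top-k ∩ true| / |true|, averaged), with toy scores and labels that are purely illustrative.

```python
def recall_at_k(logits, true_labels, k=1):
    """Mean over documents of |top-k predicted labels ∩ true labels| / |true labels|.

    logits: per-document list of scores, one score per candidate label.
    true_labels: per-document set of gold label indices.
    """
    total = 0.0
    for scores, labels in zip(logits, true_labels):
        topk = sorted(range(len(scores)), key=lambda i: scores[i],
                      reverse=True)[:k]
        total += sum(1 for i in topk if i in labels) / len(labels)
    return total / len(logits)

# Toy example: 3 documents, 4 candidate labels each.
logits = [
    [2.1, -0.5, 0.3, -1.2],   # top-ranked label: 0
    [-0.3, 1.8, 0.9, 0.1],    # top-ranked label: 1
    [0.2, 0.4, -0.9, 1.5],    # top-ranked label: 3
]
true_labels = [{0, 2}, {2}, {3}]
# Per-document recall@1: 1/2, 0, 1 → mean 0.5
print(recall_at_k(logits, true_labels, k=1))
```

Note that with this definition recall@1 can never reach 1.0 on documents carrying more than one true label, which is why patent-classification papers often report recall at several cutoffs (recall@1, @3, @5).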