2021
DOI: 10.1109/access.2021.3133651

A Deep Learning Model Based on BERT and Sentence Transformer for Semantic Keyphrase Extraction on Big Social Data

Abstract: With the evolution of the Internet, social media platforms like Twitter have allowed public users to share information such as current affairs, events, opinions, news, and experiences. Extracting and analyzing keyphrases in Twitter content is an essential and challenging task. Keyphrases can precisely capture the main contribution of Twitter content, and keyphrase extraction is a vital task in many Natural Language Processing (NLP) applications. Extracting keyphrases is not only a time-consuming process but also requ…

Cited by 34 publications (15 citation statements) · References 25 publications
“…The PIA attention mechanism is based on the simple idea that any natural language sentence can be transformed into a big integer in a power-of-two number system using a simple polynomial transformation. By "power-of-two" here we mean that the radix x of the target number system is a power of two, expressed as x = 2^n for some integer n. For example, the hexadecimal system is a power-of-two number system in which the radix x is 2^4. The resulting big integer is then converted to a binary vector that encodes the coefficients of the transformation polynomial.…”
Section: The PIA Attention Mechanism: A Power-of-Two Binary Polynomial ... (mentioning)
confidence: 99%
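
The quoted description lends itself to a short illustration. Below is a minimal Python sketch of such a power-of-two polynomial encoding; the choice of radix 2^8 and the toy token ids are illustrative assumptions, not the configuration used in the cited paper.

```python
# A minimal sketch of the power-of-two polynomial encoding quoted above.
# The radix 2**8 and the toy token ids are illustrative assumptions, not
# the cited paper's actual configuration.

def sentence_to_big_integer(token_ids, n=8):
    """Evaluate the polynomial sum(c_i * x**i) at x = 2**n, where the
    coefficients c_i are the token ids of the sentence."""
    x = 2 ** n
    value = 0
    for i, c in enumerate(token_ids):
        assert 0 <= c < x, "each coefficient must fit in n bits"
        value += c * x ** i
    return value

def big_integer_to_bit_vector(value, n=8, num_coeffs=1):
    """Convert the big integer back into the binary vector encoding the
    n-bit polynomial coefficients, least-significant block first."""
    bits = []
    for _ in range(num_coeffs):
        block = value & ((1 << n) - 1)   # low n bits = one coefficient
        bits.extend((block >> k) & 1 for k in range(n))
        value >>= n
    return bits

# Toy example: three byte-sized token ids round-trip through the encoding.
ids = [72, 105, 33]
big = sentence_to_big_integer(ids)       # 33*2**16 + 105*2**8 + 72
vec = big_integer_to_bit_vector(big, num_coeffs=len(ids))
```

Because the radix is a power of two, each coefficient occupies exactly n bits of the integer, so the bit vector is simply the integer's binary representation read off in n-bit blocks.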
“…Transformers are currently the dominant approach in natural language processing (NLP) [1, 2]. Since their introduction in 2017 [3], they have been successfully applied in various areas of NLP, including semantic keyphrase extraction [4], hyperspectral image classification [5], multidimensional essay scoring [6], relation extraction [7], speech recognition [8], sentiment classification [9], geospatial market segmentation [10], fake news detection [11], question answering [12], text summarization [13], and text generation [14]. Good surveys of transformers and related attention technologies can be found in [45, 46].…”
Section: Related Work (mentioning)
confidence: 99%
“…The adjacency weighting matrix is calculated based on semantic knowledge and page knowledge. Since the semantic knowledge of advertising is mainly composed of sentences, here we use the output of the sentence transformer [7] as the semantic knowledge vectors s ∈ R^k. As mentioned in Section 2.1.3, we count the total number of ads placed on specific page channels and aggregate the average value of user-interaction knowledge of historical ads placed on a specific page channel as the page knowledge vectors s ∈ R^d.…”
Section: Graph Creation (mentioning)
confidence: 99%
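
For readers unfamiliar with the sentence transformer the quote refers to, here is a minimal sketch of producing such semantic knowledge vectors with the open-source sentence-transformers library; the model name and the example ad texts are assumptions, not the cited system's actual setup.

```python
# A minimal sketch of producing semantic knowledge vectors with the
# sentence-transformers library. The model name and ad texts are
# illustrative assumptions.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf model

ad_sentences = [
    "Limited-time offer on running shoes",
    "Stream your favorite shows in 4K",
]

# encode() returns one fixed-length embedding per sentence; each row plays
# the role of a semantic knowledge vector s in R^k (k = 384 for this model).
semantic_vectors = model.encode(ad_sentences)
print(semantic_vectors.shape)  # (2, 384)
```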
“…Python was used to program the AI because it lets users express many complex, high-level tasks concisely, and it offers a good platform for developing more specialized objects directly suited to scientific work (Perez et al., 2011), such as the sentence transformer framework used to compute semantic similarity and develop the language models (Devika et al., 2021) that answer the research question. Language models (such as BERT and MPNET) represent individual words with semantically meaningful fixed-length vectors, which makes natural language processing (NLP) possible (Greiner-Petter et al., 2020).…”
Section: Artificial Intelligence Training and Preparation (mentioning)
confidence: 99%
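
As a concrete illustration of the semantic-similarity computation the quote attributes to the sentence transformer framework, here is a hedged sketch using the sentence-transformers library; the MPNET-based model choice and the example texts are assumptions for illustration.

```python
# A minimal sketch of semantic similarity with the sentence-transformers
# framework. The model and texts are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # an MPNET-based model

question = "How do language models represent word meaning?"
answer = "BERT and MPNET map each word to a fixed-length semantic vector."

# Embed both texts and score them with cosine similarity in [-1, 1].
embeddings = model.encode([question, answer], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"semantic similarity: {score:.3f}")
```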