2017
DOI: 10.1016/j.engappai.2017.05.010

Sprinkled semantic diffusion kernel for word sense disambiguation

Cited by 12 publications (17 citation statements)
References 31 publications
“…Supervised semantic smoothing kernels exist that utilise class information in building a semantic matrix [1,2,36]. A sprinkled diffusion kernel that uses both co-occurrence information and class information for word sense disambiguation is presented in [36]. In this approach, the smoothing helps in increasing the semantic relationship between terms in the same class.…”
Section: Related Work
confidence: 99%
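The sprinkled diffusion kernel described in the excerpt combines term co-occurrence with class information. A minimal numpy sketch of the general idea follows; the document-term counts, class indicators, and the decay parameter `lam` are illustrative assumptions, and the exact normalization in [36] may differ:

```python
import numpy as np

# Toy document-term matrix X: rows = documents, columns = terms (assumed counts).
X = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 2.0],
])
# Class label of each document; sprinkling injects this class information.
labels = np.array([0, 0, 1])

# Sprinkling: append one indicator column per class, so that terms occurring
# in documents of the same class become more strongly related via co-occurrence.
sprinkled = np.hstack([X, np.eye(2)[labels]])

def diffusion_kernel(D, lam):
    """Diffusion kernel K = D exp(lam * G) D^T, where G = D^T D is the
    term co-occurrence matrix and exp is the matrix exponential."""
    G = D.T @ D
    w, V = np.linalg.eigh(G)              # G is symmetric, so eigh applies
    expG = (V * np.exp(lam * w)) @ V.T    # matrix exponential via eigendecomposition
    return D @ expG @ D.T

K = diffusion_kernel(sprinkled, lam=0.1)  # 3x3 document-document kernel
```

The matrix exponential diffuses similarity along chains of co-occurring terms, and the sprinkled class columns add extra co-occurrence paths between same-class documents.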
“…Machine learning approaches, also called corpus-based approaches, do not make use of any knowledge resources for disambiguation (Raganato et al, 2017). The most accurate WSD systems to date exploit supervised methods, which automatically learn cues useful for disambiguation from manually sense-annotated data (Wang et al, 2017). All of the above analyses are very useful for companies from which a user can purchase a physical product, as well as for online service providers.…”
Section: Literature Review
confidence: 99%
“…For instance, in [29], latent semantic indexing (LSI) is performed both on the standard term-document matrix and on a term-document matrix augmented with sprinkled terms. The sprinkling process is shown in Figure 1, using the toy corpus from [28], which has 2 class labels and 3 documents (Doc-1, Doc-2, and Doc-3) over 4 terms (t1, t2, t3, and t4). In (a) we get the document-term matrix with 3 documents and 4 terms.…”
Section: Sprinkling
confidence: 99%
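The sprinkling step quoted above can be illustrated with a short numpy sketch. The 3x4 document-term counts and the class assignments below are hypothetical placeholders, not the actual values from [28]:

```python
import numpy as np

# Hypothetical toy corpus: 3 documents over 4 terms (t1..t4).
doc_term = np.array([
    [1, 0, 1, 0],   # Doc-1
    [0, 1, 1, 0],   # Doc-2
    [1, 0, 0, 1],   # Doc-3
])
labels = [0, 1, 0]  # assumed class of each document (2 classes)

# Sprinkling: append one artificial "class term" column per class,
# set to 1 for documents belonging to that class.
n_classes = 2
sprinkle = np.zeros((doc_term.shape[0], n_classes), dtype=int)
sprinkle[np.arange(doc_term.shape[0]), labels] = 1
augmented = np.hstack([doc_term, sprinkle])  # 3x6 sprinkled matrix
```

LSI (e.g. a truncated SVD) would then be run on `augmented` instead of `doc_term`, so the latent dimensions are pulled toward the class structure.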