2021
DOI: 10.1016/j.neunet.2020.11.017

Semi-supervised disentangled framework for transferable named entity recognition


Cited by 14 publications (9 citation statements)
References 9 publications
“…GMI [28] brings mutual information into graph representation learning to alleviate the lack of available supervision and to avoid potential risks from unreliable labels. In addition, SSD [12] is a disentanglement framework in which mutual information serves as the supervision signal for domain adaptation tasks. For DGRe, we use mutual information constraints similar to [12], but apply them to disentangling the document representation for the document ranking task.…”
Section: Other Related Techniques
confidence: 99%
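The SSD-style constraint quoted above uses a mutual-information estimate as the supervision signal that pushes different parts of a representation apart. Below is a minimal sketch of that idea, assuming a MINE-style critic and two placeholder latent factors; MICritic, mi_lower_bound, z_shared, and z_private are illustrative names, not the authors' code.

```python
import torch
import torch.nn as nn

class MICritic(nn.Module):
    """Scores (a, b) pairs; used for a Donsker-Varadhan (MINE-style) MI lower bound."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)

def mi_lower_bound(critic: nn.Module, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """E[T(a, b)] - log E[exp(T(a, b_shuffled))]: a lower bound on I(a; b)."""
    joint = critic(a, b).mean()
    b_marginal = b[torch.randperm(b.size(0))]              # break the pairing
    marginal = torch.logsumexp(critic(a, b_marginal), dim=0) - torch.log(
        torch.tensor(float(b.size(0)))
    )
    return joint - marginal

# Placeholder encoder outputs: a "shared" and a "private" factor per example.
z_shared = torch.randn(32, 64)
z_private = torch.randn(32, 64)
critic = MICritic(dim=64)

# The critic is trained to maximize this bound (tightening the MI estimate),
# while the encoders are trained to minimize it, pushing the two factors
# toward carrying disjoint information.
disentangle_loss = mi_lower_bound(critic, z_shared, z_private)
```

In practice this gives an adversarial objective: the MI estimate acts as the supervision signal, and driving it down encourages the disentanglement described in the quoted statement.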
“…The most obviously explainable networks are those in which individual dimensions of the latent space more or less directly reflect or represent identifiable features of the inputs; in the case of images of faces, for example, this would occur when the value of a feature in one dimension varies smoothly with, and thus can be seen to represent, an input feature such as hair color, the presence or type of spectacles, the presence or type of a moustache, and so on [23][24][25][26]. This is known as a disentangled representation (e.g., [27][28][29][30][31][32][33][34][35][36][37][38]). To this end, it is worth noting that the ability to generate more or less realistic facial image structures from orthogonal features extracted from a database or collection of relevant objects that can be parametrized has been known for some decades [39][40][41][42].…”
Section: Introduction
confidence: 99%
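As a concrete illustration of the per-dimension behaviour described above, the toy sketch below sweeps a single latent coordinate through an untrained stand-in decoder; with a genuinely disentangled, trained model, such a sweep would change exactly one recognizable attribute (e.g., hair color) while the rest of the output stays fixed. The decoder and the chosen dimension are purely hypothetical.

```python
import torch

# Stand-in, untrained decoder; in the cited settings this would be a trained
# generator mapping a 16-dimensional latent code to a flattened image.
decoder = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 784)
)

z = torch.zeros(1, 16)                       # base latent code
for value in torch.linspace(-3.0, 3.0, steps=7):
    z_swept = z.clone()
    z_swept[0, 5] = value                    # vary only dimension 5 (e.g., "hair color")
    image = decoder(z_swept)                 # with disentanglement, only that attribute changes
```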
“…Based on the above idea, we propose the Temporally Evolving Aggregation (TEA) framework for sequential recommendation, which aggregates the user behavior sequence as well as the dynamic user-item heterogeneous graph. Inspired by sequence labeling in natural language processing [18], [19] (where a CRF models the joint probability distribution), we adopt a CRF to model the item decision sequence and estimate P(v_{t+1} | u_i, H_{1:t+1}; v_{1:t}). To alleviate the issue of the large item space, we use the pseudo-likelihood method to approximate this conditional probability.…”
Section: Introduction
confidence: 99%
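The quoted passage combines a CRF over the item decision sequence with a pseudo-likelihood approximation that avoids global normalization over a large item space. The sketch below shows the generic linear-chain pseudo-log-likelihood computation under assumed unary and transition score tables; it is not the TEA implementation, and unary, trans, and crf_pseudo_log_likelihood are illustrative names.

```python
import numpy as np

def crf_pseudo_log_likelihood(unary: np.ndarray, trans: np.ndarray, seq: np.ndarray) -> float:
    """
    unary: (T, V) per-step scores for each of V candidate items
    trans: (V, V) transition scores, trans[prev, next]
    seq:   (T,)  observed item indices
    Returns sum_t log P(seq[t] | seq[t-1], seq[t+1]), each term normalized
    over the V items at that position only (no global partition function).
    """
    T, V = unary.shape
    total = 0.0
    for t in range(T):
        scores = unary[t].copy()              # score of placing each item at step t
        if t > 0:
            scores += trans[seq[t - 1], :]    # compatibility with the previous item
        if t + 1 < T:
            scores += trans[:, seq[t + 1]]    # compatibility with the next item
        log_norm = np.logaddexp.reduce(scores)
        total += scores[seq[t]] - log_norm
    return float(total)

# Toy usage: a 5-step sequence over a 10-item vocabulary.
rng = np.random.default_rng(0)
unary = rng.normal(size=(5, 10))
trans = rng.normal(size=(10, 10))
seq = np.array([3, 1, 4, 1, 5])
print(crf_pseudo_log_likelihood(unary, trans, seq))
```

Each position is conditioned only on its observed neighbours, so the per-step normalization runs over V items rather than over all possible sequences, which is the cost saving the quoted statement relies on.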