2017
DOI: 10.1007/978-3-319-59758-4_17

Learning Concept-Driven Document Embeddings for Medical Information Search

Abstract: Many medical tasks such as self-diagnosis, health-care assessment, and clinical trial patient recruitment involve the usage of information access tools. A key underlying step to achieve such tasks is the document-to-document matching which mostly fails to bridge the gap identified between raw level representations of information in documents and high-level human interpretation. In this paper, we study how to optimize the document representation by leveraging neural-based approaches to capture latent representa…

Cited by 11 publications (39 citation statements) | References 23 publications
“…The results showed that constrained word representations are more effective than corpus-driven word representations when used together with bag-of-words models for re-ranking. Nguyen et al [47] present two models: the conceptual doc2vec (cdoc2vec) and the retrofitted doc2vec (rdoc2vec). Similar to the model proposed by De Vine et al [15], cdoc2vec learns document representations built upon concepts that have been previously extracted from text.…”
Section: Knowledge-enhanced Representation Models
Confidence: 99%
“…The authors employ the model in two retrieval strategies: document re-ranking and query expansion. Tamine et al [66] extend [47,48] to investigate the combined use of corpus-based information and external knowledge resources in different NLP and IR tasks. The authors compare the impact of the different learning approaches on the quality of the learned representations.…”
Section: Knowledge-enhanced Representation Models
Confidence: 99%