2018
DOI: 10.48550/arxiv.1804.01486
Preprint

Clinical Concept Embeddings Learned from Massive Sources of Multimodal Medical Data

Cited by 10 publications (23 citation statements)
References: 0 publications
“…General patterns are captured during the pre-training process and can be "transferred" into new prediction tasks. There also exist pre-trained embeddings of biomedical entities (Choi et al., 2016; Beam et al., 2018), which allow us to adopt similar "transfer learning" ideas to learn graph embeddings. We can initialize the embedding vector for each node on a graph with its pre-trained embedding (e.g., by looking up the corresponding entity in Choi et al., 2016 or Beam et al., 2018) rather than with random initialization, and then continue training the various graph embedding methods as before (which is often referred to as "fine-tuning").…”
Section: Methods (mentioning)
confidence: 99%
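To make the warm-start idea in the statement above concrete, here is a minimal Python sketch that seeds a trainable node-embedding table with pre-trained concept vectors (cui2vec-style) and leaves it open for fine-tuning. The dimensions, the pretrained lookup table, and the use of PyTorch are illustrative assumptions, not details taken from the cited works.

# Minimal sketch: warm-start graph node embeddings from pre-trained concept
# vectors instead of random initialization, then fine-tune them with any
# graph embedding objective. All sizes and the lookup table are assumptions.
import numpy as np
import torch
import torch.nn as nn

num_nodes, dim = 1000, 300          # assumed graph size / embedding width
pretrained = {}                      # node_id -> 300-d numpy vector, loaded from a cui2vec-style file (assumption)

weights = np.random.normal(scale=0.01, size=(num_nodes, dim)).astype(np.float32)
for node_id, vec in pretrained.items():
    weights[node_id] = vec           # overwrite the random init wherever a pre-trained vector exists

emb = nn.Embedding.from_pretrained(torch.from_numpy(weights), freeze=False)   # freeze=False keeps the table trainable, i.e. "fine-tuning"
optimizer = torch.optim.Adam(emb.parameters(), lr=1e-3)                       # plug into the graph embedding loss of choice

Nodes without a pre-trained counterpart simply keep their small random initialization, which matches the fallback behavior described in the quoted statement.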
“…There also exist pre-trained embeddings of biomedical entities (Choi et al., 2016; Beam et al., 2018), which allow us to adopt similar "transfer learning" ideas to learn graph embeddings. We can initialize the embedding vector for each node on a graph with its pre-trained embedding (e.g., by looking up the corresponding entity in Choi et al., 2016 or Beam et al., 2018) rather than with random initialization, and then continue training the various graph embedding methods as before (which is often referred to as "fine-tuning"). The pre-trained embeddings can be seen as "coarse" embeddings, since they are usually pre-trained on a large general corpus and have not yet been optimized for the downstream task.…”
Section: Methods (mentioning)
confidence: 99%
“…(Finlayson et al., 2014) learned UMLS CUI embeddings from 20 million clinical notes spanning 19 years of data from Stanford Hospital and Clinics, using co-occurrence-based analyses. (Beam et al., 2018) also learned UMLS CUI embeddings, cui2vec, from medical billing codes, biomedical journal texts, and the clinical concept co-occurrence matrix used in (Finlayson et al., 2014). (Choi et al., 2016e) learned three dense, low-dimensional embedding spaces of UMLS CUIs and billing codes from UMLS-processed journal abstracts, UMLS-processed clinical notes, and claims data, using the word2vec skip-gram framework.…”
Section: Concept Representations (mentioning)
confidence: 99%
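As a rough illustration of the skip-gram approach mentioned for (Choi et al., 2016e), the sketch below trains word2vec over sequences of clinical concept codes with gensim. The toy sequences, the 300-dimension choice, and all other hyperparameters are assumptions for illustration only, not the cited authors' settings.

# Minimal sketch: word2vec skip-gram over sequences of clinical concept codes
# (CUIs / billing codes). Each "sentence" is one patient timeline or document
# rendered as an ordered list of codes; the example data is hypothetical.
from gensim.models import Word2Vec

concept_sequences = [
    ["C0011849", "C0027051", "ICD9_250.00"],   # hypothetical patient timeline
    ["C0020538", "ICD9_401.9", "C0011849"],
]

model = Word2Vec(
    sentences=concept_sequences,
    vector_size=300,   # gensim >= 4.0; older versions use size=300
    window=5,
    sg=1,              # skip-gram rather than CBOW
    min_count=1,
    workers=4,
)

vector = model.wv["C0011849"]   # dense embedding for one CUI

Treating a patient timeline as a "sentence" is what lets the standard skip-gram machinery capture co-occurrence structure between concepts, the same signal the co-occurrence-matrix approaches above exploit more directly.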