Data Science for Healthcare 2019
DOI: 10.1007/978-3-030-05249-2_5

Clinical Natural Language Processing with Deep Learning

Cited by 32 publications (22 citation statements). References 54 publications.
“…Examples of such models are the skip-gram model ( Mikolov, Yih & Zweig, 2013 ) proposed by Mikolov in 2013 and the model proposed by Gauch, Wang & Rachakonda (1999). Word2Vec ( Mikolov et al., 2013b ), FastText ( Bojanowski et al., 2017 ), GloVe ( Pennington, Socher & Manning, 2014 ) and WOVe ( Ibrahim et al., 2021 ) are all vector-learning methods that have been shown to outperform traditional NLP methods across a range of text-mining applications ( Gu et al., 2019 ; Hasan & Farri, 2019 ). Some of these techniques have been applied in the medical field to build medical ontologies ( Minarro-Giménez, Marin-Alonso & Samwald, 2014 ; Hughes et al., 2017 ; De Vine et al., 2014 ; Minarro-Giménez, Marín-Alonso & Samwald, 2015 ; Wang, Cao & Zhou, 2015 ).…”
Section: Related Work
confidence: 99%
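The skip-gram objective behind Word2Vec, cited above, can be illustrated with a minimal sketch: each word learns a vector by predicting its neighbours within a small context window. The toy clinical corpus, embedding dimension, and plain-softmax training loop below are illustrative assumptions, not the cited implementations (which use negative sampling and far larger corpora).

```python
import numpy as np

# Toy corpus; a real clinical corpus would be orders of magnitude larger.
corpus = "patient reports chest pain patient denies chest pain".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Skip-gram training pairs: each word predicts its neighbours
# within a +/- `window` context.
window = 2
pairs = [
    (idx[corpus[i]], idx[corpus[j]])
    for i in range(len(corpus))
    for j in range(max(0, i - window), min(len(corpus), i + window + 1))
    if i != j
]

rng = np.random.default_rng(0)
dim = 8
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # input embeddings
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # output embeddings

# Minimal skip-gram with a full softmax, trained by SGD.
for _ in range(200):
    for c, o in pairs:
        v = W_in[c].copy()
        scores = W_out @ v
        p = np.exp(scores - scores.max())
        p /= p.sum()
        grad = p.copy()
        grad[o] -= 1.0                       # dL/dscores for cross-entropy
        W_in[c] -= 0.05 * (W_out.T @ grad)
        W_out -= 0.05 * np.outer(grad, v)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that occur in similar contexts end up with similar vectors.
sim = cosine(W_in[idx["reports"]], W_in[idx["denies"]])
```

In practice one would call a library such as gensim's `Word2Vec` with `sg=1` rather than hand-rolling the loop; the sketch only makes the gradient step visible.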
“…The hierarchical structure of natural language suggests the use of deep learning via neural networks with multiple hidden layers (Satapathy et al., 2018, §1.6). Such deep neural networks have outperformed other forms of machine learning on natural language processing problems (Hasan and Farri, 2019). For instance, Lee et al. (2017) used a deep neural network to detect adverse drug events from social media such as Twitter feeds.…”
Section: Natural Language Processing by Deep Learning
confidence: 99%
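The kind of classifier described above can be sketched in miniature: a bag-of-words input feeding one hidden layer, trained by backpropagation. The four toy posts and tiny architecture are illustrative assumptions only; Lee et al. (2017) used real Twitter data and a substantially deeper network.

```python
import numpy as np

# Hypothetical stand-in for adverse-drug-event detection from short posts
# (label 1 = mentions an adverse event).
posts = [
    ("this medication gave me a terrible rash", 1),
    ("felt dizzy and nauseous after the new pills", 1),
    ("lovely weather for a walk today", 0),
    ("my refill arrived on time thanks", 0),
]
vocab = sorted({w for text, _ in posts for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def bow(text):
    """Bag-of-words count vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in idx:
            v[idx[w]] += 1.0
    return v

X = np.stack([bow(t) for t, _ in posts])
y = np.array([label for _, label in posts], dtype=float)

# One hidden layer; a "deep" model simply stacks more of these.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(len(vocab), 8))
w2 = rng.normal(scale=0.5, size=8)

for _ in range(2000):
    h = np.tanh(X @ W1)                       # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ w2)))       # sigmoid output probability
    g = (p - y) / len(y)                      # cross-entropy gradient
    dh = np.outer(g, w2) * (1 - h ** 2)       # backprop through tanh
    w2 -= 0.5 * (h.T @ g)
    W1 -= 0.5 * (X.T @ dh)

preds = (p > 0.5)
```

After training, `preds` holds the network's labels on the training posts; real evaluation would of course use held-out data.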
“…In the field of text-similarity detection, techniques based on deep learning moved researchers towards semantically distributed representations [2], [5], [37], [13], [23], [60], [17]. To find the semantic relatedness among question pairs, we proposed a technique based on deep learning.…”
Section: Proposed Work
confidence: 99%
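To give a concrete sense of how distributed representations support question-pair relatedness, here is a minimal mean-pooling-plus-cosine sketch. The vocabulary and vectors are synthetic assumptions standing in for pretrained embeddings; this is not the proposed technique itself, only the representation idea it builds on.

```python
import numpy as np

# Hypothetical embedding table; in practice these vectors would come from
# Word2Vec, GloVe, or FastText trained on a large corpus.
rng = np.random.default_rng(42)
words = ["what", "is", "a", "heart", "attack", "cardiac", "arrest",
         "how", "to", "bake", "bread"]
emb = {w: rng.normal(size=64) for w in words}
# Nudge two related terms together, mimicking the structure that real
# training induces for words appearing in similar contexts.
emb["cardiac"] = emb["heart"] + 0.1 * rng.normal(size=64)

def encode(question):
    """Mean-pool word vectors into one fixed-size question vector."""
    vectors = [emb[w] for w in question.lower().split() if w in emb]
    return np.mean(vectors, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q1 = encode("what is a heart attack")
q2 = encode("what is cardiac arrest")
q3 = encode("how to bake bread")
# The related pair (q1, q2) scores higher than the unrelated pair (q1, q3).
```

Deep models for this task replace the mean-pooling step with a learned encoder, but the final relatedness score is often still a cosine over the two question vectors.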