2019
DOI: 10.3390/ijerph16224360
Comparison of Word Embeddings for Extraction from Medical Records

Abstract: This paper is an extension of the work originally presented at the 16th International Conference on Wearable, Micro and Nano Technologies for Personalized Health. Despite the adoption of electronic medical records, free narrative text is still widely used in medical documentation. To make the data contained in such texts available to decision support systems, supervised machine learning algorithms can be successfully applied. In this work, we developed and compared a prototype of a medical data extraction system based on different artific…

Cited by 7 publications (4 citation statements)
References 14 publications
“…They presented performance evaluations of three sentence embedding models on the DUC 2002 dataset for ATS and showed that deep learning methods deliver exceptional results [22]. Dudchenko and Kopanitsa compared the performance of various artificial neural network models for extracting data from Russian medical text documents with the help of word embeddings [23]. They compared multi-layer perceptrons (MLP), convolutional neural networks (CNN) and long short-term memory networks (LSTM); MLP and CNN showed similar performance across the embedding models.…”
Section: Related Work (mentioning)
confidence: 99%
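The three architectures named in this statement (an MLP, a CNN, and an LSTM operating on word-embedding inputs) can be sketched as follows. This is a minimal illustration only, assuming a Keras/TensorFlow setup; the sequence length, embedding dimensionality, class count, and layer sizes are assumptions for demonstration, not values from [23].

```python
# Minimal sketch (not the authors' code) of the three compared architectures:
# an MLP, a CNN, and an LSTM classifier built on top of word-embedding inputs.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN = 50     # assumed maximum sentence length (tokens)
EMB_DIM = 100    # assumed embedding dimensionality
N_CLASSES = 5    # assumed number of target classes

def build_mlp():
    # Flattens the sequence of embedding vectors and classifies with dense layers.
    return keras.Sequential([
        layers.Input(shape=(SEQ_LEN, EMB_DIM)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

def build_cnn():
    # 1-D convolutions over the token dimension, followed by global max pooling.
    return keras.Sequential([
        layers.Input(shape=(SEQ_LEN, EMB_DIM)),
        layers.Conv1D(128, kernel_size=3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

def build_lstm():
    # A single LSTM layer reading the embedded token sequence.
    return keras.Sequential([
        layers.Input(shape=(SEQ_LEN, EMB_DIM)),
        layers.LSTM(128),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    # Random stand-in data; real inputs would be embedded medical-record sentences.
    x = np.random.rand(32, SEQ_LEN, EMB_DIM).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=(32,))
    for build in (build_mlp, build_cnn, build_lstm):
        model = build()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x, y, epochs=1, verbose=0)
        print(build.__name__, model.evaluate(x, y, verbose=0))
```

All three models consume the same embedded input, which mirrors the comparison setup described in the statement: the architectures vary while the embedding layer stays fixed.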
“…The MLP network has a relatively simple structure yet obtains respectable results. The simpler structure has a direct impact on the time and space complexity of a system [23], [28]. The optimizer used in this experiment was Adam [32] and the loss function was binary_crossentropy.…”
Section: Multi-layer Perceptron Network (mentioning)
confidence: 99%
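A minimal sketch of the configuration named in this statement, an MLP trained with the Adam optimizer and the binary_crossentropy loss, is shown below. The input dimensionality, layer widths, and training data are assumptions for illustration; only the optimizer and loss choices come from the cited text.

```python
# Minimal MLP sketch using the two settings named above: Adam + binary_crossentropy.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

INPUT_DIM = 100  # assumed feature/embedding dimensionality

model = keras.Sequential([
    layers.Input(shape=(INPUT_DIM,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # single sigmoid unit for a binary label
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in data; real inputs would be feature vectors derived from word embeddings.
x = np.random.rand(64, INPUT_DIM).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
```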
“…can then be used as inputs to a classifier [20,21]. There is no publicly available text corpus related to pediatric abuse case analysis; therefore, two embeddings were selected for this study: the publicly available GloVe embedding (WE-GLOVE, Global Vectors for Word Representation), using 100-dimensional representations of words trained on the entirety of Wikipedia [22], and an in-house embedding (WE-MIMIC, Medical Information Mart for Intensive Care) trained on the notes in the MIMIC-III database, a set of adult ICU records from Beth Israel Deaconess Medical Center [23].…”
Section: PLOS ONE (mentioning)
confidence: 99%
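The pattern described here, pre-trained word vectors used as inputs to a downstream classifier, can be illustrated with a short sketch. The file path, tokenization, averaging scheme, and logistic-regression classifier are assumptions chosen for brevity; they are not the pipeline of the cited study.

```python
# Sketch: load 100-d GloVe vectors, average them per document, feed to a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMB_DIM = 100
GLOVE_PATH = "glove.6B.100d.txt"  # assumed local copy of the public GloVe vectors

def load_glove(path):
    # Each line holds a word followed by its vector components, space-separated.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return vectors

def embed_document(text, vectors):
    # Average the vectors of known tokens; fall back to a zero vector otherwise.
    tokens = [vectors[t] for t in text.lower().split() if t in vectors]
    return np.mean(tokens, axis=0) if tokens else np.zeros(EMB_DIM, dtype="float32")

if __name__ == "__main__":
    glove = load_glove(GLOVE_PATH)
    docs = ["patient presented with bruising", "routine follow up visit"]  # toy inputs
    labels = [1, 0]
    features = np.stack([embed_document(d, glove) for d in docs])
    clf = LogisticRegression().fit(features, labels)
    print(clf.predict(features))
```

The same pipeline would apply to a domain-specific embedding such as one trained on MIMIC-III notes: only the vector file changes, not the classifier side.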
“…Advanced textual analysis, akin to platforms like Voyant Tools but customized for this study, uses natural language processing (NLP) to dissect academic content meticulously [452,453,454,455,456,457,458,459,460,461,462,463,464]. This aligns with the core themes of microstructural characterization, electrical dynamics, and mechanical complexities in nanocomposites.…”
Section: Introduction (mentioning)
confidence: 99%