Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop 2019
DOI: 10.18653/v1/p19-2055

Deep Neural Models for Medical Concept Normalization in User-Generated Texts

Abstract: In this work, we consider the medical concept normalization problem, i.e., the problem of mapping a health-related entity mention in a free-form text to a concept in a controlled vocabulary, usually to the standard thesaurus in the Unified Medical Language System (UMLS). This is a challenging task since medical terminology is very different when coming from health care professionals or from the general public in the form of social media texts. We approach it as a sequence learning problem with powerful neural …

Cited by 39 publications (38 citation statements)
References 19 publications
“…Similarly, previous implementations of ADE normalization have often limited their target classes to the ones available only in the dataset, thereby artificially inflating the reported performance [14, 15, 16]. We find that training on only the common identifiers available in the training set, or a limited number of identifiers, may yield better accuracy but does not allow discovery of new ADEs, because target classes outside those in the training data or the datasets are not considered.…”
Section: Introduction
confidence: 81%
“…Following state-of-the-art research (Tutubalina et al., 2018; Miftahutdinov and Tutubalina, 2019), we view concept normalization as a classification task. Following (Miftahutdinov and Tutubalina, 2019), we convert each ADR mention into a vector representation using BERT or RNN.…”
Section: Methods
confidence: 99%
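The framing described above — encode a free-form mention into a vector, then pick the best-matching concept from a controlled vocabulary — can be sketched minimally as follows. This is a toy illustration only: a hashed character-trigram encoder stands in for the BERT/RNN encoders used in the cited works, and the concept IDs and vocabulary entries are illustrative assumptions, not data from the paper.

```python
import zlib

# Toy stand-in for a controlled vocabulary of concept IDs (illustrative
# entries, not real UMLS content).
CONCEPTS = {
    "C0018681": "headache",
    "C0027497": "nausea",
    "C0015672": "fatigue",
}

DIM = 32  # size of the hashed feature vector


def embed(text):
    """Toy mention encoder: hashed character-trigram counts, L2-normalized.
    In the cited works this role is played by a BERT or RNN encoder."""
    vec = [0.0] * DIM
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        vec[zlib.crc32(padded[i:i + 3].encode()) % DIM] += 1.0
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]


def normalize(mention):
    """Map a free-form mention to the concept ID whose name embedding has
    the highest cosine similarity with the mention embedding."""
    m = embed(mention)
    return max(
        CONCEPTS,
        key=lambda cid: sum(a * b for a, b in zip(m, embed(CONCEPTS[cid]))),
    )


print(normalize("splitting headache"))  # → C0018681
```

In the papers cited above the scoring step is a learned softmax classifier over concept classes rather than the similarity lookup shown here; the sketch only conveys the mention-to-concept mapping structure of the task.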
“…For the first step, we use the NER model described in Section 3. The system used for concept normalization is based on our previous works (Tutubalina et al., 2018; Miftahutdinov and Tutubalina, 2019) and presented below.…”
Section: Task 3: Medical Concept Normalization
confidence: 99%
“…In this work, we take the task a step further from existing monolingual research in a single domain [2, 3, 6, 12, 13, 20, 22] by exploring multilingual transfer between EHRs and UGTs in different languages. Our goal is not to outperform state-of-the-art models on each dataset separately, but to ask whether we can transfer knowledge from a high-resource language, such as English, to a low-resource one, e.g., Russian, for NER of biomedical entities.…”
Section: Introduction
confidence: 99%