Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task 2019
DOI: 10.18653/v1/w19-3216
Using Machine Learning and Deep Learning Methods to Find Mentions of Adverse Drug Reactions in Social Media

Abstract: Today, social networks play an important role as places where people share health-related information. This information can be used for public health monitoring tasks through Natural Language Processing (NLP) techniques. Social Media Mining for Health Applications (SMM4H) provides shared tasks, such as those described in this document, to help manage information in the health domain. This document describes the first participation of the SINAI group in SMM4H. We study approaches based on machine learning and deep l…


Cited by 4 publications (1 citation statement); References 3 publications
“…For example, the second-best architecture [34] was based on Convolutional Neural Networks (CNNs), BiLSTMs, CRF and Multi-head self-attention, employing features such as part-of-speech tagging, ELMo embeddings [41], and Word2Vec embeddings [42]. Sarabadani [43] also used LSTMs and CNNs, combined with ELMo embeddings and three specialized lexicon sets, while Lopez et al [44] used a CRF with GloVe embeddings [45]. The other half of the proposed models were all based on the recently-introduced BERT and its variants, including the best architecture for 2019 [33], which employed an ensemble of BioBERTs with a CRF module.…”
Section: Related Work
confidence: 99%