Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)
DOI: 10.18653/v1/2023.repl4nlp-1.7

Enhancing text comprehension for Question Answering with Contrastive Learning

Abstract: Most existing medication recommendation models are trained only on structured data such as medical codes, leaving the remaining large amount of unstructured or semi-structured data underutilized. To use these data effectively, we propose a method of enhancing medication recommendation with Large Language Model (LLM) text representation. LLMs harness powerful language understanding and generation capabilities, enabling the extraction of information from complex and lengthy unstructu…
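The abstract is cut off, but the idea it states (encode unstructured clinical text with an LLM and combine that representation with structured medical codes for medication recommendation) can be sketched roughly as follows. This is a minimal illustration under assumptions: the model name, embedding dimensions, and fusion by concatenation are not taken from the paper.

```python
# Hedged sketch: encode free-text clinical notes with a pretrained LM and
# fuse the result with an embedding of structured medical codes.
# The backbone, dimensions, and concatenation fusion are all assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class TextAugmentedRecommender(nn.Module):
    def __init__(self, n_codes, n_drugs, text_model="bert-base-uncased", code_dim=128):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(text_model)
        self.encoder = AutoModel.from_pretrained(text_model)
        self.code_emb = nn.EmbeddingBag(n_codes, code_dim, mode="mean")
        hidden = self.encoder.config.hidden_size + code_dim
        self.head = nn.Linear(hidden, n_drugs)   # multi-label medication scores

    def forward(self, notes, code_ids):
        # notes: list of strings; code_ids: LongTensor (batch, n_codes_per_visit)
        toks = self.tokenizer(notes, padding=True, truncation=True, return_tensors="pt")
        text_vec = self.encoder(**toks).last_hidden_state.mean(dim=1)  # mean-pool tokens
        fused = torch.cat([text_vec, self.code_emb(code_ids)], dim=-1)
        return self.head(fused)                  # logits per candidate medication
```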

Cited by 5 publications (2 citation statements)
References 38 publications
“…Furthermore, we will use our unsupervised classification in KAGRA to develop a transient noise system. Additionally, we will extend our architecture to self-supervised learning [41] to improve classification accuracy, in which the architecture generates pseudo labels for a given dataset and re-trains it.…”
Section: Discussion (mentioning)
confidence: 99%
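The self-training scheme this statement describes (the model generates pseudo labels for a dataset and re-trains on them) can be sketched minimally as follows. The classifier, confidence threshold, and round count are illustrative assumptions, not details from the cited work.

```python
# Minimal self-training sketch: a model labels its own unlabeled data,
# high-confidence predictions become pseudo labels, and the model re-trains.
# Threshold and round count are illustrative, not from the cited paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        probs = model.predict_proba(X_unlab)
        keep = probs.max(axis=1) >= threshold      # keep confident predictions only
        if not keep.any():
            break
        pseudo_y = model.classes_[probs[keep].argmax(axis=1)]  # pseudo labels
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pseudo_y])
        X_unlab = X_unlab[~keep]                   # shrink the unlabeled pool
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return model
```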
“…Contrastive learning, in particular, encourages models to generate diverse augmented representations from the same input. Prior to contrastive learning, numerous works sought to enforce invariance in representations through different methodologies, such as pseudo-labelling [31] and virtual adversarial training [32].…”
Section: Related Work (mentioning)
confidence: 99%
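As a concrete instance of the contrastive objective this statement refers to, here is a minimal NT-Xent (SimCLR-style) loss in PyTorch: two augmented views of the same input form the positive pair, and all other in-batch pairs act as negatives. The temperature value and the encoder/augmentation names in the usage comment are assumptions, not the cited papers' setup.

```python
# Minimal NT-Xent contrastive loss: pull two views of the same input together,
# push all other in-batch pairs apart. Temperature is an illustrative default.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, d), unit norm
    sim = z @ z.T / temperature                          # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    batch = z1.size(0)
    idx = torch.arange(batch, device=z.device)
    targets = torch.cat([idx + batch, idx])              # positive of i is its other view
    return F.cross_entropy(sim, targets)

# usage (hypothetical encoder/augmentations):
#   loss = nt_xent_loss(encoder(aug1(x)), encoder(aug2(x)))
```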