Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, 2022
DOI: 10.18653/v1/2022.naacl-srw.9
Improving Classification of Infrequent Cognitive Distortions: Domain-Specific Model vs. Data Augmentation

Abstract: Cognitive distortions are counterproductive patterns of thinking that are one of the targets of cognitive behavioral therapy (CBT). These can be challenging for clinicians to detect, especially those without extensive CBT training or supervision. Text classification methods can approximate expert clinician judgment in the detection of frequently occurring cognitive distortions in text-based therapy messages. However, performance with infrequent distortions is relatively poor. In this study, we address this spa…
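
The title contrasts a domain-specific model with data augmentation for improving classification of rare distortion classes. As an illustration only (the abstract is truncated before the method details, so this is a generic technique, not necessarily the paper's procedure), here is a minimal sketch of one common augmentation strategy for infrequent classes: oversampling minority-class messages with WordNet synonym replacement.

```python
# Minimal sketch of minority-class augmentation via WordNet synonym
# replacement. Illustrative only; not the indexed paper's method.
import random

import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

def synonym_augment(text: str, replace_prob: float = 0.15) -> str:
    """Return a copy of `text` with some words swapped for WordNet synonyms."""
    out = []
    for word in text.split():
        # Collect alternative lemmas for this surface form, excluding itself.
        lemmas = {l.name().replace("_", " ")
                  for s in wordnet.synsets(word) for l in s.lemmas()} - {word}
        if lemmas and random.random() < replace_prob:
            out.append(random.choice(sorted(lemmas)))
        else:
            out.append(word)
    return " ".join(out)

# Oversample a rare distortion class by generating several variants per
# message (the example message below is invented for illustration).
rare = ["I always mess everything up, nothing ever goes right"]
augmented = [synonym_augment(t) for t in rare for _ in range(3)]
print(augmented)
```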

Cited by 4 publications (5 citation statements)
References 18 publications
“…Although requiring massive text corpora to initially train on masked language, language models build linguistic representations that can then be fine-tuned to downstream clinical tasks [69]. Applications examined include fine-tuning BERT for domain adaptation to mental health language (MentalBERT) [70], for sentiment analysis via transfer learning (e.g., using the GoEmotions corpus) [71], and detection of topics [72]. Generative language models were used for revising interventions [73], session summarizations [74], or data augmentation for model training [70].…”
Section: Results
Confidence: 99%
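
The review quoted above describes fine-tuning BERT-family encoders, including the domain-adapted MentalBERT [70], for downstream clinical classification. A hedged sketch of that workflow with Hugging Face transformers follows; the checkpoint id is the one the MentalBERT authors published on the Hub (assumed here), and the dataset, label count, and hyperparameters are placeholders rather than values from the indexed paper.

```python
# Sketch: fine-tuning MentalBERT [70] for clinical text classification.
# The Hub id below is assumed; num_labels and all hyperparameters are
# placeholders, not values reported by the indexed paper.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "mental/mental-bert-base-uncased"  # assumed MentalBERT checkpoint id
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=10)

def encode(batch):
    # Tokenize therapy messages for the encoder; dynamic padding is handled
    # by the Trainer's default collator when a tokenizer is supplied.
    return tokenizer(batch["text"], truncation=True, max_length=128)

# `train_ds` / `eval_ds` are hypothetical datasets.Dataset objects with
# "text" and "label" columns; uncomment once real data is loaded.
# train_ds = train_ds.map(encode, batched=True)
# eval_ds = eval_ds.map(encode, batched=True)
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="out", num_train_epochs=3),
#     train_dataset=train_ds,
#     eval_dataset=eval_ds,
#     tokenizer=tokenizer,
# )
# trainer.train()
```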