ITISE 2023
DOI: 10.3390/engproc2023039026

BERT (Bidirectional Encoder Representations from Transformers) for Missing Data Imputation in Solar Irradiance Time Series

Abstract: This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Cited by 6 publications (3 citation statements)
References 19 publications
“…Bidirectional Encoder Representations from Transformers (BERT) [82][83][84] stands out as a widely utilized language model crafted by Google Research. Being grounded in the transformer architecture, BERT diverges from sequential text processing, opting for an attention mechanism to discern word relationships.…”
Section: Bidirectional Encoder Representations from Transformers
confidence: 99%
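
The bidirectional attention described in this statement is exercised through BERT's masked-token objective, which is also what makes the model attractive for imputation-style tasks such as the one in the cited article. A minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (the example sentence is illustrative only):

```python
from transformers import pipeline

# The fill-mask pipeline uses BERT's bidirectional attention: the prediction
# for [MASK] is conditioned on context to both its left and its right,
# rather than on a left-to-right pass over the text.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative sentence only; prints the top candidate tokens and their scores.
for candidate in fill_mask("Solar irradiance is highest around [MASK] on a clear day."):
    print(candidate["token_str"], round(candidate["score"], 3))
```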
“…These approaches offer the promise of enhanced accuracy and versatility, yet they are accompanied by several notable challenges. The intricate self-attention mechanisms of transformers, while powerful, increase model complexity and computational demands [24]. Consequently, this extends training times and raises resource requirements, making it imperative to explore efficient training strategies and model architectures that are less resource-intensive [25].…”
Section: Hybrid Approaches
confidence: 99%
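
The computational demand referred to here comes from the attention score matrix, which compares every token with every other token, so memory and compute grow quadratically with sequence length. A minimal NumPy sketch of scaled dot-product attention (illustrative only; single head, no masking):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n, n): quadratic in sequence length n
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability before softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d_k = 512, 64                                  # sequence length, head dimension
Q, K, V = (np.random.randn(n, d_k) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (512, 64)
```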
“…Language models like BERT can learn contextual representations of words and sentences, capturing the intricate relationships between symptoms and diseases. By training these models on large medical text corpora, they can acquire domain-specific knowledge and improve disease prediction accuracy [9–11].…”
Section: Introduction
confidence: 99%
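
As a concrete illustration of what "contextual" means in this statement, the same word receives a different vector depending on the sentence it appears in. A minimal sketch, assuming the Hugging Face transformers library and PyTorch (the sentences are illustrative only):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# "bank" appears in both sentences but ends up with different embeddings,
# because each occurrence attends to a different surrounding context.
sentences = ["He deposited the cash at the bank.",
             "They had a picnic on the river bank."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state: (batch, tokens, hidden) contextual embedding per token
print(outputs.last_hidden_state.shape)
```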