2023 International Conference on Intelligent Computing and Control (IC&C)
DOI: 10.1109/ic-c57619.2023.00020
A Comparative Study for Temperature Prediction by Machine Learning and Deep Learning

Heng Zhao,
Yixing Chen
Cited by 2 publications (3 citation statements). References 25 publications.
“…Concurrently, the representation with subspace structure can enhance the interpretability of deep learning models and mitigate the risk of model collapse 61 . Here, we use the self-expression matrix learned from the subspace to reconstruct the low-dimensional representations of the positive views through a one-step linearization 44 and to obtain the representations of the microenvironments that contain the same domain tissue. Meanwhile, we provide a post-processing tool that could mask the self-expression matrix based on the magnitude of self-expression values 62 .…”
Section: Discussion
confidence: 99%
“…Once the self-expression coefficient matrix C is obtained, a reconstructed representation containing local spatial information can be computed, i.e., Z1 = Ĥ1 C, Z2 = Ĥ2 C (the process can be considered as linearization 44 ). Given that spots within the same subspace tend to belong to the same domain, the reconstructed representation encapsulates local information specific to that domain.…”
Section: (unspecified)
confidence: 99%
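The reconstruction step quoted above (Z1 = Ĥ1 C, Z2 = Ĥ2 C) is a single matrix product per view. The sketch below illustrates it with toy data; the shapes and variable names are our own assumptions (embeddings stored one column per spot, C a learned n_spots × n_spots self-expression matrix), not taken from the cited paper:

```python
import numpy as np

# Toy setup: 6 spots, 4-dimensional embeddings per view.
rng = np.random.default_rng(0)
n_spots, dim = 6, 4
H1 = rng.normal(size=(dim, n_spots))   # view-1 embeddings, one column per spot
H2 = rng.normal(size=(dim, n_spots))   # view-2 embeddings
C = rng.normal(size=(n_spots, n_spots))
np.fill_diagonal(C, 0.0)               # self-expression excludes the spot itself

# One-step linearization: each spot is re-expressed as a weighted
# combination of the other spots in its subspace.
Z1 = H1 @ C
Z2 = H2 @ C
print(Z1.shape, Z2.shape)
```

In practice C would come from optimizing a self-expression objective rather than random draws; the point here is only that the reconstructed representations keep the same shape as the inputs while mixing in information from same-subspace spots.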
“…The K-NN algorithm [9], [10] relies solely on the provided dataset and a constant parameter, denoted as K. The following steps describe how the K-NN algorithm works:…”
Section: K-Nearest Neighbors (K-NN)
confidence: 99%
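The K-NN procedure referenced in the quoted passage (distance computation, selection of the K nearest points, majority vote) can be sketched from scratch in a few lines. This is an illustrative toy implementation; the function name and sample data are our own, not from the cited work:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    # 1. Euclidean distance from the query to every training point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # 2. Indices of the K nearest neighbors.
    nearest = np.argsort(dists)[:k]
    # 3. Majority vote among their labels.
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: two clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))  # → 0
```

Note that K-NN does no training at all: the entire dataset is retained and every prediction recomputes distances, which is why the quoted passage says the method relies solely on the dataset and the constant K.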