2023
DOI: 10.1016/j.asr.2022.09.040

A machine learning method for the orbit state classification of large LEO constellation satellites


Cited by 5 publications (1 citation statement)
References 17 publications
“…These machine learning methods can be roughly divided into clustering discrimination and reconstruction discrimination according to the criteria of anomaly judgement. Clustering techniques such as Gaussian mixture models, K-nearest neighbour methods, multilayer neural networks, and convolutional neural networks, among others, identify points that are distant from the cluster centre in the data set as anomalies [15][16][17]. Reconstruction discrimination methods, such as principal components analysis, generative adversarial networks, and autoencoder networks, project the data set into a subspace and calculate the reconstruction error, judging samples that do not match the distribution of training samples as anomalies [18][19][20][21].…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
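The citation statement above distinguishes two families of anomaly detection: clustering discrimination (flagging points far from cluster centres) and reconstruction discrimination (flagging points with large reconstruction error after projection into a subspace). The sketch below illustrates both ideas on synthetic data; it is not the method of the cited paper. The feature dimension, thresholds, and use of a Gaussian mixture likelihood as a stand-in for "distance from the cluster centre" are assumptions made purely for illustration.

```python
# Minimal sketch of the two anomaly-detection families described in the citation
# statement (synthetic data; thresholds and feature dimension are arbitrary).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))                    # nominal samples (hypothetical features)
X_test = np.vstack([rng.normal(size=(95, 6)),
                    rng.normal(loc=5.0, size=(5, 6))])  # last 5 rows are injected anomalies

# Clustering discrimination: samples with low likelihood under the fitted mixture
# (i.e. far from all cluster centres) are judged anomalous.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)
train_ll = gmm.score_samples(X_train)
cluster_flags = gmm.score_samples(X_test) < np.percentile(train_ll, 1)

# Reconstruction discrimination: project into a PCA subspace, reconstruct, and
# judge samples with large reconstruction error as anomalous.
pca = PCA(n_components=3).fit(X_train)
def recon_error(X):
    return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)
recon_flags = recon_error(X_test) > np.percentile(recon_error(X_train), 99)

print("clustering-discrimination anomalies:", np.where(cluster_flags)[0])
print("reconstruction-discrimination anomalies:", np.where(recon_flags)[0])
```

Both detectors are fitted only on nominal training data and thresholded against the training-score distribution, which mirrors the distinction drawn in the quoted passage: distance-to-cluster criteria versus reconstruction-error criteria.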