2022
DOI: 10.3390/app12031520

Benchmarking Socio-Economic Impacts of High-Speed Rail Networks Using K-Nearest Neighbour and Pearson’s Correlation Coefficient Techniques through Computational Model-Based Analysis

Abstract: Not only have high-speed rail (HSR) services stimulated the economies of many countries, but they have also significantly uplifted the quality of life (QoL) of countless people. For many decades, the aspiration for HSR network development has risen dramatically, and HSR networks have inevitably become an icon of civilisation. However, only a few HSR networks worldwide truly generate socio-economic impacts for their societies. This research aims to understand the impact of HSR networks on social and eco…

Cited by 11 publications (5 citation statements)
References 92 publications
“…To address the problem of how to use such data reasonably, machine learning came into being. Nowadays, there are various popular, general-purpose machine learning algorithms: the decision tree algorithm, which can process incomplete data; the naive Bayes algorithm, based on probability theory; support vector machine algorithms, which map low-dimensional data into high-dimensional spaces; the K-nearest neighbour algorithm, which classifies a sample by its K nearest neighbours; the AdaBoost adaptive boosting algorithm, which combines multiple weak classifiers into a strong classifier; the widely used Apriori association-mining algorithm, which reveals associations between samples; EM clustering algorithms, which iteratively refine cluster assignments; and the PageRank algorithm, which originated from calculating the influence of papers [7][8][9][10][11][12][13][14]. Using machine learning techniques to mine real data is the ultimate goal of big data processing.…”
Section: Introduction (mentioning)
confidence: 99%
“…The Pearson correlation coefficient measures the correlation between different features (Panrawee, Shen & Sakdirat, 2022). The degree of correlation is calculated as the covariance of the two variables divided by the product of their standard deviations, which rescales the variables to the same order of magnitude; this avoids oscillation of the loss function caused by differences in feature scale, which would otherwise affect the learning ability of the network.…”
Section: Shadow Image Retrieval Model Based on CBAM-ResNet50 (mentioning)
confidence: 99%
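
The statement above describes the Pearson coefficient as a covariance scaled by the standard deviations of the two features. A minimal sketch of that calculation follows; the feature vectors `travel_time` and `gdp_growth` are hypothetical illustrations, not data from the cited papers.

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return cov / (x.std() * y.std())                 # rescale to a unit-free value in [-1, 1]

# Hypothetical feature vectors, e.g. an accessibility measure and a
# socio-economic indicator for a handful of regions (illustrative only).
travel_time = np.array([1.2, 0.9, 2.5, 3.1, 1.8])
gdp_growth  = np.array([2.4, 2.9, 1.1, 0.8, 1.9])

print(pearson_r(travel_time, gdp_growth))           # strong negative correlation
print(np.corrcoef(travel_time, gdp_growth)[0, 1])   # NumPy's built-in, for comparison
```

Because the coefficient divides out each feature's standard deviation, it is insensitive to the units of the inputs, which is the rescaling property the quoted passage refers to.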
“…K-NN is a classification method based on similarity, or proximity, between data points [21]. The K closest data points are used as the reference for determining the class of a new data point [22]. The nearest K points are generally found using the Euclidean distance formula [23].…”
Section: K-Nearest Neighbour (K-NN) (mentioning)
confidence: 99%
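
A minimal sketch of the K-NN procedure described above: Euclidean distances to the training points, the K closest points selected, and a majority vote over their labels. The arrays and the value of k are illustrative assumptions, not taken from the cited papers.

```python
from collections import Counter
import numpy as np

def knn_predict(X_train: np.ndarray, y_train: np.ndarray,
                x_new: np.ndarray, k: int = 3) -> int:
    """Classify x_new by majority vote among its k nearest
    training samples under Euclidean distance."""
    dists = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distance to each training point
    nearest = np.argsort(dists)[:k]                   # indices of the k closest points
    votes = Counter(y_train[nearest])                 # count class labels among them
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors with binary class labels (illustrative only).
X_train = np.array([[1.0, 1.1], [1.2, 0.9], [3.0, 3.2], [3.1, 2.9]])
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([1.1, 1.0]), k=3))  # -> 0
print(knn_predict(X_train, y_train, np.array([3.0, 3.0]), k=3))  # -> 1
```

Because the vote is taken over the k nearest neighbours only, the choice of k and the distance metric (here Euclidean, as in the quoted passage) determine how smooth the resulting decision boundary is.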