2020
DOI: 10.1108/jsit-10-2019-0212

Exploiting multiclass classification algorithms for the prediction of ship routes: a study in the area of Malta

Abstract: Purpose – Ship route prediction (SRP) is a complex task that consists of determining the next position of a ship after a given period of time, given its current position. This paper describes a study comparing five families of multiclass classification algorithms for SRP. Design/methodology/approach – The tested algorithm families include: Naive Bayes (NB), nearest neighbors, decision trees, linear algorithms and extension from binary. A common structure for all the algorithm fam…
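As a rough illustration of the setup described in the abstract (not the authors' code), the sketch below frames SRP as multiclass classification with scikit-learn, comparing one representative model from the tested families on synthetic AIS-like data; the linear SVM covers both the linear and extension-from-binary families, since scikit-learn handles it via one-vs-rest. All feature names, region labels and hyperparameters are assumptions.

```python
# A rough sketch (not the authors' code) of SRP as multiclass classification:
# given a vessel's current state, predict the region it occupies after a
# fixed time horizon. Features, labels and hyperparameters are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 1000
# Hypothetical AIS-like features: latitude, longitude, speed, course.
X = np.column_stack([
    rng.uniform(35.8, 36.2, n),   # latitude (roughly the Malta area)
    rng.uniform(14.2, 14.8, n),   # longitude
    rng.uniform(0.0, 20.0, n),    # speed over ground (knots)
    rng.uniform(0.0, 360.0, n),   # course over ground (degrees)
])
# Hypothetical labels: index of the region reached after the time horizon.
y = rng.integers(0, 25, n)

models = {
    "Naive Bayes": GaussianNB(),
    "Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(max_depth=10),
    "Linear SVM (one-vs-rest)": LinearSVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```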

Cited by 4 publications (3 citation statements) · References: 23 publications
“…In a previous work [71], we developed a comparison among different multiclass classification algorithms to solve the SRP problem. Here, we focus the analysis on the K-Nearest Neighbors classifier and retrain it with new data.…”
Section: Ship Route Prediction
confidence: 99%
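A minimal sketch, assuming scikit-learn, of the retraining step mentioned in the quoted statement: KNN is instance-based, so incorporating new data amounts to refitting the classifier on the enlarged training set. Shapes, features and labels below are placeholders.

```python
# Refit a K-Nearest Neighbors classifier on old plus newly collected samples
# (all data here is a hypothetical placeholder).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_old, y_old = np.random.rand(200, 4), np.random.randint(0, 25, 200)  # previous data
X_new, y_new = np.random.rand(50, 4), np.random.randint(0, 25, 50)    # new data

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))
print(knn.predict(X_new[:1]))  # predicted region for one new sample
```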
“…In [11] the regression problem was turned into a classification problem using several supervised machine learning models such as Decision Trees, Nearest Neighbours and Naive Bayes as well as, e.g., a linear support-vector machine. The future region for a ship was predicted instead of the actual location, with only the KNN model showing promising results.…”
Section: Related Work
confidence: 99%
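The region-based reformulation quoted above could look like the following sketch, in which positions are discretized into grid cells and the cell index becomes the class label; the bounding box and cell count are illustrative assumptions, not values from the cited work.

```python
# Turn position prediction (regression) into classification by mapping a
# (lat, lon) position to a grid-cell "region" label.
LAT_MIN, LAT_MAX = 35.6, 36.4   # hypothetical bounding box around Malta
LON_MIN, LON_MAX = 14.0, 14.8
N_CELLS = 10                     # grid cells per axis

def position_to_region(lat: float, lon: float) -> int:
    """Map a (lat, lon) position to a single region label in [0, N_CELLS**2)."""
    row = min(int((lat - LAT_MIN) / (LAT_MAX - LAT_MIN) * N_CELLS), N_CELLS - 1)
    col = min(int((lon - LON_MIN) / (LON_MAX - LON_MIN) * N_CELLS), N_CELLS - 1)
    return row * N_CELLS + col

print(position_to_region(35.9, 14.5))  # -> 36 with these illustrative bounds
```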
“…with x_q being the q'th sample and l the total number of samples in the batch. By maximising (11), better values for the mixture components in (8) are found. This corresponds to minimising the negative log-likelihood (NLL), −ln(L), in which p(x) is neglected [34].…”
Section: Loss Function
confidence: 99%
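As a hedged reconstruction of the quoted loss (the cited equations (8) and (11) are not reproduced on this page), the batch likelihood over the mixture parameters θ and the negative log-likelihood that is actually minimised would plausibly read:

```latex
% Plausible form of the quoted batch likelihood and NLL, assuming
% p(x_q | \theta) is the mixture density whose components appear in eq. (8).
\mathcal{L}(\theta) = \prod_{q=1}^{l} p(x_q \mid \theta),
\qquad
-\ln \mathcal{L}(\theta) = -\sum_{q=1}^{l} \ln p(x_q \mid \theta).
```

Maximising L(θ) is then equivalent to minimising −ln L(θ), consistent with the quoted statement that the evidence term p(x) is neglected.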