Proceedings of the Workshop on Human-in-the-Loop Data Analytics 2019
DOI: 10.1145/3328519.3329126
Learning to Validate the Predictions of Black Box Machine Learning Models on Unseen Data

Cited by 6 publications (4 citation statements). References 2 publications.
“…Second of all, the fact that the accuracy for training and testing are virtually equal. Which means its accuracy on unseen data [25] is as good and there is no potential overfitting at all.…”
Section: Discussion
confidence: 97%
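The citation above argues that near-equal training and testing accuracy indicates good generalization to unseen data. A minimal sketch of that check, using an illustrative dataset and classifier (not those of the cited work), might look like:

```python
# Compare a model's accuracy on its training data with its accuracy on
# held-out (unseen) data; a large gap suggests overfitting.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
gap = train_acc - test_acc

# Virtually equal accuracies (small gap) are the condition the quoted
# passage describes; train_acc much larger than test_acc is the warning sign.
print(f"train={train_acc:.3f} test={test_acc:.3f} gap={gap:.3f}")
```

The fixed `random_state`, depth limit, and 70/30 split are arbitrary choices for reproducibility of the sketch, not parameters from the cited paper.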
“…This is due to the fact that larger data sets include more information. This challenge is not insurmountable; nonetheless, mathematics condensing and dimensionality reduction are both required to solve it [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Athletes may have specialized sensors such as gyroscopes, magnetometers, accelerometers, and infrared sensors and be connected to them so that data can be collected in the field of sports medicine.…”
Section: A. KNN
confidence: 99%
“…Model validation and assessment may take place after training has been completed. Validation and assessment must adhere to a number of prerequisites for success, including the use of separate data sets for training and testing, the use of an acceptable error measure, the use of simulated data when working with smaller data sets, and an awareness of typical errors that might occur when working with ML [11][12][13]. The K-fold cross-validation method is the gold standard for validation right now.…”
Section: Introduction
confidence: 99%
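The passage above names K-fold cross-validation as the standard validation method, built on separate training and testing data within each fold. A minimal sketch of that procedure, with an illustrative dataset and model rather than anything from the cited work:

```python
# K-fold cross-validation: partition the data into K folds, train on K-1
# folds, test on the remaining fold, and rotate until every fold has
# served once as the held-out test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Shuffling before splitting avoids folds that mirror any ordering in the data.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

# The mean is the cross-validated accuracy estimate; the spread across
# folds indicates how sensitive the estimate is to the particular split.
print(f"mean={scores.mean():.3f} std={scores.std():.3f}")
```

Five folds is a common default; the choice of `LogisticRegression` here is arbitrary and any estimator with the scikit-learn fit/predict interface could be substituted.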
“…Prior work exists on performance prediction (Guerra, Prudêncio, and Ludermir 2008;Chen et al 2019;Finn et al 2019;Redyuk et al 2019;Schat et al 2020;Talagala, Li, and The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)…”
Section: Related Work
confidence: 99%