Guide to Deep Learning Basics 2020
DOI: 10.1007/978-3-030-37591-1_9
Machine Learning and the Philosophical Problems of Induction

Cited by 7 publications (5 citation statements)
References 14 publications
“…For instance, our results are in line with the so-called "no free lunch theorems" in machine learning (Wolpert, 1996), according to which all training algorithms have the same expected performance, when a suitable average over all possible supervised machine learning problems is taken. These theorems are another important formalization of philosophical principles (Schurz, 2017;Lauc, 2020) and would deserve further investigation in connection with our analysis. Moreover, the connections with philosophical results on inductive learning and truth approximation could be explored from the point of view of machine learning and SLT.…”
Section: Discussion
confidence: 94%
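The no-free-lunch claim quoted above can be made concrete with a toy computation: averaged over all possible target functions on a tiny domain, two opposite learners achieve exactly the same off-training-set accuracy. The domain, learners, and function names below are illustrative assumptions, not taken from Wolpert (1996) or the chapter; this is a minimal sketch of the averaging argument, not a proof.

```python
from itertools import product

# Toy no-free-lunch demonstration: on a 3-point domain, enumerate all
# 2^3 = 8 binary target functions, train on two points, and test on the
# held-out third point. Two opposite learners average the same accuracy.
X = [0, 1, 2]                     # tiny input domain (illustrative)
train_x, test_x = X[:2], X[2]

def learner_zero(train):          # always predicts 0 on unseen points
    return lambda x: 0

def learner_one(train):           # always predicts 1 on unseen points
    return lambda x: 1

def avg_test_accuracy(learner):
    """Accuracy on the held-out point, averaged over all target functions."""
    scores = []
    for labels in product([0, 1], repeat=len(X)):   # all 8 targets
        target = dict(zip(X, labels))
        train = [(x, target[x]) for x in train_x]
        h = learner(train)
        scores.append(1.0 if h(test_x) == target[test_x] else 0.0)
    return sum(scores) / len(scores)

print(avg_test_accuracy(learner_zero))  # 0.5
print(avg_test_accuracy(learner_one))   # 0.5
```

Each learner is right on exactly half of the possible targets, so no learner can beat another once performance is averaged uniformly over all problems; any advantage must come from assumptions about which problems actually occur.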
“…[26][27][28][29] The ability to use pathologists' conceptual frameworks 60,73,79,80 (in addition to pixel-derived recurrent patterns learned from training datasets) may be needed if widely generalizable ML models are to be developed. 118,126,127 And even if, for practical reasons, it is assumed that future events (such as those an ML model is expected to predict) will always resemble past events (e.g., those used to create training datasets), it is important to be mindful that "universally" generalizable models may still be unachievable once the generalization problem is approached from a philosophical perspective and other problems of induction, such as those explained by Lauc, 128 are contemplated.…”
Section: After ML Models' Deployment
confidence: 99%
“…32 This can become an iterative process for each institution, considering that the ML model's performance would need to be monitored as new cases with previously unseen relevant characteristics continually arrive to be assessed. 26,[121][122][123][130][131][132] Although the iterative nature of this process may not make ML models "universally" generalizable, 26,125,128 it would certainly boost their learning capabilities by leveraging their ability to falsify prediction rules that lack empirical adequacy (as postulated by Buchholz and Raidl). 133 If some major technical challenges are overcome, [134][135][136][137] and these steps can be done automatically, 122,132,138,139 a site-specific, autonomous, endless self-learning process could eventually be developed.…”
Section: After ML Models' Deployment
confidence: 99%
“…Unfortunately, the no-free-lunch theorem [79] and some pathological examples [80] show that the choice of an algorithm strongly depends on the specific application, and that there is no way to choose the best solution a priori. Nevertheless, keeping the approach as simple as possible [81], and keeping in mind that the more data you have, the more complex the algorithm can be (and vice versa) [82], [83], are always good guidelines. Moreover, the experience of the data scientists [84], [85] can further improve the quality of the resulting selection strategy.…”
Section: A. Choosing the Algorithm
confidence: 99%
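The guideline quoted above, that model complexity should scale with the amount of available data, can be sketched with a small simulation. The target function sin(x), the noise level, and the polynomial degrees below are illustrative assumptions, not taken from [82] or [83].

```python
import numpy as np

# Hedged sketch: fit a rigid model (degree-1 polynomial) and a flexible
# one (degree-9 polynomial) to noisy samples of sin(x), and compare their
# error on a dense noise-free grid at two training-set sizes.
rng = np.random.default_rng(0)

def heldout_mse(n_train, degree):
    """MSE of a fitted polynomial against the true sin(x) on a test grid."""
    x = rng.uniform(0.0, 3.0, n_train)
    y = np.sin(x) + rng.normal(0.0, 0.1, n_train)
    coeffs = np.polyfit(x, y, degree)
    x_test = np.linspace(0.0, 3.0, 200)
    return float(np.mean((np.polyval(coeffs, x_test) - np.sin(x_test)) ** 2))

# The flexible model typically only pays off once enough data is available;
# with few samples it tends to fit the noise.
for n in (12, 300):
    print(f"n={n}: degree-1 MSE={heldout_mse(n, 1):.3f}, "
          f"degree-9 MSE={heldout_mse(n, 9):.3f}")
```

The exact numbers depend on the random seed; the point is the qualitative pattern the guideline describes, not any particular value.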