2019 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)
DOI: 10.1109/spmb47826.2019.9037840
Issues in the Reproducibility of Deep Learning Results

Cited by 6 publications (4 citation statements) · References 6 publications
“…Using such a platform has the benefit of improving the reusability and reproducibility of previous ML research. In another work, the authors set the same random seed in order to measure the difference in performance between different random seeds, and between PyTorch and TensorFlow [17]. They show a 7% difference between random seeds, which is similar to the aforementioned results.…”
Section: Related Work (supporting)
confidence: 73%
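The seed-fixing protocol the excerpt describes can be illustrated with a minimal stdlib-only sketch. The cited study [17] runs real PyTorch and TensorFlow training; `train_score` below is a hypothetical stand-in for such a run, used only to show why fixing the seed makes a stochastic experiment repeatable while changing it shifts the result:

```python
import random

def train_score(seed: int) -> float:
    """Stand-in for a training run whose outcome depends only on the seed."""
    rng = random.Random(seed)
    # The average of 100 pseudo-random draws plays the role of a model's accuracy.
    return sum(rng.random() for _ in range(100)) / 100

# Fixing the seed makes the "experiment" exactly repeatable ...
assert train_score(42) == train_score(42)
# ... while a different seed generally yields a different result,
# mirroring the seed-to-seed performance variation the excerpt describes.
assert train_score(42) != train_score(7)
```

In a real framework the same idea requires seeding every random source (Python, NumPy, and the framework's own generators), which is why cross-framework comparisons at a "same seed" are still not apples-to-apples.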
“…A similar study for ML applied in medicine shows a 9% reproducibility [15]. Other researchers have also called for better reproducibility in ML [2], [16] and DL [17], [18].…”
Section: Introduction (mentioning)
confidence: 84%
“…It would be worth conducting further research to determine the extent to which the lack of code and data sharing is a general trend, rather than specific to the domain of CVD. It is understandable that patient data is sensitive and the developed algorithms are intellectual property [19]. However, sharing does not necessarily mean providing unlimited access for free; rather, it means using a set of protocols and appropriate licenses that enable other researchers to use and cite the work as needed.…”
Section: Discussion (mentioning)
confidence: 99%
“…At minimum, the specifics about the CPU and GPU. This is to indicate the amount of compute necessary for the project, but also for the sake of replicability issues due to the non-deterministic nature of the GPU (Jean-Paul et al., 2019). Moreover, Dodge et al. (2019) demonstrate that test performance scores alone are insufficient for claiming the dominance of one model over another, and argue for reporting additional performance details on validation data as a function of computation budget, which can also estimate the amount of computation required to obtain a given accuracy.…”
Section: Hardware Requirements (mentioning)
confidence: 99%
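A report along the lines the excerpt recommends can be assembled from the standard library alone. `environment_report` below is a hypothetical helper, a sketch rather than a complete solution: real GPU model and driver details would come from vendor tooling (e.g. nvidia-smi) or the ML framework, neither of which is assumed here:

```python
import platform
import sys

def environment_report() -> dict:
    """Collect minimal hardware/software details worth publishing with results."""
    return {
        "python": sys.version.split()[0],
        "system": f"{platform.system()} {platform.release()}",
        "machine": platform.machine(),             # e.g. x86_64, arm64
        "processor": platform.processor() or "unknown",
        # GPU model and driver version would be appended from vendor tooling,
        # since GPU non-determinism is a known source of replication failure.
    }

for key, value in environment_report().items():
    print(f"{key}: {value}")
```

Emitting such a block alongside reported scores costs a few lines and directly addresses the replicability concern the excerpt raises.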