2020
DOI: 10.1371/journal.pcbi.1007924

A validation framework for neuroimaging software: The case of population receptive fields

Abstract: Neuroimaging software methods are complex, making it a near certainty that some implementations will contain errors. Modern computational techniques (i.e., public code and data repositories, continuous integration, containerization) enable the reproducibility of the analyses and reduce coding errors, but they do not guarantee the scientific validity of the results. It is difficult, nay impossible, for researchers to check the accuracy of software by reading the source code; ground truth test datasets are needed…

Citation Types: 3 supporting, 59 mentioning, 0 contrasting

Year Published

2020
2020
2024
2024

Publication Types

Select...
5
1
1

Relationship

3
4

Authors

Journals

Cited by 41 publications (62 citation statements)
References 39 publications
“…While our results on the validity of the estimates are in line with the results reported previously ( Lerma-Usabiaga et al, 2020 ), here we focused on understanding how the interaction between the pRF model (e.g., Gaussian) and cost function underlie the previously reported variability of the pRF size. In this respect, while interesting, we did not consider the effect that the hemodynamic response function has on the estimates (which we considered fixed and equal across methods).…”
Section: Results (supporting)
confidence: 85%
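The statement above contrasts the pRF model (e.g., Gaussian) with the cost function used to fit it. As a point of reference, here is a minimal sketch in Python/NumPy of that standard formulation, not taken from any of the cited toolboxes: an isotropic 2-D Gaussian receptive field whose overlap with the stimulus aperture at each time point gives the predicted time course, scored with a sum-of-squared-error cost. Function names, grid, and stimulus shapes are illustrative assumptions; HRF convolution is deliberately omitted, mirroring the fixed-HRF choice described in the quote.

```python
# Minimal illustrative sketch of a Gaussian pRF model with an SSE cost function.
# Not the implementation used in any of the cited papers; names and shapes are
# assumptions made for this example. HRF convolution is deliberately omitted.
import numpy as np

def gaussian_prf(x0, y0, sigma, X, Y):
    """Isotropic 2-D Gaussian receptive field on a coordinate grid (degrees)."""
    return np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma ** 2))

def predicted_timecourse(prf, stimulus):
    """Overlap of the pRF with a binary stimulus aperture at each time point.

    stimulus: array of shape (n_timepoints, n_rows, n_cols) with values in {0, 1}.
    """
    return (stimulus * prf).sum(axis=(1, 2))

def sse_cost(params, stimulus, data, X, Y):
    """Sum-of-squared-error between the measured data and the scaled prediction."""
    x0, y0, sigma = params
    pred = predicted_timecourse(gaussian_prf(x0, y0, sigma, X, Y), stimulus)
    design = np.column_stack([pred, np.ones_like(pred)])  # amplitude + baseline
    beta, *_ = np.linalg.lstsq(design, data, rcond=None)
    return float(np.sum((data - design @ beta) ** 2))
```

Fitting then amounts to minimizing sse_cost over (x0, y0, sigma), e.g. with scipy.optimize.minimize; swapping the cost term (for instance, to a correlation-based or robust loss) while keeping the same Gaussian model is the kind of model-by-cost-function interaction the quoted statement refers to.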
“…Note that different software implementations of the same algorithm are available and they may differ in computational efficiency, number of hyperparameters and programming language. For an exhaustive comparison between software tools for estimating the pRF see (Lerma-Usabiaga et al, 2020).…”
Section: pRF Estimation Methods (mentioning)
confidence: 99%
“…One non-neural factor that has a large effect on the estimated pRF size (and less so for pRF position) is the mismatch between the assumed and actual underlying hemodynamic response function (HRF). This mismatch can cause both over- and underestimation of pRF sizes, depending on the experimental design or whether the spatial or temporal component of the assumed HRF is inaccurate (Dumoulin & Wandell, 2008; Lerma-Usabiaga, Benson, Winawer, & Wandell, 2020). Since our fMRI session used stimuli that swept across the visual field in both directions for a given orientation, we believe that our experimental design minimized any bias in the estimated pRF size caused by the sluggish HRF.…”
Section: Discussion (mentioning)
confidence: 99%
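To make the HRF-mismatch point concrete, below is a small, purely illustrative Python/NumPy sketch (not code from the cited papers): the same rectangular "neural" response is convolved with two double-gamma HRFs that differ only in time-to-peak, and the residual between the two predicted BOLD signals is what a fixed-HRF pRF fit would have to absorb, typically by adjusting pRF size. The double-gamma parameterization is the common SPM-style form; the specific peak times, TR, and stimulus timing are assumptions made for this example.

```python
# Illustrative sketch of an HRF mismatch: the same neural response convolved
# with an assumed vs. a (slightly different) true HRF. The unexplained residual
# is what a fixed-HRF pRF fit must absorb, e.g. by biasing the estimated size.
# Peak times, TR, and stimulus timing are assumptions made for this example.
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=6.0):
    """Common SPM-style double-gamma HRF sampled at times t (seconds)."""
    return gamma.pdf(t, peak) - gamma.pdf(t, undershoot) / ratio

tr = 1.0
t_hrf = np.arange(0, 32, tr)
hrf_assumed = double_gamma_hrf(t_hrf)         # HRF held fixed in the fit
hrf_true = double_gamma_hrf(t_hrf, peak=7.0)  # true HRF peaks slightly later

neural = np.zeros(200)
neural[40:60] = 1.0                           # toy response to one bar pass

bold_true = np.convolve(neural, hrf_true)[: neural.size]
bold_assumed = np.convolve(neural, hrf_assumed)[: neural.size]

residual = bold_true - bold_assumed           # left for spatial parameters to absorb
```

Slower designs, such as the 31 s bar sweeps mentioned in the next quote, reduce how much of this residual projects onto the pRF-size parameter, which is the argument made in the quoted discussions.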
“…We also did not model the spatial component of the HRF. However, our presentation time of sweeping bars was relatively long (31s/bar sweep), which largely reduces the impact of pRF size biases caused by the HRF mismatch (Lerma-Usabiaga et al, 2020).…”
Section: Discussion (mentioning)
confidence: 99%