Big data initiatives have gained popularity for leveraging large samples of subjects to study a wide range of effect magnitudes in the brain. In contrast, most task-based FMRI designs involve a relatively small number of subjects, so the resulting parameter estimates may have compromised precision. However, little attention has been paid to another important dimension of experimental design that can equally boost a study's statistical efficiency: the trial sample size. Here, we systematically explore the factors that affect effect uncertainty, drawing on evidence from hierarchical modeling, simulations, and an FMRI dataset of 42 subjects who each completed a large number of trials of a commonly used cognitive task. We find that, because cross-trial variability is relatively large: 1) trial sample size has nearly the same impact as subject sample size on statistical efficiency; 2) increasing both trials and subjects improves statistical efficiency more effectively than focusing on subjects alone; 3) trial sample size can be traded off against subject sample size to improve the cost-effectiveness of an experimental design; and 4) with small trial sample sizes, trial-level modeling, rather than the common practice of condition-level modeling through summary statistics, may be necessary to accurately assess the standard error of an effect estimate. Finally, we offer practical recommendations for improving experimental designs across neuroimaging and behavioral studies.