The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for these can lead to serious biases in estimates of the psychometric function's parameters, and we illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made software available to implement our methods.
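To make the constrained maximum-likelihood approach described above concrete, the sketch below fits a two-alternative forced-choice (2AFC) psychometric function in which the lapse rate is a free parameter constrained to a small interval rather than fixed at zero. This is a minimal Python illustration under assumptions of ours, not the authors' published software: the cumulative-Gaussian core, the upper bound of 0.06 on the lapse rate, and the example data are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def psychometric(x, alpha, beta, gamma, lam):
    # Guess rate gamma (0.5 for 2AFC), lapse rate lam, and a
    # cumulative-Gaussian core with location alpha and scale beta.
    return gamma + (1.0 - gamma - lam) * norm.cdf(x, loc=alpha, scale=beta)

def neg_log_likelihood(params, x, n_correct, n_total, gamma=0.5):
    alpha, beta, lam = params
    p = psychometric(x, alpha, beta, gamma, lam)
    p = np.clip(p, 1e-9, 1.0 - 1e-9)  # guard against log(0)
    return -np.sum(n_correct * np.log(p)
                   + (n_total - n_correct) * np.log(1.0 - p))

# Hypothetical data: stimulus levels, correct responses, trials per level.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
k = np.array([27, 30, 38, 45, 49])
n = np.full(5, 50)

# Constrained fit: lam may vary, but only within [0, 0.06]. Letting the
# lapse rate move off zero is what protects the threshold and slope
# estimates from bias caused by occasional stimulus-independent errors.
fit = minimize(neg_log_likelihood, x0=[2.0, 1.0, 0.01], args=(x, k, n),
               bounds=[(None, None), (1e-3, None), (0.0, 0.06)])
alpha_hat, beta_hat, lam_hat = fit.x
```

Fixing the lapse rate at zero instead (bounds of (0.0, 0.0)) reproduces the failure mode described above: a single lapse at a high stimulus intensity can pull the fitted parameters substantially away from their true values.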
The psychometric function relates an observer's performance to an independent variable, usually a physical quantity of an experimental stimulus. Even if a model is successfully fit to the data and its goodness of fit is acceptable, experimenters require an estimate of the variability of the parameters to assess whether differences across conditions are significant. Accurate estimates of variability are difficult to obtain, however, given the typically small size of psychophysical data sets: Traditional statistical techniques are only asymptotically correct and can be shown to be unreliable in some common situations. Here and in our companion paper (Wichmann & Hill, 2001), we suggest alternative statistical techniques based on Monte Carlo resampling methods. The present paper's principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes. First, we outline the basic bootstrap procedure and argue in...

The performance of an observer on a psychophysical task is typically summarized by reporting one or more response thresholds (stimulus intensities required to produce a given level of performance) and by a characterization of the rate at which performance improves with increasing stimulus intensity. These measures are derived from a psychometric function, which describes the dependence of an observer's performance on some physical aspect of the stimulus.

Fitting psychometric functions is a variant of the more general problem of modeling data. Modeling data is a three-step process: First, a model is chosen, and its parameters are adjusted to minimize the appropriate error metric or loss function. Second, error estimates of the parameters are derived. Third, the goodness of fit between the model and the data is assessed. This paper is concerned with the second of these steps, the estimation of variability in fitted parameters and in quantities derived from them. Our companion paper (Wichmann & Hill, 2001) illustrates how to fit psychometric functions while avoiding bias resulting from stimulus-independent lapses, and how to evaluate goodness of fit between model and data.

We advocate the use of Efron's bootstrap method, a particular kind of Monte Carlo technique, for the problem of estimating the variability of parameters, thresholds, and slopes of psychometric functions (Efron, 1979, 1982; Efron & Gong, 1983; Efron & Tibshirani, 1991, 1993). Bootstrap techniques are not without their own assumptions and potential pitfalls. In the course of this paper, we discuss these and examine their effect on the estimates of variability we obtain. We describe and examine the use of parametric bootstrap techniques in finding confidence intervals for thresholds and slopes. We then explore the sensitivity of the estimated confidence-interval widths to (1) sampling schemes, (2) mismatch of the objective function, and (3) the accuracy of the originally fitted parameters. The last of these is particularly important, since it provides a test of the validity of the bridging assumption.
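Continuing the sketch above, a parametric bootstrap of the kind advocated here can be illustrated in a few lines: binomial data are resampled from the fitted function, each synthetic data set is refit, and the spread of the refitted thresholds yields a confidence interval. Again this is a hedged sketch, not the authors' implementation: the function names, the 0.75 threshold criterion, and the number of bootstrap replications are our assumptions, and the plain percentile interval shown is the simplest of the interval constructions one might use.

```python
def threshold(alpha, beta, gamma, lam, criterion=0.75):
    # Stimulus level at which the fitted function reaches `criterion`.
    p_core = (criterion - gamma) / (1.0 - gamma - lam)
    return norm.ppf(p_core, loc=alpha, scale=beta)

def bootstrap_threshold_ci(x, n, params, gamma=0.5, n_boot=2000, seed=0):
    # Parametric bootstrap: simulate binomial data from the fitted
    # function, refit each synthetic data set, and collect the
    # threshold implied by every refit.
    rng = np.random.default_rng(seed)
    alpha, beta, lam = params
    p_fit = psychometric(x, alpha, beta, gamma, lam)
    thresholds = []
    for _ in range(n_boot):
        k_sim = rng.binomial(n, p_fit)  # one synthetic data set
        refit = minimize(neg_log_likelihood, x0=params, args=(x, k_sim, n),
                         bounds=[(None, None), (1e-3, None), (0.0, 0.06)])
        a_hat, b_hat, l_hat = refit.x
        thresholds.append(threshold(a_hat, b_hat, gamma, l_hat))
    return np.percentile(thresholds, [2.5, 97.5])  # 95% percentile interval

ci_low, ci_high = bootstrap_threshold_ci(x, n, fit.x)
```

Because the simulated data sets are generated from the originally fitted parameters, the resulting interval is only as trustworthy as that original fit, which is exactly the bridging assumption whose sensitivity the paper examines.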