The recently developed Function Acquisition Speed Test (FAST) represents an effort to assess the relative strength of stimulus relations by traditional behavior-analytic means (i.e., acquisition curves). The current study was the first application of the FAST to the assessment of natural, preexperimentally established stimulus relations. Specifically, this experiment assessed the sensitivity of the FAST to pervasive gender stereotypes of men as stereotypically "masculine" (e.g., dominant or competitive) and women as stereotypically "feminine" (e.g., nurturing or gentle). Thirty participants completed a FAST procedure consisting of two testing blocks. In one block, functional response classes were established between classes of stimuli assumed to be stereotype-consistent (i.e., men-masculine and women-feminine), and in the other, between classes of stimuli assumed to be stereotype-inconsistent (i.e., men-feminine and women-masculine). Differences in the rate of class acquisition across the two blocks were quantified using cumulative record-type scoring procedures plotting correct responses as a function of time. Acquisition rates were significantly faster (i.e., displayed steeper learning curves) for the stereotype-consistent block relative to the stereotype-inconsistent block. Corroborating stereotype effects were observed on an Implicit Association Test containing identical stimuli.
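As an illustration of the cumulative-record scoring described in this abstract, the sketch below (hypothetical data and function names, not the authors' published code) fits a least-squares slope to cumulative correct responses over elapsed time for each block and takes the slope difference as the acquisition-rate contrast; a positive difference would indicate steeper acquisition in the stereotype-consistent block.

```python
# Minimal sketch of cumulative-record ("slope") scoring for a FAST block.
# Hypothetical data and names; the published scoring procedure may differ in detail.
import numpy as np

def acquisition_slope(times_s, correct):
    """Least-squares slope of cumulative correct responses against elapsed time (per second)."""
    cumulative = np.cumsum(np.asarray(correct, dtype=int))
    slope, _intercept = np.polyfit(np.asarray(times_s, dtype=float), cumulative, deg=1)
    return slope

# Toy data standing in for one participant's two testing blocks.
rng = np.random.default_rng(1)
consistent_times = np.arange(1, 51) * 1.5      # faster responding overall
consistent_correct = rng.random(50) < 0.85     # higher accuracy
inconsistent_times = np.arange(1, 51) * 2.5    # slower responding overall
inconsistent_correct = rng.random(50) < 0.60   # lower accuracy

slope_diff = (acquisition_slope(consistent_times, consistent_correct)
              - acquisition_slope(inconsistent_times, inconsistent_correct))
print(f"Acquisition-slope difference (consistent - inconsistent): {slope_diff:.3f}")
```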
Implicit measures have been hypothesized to allow researchers to ascertain the existence and strength of relations between stimuli, often in the context of research on attitudes. However, little controlled behavioral research has focused on whether stimulus relations, and the degree of relatedness within such relations, are indexed by implicit measures. The current study examined this issue using a behavior-analytic implicit-style stimulus relation indexing procedure known as the Function Acquisition Speed Test (FAST). Using a matching-to-sample (MTS) procedure to train stimulus equivalence relations between nonsense syllables, the number of iterations of the procedure was varied across groups of participants, hence controlling stimulus relatedness in the resulting equivalence relations. Following final exposure to the MTS procedure, participants completed a FAST. Another group of participants was exposed to a FAST procedure with word pairs of known relatedness. Results showed that increasing relatedness resulted in a linear increase in FAST effect size. These results provide the first direct empirical support for a key process-level assumption of the implicit literature, and offer a behavior-analytic paradigm within which to understand these effects. These results also suggest that the FAST may be a viable procedure for the quantification of emergent stimulus relations in stimulus equivalence training.
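The linear relation between relatedness and FAST effect size reported in this abstract could be examined with a simple group-level trend analysis. The sketch below uses hypothetical data and an assumed design (one FAST difference score per participant, groups differing only in the number of MTS training iterations); it is not the authors' analysis code.

```python
# Hypothetical check for a linear trend of FAST effect across training-iteration groups.
import numpy as np
from scipy.stats import linregress

# Toy data: MTS training iterations per group and one FAST difference score per participant.
iterations = np.repeat([1, 3, 5], 10)           # three groups of 10 participants
rng = np.random.default_rng(2)
fast_scores = 0.02 * iterations + rng.normal(0, 0.03, iterations.size)

fit = linregress(iterations, fast_scores)
print(f"slope = {fit.slope:.4f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")
```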
The social construction of gender-as-binary plays an important role within many contemporary theories of gender inequality. However, to date, the field of psychology has struggled with the operationalization and assessment of binarist ideologies. The current article proposes a technical framework for the analysis of the gender binary and assesses the suitability of the Implicit Relational Assessment Procedure (IRAP) as a measure of binarist gender beliefs. Forty-seven undergraduate students (26 female; M age = 23.84) completed two IRAPs assessing the coordination of certain traits exclusively with women and others exclusively with men. Effects found on the IRAP were in the expected direction (i.e., relating men but not women with certain traits and women but not men with other traits). In addition, the traits ascribed to men within the IRAP were evaluated as more hirable by a large majority of participants (83%) on an explicit preference task. The results therefore support the arguments that, first, gender traits do seem to be framed oppositionally in language and, second, this binary may underpin existing gender hierarchies in certain contexts.
The Implicit Relational Assessment Procedure (IRAP) has been used to assess the probability of arbitrarily applicable relational responding or as an indirect measure of implicit attitudes. To date, IRAP effects have commonly been quantified using the D-IRAP scoring algorithm, which was derived from Greenwald, Nosek, and Banaji's (2003) D effect size measure. In this article, we highlight the difference between an effect size measure and a scoring algorithm, discuss the drawbacks associated with D, and propose an alternative: a probabilistic, semiparametric measure referred to as the Probabilistic Index (PI; Thas, De Neve, Clement, & Ottoy, 2012). Using a relatively large IRAP dataset, we demonstrate that the PI is more robust than D to the influence of outliers and skew (which are typical of reaction-time data) and improves internal consistency. Finally, we conclude that PI models, in addition to producing point-estimate scores, can also provide confidence intervals, significance tests, and the possibility of including covariates, all of which may aid single-subject design studies.
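To make the contrast concrete, here is a minimal sketch (hypothetical data; a simplification, not the published D-IRAP or PI implementation) comparing a D-style score, computed from means and the standard deviation of all latencies, with a rank-based probabilistic index estimated from the Mann-Whitney U statistic; a single extreme latency distorts the mean/SD-based score while leaving the rank-based index essentially unchanged.

```python
# Simplified illustration (hypothetical data) of a D-style score versus a
# rank-based probabilistic index for two blocks of response latencies.
import numpy as np
from scipy.stats import mannwhitneyu

def d_style_score(rt_inconsistent, rt_consistent):
    """Mean latency difference divided by the SD of all latencies (simplified D)."""
    pooled_sd = np.std(np.concatenate([rt_inconsistent, rt_consistent]), ddof=1)
    return (np.mean(rt_inconsistent) - np.mean(rt_consistent)) / pooled_sd

def probabilistic_index(rt_inconsistent, rt_consistent):
    """Estimate P(inconsistent latency > consistent latency); ties count as 0.5."""
    u_stat, _p = mannwhitneyu(rt_inconsistent, rt_consistent, alternative="two-sided")
    return u_stat / (len(rt_inconsistent) * len(rt_consistent))

# Toy latencies in milliseconds, with one extreme outlier in the inconsistent block.
rng = np.random.default_rng(3)
consistent = rng.normal(700, 120, size=40)
inconsistent = np.append(rng.normal(850, 130, size=39), 6000.0)

print(f"D-style score:       {d_style_score(inconsistent, consistent):.2f}")
print(f"Probabilistic Index: {probabilistic_index(inconsistent, consistent):.2f}")
```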