This study empirically investigates consumer perceptions of remanufactured consumer products in closed-loop supply chains. A multi-study approach led to increasing levels of measure refinement and facilitated examination of various assumptions researchers have made about the consumer market for remanufactured products. Based in part on the measure-building studies, an experimental study examined remanufactured product perceptions from a national panel of consumers. The consumers responded to remanufactured product descriptions that manipulated price discount and brand equity. The results indicate that discounting had a consistently positive, linear effect on remanufactured product attractiveness. Curiously, the brand equity manipulation proved less important to consumers than specific remanufactured product quality perceptions. The results also show that green consumers, and consumers who consider remanufactured products green, typically found remanufactured products significantly more attractive. Finally, the findings introduce the concept of negative attribute perceptions, such as disgust, which had a significantly detrimental effect on remanufactured product attractiveness.
This paper examines attention checks and manipulation validations to detect inattentive respondents in primary empirical data collection. These prima facie attention checks range from simple mechanisms, such as reverse scaling first proposed a century ago, to more recent and involved methods, such as evaluating response patterns and timed responses via online data-capture tools. The attention-check validations also range from easily implemented mechanisms, such as automatic detection through directed queries, to highly intensive investigation of responses by the researcher. The latter has the potential to introduce inadvertent researcher bias, as the researcher's judgment may impact the interpretation of the data. The empirical findings of the present work reveal that construct and scale validations show consistently significant improvement in the fit statistics, a finding of great use for researchers working predominantly with scales and constructs for their empirical models. However, based on the rudimentary experimental models employed in the analysis, attention checks generally do not show a consistent, systematic improvement in the significance of test statistics for experimental manipulations. This latter result indicates that, by their very nature, attention checks may trigger an inherent trade-off between the loss of sample subjects (lowered power and increased Type II error) and the potential of capitalizing on chance alone (the possibility that the previously significant results were in fact the result of Type I error). The analysis also shows that the attrition rates due to attention checks (upwards of 70% in some observed samples) are far larger than typically assumed. Such loss rates raise the specter that studies not validating attention may inadvertently increase their Type I error rate. The manuscript provides general guidelines for various attention checks, discusses the psychological nuances of the methods, and highlights the delicate balance among incentive alignment, monetary compensation, and the subsequently triggered mood of respondents.

"To avoid any space error or any tendency to a stereotyped response, it seems desirable to have the different statements so worded that about one-half of them have one end of the attitude continuum corresponding to the left or upper part of the reaction alternatives … These two kinds of statements ought to be distributed throughout the attitude test in a chance or haphazard manner." (Rensis Likert, 1932)
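The abstract does not report an implementation, but the screening logic it describes (directed-query checks, reverse-scaled item consistency, timed responses, and the resulting attrition) can be made concrete with a minimal sketch. The field names, cutoffs, and sample records below are hypothetical illustrations, not the authors' instrument or data; the point is only to show how quickly such filters shrink a sample.

```python
# Minimal sketch of prima facie attention-check screening.
# All field names, thresholds, and records are hypothetical.

# Each record: a directed-query item (instructed answer = top scale point),
# a positively worded item, its reverse-scaled counterpart, and total time.
respondents = [
    {"id": 1, "directed_query": 5, "item_pos": 4, "item_rev": 2, "seconds": 310},
    {"id": 2, "directed_query": 3, "item_pos": 5, "item_rev": 1, "seconds": 95},
    {"id": 3, "directed_query": 5, "item_pos": 5, "item_rev": 5, "seconds": 280},
    {"id": 4, "directed_query": 5, "item_pos": 2, "item_rev": 4, "seconds": 40},
]

SCALE_MAX = 5      # 5-point Likert scale (assumed)
MIN_SECONDS = 60   # flag implausibly fast completions (assumed cutoff)

def passes_checks(r):
    """Return True if a respondent passes all three screening rules."""
    # 1. Directed query: respondent followed the explicit instruction.
    if r["directed_query"] != SCALE_MAX:
        return False
    # 2. Reverse scaling: after re-coding, the reversed item should roughly
    #    agree with its positively worded counterpart (within 1 scale point).
    recoded = SCALE_MAX + 1 - r["item_rev"]
    if abs(recoded - r["item_pos"]) > 1:
        return False
    # 3. Timed response: drop implausibly fast completions.
    if r["seconds"] < MIN_SECONDS:
        return False
    return True

retained = [r for r in respondents if passes_checks(r)]
attrition = 1 - len(retained) / len(respondents)
print(f"Retained {len(retained)} of {len(respondents)} respondents "
      f"(attrition {attrition:.0%}).")
```

Under these made-up cutoffs, three of the four illustrative respondents would be screened out. That is the trade-off the abstract highlights: each additional filter improves apparent data quality but reduces the retained sample, lowering power and raising Type II risk, while the unscreened alternative leaves the Type I risk the authors warn about.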