In a web experiment, participants were randomly assigned to two semantic differentials constructed either from discrete 5-point ordinal rating scales or from continuous visual analogue scales (VASs) with 250 gradations. With VASs, respondents adjusted their ratings more often to maximize the precision of their answers, which had a beneficial effect on data quality. No side effects such as differences in means, higher dropout, more nonresponse, or longer response times were observed. Overall, the combination of semantic differentials and VASs offers a number of advantages. Potential for further research is discussed.
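A VAS of the kind described here can be implemented as a plain track element that quantizes the respondent's click position into one of the 250 gradations. The following is a minimal sketch under assumed markup (the element IDs `vas-track` and `vas-marker` are illustrative), not the instrument used in the study:

```typescript
// Minimal point-and-click VAS sketch: a horizontal track whose click
// position is quantized to one of 250 gradations (0..249).
const GRADATIONS = 250; // resolution reported in the abstract

const track = document.getElementById("vas-track") as HTMLDivElement;   // assumed markup
const marker = document.getElementById("vas-marker") as HTMLDivElement; // assumed markup

let rating: number | null = null; // null until the respondent answers

track.addEventListener("click", (event: MouseEvent) => {
  const rect = track.getBoundingClientRect();
  // Relative click position clamped to [0, 1], then quantized to a gradation.
  const fraction = Math.min(Math.max((event.clientX - rect.left) / rect.width, 0), 1);
  rating = Math.min(Math.floor(fraction * GRADATIONS), GRADATIONS - 1);

  // Move the marker so respondents can see, and readjust, their answer.
  marker.style.left = `${(rating / (GRADATIONS - 1)) * 100}%`;
});
```

Because each click simply overwrites the previous value, readjusting an answer costs one click, which is consistent with the abstract's observation that VAS respondents adjusted their ratings more often.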
This article provides evidence of a substantial difference between slider scales and visual analogue scales (VASs), two types of rating scales used in web surveys that are frequently confused. In an experimental design, both scales were compared to standard HTML radio buttons, each offering three, five, or seven response options. Slider scales negatively affect the response rate (especially on mobile devices), the sample composition, and the distribution of values, and they also increase response times. VASs and radio buttons, by contrast, can be used without negative side effects, even on touch screen devices like smartphones. Overall, it is recommended to avoid slider scales. Because small differences between rating scales (here, drag and drop versus point and click) have a strong influence on data collection, an optimal implementation of the VAS is suggested. However, discrete variables with a moderate number of response options should be measured with radio button scales unless a small screen size (for example, on smartphones) requires economical use of space.
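The drag-and-drop versus point-and-click distinction drawn here comes down to the input events each scale listens for. The sketch below illustrates that difference under assumed markup (the element IDs and the shared `valueFromX` helper are hypothetical); it is not the implementation used in the experiment:

```typescript
// Hypothetical helper shared by both scales: map an x coordinate on a
// track element to a value in [0, 100].
function valueFromX(track: HTMLElement, clientX: number): number {
  const rect = track.getBoundingClientRect();
  const fraction = Math.min(Math.max((clientX - rect.left) / rect.width, 0), 1);
  return Math.round(fraction * 100);
}

// Drag-and-drop slider: the value changes only while the handle is held
// and moved, i.e. a press-move-release gesture is required.
const sliderTrack = document.getElementById("slider-track") as HTMLDivElement;   // assumed markup
const sliderHandle = document.getElementById("slider-handle") as HTMLDivElement; // assumed markup
let dragging = false;
sliderHandle.addEventListener("pointerdown", () => { dragging = true; });
document.addEventListener("pointerup", () => { dragging = false; });
document.addEventListener("pointermove", (e: PointerEvent) => {
  if (dragging) console.log("slider value:", valueFromX(sliderTrack, e.clientX));
});

// Point-and-click VAS: a single click (or tap) anywhere on the track
// answers the item; no handle has to be grabbed first.
const vasTrack = document.getElementById("vas-track") as HTMLDivElement; // assumed markup
vasTrack.addEventListener("click", (e: MouseEvent) => {
  console.log("VAS value:", valueFromX(vasTrack, e.clientX));
});
```

The slider requires a sustained press-move-release gesture, whereas the VAS needs a single tap, which may help explain why the slider, but not the VAS, performed poorly on touch screens.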
Slider scales and radio button scales were experimentally compared in horizontal and vertical orientation. Slider scales led to statistically significantly higher break-off rates (odds ratio = 6.9) and substantially longer response times. Problems with slider scales were especially prevalent among participants with below-average education, suggesting that the slider format demands more prior knowledge or imposes a higher cognitive load. An alternative explanation, technology-dependent sampling (Buchanan & Reips, 2001), cannot fully account for the current results. The authors clearly advise against the use of Java-based slider scales and advocate low-tech solutions for the design of Web-based data collection. Orientation on screen had no observable effect on data quality or the usability of rating scales. Implications of item format for Web-based surveys are discussed.
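For readers unfamiliar with the statistic, the reported odds ratio compares the odds of breaking off under each scale format. A minimal sketch of the computation follows; the counts in the usage comment are hypothetical illustrations, not the study's data:

```typescript
// Odds ratio of break-off between two conditions:
// (breakoffA / completeA) / (breakoffB / completeB).
// With A = slider and B = radio buttons, a value of 6.9 means the odds
// of breaking off were 6.9 times higher in the slider condition.
function oddsRatio(
  breakoffA: number, completeA: number,
  breakoffB: number, completeB: number,
): number {
  return (breakoffA / completeA) / (breakoffB / completeB);
}

// Hypothetical counts for illustration only (not the study's data):
console.log(oddsRatio(69, 100, 10, 100)); // 6.9
```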
In an experiment covering personal computers, tablets, and mobile phones, the number of scale points (5, 7, or 11) and the response format (slider bars or buttons) are varied to examine differences in mean scores and nonresponse. The total number of "not applicable" answers does not vary significantly. Personal computers show the lowest item nonresponse, followed by mobile phones and tablets, and yield a lower mean score than mobile phones. Slider bars show lower mean scores and more nonresponse than buttons, indicating that they are more prone to bias and more difficult to use. Slider bars, which work on a drag-and-drop principle, perform worse than both visual analogue scales, which work on a point-and-click principle, and buttons. Five-point scales produce more nonresponse than 11-point scales, and respondents evaluate 11-point scales more positively than shorter scales.