Background: The written format and literacy competence of screen-based texts can interfere with the perceived trustworthiness of health information in online forums, independent of the semantic content. Unlike professional content, the format of unmoderated forums can regularly hint at incivility, perceived as deliberate rudeness or casual disregard toward the reader, for example through spelling errors and unnecessary emphatic capitalization of whole words (online shouting).

Objective: This study aimed to quantify the comparative effects of spelling errors and inappropriate capitalization on ratings of trustworthiness, independently of lay insight, and to determine whether these changes act synergistically or additively on the ratings.

Methods: In web-based experiments, 301 UK-recruited participants rated 36 randomized short stimulus excerpts (in the format of information from an unmoderated health forum about multiple sclerosis) for trustworthiness using a semantic differential slider. A total of 9 control excerpts were compared with matching error-containing excerpts, each of which included 5 instances of misspelling, 5 instances of inappropriate capitalization (shouting), or a combination of 5 misspellings plus 5 capitalization errors. Data were analyzed in a linear mixed effects model.

Results: The mean trustworthiness ratings of the control excerpts ranged from 32.59 to 62.31 (rating scale 0-100). Compared with the control excerpts, excerpts containing only misspellings were rated 8.86 points less trustworthy, those containing inappropriate capitalization 6.41 points less trustworthy, and those containing the combination of misspelling and capitalization 14.33 points less trustworthy (P<.001 for all). Misspelling and inappropriate capitalization showed an additive effect.

Conclusions: Distinct indicators of incivility independently and additively penalize the perceived trustworthiness of online text, independently of lay insight, with a medium effect size.
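The additivity claim can be checked directly against the point estimates quoted in the Results; a minimal sketch in plain Python, using only the penalty values reported above:

```python
# Trustworthiness penalties (0-100 rating scale) relative to control excerpts,
# as reported in the abstract
penalty_misspelling = 8.86      # 5 misspellings only
penalty_capitalization = 6.41   # 5 inappropriate capitalizations only
penalty_combined = 14.33        # 5 misspellings + 5 capitalizations

# A purely additive model predicts the combined penalty as the sum of the two
predicted = penalty_misspelling + penalty_capitalization
print(f"predicted additive penalty: {predicted:.2f}")                 # 15.27
print(f"observed combined penalty:  {penalty_combined:.2f}")          # 14.33
print(f"difference: {predicted - penalty_combined:.2f}")              # 0.94
```

The observed combined penalty (14.33) sits close to the additive prediction (15.27), consistent with the abstract's conclusion of additive rather than synergistic effects.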
Background: Spelling errors in documents reduce trustworthiness, but the mechanism underlying the psychological assessment (i.e., integrative versus dichotomous weighting) has not been elucidated. We instructed participants to rate the content of texts, revealing marginal differences in their implicit trustworthiness judgments caused specifically by spelling errors.

Methods: In an online experiment, 100 English-speaking participants were asked to rate 27 short text excerpts (∼100 words) about multiple sclerosis, presented in the format of unmoderated health forum posts. In a counterbalanced design, excerpts contained zero, two, or five typographic errors; each participant rated nine paragraphs with a counterbalanced mixture of the three conditions. A linear mixed effects model (LME) was fitted with error number as a fixed effect and participant as a random effect.

Results: On an unnumbered scale anchored at "completely untrustworthy" (left) and "completely trustworthy" (right), recorded as 0 to 100, two spelling errors incurred a trustworthiness penalty of 5.91 ± 1.70 (robust standard error) relative to the zero-error reference excerpts, while five errors incurred a penalty of 13.5 ± 2.47; all three conditions differed significantly from each other (P < 0.001).

Conclusion: Participants who rated information about multiple sclerosis in a context mimicking an online health forum implicitly assigned typographic errors nearly linearly additive trustworthiness penalties. This contravenes any dichotomous heuristic or local ceiling effect on trustworthiness penalties for these numbers of typographic errors, and supports an integrative model of psychological trustworthiness judgments.
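The near-linearity claimed in the Conclusion follows from the reported estimates; a small check in plain Python, using only the penalty values from the Results above:

```python
# Trustworthiness penalties (0-100 scale) vs. the zero-error reference
# excerpts, as reported in the abstract
penalty_two_errors = 5.91    # two spelling errors
penalty_five_errors = 13.5   # five spelling errors

# Under an integrative (per-error) judgment, each error carries a roughly
# constant cost, so the per-error penalty should be similar in both conditions;
# a dichotomous heuristic would instead flatten the difference between 2 and 5
per_error_two = penalty_two_errors / 2     # 2.955 points per error
per_error_five = penalty_five_errors / 5   # 2.70 points per error
print(f"per-error penalty (2 errors): {per_error_two:.2f}")   # 2.95 (rounded)
print(f"per-error penalty (5 errors): {per_error_five:.2f}")  # 2.70
```

The two per-error penalties are close (≈2.7-3.0 points per error), which is what "nearly linearly additive" amounts to numerically.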