2022
DOI: 10.1016/j.jvoice.2022.11.036
“Quality, Readability, and Understandability of Online Posterior Glottic Stenosis Information”

Cited by 3 publications (5 citation statements). References 45 publications.
“…This is similar to observations by McKearney and McKearney [70] and Grose et al [71], who both reported an average reading level of 10th grade on websites discussing ear tubes and neck dissections, respectively. Websites written above the recommended reading grade were also reported by Raja and Fitzpatrick [29], De La Chapa et al [72], Crabtree and Lee [3], and Kim et al [73], demonstrating the poor readability of health care websites in general.…”
Section: Discussion
Confidence: 67%
“…However, prior studies have demonstrated that internet educational materials from traditional search engines are of variable quality and written at an unsuitably high reading level. 21-24,26,27,32-34,37,52-54 The results of FKGL and FRE scores demonstrate that not only are the Google webpages written at inappropriately high reading levels, but comparatively, the ChatGPT responses are written at an even higher reading level, with scores of 13.9 ± 2.5 (P < .001) and 34.9 ± 11.2 (P = .005), respectively. These ChatGPT results are equivalent to a college reading level, well above the recommended reading level for patient materials.…”
Section: Discussion
Confidence: 95%
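The FKGL and FRE scores cited in these statements are the standard Flesch-Kincaid Grade Level and Flesch Reading Ease formulas. A minimal Python sketch of both follows; the word, sentence, and syllable counts are supplied by the caller, and the sample counts in the usage lines are purely illustrative, not drawn from the cited studies.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease (FRE): higher scores mean easier text.
    Roughly: 90-100 reads at a 5th-grade level, 0-30 at a
    college-graduate level."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level (FKGL): maps the same inputs to a
    U.S. school grade, so lower scores mean easier text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Illustrative counts for a hypothetical 100-word passage with
# 5 sentences and 150 syllables (not data from the cited papers):
fre = flesch_reading_ease(100, 5, 150)    # "fairly difficult" range
fkgl = flesch_kincaid_grade(100, 5, 150)  # roughly a 10th-grade level
print(f"FRE = {fre:.1f}, FKGL = {fkgl:.1f}")
```

Both formulas depend only on sentence length and syllables per word, which is why a text can score as college-level reading purely by using long sentences and polysyllabic terms.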
“…Health care organizations such as the American Medical Association and the National Institutes of Health recommend that information for patient education be written between a fourth- and sixth-grade reading level. 38,50,51 However, prior studies have demonstrated that internet educational materials from traditional search engines are of variable quality and written at an unsuitably high reading level. 21-24,26,27,32-34,37,52-54 The results of FKGL and FRE scores demonstrate that not only are the Google webpages written at inappropriately high reading levels, but comparatively, the ChatGPT responses are written at an even higher reading level, with scores of 13.9 ± 2.5 (P < .001) and 34.9 ± 11.2 (P = .005), respectively.…”
Section: Discussion
Confidence: 97%