Basic Methods Handbook for Clinical Orthopaedic Research 2019
DOI: 10.1007/978-3-662-58254-1_38
Reliability Studies and Surveys

Cited by 4 publications (9 citation statements)
References 41 publications
“…The survey questions should provide reproducible results (reliability test) and be assessed in three major forms of reliability: test-retest, alternate form, and internal consistency [ 99 ]. An Rs value of 0.70 or greater is generally accepted as indicating good reliability [ 100 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…r values equal to or greater than 0.70 indicate a strong correlation. The ideal interval for test–retest administration is stated to be 2–4 weeks [40]. Therefore, we administered the retest to 17 pregnant women 2 weeks later and found a strong correlation.…”
Section: Discussion (mentioning)
confidence: 86%
“…Reliability and stability refer to whether a measurement is reproducible and whether the same result will be obtained when the measurement is repeated [ 26 ]. The reliability of a given tool is crucial for it to be valuable and applicable in research and clinical practice [ 36 ]. Although reliability assessment was initially introduced in psychometrics, it is equally crucial for all other measures and scientific fields.…”
Section: Introduction (mentioning)
confidence: 99%
“…For inter-rater reliability assessment purposes, each random sample of n targets is rated independently by k judges [ 32 ]. Inter-rater reliability is also known as interobserver reliability or between-observer consistency, as it determines the agreement between different raters assessing the same targets [ 36 ]. In other words, this type of reliability refers to whether the specific measure performed using the same tool and according to the same methodology on the same patient will produce the same results regardless of which clinician conducts the measurement.…”
Section: Introduction (mentioning)
confidence: 99%
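For categorical ratings, one common way to quantify the agreement between two judges described in this statement is Cohen's kappa, which corrects observed agreement for agreement expected by chance. The chapter itself may use other indices (e.g. the ICC for continuous measures); the raters, grades, and data below are hypothetical:

```python
# Inter-rater reliability for two judges rating the same targets,
# using Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
# agreement and p_e is chance agreement. Hypothetical data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of targets on which the two raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

# Two clinicians grade the same 10 radiographs (hypothetical)
a = ["mild", "mild", "severe", "moderate", "mild",
     "severe", "moderate", "mild", "severe", "moderate"]
b = ["mild", "moderate", "severe", "moderate", "mild",
     "severe", "moderate", "mild", "moderate", "moderate"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

With k > 2 judges or ordinal scales, weighted kappa or Fleiss' kappa would be the usual extensions; `sklearn.metrics.cohen_kappa_score` provides a tested implementation of the two-rater case.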