Aims: To describe the validation and reliability of a new pain tool, the Alder Hey Triage Pain Score (AHTPS), for children at triage in the accident and emergency (A&E) setting.

Methods: A new behavioural observational pain tool was developed because of dissatisfaction with available tools and a lack of confidence in self-assessment scores at triage. The study was conducted in a large paediatric A&E department; 575 children (aged 0-16 years) were included. Inter-rater reliability and various aspects of validity were assessed. In addition, the tool was compared with the Wong-Baker self-assessment tool.1 The children were scored concurrently by a research nurse and triage nurses to assess inter-rater reliability. Construct validity was assessed by comparing the research nurse's triage score with the research nurse's reassessment score after intervention and/or analgesia. Known-group construct validity was assessed by comparing the research nurse's score at triage with the level of pain expected for the condition as judged by the discharge diagnosis. Predictive validity was assessed by comparing the research nurse's AHTPS with the level of analgesia each patient needed. The AHTPS was also compared with a self-assessment score.

Results: A high level of inter-rater reliability was shown (kappa statistic 0.84, 95% CI 0.80 to 0.88). Construct validity was well demonstrated; known-group construct validity and predictive validity were also demonstrated to varying degrees.

Conclusions: The results support the use of this observational pain scoring tool in the triage of children in A&E.
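As an illustration of the reliability statistic reported above, the following is a minimal Python sketch of how a Cohen's kappa with a bootstrap 95% CI can be computed from concurrently collected scores. All of the paired scores are hypothetical, and the choice of unweighted kappa and a percentile bootstrap are assumptions, not the study's stated analysis.

# Minimal sketch (not the study's actual analysis): inter-rater reliability
# of a categorical pain score via Cohen's kappa, with a bootstrap 95% CI.
# The paired scores below are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical AHTPS scores given concurrently by a research nurse and a
# triage nurse for the same children.
research_nurse = np.array([0, 2, 4, 4, 6, 8, 2, 0, 6, 10, 4, 2])
triage_nurse   = np.array([0, 2, 4, 2, 6, 8, 2, 0, 8, 10, 4, 2])

kappa = cohen_kappa_score(research_nurse, triage_nurse)

# Bootstrap CI: resample child indices with replacement and recompute kappa.
n = len(research_nurse)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(cohen_kappa_score(research_nurse[idx], triage_nurse[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"kappa = {kappa:.2f} (95% CI {lo:.2f} to {hi:.2f})")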
Background: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts the translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing (enlisting help from the public) is a sufficiently accurate method to score such samples.

Methods: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial with annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers (bladder/ki67, lung/EGFR, and oesophageal/CD8) to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples.

Results: For cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial alone. Using this optimised tutorial, we demonstrated highly accurate (area under the curve >0.90) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) for lung/EGFR and 0.97 (0.91, 0.99) for bladder/p53 samples).

Conclusions: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains.
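To make the two accuracy measures above concrete, here is a hedged sketch of computing an area under the ROC curve for crowd cancer detection and a Spearman correlation for IHC scoring. The pooled crowd scores, expert labels, and IHC values below are invented for illustration and do not come from the study.

# Illustrative sketch only, on hypothetical data: `crowd_score` stands in for
# a pooled per-image probability of cancer from lay participants, and the
# `expert_*` arrays stand in for the specialist ground truth.
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import spearmanr

# Cancer detection: area under the ROC curve of crowd scores vs expert labels.
expert_label = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
crowd_score  = np.array([0.1, 0.3, 0.8, 0.9, 0.7, 0.2, 0.95, 0.4, 0.6, 0.85])
print("AUC:", roc_auc_score(expert_label, crowd_score))

# IHC scoring: rank agreement between pooled lay scores and expert scores.
expert_ihc = np.array([10, 40, 90, 120, 200, 60, 150, 30])
crowd_ihc  = np.array([15, 35, 80, 130, 210, 70, 140, 25])
rho, p = spearmanr(crowd_ihc, expert_ihc)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")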
Background
Immunohistochemistry (IHC) is often used to personalise cancer treatments. Analysis of large data sets by specialists to uncover predictive biomarkers can be enormously time-consuming. Here we investigated crowdsourcing as a means of reliably analysing immunostained cancer samples to discover biomarkers predictive of cancer survival.
Methods
We crowdsourced the analysis of bladder cancer tissue microarray (TMA) core samples through the smartphone app ‘Reverse the Odds’. Scores from members of the public were pooled and compared with a gold-standard set scored by appropriate specialists. We also used the crowdsourced scores to assess associations with disease-specific survival.
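As a rough sketch of the pooling step described above (an assumption about the mechanics, not the app's published pipeline), many lay classifications per TMA core can be reduced to a per-core consensus before comparison with the specialist gold standard; here the consensus is simply the median intensity and median stained proportion.

# Hedged sketch of pooling crowd classifications into a per-core consensus.
# The raw classifications below are hypothetical:
# (core_id, staining intensity 0-3, % of cells stained).
from collections import defaultdict
from statistics import median

classifications = [
    ("core_1", 2, 60), ("core_1", 2, 70), ("core_1", 3, 65), ("core_1", 2, 55),
    ("core_2", 0, 0),  ("core_2", 1, 10), ("core_2", 0, 5),
]

by_core = defaultdict(list)
for core_id, intensity, proportion in classifications:
    by_core[core_id].append((intensity, proportion))

# Consensus per core: median intensity and median stained proportion.
consensus = {
    core: (median(i for i, _ in votes), median(p for _, p in votes))
    for core, votes in by_core.items()
}
print(consensus)  # e.g. {'core_1': (2.0, 62.5), 'core_2': (0, 5)}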
Results
Data were collected over 721 days, with 4,744,339 classifications performed. The average time per classification was approximately 15 s, and approximately 20,000 h of total non-gaming time were contributed. The correlation between crowdsourced and expert H-scores (staining intensity × proportion) ranged from 0.65 to 0.92 across the markers tested, with six of the 10 correlation coefficients at least 0.80. At least two markers (MRE11 and CK20) were significantly associated with survival in patients with bladder cancer, and a further three markers showed results warranting expert follow-up.
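The analyses behind these results can be sketched as follows: a per-core H-score (staining intensity × proportion, as defined above), its Spearman correlation with expert H-scores, and a survival association tested with a median-split log-rank test. The data, the median split, and the use of the lifelines package are illustrative assumptions rather than the study's actual methods.

# Minimal sketch, not the study's code: H-scores from consensus values, rank
# agreement with expert scores, and a median-split log-rank survival test.
# All data below are hypothetical.
import numpy as np
from scipy.stats import spearmanr
from lifelines.statistics import logrank_test

# Hypothetical consensus (intensity 0-3, % cells stained) and expert H-scores.
intensity  = np.array([2, 0, 3, 1, 2, 0, 3, 1])
proportion = np.array([60, 5, 80, 20, 50, 0, 90, 30])
crowd_h  = intensity * proportion          # H-score, range 0-300
expert_h = np.array([130, 10, 250, 15, 95, 0, 280, 40])
rho, p = spearmanr(crowd_h, expert_h)
print(f"Spearman rho = {rho:.2f}")

# Survival: split patients at the median crowd H-score and compare
# disease-specific survival between the high and low groups.
time_days = np.array([400, 900, 200, 850, 500, 1000, 150, 700])
event     = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = disease-specific death
high = crowd_h >= np.median(crowd_h)
res = logrank_test(time_days[high], time_days[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p =", res.p_value)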
Conclusions
Crowdsourcing through a smartphone app has the potential to accurately screen IHC data and greatly increase the speed of biomarker discovery.