Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (ETRA 2018)
DOI: 10.1145/3204493.3204550
Scanpath comparison in medical image reading skills of dental students

Cited by 47 publications (34 citation statements). References 36 publications.
“…in ScanMatch [Cristino et al 2010], classification of attentional disorder [Galgani et al 2009], multiple scanpath sequence alignment [Burch et al 2018], and expert and novice programmer classification [Busjahn et al 2015]. Castner et al [Castner et al 2018a] found incoming dental students with no prior training in radiograph interpretation could be classified from later semester students with 80% accuracy from Needleman-Wunsch similarity scores.…”
Section: Revisiting Visual Scanpath Comparison (2.1 Traditional Approaches)
confidence: 99%
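The Needleman-Wunsch similarity scoring mentioned in this excerpt can be sketched as follows. This is a minimal illustration of global sequence alignment over AOI-letter scanpath strings; the match/mismatch/gap values and the example sequences are illustrative assumptions, not the settings used by Castner et al.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two AOI-letter scanpath strings.

    Scoring parameters are illustrative, not the paper's exact values.
    """
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap  # a prefix aligned entirely against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap inserted in b
                              score[i][j - 1] + gap)   # gap inserted in a
    return score[n][m]

# Identical scanpaths get the maximum score; divergent ones score lower.
same = needleman_wunsch("ABCD", "ABCD")   # 4
close = needleman_wunsch("ABCD", "ABXD")  # 2
```

Pairwise scores like these, normalized by sequence length, can serve as features for a downstream classifier, which is the general setup the excerpt describes.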
See 1 more Smart Citation
“…in ScanMatch [Cristino et al 2010], classification of attentional disorder [Galgani et al 2009], multiple scanpath sequence alignment [Burch et al 2018], and expert and novice programmer classification [Busjahn et al 2015]. Castner et al [Castner et al 2018a] found incoming dental students with no prior training in radiograph interpretation could be classified from later semester students with 80% accuracy from Needleman-Wunsch similarity scores.…”
Section: Revisiting Visual Scanpath Comparison 21 Traditional Approamentioning
confidence: 99%
“…Kübler et al [Kübler et al 2014] developed a method, SubsMatch, for sequence comparison without AOI definitions, which uses a bag-of-words model and looks at the transitional behavior of subsequences. Castner et al [Castner et al 2018a] used these subsequence transitions from SubsMatch with an SVM classifier [Kübler et al 2017] and found comparable results to sequence alignment with grid AOIs.…”
Section: Revisiting Visual Scanpath Comparison (2.1 Traditional Approaches)
confidence: 99%
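The bag-of-words subsequence idea in this excerpt can be sketched as counting k-gram transition frequencies and comparing the normalized histograms. The choice of k, the L1 distance, and the example strings are illustrative assumptions; they are not the exact SubsMatch algorithm.

```python
from collections import Counter

def kgram_hist(scanpath, k=2):
    """Normalized frequency histogram of length-k subsequences.

    k=2 and string input are illustrative choices, not SubsMatch's exact design.
    """
    grams = [scanpath[i:i + k] for i in range(len(scanpath) - k + 1)]
    counts = Counter(grams)
    total = len(grams)
    return {g: c / total for g, c in counts.items()}

def hist_distance(h1, h2):
    """Sum of absolute frequency differences (L1 distance) between histograms."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(g, 0.0) - h2.get(g, 0.0)) for g in keys)

# Scanpaths with similar transition behavior yield a small distance,
# even when their lengths differ; dissimilar behavior yields a large one.
d_same = hist_distance(kgram_hist("ABABAB"), kgram_hist("ABABABAB"))
d_diff = hist_distance(kgram_hist("ABABAB"), kgram_hist("CCCCCC"))
```

Histograms like these can be fed to a classifier such as an SVM, which is the combination the excerpt attributes to Castner et al.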
“…Finally, future research could not only focus on human observers of eye‐movement displays, but also on machine learning techniques (artificial intelligence). With such techniques, eye‐movement measures can be analyzed to identify aspects of performance such as cognitive load or workload (Appel et al, 2019; Halverson, Estepp, Christensen, & Monnin, 2012; Mussgnug, Singer, Lohmeyer, & Meboldt, 2017), performer expertise (Castner et al, 2018), strategic behavior (Eivazi & Bednarik, 2011), or task interest and curiosity (Baranes, Oudeyer, & Gottlieb, 2015). Future studies could investigate whether machine learning techniques are as good or better than humans at analyzing different aspects of performance (e.g., accuracy, confidence, effort) based on eye‐movement measures (or vice versa, whether humans are better than machine learning techniques at interpreting certain aspects of performance).…”
Section: Discussion
confidence: 99%
“…Scanpath comparison can be used to understand differences in eye movements between correct and incorrect solvers of physics problems [11]. Also, novices and experts can be distinguished by comparing their scanpaths [12] [13]. In addition, scanpath similarity comparison might clinically help provide evidence for diagnosing children with mild or moderate autism spectrum disorder (ASD) [14].…”
Section: Introduction
confidence: 99%