2016
DOI: 10.1007/s10664-015-9417-1

Extracting and analyzing time-series HCI data from screen-captured task videos

Cited by 29 publications (8 citation statements)
References 65 publications
“…In software engineering, there are many works on visual artifacts including recording, tutorials, bug reports [27,29,32,38,61,78,82] with some of them specifically for usability and accessibility testing [30,49,76,81]. In detail, Bao et al [20] focused on the extraction of user interactions to facilitate behavioral analysis of developers during programming tasks, such as code editing, text selection, and window scrolling. Krieter et al [44] analyzed every single frame of a video to generate log files that describe what events are happening at the app level (e.g., the "k" key is pressed).…”
Section: Bug Replay From Visual Information
confidence: 99%
“…Prior approaches exist to derive approximated game logs from gameplay video. However, the majority of these approaches rely on hand-authored event definitions [15] or a secondary machine vision library like OpenCV [2] and access to all of the images in the game [1,10]. This latter approach is only viable for two-dimensional games with limited sets of possible images for each game component (called a "sprite").…”
Section: Related Work
confidence: 99%
“…We make available a public dataset of player actions for the e-sports game DOTA2 as a secondary contribution in this paper.…”
Section: Introduction
confidence: 99%
“…We find that the perceptions developers hold about comparable technologies, and the choices they make about which technology to use, are very likely influenced by how other developers see and evaluate those technologies. So developers often turn to the two information sources on the Web [4] to learn more about comparable technologies.…”
Section: Introduction
confidence: 99%