2022
DOI: 10.48550/arXiv.2201.07692
Preprint

GroupGazer: A Tool to Compute the Gaze per Participant in Groups with integrated Calibration to Map the Gaze Online to a Screen or Beamer Projection

Abstract: In this paper we present GroupGazer, a tool that computes the gaze direction and the gaze position of every member of a group. GroupGazer estimates the gaze direction of each person in the image and maps these gaze vectors onto a projection such as a screen or beamer (projector) image. In addition to the person-specific gaze direction, the person affiliation of each gaze vector is stored based on the position in the image. After a calibration, the group's attention can also be saved. The software is free…
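The abstract describes a per-participant calibration that maps estimated gaze vectors onto screen or projector coordinates. The excerpt does not disclose the actual model, so the sketch below only illustrates one plausible approach: a per-person least-squares linear calibration fitted on known on-screen targets. The function names, the yaw/pitch feature choice, and the linear model are assumptions, not GroupGazer's implementation.

```python
# Hypothetical sketch of a gaze-to-screen calibration of the kind the
# abstract describes; the model choice and names are assumptions.
import numpy as np

def fit_calibration(gaze_vectors, screen_points):
    """Fit a linear map from gaze angles to screen coordinates.

    gaze_vectors: (N, 2) yaw/pitch gaze angles recorded while the
        participant looks at known calibration targets.
    screen_points: (N, 2) pixel coordinates of those targets.
    """
    # Augment with a bias term so the map can translate as well as scale.
    X = np.hstack([gaze_vectors, np.ones((len(gaze_vectors), 1))])
    # Least-squares solution: one (3, 2) weight matrix per participant.
    W, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
    return W

def gaze_to_screen(gaze_vector, W):
    """Map a single yaw/pitch gaze estimate to screen pixels."""
    x = np.append(gaze_vector, 1.0)
    return x @ W

# Usage: calibrate once per participant, then map live gaze online.
targets = np.array([[100, 100], [1820, 100], [960, 540],
                    [100, 980], [1820, 980]], dtype=float)
angles = np.array([[-0.4, 0.3], [0.4, 0.3], [0.0, 0.0],
                   [-0.4, -0.3], [0.4, -0.3]])
W = fit_calibration(angles, targets)
print(gaze_to_screen(np.array([0.1, 0.05]), W))
```

A linear map with a bias term is the simplest choice that handles offset and scale; real systems often use higher-order polynomials or homographies for better accuracy.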

Cited by 8 publications (13 citation statements)
References 78 publications
“…DNNs [30] have found their way into a variety of fields [1,27]. In eye tracking [31,49], they are already used for scanpath analysis [32,33,4], as well as other approaches based on machine learning [24,2,3,12], feature extraction [49] such as pupil [28,18,25,48,47,46,11,26,16,10,8,45,6], iris [9,19,5] and eyelids [22,21,23], eyeball estimation [7], and also for eye movement classification [41,42,14]. In recent years, there have been a number of new large data sets [38] that have also been annotated using modern machine learning approaches.…”
Section: Introduction
mentioning
confidence: 99%
“…Therefore, unfortunately, we cannot assume that Λι is not distinct from p. We want to extend the results of [76,10] to degenerate rings. Uniqueness is clearly important here. It has long been known that j ∼ E [76,45,43,39,52]. In [1], the authors address the continuity of homomorphisms under the additional assumption that √2 ≤ 12.…”
unclassified
“…stimulation [38]. A number of studies have linked fixation-related metrics to cognitive effort [23,34,44]. For example, the number of fixations within an area of interest (AOI) was used for comparison.…”
mentioning
confidence: 99%
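For context, the fixation-per-AOI metric mentioned in the statement above is simple to compute once fixations are available. This is a generic illustration, not the cited study's code; the rectangular AOI format and the helper name are assumptions.

```python
# Illustrative sketch: count fixations that land inside an area of
# interest (AOI). Names and the AOI format are assumptions.
def fixations_in_aoi(fixations, aoi):
    """Count fixations whose centroid falls inside a rectangular AOI.

    fixations: iterable of (x, y) fixation centroids in screen pixels.
    aoi: (x_min, y_min, x_max, y_max) rectangle in the same coordinates.
    """
    x0, y0, x1, y1 = aoi
    return sum(1 for x, y in fixations if x0 <= x <= x1 and y0 <= y <= y1)

# Usage: compare attention paid to one AOI.
fixes = [(120, 80), (300, 200), (130, 90), (700, 500)]
print(fixations_in_aoi(fixes, (100, 50, 200, 150)))  # -> 2
```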
“…In order to pay attention to a stimulus or object, the user must expend effort to maintain a constant gaze on the object [20]. Furthermore, studies provide evidence that fixation duration increases as information processing becomes more demanding [34,57,78].…”
mentioning
confidence: 99%