2006
DOI: 10.1007/11752967_16

Evaluating Performance in Continuous Context Recognition Using Event-Driven Error Characterisation

Abstract: Evaluating the performance of a continuous activity recognition system can be a challenging problem. To date there is no widely accepted standard for dealing with this, and in general methods and measures are adapted from related fields such as speech and vision. Much of the problem stems from the often imprecise and ambiguous nature of the real-world events that an activity recognition system has to deal with. A recognised event might have variable duration, or be shifted in time from the correspond…

Citations: cited by 25 publications (27 citation statements)
References: 15 publications
“…early detection of an action onset) may be wrongly considered as classification errors. Ward et al. propose to explicitly quantify system performance taking all these aspects into account [19]. They characterized the different types of errors as follows (listed in increasing order of importance): 1) Overfill: when the start and stop times of the predicted label fall before and after the actual times, respectively.…”
Section: Classification Methods (mentioning)
confidence: 99%
“…We distinguished the six methods in the following way: the four participations were identified by their participation numbers (13, 49, 51, 59), and the two additional methods were identified by the letters A and B. The submitted method (published in [54]) uses low-level features and mid-level features calculated on detected and tracked people using [55] and detected objects specific to the dataset (doors, mailboxes, etc.).…”
Section: Results of the ICPR 2012 HARL Competition (mentioning)
confidence: 99%
“…Evaluation of this variant needs metrics adapted to the problem. In [13], an alignment-based measure is proposed. It introduces six different error types: insertion, deletion, merge, fragmentation, underfill and overfill.…”
Section: Related Metrics and Datasets (mentioning)
confidence: 99%
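
The excerpt above names the six event-level error types attributed to [13] (insertion, deletion, merge, fragmentation, underfill, overfill). As a rough illustration of how such an event-driven characterisation can be computed, the Python sketch below tags ground-truth and predicted time intervals with those labels. The interval representation, the overlap test and all function names are assumptions made for this example; it is not the authors' published scoring procedure.

# Illustrative sketch only: label event-level errors for one activity class.
# Intervals are half-open (start, end) pairs on a common timeline (assumption).

def overlaps(a, b):
    """True if intervals a and b share any time."""
    return a[0] < b[1] and b[0] < a[1]

def characterise(truth, pred):
    """Return two dicts mapping interval index -> set of error labels."""
    truth_err = {i: set() for i in range(len(truth))}
    pred_err = {j: set() for j in range(len(pred))}

    for i, t in enumerate(truth):
        hits = [j for j, p in enumerate(pred) if overlaps(t, p)]
        if not hits:
            truth_err[i].add("deletion")        # event never detected
        elif len(hits) > 1:
            truth_err[i].add("fragmentation")   # event split across several predictions
        for j in hits:
            p = pred[j]
            if p[0] > t[0] or p[1] < t[1]:
                pred_err[j].add("underfill")    # prediction misses part of the event
            if p[0] < t[0] or p[1] > t[1]:
                pred_err[j].add("overfill")     # prediction spills outside the event

    for j, p in enumerate(pred):
        hits = [i for i, t in enumerate(truth) if overlaps(t, p)]
        if not hits:
            pred_err[j].add("insertion")        # prediction with no real event
        elif len(hits) > 1:
            pred_err[j].add("merge")            # one prediction covers several events

    return truth_err, pred_err

if __name__ == "__main__":
    truth = [(0, 10), (20, 30), (40, 50), (60, 70)]
    pred = [(2, 12), (18, 52)]  # late start + overfill; one prediction merges two events
    print(characterise(truth, pred))

Running the example reports the last ground-truth event as a deletion, the first prediction as both underfill and overfill, and the second prediction as an overfilling merge, which matches the informal definitions quoted above.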
“…The comparison with sample-accurate confusion matrices confirms that the soft alignment is a sensible solution for event spotting performance analyses. For a more detailed analysis of detection errors, the error distribution diagrams [41] could be used.…”
Section: Methods (mentioning)
confidence: 99%