2021
DOI: 10.48550/arxiv.2112.01513
Preprint
OW-DETR: Open-world Detection Transformer

Abstract: Open-world object detection (OWOD) is a challenging computer vision problem, where the task is to detect a known set of object categories while simultaneously identifying unknown objects. Additionally, the model must incrementally learn new classes that become known in the next training episodes. Distinct from standard object detection, the OWOD setting poses significant challenges for generating quality candidate proposals on potentially unknown objects, separating the unknown objects from the background and …

Cited by 2 publications (2 citation statements)
References 22 publications
“…Two-stage Bipartite Matching Algorithm: Similar to OW-DETR (Gupta et al. 2021), we apply a two-stage bipartite matching algorithm to match the seen pairs and the unknown pairs respectively, as shown in Fig. 3.…”
Section: Learning To Detect Potential Interactive Pairs (mentioning)
confidence: 99%
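The two-stage matching the citing paper describes can be sketched as two successive Hungarian assignments: queries are first matched against seen (known-class) targets, and the leftover queries are then matched against unknown targets. The function below is a minimal illustration of that idea, not the authors' implementation; the cost matrices, function name, and shapes are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def two_stage_match(cost_seen, cost_unknown):
    """Illustrative two-stage bipartite matching (not OW-DETR's exact code).

    cost_seen:    (num_queries, num_seen) matching-cost matrix for known classes
    cost_unknown: (num_queries, num_unknown) matching-cost matrix for unknowns
    Returns two lists of (query_idx, target_idx) pairs.
    """
    # Stage 1: Hungarian matching of queries against seen targets.
    q_seen, t_seen = linear_sum_assignment(cost_seen)
    seen_pairs = list(zip(q_seen.tolist(), t_seen.tolist()))

    # Stage 2: only queries left unmatched in stage 1 compete for
    # the unknown targets, in a second Hungarian matching.
    matched = set(q_seen.tolist())
    leftover = np.array([q for q in range(cost_unknown.shape[0])
                         if q not in matched])
    q_unk, t_unk = linear_sum_assignment(cost_unknown[leftover])
    unknown_pairs = [(int(leftover[q]), int(t)) for q, t in zip(q_unk, t_unk)]
    return seen_pairs, unknown_pairs
```

Restricting stage 2 to unmatched queries keeps the two assignments disjoint, so no query is simultaneously labeled as a known class and as unknown.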
“…Object detection, which entails localizing and identifying objects within an image or a video sequence, is a crucial task in computer vision. Extensive research has been conducted in this field, leading to the development of numerous works [40,47,11,10,2,14,21]. However, the majority of these works operate under the closed-world assumption, limiting their applicability to a pre-defined set of categories.…”
Section: Introduction (mentioning)
confidence: 99%