2020
DOI: 10.1007/s11227-020-03168-3
Pyramid context learning for object detection

Cited by 11 publications (3 citation statements)
References 38 publications
“…This is important because it can be argued that anatomical information within the direct vicinity of a query voxel can be of great descriptive value, resolving local ambiguities (e.g. it is unlikely that tumor is detected in or near the lens) [22]. Integration of contextual information is therefore likely to enhance model performance.…”
Section: Discussion (confidence: 99%)
“…The output of each branch is concatenated before passing through two dense layers (ReLU and softmax activation, respectively) to get an output of size two, representing non-tumor and tumor. To include larger-scale contextual information, a pyramid structure was implemented where two inputs (scale 0: 32 × 32 × 3 and scale 1: 64 × 64 × 3) are included for each of the three views [24]. The latter was first downsampled to 32 × 32 × 3 to fit in the network.…”
Section: Network Structure (confidence: 99%)
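The two-scale pyramid input described in the statement above can be sketched in plain NumPy. This is only an illustration of the idea (a local patch plus a downsampled wider patch at the same spatial size); the average pooling and the central-crop choice for scale 0 are assumptions, as the quote only specifies the input sizes.

```python
import numpy as np

def downsample_2x(img):
    """Average-pool a (H, W, C) array by a factor of 2 in each spatial dim."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def pyramid_inputs(patch_64):
    """Build the two-scale pyramid input for one view.

    scale 0: a 32x32x3 central crop of the 64x64x3 patch (local detail)
    scale 1: the full 64x64x3 patch downsampled to 32x32x3 (wider context)
    The crop for scale 0 is an assumption for this sketch.
    """
    scale0 = patch_64[16:48, 16:48, :]   # 32x32x3 local patch
    scale1 = downsample_2x(patch_64)     # 64x64x3 -> 32x32x3 context patch
    return scale0, scale1

patch = np.random.rand(64, 64, 3)
s0, s1 = pyramid_inputs(patch)
print(s0.shape, s1.shape)  # (32, 32, 3) (32, 32, 3)
```

Both scales end up at the same spatial size, so they can be fed to parallel branches whose outputs are concatenated before the dense layers, as the quoted description states.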
“…It extracts useful local and global features from the input image by using a deep residual network (ResNet) [23,31] and feature pyramid network [22,24,98]. • Region proposal network.…”
Section: (confidence: 99%)