2022
DOI: 10.48550/arxiv.2207.04693
Preprint

Exploring Contextual Relationships for Cervical Abnormal Cell Detection

Abstract: Cervical abnormal cell detection is a challenging task as the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references to identify its abnormality. To mimic these behaviors, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both contextual relationships between cells and cell-to-global images ar…

Cited by 1 publication (1 citation statement)
References: 58 publications
“…To mimic cytopathologists' diagnostic behavior of referring to surrounding cells when judging whether a cervical cell is abnormal, Liang et al. [103] explored contextual relationships in cervical cytological images for better abnormal cell detection. Based on Faster R-CNN equipped with an FPN, they presented a RoI-relationship Attention Module (RRAM) and a Global RoI Attention Module (GRAM) to capture, respectively, cross-cell contextual relationships and global context for context-rich features.…”
Section: Two-stage Supervised Learning Based Detection
Citation type: mentioning (confidence: 99%)
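The statement above names two attention modules built on top of Faster R-CNN with an FPN: RRAM, which relates each RoI to the other detected cells, and GRAM, which relates each RoI to the global image. Below is a minimal PyTorch sketch of how such modules could be wired; it assumes standard multi-head attention with residual connections, and all class names, dimensions, and the choice of a flattened feature map as the global context are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of RRAM-like self-attention over
# RoI features and GRAM-like cross-attention from RoIs to a global image
# feature map. Shapes, dimensions, and module names are assumptions.
import torch
import torch.nn as nn


class RoIRelationshipAttention(nn.Module):
    """Cross-cell context: each RoI attends to all other RoIs (assumed design)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, dim); add a batch dimension for attention
        x = roi_feats.unsqueeze(0)
        out, _ = self.attn(x, x, x)            # RoIs as queries, keys, values
        return self.norm(x + out).squeeze(0)   # residual + layer norm


class GlobalRoIAttention(nn.Module):
    """Global context: RoIs query image-level features (assumed design)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, roi_feats: torch.Tensor,
                global_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, dim); global_feats: (num_tokens, dim),
        # e.g. a flattened FPN level serving as the global image context.
        q = roi_feats.unsqueeze(0)
        kv = global_feats.unsqueeze(0)
        out, _ = self.attn(q, kv, kv)          # RoIs attend to the global map
        return self.norm(q + out).squeeze(0)


if __name__ == "__main__":
    rois = torch.randn(100, 256)               # e.g. 100 RoIs from the RPN
    global_map = torch.randn(50 * 50, 256)      # flattened global feature map
    rram, gram = RoIRelationshipAttention(), GlobalRoIAttention()
    enriched = gram(rram(rois), global_map)     # context-rich RoI features
    print(enriched.shape)                       # torch.Size([100, 256])
```

In this sketch the two modules are chained so that RoI features are first enriched with cross-cell relationships and then with global context before being passed to the detection heads; whether the paper chains or parallelizes the modules is not stated in the excerpt above.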