2021
DOI: 10.3390/app11083722

CF-CNN: Coarse-to-Fine Convolutional Neural Network

Abstract: In this paper, we present a coarse-to-fine convolutional neural network (CF-CNN) for learning multilabel classes. The basis of the proposed CF-CNN is a disjoint grouping method that first creates a class group with hierarchical association, and then assigns a new label to a class belonging to each group so that each class acquires multiple labels. CF-CNN consists of one main network and two subnetworks. Each subnetwork performs coarse prediction using the group labels created by the disjoint grouping method. …
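Since the abstract describes the architecture only at a high level, the following minimal PyTorch sketch may help make the multi-head idea concrete. It is not the authors' implementation: the trunk layers, the group counts (num_groups_1, num_groups_2), and the equal-weight sum of losses are all assumptions; only the overall shape (one shared network, two coarse subnetwork heads trained on group labels, one fine head trained on the original class labels) follows the abstract.

```python
import torch
import torch.nn as nn

class CoarseToFineCNN(nn.Module):
    """Illustrative coarse-to-fine classifier: a shared trunk feeds one
    fine head (original classes) and two coarse heads (group labels from
    a disjoint grouping step). All sizes are placeholder assumptions."""
    def __init__(self, num_classes=100, num_groups_1=4, num_groups_2=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.coarse_head_1 = nn.Linear(128, num_groups_1)  # coarsest group labels
        self.coarse_head_2 = nn.Linear(128, num_groups_2)  # finer group labels
        self.fine_head = nn.Linear(128, num_classes)       # original class labels

    def forward(self, x):
        h = self.trunk(x)
        return self.coarse_head_1(h), self.coarse_head_2(h), self.fine_head(h)

# Each image carries multiple labels: its class plus the group labels
# assigned by the grouping method; one cross-entropy term per head.
model = CoarseToFineCNN()
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 3, 32, 32)
y_g1 = torch.randint(0, 4, (8,))
y_g2 = torch.randint(0, 10, (8,))
y_fine = torch.randint(0, 100, (8,))
out1, out2, out3 = model(x)
loss = criterion(out1, y_g1) + criterion(out2, y_g2) + criterion(out3, y_fine)
loss.backward()
```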


Cited by 8 publications (5 citation statements) | References 20 publications
“…To compare the classification performance of our proposed method with other advanced deep learning classification algorithms, we used EfficientNetV2-M [55], PyramidNet [51], and CF-CNN [52]. EfficientNetV2-M and PyramidNet models are CNN-based algorithms developed for image classification and are being utilized in various fields [56, 57, 58, 59, 60].…”
Section: Results
confidence: 99%
“…Short connections mitigate gradient loss by skipping one or more layers during backpropagation. This short connection has been applied to various CNN models [50, 51, 52]. The residual unit is defined as follows: $x_{l+1} = x_l + \mathcal{F}(x_l, W_l)$, where $x_l$ and $x_{l+1}$ represent the input and output features of the unit, respectively.…”
Section: Methods
confidence: 99%
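To make the quoted definition concrete, here is a minimal PyTorch residual unit implementing $x_{l+1} = x_l + \mathcal{F}(x_l, W_l)$; the conv-BN-ReLU composition chosen for $\mathcal{F}$ is an assumption for illustration, not taken from the cited models [50, 51, 52].

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """x_{l+1} = x_l + F(x_l, W_l): the identity shortcut adds the input
    back to the output of the residual branch F, so gradients can flow
    through the skip path during backpropagation."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(  # residual branch F; layer choices are illustrative
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.f(x))  # shortcut skips the branch entirely

x = torch.randn(2, 64, 16, 16)
assert ResidualUnit(64)(x).shape == x.shape  # shapes must match for the sum
```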
“…Hierarchical information is important in many other applications such as food recognition [55], [25], protein function prediction [6], [7], [56], [57], [58], [59], image annotation [60], and text classification [61], [62], [63]. Some major approaches include imposing logical constraints [4], using hyperbolic embeddings [64], prototype learning [14], label smearing and soft labels, loss modifications [3], multiple learning heads for different levels of the hierarchy [5], hierarchical post-processing [65], and others [66], [67], [68].…”
Section: Related Work
confidence: 99%
“…Single label hierarchical (SLH) algorithms encode domain knowledge using tree structures with labels to improve classification performance and make less severe errors, by penalizing a misprediction according to its severity in the given hierarchy tree. To achieve this, popular ideas are hierarchy-aware custom loss functions [3], label embeddings [4], and custom model architectures [5]. Extending these ideas from single-label hierarchical systems to multilabel scenarios is difficult, primarily because they are tailor-made for the multi-class classification losses encountered in SLH rather than for the multiple separate classification problems encountered in HMC.…”
Section: Introduction
confidence: 99%
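The quoted passage describes penalizing mispredictions by their severity under the hierarchy tree. One common way to realize this, sketched below, is an expected-cost loss that weights softmax probabilities by a precomputed tree-distance matrix; the function name hierarchy_aware_loss and the toy distance matrix are illustrative assumptions, not the specific loss of [3].

```python
import torch
import torch.nn.functional as F

def hierarchy_aware_loss(logits, targets, tree_dist):
    """Expected hierarchical cost: softmax probabilities weighted by the
    tree distance between each candidate class and the true class, so a
    confident far-away misprediction costs more than a near miss.

    logits:    (batch, num_classes) raw scores
    targets:   (batch,) true class indices
    tree_dist: (num_classes, num_classes) precomputed hop distances in
               the label hierarchy (assumed to be built externally)
    """
    probs = F.softmax(logits, dim=1)          # (batch, num_classes)
    costs = tree_dist[targets]                # (batch, num_classes): distance to truth
    return (probs * costs).sum(dim=1).mean()  # average expected cost

# Toy hierarchy: 4 classes in two sibling pairs {0, 1} and {2, 3};
# siblings are 1 hop apart, cross-group classes 2 hops apart.
tree_dist = torch.tensor([[0., 1., 2., 2.],
                          [1., 0., 2., 2.],
                          [2., 2., 0., 1.],
                          [2., 2., 1., 0.]])
logits = torch.randn(5, 4, requires_grad=True)
loss = hierarchy_aware_loss(logits, torch.randint(0, 4, (5,)), tree_dist)
loss.backward()
```

In practice a term like this is often combined with a standard cross-entropy loss, so the model still learns the exact label while being steered toward mistakes that stay close to the truth in the tree.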