Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing 2021
DOI: 10.18653/v1/2021.emnlp-main.190
Hierarchical Multi-label Text Classification with Horizontal and Vertical Category Correlations

Abstract: Hierarchical multi-label text classification (HMTC) deals with the challenging task where an instance can be assigned to multiple hierarchically structured categories at the same time. The majority of prior studies either focus on reducing the HMTC task to a flat multi-label problem, ignoring the vertical category correlations, or exploit the dependencies across different hierarchical levels without considering the horizontal correlations among categories at the same level, which inevitably leads to fundame…
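The abstract contrasts flat multi-label reductions with hierarchy-aware approaches. The vertical correlation a flat reduction ignores can be sketched as ancestor consistency: every predicted label's ancestors should also be predicted. A minimal sketch over a hypothetical toy hierarchy (the labels and parent map are illustrative, not from the paper):

```python
# Hypothetical toy hierarchy: each label maps to its parent (None for roots).
PARENT = {
    "sports": None,
    "soccer": "sports",
    "premier-league": "soccer",
    "news": None,
    "politics": "news",
}

def with_ancestors(labels):
    """Expand a flat label set so every predicted label's ancestors are
    also present -- the vertical consistency that a flat multi-label
    reduction ignores but hierarchy-aware HMTC models enforce."""
    out = set()
    for lab in labels:
        while lab is not None:
            out.add(lab)
            lab = PARENT[lab]
    return out

print(sorted(with_ancestors({"premier-league", "politics"})))
# prints ['news', 'politics', 'premier-league', 'soccer', 'sports']
```

A flat classifier could legally predict `{"premier-league"}` without `"soccer"` or `"sports"`; hierarchy-aware methods rule such inconsistent outputs out by construction or by post-hoc expansion like the above.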

Cited by 14 publications (5 citation statements)
References 21 publications
“…The latter can be found in Tables 9 and 10, while the results over the individual splits are reported in Table 11.

Method                          Score 1  Score 2  Score 3  Score 4
[149]                           0.810    0.533    -        -
HiLAP [99]                      0.833    0.601    -        -
HiAGM-TP [97]                   0.840    0.634    0.858    0.803
RLHR [167]                      -        -        0.785    0.792
HCSM [168]                      0.858    0.609    0.921    0.807
HiMatch [124]                   0.847    0.641    0.862    0.805
HIDDEN [169]                    0.793    0.473    -        -
HE-AGCRCNN [170]                0.778    0.513    -        -
HVHMC [171]                     -        -        0.743    -
SASF [126]                      -        -        0.867    0.811
HTCInfoMax [177]                0.835    0.627    0.856    0.800
PAAM-HiA-T5 [178]               0.872    0.700    0.904    0.816
HPT [136]                       0.873    0.695    0.872    0.819
HGCLR [91]                      0.865    0.683    0.871    0.812
Seq2Tree [93]                   0.869    0.700    0.872    0.825
HBGL [180]                      0.872    0.711    0.874    0.820
P-tuning v2 (SPP-tuning) [138]  -        -        0.875    0.800
LD-GGNN [186]                   0.842    0.641    0.851    0.805
LSE-HiAGM [123]                 0.839    0.646    0.860    0.800
Seq2Label [121]                 0.874    0.706    0.873    0.819
HTC-CLIP [190]                  -        -        0.879    0.816
GACaps [191]                    0.868    0.698    0.876    0.828
HiDEC [194]                     0.855    0.651    -        -
UMP-MG [129]                    -        -        0.859    0.813
LED [196]                       0.883    0.697    0.870    0.813
(HGCLR-based + aug) [198]       0.862    0.679    0.874    0.821
K-HTC [135]                     -        -        0.873    0.817
HiTIN (BERT) [200]              0.867    0.699    0.872    0.816
HierVerb (few-shot) [137]       0…

The standard deviation over the 2 repetitions of 3-fold cross-validation is reported in brackets.…”
Section: Results
confidence: 99%
“…First, a considerable number of works on HTC have recently been taking into consideration the semantics of labels to improve classification performance [110,123,124,171,179]. Some of them devise a way to obtain label embeddings and then use this information to produce label-aware document embeddings.…”
Section: Future Work and Research Directions
confidence: 99%
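The statement above describes using label embeddings to produce label-aware document embeddings. One common formulation is label-to-token attention: each label attends over the document's token embeddings and pools an attended summary per label. The sketch below is illustrative only (the dimensions and random embeddings are assumptions, not the cited papers' actual models):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_labels, seq_len = 8, 5, 6

doc_tokens = rng.normal(size=(seq_len, d))   # token embeddings (hypothetical)
label_emb = rng.normal(size=(n_labels, d))   # learned label embeddings (hypothetical)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Each label scores every token, normalises the scores, and pools the
# tokens it attends to; the result is one document vector per label.
scores = label_emb @ doc_tokens.T        # (n_labels, seq_len)
attn = softmax(scores, axis=-1)          # rows sum to 1
label_aware_doc = attn @ doc_tokens      # (n_labels, d)
print(label_aware_doc.shape)             # prints (5, 8)
```

The per-label rows of `label_aware_doc` can then feed per-label classifiers, so each decision is conditioned on the parts of the document most relevant to that label.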
“…Others use graph neural networks (GNN) to learn the hierarchical relationships of the label space [22], [23]. Xu et al [24] combined label correlation and a GNN to learn a precise representation of the hierarchy. Shen et al [25] instead used weak supervision to create a hierarchical structure of the label space.…”
Section: Introduction
confidence: 99%
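The statement above mentions graph neural networks learning relationships of the label space. A single GCN-style propagation step over a label hierarchy can be sketched as below; the four-label hierarchy, features, and weights are all hypothetical, not taken from the cited works:

```python
import numpy as np

# Hypothetical 4-label hierarchy: label 0 is the root,
# labels 1 and 2 are its children, label 3 is a child of 1.
edges = [(0, 1), (0, 2), (1, 3)]
n = 4

A = np.eye(n)                    # adjacency with self-loops
for p, c in edges:               # messages flow both parent->child and child->parent
    A[p, c] = A[c, p] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))   # symmetric degree normalisation

H = np.random.default_rng(1).normal(size=(n, 8))   # initial label features
W = np.random.default_rng(2).normal(size=(8, 8))   # learnable weight (random here)
H1 = np.maximum(A_hat @ H @ W, 0.0)                # one propagation step + ReLU
print(H1.shape)                                    # prints (4, 8)
```

After such a step, each label's representation mixes in its neighbours' features, so siblings and parent-child pairs end up with correlated embeddings that a downstream classifier can exploit.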
“…Although the previous methods are successful, they classify labels sequentially by choosing them from the label set predefined in the training dataset. Handling unseen labels that do not appear in this predefined label set remains an open problem for real-world applications (Banerjee et al., 2019; Aly et al., 2019; Xu et al., 2021). Given the severe cost of annotating data for labels in a hierarchy and the need to handle unseen labels in practice, a general modeling framework is needed that handles unseen labels while explicitly incorporating the label hierarchy, overcoming the restriction of the predefined label set in real-world text classification applications.…”
Section: Introduction
confidence: 99%