2023
DOI: 10.1109/tpami.2023.3241756
Continual Image Deraining With Hypergraph Convolutional Networks

Cited by 29 publications (12 citation statements)
References 84 publications
“…Several common connection methods are utilized in neural networks, such as fully connected networks, recurrent networks, and convolutional networks [110][111][112]. Following the establishment of these connections, synaptic weights are assigned and regulated using either analog circuits or digital logic circuits. During the mapping process of a neural network, meticulous attention must be given to several key factors, such as the topology of the neural network and the rational allocation of resources.…”
Section: Design of Neural Network (mentioning)
confidence: 99%
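For readers unfamiliar with the three connection patterns named in this statement, the following is a minimal sketch, not taken from the cited works, that expresses them as standard PyTorch modules; all layer sizes and tensor shapes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# The three connection patterns: dense (all-to-all weights), recurrent
# (weights reused across time steps), and convolutional (weights shared
# across spatial positions). Sizes are illustrative only.
fully_connected = nn.Linear(in_features=256, out_features=128)
recurrent = nn.GRU(input_size=256, hidden_size=128)
convolutional = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x_vec = torch.randn(8, 256)        # (batch, features)
x_seq = torch.randn(10, 8, 256)    # (time, batch, features)
x_img = torch.randn(8, 3, 64, 64)  # (batch, channels, H, W)

y_fc = fully_connected(x_vec)      # -> (8, 128)
y_rnn, _ = recurrent(x_seq)        # -> (10, 8, 128)
y_conv = convolutional(x_img)      # -> (8, 16, 64, 64)
print(y_fc.shape, y_rnn.shape, y_conv.shape)
```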
“…Drawing on the concept of closed-loop control, Li et al. [69] designed a robust representation-learning network structure by incorporating a feedback mechanism into the CNN. Recently, Fu et al. [10] developed a patch-wise hypergraph convolutional network architecture to help the model explore the non-local content of images.…”
Section: Network Architectures (mentioning)
confidence: 99%
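As a rough illustration of the patch-wise hypergraph idea mentioned above, here is a minimal sketch of a generic HGNN-style hypergraph convolution over patch features, with hyperedges built by k-nearest-neighbour grouping in feature space. This is an assumption-laden sketch, not Fu et al.'s actual HCN implementation; the function name, feature sizes, and hyperedge construction are placeholders chosen for illustration.

```python
import torch
import torch.nn.functional as F

def hypergraph_conv(X, H, Theta):
    """Generic HGNN-style hypergraph convolution (uniform hyperedge weights).
    X: (N, C) patch features, H: (N, E) incidence matrix, Theta: (C, C_out).
    Implements  Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta  followed by ReLU."""
    Dv = H.sum(dim=1).clamp(min=1)            # vertex (patch) degrees, shape (N,)
    De = H.sum(dim=0).clamp(min=1)            # hyperedge degrees, shape (E,)
    Dv_inv_sqrt = Dv.pow(-0.5)
    msg = H @ ((H.t() @ (Dv_inv_sqrt.unsqueeze(1) * X)) / De.unsqueeze(1))
    return F.relu(Dv_inv_sqrt.unsqueeze(1) * msg @ Theta)

# Toy usage: 16 patches with 32-dim features; one hyperedge per patch,
# grouping the patch with its k nearest patches in feature space.
N, C, k = 16, 32, 4
X = torch.randn(N, C)
dist = torch.cdist(X, X)                      # pairwise patch distances
knn = dist.topk(k, largest=False).indices     # each patch's k nearest patches (incl. itself)
H = torch.zeros(N, N)                         # rows = patches, cols = hyperedges
for j in range(N):
    H[knn[j], j] = 1.0
Theta = torch.randn(C, C)
X_out = hypergraph_conv(X, H, Theta)
print(X_out.shape)                            # torch.Size([16, 32])
```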
“…In this track, multiple pretrained models are obtained to evaluate the corresponding testing sets, including Rain200L [8], Rain200H [8], DID-Data [36], DDN-Data [35], and SPA-Data [15]. Here, we provide 22 representative methods in Table 6, i.e., DDN [35], DID-MDN [36], RESCAN [38], NLEDN [39], JORDER-E [53], ID-CGAN [48], SIRR [45], PReNet [42], SPANet [15], FBL [62], MSPFN [18], RCDNet [57], Syn2Real [58], SGCN [104], MPRNet [19], DualGCN [13], SPDNet [12], Uformer [81], Restormer [20], IDT [11], HCN [10], and DRSformer [9]. The quantitative results are quoted from previous works [9], [10].…”
Section: Evaluation on Independent Training Track (mentioning)
confidence: 99%
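For context on how such quantitative comparisons on deraining test sets are typically scored, below is a minimal PSNR sketch; the random arrays merely stand in for (derained, ground-truth) image pairs and are not data from the cited benchmarks.

```python
import numpy as np

def psnr(pred, gt, data_range=1.0):
    """Peak signal-to-noise ratio between a derained estimate and the clean ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy usage; in a real benchmark each test set (e.g. Rain200L, Rain200H) is iterated instead.
rng = np.random.default_rng(0)
gt = rng.random((64, 64, 3))
pred = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0)
print(f"PSNR: {psnr(pred, gt):.2f} dB")
```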