2022
DOI: 10.1016/j.cviu.2022.103582
Balanced softmax cross-entropy for incremental learning with and without memory

Cited by 11 publications (4 citation statements). References 20 publications.
“…Utilizing pretrained models for parameter-efficient finetuning is gaining attention due to their strong feature extraction capabilities [166], [167], [59], [168]. Generating pseudo-data of previous tasks with generative models such as diffusion models [169], [170] remains a promising direction. Beyond conventional forgetting-related research, combining continual learning with other learning paradigms represents an important trend.…”
Section: Algorithm
confidence: 99%
“…Adding more instances of the less dominant class to the training data might solve the problem. Therefore, we propose to use the Balanced Cross-Entropy (BCE) loss function [34] as in Equation (1), where $p$ is the class SoftMax probability, $y$ is the ground truth of the corresponding prediction, and $N$ is the total number of pixels in the image.…”
Section: Lightweight Semantic Segmentation FCN-MobileNetV2
confidence: 99%
“…Adding more instances of the less dominant class to the training data might solve the problem. Therefore, we propose to use the Balanced Cross-Entropy (BCE) loss function [34] as in Equation (1):…”
Section: Model Training
confidence: 99%