2020
DOI: 10.1109/access.2020.2973722
Robust T-S Fuzzy Model Identification Approach Based on FCRM Algorithm and L1-Norm Loss Function

Abstract: Takagi-Sugeno (T-S) fuzzy model identification is a powerful tool for modelling complicated nonlinear systems. However, the traditional T-S fuzzy model typically uses the L2-norm loss function, which is very sensitive to outliers and noise, so an unreliable model may be obtained when outliers or noise are present. In this paper, an outlier- and noise-robust T-S fuzzy model identification method based on fuzzy c-regression model (FCRM) clustering and the L1-norm loss function is proposed…
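The abstract's central claim — that the L2-norm loss is sensitive to outliers while the L1-norm loss is robust — can be illustrated with a minimal sketch (this is not the paper's FCRM algorithm, just the underlying loss-function property). For a constant model y = c, the L2 minimizer is the mean, which a single outlier drags away; the L1 minimizer is the median, which stays put:

```python
import numpy as np

# Data with one gross outlier at the end.
y = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 50.0])

c_l2 = y.mean()      # argmin_c sum (y_i - c)^2  -> mean, pulled by the outlier
c_l1 = np.median(y)  # argmin_c sum |y_i - c|    -> median, robust to the outlier

print(f"L2 estimate (mean):   {c_l2:.3f}")   # far from the bulk of the data
print(f"L1 estimate (median): {c_l1:.3f}")   # close to the bulk of the data
```

The same effect carries over to regression: replacing the squared residual with its absolute value bounds each point's influence on the fit, which is why the proposed identification method pairs FCRM clustering with an L1-norm loss.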

Cited by 8 publications (2 citation statements)
References 56 publications (31 reference statements)
“…As we know, there is a multitude of common loss functions in deep neural networks, such as mean square error (MSE) loss [ 17 ], cross-entropy (CE) loss [ 18 ], L1 loss [ 19 ], hinge loss [ 20 ], etc. The loss function deployed in the nonlinear equalization (NLE) framework is the mean square error (MSE) loss, given as $\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N}\left|y_n - \hat{y}_n\right|^2$, where $y_n$ is the target complex signal and $\hat{y}_n$ is the output complex value from the CVNN equalizer in Figure 3 a.…”
Section: Operation Principle Of Deep Learning Algorithms For Ps-m-qam...mentioning
confidence: 99%
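The MSE loss for a complex-valued equalizer averages the squared magnitude of the complex error between the target and the network output. A minimal sketch (variable names `target` and `output` are illustrative, not taken from the cited work):

```python
import numpy as np

def mse_loss(target, output):
    """MSE between complex target and complex equalizer output."""
    err = target - output
    return np.mean(np.abs(err) ** 2)  # |.|^2 = squared complex magnitude

# Toy QPSK-like targets and slightly distorted outputs.
target = np.array([1 + 1j, -1 + 1j, 1 - 1j])
output = np.array([0.9 + 1.1j, -1.0 + 0.8j, 1.2 - 1.0j])
print(mse_loss(target, output))
```

Using the complex magnitude (rather than squaring real and imaginary parts separately) keeps the loss a single real scalar, as required for gradient-based training of a complex-valued network.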
“…To realize rule selection (RS), the Group Lasso regularization is added to the objective function. Compared with other Lasso regularizations [35][36][37], it can induce row or column sparsity, thus producing sparsity of rules in a grouped manner, which makes rule selection possible [38]. Therefore, the objective function of each SFNN-1 contains two parts, that is, the mean square error (MSE) and the Group Lasso penalty term:…”
Section: First-order Sparse Tsk Nonstationary Fuzzy Neural Network (S...mentioning
confidence: 99%
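The Group Lasso penalty described above sums the L2 norms of parameter groups, so whole groups (here, one row of weights per fuzzy rule — an illustrative layout, not the cited paper's exact parameterization) are driven to zero together, pruning entire rules:

```python
import numpy as np

def group_lasso_penalty(W, lam=0.1):
    """Sum of row-wise L2 norms, scaled by lam: lam * sum_i ||W[i, :]||_2."""
    return lam * np.sum(np.linalg.norm(W, axis=1))

W = np.array([[0.5, -0.5],   # active rule
              [0.0,  0.0],   # rule zeroed out as a group -> contributes nothing
              [3.0,  4.0]])  # active rule
print(group_lasso_penalty(W, lam=1.0))
```

Unlike the plain L1 (Lasso) penalty, which zeros individual weights independently, the grouped norm is non-differentiable only where an entire row vanishes, which is exactly what makes it select or discard rules as units.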