2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)
DOI: 10.1109/isspit.2016.7886011
Informed Split Gradient Non-negative Matrix factorization using Huber cost function for source apportionment

Abstract: Source apportionment is usually tackled with blind Positive/Non-negative Matrix Factorization (PMF/NMF) methods. However, the obtained results may be poor due to the dependence between some rows of the second factor. We recently proposed to inform the estimation of this factor using some prior knowledge provided by chemists (some entries are set to fixed values) and the sum-to-one property of each row. These constraints were recently taken into account by using a parameterization which gathers all of them. …
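The informed constraints described in the abstract (expert-fixed entries in the second factor together with the sum-to-one property of each row) can be pictured as a projection applied after each update. The sketch below is only an illustration of that idea, not the parameterization actually used in the paper; the names `project_constraints`, `fixed_mask`, and `fixed_values` are hypothetical.

```python
import numpy as np

def project_constraints(H, fixed_mask, fixed_values):
    """Project a factor H onto the informed constraints:
    (i) entries flagged in fixed_mask keep their expert-provided values;
    (ii) each row sums to one (free entries are rescaled to absorb
    the remaining mass)."""
    H = np.maximum(H, 0.0)                    # non-negativity
    H[fixed_mask] = fixed_values[fixed_mask]  # expert-set entries
    free = ~fixed_mask
    # mass left over for the free entries of each row
    residual = np.maximum(
        1.0 - (H * fixed_mask).sum(axis=1, keepdims=True), 0.0)
    row_free_sum = (H * free).sum(axis=1, keepdims=True)
    scale = np.divide(residual, row_free_sum,
                      out=np.zeros_like(residual), where=row_free_sum > 0)
    return np.where(free, H * scale, H)
```

The paper instead folds these constraints into a single parameterization of the factor, so the optimization acts directly on the remaining free variables rather than projecting after each step.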

Cited by 3 publications (3 citation statements)
References 20 publications
“…In future work, we will investigate some outlier-robust extensions of these approaches, using a similar low-rank modeling. We will compare such a formalism to robust informed matrix factorization using parametric divergences [24], [25] or the Huber norm [26], which we recently proposed for another application [27]. Moreover, the proposed techniques need to know the low-rank subspace where the sensed phenomenon lies.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…where c represents the threshold parameter at which the loss switches between its L2-norm and L1-norm behavior. This function is convex and limits the influence of any single anomalous point (Chreiky et al, 2016). The Huber loss is insensitive to the outliers and noise contained in the data, which are often difficult to handle with the squared loss function (Du et al, 2012).…”
Section: Related Work (citation type: mentioning; confidence: 99%)
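For reference, the classical Huber function with threshold c is quadratic for small residuals and linear beyond c; the exact scaling used in the cited works may differ from this standard form:

```latex
\ell_c(x) =
\begin{cases}
  \tfrac{1}{2}\,x^{2},           & |x| \le c \quad (\ell_2\text{-like region}) \\
  c\,|x| - \tfrac{1}{2}\,c^{2},  & |x| > c   \quad (\ell_1\text{-like region})
\end{cases}
```

The linear branch is what caps the influence of any single large residual, which is the robustness property invoked above.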
“…As a consequence, robust NMF methods were proposed to deal with a predefined number of outliers. While some of them decompose the data matrix into the sum of a low-rank matrix and a sparse matrix, where the latter contains the outlying component [18], most of them consider modified cost functions as dissimilarity measures, which gave rise to flexible and robust algorithms, e.g., Bregman-NMF [19], α-NMF [20], β-NMF [21,22], αβ-NMF [23], Correntropy-NMF [24], and Huber-NMF [25] (note that the Huber cost function has also been considered for robust PMF [26]).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
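Huber-type NMF methods such as those cited above are often realized via iteratively reweighted least squares (IRLS): residuals beyond the threshold are down-weighted, and standard weighted multiplicative updates are applied. The sketch below illustrates that generic recipe only; it is not the informed split-gradient algorithm of this paper, and the names `huber_nmf` and `huber_weights` are hypothetical.

```python
import numpy as np

def huber_weights(R, c):
    """IRLS weights for the Huber loss: 1 in the quadratic zone,
    c/|r| in the linear (outlier) zone."""
    abs_r = np.abs(R)
    return np.where(abs_r <= c, 1.0, c / np.maximum(abs_r, 1e-12))

def huber_nmf(X, rank, c=1.0, n_iter=200, eps=1e-12, seed=0):
    """Robust NMF sketch: alternate IRLS reweighting with weighted
    multiplicative updates for X ≈ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        V = huber_weights(X - W @ H, c)            # down-weight outliers
        H *= (W.T @ (V * X)) / (W.T @ (V * (W @ H)) + eps)
        V = huber_weights(X - W @ H, c)            # refresh weights
        W *= ((V * X) @ H.T) / ((V * (W @ H)) @ H.T + eps)
    return W, H
```

An outlying entry of X then contributes with weight at most c/|r|, which tempers its pull on both factors relative to the squared loss.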