1979
DOI: 10.1080/00401706.1979.10489819

Lower Rank Approximation of Matrices by Least Squares With Any Choice of Weights

Abstract: Reduced rank approximation of matrices has hitherto been possible only by unweighted least squares. This paper presents iterative techniques for obtaining such approximations when weights are introduced. The techniques involve criss-cross regressions with careful initialization.
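
To make the criss-cross idea concrete: with data X (m x n), nonnegative weights W (m x n), and rank-k factors A (m x k) and B (n x k), the criterion is sum_ij w_ij (x_ij - (A B^T)_ij)^2, and each half-step of the iteration is an ordinary weighted least squares regression: rows of A on B with B held fixed, then rows of B on A with A held fixed. The NumPy sketch below is an illustrative reconstruction, not the authors' code; the SVD warm start (standing in for the paper's "careful initialization"), the small ridge guard, and the stopping rule are my assumptions.

    import numpy as np

    def weighted_low_rank(X, W, k, n_iter=100, tol=1e-8):
        # Minimize sum_ij W[i,j] * (X[i,j] - (A @ B.T)[i,j])**2 by
        # alternating ("criss-cross") weighted least squares regressions.
        m, n = X.shape
        # Warm start from the unweighted truncated SVD (an assumption here,
        # standing in for the paper's careful initialization).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        A = U[:, :k] * s[:k]
        B = Vt[:k].T.copy()
        ridge = 1e-12 * np.eye(k)   # guards against singular weight patterns
        prev = np.inf
        for _ in range(n_iter):
            # Row regressions: a_i solves (B^T diag(w_i) B) a_i = B^T diag(w_i) x_i.
            for i in range(m):
                G = B.T * W[i]                  # k x n, columns scaled by weights
                A[i] = np.linalg.solve(G @ B + ridge, G @ X[i])
            # Column regressions: b_j solves (A^T diag(w_j) A) b_j = A^T diag(w_j) x_j.
            for j in range(n):
                G = A.T * W[:, j]
                B[j] = np.linalg.solve(G @ A + ridge, G @ X[:, j])
            loss = np.sum(W * (X - A @ B.T) ** 2)
            if prev - loss < tol * max(prev, 1.0):  # relative-decrease test
                break
            prev = loss
        return A, B

Because each half-step solves its weighted least squares problem exactly, the criterion is non-increasing from one sweep to the next, which is the monotonicity behind the convergence results mentioned in the citing work below.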

Cited by 220 publications (110 citation statements) | References 17 publications

“…Wiberg [23] has proposed a method based on the weighted least squares technique, which was later extended by Shum et al. [20]. Gabriel and Zamir [7] proposed a method for subspace learning with any choice of weights, where each data point can have a different weight determined on the basis of reliability.…”
Section: Related Work (mentioning)
Confidence: 99%

“…The environmental indexes adjusted in this way are called L2 environmental indexes, because the L2 norm was used. The described zigzag algorithm is a version of the iterative algorithms existing in the literature; see, for example, Digby (1979), Gabriel and Zamir (1979), and Ng and Williams (2001). Pereira and Mexia (2010) proved the convergence of the zigzag algorithm and that the adjusted parameters could be seen as maximum likelihood estimators.…”
Section: The Zigzag Algorithm (mentioning)
Confidence: 99%

“…Mixing matrices and coefficients are learned in a supervised manner using a criss-cross regression algorithm [18], [15]. As the number of required parameters to represent shape and appearance within a given accuracy can greatly differ, two reduced-rank matrices R1 and R2 are also learned during the model construction process, coupling the two spaces.…”
Section: Learning the Bilinear Model (mentioning)
Confidence: 99%
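
One standard use of the arbitrary weights discussed in these citing papers is handling missing cells: give each missing entry weight zero so the fit is determined only by observed values. A quick check against the hypothetical weighted_low_rank sketch given after the abstract above:

    import numpy as np

    rng = np.random.default_rng(0)
    # Noisy rank-2 matrix with roughly 20% of cells treated as missing.
    A0 = rng.normal(size=(50, 2))
    B0 = rng.normal(size=(30, 2))
    X = A0 @ B0.T + 0.01 * rng.normal(size=(50, 30))
    W = (rng.random(X.shape) > 0.2).astype(float)   # 0 = missing, 1 = observed
    A, B = weighted_low_rank(X, W, k=2)
    print(np.sum(W * (X - A @ B.T) ** 2))  # weighted residual after the fit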