Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop
DOI: 10.1109/nnsp.1997.622430
A neural network approach to blind source separation

Cited by 3 publications (4 citation statements)
References 13 publications
“…h_i is the i-th element of the vector h(z[n]), and ′ denotes the first derivative. The maximum of this cost function can be obtained using a gradient algorithm [2], or a relative gradient algorithm [1,22]. Both approaches use the gradient of Equation (20), in which adj(·) is the adjugate of a matrix and…”
Section: Unsupervised Approach
confidence: 99%
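The gradient and relative-gradient updates mentioned in this excerpt follow the standard infomax/natural-gradient form W ← W + μ(I − g(y)yᵀ)W. Below is a minimal NumPy sketch under that assumption, restricted to real-valued signals for simplicity; the tanh nonlinearity, step size mu, and iteration count are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def relative_gradient_ica(x, mu=0.01, n_iter=200, seed=None):
    """Relative (natural) gradient ICA sketch: W <- W + mu*(I - g(y) y^T) W.

    x: (n_sources, n_samples) zero-mean mixed observations.
    Assumes super-Gaussian sources via g(y) = tanh(y); illustrative only.
    """
    rng = np.random.default_rng(seed)
    n, T = x.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # near-identity start
    for _ in range(n_iter):
        y = W @ x                    # current source estimates
        g = np.tanh(y)               # score-function surrogate
        # Relative-gradient update, averaged over all samples
        W += mu * (np.eye(n) - (g @ y.T) / T) @ W
    return W
```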
“…where we have used that z[n] = W^H[n]y[n]. The expression in Equation (22) admits an interesting interpretation by means of the nonlinear function g(z) = z^*(1 − |z|^2). In this case, Castedo and Macchi [7] have shown that the Bell and Sejnowski rule is equivalent to the Constant Modulus Algorithm (CMA) proposed by Godard [12].…”
Section: Unsupervised Approach
confidence: 99%
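For the CMA connection noted above, a minimal sketch of one Godard CMA (p = 2) tap update may be useful. The tap vector w, step size mu, and modulus target R2 are illustrative assumptions; this is the textbook CMA stochastic-gradient step, not code from the cited work.

```python
import numpy as np

def cma_step(w, x, mu=1e-3, R2=1.0):
    """One Godard CMA (p = 2) stochastic-gradient step.

    w: complex equalizer taps; x: complex input vector (same length).
    Descends J = (|y|^2 - R2)^2 with y = w^H x.
    """
    y = np.vdot(w, x)                              # output y = w^H x
    grad = (np.abs(y) ** 2 - R2) * np.conj(y) * x  # Wirtinger gradient dJ/dw*
    return w - mu * grad
```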
“…where h_i is the i-th element of the vector h(z[n]) and ′ denotes the first derivative. The maximum of this cost function can be obtained using a gradient algorithm [3] or a relative gradient algorithm [10], [11]. Both approaches use the gradient of Equation (16), which is obtained as follows…”
Section: B. Unsupervised Approach
confidence: 99%
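The adj(·) term appearing in the gradient excerpts above is the adjugate (classical adjoint). For an invertible matrix it satisfies adj(A) = det(A) · A⁻¹, which gives a one-line numerical sketch:

```python
import numpy as np

def adjugate(a):
    """Adjugate via adj(A) = det(A) * inv(A); valid only for invertible A."""
    return np.linalg.det(a) * np.linalg.inv(a)
```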
“…This means that the sources are recovered in the same order as they were transmitted. Taking this into account, Equation (11) implies that the optimum separation matrix produces a diagonal matrix Γ[n]; therefore, the mismatch of Γ[n] with respect to a diagonal matrix allows us to measure the variations in the channel.…”
Section: A. Decision Rule
confidence: 99%
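One simple way to quantify the "mismatch of Γ[n] with respect to a diagonal matrix" described in this excerpt is the ratio of off-diagonal to total Frobenius energy. The metric below is an assumed illustration, not the paper's exact decision statistic:

```python
import numpy as np

def diagonality_mismatch(gamma):
    """Off-diagonal Frobenius energy of Gamma[n] relative to its total energy.

    Returns 0 for a perfectly diagonal matrix; larger values suggest
    the channel has varied since the separation matrix was estimated.
    """
    off = gamma - np.diag(np.diag(gamma))
    return np.linalg.norm(off, 'fro') / np.linalg.norm(gamma, 'fro')
```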