Preface to the Third Edition xi
Preface to the Second Edition xiii
Preface to the First Edition xv

Part I Probability and Random Variables 1

1 The Meaning of Probability
1-1 Introduction 3
1-2 The Definitions 5
1-3 Probability and Induction 12
1-4 Causality versus Randomness 13

2 The Axioms of Probability
2-1 Set Theory 15
2-2 Probability Space 20
2-3 Conditional Probability
Problems 36

3 Repeated Trials
3-1 Combined Experiments
3-2 Bernoulli Trials
3-3 Asymptotic Theorems
3-4 Poisson Theorem and Random Points
Problems

4 The Concept of a Random Variable
4-1 Introduction
4-2 Distribution and Density Functions
In the context of coherent signal classification, a spatial smoothing scheme first suggested by Evans et al. and subsequently studied by Shan et al. is further investigated. It is proved here that by making use of a set of forward and complex-conjugated backward subarrays simultaneously, it is always possible to estimate any K directions of arrival using at most 3K/2 sensor elements. This is achieved by creating a smoothed array output covariance matrix that is structurally identical to a covariance matrix in some noncoherent situation. By applying eigenstructure-based techniques to this smoothed covariance matrix, it then becomes possible to correctly identify all directions of arrival irrespective of their correlation.
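The forward/conjugate-backward smoothing step described above can be sketched in NumPy as follows. This is only an illustrative sketch, not the authors' implementation; the function name, the subarray size `m`, and the input covariance `R` are assumptions.

```python
import numpy as np

def fb_spatial_smoothing(R, m):
    """Forward/conjugate-backward spatially smoothed covariance.

    R : (M, M) array covariance matrix; m : subarray size (m <= M).
    Averages the covariances of all forward subarrays together with
    their complex-conjugated, reversed (backward) counterparts.
    """
    M = R.shape[0]
    L = M - m + 1                        # number of forward subarrays
    J = np.fliplr(np.eye(m))             # exchange (reversal) matrix
    Rs = np.zeros((m, m), dtype=complex)
    for l in range(L):
        Rf = R[l:l + m, l:l + m]         # l-th forward subarray covariance
        Rs += Rf + J @ Rf.conj() @ J     # add conjugated backward counterpart
    return Rs / (2 * L)
```

For two fully coherent sources the unsmoothed covariance is rank one, while the smoothed matrix recovers rank two, which is what allows eigenstructure methods to resolve both arrivals.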
For matrices with all nonnegative entries, the Perron-Frobenius theorem guarantees the existence of an eigenvector with all nonnegative components. We show that the existence of such an eigenvector is also guaranteed for a very different class of matrices, namely real symmetric matrices with exactly two eigenvalues. We also prove a partial converse: among real symmetric matrices with more than two eigenvalues, there exist some having no nonnegative eigenvector.

A nonnegative vector is one whose components are all nonnegative. This concept has no place in pure linear algebra, as it is highly basis dependent. However, nonnegative vectors (and their cousins, positive vectors) sometimes crop up and prove useful in applications. For example, one consequence of the Perron-Frobenius theorem is that a matrix with nonnegative entries has a nonnegative (or even positive, under appropriate hypotheses) eigenvector, a fact of great consequence for, e.g., ranking pages in search engine results [2].

In this note, we prove that the existence of a nonnegative eigenvector is also guaranteed for a very different class of matrices, namely real symmetric matrices having only two distinct eigenvalues. Recall that a symmetric matrix has a set of orthogonal eigenvectors that span the ambient space; this is the only fact about symmetric matrices that we will need.

Let M ∈ R^{n×n} be our matrix of interest. Since we suppose M has only two eigenvalues, it has two eigenspaces V and W, which are orthogonal and satisfy V + W = R^n. Hence W = V^⊥ (with respect to the standard inner product on R^n) and vice versa. Thus the existence of a nonnegative eigenvector of M is an immediate corollary of the following proposition.

Proposition. For any subspace V ⊆ R^n, either V contains a nonzero, nonnegative vector or V^⊥ does.

Some commentary before commencing with the proof: although this is ostensibly a result about linear algebra, we have already noted that nonnegativity is inherently not a purely linear algebraic property. Hence it should not be surprising that the proof requires other ideas. It turns out that convexity is the key here.

Proof of Proposition. Define the set R^n_{≥0} of all nonnegative vectors; proving the proposition amounts to showing that V or V^⊥ intersects R^n_{≥0} in a nonzero vector. Because V and R^n_{≥0} are both
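The proposition can be checked numerically on concrete examples: asking whether a subspace contains a nonzero nonnegative vector is a linear-programming feasibility question. The sketch below is our own illustration, not part of the note; the function name and the Householder-reflection example are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def contains_nonneg(basis):
    """Return True if span(columns of basis) contains a nonzero
    nonnegative vector, tested as LP feasibility:
    find x >= 0 with (I - P_V) x = 0 and sum(x) = 1."""
    n = basis.shape[0]
    Q, _ = np.linalg.qr(basis)               # orthonormal basis for V
    P = Q @ Q.T                               # orthogonal projector onto V
    A_eq = np.vstack([np.eye(n) - P, np.ones((1, n))])
    b_eq = np.concatenate([np.zeros(n), [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)
    return res.status == 0                    # 0 = feasible, 2 = infeasible

# A real symmetric matrix with exactly two eigenvalues: a Householder
# reflection M = I - 2uu^T has eigenvalues -1 (on span u) and +1 (on u-perp).
u = np.array([1.0, -2.0, 1.0])
u /= np.linalg.norm(u)
M = np.eye(3) - 2 * np.outer(u, u)

V = u.reshape(-1, 1)                          # eigenspace for -1
W = np.linalg.svd(np.eye(3) - np.outer(u, u))[0][:, :2]  # basis for u-perp

# The proposition predicts at least one eigenspace has a nonneg eigenvector.
assert contains_nonneg(V) or contains_nonneg(W)
```

Here span(u) contains no nonnegative vector (its generators have mixed signs), but the orthogonal complement contains, e.g., (2, 1, 0), in line with the proposition.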
Space-time adaptive array processing has emerged as a key technology thrust area for the next generation of airborne radar systems due to its inherent potential for vastly improving moving target indicator (MTI) performance. Unfortunately, these performance gains come with a commensurate increase in on-line computational complexity if full degree-of-freedom (DOF) STAP processors are employed. In this paper we introduce a class of efficient STAP processors which exploit the fact that the full-DOF space-time clutter covariance matrix is rank deficient with respect to a coherent processing interval (CPI).
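One standard way to exploit such clutter-rank deficiency, sketched here purely for illustration (this is a generic eigenprojection weight, not the specific processors introduced in the paper; the function and its arguments are assumptions), is to null the dominant clutter subspace rather than invert the full space-time covariance:

```python
import numpy as np

def reduced_rank_stap(R, s, r):
    """Eigenprojection-style reduced-rank STAP weight sketch.

    R : (N, N) space-time covariance; s : space-time steering vector;
    r : assumed clutter rank (r << N when clutter is rank deficient).
    Projects s away from the r dominant (clutter) eigenvectors of R.
    """
    vals, vecs = np.linalg.eigh(R)            # ascending eigenvalues
    Uc = vecs[:, -r:]                          # r dominant clutter eigenvectors
    w = s - Uc @ (Uc.conj().T @ s)             # remove clutter-subspace component
    return w / (w.conj().T @ s)                # normalize for unit target response
```

The cost is dominated by an r-dimensional eigenstructure computation instead of a full N x N matrix inversion, which is the source of the on-line savings when r is much smaller than the number of DOFs.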
In this paper, we address the problem of element placement in a linear aperiodic array for use in spatial spectrum estimation. By making use of a theorem of Carathéodory, it is shown that, for a given number of elements, there exists a distribution of element positions which, for uncorrelated sources, results in spatial spectrum estimators superior to those otherwise achievable. The improvement is obtained by constructing an augmented covariance matrix, made possible by the choice of element positions, with dimension greater than the number of array elements. The augmented matrix is then used in any of the known spectrum estimation methods in conjunction with a correspondingly augmented search pointing vector. Examples are given to show the superior detection capability, the larger dynamic range from spectral peak to background level, the lower sidelobes, and the relatively low bias values when one of the known eigenstructure-based spectrum estimation techniques is used.

A recurring question in array design, for both signal reception and spatial spectrum estimation, is how to beneficially deploy the elements of a sparse array. Toward this purpose, we consider the problem of estimating the directions of arrival of K uncorrelated narrowband sources s_k(t), k = 1, 2, ..., K, spatially distributed in the directions θ_1, ..., θ_K. Let d_i represent the ith element location with respect to a reference point for an M-element linear array. Then the ith element output at time t can be written as [1]

x_i(t) = Σ_{k=1}^{K} s_k(t) e^{-j k0 d_i cos θ_k} + n_i(t),  i = 1, 2, ..., M    (1)

where k0 represents the wavenumber common to all sources and n_i(t) stands for additive white noise of spectral density N. In vector form, (1) becomes

X(t) = A s(t) + n(t)    (2)

where

X(t) = [x_1(t), x_2(t), ..., x_M(t)]^T,
s(t) = [s_1(t), s_2(t), ..., s_K(t)]^T,
n(t) = [n_1(t), n_2(t), ..., n_M(t)]^T.

Here T stands for the transpose and * represents the complex conjugate. Also A = [a(θ_1), a(θ_2), ...
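The signal model of (1)-(2) can be simulated directly. The following NumPy sketch builds the steering matrix A for a sparse linear layout and forms a sample covariance from snapshots; all parameter values (element positions, angles, SNR) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: M-element sparse linear array, K uncorrelated sources
M, K, T = 4, 2, 200                      # sensors, sources, snapshots
d = np.array([0.0, 1.0, 4.0, 6.0])       # element positions (half-wavelength units)
theta = np.deg2rad([60.0, 110.0])        # directions of arrival
k0 = np.pi                               # wavenumber for half-wavelength units

# Steering matrix A = [a(theta_1), ..., a(theta_K)],
# with a(theta)_i = exp(-j k0 d_i cos theta), as in (1)
A = np.exp(-1j * k0 * np.outer(d, np.cos(theta)))

# Uncorrelated unit-power source waveforms and additive white noise
s = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)

X = A @ s + n                            # snapshot matrix: X(t) = A s(t) + n(t)
R = X @ X.conj().T / T                   # sample covariance estimate
```

The sample covariance R is what the augmentation procedure operates on: the chosen positions d determine which correlation lags appear in R and hence how large an augmented matrix can be filled in.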
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations, citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.