ICASSP '83. IEEE International Conference on Acoustics, Speech, and Signal Processing
DOI: 10.1109/icassp.1983.1172202
Identification and estimation algorithms for a Markov chain plus AR process

Abstract: Identification and recursive filtering problems related to a signal described by a Markov chain plus autoregressive (MCPAR) process are considered.

Cited by 4 publications (10 citation statements); References 6 publications
“…The GPBA due to Ackerson and Fu (1970) and Chang and Athans (1978) are special cases. The result of Sugimoto and Ishizuka (1983) is entirely the same as that of Jaffer and Gupta (1971). For the second question, Jaffer and Gupta (1971) also proposed a method for obtaining the s^(n+1) conditional estimates at time k by updating the one-step past s^n conditional estimates at time k − 1, which will be called here the 'one-step measurement update method', where s denotes the number of Markov chain states. It is naturally noted that another measurement update method will exist so that the s^(n+1) conditional estimates at time k can be computed by updating the n-step past s conditional estimates at time k − n. This means that, in the framework of the GPBA approach, n − 1 measurement data can be further reprocessed for a case when n ≥ 2.…”
Section: Introduction (mentioning)
confidence: 85%
“…To circumvent this problem, there are some suboptimal algorithms, e.g. (a) the random sampling algorithm (RSA) (Akashi and Kumamoto, 1977), (b) the detection-estimation algorithm (DEA) (Tugnait and Haddad, 1979; Tugnait, 1982a; Mathews and Tugnait, 1983), (c) the generalized pseudo-Bayes algorithm (GPBA) (Ackerson and Fu, 1970; Jaffer and Gupta, 1971; Chang and Athans, 1978; Sugimoto and Ishizuka, 1983), and (d) the interacting multiple model algorithm (IMMA) (Blom, 1984, 1985; Blom and Bar-Shalom, 1988). It is worth noting here that the IMMA performs nearly as well as the second-order GPBA method with notably less computation (see also Chang and Bar-Shalom, 1987).…”
Section: Introduction (mentioning)
confidence: 99%
“…In the following discussion, for simplicity we omit formulas for MSE matrices (error covariances). a) GPB: A straightforward and probably the most natural implementation of this idea is the so-called generalized pseudo-Bayesian algorithm of order n (GPBn) [148, 321, 18]. They reduce the hypothesis tree by having a fixed memory depth such that all the hypotheses that are the same in the latest n time steps are merged, and thus each of the M filters runs M^(n-1) times at each recursion.…”
Section: GPB and IMM Merging Strategies (mentioning)
confidence: 99%
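The merging rule quoted above (collapse all hypotheses that agree over the latest n time steps, leaving each of the M filters running M^(n-1) times per recursion) can be made concrete with a small counting sketch. The helper names below are ours, not from the cited works:

```python
from itertools import product

def merged_hypothesis_count(M, n, k):
    """GPBn-style merging: of the M**k model sequences of length k,
    keep one hypothesis per distinct latest-n suffix."""
    return len({seq[-n:] for seq in product(range(M), repeat=k)})

def gpbn_cost(M, n):
    """Per-recursion bookkeeping under GPBn merging: mixture components
    retained between recursions, and total model-matched filter runs
    (each of the M filters runs M**(n-1) times)."""
    retained = M ** (n - 1)
    filter_runs = M * retained
    return retained, filter_runs
```

For example, with M = 2 models and depth n = 2, the 2^5 = 32 length-5 model sequences merge down to only 4 hypotheses, independent of the horizon k.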
“…They reduce the hypothesis tree by having a fixed memory depth such that all the hypotheses that are the same in the latest n time steps are merged, and thus each of the M filters runs M^(n-1) times at each recursion. The GPB1 and GPB2 algorithms are the most popular ones in this class [1, 148, 74, 321, 18]. Although for simplicity our discussion below is based on the GPB2 algorithm explicitly, it can be extended to the general GPBn case straightforwardly.…”
Section: GPB and IMM Merging Strategies (mentioning)
confidence: 99%
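The merge step these GPB-style algorithms rely on is moment matching: a bank of weighted Gaussian hypotheses is collapsed into a single Gaussian that preserves the mixture's mean and variance. A minimal scalar sketch (the function name is hypothetical, not from the cited works):

```python
def moment_match(weights, means, variances):
    """Collapse a weighted scalar Gaussian mixture into one Gaussian
    with the same first two moments -- the hypothesis-merging step of
    GPB1/GPB2-style algorithms (illustrative scalar case only)."""
    m = sum(w * mu for w, mu in zip(weights, means))
    v = sum(w * (var + (mu - m) ** 2)
            for w, mu, var in zip(weights, means, variances))
    return m, v
```

The spread-of-means term (mu - m)**2 is what inflates the merged variance when the hypotheses disagree; dropping it would understate the uncertainty after merging.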
“…High-order Markov chains are widely used for studying dependencies in stochastic sequences [1-14]. However, the number of parameters (transition probabilities) of an s-th order Markov chain grows exponentially as s increases, which complicates identification of this model in practice.…”
Section: Introduction (unclassified)
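The exponential parameter growth mentioned above is easy to quantify: an s-th order chain over m states has one transition distribution per length-s history (m^s of them), each with m − 1 free probabilities since rows sum to one. A short sketch, with a hypothetical helper name:

```python
def num_transition_params(m, s):
    """Free transition probabilities of an s-th order Markov chain on
    m states: m**s histories, each contributing m - 1 free entries
    (the last entry of each row is fixed by normalization)."""
    return m ** s * (m - 1)
```

Even for a binary chain (m = 2), raising the order from 1 to 5 multiplies the free parameter count from 2 to 32, which is the identification difficulty the passage points to.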