1992
DOI: 10.1007/bf00930646
High-speed digital filtering: Structures and finite wordlength effects

Abstract: This paper is a study of high-throughput filter structures such as block structures and their behavior in finite precision environments. Block structures achieve high throughput rates by using a large number of processors working in parallel. It has been believed that block structures which are relatively robust to round-off noise must also be robust to coefficient quantization errors. However, our research has shown that block structures, in fact, have high coefficient sensitivity. A potential problem that ar…

Cited by 12 publications (6 citation statements)
References 17 publications
“…However, as increases the complexities of the new filters and the scattered lookahead filters increase rapidly, whereas for the clustered lookahead filters, the complexity increases slowly. 6 On the other hand, algorithm transformation techniques are based upon pole-zero cancellations, which have potential drawbacks under finite-arithmetic conditions [9], [10]. In addition, the clustered lookahead technique requires that the filters be realized with direct-form structures, which generally have problems under finite-arithmetic conditions, especially when stability is concerned [28].…”
Section: Design Examples
confidence: 99%
“…To the former belong the well-known clustered and scattered lookahead techniques and block realization; see, e.g., [6]- [8] for a review and a comprehensive reference list. Algorithm transformation techniques are based upon pole and zero cancellations, which can be achieved theoretically, but under finite-arithmetic conditions, the cancellations become inexact, which may impose problems such as increased coefficient sensitivity and time-variant behavior [9], [10]. Pole and zero cancellations can be circumvented by using constrained filter-design techniques in which the denominator polynomial of the transfer function is restricted to be a function of .…”
Section: Introduction
confidence: 99%
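The inexact pole-zero cancellation that these citing papers point to can be sketched for the simplest case. This is an illustrative example, not taken from the paper: a first-order IIR filter H(z) = 1/(1 - a·z⁻¹) is pipelined by a two-stage look-ahead, multiplying numerator and denominator by (1 + a·z⁻¹) to get H(z) = (1 + a·z⁻¹)/(1 - a²·z⁻²). The added pole at z = -a is cancelled by the added zero only if the numerator coefficient a and the denominator coefficient a² are represented exactly; the coefficient value and the fixed-point quantization model below are assumptions chosen for illustration.

```python
# Illustrative sketch (hypothetical coefficient a and 8-bit quantization
# model): two-stage look-ahead of H(z) = 1/(1 - a z^-1) rewritten as
# H(z) = (1 + a z^-1)/(1 - a^2 z^-2). With exact coefficients the extra
# pole at z = -a is cancelled; after independent quantization of a and
# a^2 the cancellation becomes inexact, as noted in the quotes above.

def quantize(x, bits=8):
    """Round a coefficient to `bits` fractional bits (a simple model)."""
    scale = 1 << bits
    return round(x * scale) / scale

def impulse_response_first_order(a, n):
    # y[k] = a*y[k-1] + x[k], driven by a unit impulse
    y, h = 0.0, []
    for k in range(n):
        y = a * y + (1.0 if k == 0 else 0.0)
        h.append(y)
    return h

def impulse_response_lookahead(b1, a2, n):
    # y[k] = a2*y[k-2] + x[k] + b1*x[k-1]  (two-stage look-ahead form)
    y1 = y2 = 0.0
    h = []
    for k in range(n):
        x0 = 1.0 if k == 0 else 0.0
        x1 = 1.0 if k == 1 else 0.0
        y = a2 * y2 + x0 + b1 * x1
        y2, y1 = y1, y
        h.append(y)
    return h

a, n = 0.95, 50
h_ref = impulse_response_first_order(a, n)
h_exact = impulse_response_lookahead(a, a * a, n)       # exact: cancels
h_quant = impulse_response_lookahead(quantize(a), quantize(a * a), n)
err_exact = max(abs(x - y) for x, y in zip(h_ref, h_exact))
err_quant = max(abs(x - y) for x, y in zip(h_ref, h_quant))
```

With exact coefficients the look-ahead filter reproduces the first-order impulse response; after quantization, a² rounds to a value that is not the square of the rounded a, so a residual (uncancelled) pole perturbs the response.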
“…Further, look-ahead filters are based on pole and zero cancellations. When the filter coefficients are quantized, these cancellations become inexact which may impose problems [28], [29]. For the new structures, such problems do not exist.…”
Section: B. Design Example
confidence: 99%
“…The loop(s) that determine the maximal sample frequency is called the critical loop(s). From (1) we see that the maximal sample frequency can be increased either by increasing the number of delay elements in the loops or by decreasing the total latency of the operations. We want to increase the maximal sample frequency for two main reasons: for high-speed applications, naturally, but also for low-power applications, as any excess speed can be traded for a lower power consumption by decreasing the power supply voltage [2].…”
Section: Introduction
confidence: 99%
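The relation quoted above between loop delays, loop latency, and maximal sample frequency is the standard iteration-period bound for recursive signal-flow graphs. A minimal sketch, with hypothetical loop data (latencies and delay counts are made up for illustration):

```python
# Sketch of the iteration-period bound behind the quoted eq. (1):
# each recursive loop l has total operation latency T_l and N_l delay
# elements; T_inf = max_l(T_l / N_l) and f_max = 1 / T_inf. The loop(s)
# attaining the maximum are the critical loop(s). Loop data here is
# hypothetical.

def iteration_bound(loops):
    """loops: list of (latency, n_delays) pairs.
    Returns (T_inf, list of critical loops)."""
    t_inf = max(t / n for t, n in loops)
    critical = [(t, n) for t, n in loops if t / n == t_inf]
    return t_inf, critical

# Three hypothetical loops: adding a delay to a loop (second entry)
# halves its contribution to the bound, matching the quoted remark that
# f_max rises with more delays in the loops or lower operation latency.
loops = [(4, 1), (6, 2), (5, 1)]
t_inf, critical = iteration_bound(loops)
f_max = 1 / t_inf
```

Here the bounds per loop are 4, 3, and 5 time units, so the third loop is critical and f_max = 0.2 samples per time unit.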
“…Among the algorithm transformation techniques are scattered look-ahead, clustered look-ahead, and block processing [14]. These are based on pole-zero cancellations, which become inexact under finite-arithmetic conditions and may lead to time-varying behavior and higher coefficient sensitivity [1]. These problems can be avoided by using constrained filter design techniques.…”
Section: Introduction
confidence: 99%