2010
DOI: 10.1109/tvlsi.2009.2020396
New Architectural Design of CA-Based Codec

Cited by 8 publications (4 citation statements)
References 10 publications
“…Several error detecting and correcting codes are already available. Many adjacent error correcting codes [3], [4], [5] and CA-based error detecting and correcting codes [6], [7], [8] have already been introduced to detect and correct adjacent errors in communication and storage systems. Alternatively, Bose-Chaudhuri-Hocquenghem (BCH) code [9] and Reed Solomon (RS) code [10], [11] can protect MBUs.…”
Section: Introduction
confidence: 99%
“…Only one rule vector for each n-length CA has been provided in [3]. A new architectural design of a CA-based codec built on linear maximum-length CA has been proposed in [5]. In [2], the authors propose an algorithm for determining minimal-cost n-cell maximum-length CA of degree up to 500.…”
Section: Introduction
confidence: 99%
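The maximum-length CA mentioned in these excerpts are conventionally realized as hybrid rule-90/150 linear CA under null boundary conditions; a maximum-length n-cell CA cycles through all 2^n − 1 nonzero states. A minimal sketch of that idea (illustrative only; the brute-force rule-vector search below is an assumption for demonstration, not the construction from the cited paper):

```python
from itertools import product

def ca_step(state, rules):
    """One synchronous update of a null-boundary 1-D linear CA.
    rules[i] is 90 (left XOR right) or 150 (left XOR self XOR right)."""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        centre = state[i] if rules[i] == 150 else 0
        nxt.append(left ^ centre ^ right)
    return nxt

def cycle_length(rules, seed, limit):
    """Steps until `seed` recurs, or None if it never recurs within `limit`."""
    s = list(seed)
    for length in range(1, limit + 1):
        s = ca_step(s, rules)
        if s == seed:
            return length
    return None

# Exhaustively search the 2^4 hybrid 90/150 rule vectors for n = 4.
# A maximum-length CA visits every nonzero state: period 2^4 - 1 = 15.
n = 4
seed = [0, 0, 0, 1]
best = max(
    cycle_length(list(rv), seed, 2 ** n) or 0
    for rv in product((90, 150), repeat=n)
)
print(best)  # 15: a maximum-length rule vector exists for n = 4
```

For small n the search is instant; the cited work's point is precisely that finding such rule vectors for large n (up to degree 500) needs a smarter algorithm than this brute force.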
“…Although these codes are much more complex than standard DEC codes, they could be very useful when transmitting both real-time and non-real-time data. The same goes for RS codes, as well as for all other codes that can correct double (spotty) byte errors [14]–[16].…”
confidence: 96%
“…The reason is that these codes use finite field (FF) arithmetic, which is not supported by modern processors. Hence, to achieve high throughputs, the codes from [6]–[16] must be implemented in dedicated hardware (e.g. the software-based DEC-BCH decoders need several tens of clock cycles to process one bit [17], [18]).…”
confidence: 99%
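The finite-field bottleneck described in this excerpt is easy to see in software: a single GF(2^8) multiplication already costs a shift-and-XOR loop iteration per bit, which is why high-throughput BCH/RS codecs end up in dedicated hardware. A minimal sketch, using the polynomial x^8 + x^4 + x^3 + x + 1 from FIPS-197 as a concrete example (an assumption for illustration; actual RS codecs may use other primitive polynomials):

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiply a and b in GF(2^8): carry-less multiply with interleaved
    reduction modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    r = 0
    while b:
        if b & 1:          # add (XOR) the current shifted copy of a
            r ^= a
        a <<= 1
        if a & 0x100:      # degree reached 8: reduce by the field polynomial
            a ^= poly
        b >>= 1
    return r

# Worked example from FIPS-197: {57} * {83} = {c1}
print(hex(gf256_mul(0x57, 0x83)))  # 0xc1
```

Eight data-dependent iterations per byte-multiply, with no native processor instruction to fall back on, is the cost the excerpt alludes to; hardware implementations collapse this loop into a single cycle of XOR trees.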