2015 Information Theory and Applications Workshop (ITA)
DOI: 10.1109/ita.2015.7308978
Fault-resilient decoders and memories made of unreliable components

Cited by 12 publications (8 citation statements)
References 28 publications
“…A similar SF result shows that the errors introduced in estimating Markov random field models can be partially canceled and benefit end-to-end inference performance [36]. SF effects due to noise in computational elements, rather than graphical model structure errors as here, have been observed in [27], [29], [37] and later specifically in LDPC decoders [23], [25].…”
Section: B. Performance Analysis (supporting)
confidence: 62%
“…SF in decoding was observed with transient errors in computation, rather than with missing connections, initially in memory recall [27], [29] and then in communications [23], [30].…”
mentioning
confidence: 99%
“…In addition, a simple probabilistic gradient descent bit-flipping decoder, recently proposed by Al Rasheed et al. [21], achieves a high level of fault tolerance. Recently, Vasić et al. [22] showed that the probabilistic behavior of the Gallager-B decoder caused by unreliable components can lead to improved performance. This has resulted in increased interest in hard-decision decoders.…”
Section: Introduction (mentioning)
confidence: 99%
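The probabilistic gradient-descent bit-flipping idea mentioned in the statement above can be illustrated with a toy sketch. This is not the algorithm of Al Rasheed et al. [21]; it is a minimal hard-decision variant under assumed conventions: a small (7,4) Hamming parity-check matrix `H` stands in for an LDPC code, the "inversion energy" of a bit is its disagreement with the channel output plus the unsatisfied checks touching it, and each maximal-energy bit is flipped only with probability `p_flip`.

```python
import random

# Parity-check matrix of the (7,4) Hamming code (illustrative choice;
# the cited works use LDPC codes).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(H, x):
    """Hard-decision syndrome: one parity bit per check."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def pgdbf_decode(H, y, p_flip=0.7, max_iters=100, seed=0):
    """Probabilistic gradient-descent bit-flipping (toy sketch).

    Each iteration, every bit with maximal 'inversion energy' (disagreement
    with the channel output plus unsatisfied checks touching it) is flipped
    only with probability p_flip; the randomness helps the decoder escape
    fixed points of the deterministic flipping rule.
    """
    rng = random.Random(seed)
    x = list(y)
    n = len(x)
    for _ in range(max_iters):
        s = syndrome(H, x)
        if not any(s):
            break  # valid codeword reached
        energy = [(x[v] != y[v]) +
                  sum(s[c] for c, row in enumerate(H) if row[v])
                  for v in range(n)]
        e_max = max(energy)
        for v in range(n):
            if energy[v] == e_max and rng.random() < p_flip:
                x[v] ^= 1  # probabilistic flip of a candidate bit
    return x

print(pgdbf_decode(H, [0] * 7))  # noiseless word decodes to itself
# → [0, 0, 0, 0, 0, 0, 0]
```

The random flip probability is what distinguishes this sketch from plain gradient-descent bit flipping: when several bits tie at maximal energy, only a random subset is flipped, so repeated runs of the same state can take different trajectories.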
“…We first observed this surprising effect in the case of von Neumann failures [20], [39], showing that message perturbations caused by gate failures can help the decoder escape from trapping sets. Furthermore, as the number of iterations grows to infinity, all trapping sets break and the decoding process always converges to a valid codeword.…”
Section: Definition (mentioning)
confidence: 73%
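The mechanism described above can be sketched as a Gallager-B decoder whose computed messages are flipped with some probability, modeling von Neumann-type transient gate failures. This is a hypothetical toy, not the construction analyzed in the cited works: the (7,4) Hamming code, the threshold `b`, and the majority tentative decision are all illustrative assumptions.

```python
import random

# Parity-check matrix of the (7,4) Hamming code (illustrative choice;
# the cited works analyze LDPC codes).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def gallager_b_noisy(H, y, alpha=0.0, b=1, max_iters=20, seed=1):
    """Gallager-B message passing in which every computed message is
    flipped with probability alpha (von Neumann transient failures)."""
    rng = random.Random(seed)
    m, n = len(H), len(H[0])
    checks = [[v for v in range(n) if H[c][v]] for c in range(m)]
    nbrs = [[c for c in range(m) if H[c][v]] for v in range(n)]
    msg_vc = {(v, c): y[v] for v in range(n) for c in nbrs[v]}
    x = list(y)
    for _ in range(max_iters):
        # Check-to-variable: XOR of the other incoming messages.
        msg_cv = {}
        for c in range(m):
            for v in checks[c]:
                val = 0
                for u in checks[c]:
                    if u != v:
                        val ^= msg_vc[(u, c)]
                if rng.random() < alpha:   # transient gate failure
                    val ^= 1
                msg_cv[(c, v)] = val
        # Variable-to-check: channel bit unless >= b other checks disagree.
        for v in range(n):
            for c in nbrs[v]:
                disagree = sum(1 for d in nbrs[v]
                               if d != c and msg_cv[(d, v)] != y[v])
                val = 1 - y[v] if disagree >= b else y[v]
                if rng.random() < alpha:   # transient gate failure
                    val ^= 1
                msg_vc[(v, c)] = val
        # Tentative decision: majority vote of channel bit and check messages.
        x = [1 if sum([y[v]] + [msg_cv[(c, v)] for c in nbrs[v]]) * 2
                  > 1 + len(nbrs[v]) else 0
             for v in range(n)]
        if all(sum(H[c][u] * x[u] for u in range(n)) % 2 == 0
               for c in range(m)):
            break   # valid codeword reached
    return x

# A single channel error is corrected even by the noiseless decoder.
print(gallager_b_noisy(H, [0, 0, 0, 0, 0, 1, 0], alpha=0.0))
# → [0, 0, 0, 0, 0, 0, 0]
```

With `alpha > 0`, each message is occasionally perturbed, which is the kind of randomness the quoted passage credits with breaking trapping sets: a state in which the noiseless decoder would be stuck can be knocked onto a trajectory that reaches a valid codeword.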