2013 IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors
DOI: 10.1109/asap.2013.6567594

A low-power Content-Addressable Memory based on clustered-sparse networks

Abstract: A low-power Content-Addressable Memory (CAM) is introduced, employing a new mechanism for associativity between the input tags and the corresponding address of the output data. The proposed architecture is based on a recently developed clustered-sparse network using binary-weighted connections that, on average, eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower compared to that of a conventional…
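The abstract describes an associativity mechanism in which a clustered-sparse network with binary-weighted connections prunes most candidate rows before any full tag comparison takes place. The Python sketch below illustrates only that pruning idea; the class name, the sub-tag partitioning, and the link tables are illustrative assumptions, not the paper's circuit.

# Illustrative sketch (hypothetical names): a sparse binary mapping from
# tag sub-fields ("clusters") to candidate rows prunes most of the
# parallel comparisons a conventional CAM performs on every search.
from collections import defaultdict

class SparseCAM:
    def __init__(self, tag_bits=16, cluster_bits=4):
        self.cluster_bits = cluster_bits
        self.n_clusters = tag_bits // cluster_bits
        self.entries = []                          # (tag, address) pairs
        # One binary-weighted link table per cluster: sub-tag -> row set.
        self.links = [defaultdict(set) for _ in range(self.n_clusters)]

    def _subtags(self, tag):
        mask = (1 << self.cluster_bits) - 1
        return [(tag >> (i * self.cluster_bits)) & mask
                for i in range(self.n_clusters)]

    def write(self, tag, address):
        row = len(self.entries)
        self.entries.append((tag, address))
        for i, sub in enumerate(self._subtags(tag)):
            self.links[i][sub].add(row)            # set one binary link

    def search(self, tag):
        # Intersect per-cluster candidate sets; only survivors get the
        # full tag comparison a conventional CAM would run on all rows.
        subs = self._subtags(tag)
        candidates = set(self.links[0][subs[0]])
        for i in range(1, self.n_clusters):
            candidates &= self.links[i][subs[i]]
        return [self.entries[row][1] for row in candidates
                if self.entries[row][0] == tag]

cam = SparseCAM()
cam.write(0xBEEF, address=7)
print(cam.search(0xBEEF))                          # -> [7]

Writing an entry sets one binary link per cluster; a search intersects the per-cluster candidate sets, so only the few surviving rows need the word-wide comparison, which is where the claimed dynamic-energy saving comes from.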


Cited by 20 publications (7 citation statements). References 14 publications.

Citation statements (ordered by relevance):
“…We propose to replace all the arithmetical-integer computations by logical equations, that is, to define a full binary neural network model. This property, which was first introduced in Chavet et al. [2012] and later partially used to design content-addressable memories in Jarollahi et al. [2013], is detailed in this section. The proposed fully binary model removes the WTA step while achieving the same performance as the enhanced GBNN model [Gripon and Berrou 2012].…”
Section: Full Binary Computation
confidence: 91%
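The quoted passage replaces integer accumulation and winner-take-all (WTA) selection with purely logical equations. A minimal NumPy sketch of that AND-of-ORs retrieval rule follows, assuming a boolean connection tensor W of shape (clusters, neurons, clusters, neurons); this illustrates the principle, not Chavet et al.'s exact formulation.

import numpy as np

# Hedged sketch of the full-binary retrieval rule quoted above: instead
# of summing integer contributions and running WTA, a neuron stays
# active iff at least one active neuron in every other cluster connects
# to it (AND over clusters of OR within each cluster). Shapes assumed.
def binary_retrieval_step(active, W):
    """active: bool (C, N); W: bool (C, N, C, N) connection tensor."""
    C, N = active.shape
    new_active = np.zeros_like(active)
    for c in range(C):
        for n in range(N):
            # OR: any active, connected neuron in each cluster?
            support = (W[c, n] & active).any(axis=1)   # shape (C,)
            support[c] = True           # no constraint from own cluster
            new_active[c, n] = support.all()           # AND over clusters
    return new_active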
“…The indices (positions) of the bits selected for extraction are consistent across all inputs. In databases, where a field receives inputs with high degrees of similarity, an SCN-based associative memory still generates the correct results, as shown in [10]. However, a higher degree of pattern similarity results in a larger number of output search results, which include the desired ones.…”
Section: A. Training
confidence: 99%
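The fixed extraction indices the passage mentions can be shown with a short sketch: the same bit positions are sampled from every input word, so highly similar inputs yield overlapping sub-patterns and land in overlapping candidate sets. The positions and widths below are illustrative assumptions.

# Illustrative fixed-index extraction: the same bit positions are read
# from every input, so similar inputs share most extracted bits. The
# positions below are arbitrary assumptions for the example.
POSITIONS = (0, 5, 9, 12)

def extract_subtag(word):
    """Pack the bits of `word` found at POSITIONS into a small key."""
    return sum(((word >> p) & 1) << i for i, p in enumerate(POSITIONS))

# Two inputs differing only in an unsampled bit map to the same key,
# so both fall into overlapping candidate sets, as [10] observes.
print(extract_subtag(0b1011000100101), extract_subtag(0b1011000100111))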
“…The same authors implement SUM-OF-MAX in [21], which runs 1.9× faster than [20], since bitwise operations are used in place of the resource-demanding module required by SUM-OF-SUM. In [22], the same group of authors also develops a content-addressable memory using GBNNs that saves 90% of the energy consumption. Larras et al. [23] implement an analog version of the network that consumes 1165× less energy and is 2× more efficient in both circuit area and speed, compared with an equivalent digital circuit.…”
Section: B. Related Work
confidence: 99%
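The two decoding rules contrasted in the passage differ only in how per-cluster contributions are pooled, which is why SUM-OF-MAX reduces to bitwise logic. A hedged NumPy sketch, with assumed array shapes and own-cluster exclusion omitted for brevity:

import numpy as np

# Hedged sketch of the two GBNN decoding rules named above. SUM-OF-SUM
# adds every incoming signal; SUM-OF-MAX caps each cluster's
# contribution at one, which is why it reduces to bitwise OR plus a
# popcount in hardware.
def sum_of_sum(active, W, c, n):
    # active: bool (C, N); W: bool (C, N, C, N)
    return int((W[c, n] & active).sum())               # pool all signals

def sum_of_max(active, W, c, n):
    return int((W[c, n] & active).any(axis=1).sum())   # one vote/cluster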