Cited by 71 publications (4 citation statements)
References 35 publications
“…These noisy images were generated using Matlab. Remarkably, we got perfect recall: error zero, so our GP-generated AM had better behavior compared to the morphological and the alpha-beta models [15] and [34].…”
Section: Results and Analysis
confidence: 96%
“…If a distorted version of a pattern, denoted as X, is fed to M and the obtained output is exactly Y k , then recall is considered perfect. The simplicity of AM models is the result of great development carried out over the last 50 years; for some examples, refer to [6,22,24,14,15,25,23,31]. Most of these models have several limitations: limited storage capacity, difficulty dealing with more than one type of pattern (binary, integer or real-valued), lack of robustness to different kinds of noise (additive, subtractive, mixed, Gaussian, etc.…”
Section: Introduction
confidence: 99%
“…For instance, the studies in [6][7][8][9][10][11][12] are based on the notion of Strong Lattice Independence (SLI), following the conjecture in [13] that SLI vectors are affine independent vectors and thus their convex hull defines a simplex. Using lattice auto-associative memories (LAAM) [14,15] built from the hyperspectral image data, sets of SLI vectors were induced and used as endmembers. A recent study [10] has shown how to obtain sets of affine independent vectors from the rows and columns of the LAAM constructed using the hyperspectral data.…”
Section: Introduction
confidence: 99%
“…Morphological associative memories [42], instead of summing the products of input values and synaptic weights, take the maximum or the minimum of the sums of the values and their corresponding synaptic weights; the properties of morphological neural networks are therefore drastically different from those of traditional neural network models. For pattern recall, morphological associative memories succeed even under erosive noise [54] (when the vector components are less than or equal to the original components) or under dilative noise (when the components are greater than or equal to the original components).…”
Section: Ritter's Morphological Associative Memories (1998)
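The max/min-of-sums mechanism quoted above can be made concrete. The following is a minimal NumPy sketch, not the cited authors' code: it builds the standard min (W) and max (M) morphological memories from difference matrices and recalls with max-plus and min-plus products, respectively. The function names and the toy patterns are my own illustrative choices.

```python
import numpy as np

def train_morphological(X, Y):
    """Build the min (W) and max (M) morphological memories.

    X: (n_patterns, n_in), Y: (n_patterns, n_out).
    W_ij = min_k (y_i^k - x_j^k),  M_ij = max_k (y_i^k - x_j^k).
    """
    D = Y[:, :, None] - X[:, None, :]      # (k, n_out, n_in) outer differences
    return D.min(axis=0), D.max(axis=0)    # W, M

def recall_W(W, x):
    # max-plus product: y_i = max_j (W_ij + x_j); tolerant of erosive noise
    return (W + x[None, :]).max(axis=1)

def recall_M(M, x):
    # min-plus product: y_i = min_j (M_ij + x_j); tolerant of dilative noise
    return (M + x[None, :]).min(axis=1)

# Autoassociative toy example: store two real-valued patterns.
X = np.array([[1., 0., 2.],
              [0., 3., 1.]])
W, M = train_morphological(X, X)
print(recall_W(W, X[0]))   # → [1. 0. 2.]  (perfect recall of stored pattern)
```

Note how recall replaces the usual dot product (sum of products) with a maximum or minimum over sums, which is exactly the lattice operation the quoted passage describes; on noise-free stored patterns, both memories recall perfectly.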