DOI: 10.4995/thesis/10251/18066

Aportaciones al modelado conexionista de lenguaje y su aplicación al reconocimiento de secuencias y traducción automática (Contributions to connectionist language modelling and its application to sequence recognition and machine translation)

Abstract: To Josepa, for supporting me during these last years of writing. To my parents, Natividad and Crescencio, and my siblings, José and Nati, for believing in me. ACKNOWLEDGEMENTS I want to take this space to thank the support, criticism, and good intentions I have found in people along the road to obtaining the title of Doctor. I thank my friends from Aldaia, whom I have badly neglected in these last years, but who still believe in me. To Germán, Jorge and Jesús, fellow travellers during my degree, and compa…

Cited by 1 publication (12 citation statements)
References 114 publications (224 reference statements)
“…NN LMs [Bengio et al. 2003; Castro et al. 2001a; Castro and Prat 2003; Schwenk 2007; Zamora-Martínez 2012] are easy to understand in this context, since they just take advantage of the general capability of artificial neural networks (ANNs) to estimate posterior probabilities. In order to estimate p(w|h), a suitable representation of h is required to feed the network inputs, and a way to classify among the different words of the vocabulary Ω is also required.…”
Section: Neural Network N-grams (NNLMs)
Citation type: mentioning (confidence: 99%)
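
The quoted statement summarizes the NN LM idea: encode the history h at the network input and let the network output a posterior distribution over the vocabulary Ω, so that p(w|h) is read off directly. Below is a minimal sketch of such a feedforward n-gram NN LM, in the spirit of Bengio et al. 2003, written in PyTorch; the class name, layer sizes, and usage are illustrative assumptions, not the architecture of the cited thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForwardNNLM(nn.Module):
    # Bengio-style n-gram NN LM: estimates p(w | h) from the (n-1)-word history h.
    # All sizes (embedding and hidden dimensions) are illustrative choices only.
    def __init__(self, vocab_size, context_size=3, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)           # word projections
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, vocab_size)            # scores over Ω

    def forward(self, history):
        # history: (batch, context_size) tensor of word indices encoding h
        e = self.embed(history).flatten(start_dim=1)               # concatenate the projections
        h = torch.tanh(self.hidden(e))
        return F.log_softmax(self.output(h), dim=-1)               # log p(w | h)

# Usage: score a batch of 3-word histories over a 10,000-word vocabulary.
model = FeedForwardNNLM(vocab_size=10_000)
histories = torch.randint(0, 10_000, (8, 3))
log_probs = model(histories)                                       # shape (8, 10_000)
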
“…• use the shortlist approach at the input layer. The use of the shortlist approach at the output layer was described in Section 6.3.3 and is widely reported in the literature, but it can also be used at the input layer as described in [Zamora-Martínez 2012];…”
Section: NN-
Citation type: mentioning (confidence: 99%)
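
The shortlist idea restricts which words get their own network parameters. At the output layer this means producing scores only for the most frequent words; the statement above notes that, as described in [Zamora-Martínez 2012], it can also be applied at the input layer, where history words outside the shortlist collapse onto a shared representation. The sketch below illustrates that input-side mapping; the function name, the frequency-based shortlist, and the single shared out-of-shortlist index are assumptions for illustration, not the exact scheme of the thesis.

import torch

def map_to_input_shortlist(history, shortlist_size):
    # Collapse any history word outside the input shortlist onto one shared index.
    # Assumes indices 0 .. shortlist_size-1 are the in-shortlist (e.g. most frequent) words.
    oos_index = shortlist_size                                     # shared out-of-shortlist slot
    return torch.where(history < shortlist_size,
                       history,
                       torch.full_like(history, oos_index))

# Usage: the input embedding table then needs only shortlist_size + 1 rows.
histories = torch.randint(0, 10_000, (8, 3))
mapped = map_to_input_shortlist(histories, shortlist_size=2_000)
# input_embed = torch.nn.Embedding(2_000 + 1, 64)                 # smaller input table

Keeping a smaller table at the input in this way is independent of how the output layer treats the vocabulary, which may still cover all of Ω or use its own, separate shortlist.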