Machine Learning Proceedings 1990
DOI: 10.1016/b978-1-55860-141-3.50007-9

A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping

Abstract: The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently outperforms ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units be…
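The comparison the abstract describes translates naturally into code. Below is a minimal sketch under stated assumptions: scikit-learn's DecisionTreeClassifier (which implements CART, a close relative of ID3) stands in for ID3, MLPClassifier stands in for backpropagation, and the seven-letter window, toy word list, and one-phoneme-per-letter labels are invented for illustration, not the paper's NETtalk-style data. The distributed output code of Sejnowski and Rosenberg is omitted for brevity.

```python
# Sketch: tree learner vs. backprop learner on a toy letter-window ->
# phoneme task. All data here is invented; DecisionTreeClassifier
# (CART) is only a stand-in for ID3.
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder

# Toy corpus: (word, phonemes) pairs, one phoneme symbol per letter.
words = [("cat", "k@t"), ("cot", "kot"), ("bat", "b@t"), ("bot", "bot")]

def windows(word, size=7):
    """Slide a window centered on each letter, padding with '_'."""
    pad = "_" * (size // 2)
    padded = pad + word + pad
    return [list(padded[i:i + size]) for i in range(len(word))]

X, y = [], []
for word, phonemes in words:
    X.extend(windows(word))   # one window per letter
    y.extend(phonemes)        # one phoneme label per window

enc = OneHotEncoder(handle_unknown="ignore")
X_enc = enc.fit_transform(X)

tree = DecisionTreeClassifier(random_state=0).fit(X_enc, y)
net = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000,
                    random_state=0).fit(X_enc, y)

print("tree training accuracy:", tree.score(X_enc, y))
print("net  training accuracy:", net.score(X_enc, y))
```

On the paper's real task the interesting number is of course generalization accuracy on held-out words, not the training scores printed here.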

Cited by 49 publications (30 citation statements). References 9 publications.
“…A noteworthy implementation of such a decomposition is the recent DNN 'Uber-Net' (Kokkinos, 2016), which solves 7 vision-related tasks (boundary, surface normals, saliency, semantic segmentation, semantic boundary and human parts detection) with a single multi-scale DNN network to reduce the memory footprint. It can be assumed that such multi-task training improves convergence speed and generalization to unseen data, something that has already been observed in other multi-task setups related to speech processing, vision and maze navigation (Bilen & Vedaldi, 2016; Caruana, 1998; Dietterich, Hild, & Bakiri, 1990, 1995; Mirowski et al., 2016).…”
Section: Box 1 Deep Neural Network
confidence: 77%
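Hypothesis (b) in the abstract and the multi-task setups in the quotation above rest on the same mechanism: several output units trained against one shared hidden layer. Below is a minimal sketch of that idea; the 12-bit inputs and the three related output bits are invented for illustration (MLPClassifier treats a 2-D binary target as a multilabel problem, so all outputs share the hidden layer).

```python
# Sketch of shared hidden units: one hidden layer serves several
# binary output units at once. All data here is invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 12)).astype(float)

# Three output bits that reuse overlapping input features, so the
# hidden layer can learn structure shared across outputs.
Y = np.column_stack([
    X[:, 0].astype(int),
    (X[:, 0] * X[:, 1]).astype(int),
    ((X[:, 1] + X[:, 2]) > 0).astype(int),
])

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                    random_state=0).fit(X, Y)
print("subset accuracy on training data:", net.score(X, Y))
```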
“…Both symbolic and artificial neural network (or connectionist) learning algorithms have been developed; however, until recently (Mooney, Shavlik, Towell, & Gove, 1989; Weiss & Kapouleas, 1989; Atlas et al., 1990; Dietterich, Hild, & Bakiri, 1990) there has been little direct comparison of these two basic approaches to machine learning. Consequently, despite the fact that symbolic and connectionist learning systems frequently address the same general problem, very little is known regarding their comparative strengths and weaknesses.…”
Section: Introduction
confidence: 99%
“…These domains have received considerable attention from connectionist researchers who employed the back propagation learning algorithm (Sejnowski & Rosenberg, 1986; Qian & Sejnowski, 1988; Towell et al., 1990). In addition, the word pronunciation problem has been the subject of a number of comparisons using other machine learning algorithms (Stanfill & Waltz, 1986; Shavlik et al., 1989; Dietterich et al., 1990). All of these domains represent problems of considerable practical importance, and all have symbolic feature values, which makes them difficult for conventional nearest neighbor algorithms.…”
Section: Instance-based Learning Versus Other Models
confidence: 99%
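The difficulty with symbolic feature values raised in the last quotation is commonly handled by replacing Euclidean distance with an overlap (mismatch-count) metric. A minimal sketch, assuming an invented three-letter-window dataset:

```python
# Sketch of nearest-neighbor classification over symbolic features
# using an overlap distance. The tiny dataset is invented.
def overlap_distance(a, b):
    """Number of positions where two symbolic feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def nn_classify(query, training):
    """Return the label of the training example closest to `query`."""
    return min(training, key=lambda ex: overlap_distance(query, ex[0]))[1]

train = [(("c", "a", "t"), "k"),
         (("c", "o", "t"), "k"),
         (("b", "a", "t"), "b")]
print(nn_classify(("b", "a", "d"), train))  # -> "b" (closest to "bat")
```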