1992
DOI: 10.1093/teamat/11.3.112

Teaching and Learning of Mathematics and its Applications: First Results from a Comparative Empirical Study in England and Germany

Abstract: We will also describe the ongoing research taking place in the next phases of the project.

Cited by 7 publications (8 citation statements) · References 0 publications

Citation statements (ordered by relevance):
“…With enough hidden units, an FFNN with at least two layers can approximate an arbitrary mapping from a finite input space to a finite output space [121,122,123]. However, for an FFNN, finding the optimum set of weights w is an NP-complete problem [124]. To train the model, there is a wide range of learning methods, such as stochastic gradient descent, adaptive delta, adaptive gradient, adaptive moment estimation, Nesterov's accelerated gradient and RMSProp.…”
Section: Feed Forward Neural Network (mentioning)
Confidence: 99%
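As a minimal sketch of the claim quoted above (not taken from the cited works): a two-layer FFNN fitted to a finite input-to-output mapping with plain stochastic gradient descent, one of the learning methods listed. The task (XOR), layer sizes and learning rate are illustrative assumptions.

```python
# Two-layer feed-forward network trained by SGD on a finite mapping (XOR).
import numpy as np

rng = np.random.default_rng(0)

# Finite input space (4 binary points) and target mapping (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layers: input(2) -> hidden(8, tanh) -> output(1, sigmoid).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    i = rng.integers(len(X))              # stochastic: one sample per step
    x, t = X[i:i+1], y[i:i+1]

    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass for squared error 0.5 * (p - t)^2.
    dp = (p - t) * p * (1 - p)            # error signal at the output layer
    dh = (dp @ W2.T) * (1 - h ** 2)       # error signal at the hidden layer

    # SGD update.
    W2 -= lr * h.T @ dp;  b2 -= lr * dp.sum(axis=0)
    W1 -= lr * x.T @ dh;  b1 -= lr * dh.sum(axis=0)

print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))  # approx. [0, 1, 1, 0]
```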
“…Unfortunately, they require a large amount of computation for their training (depending on network complexity). Finding the optimum set of weights in the case of multiple hidden layer structures is an NP-complete problem [27]. Therefore, an alternative to MLP networks with more than one hidden layer was proposed (the deep neural network), in which the layers have distinct functions, i.e., they analyse higher-level features based on the low-level features of previous layers [17,20].…”
Section: Related Work (mentioning)
Confidence: 99%
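To illustrate the layered division of function described in the statement above (again a hedged sketch, not from the cited works), the following assumes PyTorch and purely illustrative shapes: early convolutional layers extract low-level features, later layers build higher-level features from them, and a final head performs classification.

```python
# Deep network whose layers have distinct functions: feature extraction, then classification.
import torch
import torch.nn as nn

model = nn.Sequential(
    # Low-level feature extraction (edges, local patterns).
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Higher-level features built from the previous layer's outputs.
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Classifier head operating on the learned representation.
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(4, 1, 28, 28)   # e.g. a batch of 28x28 grey-scale images
print(model(x).shape)           # torch.Size([4, 10]) -- class scores
```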
“…A drawback of this approach is that the training procedure requires a large amount of data and computation power to obtain a high model accuracy. Moreover, finding the optimum set of weights in the case of a multiple hidden layer structure is an NP-complete problem [35]. Thus, a substitute for classic multiple layer perceptron networks was proposed (deep neural network), in which the network is divided into layers with specific functions [28,32].…”
Section: State-of-the-art (mentioning)
Confidence: 99%
“…Thus, a substitute for classic multiple layer perceptron networks was proposed (deep neural network), in which the network is divided into layers with specific functions [28,32]. Significant results using deep neural networks have led them to be the most commonly employed classifiers in machine learning [35,36].…”
Section: State-of-the-art (mentioning)
Confidence: 99%