2015
DOI: 10.1109/tsmc.2015.2389752

Autolanding Control Using Recurrent Wavelet Elman Neural Network

Cited by 32 publications (10 citation statements)
References 28 publications
“…Recently, with the development of deep learning, scholars have proposed neural network-based models to explore the semantic relations of Q&A texts [35], [36], which have achieved great success in measuring answer quality [3], [4], [37]. Although deep belief networks (DBNs) [38] and recursive neural networks [39] have shown some nonlinear fitting capability, the great success of convolutional neural networks (CNNs) [40]-[42] and recurrent neural networks (RNNs) [43]-[45] on various tasks completely changed the research direction. For example, Severyn and Moschitti [3] employed a CNN to generate a representation for each sentence and then used a similarity matrix to compute a relevance score.…”
Section: B. Neural Network-Based Approaches
confidence: 99%
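As a concrete illustration of the approach this excerpt attributes to Severyn and Moschitti [3], the following minimal NumPy sketch (hypothetical names and dimensions, not their actual implementation) encodes two sentences with a 1-D convolution plus max-over-time pooling and scores their match with a trainable bilinear similarity matrix:

```python
import numpy as np

def conv_encode(emb, filters):
    """1-D convolution over word embeddings followed by max-over-time pooling.
    emb: (seq_len, emb_dim) word vectors; filters: (n_filters, width, emb_dim)."""
    n_filters, width, _ = filters.shape
    seq_len = emb.shape[0]
    feats = np.empty((seq_len - width + 1, n_filters))
    for t in range(seq_len - width + 1):
        window = emb[t:t + width]  # (width, emb_dim) slice of the sentence
        feats[t] = np.tanh(np.tensordot(filters, window, axes=([1, 2], [0, 1])))
    return feats.max(axis=0)       # max pooling -> fixed-size vector (n_filters,)

rng = np.random.default_rng(0)
emb_dim, n_filters, width = 50, 100, 3
filters = rng.normal(0, 0.1, (n_filters, width, emb_dim))
M = rng.normal(0, 0.1, (n_filters, n_filters))  # similarity matrix (learned in training)

q = rng.normal(size=(8, emb_dim))   # stand-in question embeddings (8 tokens)
a = rng.normal(size=(12, emb_dim))  # stand-in answer embeddings (12 tokens)

x_q, x_a = conv_encode(q, filters), conv_encode(a, filters)
score = x_q @ M @ x_a               # bilinear relevance score x_q^T M x_a
print(score)
```

In the actual model the filters and M would be learned jointly with the rest of the network; the sketch only shows how a single relevance score falls out of the two pooled sentence representations.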
“…These connections ensure that the context units always maintain a copy of the previous values of the hidden units. Thus, the network can retain its past state, which is useful for applications such as sequence prediction [44]-[46]. In Figure 6, there are 46 neurons in the hidden layer.…”
Section: ENN
confidence: 99%
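To make the copy-back mechanism concrete, here is a minimal Elman-style forward pass (a generic sketch under the description above, not the cited paper's implementation; the 46-unit hidden layer follows the figure the excerpt mentions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanNetwork:
    """Minimal Elman network: context units hold a copy of the previous
    hidden activations and feed back into the hidden layer each step."""

    def __init__(self, n_in, n_hidden=46, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))       # input -> hidden
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)                      # context units

    def step(self, x):
        # Hidden layer sees the current input plus the copied-back state.
        h = sigmoid(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h.copy()  # fixed one-to-one copy-back connection
        return sigmoid(self.W_out @ h)

net = ElmanNetwork(n_in=4)
sequence = np.random.default_rng(1).normal(size=(10, 4))
outputs = [net.step(x) for x in sequence]  # state carries across the sequence
```

The copy happens outside the trainable weights, which is exactly why the network can keep past state while still being trained with ordinary backpropagation.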
“…The fixed back connections mean that the context units always maintain a copy of the previous values of the hidden units (since they propagate over these connections before the BP learning rule is applied). Thus, the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron [18,19]. In Figure 3, the hidden layer contains 46 neurons, the sigmoid activation f(x) = 1/(1 + e^(-x)) is used in each layer, and the numbers of neurons in the input and output layers are determined by the respective vector dimensions.…”
Section: Waveform Classifier
confidence: 99%
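Building on the sketch above (and reusing its ElmanNetwork class), the waveform-classifier configuration described in this excerpt derives the input and output layer widths from the data, while the hidden layer stays at 46 units. The feature dimension and class count below are hypothetical stand-ins for the paper's actual vectors:

```python
import numpy as np

# Hypothetical dimensions: input width from 24-D feature vectors,
# output width from an assumed 5 waveform classes; hidden layer fixed at 46.
features = np.random.default_rng(2).normal(size=(100, 24))
clf = ElmanNetwork(n_in=features.shape[1], n_hidden=46, n_out=5)
scores = [clf.step(x) for x in features]  # sigmoid outputs in (0, 1) per step
```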