2020 5th International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA)
DOI: 10.1109/citisia50690.2020.9371770
RNN-CNN MODEL: A Bi-directional Long Short-Term Memory Deep Learning Network For Story Point Estimation

Cited by 13 publications (7 citation statements). References 21 publications.
“…This renders the Bi-LSTM model very effective and efficient at handling sequential dependencies between words and phrases in both directions of the sequence [23]. Marapelli et al. [18] proposed the RNN-CNN model, a Bidirectional LSTM deep learning network that estimates story point effort for user stories from a historical dataset of user story descriptions and their estimated story point effort; they observed that the Bi-LSTM's forward and backward feature learning preserved the sequence information, which enabled the CNN component to extract features more accurately.…”
Section: Long Short-Term Memory (LSTM)
Mentioning confidence: 99%
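
A minimal PyTorch sketch of the quoted architecture, a Bi-LSTM feeding a 1-D CNN for story point regression, may help make it concrete; the vocabulary size, layer widths, and kernel size below are illustrative assumptions, not the configuration reported by Marapelli et al. [18].

import torch
import torch.nn as nn

class BiLSTMCNNRegressor(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, lstm_hidden=64, conv_filters=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The Bi-LSTM reads the user-story tokens forward and backward,
        # preserving sequence context in both directions.
        self.bilstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True, bidirectional=True)
        # A 1-D convolution over the Bi-LSTM outputs extracts local n-gram features.
        self.conv = nn.Conv1d(2 * lstm_hidden, conv_filters, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Linear(conv_filters, 1)  # single regression output: story points

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        x, _ = self.bilstm(x)                # (batch, seq_len, 2 * lstm_hidden)
        x = self.conv(x.transpose(1, 2))     # (batch, conv_filters, seq_len)
        x = self.pool(x).squeeze(-1)         # (batch, conv_filters)
        return self.head(x).squeeze(-1)      # (batch,)

model = BiLSTMCNNRegressor()
dummy = torch.randint(0, 10000, (4, 50))     # 4 user stories, 50 tokens each
print(model(dummy).shape)                    # torch.Size([4])
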
“…The output from the previous layer is fed to one or more fully connected layers to produce the classification. The convolutional neural network, although primarily targeted at image processing, has been effectively exploited for text classification, as illustrated by [18] and [19]. Ochodek et al. [17] observed that the CNN model yields better accuracy when estimating COSMIC size from use-case names than the RNN and CNN-RNN approaches.…”
Section: Convolutional Neural Network (CNN)
Mentioning confidence: 99%
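
The text-classification use of CNNs described in the quote can be sketched in a few lines of PyTorch; the kernel sizes, filter counts, and number of classes are illustrative assumptions rather than the settings used in [17], [18], or [19].

import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Parallel convolutions with different kernel sizes capture
        # 2-gram, 3-gram, and 4-gram features of the token sequence.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 64, kernel_size=k) for k in (2, 3, 4)]
        )
        # A fully connected layer maps the pooled features to class scores.
        self.fc = nn.Linear(3 * 64, num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]  # max over time
        return self.fc(torch.cat(pooled, dim=1))     # (batch, num_classes)

logits = TextCNN()(torch.randint(0, 10000, (4, 50)))  # (4, 5) class scores
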
“…as well as machine learning algorithms like fuzzy [43,44], DNN [45,46], LSTM [47], and ELM [48,49], in terms of various error measures like MEP and SMAPE. Learning rate is a hyper-parameter that controls how much we adjust the weights of the network with respect to the loss gradient.…”
Section: Experimental Evaluation
Mentioning confidence: 99%
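
The quoted definition of the learning rate can be illustrated with a toy gradient-descent step; the quadratic loss f(w) = (w - 3)^2 is a made-up example, not drawn from the cited work.

# One gradient-descent step on f(w) = (w - 3)^2: the learning rate
# scales how far the weight moves along the negative gradient.
def step(w, lr):
    grad = 2 * (w - 3)        # df/dw
    return w - lr * grad      # weight update scaled by the learning rate

w = 0.0
for lr in (0.1, 0.5):
    print(lr, step(w, lr))    # lr=0.1 -> 0.6 (small step); lr=0.5 -> 3.0 (reaches the minimum)
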
“…Gating in RNNs is a technique for solving the vanishing gradient problem. Long short-term memory (LSTM) and gated recurrent unit (GRU) are two well-known gating techniques [7], [35]-[37]. The convolutional neural network (CNN) is a type of multi-layer neural network.…”
Section: Introduction
Mentioning confidence: 99%
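
As a concrete picture of the gating the quote refers to, here is a NumPy sketch of one GRU step under the standard gate equations; the dimensions and weights are random placeholders, not tied to any cited model.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Wr, Wh, Uz, Ur, Uh):
    z = sigmoid(Wz @ x + Uz @ h)              # update gate: blend of new vs. old state
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate: how much past state to expose
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    # When z is near 0 the old state is copied through almost unchanged,
    # which is what lets gradients survive over long sequences.
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(0)
d, n = 4, 8                                    # input and hidden sizes (illustrative)
Wz, Wr, Wh = (rng.normal(size=(n, d)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(size=(n, n)) for _ in range(3))
h = gru_step(rng.normal(size=d), np.zeros(n), Wz, Wr, Wh, Uz, Ur, Uh)
print(h.shape)                                 # (8,)
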