2017
DOI: 10.1186/s40537-017-0065-8
Improving deep neural network design with new text data representations

Cited by 57 publications (32 citation statements)
References 16 publications
“…Thus, the parameter settings used were strongly influenced by the characteristics of the data set, such as its number of rows. In addition to the parameter settings, accuracy was also influenced by the CNN architecture used: previous studies that applied a more complex CNN architecture to the Twitter data set obtained better results [16], [17].…”
Section: Results (mentioning)
confidence: 86%
“…Although Parameters Setting III and Parameters Setting IV have the same parameter values, they were not able to give the proposed study (indicated by bold numbers) the highest accuracy (79.3%, 81.10%, 80.67%). The higher accuracies are (82.7%, 86.8%) [4], (82.3%) [17], and (88.3%) [16] (indicated by numbers in italics), as shown in Table IX. This shows that although the Parameters Setting III and IV values produced better accuracy on the movie review data set, the same parameter settings were not able to produce better accuracy when applied to the Twitter data set.…”
Section: Results (mentioning)
confidence: 92%
“…Dropout randomly ignores neural connections during training, which significantly reduces overfitting, often yielding major improvements compared with other regularization methods. 77 The grid search ranges for the MLP hyperparameters were set to: 78 Like Mahendhiran et al., 73 who used a fixed number of epochs (e.g., 100) for each experiment, this hyperparameter value was set to 100.…”
Section: Multilayer Perceptron (mentioning)
confidence: 99%
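The dropout behavior described in the excerpt above (randomly ignoring connections during training) can be sketched in a few lines of NumPy. This is a minimal illustration of the standard "inverted dropout" formulation, not the implementation used in the cited study; the keep probability, seed, and activation shapes are illustrative assumptions.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, zero each activation with
    probability p and scale the survivors by 1/(1-p) so the expected
    activation matches inference, where the layer is an identity."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)  # seeded here for reproducibility
    mask = rng.random(x.shape) >= p        # keep each unit with prob 1-p
    return x * mask / (1.0 - p)

# At p=0.5, each surviving activation of 1.0 is rescaled to 2.0.
activations = np.ones((4, 8))
dropped = dropout(activations, p=0.5)
```

Because the mask is resampled on every forward pass, each mini-batch effectively trains a different thinned sub-network, which is the source of dropout's regularizing effect.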
“…They use a combination of text mining and machine-learning techniques to identify ideas hidden in large amounts of text. Prusa and Khoshgoftaar (2017) propose a new method of creating character-level representations of text to reduce the computational cost of training a deep convolutional network. Further, they show that the proposed embedding can be used with padded convolutional layers, enabling the use of current convolutional network architectures while still facilitating faster training and higher performance than the previous approach to learning from character-level representations.…”
Section: Literature Review (mentioning)
confidence: 99%
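The character-level representations discussed in this excerpt can be illustrated with a minimal one-hot encoding sketch. This shows the classic fixed-alphabet scheme that Prusa and Khoshgoftaar aim to make cheaper, not their proposed embedding itself; the alphabet, sequence length, and function names below are illustrative assumptions.

```python
import numpy as np

# Illustrative alphabet: lowercase letters, digits, and space.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}

def char_one_hot(text, max_len=16):
    """Encode a string as a (max_len, |alphabet|) one-hot matrix.
    The text is truncated or zero-padded to max_len rows, and any
    character outside the alphabet maps to an all-zero row."""
    mat = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for i, ch in enumerate(text.lower()[:max_len]):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:
            mat[i, idx] = 1.0
    return mat

encoded = char_one_hot("deep nets")  # shape (16, 37), one 1.0 per character
```

The matrix grows linearly with both alphabet size and sequence length, which is why dense character-level inputs are expensive to convolve over and why more compact representations can speed up training.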