2019
DOI: 10.1007/978-981-13-5953-8_43
Automatic Bengali Document Categorization Based on Deep Convolution Nets

Cited by 16 publications (3 citation statements)
References 10 publications
“…Bangyal et al. [112], [122] employed a deep convolutional neural network (DCNN) for both feature extraction and relation classification. The DCNN model comprises three convolution layers with 16, 32, and 64 filters respectively, each followed by an activation and a max-pooling layer, then two fully connected layers with 512 neurons each and a softmax layer that performs the classification using the sigmoid activation [121]; the DCNN architecture is shown in Figure 18. Each word in the text is passed through the look-up table (W_{v·d}) to form the input document representation matrix of size n × d, where n is the document's word count and d is its feature dimension.…”
Section: Relation Discovery In Ontology Learning Involves Attribute O...mentioning
confidence: 99%
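The architecture the citation describes (three convolution blocks with 16, 32, and 64 filters, each followed by activation and max pooling, then two 512-neuron fully connected layers and a softmax output over an n × d lookup-table input) can be sketched as a forward pass in plain NumPy. The filter counts and 512-neuron layers follow the quoted description; everything else (vocabulary size, embedding dimension d, document length n, kernel width 3, number of classes, random weights) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution over a sequence: x is (n, d_in), w is (k, d_in, d_out)."""
    k, d_in, d_out = w.shape
    n = x.shape[0] - k + 1
    out = np.empty((n, d_out))
    for i in range(n):
        out[i] = np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling along the sequence axis."""
    n = x.shape[0] // size
    return x[:n * size].reshape(n, size, -1).max(axis=1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: vocabulary 1000, embedding dim d=50, n=64 words, 4 classes.
V, d, n, n_classes = 1000, 50, 64, 4
W_lookup = rng.normal(size=(V, d))          # look-up table W (v x d)
doc = rng.integers(0, V, size=n)            # word indices of one document
x = W_lookup[doc]                           # input representation, shape (n, d)

# Three convolution blocks with 16, 32, and 64 filters (kernel width 3),
# each followed by ReLU activation and max pooling, per the citation.
dims = [d, 16, 32, 64]
for d_in, d_out in zip(dims, dims[1:]):
    w = rng.normal(scale=0.1, size=(3, d_in, d_out))
    x = max_pool(relu(conv1d(x, w)))

# Two fully connected layers with 512 neurons each, then a softmax output.
h = relu(x.reshape(-1) @ rng.normal(scale=0.01, size=(x.size, 512)))
h = relu(h @ rng.normal(scale=0.01, size=(512, 512)))
probs = softmax(h @ rng.normal(scale=0.01, size=(512, n_classes)))
```

With random weights the output is meaningless, but the shapes trace the described pipeline: (64, 50) input shrinks through the three conv/pool blocks to (6, 64) before flattening into the dense layers.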
“…Each word in the text is passed through the look-up table (W_{v·d}) to form the input document representation matrix of size n × d, where n is the document's word count and d is its feature dimension. The convolution layer then learns filter weights that extract local features from this input tensor [121]. Furthermore, ReLU is used in deep convolutional neural networks (DCNNs) to introduce nonlinearity, simplify gradient computation, mitigate the vanishing gradient problem, and reduce computational time complexity [123].…”
Section: Relation Discovery In Ontology Learning Involves Attribute O...mentioning
confidence: 99%
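The vanishing-gradient point above can be illustrated numerically: the sigmoid's derivative shrinks toward zero for inputs of large magnitude, while ReLU's derivative is exactly 1 for any positive input. This is a generic illustration of that property, not code from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    # Derivative of max(0, z): 1 where z > 0, else 0.
    return (z > 0).astype(float)

z = np.array([-4.0, -1.0, 0.5, 4.0])
# Sigmoid gradients decay toward 0 as |z| grows, so stacked sigmoid
# layers multiply many small factors (vanishing gradients); ReLU
# passes a gradient of 1 unchanged wherever the unit is active.
print(sigmoid_grad(z))   # small values at z = -4 and z = 4
print(relu_grad(z))      # [0. 0. 1. 1.]
```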
“…The intrinsic and extrinsic evaluations are used for evaluating the embedding model. Intrinsic evaluators assess semantic, syntactic, and relatedness quality, whereas extrinsic evaluators assess performance on downstream tasks, e.g., classification [14], machine translation [61], word-sense disambiguation [62], and question answering [63]. Spearman (ρ) and Pearson (r) correlations are used for intrinsic evaluation such as semantic word similarity (S_sρ/S_sr) and syntactic word similarity (S_yρ/S_yr).…”
Section: A Evaluation Measuresmentioning
confidence: 99%
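The intrinsic evaluation the citation mentions boils down to correlating model similarity scores with human ratings over a set of word pairs. A minimal NumPy sketch of both correlations (Spearman computed as Pearson over ranks, which is valid when there are no tied values) is below; the rating and similarity numbers are invented for illustration, not taken from any benchmark.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation r between two equal-length sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    """Spearman rho: Pearson correlation of the ranks (assumes no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(a)), rank(np.asarray(b)))

# Hypothetical human similarity ratings vs. model cosine similarities
# for five word pairs (illustrative numbers only).
human = [9.0, 7.5, 6.0, 3.0, 1.0]
model = [0.85, 0.70, 0.72, 0.30, 0.05]

print(round(pearson(human, model), 3))   # linear agreement on raw scores
print(round(spearman(human, model), 3))  # agreement on rank ordering
```

Pearson rewards linear agreement on the raw scores, while Spearman only cares whether the model orders the pairs the same way the human raters do, which is why word-similarity benchmarks typically report both.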