2019
DOI: 10.1109/access.2019.2902640
Deep Neural Network Regularization for Feature Selection in Learning-to-Rank

Abstract: Learning-to-rank is an emerging area of research for a wide range of applications. Many algorithms are devised to tackle the problem of learning-to-rank. However, very few existing algorithms deal with deep learning. Previous research depicts that deep learning makes significant improvements in a variety of applications. The proposed model makes use of the deep neural network for learning-to-rank for document retrieval. It employs a regularization technique particularly suited for the deep neural network to im…

Cited by 32 publications (17 citation statements)
References 42 publications
“…Existing studies [32], [33], [37]- [39] have proven that semantic similarities among classes are consistent with the visual features. This notion indicates that if two semantic data belonging to two different classes are similar, then the visual instances should also be similar to each other.…”
Section: Similar Sample Selection
Confidence: 83%
“…EGRank [18] uses exponentiated gradient updates to solve a convex optimization problem on a sparsity-promoting ℓ1 constraint and a pairwise ranking loss. In [53], a deep neural LTR model was provided. The authors used group ℓ1 regularization to optimize the weights of a neural network, select the relevant features with active neurons at the input layer, and remove inactive neurons from hidden layers.…”
Section: Feature Selection Methods for LTR
Confidence: 99%
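The group ℓ1 mechanism this statement describes can be sketched as follows. This is a minimal illustration under assumed names and a toy weight matrix, not code from the cited paper: each input feature's outgoing first-layer weights form one group, the penalty sums the groups' L2 norms, and features whose whole group is driven to zero are deselected.

```python
import numpy as np

def group_l1_penalty(W, lam):
    # W: first-layer weight matrix, shape (n_features, n_hidden).
    # One group = all outgoing weights of one input feature (a row of W).
    # Penalty = lam * sum of row-wise L2 norms, which pushes entire
    # rows (i.e., whole features) toward zero rather than single weights.
    return lam * np.sum(np.linalg.norm(W, axis=1))

def active_features(W, tol=1e-6):
    # A feature stays "selected" while any of its outgoing weights survive.
    return np.where(np.linalg.norm(W, axis=1) > tol)[0]

W = np.array([[0.5, -0.2],   # feature 0: active
              [0.0,  0.0],   # feature 1: its group was zeroed out
              [0.1,  0.3]])  # feature 2: active
print(active_features(W))    # indices of the selected input features
```

In training, the penalty would be added to the ranking loss before backpropagation; the hard thresholding in `active_features` stands in for the paper's removal of inactive input neurons.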
“…The L1 regularization method aims at minimizing the neural network's weights with respect to the λ1 coefficient, representing a feature selection mechanism. As the input of the proposed model consists of a significant number of categorical variables, leading to a degree of sparsity in the data, L1 regularization may reduce the weights associated with insignificant features [54]. Described by the sum of squared weights, the L2 regularization technique prevents overfitting during training by hindering the weights from reaching high values when that is not necessary [55].…”
Section: Convolutional Neural Network (CNN) Training and Performance Evaluation
Confidence: 99%
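The two penalties contrasted in this statement differ only in the norm applied to the weights. A minimal sketch, with illustrative function names and a toy weight list rather than the cited model's actual layers:

```python
import numpy as np

def l1_penalty(weights, lam1):
    # L1: lam1 * sum |w| -- its constant gradient pushes weights of
    # uninformative features to exactly zero (feature selection effect).
    return lam1 * sum(np.abs(W).sum() for W in weights)

def l2_penalty(weights, lam2):
    # L2: lam2 * sum w^2 -- its gradient shrinks large weights most,
    # discouraging extreme values without zeroing them (anti-overfitting).
    return lam2 * sum((W ** 2).sum() for W in weights)

weights = [np.array([[1.0, -2.0],
                     [0.0,  0.5]])]
print(l1_penalty(weights, 0.01))  # 0.01 * (1 + 2 + 0 + 0.5) = 0.035
print(l2_penalty(weights, 0.01))  # 0.01 * (1 + 4 + 0 + 0.25) = 0.0525
```

Either value would simply be added to the training loss; λ1 and λ2 trade off sparsity and weight shrinkage, respectively, against the fit to the data.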