2017
DOI: 10.1093/bioinformatics/btx226

Rectified factor networks for biclustering of omics data

Abstract: Motivation: Biclustering has become a major tool for analyzing large datasets given as a matrix of samples × features and has been successfully applied in life sciences and e-commerce for drug design and recommender systems, respectively. Factor Analysis for Bicluster Acquisition (FABIA), one of the most successful biclustering methods, is a generative model that represents each bicluster by two sparse membership vectors: one for the samples and one for the features. However, FABIA is restricted to about 20 co…

Cited by 187 publications (218 citation statements)
References 29 publications
“…More specifically, we investigated the role of the learning rate, minibatch size, class weight (explained in Section 3.2), and the use of the Exponential Linear Unit (ELU) [45] activation function compared with the Rectified Linear Unit (ReLU) preceded by Batch Normalization [46] (as discussed by Mishkin et al. in [47]). Regarding the minibatch size, for some experiments we used batch training (the whole dataset is seen in each update) instead of minibatch training (different data chunks per update).…”
Section: Methods (mentioning)
confidence: 99%
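The statement above contrasts ELU activations with ReLU preceded by Batch Normalization, and full-batch with minibatch training. A minimal PyTorch sketch of the two hidden-layer variants and the two batching regimes; layer sizes, batch sizes, and the toy data are illustrative assumptions, not taken from the citing paper:

```python
# Hedged sketch (not the citing paper's code): ELU activations vs. ReLU
# preceded by BatchNorm, plus full-batch vs. minibatch training via batch size.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

n_features, n_hidden, n_classes = 100, 64, 2  # assumed sizes

# Variant A: ELU activation, no normalization layer.
elu_block = nn.Sequential(
    nn.Linear(n_features, n_hidden),
    nn.ELU(),
    nn.Linear(n_hidden, n_classes),
)

# Variant B: ReLU preceded by Batch Normalization.
bn_relu_block = nn.Sequential(
    nn.Linear(n_features, n_hidden),
    nn.BatchNorm1d(n_hidden),
    nn.ReLU(),
    nn.Linear(n_hidden, n_classes),
)

# Toy data; the class weights mentioned in the statement would be passed
# to the loss, e.g. nn.CrossEntropyLoss(weight=...).
X, y = torch.randn(512, n_features), torch.randint(0, n_classes, (512,))
dataset = TensorDataset(X, y)

minibatch_loader = DataLoader(dataset, batch_size=32, shuffle=True)  # minibatch training
fullbatch_loader = DataLoader(dataset, batch_size=len(dataset))      # "batch" training: all data per update
```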
“…The first group is related to ReLU, i.e. LeakyReLU [24] and Very Leaky ReLU [25], RReLU [26], PReLU [27] and its generalized version, APL [28], and ELU [29]. Others are based on different ideas, e.g.…”
Section: Activation Functions, 3.1.1 Previous Work (mentioning)
confidence: 99%
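For reference, the ReLU-family activations listed in this statement are all available as standard PyTorch modules; a minimal sketch follows. The "Very Leaky" slope of 0.33 is an assumed illustrative value, and APL is omitted because it has no built-in module:

```python
# Hedged sketch: ReLU-family activations named in the statement above,
# instantiated with PyTorch's built-in modules.
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, steps=7)

activations = {
    "ReLU":            nn.ReLU(),
    "LeakyReLU":       nn.LeakyReLU(negative_slope=0.01),
    "Very Leaky ReLU": nn.LeakyReLU(negative_slope=0.33),  # assumed slope
    "RReLU":           nn.RReLU(lower=1/8, upper=1/3),     # slope sampled randomly at train time
    "PReLU":           nn.PReLU(),                         # slope is a learned parameter
    "ELU":             nn.ELU(alpha=1.0),                  # smooth, saturates to -alpha for x << 0
}

for name, act in activations.items():
    print(f"{name:16s}", act(x))
```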
“…The activation functions of the NNs are chosen such that the NN is able to produce physically reasonable outputs. The ELU function [75] used throughout the hidden layers is continuously differentiable, so that the loading surface predicted by the NN always has continuous derivatives. The predicted fractional loadings of both components are produced by the sigmoid function in the output layer, which ensures that the fractional loadings always satisfy 0 ≤ ŷ_i ≤ 1, i.e.…”
Section: Methods (mentioning)
confidence: 99%
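A minimal sketch of the kind of architecture this statement describes, assuming illustrative layer widths and input size: ELU in the hidden layers keeps the predicted loading surface continuously differentiable, and a sigmoid output layer guarantees each predicted fractional loading lies in [0, 1]:

```python
# Hedged sketch (sizes are assumptions, not from the cited work): ELU hidden
# layers with a sigmoid output bounding each fractional loading to [0, 1].
import torch
import torch.nn as nn

n_inputs, n_hidden, n_components = 4, 32, 2  # assumed sizes; 2 adsorbed components

loading_net = nn.Sequential(
    nn.Linear(n_inputs, n_hidden),
    nn.ELU(),                           # continuously differentiable activation
    nn.Linear(n_hidden, n_hidden),
    nn.ELU(),
    nn.Linear(n_hidden, n_components),
    nn.Sigmoid(),                       # guarantees 0 <= y_hat_i <= 1 per component
)

y_hat = loading_net(torch.rand(8, n_inputs))
assert torch.all((y_hat >= 0) & (y_hat <= 1))
```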