2024
DOI: 10.1186/s13677-024-00612-0
RNA-RBP interactions recognition using multi-label learning and feature attention allocation

Huirui Han,
Bandeh Ali Talpur,
Wei Liu
et al.

Abstract: In this study, we present a sophisticated multi-label deep learning framework for the prediction of RNA-RBP (RNA-binding protein) interactions, a critical aspect in understanding RNA functionality modulation and its implications in disease pathogenesis. Our approach leverages machine learning to develop a rapid and cost-efficient predictive model for these interactions. The proposed model captures the complex characteristics of RNA and recognizes corresponding RBPs through its dual-module architecture. The fir…

Cited by 1 publication (1 citation statement)
References: 35 publications
“…A popular alternative representation is using k-mers, whereby the sequence is broken up into overlapping segments of length k, before often being collapsed into a vector of counts for each possible sequence. On the whole, there is a trend towards using wider sequence contexts around positions of interest, with some even including full transcript sequences [33,56], although note that with larger models there may be a trade-off between the maximum input sequence length and the availability of GPU capacity for training. Interestingly, recent deep learning models for RBPs appear to show that sequence alone can achieve high performance scores for determining the binding status for a large number of RBPs [29,34,43,57], the locations of m6A sites [33], and A-to-I editing sites [51].…”
Section: Features and Model Architecture
confidence: 99%
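The k-mer representation described in the citation statement — splitting a sequence into overlapping length-k segments and collapsing them into a count vector over all possible k-mers — can be sketched as follows. This is a generic illustration of the technique, not code from the cited paper; the function name and the RNA alphabet choice are assumptions.

```python
from collections import Counter
from itertools import product

def kmer_count_vector(seq, k=3, alphabet="ACGU"):
    """Count overlapping k-mers in a sequence and return a fixed-length
    vector with one entry for each of the |alphabet|**k possible k-mers."""
    # Slide a window of length k across the sequence; adjacent windows overlap.
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    # Enumerate all possible k-mers in a fixed lexicographic order so every
    # sequence maps to a vector of the same dimensionality.
    return [counts["".join(p)] for p in product(alphabet, repeat=k)]

vec = kmer_count_vector("ACGUACGU", k=2)
# A length-8 sequence yields 7 overlapping 2-mers, spread over the
# 4**2 = 16 possible dimers.
```

Note how the vector length grows exponentially in k (4**k for RNA), which is one reason wider sequence contexts are often handled by neural encoders rather than raw k-mer counts.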