Deep recurrent neural networks (DRNNs) have recently demonstrated strong performance in sequential data analysis, such as natural language processing. These capabilities make them a promising tool for inferential analysis of sequentially structured bioinformatics data as well. Here, we assessed the ability of Long Short-Term Memory (LSTM) networks, a class of DRNNs, to predict properties of proteins from their primary structures. The proposed architecture is trained and tested on two different datasets to predict whether a given sequence falls into a certain class. The first dataset, imported directly from UniProt, was used to train the network to decide whether a given protein contains a conserved sequence (the homeodomain); the second dataset, derived by literature mining, was used to train a network to decide whether a given protein binds Artemisinin, a drug typically used to treat malaria. In each case, the model differentiated between the two classes of sequences with high accuracy, illustrating successful learning and generalization. Upon completion of training, ROC curves were computed on the homeodomain and Artemisinin validation datasets, yielding AUCs of 0.80 and 0.87 respectively, further indicating the models' effectiveness. Furthermore, using these trained models, it was possible to derive a protocol for sequence detection of the homeodomain binding motif, which is well documented in the literature, and of a known Artemisinin binding site, respectively [1][2][3]. Along with these contributions, we developed a Python API to connect directly to UniProt for data sourcing, train deep neural networks on this primary-sequence data using TensorFlow, and uniquely visualize the results of this analysis. Such an approach has the potential to drastically increase accuracy and reduce computational time, two current major limitations in informatics, from inquiry to discovery in protein function research.
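To make the described setup concrete, the following is a minimal sketch (not the authors' code) of an LSTM binary classifier over protein primary sequences in TensorFlow/Keras, of the kind the abstract describes. The vocabulary, maximum length, embedding and LSTM dimensions, and the `encode` helper are illustrative assumptions rather than the published architecture.

```python
import tensorflow as tf

# Assumed tokenization: the 20 standard amino acids, index 0 reserved for padding.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MAX_LEN = 500  # assumed cap on sequence length after padding/truncation

def encode(seq: str) -> list[int]:
    """Map a primary sequence to integer tokens (unknown residues -> 0)."""
    lookup = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}
    return [lookup.get(aa, 0) for aa in seq[:MAX_LEN]]

# Binary classifier: does the sequence contain the motif / bind the drug?
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(AMINO_ACIDS) + 1,
                              output_dim=32, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# x: padded integer-encoded sequences, y: 0/1 labels
# (e.g. homeodomain-containing vs. not, or Artemisinin-binding vs. not)
# model.fit(x, y, validation_split=0.2, epochs=10)
```

The AUC metric in the compile step mirrors the ROC/AUC evaluation reported in the abstract; the exact hyperparameters and data pipeline would follow the authors' UniProt-backed API.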
One goal of general intelligence is to learn novel information without overwriting prior learning. The utility of learning without catastrophic forgetting (CF) is twofold: first, the system can return to previously learned tasks after learning something new; in addition, bootstrapping previous knowledge may allow faster learning of a novel task. Previous approaches to CF and bootstrapping primarily modify learning by changing weights to tune the model to the current task, overwriting weights tuned on previous tasks. However, another critical factor that has been largely overlooked is the initial network topology, or architecture. Here, we argue that the topology of biological brains likely evolved features designed to achieve this kind of informational conservation. In particular, we consider that the highly conserved property of modularity may offer a solution for weight-update learning methods that satisfies both the learning-without-catastrophic-forgetting and bootstrapping constraints. Final considerations are then made on how to combine these two learning objectives in a dynamical, general learning system.