The technology of docking molecules in silico has evolved significantly in recent years and has become a crucial component of the drug discovery process, encompassing virtual screening, lead optimization, and side-effect prediction. To date, over 43,000 abstracts/papers have been published on docking, highlighting the importance of this computational approach in drug development. Given the large number of genomic and proteomic consortia active in the public domain, docking can exploit these data on a correspondingly large scale to address a variety of research questions. Over 160 robust and accurate molecular docking tools based on different algorithms have been made available to users worldwide, and 109 scoring functions have been reported in the literature to date. Despite these advancements, several bottlenecks persist at the implementation stage. These issues range from choosing the right docking algorithm and selecting a binding site in the target protein to the performance of a given docking tool, integration of molecular dynamics information, ligand-induced conformational changes, treatment of solvent molecules, choice of docking pose, and choice of databases. Moreover, experimental studies have not always been used to validate docking results. In this review, the basic features and key concepts of docking are highlighted, with particular emphasis on applications such as drug repositioning and prediction of side effects. The use of docking in conjunction with wet-lab experimentation and epitope prediction is also summarized. We systematically address the above-mentioned challenges using expert curation and text-mining strategies. Our work demonstrates the use of machine-assisted literature mining to process and analyze large amounts of available information in a short time frame. We also propose a platform that combines human expertise (deep curation) and machine learning in a collaborative way, helping to solve ambitious problems such as building fast, efficient docking pipelines by combining the best tools, or performing large-scale docking at the human proteome level.
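As a minimal illustration of the machine-assisted literature mining described above, the sketch below queries PubMed for docking-related records through NCBI's E-utilities via Biopython's Entrez module and retrieves a few abstracts for downstream processing. The search term, email placeholder, and record limits are illustrative assumptions, not the actual query or pipeline used in this work.

from Bio import Entrez

Entrez.email = "your.email@example.com"  # placeholder; NCBI requires a contact email

# Search PubMed for docking-related records and report the total count.
# The query string here is an assumption chosen for demonstration only.
handle = Entrez.esearch(db="pubmed", term='"molecular docking"[Title/Abstract]', retmax=20)
record = Entrez.read(handle)
handle.close()

print("Total records matching the query:", record["Count"])
print("First PubMed IDs:", record["IdList"][:5])

# Fetch plain-text abstracts for the first few hits, which could then be fed
# into an expert-curation or text-mining workflow.
if record["IdList"]:
    fetch = Entrez.efetch(db="pubmed", id=record["IdList"][:5],
                          rettype="abstract", retmode="text")
    abstracts = fetch.read()
    fetch.close()
    print(abstracts[:500])  # preview of the retrieved text

A sketch like this only gathers the raw corpus; the curation and classification steps described in the review would operate on the retrieved abstracts afterwards.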