In today's fast-growing networked world, nearly every type of digital information is communicated over the internet. This demands an effective, sensitive, and time-efficient encryption system that is secure against unauthorized access. Conventional encryption systems have played a central role in modern cryptography, since digital images must be encrypted before transmission over the network. We therefore propose an enhanced approach that combines image compression by the Discrete Cosine Transform (DCT), encryption and decryption by pixel shuffling, and steganography by double image hiding, to strengthen the security of the previous approach.
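The abstract does not specify the exact shuffling scheme, but the core idea of pixel-shuffling encryption can be sketched as a key-seeded permutation of pixel positions; decryption applies the inverse permutation with the same key. This is a minimal illustration (the function names, the use of NumPy, and seeding a PRNG with an integer key are assumptions, not the paper's method):

```python
import numpy as np

def shuffle_pixels(image: np.ndarray, key: int) -> np.ndarray:
    """Encrypt an image by permuting pixel positions with a key-seeded PRNG."""
    rng = np.random.default_rng(key)
    flat = image.ravel()
    perm = rng.permutation(flat.shape[0])
    return flat[perm].reshape(image.shape)

def unshuffle_pixels(cipher: np.ndarray, key: int) -> np.ndarray:
    """Decrypt by rebuilding the same permutation and inverting it."""
    rng = np.random.default_rng(key)
    flat = cipher.ravel()
    perm = rng.permutation(flat.shape[0])
    inverse = np.empty_like(perm)
    inverse[perm] = np.arange(perm.shape[0])  # invert the permutation
    return flat[inverse].reshape(cipher.shape)
```

A correct key recovers the original image exactly, while the ciphertext itself preserves the pixel histogram but destroys spatial structure, which is why such schemes are typically combined with a transform stage such as DCT-based compression.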
The technology of docking molecules in silico has evolved significantly in recent years and has become a crucial component of the drug discovery pipeline, which includes virtual screening, lead optimization, and side-effect prediction. To date, over 43,000 abstracts/papers have been published on docking, highlighting the importance of this computational approach to drug development. Given the large number of genomic and proteomic consortia active in the public domain, docking can exploit these data on a correspondingly 'large scale' to address a variety of research questions. Over 160 robust and accurate molecular docking tools based on different algorithms have been made available to users across the world, and 109 scoring functions have been reported in the literature to date. Despite these advancements, several bottlenecks remain at the implementation stage. These issues include choosing the right docking algorithm, selecting a binding site in target proteins, assessing the performance of a given docking tool, integrating molecular dynamics information, handling ligand-induced conformational changes, treating solvent molecules, choosing a docking pose, and choosing databases. Moreover, docking results have not always been validated by experimental studies. In this review, the basic features and key concepts of docking are highlighted, with particular emphasis on applications such as drug repositioning and prediction of side effects. The use of docking in conjunction with wet-lab experiments and epitope prediction is also summarized. We attempt to systematically address the above-mentioned challenges using expert curation and text-mining strategies. Our work demonstrates the use of machine-assisted literature mining to process and analyze large amounts of available information in a short time frame.
With this work, we also propose to build a platform that combines human expertise (deep curation) and machine learning in a collaborative way, helping to solve ambitious problems (e.g. building fast, efficient docking systems by combining the best tools, or performing large-scale docking at the human proteome level).
We describe a qualitative user study conducted with 64 people living with HIV/AIDS (PLHA) in India, recruited from private-sector clinics. Our aim was to investigate information gaps, problems, and opportunities for the design of relevant technology solutions to support HIV treatment. Our methodology included clinic visits, observations, discussions with doctors and counsellors, contextual interviews with PLHA, diary studies, technology tryouts, and home visits. Analysis identified user statements, observations, breakdowns, insights, and design ideas, and we consolidated our findings across users with an affinity diagram. We found that despite several efforts, PLHA have limited access to authentic information. Some know facts and procedures but lack a conceptual understanding of HIV. Challenges include low education, lack of access to technology, lack of socialisation, limited time with doctors and counsellors, high power-distance between PLHA and doctors and counsellors, and information overload. Information solutions based on mobile phones can lead to better communication and improve treatment adherence and effectiveness if they are based on the following: repetition, visualisation, organisation, localisation, and personalisation of information; improved socialisation; and complementing current efforts in clinics.