The current World Wide Web, also known as Web 2.0, is an immense library of interlinked documents that are transferred by computers and presented to people. The search engine is the most important tool for discovering information on the WWW. Despite considerable development and novel research in current search-engine techniques, these engines are still syntactic in nature: they display results on the basis of keyword matching, without understanding the meaning of the query, and so produce lists of web pages containing a large number of irrelevant documents. The Semantic Web (Web 3.0), the next version of the World Wide Web, is being developed with the aim of reducing the problems faced in Web 2.0 by representing data in structured form; to discover such data from the Semantic Web, Semantic Search Engines (SSEs) are being developed in many domains. This paper surveys some of the prevalent SSEs with a focus on their architecture, and presents a comparative study based on the techniques they follow for crawling, reasoning, indexing, ranking, etc.
In the pharmaceutical industry, poorly water-soluble drugs require enabling technologies to increase their apparent solubility in the biological environment. Amorphous solid dispersion (ASD) has emerged as an attractive strategy that has been used in more than 20 marketed oral pharmaceutical products. The amorphous form is inherently unstable and exhibits phase separation and crystallization during shelf-life storage. Polymers stabilize the amorphous drug by antiplasticization, reducing molecular mobility, lowering the chemical potential of the drug, and raising the glass transition temperature of the ASD. Drug-polymer miscibility is therefore an important contributor to the physical stability of ASDs. This review discusses the basics of drug-polymer interactions, with a major focus on methods for evaluating the solubility and miscibility of the drug in the polymer. Methods for the evaluation of drug-polymer solubility and miscibility can be classified as thermal, spectroscopic, microscopic, solid-liquid equilibrium-based, rheological, and computational. Thermal methods have commonly been used to determine the solubility of the drug in the polymer, while the other methods provide qualitative information about drug-polymer miscibility. Despite these advances, the majority of the methods remain inadequate for quantifying drug-polymer miscibility at room temperature. There is still a need for methods that can accurately determine drug-polymer miscibility at pharmaceutically relevant temperatures.
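As a concrete illustration of the thermal methods mentioned above (not part of the original abstract), the Gordon-Taylor relation is a standard way to estimate the glass transition temperature of a miscible drug-polymer blend from the pure-component values:

```latex
T_{g,\mathrm{mix}} = \frac{w_1 T_{g1} + K\, w_2 T_{g2}}{w_1 + K\, w_2},
\qquad
K \approx \frac{\rho_1 T_{g1}}{\rho_2 T_{g2}} \;\; \text{(Simha--Boyer rule)}
```

where $w_i$, $T_{gi}$, and $\rho_i$ are the weight fraction, glass transition temperature, and density of the drug ($i=1$) and polymer ($i=2$). A single, composition-dependent $T_g$ lying between the pure-component values is commonly taken as evidence of a miscible blend, whereas two separate transitions indicate phase separation.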
Abstract—Users of the current World Wide Web (WWW) must refine their search queries themselves in order to find exact answers, because the current WWW is a web of documents representing only text, audio, video, images, and metadata (unstructured data), not conceptual information. Computers are used merely to present those documents, not to retrieve the desired results, which ultimately overburdens the user. To address this issue, Tim Berners-Lee, the inventor of the WWW, envisioned the Semantic Web, which prioritizes data over documents and uses ontologies to manage that data. Ontologies have been recognized as the key technology for shaping and exploiting information for effective knowledge management, as they establish a common vocabulary through which community members can interlink, combine, and communicate knowledge. However, because a large number of ontologies are available for the same domain, integrating data using ontologies has become a major challenge. There is therefore a need to unify disparate ontologies belonging to the same domain in order to bridge the gap between conceptualizations of that domain. To address this challenge, this paper proposes a framework that unifies disparate ontologies through a merging technique. The ontology merging process collects ontologies of the same domain, unifies their entities (classes, properties), and forms a global ontology. The empirical results show the construction of a global concept indexer, which collects unique concepts by applying a matching operation between concepts taken from the disparate ontologies.

Keywords—Ontology, concept, concept matching, concept indexer, ontology alignment.

I. INTRODUCTION

Despite the huge development in techniques and tools for making the current web more expressive, it remains merely an information-publishing medium directed toward human consumption. On the web, computers serve only as the information space; their computational ability has not yet been exploited.
Moreover, current tools are not expressive enough to provide direct answers to users' requirements. Users have to search through a number of web documents to find the solution corresponding to their queries. A great deal of research addresses this problem; one proposed solution is data integration, but because of data heterogeneity on the web, this task has become a major challenge. Tim Berners-Lee envisioned the Semantic Web [1] as an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Its primary concern is to have data on the web defined and linked in a way that machines can also understand its meaning. Machines can then be used for automation, integration, and reuse of data across various applications. To make computers understand the meaning of content, the Semantic Web uses the concept of an ontology, which formalizes the information of a domain at the conceptual level. An ontology [2] is considered the backbone of the Semantic Web. It is an explicit ...
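The merging step described in the abstract above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's actual algorithm: concept labels from disparate ontologies of the same domain are normalized, matched by string equality, and collected into a global concept indexer that keeps one entry per unique concept. The names `normalize` and `build_global_indexer` are illustrative, not taken from the paper.

```python
def normalize(label: str) -> str:
    """Canonicalize a concept label for simple string-based matching."""
    return label.strip().lower().replace("_", " ").replace("-", " ")

def build_global_indexer(*ontologies):
    """Merge concept labels from several ontologies into one indexer.

    Each ontology is given as an iterable of concept labels; two labels
    match when their normalized forms are equal. The indexer maps each
    normalized concept to its first-seen label and the set of source
    ontologies that contain it.
    """
    indexer = {}
    for onto_id, concepts in enumerate(ontologies):
        for label in concepts:
            key = normalize(label)
            # First occurrence defines the global concept;
            # later matches only record provenance.
            entry = indexer.setdefault(key, {"label": label, "sources": set()})
            entry["sources"].add(onto_id)
    return indexer

# Two toy ontologies of the same (university) domain:
univ_a = ["Professor", "Student", "Course"]
univ_b = ["professor", "student", "Department"]
merged = build_global_indexer(univ_a, univ_b)
print(sorted(merged))  # unique concepts across both ontologies
```

Real alignment systems go beyond exact label equality (synonym dictionaries, edit distance, structural similarity of super/subclasses), but the indexer structure is the same: one entry per matched concept, with provenance back to the source ontologies.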
The initial impetus for image databases originated with the image interpretation community. Most of the proposals from this community, however, were quite narrowly conceived, and hence, after a brief flurry of activity in the late 1970s and early-to-mid 1980s, interest in this area decreased drastically. In our opinion, interest could not be sustained due to its unsophisticated conception. At that time, the database community largely ignored such nonstandard applications due, we believe, to the unsophisticated nature of the then-current database management systems. It has only been since the development of various object-oriented approaches to database management that the field has expanded into these areas. In the last half of the 1980s, however, the situation had largely been reversed. The database community had expressed much interest in the development of nonstandard database management systems, including image databases, due, as mentioned above, to the development of the object-oriented paradigm as well as various data-driven approaches to iconic indexing. However, the interest of the image interpretation community had wavered. Only in the 1990s have the two communities converged on a common conception of what an image database should be. This is due to the acceptance of the belief that image and textual information should be treated equally: images should be retrievable by content and should also be integral components of the query language. Thus, image interpretation should be an important component of any query-processing strategy. There are many interesting problems in the field of image database management, including issues in data modeling, sensor data representation and interpretation, user interfaces, and query processing. The object-oriented paradigm has been, and continues to be, a great impetus to this work.
We have seen major advances in image databases only since this paradigm became accepted throughout the computer science community. However, in our opinion, the paradigm must develop further before image databases truly come into their own. In particular, methods should also be treated as objects and should be efficiently managed. We are only now starting to see image database systems in which there is a fixed number of elementary methods and the user has the privilege (some would say the burden) of defining more complex methods in terms of the elementary ones. One reason we have not seen much work in this area is quite apparent: it is difficult. Most researchers in image interpretation have not been very concerned with the efficiency aspects of their work; just developing a technique that works on real-world data has been reward enough. However, there have been researchers, especially in the area of object recognition, who have been concerned with efficient methods for image interpretation. The field of image databases owes a lot to these people. While in the past many of the issues addressed by such people were considered somewhat ...