Background: Purposive sampling has a long developmental history, and views on it range from regarding it as simple and straightforward to emphasising its complexity. The rationale for purposive sampling is to match the sample more closely to the aims and objectives of the research, thus improving the rigour of the study and the trustworthiness of the data and results. Four aspects of this concept have previously been described: credibility, transferability, dependability and confirmability. Aims: The aim of this paper is to outline the nature and intent of purposive sampling, presenting three case studies as examples of its application in different contexts. Results: Presenting individual case studies highlights how purposive sampling can be integrated into varying contexts depending on study design. The sampling strategies clearly situate each study in terms of trustworthiness of data collection and analysis. The approach to purposive sampling used in each case aligns with the research methodology, aims and objectives, thus addressing each aspect of rigour. Conclusions: Making the approach to participant sampling explicit improves methodological rigour as judged by the four aspects of trustworthiness. The cases presented offer novice researchers a guide to how rigour may be addressed in qualitative research.
The Open Targets Platform (https://platform.opentargets.org/) is an open source resource to systematically assist drug target identification and prioritisation using publicly available data. Since our last update, we have reimagined, redesigned, and rebuilt the Platform in order to streamline data integration and harmonisation, expand the ways in which users can explore the data, and improve the user experience. The gene–disease causal evidence has been enhanced and expanded to better capture disease causality across rare, common, and somatic diseases. For target and drug annotations, we have incorporated new features that help assess target safety and tractability, including genetic constraint, PROTACtability assessments, and AlphaFold structure predictions. We have also introduced new machine learning applications for knowledge extraction from the published literature, clinical trial information, and drug labels. The new technologies and frameworks introduced since the last update will ease the introduction of new features and the creation of separate instances of the Platform adapted to user requirements. Our new Community forum, expanded training materials, and outreach programme support our users in a range of use cases.
Limited resources and increasing environmental concerns have prompted calls to identify the critical questions that most need to be answered to advance conservation, thereby providing an agenda for scientific research priorities. Cetaceans are often keystone indicator species but also high-profile, charismatic flagship taxa that capture public and media attention as well as political interest. A dedicated workshop was held at the conference of the Society for Marine Mammalogy (December 2013, New Zealand) to identify where lack of data was hindering cetacean conservation and which questions need to be addressed most urgently. This paper summarizes 15 themes and component questions prioritized during the workshop. We hope this list will encourage cetacean conservation-orientated research and help agencies and policy makers to prioritize funding and future activities. This will ultimately remove some of the current obstacles to science-based cetacean conservation.
The authors detail an integrated system which combines natural language processing with speech understanding in the context of a problem-solving dialogue. The MINDS system uses a variety of pragmatic knowledge sources to dynamically generate expectations of what a user is likely to say.

Understanding speech is a difficult problem. The ultimate goal of all speech recognition research is to create an intelligent assistant that listens to what a user tells it and then carries out the instructions. An apparently simpler goal is the listening typewriter, a device which merely transcribes whatever it hears with only a few seconds' delay. The listening typewriter seems simple, but in reality the process of transcription requires almost complete understanding as well. Today, we are still quite far from these ultimate goals, but progress is being made.

One of the major problems in computer speech recognition and understanding is coping with large search spaces. The search space for speech recognition contains all the acoustics associated with words in the lexicon as well as all the legal word sequences. Today, the most widely used recognition systems are based on hidden Markov models (HMMs) [2]. In these systems, each word is typically represented as a sequence of phonemes, and each phoneme is associated with a sequence of states. In general, the search space grows with the size of the network of states, and as the search space grows, speech recognition performance decreases. Knowledge can be used to constrain the exponential growth of a search space and hence increase processing speed and recognition accuracy [9, 17]. Currently, the most common approach to constraining the search space is to use a grammar. The grammars used for speech recognition constrain legal word sequences. Normally they are used in a strict left-to-right fashion and embody syntactic and semantic constraints on individual sentences.
These constraints are represented in some form of probabilistic or semantic network which does not change from utterance to utterance [16–18]. As we move toward habitable systems and spontaneous speech, the search space problem is greatly magnified. Habitable systems permit users to speak naturally. Grammars for naturally spoken sentences are significantly larger than the small grammars typically used by speech recognition systems. When one considers interjections, restarts and other natural speech phenomena, the search space problem is further compounded. These problems point to the need for using knowledge sources beyond syntax and semantics to constrain the speech recognition process.

There are many other knowledge sources besides syntax and semantics. Typically, these are clustered into the category of pragmatic knowledge. Pragmatic knowledge includes inferring and tracking plans, using context across clausal and sentence boundaries, determining local and global constraints on utterances, and dealing with definite and pronominal reference. Work...
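The point about grammars constraining legal word sequences can be sketched with a toy example. This is a minimal illustration, not the MINDS system: the lexicon and the bigram grammar below are invented for demonstration. It contrasts the unconstrained word-sequence space (which grows exponentially with utterance length) against the handful of sequences a grammar actually licenses.

```python
from itertools import product

# Hypothetical toy lexicon and bigram grammar (invented for illustration):
# the grammar lists which word pairs may legally occur in sequence.
LEXICON = ["show", "me", "the", "flights", "fares"]
BIGRAMS = {
    ("show", "me"),
    ("me", "the"),
    ("the", "flights"),
    ("the", "fares"),
}

def unconstrained_space(length):
    """All word sequences of the given length: grows as |lexicon| ** length."""
    return [seq for seq in product(LEXICON, repeat=length)]

def grammar_constrained_space(length):
    """Only sequences whose every adjacent word pair is licensed by the grammar."""
    return [
        seq for seq in product(LEXICON, repeat=length)
        if all((a, b) in BIGRAMS for a, b in zip(seq, seq[1:]))
    ]

full = unconstrained_space(3)          # 5 ** 3 = 125 candidate sequences
legal = grammar_constrained_space(3)   # e.g. ("show", "me", "the")
print(len(full), len(legal))           # prints: 125 3
```

A recognizer searching only grammar-legal hypotheses examines 3 sequences instead of 125 here; with realistic lexicon sizes and utterance lengths the gap is many orders of magnitude, which is why tighter constraints (including the pragmatic knowledge MINDS adds) speed up search and improve accuracy.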