There is a growing call for greater public involvement in establishing science and technology policy, in line with democratic ideals. A variety of public participation procedures exist that aim to consult and involve the public, ranging from the public hearing to the consensus conference. Unfortunately, a general lack of empirical consideration of the quality of these methods arises from confusion as to the appropriate benchmarks for evaluation. Given that the quality of the output of any participation exercise is difficult to determine, the authors suggest the need to consider which aspects of the process are desirable and then to measure the presence or quality of these process aspects. To this end, a number of theoretical evaluation criteria that are essential for effective public participation are specified. These comprise two types: acceptance criteria, which concern features of a method that make it acceptable to the wider public, and process criteria, which concern features of the process that are liable to ensure that it takes place in an effective manner. Future research needs to develop instruments to measure these criteria more precisely and identify the contextual and environmental factors that will mediate the effectiveness of the different participation methods.
Imprecise definitions of key terms in the “public participation” domain have hindered the conduct of good research and militated against the development and implementation of effective participation practices. In this article, we define key concepts in the domain: public communication, public consultation, and public participation. These concepts are differentiated according to the nature and flow of information between exercise sponsors and participants. According to such an information flow perspective, an exercise’s effectiveness may be ascertained by the efficiency with which full, relevant information is elicited from all appropriate sources, transferred to (and processed by) all appropriate recipients, and combined (when required) to give an aggregate/consensual response. Key variables that may theoretically affect effectiveness—and on which engagement mechanisms differ—are identified and used to develop a typology of mechanisms. The resultant typology reveals four communication, six consultation, and four participation mechanism classes. Limitations to the typology are discussed, and future research needs identified.
Trust in risk information about food-related hazards may be an important determinant of public reactions to risk information. One of the central questions addressed by the risk communication literature is why some individuals and organizations are trusted as sources of risk information and others are not. Industry and government often lack public trust, whereas other sources (for example, consumer organizations, the quality media, medical doctors) are highly trusted. Problematically, previous surveys and questionnaire studies have utilized questions generated by the investigators themselves to assess public perceptions of trust in different sources. Furthermore, no account was taken of the hazard domain. In the first study reported here, semistructured interviewing was used to elicit the underpinning constructs determining trust and distrust in different sources providing food-related risk information (n = 35). In the second study, the repertory grid method was used to elicit the terminology that respondents use to distinguish between different potential food-related information sources (n = 39), the data being submitted to generalised Procrustes analysis. The results of the two studies were combined and validated in survey research (n = 888), where factor analysis indicated that knowledge in itself does not lead to trust, but that trusted sources are seen to be characterised by multiple positive attributes. Contrary to previous research, complete freedom does not lead to trust; rather, sources which possess moderate accountability are seen to be the most trusted.
The concept of public participation is one of growing interest in the UK and elsewhere, with a commensurate growth in mechanisms to enable this. The merits of participation, however, are difficult to ascertain, as there are relatively few cases in which the effectiveness of participation exercises has been studied in a structured (as opposed to highly subjective) manner. This seems to stem largely from uncertainty in the research community as to how to conduct evaluations. In this article, one agenda for conducting evaluation research that might lead to the systematic acquisition of knowledge is presented. This agenda identifies the importance of defining effectiveness and of operationalizing one’s definition (i.e., developing appropriate measurement instruments and processes). The article includes analysis of the nature of past evaluations, discussion of potential difficulties in the enactment of the proposed agenda, and discussion of some potential solutions.