Introduction
Effective communication is a central part of risk regulation and a key component in helping the public make sense of the risks they face (Bennett 2010; Fischbacher-Smith et al. 2010; Veland and Aven 2013). It enables people to participate and be heard in decisions about risks that affect them. It is also vital in shaping how policies are formulated, how people understand them, and how they adapt their practices or behaviour as a consequence to reduce the threat posed by the risk. Risk communication is the exchange of information about risk (Löfstedt 2008; Veland and Aven 2013) between two or more persons or stakeholder groups, which may include government agencies, organisations, scientists or individual citizens (Covello et al. 1986). However, risk communication has also become a means by which powerful individuals or groups with vested interests exploit the resources at their disposal to shape risk arguments and the policy perspectives that follow (Smith 1988; Warner and Kinslow 2013; Veland and Aven 2013; Demeritt and Nobert 2014; Hardy and Maguire 2016; McKell and De Barro 2016). Yet the extent to which individuals or public groups use the resources at their disposal to their advantage in risk communication remains an area that has received too little scientific attention.
This book examines the processes of risk communication within the context of the smoking; measles, mumps and rubella (MMR) vaccine; and sugar debates in the UK. It considers how policy decisions are made in times of risk and uncertainty, especially where there is little or no scientific evidence on which to base policy decision-making. Through the lens of the Policy Evaluation Risk Communication (PERC) framework (Adekola et al. 2018), the study analyses the smoking, MMR vaccine and sugar debates to describe how a risk argument evolves from an initial conceptualisation or identification of the risk to its policy formulation. Within this, the study expands on the role of power and expertise in risk communication, and the evidence gathered extends the understanding of the social amplification of risk from a power perspective.
Understanding how power and expertise shape risk communication about public health and safety is important because it can reveal underlying yet salient factors that shape public understanding and the policy perspectives taken towards risk (factors which would otherwise go unnoticed or unscrutinised), in ways that may benefit or disadvantage certain public groups. Powerful or well-resourced stakeholder groups, for instance, can use the resources at their disposal to influence the credibility of information flow stations (such as the media, technical experts and educational institutions), which in turn may influence public perception of risk. They can also extend their influence to different response mechanisms of society by introducing bias into individual perception (Lukes 2004) through channels such as marketing, advertising, and film and documentary production. Stakeholder groups may even use their influence to build relationships with other powerful groups, which in turn shapes member responses and the type of rationality brought to risk issues (Collingridge and Reeve 1986). Furthermore, powerful groups can extend their influence to tarnish the reputation of persons or groups opposed to their interests by amplifying negative events associated with those people or places, thereby undermining their credibility and any claims they make.
The analysis carried out in this book is timely, especially in this post-truth1 era (Keyes 2004; Pazzanese 2016; Flood 2016) in which big voices (such as the UK’s former justice secretary, Michael Gove, or, in the United States, Donald Trump) are challenging intellectualism and the role of evidence and experts in making sense of risk issues in times of uncertainty. Gove, in the final days of the UK’s EU referendum campaign, attempted to dissuade the public from expert interpretations (of gloom and doom if Britain exited the EU) (Brown 2016). He stated that “people in this country have had enough of experts” (Brown 2016). Gove’s contention was met with fierce criticism and was immediately challenged, especially by the scientific community. It is therefore worth understanding the role of experts in shaping our understanding of risk in public health risk communication.
The Construct of Risk
The term ‘risk’ is typically associated with unwanted events or outcomes (Renn and Roco 2006) and is framed differently by different authors to include the occurrence of an ‘adverse event’ (Warner et al. 1992), a ‘loss’ (Brearley and Hall 1982) or a situation where ‘value is at stake’. The literature identifies three schools of thought: the objective school, the subjective school and one that combines both perspectives. The objective school determines risk by physical facts and holds that what constitutes risk is independent of any bias, assumptions or values (Hansson 2010). This assumption has, however, been criticised for ignoring underlying factors (such as personal, structural, institutional and organisational issues) that shape how risk is identified, measured and interpreted (Wynne 1992). The subjective school holds that all risks are socially constructed (Douglas and Wildavsky 1982), and thus an understanding of risk is a reflection of perceived harm or hazard (Slovic and Weber 2002). The core argument here is that our perception of risk cannot be separated from our values, perceptions and worldview (Gephart et al. 2009). This subjective assumption has, in turn, been criticised for over-emphasising the ‘value’ associated with risk (Shrader-Frechette 1991a) and for denying that harm occurs regardless of whether one believes it will.
Because of the weaknesses of the first two perspectives, the third perspective combines both objective and subjective elements (Kasperson et al. 1988; Shrader-Frechette 1991b). The assumption here is that, regardless of our subjectivity or the value we place on a risk, the risk may pose a real threat or hazard; however, this is only fully recognised when harm is shown to have occurred (Shrader-Frechette 1991a). Shrader-Frechette (1991a) accused the first school of thought of viewing ordinary citizens as ignorant of science and of assuming that technical experts alone have the expertise and ability to make judgements about the potential risks people face. The subjective school, on the other hand, was criticised for assuming that citizens’ unwanted behaviour about risk arises because t...
