2021
DOI: 10.3389/fpsyg.2021.715159
Ensuring Effective Public Health Communication: Insights and Modeling Efforts From Theories of Behavioral Economics, Heuristics, and Behavioral Analysis for Decision Making Under Risk

Abstract: Public health (PH) messaging can have an enormous impact on how individuals within society behave, and can help ensure that they behave in a safe and responsible way, consistent with up-to-date, evidence-based PH guidelines. Done effectively, messaging can save lives and improve the health of those within society. Unfortunately, however, those within Government PH bodies typically have little training in how to represent PH messages effectively in a way that is consistent with psychological theories of cognitive …

Cited by 14 publications (17 citation statements) · References 129 publications

Citation statements, ordered by relevance:
“…This approach is more relevant at the idiographic level, as individuals each form their own unique relational networks through their unique learning histories. It could also be extended even further with an RFT-implemented reinforcement machine learning interpretation through discrete-time-series Markov chains (Edwards, 2021) to analyze reinforced behavior (and any other EMA data) through time in a complex distributed system (i.e., the real-world environment, outside of the laboratory) (Dabrowski and Hunt, 2011).…”
Section: Results
confidence: 99%
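The discrete-time Markov chain idea referenced in this statement can be sketched as follows. This is a minimal illustration, not the model from Edwards (2021): the behavioral state labels and transition probabilities are hypothetical placeholders, chosen only to show how reinforced behavior through time might be simulated from EMA-style state data.

    import numpy as np

    # Hypothetical behavioral states observed via EMA (illustrative only).
    states = ["avoidant", "neutral", "approach"]

    # Hypothetical transition matrix: P[i, j] = Pr(next state j | current state i).
    # Rows sum to 1; these values are placeholders, not fitted estimates.
    P = np.array([
        [0.6, 0.3, 0.1],
        [0.2, 0.5, 0.3],
        [0.1, 0.3, 0.6],
    ])

    def simulate(start: int, steps: int, rng=np.random.default_rng(0)):
        """Simulate a discrete-time Markov chain trajectory of behavioral states."""
        trajectory = [start]
        for _ in range(steps):
            trajectory.append(rng.choice(len(states), p=P[trajectory[-1]]))
        return [states[s] for s in trajectory]

    print(simulate(start=1, steps=5))  # e.g., a short simulated EMA sequence

In a real application the transition matrix would be estimated from the observed EMA time series rather than specified by hand.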
“…They are risk-seeking when confronted with information about losses, but risk-averse when confronted with information about gains [19]. Thus, in the health field, gain-frames may be more beneficial for promoting preventive behaviors, and loss-frames for favoring detection behaviors [24]. One possible explanation is that prevention behaviors are perceived as low risk, while detection behaviors are perceived as high risk [19, 25, 26].…”
Section: Results
confidence: 99%
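The risk-attitude asymmetry this statement describes is conventionally formalized through the prospect theory value function, concave over gains and convex (and steeper) over losses. The parameterization below is a standard textbook form given for illustration; it is not quoted from the citing paper.

    v(x) =
    \begin{cases}
      x^{\alpha},            & x \ge 0 \\
      -\lambda\,(-x)^{\beta}, & x < 0
    \end{cases}
    \qquad 0 < \alpha, \beta \le 1,\ \lambda > 1

Concavity over gains (alpha < 1) yields risk aversion for gain-framed outcomes, while convexity over losses (beta < 1), amplified by loss aversion (lambda > 1), yields risk seeking for loss-framed outcomes.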
“…However, equation 4 can be expressed more succinctly in the form of ME if the symbol ||| is used to denote a shared relation (AND) within the set, as suggested in previous studies (Gilroy, 2015; Edwards, 2021). In the following example of ME, describing a five-stripe snake (A) as being “more dangerous” (R_x) than a three-stripe snake (B) derives, through ME, the relation that a three-stripe snake (B) must therefore be “less dangerous” (R_y) than a five-stripe snake (A), whereby a contextual relation is expressed by C_rel within the set.…”
Section: Some Mathematical Formalization Considerations
confidence: 99%
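In symbols, the mutual entailment (ME) step in the snake example can be written as below. This is an illustrative reconstruction using the excerpt's own symbols (R_x, R_y, C_rel); the exact set notation of the cited equation 4 is not reproduced here.

    C_{rel} : \big( A \; R_x \; B \big) \;\Rightarrow\; \big( B \; R_y \; A \big),
    \quad R_x = \text{“more dangerous”}, \; R_y = \text{“less dangerous”}

That is, training the forward relation A R_x B is sufficient, under the contextual cue C_rel, to derive the untrained reverse relation B R_y A.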
“…This approach has the advantage of not needing to specify which categories are stored within background knowledge, and hence gets around the knowledge selection problem; instead, the semantic network derives a function between the input and output properties through its distributed activations across layers. Further to this, it is important to note that this model could be expanded further (as an additional module) to include a reinforcement learning agent structured through a Markov decision process (MDP), as specified in previous work (Edwards, 2021), when more complex decision making is needed that requires extracting background knowledge for category decision making. This specifies the probability of moving to the next state s' from the current state s given some action a, denoted P_a(s, s') (see Edwards, 2021, for full details of this extended reinforcement framework).…”
Section: Deep Learning Neural Network Semantic Architecture With Repr...
confidence: 99%
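A minimal sketch of the MDP transition structure P_a(s, s') mentioned above. The state and action names are hypothetical and do not come from Edwards (2021); they serve only to make the action-conditioned transition form concrete.

    import numpy as np

    # Hypothetical category-decision states and actions (illustrative only).
    states = ["uncertain", "category_A", "category_B"]
    actions = ["query_background_knowledge", "commit"]

    # P[a][s, s'] = probability of moving to state s' from state s under action a.
    # Placeholder values; each row of each matrix sums to 1.
    P = {
        "query_background_knowledge": np.array([
            [0.2, 0.4, 0.4],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0],
        ]),
        "commit": np.array([
            [0.6, 0.2, 0.2],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0],
        ]),
    }

    def transition_prob(a: str, s: int, s_next: int) -> float:
        """Return P_a(s, s'): the probability of s -> s' under action a."""
        return float(P[a][s, s_next])

    print(transition_prob("query_background_knowledge", 0, 1))  # 0.4

A reinforcement learning agent layered on top of this structure would additionally need a reward function and a policy; only the transition kernel is shown here.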