Over the past decades, consumer adoption of online grocery shopping has increased steadily. Yet overall market share remains comparatively low, and retailers are beginning to question the prospects of this maturing distribution channel. The existing landscape of online grocery channels has seen little innovation or diversity in business models, reflecting the prevailing assumption that consumer online grocery shopping behavior is largely homogeneous. The present research challenges this notion by updating the understanding of consumer online grocery shopping behavior in a large-scale, representative study of Danish consumers. The results reveal distinct segments of online grocery adopters, which differ in the importance they place on the perceived benefits of online grocery shopping. These segments can be targeted based on differences in preference for price, convenience, and service. The findings imply potential for retailers to differentiate themselves in the online grocery market.
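As a minimal sketch, benefit-based segments like those described above could be identified by clustering respondents on their standardized importance ratings. The abstract does not name the segmentation method, so k-means is assumed here, and the data and variable names below are placeholders, not the study's.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder data: rows = respondents, columns = importance ratings (1-7)
# for price, convenience, and service. Real survey responses would go here.
ratings = rng.integers(1, 8, size=(500, 3)).astype(float)

# Standardize so no single rating scale dominates the distance metric.
X = StandardScaler().fit_transform(ratings)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Per-segment mean importance profiles, e.g. a "price-driven" segment.
for k in range(3):
    print(k, ratings[segments == k].mean(axis=0).round(2))

In practice the number of clusters would be chosen by a fit criterion (e.g. silhouette scores) rather than fixed at three as in this sketch.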
The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people's moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people's personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people's moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people's moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. Under intuitive moral decision-making, participants shift more towards a deontological doctrine by sacrificing the passenger instead of the pedestrian. In addition, once a personal perspective is made salient, participants preserve the lives associated with that perspective: the passenger perspective shifts towards sacrificing the pedestrian, and vice versa. These biases in people's moral decisions underline the social challenge in designing a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.
The Research Ethics Committee of the Faculty of Pedagogy and Psychology (ELTE) granted a central permission (permission no. 2019/47). Many other labs obtained IRB approval as well; these approvals can be found at https://osf.io/j6kte/. Participants had to give informed consent before starting the experiment. Only participants recruited through MTurk or Prolific received monetary compensation.
Innovative technologies often feature inherently conflicting properties. This poses a challenge for marketers because negative properties not only weigh heavily on consumers' technology adoption decisions but may do so even more than positive ones. To shed light on this paradox of technology and its underlying processes, the present research draws on technology adoption and valence perception theories to develop a conceptual model of the prevalence of negativity bias in consumers' technology adoption decisions and its unique effect through a serial chain of consumers' perceptions of risk and trust regarding the technology. Results of three studies (N = 1309) demonstrate that the effect of negative valence consistently outweighs that of positive valence in consumers' technology adoption intentions (Studies 1–2) and decisions (Study 3). Furthermore, the results show that the disproportionate effect of negative (vs. positive) valence can be explained by the proposed serial causal chain through consumers' perceived risk of the technology and trust in the technology (Studies 2–3), while ruling out company trust and consumer knowledge as alternative drivers of the effects (Study 3). These findings contribute to the literature on consumer psychology in decisions to adopt novel technologies by quantifying and explaining the potential outcomes stemming from the ambiguous properties of novel technology. Moreover, this study identifies negativity bias as an often overlooked consumer bias with implications for marketing practice, useful for understanding and lowering resistance towards artificial intelligence technology.
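A minimal sketch of the serial mediation logic described above (valence -> perceived risk -> trust -> adoption), estimated with a product-of-coefficients bootstrap: the paper's actual estimation procedure is not specified here, so the simulated data and coefficients below are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Simulated placeholder data mirroring the proposed chain:
# valence (X) -> perceived risk (M1) -> trust (M2) -> adoption (Y).
valence = rng.normal(size=n)
risk = 0.5 * valence + rng.normal(size=n)
trust = -0.4 * risk + rng.normal(size=n)
adoption = 0.6 * trust + rng.normal(size=n)

def slopes(y, *xs):
    # OLS slopes (intercept dropped) via least squares.
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def serial_indirect(x, m1, m2, y):
    a1 = slopes(m1, x)[0]           # X -> M1
    d21 = slopes(m2, m1, x)[0]      # M1 -> M2, controlling for X
    b2 = slopes(y, m2, m1, x)[0]    # M2 -> Y, controlling for M1 and X
    return a1 * d21 * b2

# Percentile bootstrap for the serial indirect effect a1*d21*b2.
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boot.append(serial_indirect(valence[i], risk[i], trust[i], adoption[i]))
print("indirect effect:", round(serial_indirect(valence, risk, trust, adoption), 3))
print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))

The mediation is supported in this framework when the bootstrap confidence interval for the indirect effect excludes zero.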
The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce resources in healthcare. Medical artificial intelligence (AI) promises much-needed relief, but only if the technology gets adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals chooses medical AI over a human physician in a hypothetical COVID-19 triage phase prior to hospital entrance. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust of and perceived uniqueness neglect from human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures, without eroding trust in human physicians.
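As an illustration, the reported link between trust, open-mindedness, and the binary AI-versus-physician choice could be modeled as a logistic regression. The sketch below uses simulated data loosely calibrated to the roughly 1-in-10 adoption rate; all variable names and coefficients are hypothetical, not the paper's.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1068
trust_ai = rng.normal(size=n)    # trust in medical AI (standardized)
open_mind = rng.normal(size=n)   # trait open-mindedness (standardized)

# Simulate the reported asymmetry: a low baseline rate of choosing AI,
# driven mainly by trust and, more weakly, by open-mindedness.
logit = -2.2 + 1.0 * trust_ai + 0.3 * open_mind
choose_ai = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([trust_ai, open_mind]), choose_ai)
print("coefficients (trust, open-mindedness):", model.coef_.round(2))
print("share choosing medical AI:", choose_ai.mean().round(2))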