Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing on social science theories and the emerging body of research on algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, as well as the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes shaped by individual characteristics. Interestingly, for specific decisions, automated decisions by AI were often evaluated as on par with, or even better than, those made by human experts. Theoretical and societal implications of these findings are discussed.
Personally managing and protecting online privacy has become an essential part of everyday life. This research draws on protection motivation theory (PMT) to investigate privacy-protective behavior online. A two-wave panel study (N = 928) shows that (1) people rarely to occasionally protect their online privacy and (2) people most often delete cookies and browser history or decline cookies to protect their online privacy. In addition, (3) perceived threat is high: people perceive the collection, usage, and sharing of personal information as a severe problem to which they are susceptible. Coping appraisal is mixed: although people have confidence in some protective measures, they have little confidence in their own efficacy to protect their online privacy. Moreover, privacy-protective behavior is affected by perceived severity and response efficacy. These findings emphasize the relevance of PMT in the context of privacy threats and have important implications for regulators.