Summary
A growing body of research illustrates consensus between researchers and practitioners that developing rapport facilitates cooperation and disclosure in a range of professional information gathering contexts. In such contexts, rapport behaviors are often intentionally used in an attempt to facilitate a positive interaction with another adult, which may or may not result in genuine mutual rapport. To examine how rapport has been manipulated and measured in professional contexts, we systematically mapped the relevant evidence base in this field. For each of the 35 studies that met our inclusion criteria, behaviors associated with building rapport were coded in relation to whether they were verbal, non‐verbal, or para‐verbal. Methods to measure rapport were also coded and recorded, as were different types of disclosure. A Searchable Systematic Map was produced to catalogue key study characteristics. Discussion focuses on the underlying intention of the rapport behaviors that featured most frequently across studies.
In the digital era, we witness the increasing use of artificial intelligence (AI) to solve problems while improving productivity and efficiency. Yet costs are inevitably involved in delegating power to algorithmically based systems, some of whose workings are opaque and unobservable and are thus termed the “black box”. Central to understanding the “black box” is acknowledging that the algorithm is not acting mendaciously; it is simply using the recombination afforded to scaled, computable machine learning algorithms. Yet an algorithm with arbitrary precision can easily reconstruct such characteristics and make life-changing decisions, particularly in financial services (credit scoring, risk assessment, etc.), and it can be difficult to establish whether this was done in a fair manner reflecting the values of society. If we permit AI to make life-changing decisions, what are the opportunity costs, data trade-offs, and implications for social, economic, technical, legal, and environmental systems? We find that over 160 sets of ethical AI principles exist, urging organisations to act responsibly and avoid causing digital societal harms. This maelstrom of guidance, none of which is compulsory, confuses rather than guides. We need to think carefully about how we implement these algorithms and how we delegate decisions and data usage in the absence of human oversight and AI governance. The paper seeks to harmonise and align approaches, illustrating the opportunities and threats of AI, while raising awareness of Corporate Digital Responsibility (CDR) as a potential collaborative mechanism for demystifying governance complexity and establishing an equitable digital society.
FinBots are chatbots built on automated decision technology, designed to facilitate accessible banking and to support customers in making financial decisions. Chatbots are increasing in prevalence, sometimes even equipped to mimic human social rules, expectations, and norms, reducing the opportunity for human-to-human interaction. As banks and financial advisory platforms move towards creating bots that enhance consumer trust and adoption rates, we investigated the effects of chatbot vignettes with and without socio-emotional features on intention to use the chatbot for financial support purposes. We conducted a between-subjects online experiment with N = 410 participants. Participants in the control group were provided with a vignette describing a secure and reliable chatbot called XRO23, whereas participants in the experimental group were presented with a vignette describing a secure and reliable chatbot that is more human-like and named Emma. We found that the Emma vignette neither increased participants' trust levels nor lowered their privacy concerns, even though it increased the perception of social presence. However, we found that intention to use the presented chatbot for financial support was positively influenced by perceived humanness and trust in the bot. Participants were also more willing to share financially sensitive information, such as account number, sort code, and payment information, with XRO23 than with Emma, revealing a preference for a technical and mechanical FinBot in information sharing. Overall, this research contributes to our understanding of the intention to use chatbots with different features as financial technology; in particular, socio-emotional support may not be favoured when designed separately from financial function.
Interviewing of suspects, victims, and eyewitnesses contributes significantly to the investigation process. While a great deal is known about investigative interviewing practices in the United Kingdom and the Nordic region, very little is known about the framework used by Malaysian police officers. A survey was administered to 44 Royal Malaysian Police interviewers serving in the Sexual, Women and Child Investigations Division (D11) of the Crime Investigation Department. Respondents were asked about the investigative interviewing techniques they use with suspects, witnesses, and victims; how effective they think these techniques are; and the training they had received. Findings revealed that many police officers currently possess limited knowledge of best-practice investigative interviewing. More training, feedback, and supervision are needed and desired.