“…On the other hand, as long as the team’s common interests are met, the partial non-fulfillment of an individual’s own interests does not cause dissatisfaction; individuals therefore adjust their moral decisions to the behavioral decisions that follow from their teammates’ moral types (Baumard et al., 2013; Bostyn and Roets, 2017). In other words, the team exerts a constraining effect on individual behavior. Human–computer interaction research typically recruits participants with the two canonical moral orientations, utilitarian and deontological (Baniasadi et al., 2018a; Liu and Liu, 2021; Yokoi and Nakayachi, 2021). When people perceive robots as reliable and trustworthy, human–computer interaction proceeds more smoothly.…”