Traditionally, strategy-proofness is considered an extreme form of strategic simplicity. When a strategy-proof mechanism is in place, honesty is the best policy: no matter what actions others are taking, the best course of action is to report one's true preferences. For this reason, such mechanisms are often referred to as truthful, and the preferences reported to them are interpreted at face value. Truthfulness is thought to reduce the costs of strategizing, promote equity (by not giving an advantage to more sophisticated players), provide robustness, eliminate the costs of collecting information on others, and simplify the interpretation of reported preferences. These desirable features have led many centralized markets, especially in education and entry-level labor markets, to adopt truthful mechanisms. More specifically, the mechanisms in use are often based on the applicant-proposing version of the Deferred Acceptance (DA) algorithm, which is strategy-proof for applicants.

But do truthful mechanisms actually induce truthful reporting? Recent evidence from the Israeli Psychology Master's Match (IPMM) strongly indicates that the assumption of truthful reporting is false. Additional evidence from the field and from the lab suggests that this phenomenon of preference misrepresentation is pervasive. A recurring finding is that misrepresentation rates are higher in weaker segments of markets. This motivates us to investigate further who the individuals that misrepresent their preferences under DA are and what drives this behavior.

Our hope is that our findings will inform market designers and policy makers about the prevalence of misrepresentation and its systematic nature. A better understanding of when and why individuals misrepresent their preferences can guide these practitioners in designing specific, targeted interventions to promote truthful reporting.
For example, if members of particular groups are more likely to (err and) misrepresent their preferences, then this behavior may have
Artificial intelligence (AI) algorithms hold promise to reduce inequalities across race and socioeconomic status. One of the most important domains of racial and economic inequality is medical outcomes: Black and low-income people are more likely to die from many diseases. Algorithms can help reduce these inequalities because they are less likely than human doctors to make biased decisions. Unfortunately, people are generally averse to algorithms making important moral decisions, including in medicine, undermining the adoption of AI in healthcare. Here we use the COVID-19 pandemic to examine whether the threat of racial and economic inequality increases the preference for algorithmic decision-making. Four studies (N = 2,819) conducted in the United States and Singapore show that emphasizing inequality in medical outcomes increases the preference for algorithmic decision-making in triage decisions. These studies suggest that one way to increase the acceptance of AI in healthcare is to emphasize the threat of inequality and the negative outcomes associated with human decision-making.
Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior–posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain–computer interface (BCI) applications.