This paper proposes a new regulatory approach that implements capital requirements contingent on executive incentive schemes. We argue that excessive risk-taking in the financial sector originates from the shareholder moral hazard created by government guarantees rather than from corporate governance failures within banks. The idea behind the proposed regulatory approach is thus that the more the compensation structure decouples the interests of bank managers from those of shareholders by curbing risk-taking incentives, the higher the leverage the bank is permitted to take on. Consequently, the risk-shifting incentives caused by government guarantees and the risk-mitigating incentives created by the compensation structure offset each other such that the manager chooses the socially efficient investment policy. This paper was accepted by Amit Seru, finance.
The financial industry has been struggling with widespread misconduct and public mistrust. Here we argue that the lack of trust in the financial industry may stem from the selection of subjects with little, if any, trustworthiness into the financial industry. We identify the social preferences of business and economics students and follow up on their first job placements. We find that, during college, students who want to start their career in the financial industry are substantially less trustworthy. Most importantly, actual job placements several years later confirm this association. The job market in the financial industry does not screen out less trustworthy subjects. If anything, the opposite seems to be the case: even among students who are highly motivated to work in finance after graduation, those who actually start their career in finance are significantly less trustworthy than those who work elsewhere.
Predictive algorithmic scores can significantly impact the lives of assessed individuals by shaping decisions of organizations and institutions that affect them, for example, influencing the hiring prospects of job applicants or the release of defendants on bail. To better protect people and provide them with the opportunity to appeal their algorithmic assessments, data privacy advocates and regulators increasingly push for disclosing the scores and their use in decision-making processes to scored individuals. Although inherently important, the response of scored individuals to such algorithmic transparency is understudied. Inspired by psychological and economic theories of information processing, we aim to fill this gap. We conducted a comprehensive empirical study to explore how and why disclosing the use of algorithmic scoring processes to (involuntarily) scored individuals affects their behavior. Our results provide strong evidence that the disclosure of fundamentally erroneous algorithmic scores evokes self-fulfilling prophecies that endogenously steer the behavior of scored individuals toward their assessment, enabling algorithms to help produce the world they predict. Our results emphasize that isolated transparency measures can have considerable side effects, with noticeable implications for the development of automation bias, the occurrence of feedback loops, and the design of transparency regulations.