Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences. Given the complexity of the problem and the involvement of multiple stakeholders, including developers, end-users, and third parties, there is a need to understand the landscape of the sources of bias and the solutions being proposed to address them. This survey provides a "fish-eye view," examining approaches across four areas of research. The literature describes three steps toward a comprehensive treatment (bias detection, fairness management, and explainability management) and underscores the need to work both from within the system and from the perspective of stakeholders in the broader context.