This paper reports on empirical work studying perceptions of unfair treatment caused by automated computational systems. While the pervasiveness of algorithmic bias is widely acknowledged, and perceptions of fairness are commonly studied in Human-Computer Interaction, there is little research on how users from disadvantaged and marginalised backgrounds experience unfair treatment by automated computational systems. Research in this area needs to diversify the users, domains, and tasks it investigates, as well as the strategies users employ to reduce harm. To unpack these issues, we ran a pre-screened survey of 663 participants, oversampling those with at-risk characteristics. We collected the occurrences and types of conflicts participants reported with unfair and discriminatory treatment and systems, as well as the actions they took to resolve these situations. Drawing on intersectional research, we combine qualitative and quantitative approaches to highlight the nuances of power and privilege in perceptions of automated computational systems. We discuss our participants' experiences of computational essentialism, attribute-based exclusion, and expected harm, and derive suggestions for addressing these perceptions of unfairness as they occur.