Automatic Decision Support Systems (DSS) are widely adopted for screening in socially sensitive tasks, including access to credit, mortgages, insurance, the labor market, and other benefits. While such systems can potentially guarantee less arbitrary decisions, an automatic DSS can still discriminate in the socially negative sense of treating people unfairly or unequally. We present a reference model for finding (prima facie) evidence of discrimination in automatic DSS, driven by a few key legal concepts. First, frequent classification rules are extracted from the set of decisions taken by the DSS over an input pool dataset. The key legal concepts are then used to drive the analysis of the extracted rules, with the aim of uncovering patterns of discrimination. Finally, we present LP2DD, an implementation of the overall reference model that integrates induction, through data mining classification rule extraction, and deduction, through a computational logic implementation of the analytical tools.
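To make the induction step concrete, the sketch below (not the paper's LP2DD implementation, which is based on computational logic) shows how a frequent classification rule over a pool of DSS decisions can be scored with a measure in the style of the extended lift (elift) used in the discrimination-discovery literature: how much adding a potentially discriminatory itemset A raises the confidence of a rule A, B -> C. The toy dataset, attribute names, and itemsets are illustrative assumptions, not data from the paper.

```python
# Minimal sketch, assuming rules are itemsets of attribute=value strings.
from typing import FrozenSet, List

Itemset = FrozenSet[str]

def support(db: List[Itemset], items: Itemset) -> float:
    """Fraction of records containing all items in `items`."""
    return sum(1 for rec in db if items <= rec) / len(db)

def confidence(db: List[Itemset], premise: Itemset, conclusion: Itemset) -> float:
    """conf(premise -> conclusion) = supp(premise U conclusion) / supp(premise)."""
    denom = support(db, premise)
    return support(db, premise | conclusion) / denom if denom else 0.0

def elift(db: List[Itemset], a: Itemset, b: Itemset, c: Itemset) -> float:
    """elift(A,B -> C) = conf(A,B -> C) / conf(B -> C): how much the
    potentially discriminatory itemset A boosts the decision C in context B."""
    base = confidence(db, b, c)
    return confidence(db, a | b, c) / base if base else float("inf")

# Hypothetical pool of DSS decisions (credit screening).
db = [frozenset(s.split()) for s in [
    "sex=female job=unskilled credit=deny",
    "sex=female job=unskilled credit=deny",
    "sex=female job=skilled credit=grant",
    "sex=male job=unskilled credit=grant",
    "sex=male job=unskilled credit=deny",
    "sex=male job=skilled credit=grant",
]]

a = frozenset({"sex=female"})     # potentially discriminatory itemset
b = frozenset({"job=unskilled"})  # context of the rule
c = frozenset({"credit=deny"})    # DSS decision

print(f"elift = {elift(db, a, b, c):.2f}")  # 1.33: values above 1 suggest
# the context-specific denial rate rises when the protected item is added,
# i.e., prima facie evidence worth passing to the deductive analysis stage.
```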