“…A growing body of work has demonstrated that meaningful, potentially privacy-violating information can be extracted from DNNs. Attacks such as property inference [4], model inversion [26], and membership inference [79] have shown that it is possible to extract aggregate properties of a model and correlate them with a specific subset of data contributors [4,29], to reconstruct training data by simply querying the DNN [15,26,27,36,89], and to determine whether a given input was part of a DNN's training set [76,77,79], emphasising the need for privacy-preserving ML (PPML) mechanisms.…”
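To make the membership-inference threat concrete, the sketch below illustrates the common loss-threshold heuristic: a point the model was trained on typically incurs a lower loss than an unseen point, so a simple threshold on the per-example loss already leaks membership. This is an illustrative approximation of the idea behind such attacks, not the exact procedure of [76,77,79]; the `predict_proba` callable, the calibration data, and the 10th-percentile threshold are all assumptions introduced here.

```python
# Minimal, illustrative loss-threshold membership-inference sketch.
# Assumption: `predict_proba(x)` returns the target model's softmax
# output for input x; it is a hypothetical stand-in, not an API from
# the cited papers.
import numpy as np

def cross_entropy(probs: np.ndarray, label: int, eps: float = 1e-12) -> float:
    """Per-example cross-entropy loss from a model's softmax output."""
    return -float(np.log(probs[label] + eps))

def calibrate_threshold(predict_proba, nonmember_x, nonmember_y) -> float:
    """Pick a loss threshold from data known NOT to be in the training set.

    Training points tend to have lower loss than non-members because the
    model overfits them, so a low quantile of non-member losses serves as
    a crude decision boundary (the 10% quantile is an assumption).
    """
    losses = [cross_entropy(predict_proba(x), y)
              for x, y in zip(nonmember_x, nonmember_y)]
    return float(np.quantile(losses, 0.10))

def is_member(predict_proba, x, y, threshold: float) -> bool:
    """Guess that (x, y) was a training point if its loss is unusually low."""
    return cross_entropy(predict_proba(x), y) < threshold
```

Even this crude attacker needs nothing beyond black-box query access, which is why defences such as the PPML mechanisms discussed here aim to bound how much a model's outputs can reveal about any individual training example.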