Data-driven profiling can uncover complex hidden structures in a dataset and has been used as a diagnostic tool in various fields. In audiology, the clinical characterization of hearing deficits for hearing-aid fitting is typically based on the pure-tone audiogram only. Implicitly, this relies on the assumption that the audiogram can predict a listener's supra-threshold hearing abilities. Sanchez-Lopez et al. [Trends in Hearing, vol. 22 (2018)] hypothesized that the hearing deficits of a given listener, both at hearing threshold and at supra-threshold sound levels, result from two independent types of "auditory distortions". The authors performed a data-driven analysis of two large datasets containing results from numerous tests, which led to the identification of four distinct "auditory profiles". However, the definition of the two types of distortion was challenged by differences between the two datasets in terms of the selected tests and the types of listeners included. Here, a new dataset was generated with the aim of overcoming those limitations. A heterogeneous group of listeners (N = 75) was tested using measures of speech intelligibility, loudness perception, binaural processing abilities and spectro-temporal resolution. The subsequent data analysis allowed the auditory profiles proposed by Sanchez-Lopez et al. (2018) to be refined. In addition, a robust iterative data-driven method is proposed to reduce the influence of individual data on the definition of the auditory profiles. The updated auditory profiles may provide a useful basis for improved hearing rehabilitation, e.g. through profile-based hearing-aid fitting.
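For readers unfamiliar with robust, iterative data-driven profiling, the following minimal sketch illustrates one generic way such robustness can be achieved, namely consensus clustering over repeated listener subsamples, so that no single listener dominates the resulting profile definitions. The use of k-means, the four-profile setting, the synthetic listener-by-test matrix, and all parameter values are illustrative assumptions; this is not the analysis pipeline reported in the paper.

```python
# Illustrative sketch only: consensus (subsampling-based) clustering as one
# generic way to reduce the influence of individual data on profile
# definitions. All choices below (k-means, 4 clusters, synthetic data) are
# assumptions for illustration, not the authors' actual method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical listener-by-test matrix: 75 listeners, 10 standardized outcomes
X = rng.standard_normal((75, 10))

n_profiles = 4          # assumed number of auditory profiles
n_iterations = 200      # number of subsampling iterations
subsample_frac = 0.8    # fraction of listeners used per iteration

# Track how often each pair of listeners is assigned to the same cluster
co_assign = np.zeros((len(X), len(X)))
counts = np.zeros((len(X), len(X)))

for _ in range(n_iterations):
    idx = rng.choice(len(X), size=int(subsample_frac * len(X)), replace=False)
    labels = KMeans(n_clusters=n_profiles, n_init=10).fit_predict(X[idx])
    same = labels[:, None] == labels[None, :]
    co_assign[np.ix_(idx, idx)] += same
    counts[np.ix_(idx, idx)] += 1

# Consensus matrix: pairwise probability of sharing a profile across subsamples
consensus = np.divide(co_assign, counts,
                      out=np.zeros_like(co_assign), where=counts > 0)

# Final profile labels derived from the consensus structure (again purely
# illustrative: clustering the rows of the consensus matrix)
final_labels = KMeans(n_clusters=n_profiles, n_init=10,
                      random_state=0).fit_predict(consensus)
print(np.bincount(final_labels))
```

Because the final grouping is based on how consistently listeners co-cluster across many subsamples, rather than on a single clustering of the full dataset, idiosyncratic individual results have less leverage on the profile boundaries; this is the general idea behind the robustness goal stated above, not a description of the specific method proposed in the paper.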