Electroencephalography (EEG) recordings have rarely been included in large-scale studies. This is arguably not due to a lack of information contained in EEG recordings, but mainly to methodological issues. In many cases, particularly in clinical, pediatric, and aging populations, the EEG shows a high degree of artifact contamination, and the quality of EEG recordings often differs substantially between subjects. Although a variety of standardized preprocessing methods exist to clean EEG of artifacts, there is currently no method to objectively quantify the quality of preprocessed EEG. This makes the commonly accepted procedure of excluding subjects from analyses due to excessive artifact contamination highly subjective. As a consequence, p-hacking is fostered, the replicability of results is decreased, and it is difficult to pool data from different study sites. In addition, in large-scale studies, data are collected over years or even decades, requiring software that controls and manages the preprocessing of ongoing and dynamically growing studies. To address these challenges, we developed AUTOMAGIC, an open-source MATLAB toolbox that acts as a wrapper around currently available preprocessing methods and offers objective, standardized quality assessment for growing studies. The software is compatible with the Brain Imaging Data Structure (BIDS) standard and hence facilitates data sharing. In the present paper, we outline the functionality of AUTOMAGIC and examine the effect of applying combinations of methods on a sample of resting EEG data. This examination suggests that applying a pipeline of algorithms to detect artifactual channels in combination with the Multiple Artifact Rejection Algorithm (MARA), an independent component analysis (ICA)-based artifact correction method, is sufficient to remove a large proportion of artifacts.
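As a rough illustration of this combination (a minimal sketch, not AUTOMAGIC's actual implementation), the following MATLAB fragment chains EEGLAB's bad-channel detection with ICA and the MARA plugin. The file name, filter cutoff, detection threshold, and the MARA call signature are assumptions; AUTOMAGIC combines several channel-detection algorithms rather than the single kurtosis criterion shown here.

    % Illustrative pipeline: bad-channel detection followed by
    % ICA-based artifact correction with MARA (EEGLAB + MARA plugin).
    EEG = pop_loadset('filename', 'sub-01_task-rest_eeg.set');  % hypothetical file
    EEG = pop_eegfiltnew(EEG, 0.5, []);                         % illustrative 0.5 Hz high-pass before ICA

    % Keep the full montage so removed channels can be interpolated later
    origChanlocs = EEG.chanlocs;

    % Detect and remove artifactual channels (kurtosis criterion as one example)
    [EEG, badChans] = pop_rejchan(EEG, 'elec', 1:EEG.nbchan, ...
        'threshold', 5, 'norm', 'on', 'measure', 'kurt');

    % ICA decomposition and MARA-based classification of artifactual components
    EEG = pop_runica(EEG, 'extended', 1);
    [artComps, ~] = MARA(EEG);            % MARA plugin; call signature assumed
    EEG = pop_subcomp(EEG, artComps, 0);  % remove components flagged as artifacts

    % Interpolate the previously removed channels back to the original montage
    EEG = pop_interp(EEG, origChanlocs, 'spherical');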