Data reduction analyses such as principal components analysis and exploratory factor analysis identify relationships within a set of potentially correlated variables and cluster those variables into a smaller number of groups. Because of their relative objectivity, these analyses are popular throughout the animal literature for studying a wide variety of topics. Numerous authors have highlighted "best practice" guidelines for component/factor "extraction", i.e. determining how many components/factors to extract from a data reduction analysis, because this decision can greatly affect the interpretation, comparability, and replicability of one's results. Statisticians agree that Kaiser's criterion, i.e. extracting components/factors with eigenvalues > 1.0, should never be used, yet within the animal literature a considerable number of authors still use it, including in publications as recent as 2018, and across a wide range of taxa (e.g. insects, birds, fish, mammals) and topics (e.g. personality, cognition, health, morphology, reproduction). Further awareness of this issue is therefore needed within the animal sciences to ensure that results optimise structural stability, and thus comparability and reproducibility. In the present commentary, we first clarify the distinction between principal components analysis and exploratory factor analysis in terms of analysing simple versus complex structures, and how this relates to component/factor extraction. Second, we draw on empirical evidence from simulation studies to explain why certain extraction methods are more reliable than others, including why automated methods perform better and why Kaiser's criterion is inappropriate and should never be used. Third, we provide recommendations on what to do when multiple automated extraction methods "disagree", a situation that can arise when dealing with complex structures. Finally, we explain how to perform and interpret more robust, automated extraction tests using R.
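By way of illustration, one widely recommended automated extraction method, Horn's parallel analysis, can be run in R via the psych package's fa.parallel() function. The sketch below is a minimal example using a classic correlation matrix shipped with base R, chosen for convenience; it is not the specific workflow detailed in the full text.

```r
# Minimal sketch (illustrative setup, not the authors' exact code):
# Horn's parallel analysis via the psych package, an automated
# alternative to Kaiser's criterion.
library(psych)

# Harman's 24 psychological tests: a correlation matrix from base R's
# 'datasets' package, based on n = 145 observations.
R <- Harman74.cor$cov

# Parallel analysis compares each observed eigenvalue against the
# eigenvalues expected from random data of the same dimensions; a
# component/factor is retained only while the observed eigenvalue
# exceeds the corresponding random-data eigenvalue.
pa <- fa.parallel(R, n.obs = 145, fm = "minres", fa = "both")

# Suggested extraction numbers:
pa$nfact  # number of factors (exploratory factor analysis)
pa$ncomp  # number of components (principal components analysis)
```

Unlike Kaiser's criterion, which applies a fixed eigenvalue cut-off of 1.0 regardless of sample size or the number of variables, retention here is benchmarked against what random data of the same size would produce.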