“…The strength of the LMBAS is that it is a theoretically driven, knowledge-based system that allows users to evaluate the link between speech production and perception. In future work, the performance of the LMBAS can be compared to that of other acoustic features commonly extracted for voice quality analysis, and its utility can be assessed in other neuropsychiatric disorders that commonly affect voice and emotion recognition (Aguiar et al., 2019; Agurto et al., 2019; Agurto et al., 2020; Bone et al., 2017; Cummins et al., 2015; Deshpande et al., 2020; Eyben et al., 2010; Harati et al., 2018; Huang et al., 2018; Konig et al., 2015; Low et al., 2020; Maor et al., 2020; Marmar et al., 2019; Norel et al., 2018; Orozco-Arroyave et al., 2016; Perez et al., 2018; Pinkas et al., 2020; Rusz et al., 2011; Sara et al., 2020). Some of these features include autocorrelation, zero crossing rate, entropy/entropy ratios across targeted spectral ranges, energy/intensity, Mel/Bark frequency cepstral coefficients (MFCC), linear predictive coefficients (LPC), perceptual linear predictive coefficients (PLP), perceptual linear predictive cepstral coefficients (PLP-CC), spectral features, psychoacoustic sharpness, spectral harmonicity, F0, F0 harmonic ratios, jitter/shimmer, and a variety of statistical and mathematical summary measurements of these frame-level values.…”
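As a minimal sketch of how two of the frame-level features named above (zero crossing rate and energy/intensity) and their summary statistics might be computed, the following pure-Python example frames a signal and averages per-frame values. The frame length, hop size, and synthetic test tone are illustrative assumptions, not parameters from the cited work.

```python
import math

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    return [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign differs."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in frame) / len(frame)

# Synthetic 440 Hz tone at 16 kHz as a stand-in for a speech signal.
sr = 16000
x = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]  # 1 second

frames = frame_signal(x, frame_len=400, hop=160)  # 25 ms frames, 10 ms hop
zcr = [zero_crossing_rate(f) for f in frames]
en = [energy(f) for f in frames]

# Example summary statistics over the frame-level values, as the text describes.
zcr_mean = sum(zcr) / len(zcr)
en_mean = sum(en) / len(en)
```

In practice such features are extracted by toolkits (e.g., openSMILE, as in Eyben et al., 2010), which additionally compute the cepstral and perceptual features listed above; this sketch only illustrates the frame-then-summarize pattern.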