This paper presents the results of testing a neural network that classifies vowels in a vocalic system with an [ATR] (Advanced Tongue Root) contrast, based on data from Akebu (Kwa family). The acoustic nature of the [ATR] feature remains understudied. The only reliable acoustic correlate of [ATR] is the magnitude of the first formant (F1), which is also modulated by tongue height, resulting in substantial overlap between high [-ATR] vowels and mid [+ATR] vowels. Other acoustic metrics that have been associated with [ATR], such as F1 bandwidth (B1) and the intensity of F1 relative to F2 (A1-A2), are typically inconsistent across vowel types and speakers. The values of four metrics – F1, F2, A1-A2, B1 – were used to train and test the neural network. We tested four versions of the model, differing in whether a fifth variable encoding the speaker was included and in the number of hidden layers (two or three). The models that included the speaker variable achieved slightly higher accuracy, while the precision and recall of the three-layer models were generally higher than those of the models with two hidden layers.
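The experimental setup described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the data are random placeholders for the Akebu measurements, the number of speakers and the hidden-layer widths are assumptions, and scikit-learn's `MLPClassifier` stands in for whatever network the paper actually used.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400

# Four acoustic predictors per vowel token: F1, F2, A1-A2, B1.
# Synthetic stand-ins here; real values would come from formant measurements.
X_acoustic = rng.normal(size=(n, 4))

# Optional fifth predictor: a numeric speaker code (assumption: 8 speakers).
speaker = rng.integers(0, 8, size=(n, 1)).astype(float)
X_with_speaker = np.hstack([X_acoustic, speaker])

# Binary target: [+ATR] vs [-ATR]. In this toy setup the label is tied
# (with noise) to the first column, mimicking F1 as the main correlate.
y = (X_acoustic[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Two of the four compared variants: two vs three hidden layers,
# with and without the speaker variable (layer widths are assumptions).
two_layer = MLPClassifier(hidden_layer_sizes=(16, 16),
                          max_iter=2000, random_state=0)
three_layer = MLPClassifier(hidden_layer_sizes=(16, 16, 16),
                            max_iter=2000, random_state=0)

two_layer.fit(X_acoustic, y)
three_layer.fit(X_with_speaker, y)

print("2-layer accuracy:", round(two_layer.score(X_acoustic, y), 2))
print("3-layer accuracy:", round(three_layer.score(X_with_speaker, y), 2))
```

In a real evaluation the data would be split into training and test sets, and per-class precision and recall (e.g. via `sklearn.metrics.classification_report`) would be compared across the four variants, as the paper does.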