Partial face coverings such as sunglasses and facemasks have become the ‘new norm’, especially since the spread of infectious diseases. Unintentionally, they obscure facial expressions; as a result, both humans and artificial systems have been found to be less accurate at emotion categorization. However, it is unknown how similarly the performance of humans and artificial systems is affected when both are tested on exactly the same stimuli, varied systematically in the type of covering. Such a systematic, direct comparison would allow conclusions about the facial features relevant in a naturalistic context. We therefore investigated the impact of facemasks and sunglasses on the ability to categorize emotional facial expressions in humans and artificial systems. Artificial systems, represented by the VGG19 deep learning algorithm, and humans assessed images of people with varying emotional facial expressions under four covering conditions: unmasked (original images), mask (covering the lower face), partial mask (with a transparent mouth window), and sunglasses. Artificial systems performed significantly better than humans when no covering was present (> 15% difference). However, the accuracy achieved by both humans and artificial systems differed significantly depending on the type of covering and, importantly, on the emotion; for example, sunglasses reduced the recognition accuracy for fear in humans. We also noted that whereas humans mainly classified unrecognized expressions as neutral across all coverings, the misclassification patterns of the artificial systems varied. These findings show that humans and artificial systems classify and misclassify emotional expressions differently depending on both the type of face covering and the type of emotion.
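The per-covering accuracy and misclassification analysis described above can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the emotion labels, covering labels, and the randomly generated responses below are all placeholders, and the helper function `confusion` is a hypothetical name.

```python
import numpy as np

# Assumed label sets for illustration; the study's exact categories may differ.
EMOTIONS = ["anger", "fear", "happiness", "sadness", "neutral"]
COVERINGS = ["unmasked", "mask", "partial_mask", "sunglasses"]

def confusion(true, pred, k):
    """Build a k x k confusion matrix: rows = true emotion, cols = response."""
    cm = np.zeros((k, k), dtype=int)
    np.add.at(cm, (true, pred), 1)  # count each (true, predicted) pair
    return cm

# Placeholder data standing in for human or model responses per trial.
rng = np.random.default_rng(0)
n_trials = 400
true = rng.integers(0, len(EMOTIONS), size=n_trials)      # ground-truth emotion
pred = rng.integers(0, len(EMOTIONS), size=n_trials)      # categorization response
covering = rng.integers(0, len(COVERINGS), size=n_trials) # covering condition

# Accuracy per covering condition: diagonal of the confusion matrix
# (correct responses) over all trials in that condition.
for c, name in enumerate(COVERINGS):
    sel = covering == c
    cm = confusion(true[sel], pred[sel], len(EMOTIONS))
    acc = np.trace(cm) / cm.sum()
    print(f"{name}: accuracy {acc:.2f}")
```

Off-diagonal entries of each confusion matrix reveal the misclassification patterns the abstract refers to, e.g. how often a given expression was classified as neutral.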