Cats employ vocalizations to communicate information, and their sounds can carry a wide range of meanings. Concerning vocalization, an aspect of increasing relevance, directly connected with the welfare of these animals, is its emotional interpretation and the recognition of the production context. To this end, this work presents a proof of concept facilitating the automatic analysis of cat vocalizations based on signal processing and pattern recognition techniques, aimed at demonstrating whether the emission context can be identified from meowing vocalizations, even when recorded under sub-optimal conditions. We rely on a dataset including vocalizations of Maine Coon and European Shorthair breeds emitted in three different contexts: waiting for food, isolation in an unfamiliar environment, and brushing. To capture the emission context, we extract two sets of acoustic parameters, i.e., mel-frequency cepstral coefficients and temporal modulation features. Subsequently, these are modeled using a classification scheme based on a directed acyclic graph dividing the problem space. The experiments we conducted demonstrate the superiority of such a scheme over a series of generative and discriminative classification solutions. These results open up new perspectives for deepening our knowledge of acoustic communication between humans and cats and, more generally, between humans and animals.
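To make the feature-extraction step concrete, the following is a minimal NumPy-only sketch of how mel-frequency cepstral coefficients might be computed from a recording. It is not the authors' implementation: the function name `mfcc`, the frame/hop sizes, and the synthetic sine wave standing in for a meow recording are all illustrative assumptions; production work would typically use a dedicated audio library.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_mfcc=13, n_mels=26, frame_len=512, hop=256):
    """Illustrative MFCC extraction: framing -> window -> power
    spectrum -> mel filterbank -> log -> DCT-II."""
    # Slice the signal into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank spanning 0 .. sr/2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, frame_len // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log filterbank energies (small floor avoids log(0))
    log_energy = np.log(spec @ fbank.T + 1e-10)
    # DCT-II decorrelates the bands; keep the first n_mfcc coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1)
                 / (2 * n_mels))
    return log_energy @ dct.T

sr = 16000
t = np.arange(sr) / sr
meow = np.sin(2 * np.pi * 600 * t)  # synthetic stand-in for a meow recording
feats = mfcc(meow, sr)
print(feats.shape)  # one 13-dimensional MFCC vector per analysis frame
```

The resulting matrix (one row per frame) is the kind of representation that a downstream classifier, such as the directed-acyclic-graph scheme described above, would take as input.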