Though there are many challenges to Ekman's thesis that there are basic emotions with universal corresponding facial expressions, our main criticism concerns the extent to which grounding situations alter how people read faces. To that end, we marshal experimental studies showing that identical faces are seen as expressing different emotions when contextualized differently. Rather than dismissing these effects as illusions, we start from the position, generally favored by embodied thinkers, that situations are primary: they are where specifiable and hence knowable properties first show up. We further argue that situationally inflected emotional expressions are informationally meaningful. We reject the idea that reading expressions is primarily a matter of ascertaining internal mental states, arguing instead that people register overall situations when looking at faces. However, if mind is understood as a situated phenomenon that extends into active ecological frames, then one can still argue that mindreading is going on. Although we do not claim that isolated things like cliffs or cars have agency, we speculate that networked systems comprising cliffs, people, cars, bears, and the like collectively function with intentionality, all the more so on a robust situated mind thesis, contra figures like Dennett who argue that people overimpute mind to things. Our position has practical implications insofar as it casts doubt on recent attempts to develop AI systems that extract emotional intent from facial expressions, since many of these systems are grounded in Ekman's basic view.