Among other uses, co-speech gestures can contribute additional semantic content to the spoken utterances with which they coincide. A growing body of research is dedicated to understanding how inferences from gestures interact with logical operators in speech, including negation ("not"/"n’t"), modals (e.g., "might"), and quantifiers (e.g., "each", "none", "exactly one"). A related but less-addressed question is what kinds of meaningful content other than gestures can evince this same behavior; this is in turn connected to the much broader question of which properties of gestures are responsible for how they interact with logical operators. We present two experiments investigating sentences with co-speech sound effects and co-text emoji in lieu of gestures, revealing an inference pattern remarkably similar to that of co-speech gestures. The results suggest that gestural inferences do not behave the way they do because of any traits specific to gestures, and that the inference pattern extends to a much broader range of content.