Active listening is a well-known skill applied in human communication to build intimacy and elicit self-disclosure in support of a wide variety of cooperative tasks. When applied to conversational UIs, active listening from machines can likewise elicit greater self-disclosure by signaling to users that they are being heard, which can have positive outcomes. However, embedding active listening skills in machines at scale takes considerable engineering effort and training, given the need to personalize active-listening cues to individual users and their specific utterances. A more generic solution is needed given the increasing use of conversational agents, especially by the growing number of socially isolated individuals. With this in mind, we developed an Amazon Alexa skill that provides privacy-preserving and pseudo-random backchanneling to indicate active listening. User study (N = 40) data show that backchanneling improves the perceived degree of active listening by smart speakers. It also results in more emotional disclosure, with participants using more positive words. Perception of smart speakers as active listeners is positively associated with perceived emotional support. Interview data corroborate the feasibility of using smart speakers to provide emotional support. These findings have important implications for smart speaker interaction design in several domains of cooperative work and social computing.

CCS Concepts: • Human-centered computing → Interaction design theory, concepts and paradigms; Personal digital assistants; • Applied computing → Consumer health.