Unobtrusive recognition of the user's mood is an essential capability for affect-adaptive systems. Mood is a subtle, long-term affective state that is often misrecognized even by humans. Training a machine to recognize it from, for example, a video of the user is therefore a significant challenge, beginning with the lack of ground truth for supervised learning. Existing affective databases consist mainly of short videos annotated in terms of expressed emotions rather than mood. Perceived-mood annotations exist in only a few cases, and their reliability is questionable due to the subjectivity of mood estimation and the small number of coders involved. In this work, we introduce a new database for mood recognition from video. Our database contains 180 long, acted videos depicting typical daily scenarios with subtle facial and bodily expressions. The videos cover three visual modalities (face, body, Kinect data) and are annotated in terms of emotions (via G-trace) and mood (via the Self-Assessment Manikin and the AffectButton). To annotate the database exhaustively, we use crowdsourcing to reach a large number of nonexpert coders. We validate the reliability of our crowdsourced annotations by (1) applying a set of criteria to filter out unreliable coders, and (2) comparing the annotations of a subset of our videos with those collected in a controlled lab setting.
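To make the validation steps concrete, the following is a minimal sketch, not the authors' actual pipeline, of how crowdsourced SAM ratings could be filtered and then compared with lab-collected ratings for the same videos. The leave-one-out consensus filter, the Pearson-correlation threshold, the mean-absolute-error metric, and all array shapes and constants are assumptions made for illustration only.

# Hypothetical illustration: filter unreliable coders and compare crowd vs. lab ratings.
import numpy as np
from scipy.stats import pearsonr

def filter_unreliable_coders(ratings, min_corr=0.3):
    """ratings: (n_coders, n_videos) array of SAM scores (1-9).
    Keep coders whose ratings correlate with the leave-one-out consensus."""
    keep = []
    for i in range(ratings.shape[0]):
        consensus = np.delete(ratings, i, axis=0).mean(axis=0)  # mean rating without coder i
        r, _ = pearsonr(ratings[i], consensus)
        if r >= min_corr:
            keep.append(i)
    return ratings[keep]

def agreement_with_lab(crowd_ratings, lab_ratings):
    """Compare per-video crowd means with per-video lab means."""
    crowd_mean = crowd_ratings.mean(axis=0)
    lab_mean = lab_ratings.mean(axis=0)
    r, p = pearsonr(crowd_mean, lab_mean)   # linear agreement across videos
    mae = np.abs(crowd_mean - lab_mean).mean()
    return r, p, mae

if __name__ == "__main__":
    # Synthetic demo data: 20 videos, 30 crowd coders, 5 lab coders.
    rng = np.random.default_rng(0)
    true_mood = rng.uniform(1, 9, size=20)
    crowd = np.clip(true_mood + rng.normal(0, 1.5, (30, 20)), 1, 9)
    lab = np.clip(true_mood + rng.normal(0, 0.8, (5, 20)), 1, 9)
    crowd = filter_unreliable_coders(crowd)
    r, p, mae = agreement_with_lab(crowd, lab)
    print(f"kept {crowd.shape[0]} coders; r={r:.2f} (p={p:.3f}), MAE={mae:.2f}")

In practice, other agreement statistics (e.g., Krippendorff's alpha for ordinal ratings) could replace the correlation and error measures shown here; the sketch only illustrates the overall filter-then-compare structure.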