Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point of view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that the distortion observed by the end-user closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel mismatch than approaches that strictly minimize the expected distortion.
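The mean-variance trade-off described above can be sketched as a simple mode-selection rule. This is a minimal illustration, not the paper's method: the mode names, distortion values, loss probability, and weight `lam` are all assumed for the example, and the distortion is modeled as a two-point random variable (packet received vs. packet lost).

```python
from dataclasses import dataclass

@dataclass
class Mode:
    """A hypothetical source/channel configuration (values are illustrative)."""
    name: str
    d_ok: float    # distortion if the packet is delivered
    d_loss: float  # distortion if the packet is lost (after concealment)
    p_loss: float  # packet-loss probability under this mode

def distortion_stats(m: Mode) -> tuple[float, float]:
    """Mean and variance of the two-point end-to-end distortion."""
    mean = (1 - m.p_loss) * m.d_ok + m.p_loss * m.d_loss
    var = m.p_loss * (1 - m.p_loss) * (m.d_loss - m.d_ok) ** 2
    return mean, var

def select_mode(modes: list[Mode], lam: float) -> Mode:
    """Pick the mode minimizing E[D] + lam * Var[D].

    lam = 0 recovers the traditional minimum-expected-distortion rule;
    lam > 0 penalizes modes whose distortion is unpredictable."""
    def cost(m: Mode) -> float:
        mean, var = distortion_stats(m)
        return mean + lam * var
    return min(modes, key=cost)

modes = [
    Mode("intra", d_ok=40.0, d_loss=60.0, p_loss=0.10),   # robust, higher base distortion
    Mode("inter", d_ok=25.0, d_loss=120.0, p_loss=0.10),  # efficient, fragile to loss
]
print(select_mode(modes, lam=0.0).name)   # → inter (lowest expected distortion)
print(select_mode(modes, lam=0.05).name)  # → intra (variance-aware choice)
```

With `lam = 0` the efficient but fragile mode wins on expected distortion alone; adding even a small variance penalty flips the choice to the robust mode, whose received quality stays close to the mean the transmitter predicted.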