Two experiments examined the judged quality of videoconferencing as a function of three measures of IP network performance: bandwidth, latency, and packet loss. The experiments were conducted in a laboratory using a network emulator and a commercial videoconferencing system.

Experiment 1 used a fractional factorial design and varied all three parameters:

• Bandwidth: 128 kbit/s, 384 kbit/s, 768 kbit/s
• Latency: 0, 150, 300 ms one-way
• Bursty packet loss: 0, 2%, 4% (generated with the "Gilbert-Elliott" method)

Experiment 2 was designed (a) to use random rather than bursty packet loss, and (b) to allow detection of a statistical interaction between bandwidth and packet loss, if present. Bandwidth levels were the same as in Experiment 1, but packet loss was set to 0, 1, and 2%. The experimental design was full factorial.

In both experiments, pairs of non-expert judges held five-minute videoconferences for each combination of parameters, then rated the quality of system performance immediately after each videoconference. In both experiments, statistical analysis showed that packet loss was the most important network performance parameter in predicting the subjective quality of videoconferencing. These results agree with findings on VoIP quality. Bandwidth and latency also affected the judges' ratings, but to a smaller extent. In Experiment 2, an interaction between packet loss and bandwidth was detected: at lower bandwidths and greater packet loss, the subjective ratings were not as low as would be expected, contrary to the idea that quality would degrade catastrophically when bandwidth and packet loss were simultaneously unfavorable.
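The Gilbert-Elliott method named above models bursty loss as a two-state (Good/Bad) Markov chain, with a higher per-packet loss probability in the Bad state. The following is a minimal illustrative sketch of that model; the transition and loss probabilities here are placeholder values, not the parameters used in the experiments:

```python
import random

def gilbert_elliott_losses(n_packets, p_gb=0.05, p_bg=0.5,
                           loss_good=0.0, loss_bad=0.5, seed=0):
    """Simulate per-packet loss with a two-state Gilbert-Elliott chain.

    p_gb: probability of transitioning Good -> Bad per packet.
    p_bg: probability of transitioning Bad -> Good per packet.
    loss_good / loss_bad: per-packet loss probability in each state.
    Returns a list of booleans (True = packet lost), so losses cluster
    in bursts while the Bad state persists.
    """
    rng = random.Random(seed)
    state_bad = False
    losses = []
    for _ in range(n_packets):
        # Transition first, then draw a loss in the current state.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        p_loss = loss_bad if state_bad else loss_good
        losses.append(rng.random() < p_loss)
    return losses

# Long-run loss rate ≈ stationary P(Bad) * loss_bad
#   = (p_gb / (p_gb + p_bg)) * loss_bad ≈ 0.045 for these defaults.
losses = gilbert_elliott_losses(100_000)
rate = sum(losses) / len(losses)
```

Tuning `p_bg` (burst length) and `loss_bad` while holding the long-run rate fixed is what distinguishes bursty loss, as in Experiment 1, from the uniform random loss used in Experiment 2.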