In an increasingly competitive environment it is more important than ever for operators to keep their end users satisfied. User satisfaction is often characterised in terms of Quality of Experience (QoE), a subjective metric with multiple dimensions such as expectations, content, terminal, environment, cost and performance. QoE is typically quantified as a mean opinion score (MOS), obtained by averaging the ratings given by a number of volunteer users for controlled combinations of content, terminals, performance and so on. While this approach has many advantages, it also suffers from a number of difficulties: representativeness (the number of users, as well as the number of objects and devices, has to be kept small); validity (the results may be biased by the situation, the setting, the remuneration and so on); and applicability (it is not clear how the different scores map to notions such as "acceptable" or "unacceptable", and operators alone cannot do very much about factors such as content).

We thus investigate the possibility of detecting user opinions, in the simplified terms above, from the network itself, that is, with the actual expectations, content, terminals, environments, costs and performance of virtually all users, all the time. To this end we revisit the earlier suggestion that user opinions are reflected in their behaviour, such that poor performance may result in interrupted requests. These works have, however, considered single flows, so we extend the idea to web pages, which are groups of flows. In this paper we present our methods to group flows, interpret users and characterise performance, and we make a first assessment of the correlation between web page interruptions and network performance characteristics.
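To make the general idea concrete, the following is a minimal illustrative sketch, not the method presented in this paper: it assumes per-flow records with a hypothetical `aborted` flag (e.g. a connection torn down before completion), groups each client's flows into pages with a simple idle-gap heuristic, and computes a point-biserial correlation between a per-page interruption flag and one performance characteristic (page download time). The field names, the `IDLE_GAP` threshold and the choice of correlation measure are all assumptions made for illustration.

```python
# Sketch: group flows into pages, flag interrupted pages, correlate with
# page download time. All thresholds and field names are illustrative.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Flow:
    client: str      # client identifier (e.g. anonymised IP)
    start: float     # flow start time (s)
    end: float       # flow end time (s)
    aborted: bool    # hypothetical flag: flow terminated before completion

IDLE_GAP = 2.0  # assumed idle gap (s) separating consecutive pages of a client

def group_into_pages(flows):
    """Start a new page whenever the gap since the client's previous flow
    exceeds IDLE_GAP; otherwise attach the flow to the current page."""
    pages, by_client = [], {}
    for f in sorted(flows, key=lambda f: (f.client, f.start)):
        by_client.setdefault(f.client, []).append(f)
    for client_flows in by_client.values():
        page = [client_flows[0]]
        for prev, cur in zip(client_flows, client_flows[1:]):
            if cur.start - prev.end > IDLE_GAP:
                pages.append(page)
                page = []
            page.append(cur)
        pages.append(page)
    return pages

def point_biserial(interrupted, durations):
    """Correlation between a binary interruption flag and page download time."""
    d1 = [d for i, d in zip(interrupted, durations) if i]
    d0 = [d for i, d in zip(interrupted, durations) if not i]
    if not d1 or not d0:
        return 0.0
    p = len(d1) / len(durations)
    s = pstdev(durations)
    return (mean(d1) - mean(d0)) / s * (p * (1 - p)) ** 0.5 if s else 0.0

if __name__ == "__main__":
    flows = [
        Flow("u1", 0.0, 1.0, False), Flow("u1", 0.5, 2.0, False),  # fast page
        Flow("u1", 10.0, 18.0, True),                              # slow, aborted
        Flow("u2", 0.0, 0.8, False),
        Flow("u2", 5.0, 14.0, True),                               # slow, aborted
    ]
    pages = group_into_pages(flows)
    interrupted = [any(f.aborted for f in p) for p in pages]
    durations = [max(f.end for f in p) - min(f.start for f in p) for p in pages]
    print(point_biserial(interrupted, durations))  # positive: long pages tend to be interrupted
```

On this toy input the correlation is strongly positive, i.e. the pages with the longest download times are the ones that were interrupted; the paper's actual assessment is of course carried out on real traffic with the grouping and interruption heuristics described in the following sections.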