Abstract-Modern systems are becoming highly configurable to satisfy the varying needs of customers and users. Software product lines are hence becoming a common trend in software development to reduce cost by enabling systematic, large-scale reuse. However, high levels of configurability entail new challenges. Some faults might be revealed only if a particular combination of features is selected in the delivered products, but testing all combinations is usually not feasible in practice due to their extremely large number. Combinatorial testing is a technique to generate smaller test suites in which all combinations of any t features are guaranteed to be tested. In this paper, we present several theorems describing the probability that random testing detects interaction faults, and we compare the results to combinatorial testing when there are no constraints among the features that can be part of a product. For example, random testing becomes even more effective as the number of features increases, converging toward the same effectiveness as combinatorial testing. Given that combinatorial testing entails significant computational overhead in the presence of hundreds or thousands of features, the results suggest that there are realistic scenarios in which random testing may outperform combinatorial testing in large systems. Furthermore, in the common situation where test budgets are constrained, random testing, unlike combinatorial testing, can still provide minimum guarantees on the probability of fault detection at any interaction level. However, when constraints are present among features, random testing can fare arbitrarily worse than combinatorial testing. As a result, in order to have a practical impact, future research should focus on better understanding the decision process for choosing between random testing and combinatorial testing, and on improving combinatorial testing in the presence of feature constraints.
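
To give intuition for the kind of result the theorems formalize, consider a minimal sketch under simplifying assumptions not spelled out in this abstract: k independent binary features, each set uniformly at random in every test configuration, and a fault triggered by one specific interaction among t of the features (the symbols k and n below are introduced here purely for illustration). A single random configuration then matches the faulty interaction with probability $2^{-t}$, so a suite of n independent random configurations detects the fault with probability
\[
P_{\text{detect}}(n, t) \;=\; 1 - \left(1 - 2^{-t}\right)^{n},
\]
which, notably, does not depend on k. For example, at interaction level $t = 2$ with $n = 100$ random tests, this gives $1 - 0.75^{100} \approx 1 - 3.2 \times 10^{-13}$, illustrating the kind of minimum guarantee on fault detection mentioned above.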