In software engineering practice, evaluating and selecting the software testing tools that best fit the project at hand is an important and challenging task. In scientific studies of software engineering, practitioner evaluations and beliefs have recently gained interest, and some studies suggest that practitioners find the beliefs of peers more credible than empirical evidence. To study how software practitioners evaluate testing tools, we conducted online opinion surveys (n=89). We analyzed the reliability of the opinions using Krippendorff's alpha, the intra-class correlation coefficient (ICC), and coefficients of variation (CV). Negative binomial regression was used to evaluate the effect of demographics. We find that opinions about a specific tool can be conflicting. We show how increasing the number of respondents improves the reliability of the estimates, as measured with the ICC. Our results indicate that, on average, opinions from seven experts provide a moderate level of reliability. Among the demographic factors, technical seniority is associated with more negative evaluations. To improve the understanding, robustness, and impact of these findings, further studies drawing on diverse sources and complementary methods are needed.
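The relationship between the number of respondents and ICC-based reliability can be illustrated with a small simulation. This is a hypothetical sketch, not the study's actual analysis: it assumes a one-way random-effects model (average-measure ICC(1,k)) and illustrative variance parameters, and the function name `icc1k` and all numeric values are assumptions chosen for demonstration.

```python
import numpy as np

def icc1k(ratings):
    # Average-measure, one-way random-effects ICC, i.e. ICC(1,k):
    # (MSB - MSW) / MSB, from a one-way ANOVA over items.
    n, k = ratings.shape
    item_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # Between-item and within-item mean squares.
    msb = k * ((item_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((ratings - item_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / msb

# Simulate opinion scores: each of 200 hypothetical tools has a "true"
# quality, and each rater adds independent noise (SDs are assumptions).
rng = np.random.default_rng(0)
n_tools, true_sd, noise_sd = 200, 1.0, 1.5
true_quality = rng.normal(0.0, true_sd, size=n_tools)

iccs = {}
for k in (3, 7, 15):  # number of respondents per tool
    ratings = true_quality[:, None] + rng.normal(0.0, noise_sd, size=(n_tools, k))
    iccs[k] = icc1k(ratings)
    print(f"k={k:2d} raters -> ICC(1,k) = {iccs[k]:.2f}")
```

Under these assumed variances, adding raters averages out individual noise, so the average-measure ICC rises with k, mirroring the abstract's observation that more respondents yield more reliable estimates.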