This paper describes a testing environment for commercial intrusion-detection systems, shows the results of an actual test run, and presents a number of conclusions drawn from the tests. Our test environment currently focuses on IP denial-of-service attacks, Trojan horse traffic, and HTTP traffic. The paper takes the point of view of an analyst receiving alerts from intrusion-detection systems and examines the quality of the diagnosis they provide. While the analysis of the test results is not limited to this point of view, we feel that diagnostic accuracy is highly relevant to the actual success and usability of intrusion-detection technology. The tests show that the diagnoses provided by commercial intrusion-detection systems sorely lack precision and accuracy, and that these systems are unable to capture the multiple facets of the security issues occurring on the test network. In particular, while they are sometimes able to extract several pieces of information from a single malicious event, the alerts they report are not related to one another in any way, thus losing significant background information for the analyst. The paper therefore proposes a solution for improving current intrusion-detection probes, both to enhance the diagnosis provided when an alert is raised and to qualify alerts in relation to the intent of the attacker as perceived from the information acquired during analysis.