Purpose
The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for the evaluation and validation of PET auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM).

Methods
The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These included the selection of appropriate representative data and of reference contours obtained from established approaches, and a description of the available metrics. The benchmark was designed to be extendable through the inclusion of bespoke segmentation methods, while maintaining its main purpose as a standard testing platform for newly developed PET-AS methods. An example implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset in order to test and demonstrate the capabilities of the software as a benchmark platform.

Results
A selection of clinical, physical, and simulated phantom data, including "best estimate" reference contours derived from macroscopic specimens, simulation templates, and CT scans, was built into the PETASset application database. Specific metrics such as the Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S) were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool to generate structured reports on the performance of PET-AS algorithms against the reference contours was built. Across the PET-AS methods evaluated for demonstration, agreement with the reference contours ranged from 0.51 to 0.83 for DSC, from 0.44 to 0.86 for PPV, and from 0.61 to 1.00 for S. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state of the art.

Conclusions
PETASset provides a platform for standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The developed platform will be available to users wishing to evaluate their PET-AS methods and to contribute additional evaluation datasets.
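As an illustration of the agreement metrics reported above, the following is a minimal sketch of how DSC, PPV, and S can be computed from binary voxel masks of a PET-AS segmentation and a reference contour. The function name and NumPy-based implementation are assumptions for illustration only and do not reflect the internal PETASset code.

```python
# Minimal sketch (not part of PETASset): agreement metrics between a PET-AS
# segmentation and a reference contour, both given as boolean arrays of
# identical shape.
import numpy as np


def agreement_metrics(segmentation: np.ndarray, reference: np.ndarray) -> dict:
    """Return DSC, PPV, and Sensitivity (S) for two binary masks."""
    seg = segmentation.astype(bool)
    ref = reference.astype(bool)

    tp = np.logical_and(seg, ref).sum()    # voxels in both contours
    fp = np.logical_and(seg, ~ref).sum()   # segmented but not in reference
    fn = np.logical_and(~seg, ref).sum()   # in reference but missed

    dsc = 2.0 * tp / (2.0 * tp + fp + fn)  # Dice Similarity Coefficient
    ppv = tp / (tp + fp)                   # Positive Predictive Value
    sensitivity = tp / (tp + fn)           # Sensitivity

    return {"DSC": dsc, "PPV": ppv, "S": sensitivity}
```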