To avoid costly security patching after software deployment, organizations adopt security-by-design techniques (e.g., threat analysis) to find and mitigate security issues before the system is ever implemented. Organizations are ramping up such (heavily manual) activities, but there is a global shortage in the security workforce. Techniques with favorable performance indicators would therefore yield cost savings for organizations whose security experts are scarce. However, past empirical studies have been inconclusive regarding some performance indicators of threat analysis techniques, so practitioners have little evidence on which to base their choice of technique. To address this issue, we replicated a controlled experiment with STRIDE. Our study aimed to measure and compare the performance indicators (productivity and precision) of two STRIDE variants (per-element and per-interaction). Since our observations are in part similar to those of the original study, we conclude that the two variants are not different enough to make a practical impact. Accordingly, the choice of which variant to adopt should be informed by the needs of the organization performing the threat analysis. We conclude by discussing some unexplored yet relevant topics in the context of STRIDE that we will consider in future work.