With the academic corpus doubling roughly every nine years, machine learning is a promising avenue for keeping systematic review manageable. Though several notable advances have already been made, machine learning is still incorporated suboptimally, relying on a sequential, staged process designed around a purely human workflow, exemplified by PRISMA. Here we test a spiral (alternating or oscillating) approach, in which full-text screening is interleaved with title/abstract screening, and evaluate it in three datasets by simulation under 360 conditions comprising different classifiers, feature extraction methods, prioritization rules, data types, and information provided (e.g., title/abstract alone versus full text included). Overwhelmingly, the results favored the spiral approach with Logistic Regression as the classifier, TF-IDF for vectorization, and Maximum Probability for prioritization. Results demonstrate up to a 90\% improvement over traditional machine learning methodologies, especially for databases with fewer eligible articles. With these advances, the screening component of most systematic reviews should remain functionally achievable for another one to two decades.
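For illustration, the following is a minimal sketch of the winning configuration, not the authors' simulation code: a TF-IDF plus logistic-regression ranker (via scikit-learn) applying Maximum Probability prioritization, with rounds alternating between title/abstract and full-text features in the spiral fashion described above. The record fields (\texttt{title\_abstract}, \texttt{full\_text}), seed size, and batch size are hypothetical, and known labels stand in for human screening decisions.

\begin{verbatim}
# Sketch only: one spiral screening loop under assumed field names.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def rank_unscreened(screened_texts, screened_labels, unscreened_texts):
    """Fit on screened records; rank unscreened ones by P(relevant)."""
    if len(set(screened_labels)) < 2:
        # No signal yet (only one class seen): keep the pool order.
        return np.arange(len(unscreened_texts))
    vectorizer = TfidfVectorizer(stop_words="english")
    X_train = vectorizer.fit_transform(screened_texts)
    X_pool = vectorizer.transform(unscreened_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_train, screened_labels)
    # Maximum Probability prioritization: screen first the records
    # the model is most confident are relevant.
    scores = clf.predict_proba(X_pool)[:, 1]
    return np.argsort(scores)[::-1]  # indices, highest probability first

def spiral_screen(records, labels, n_rounds=10, batch=50):
    """Alternate title/abstract and full-text features across rounds."""
    screened = list(range(batch))               # seed set
    pool = list(range(batch, len(records)))     # unscreened records
    for r in range(n_rounds):
        if not pool:
            break
        field = "title_abstract" if r % 2 == 0 else "full_text"
        order = rank_unscreened(
            [records[i][field] for i in screened],
            [labels[i] for i in screened],
            [records[i][field] for i in pool],
        )
        # "Screen" the top-ranked batch (labels stand in for a human).
        top = [pool[i] for i in order[:batch]]
        screened.extend(top)
        pool = [i for i in pool if i not in set(top)]
    return screened
\end{verbatim}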