Efficient sound source detection and localization with microphone arrays is important for many applications, including teleconferencing, surveillance, and smart rooms. While steered response power (SRP) algorithms exhibit robust performance relative to other approaches, their application is limited by their high computational load. In dynamic auditory scenes, the entire space must be scanned at regular intervals because moving sound sources switch between active and inactive states. This paper introduces a time-segmentation and parallelization strategy to speed up the SRP algorithm for dynamic auditory scenes with multiple speech sources. The primary applications targeted by this work are immersive arrays and off-line auditory scene analysis with beamforming for speaker separation in cocktail party environments. Results from a Monte Carlo simulation with six speech sources in a mildly reverberant environment demonstrate a speed-up factor of 45, with a modest loss in the number of detections and a significant reduction in anomalous detections. Experimental results with real recordings demonstrate performance consistent with that of the simulation.
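For readers unfamiliar with the grid-scan formulation that makes SRP costly, the following is a minimal NumPy sketch of SRP with PHAT weighting over a single time segment; the function names (gcc_phat, srp_phat) and all parameters are illustrative assumptions, not the authors' implementation or the paper's specific time-segmentation scheme.

```python
import numpy as np

C = 343.0  # assumed speed of sound (m/s)

def gcc_phat(x, y, n_fft):
    """GCC-PHAT cross-correlation between two microphone frames."""
    X = np.fft.rfft(x, n_fft)
    Y = np.fft.rfft(y, n_fft)
    G = X * np.conj(Y)
    G /= np.abs(G) + 1e-12           # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(G, n_fft)
    return np.fft.fftshift(cc)       # shift so lag 0 sits at the centre

def srp_phat(frames, mic_pos, grid, fs):
    """Steered response power over a grid of candidate source locations.

    frames : (n_mics, n_samples) one time segment per microphone
    mic_pos: (n_mics, 3) microphone coordinates in metres
    grid   : (n_points, 3) candidate source locations in metres
    fs     : sampling rate in Hz
    """
    n_mics, n_samp = frames.shape
    n_fft = 2 * n_samp
    centre = n_fft // 2
    power = np.zeros(len(grid))
    # Every microphone pair contributes its cross-correlation, sampled at
    # the TDOA each grid point implies -- this pairwise scan over the whole
    # grid is the computational bottleneck the paper aims to reduce.
    for i in range(n_mics):
        for j in range(i + 1, n_mics):
            cc = gcc_phat(frames[i], frames[j], n_fft)
            d = (np.linalg.norm(grid - mic_pos[i], axis=1)
                 - np.linalg.norm(grid - mic_pos[j], axis=1))
            lags = np.round(d / C * fs).astype(int)
            power += cc[centre + lags]
    return grid[np.argmax(power)], power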