High-resolution plantar pressure recordings hold promise for gait biometrics, biomechanics, and clinical gait analysis. Accurately assessing side-specific patterns and asymmetries requires differentiating left from right steps, which is challenging when manual labeling is not feasible and shoe type varies. This research developed and evaluated six distinct algorithms (two inspired by existing literature and four novel) that combine spatial and temporal features with simple decision rules, machine learning, and deep learning to automatically classify left and right footsteps from underfoot pressure recordings while accounting for footwear variability. The six algorithms were assessed on a collection of more than 20,000 footsteps from 20 people wearing 41 different types of shoes. The results demonstrate that classification techniques based on spatial representations (peak pressure or binary images of footsteps) are more effective than those based on center-of-pressure (COP) time series. The most successful approach, which compares the contact area of the sole in different parts of the midfoot and forefoot, achieved an accuracy of 99.7% in determining left and right footsteps, with a convolutional neural network (CNN) classifier a close second (99.4%). These techniques were robust to many types of footwear and may be valuable for a variety of practical, community-based gait classification tasks.
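
To make the area-comparison idea concrete, the sketch below shows one possible rule-based classifier over a binary footprint image: it compares contact area between the two column-halves of the midfoot band, where the medial arch reduces contact. This is a minimal illustration under stated assumptions, not the paper's actual algorithm; the function name, the midfoot band boundaries, and the image-orientation convention are all hypothetical.

```python
import numpy as np

def classify_footstep(binary_footprint: np.ndarray) -> str:
    """Classify one footstep as 'left' or 'right' from a binary contact image.

    Assumed orientation: rows run heel (row 0) to toes (last row),
    columns run across the foot in sensor coordinates viewed from above.
    Hypothetical rule: within the midfoot band, the medial arch leaves
    less contact, so the column-half with the smaller contact area is
    taken to be the medial side.
    """
    n_rows, n_cols = binary_footprint.shape

    # Midfoot band: roughly the middle third of the foot's length (assumed).
    midfoot = binary_footprint[n_rows // 3: 2 * n_rows // 3, :]

    # Contact area (count of active cells) in each column-half of the band.
    half = n_cols // 2
    area_first_half = int(midfoot[:, :half].sum())
    area_second_half = int(midfoot[:, half:].sum())

    # Under the assumed orientation, the arch of a right foot falls in the
    # first column-half of the image, and vice versa for a left foot.
    return "right" if area_first_half < area_second_half else "left"


if __name__ == "__main__":
    # Toy example: a crude 'right foot' footprint with an arch cut-out
    # on one side of the midfoot band.
    footprint = np.ones((30, 12), dtype=bool)
    footprint[10:20, :5] = False  # arch region has no contact
    print(classify_footstep(footprint))  # -> "right"
```

A comparable rule could also weigh the forefoot region, as the abstract mentions; the midfoot band alone is used here only to keep the sketch short.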