Accurate and robust pedestrian detection is fundamental for indoor robotic systems that must navigate safely alongside humans in spatially constrained, unpredictable environments. This paper presents IRBGHR-PIXOR, a detection framework designed for pedestrian perception on indoor mobile robots. The approach extends the PIXOR detector with two key modifications: a redesigned convolutional backbone built from Inverted Residual Blocks (IRB) combined with Gaussian Heatmap Regression (GHR), and a Modified Focal Loss (MFL) that addresses class imbalance in the training data. The IRB backbone strengthens the network's ability to extract spatial features from sparse 3D LiDAR scans, while GHR improves localization by modeling each pedestrian's center as a probability distribution over the point cloud's projected grid and regressing its peak. Evaluated on the large-scale JRDB dataset, which comprises scans from 16-beam Velodyne LiDAR sensors, IRBGHR-PIXOR attains 97.17% Average Precision (AP) at a 0.5 IoU threshold, without a significant increase in model complexity. By addressing the perceptual challenges of confined indoor environments, this work supports the safe and effective deployment of autonomous robots in human-centered spaces. Evaluation on a broader range of edge cases and integration with complementary sensing modalities remain promising directions, contributing to the reliable dynamic perception required by next-generation robotic systems coexisting with people.
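To make the GHR idea concrete, the following is a minimal sketch of how Gaussian center-heatmap regression targets are commonly constructed on a bird's-eye-view grid. The grid size, pedestrian centers, and `sigma` below are illustrative assumptions, not values from the paper, and the paper's exact formulation may differ:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma):
    """Render a 2D Gaussian peaked at `center` (row, col) onto a grid of `shape`."""
    h, w = shape
    ys, xs = np.ogrid[:h, :w]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

# Hypothetical 200x200 BEV grid with two pedestrian centers.
heat = np.zeros((200, 200))
for cy, cx in [(50, 60), (120, 140)]:
    # Overlapping Gaussians are merged with an element-wise max,
    # so each target peak stays at 1.0.
    heat = np.maximum(heat, gaussian_heatmap(heat.shape, (cy, cx), sigma=2.5))
```

The network is then trained to regress this soft target map instead of hard one-hot center labels, so small localization errors are penalized smoothly rather than all-or-nothing.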