Robot perception relies heavily on camera-based visual input for navigating and interacting with the environment. As robots become integral to a growing range of applications, the need to compute their visibility regions efficiently in complex environments has grown accordingly. The key challenge addressed in this paper is to devise a solution that not only accurately computes the visibility region V of a robot operating in a polygonal environment but also optimizes memory utilization to ensure real-time performance and scalability.

The main objective of this research is to propose an algorithm that achieves optimal time complexity while significantly reducing the memory required for visibility region computation. By focusing on sub-linear memory utilization, we aim to enhance the robot's ability to perceive its surroundings effectively and efficiently.

Previous approaches have provided solutions for visibility region computation in non-spiral environments, but most were not tailored to memory limitations. In contrast, the proposed algorithm achieves optimal O(n) time complexity while reducing memory usage to O(c / log n) variables, where c < n is the number of critical corners in the environment. By leveraging the constant-memory model together with a memory-constrained algorithm, we strike a balance between computational efficiency and memory usage.

The algorithm's performance is rigorously evaluated through extensive simulations and practical experiments. The results demonstrate its linear-time complexity and a substantial reduction in memory usage without compromising the accuracy of the computed visibility region. By handling memory constraints efficiently, the robot gains a cost-effective and reliable perception mechanism, making it well suited to a wide range of real-world applications.

The constant-memory model and memory-constrained algorithm presented in this paper offer a significant advancement in robot perception capabilities. By optimizing visibility region computation in polygonal environments, our approach contributes to the efficient operation of robots, enhancing their performance and applicability in complex real-world scenarios. These results hold promising potential for future developments in robotics, computer vision, and related fields.
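For concreteness, the following is a minimal sketch, in Python, of the standard naive angular-sweep baseline for computing the visibility polygon of a point inside a simple polygon. It is not the memory-constrained algorithm proposed in this paper (it runs in O(n^2) time and stores all boundary hits), and the function names and the epsilon-perturbation trick are our illustrative choices; it is included only to make the object being computed, the visibility region V, concrete.

    import math

    def ray_segment_intersection(px, py, dx, dy, ax, ay, bx, by):
        """Return the ray parameter t >= 0 where the ray (px,py) + t*(dx,dy)
        meets segment (a,b), or None if there is no intersection."""
        rx, ry = bx - ax, by - ay
        denom = dx * ry - dy * rx
        if abs(denom) < 1e-12:          # ray parallel to the segment
            return None
        t = ((ax - px) * ry - (ay - py) * rx) / denom
        u = ((ax - px) * dy - (ay - py) * dx) / denom
        if t >= 0 and 0 <= u <= 1:
            return t
        return None

    def visibility_polygon(p, polygon, eps=1e-5):
        """Naive O(n^2) visibility polygon of observer p inside a simple polygon.

        For every polygon vertex, cast three rays (slightly before, at, and
        slightly after its angle), keep the nearest boundary hit of each ray,
        and sort all hits by angle.  This is a textbook baseline, not the
        sub-linear-memory algorithm described in the paper.
        """
        px, py = p
        edges = list(zip(polygon, polygon[1:] + polygon[:1]))
        hits = []
        for vx, vy in polygon:
            base = math.atan2(vy - py, vx - px)
            for ang in (base - eps, base, base + eps):
                dx, dy = math.cos(ang), math.sin(ang)
                best = None
                for (ax, ay), (bx, by) in edges:
                    t = ray_segment_intersection(px, py, dx, dy, ax, ay, bx, by)
                    if t is not None and (best is None or t < best):
                        best = t
                if best is not None:
                    hits.append((ang, (px + best * dx, py + best * dy)))
        hits.sort(key=lambda h: h[0])
        return [pt for _, pt in hits]

    # Example: an observer at the centre of a unit square sees the whole square.
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(visibility_polygon((0.5, 0.5), square))

The baseline above stores O(n) intersection points before sorting; the contribution of this paper is precisely to avoid such storage while keeping the running time linear.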