In reinforcement learning (RL), precise observations are crucial for agents to learn an optimal policy from their environment. While Unity ML-Agents offers various sensor components that automatically collect observations, it does not support hexagon clusters, a common feature in strategy games owing to their advantageous geometric properties. Users who attempt to observe hexagon clusters with the existing sensors therefore encounter significant limitations. To address this issue, we propose a hexagon sensor and a layer-based conversion method that allow users to observe hexagon clusters with ease. By organizing hexagon cells into structured layers, our approach ensures efficient observation handling and spatial coherence. It also adapts flexibly to varying observation sizes, enabling diverse strategic map designs. Our evaluations demonstrate that the hexagon sensor, combined with the layer-based conversion method, learns up to 1.4 times faster and yields up to twice the reward of conventional sensors. Additionally, inference performance improves by up to 1.5 times, further validating the effectiveness of our approach.
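To make the idea of layer-based organization concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes the standard cube-coordinate representation of hexagonal grids, in which layer r around a center cell contains 6r cells and a cluster of radius R contains 1 + 3R(R+1) cells in total.

```python
# The six unit directions around a hex cell in cube coordinates (x + y + z = 0).
DIRECTIONS = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
              (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def hex_layer(radius):
    """Enumerate the cells of the layer at `radius` in ring order."""
    if radius == 0:
        return [(0, 0, 0)]
    # Start `radius` steps from the center, then walk around the ring,
    # taking `radius` steps along each of the six sides.
    x, y, z = (-radius, 0, radius)
    ring = []
    for side in range(6):
        dx, dy, dz = DIRECTIONS[side]
        for _ in range(radius):
            ring.append((x, y, z))
            x, y, z = x + dx, y + dy, z + dz
    return ring

def layered_cells(max_radius):
    """Flatten layers 0..max_radius into one ordered cell list,
    yielding the 1 + 3R(R+1) cells of a hexagon cluster."""
    cells = []
    for r in range(max_radius + 1):
        cells.extend(hex_layer(r))
    return cells

cells = layered_cells(2)
print(len(cells))  # 1 + 6 + 12 = 19 cells
```

Enumerating cells layer by layer in this way gives a fixed, spatially coherent ordering, so per-cell features can be concatenated into a flat observation vector whose length depends only on the chosen radius.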