This paper introduces a novel approach to lucid dream communication that leverages electrooculographic signals for real-time, natural language messaging within dreams. Going beyond traditional horizontal eye movement techniques, our methodology incorporates vertical eye movements, enhancing the expressive range of communication. Recognizing the potential for lucid dream communication in diverse settings, including home and laboratory environments with varying conditions, we explore two distinct approaches.

Methodologically, we employ a grid system that enables dreamers to encode messages into eye gestures. We develop two Deep Neural Network models for classifying and decoding eye movement data. One model integrates a dynamic threshold algorithm with a convolutional neural network for gesture classification, while the other combines a static threshold algorithm with rate-of-change analysis for classification, followed by transformer-based decoding of grid-encoded natural language sentences.

Tested in two pilot studies with awake participants, our setups exhibit promising results. In Pilot Study 1, we attained an average accuracy of 77.6% in classifying eye gestures across subjects. Pilot Study 2 demonstrates an average accuracy of 97.6% in classifying elementary gestures, sustaining a 90.3% average accuracy post-decoding and successfully recovering the original English language message in most cases. Our method exhibits an average eye typing speed of 19.7 characters per minute.

Our results emphasize the practicality of real-time natural language dream communication, demonstrating its applicability in both laboratory and home settings. If successfully replicated during lucid dreaming states, this pioneering method holds the potential to pave the way for novel research on dreams, memory, and consciousness.
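The grid-based encoding described above can be illustrated with a minimal sketch. The paper does not specify the exact grid layout or gesture alphabet here, so the 6x5 character grid and the (row, column) gesture encoding below are purely hypothetical assumptions for illustration:

```python
import string

# Hypothetical 6x5 grid layout; the actual grid used in the study may differ.
GRID = [list(string.ascii_uppercase[i:i + 5]) for i in range(0, 25, 5)]
GRID.append(["Z", " ", ".", ",", "?"])  # assumed punctuation row

def encode(message):
    """Encode each character as a (row, col) pair of gesture counts,
    e.g. `row` vertical eye movements followed by `col` horizontal ones."""
    gestures = []
    for ch in message.upper():
        for r, row in enumerate(GRID):
            if ch in row:
                gestures.append((r, row.index(ch)))
                break
    return gestures

def decode(gestures):
    """Recover the message from a sequence of (row, col) gesture pairs."""
    return "".join(GRID[r][c] for r, c in gestures)
```

In this sketch, round-tripping a message through `encode` and `decode` recovers it exactly; in the actual system, classification errors in the gesture stream are what the transformer-based decoder must correct.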