Some texts that are difficult to classify in isolation become easier to interpret when considered within a neighborhood of related texts with similar contexts. Motivated by this intuition, a novel deep text sentiment classification (DTSC) model is proposed that improves classification performance by incorporating neighborhoods of related texts. Our framework uses a nonparametric approach to construct neighborhoods of related texts based on Jaccard similarity. A new deep recurrent neural network architecture is then proposed, comprising two distinct modules: a bidirectional long short-term memory (Bi-LSTM) network and a gated recurrent unit (GRU). The proposed model aims to effectively capture informative features from the input text and its neighbors. The output of each module is passed through a maximum operation, which retains the most salient features. Finally, the extracted features are concatenated and fed to a classification layer to produce the sentiment prediction. Previous studies have commonly relied on parametric representations of textual metadata; in contrast, our nonparametric approach enables the model to perform strongly even when the text vocabulary differs between training and testing. The proposed DTSC model has been evaluated on five real-world sentiment datasets, achieving 99.60% accuracy on the Binary_Getty (BG) dataset, 98.32% on the Binary_iStock (BIS) dataset, 96.13% on Twitter, 82.19% on the multi-view sentiment analysis (MVSA) dataset, and 87.60% on the IMDB dataset. These findings demonstrate that the proposed model outperforms established baseline techniques on standard evaluation criteria for text sentiment classification.
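To make the described pipeline concrete, the sketch below illustrates one possible reading of it in PyTorch: a nonparametric Jaccard-based neighborhood retrieval step, followed by Bi-LSTM and GRU branches whose outputs are max-pooled, concatenated, and classified. The layer sizes, the single-neighbor fusion, and helper names such as `build_neighborhood` are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of the DTSC pipeline, under assumed layer sizes and a
# single pooled neighbor sequence; not the paper's reference implementation.
import torch
import torch.nn as nn


def jaccard_similarity(a: set, b: set) -> float:
    """Nonparametric similarity between two token sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def build_neighborhood(query_tokens, corpus_tokens, k=3):
    """Return indices of the k corpus texts most similar to the query (assumed helper)."""
    scores = [jaccard_similarity(set(query_tokens), set(t)) for t in corpus_tokens]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]


class DTSC(nn.Module):
    """Bi-LSTM and GRU branches, max-pooled over time, concatenated, then classified."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # 2*hidden_dim (Bi-LSTM) + hidden_dim (GRU), for the text and its neighborhood.
        self.classifier = nn.Linear(2 * (2 * hidden_dim + hidden_dim), num_classes)

    def encode(self, token_ids):
        x = self.embed(token_ids)                   # (batch, seq, embed_dim)
        lstm_out, _ = self.bilstm(x)                # (batch, seq, 2*hidden_dim)
        gru_out, _ = self.gru(x)                    # (batch, seq, hidden_dim)
        # Max over the time dimension keeps the most salient feature per channel.
        lstm_feat = lstm_out.max(dim=1).values
        gru_feat = gru_out.max(dim=1).values
        return torch.cat([lstm_feat, gru_feat], dim=-1)

    def forward(self, text_ids, neighbor_ids):
        # Encode the input text and its neighbor sequence, then concatenate
        # the two feature vectors before classification.
        fused = torch.cat([self.encode(text_ids), self.encode(neighbor_ids)], dim=-1)
        return self.classifier(fused)


# Usage: a batch of 2 texts (length 10) with one neighbor sequence each.
model = DTSC(vocab_size=5000)
text = torch.randint(1, 5000, (2, 10))
neighbor = torch.randint(1, 5000, (2, 10))
logits = model(text, neighbor)                      # (2, num_classes)
```

In this reading, the neighborhood step is entirely nonparametric (set operations over tokens), which is what allows the retrieval to keep working when the test-time vocabulary differs from training; only the recurrent encoders and classifier carry learned parameters.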