Automated Essay Scoring (AES) systems are designed to expedite the assessment process, since human scoring is frequently slow and subject to inconsistencies and inaccuracies. This study therefore investigates the role of sentence tokenization in the performance of Indonesian Automated Essay Scoring, given that Natural Language Processing (NLP) techniques are required in AES to handle student responses that convey identical semantic meaning but vary in length. A distinct approach was adopted in which full answers were not vectorized directly; instead, they were split into sentences prior to vectorization. This method was considered potentially more effective because of the high probability of discrepancies in sentence order between reference and student responses. Sentence embeddings, which represent a sentence as a single vector, were utilized: pretrained SBERT-based sentence embeddings were employed to vectorize sentences from both reference answers and student responses, serving as semantic features for the Siamese Manhattan LSTM (MaLSTM) model. The MaLSTM model processes two inputs, evaluates their similarity using the Manhattan distance metric, and uses this similarity value as the predicted score. This score was subsequently compared to human scores using the Root Mean Square Error (RMSE) and Pearson Correlation. Interestingly, sentence embeddings without tokenization slightly outperformed those with sentence splitting, as evidenced by a 0.61% improvement in RMSE and a 0.01 increase in Pearson Correlation. These results indicate that sentence tokenization, as applied to the Indonesian Automated Essay Scoring dataset, does not have a notable impact on scoring performance. It may therefore be concluded that sentence tokenization is not a necessary step in the text-processing phase of AES for this dataset.
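To make the scoring pipeline described above concrete, the following is a minimal sketch of the two quantitative pieces the abstract names: the MaLSTM similarity, exp(-||h1 - h2||_1), computed from two encoder outputs, and the RMSE and Pearson Correlation used to compare predicted scores against human scores. The vectors and score lists here are illustrative placeholders, not the study's actual SBERT embeddings, trained LSTM states, or dataset values.

```python
import numpy as np

def malstm_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Siamese MaLSTM similarity: exp of the negative Manhattan (L1)
    distance between the two encoder outputs. Lies in (0, 1], with 1
    meaning the two representations are identical."""
    return float(np.exp(-np.sum(np.abs(h1 - h2))))

def rmse(pred: np.ndarray, gold: np.ndarray) -> float:
    """Root Mean Square Error between predicted and human scores."""
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    return float(np.sqrt(np.mean((pred - gold) ** 2)))

def pearson(pred: np.ndarray, gold: np.ndarray) -> float:
    """Pearson Correlation between predicted and human scores."""
    return float(np.corrcoef(np.asarray(pred, float),
                             np.asarray(gold, float))[0, 1])

# Placeholder vectors standing in for the encoder outputs of a
# reference answer and a student answer (hypothetical values).
ref_vec = np.array([0.2, 0.5, -0.1])
stu_vec = np.array([0.2, 0.4, -0.1])
sim = malstm_similarity(ref_vec, stu_vec)  # near 1 for similar answers
```

In the study's setting, the placeholder vectors would be replaced by representations derived from the pretrained SBERT embeddings of the reference and student answers, and the similarity values would be evaluated against human scores with `rmse` and `pearson`.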