This research evaluates the performance of end-to-end neural coreference resolution models in English and Indonesian, with particular attention to the model by Lee et al., recognized for requiring minimal preprocessing. The model operates directly on raw text to detect and link mentions within a document, which makes it adaptable to different languages. The English model achieved F1 scores of 67.94% and 67.14% on the OntoNotes 5.0 development and training sets, respectively. The Indonesian model, trained on a dataset prepared in the CoNLL-2012 format, attained an F1 score of 68.88% on a 25% segment of the Book of Mark (Markus). We also analyzed the additional features integrated into the model to assess their contributions to performance. The findings indicate that while the English model generalizes across a variety of coreference challenges, the Indonesian model's performance is more domain-specific, proving particularly effective within the Book of Mark.