In this paper, we perform neural cryptanalysis to comprehensively analyze five block ciphers: Data Encryption Standard (DES), Simplified DES (SDES), Advanced Encryption Standard (AES), Simplified AES (SAES), and SPECK. The vulnerability of the block ciphers is investigated under four types of attacks: Encryption Emulation (EE), Plaintext Recovery (PR), Key Recovery (KR), and Ciphertext Classification (CC) attacks. For the plaintexts, randomly generated block-sized bit arrays and texts are used. The block ciphers encrypt the block-sized bit arrays with different numbers of round functions, and the results are investigated in the EE, PR, and KR attacks using deep learning models trained on different amounts of data. Moreover, the block ciphers use two text encryption methods, Word-based Text Encryption (WTE) and Sentence-based Text Encryption (STE), to encrypt the texts under various modes of operation, and the results are analyzed with deep learning models in the EE, PR, and CC attacks. As a result, the block ciphers can be vulnerable to deep learning-based EE and PR attacks when a large amount of training data is available, and STE can improve the strength of the block ciphers, unlike WTE, which yields almost the same classification accuracy as the plaintexts, especially in the CC attack. Additionally, when keys identical to the plaintexts are used for encryption, the block ciphers can be perfectly broken by the KR attack. Moreover, especially in the KR attack, the RNN-based deep learning model achieves a higher average Bit Accuracy Probability (BAPavg) than the fully connected deep learning model, which has been used more often in previous work on neural cryptanalysis. Furthermore, although the transformer-based deep learning model is a state-of-the-art model in Natural Language Processing (NLP), the RNN-based deep learning model is more suitable for the CC attack and achieves higher classification accuracy.
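To make the evaluation metric concrete, the following is a minimal sketch of how BAPavg could be computed, assuming BAP for each bit position is defined as the fraction of test samples whose predicted bit matches the true bit, and BAPavg as the mean over all bit positions; the function names and the random stand-in data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bap_per_bit(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    """Bit Accuracy Probability (BAP) for each bit position.

    y_true, y_pred: 0/1 arrays of shape (num_samples, block_size).
    Returns an array of shape (block_size,) giving, for each bit position,
    the fraction of test samples whose predicted bit matches the true bit.
    """
    return (y_true == y_pred).mean(axis=0)

def bap_avg(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Average Bit Accuracy Probability (BAPavg): mean of the per-bit BAPs."""
    return float(bap_per_bit(y_true, y_pred).mean())

# Hypothetical usage in a KR attack: 10,000 recovered 64-bit keys vs. the true keys.
rng = np.random.default_rng(0)
true_keys = rng.integers(0, 2, size=(10_000, 64))
recovered = rng.integers(0, 2, size=(10_000, 64))  # stand-in for model output
print(bap_avg(true_keys, recovered))               # ~0.5 for random guessing
```

Under this definition, a BAPavg near 0.5 is what random guessing achieves, while values approaching 1.0 indicate that the attack recovers nearly every bit of the target (key, plaintext, or emulated ciphertext).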