This paper explores two techniques to improve the performance of text-dependent speaker verification systems based on deep neural networks. First, we propose a general alignment mechanism that preserves the temporal structure of each phrase and produces a supervector encoding both the speaker and the phrase, since both are relevant for text-dependent verification. As we show, different alignment techniques can replace global average pooling and provide significant gains in performance. Second, we present a novel back-end approach that trains a neural network for detection tasks by optimizing the Area Under the Curve (AUC), as an alternative to the usual triplet loss function, so the system is end-to-end with a cost function close to the desired measure of performance. As the experimental section shows, this approach improves system performance: our network, trained with an approximation of the AUC (aAUC), learns to discriminate between pairs of examples from the same identity and pairs from different identities. The different alignment techniques for producing supervectors, together with the new back-end approach, were tested on the RSR2015-Part I database for text-dependent speaker verification, providing competitive results compared to networks of similar size that use global average pooling to extract supervectors with a simple back-end or triplet loss training.

The performance of many speaker verification tasks has improved thanks to deep learning advances in signal representations [1, 2] and optimization metrics [3, 4, 5] adapted from state-of-the-art deep learning face verification systems. In this paper, we propose alternatives to the two following aspects. First, the generation of signal representations, or embeddings, that preserve the subject identity and the uttered phrase.
Second, a metric for training the system which, as we will show, is more appropriate for a detection task.

In many recent verification systems, deep neural networks (DNNs) are trained for multi-class classification. A common approach is to apply a global average reduction mechanism [1, 2, 6], which produces a vector representing the whole utterance, called an embedding. For verification, a simple back-end such as a similarity metric can be applied directly to these embeddings [7], or more sophisticated methods can be used, like the one presented in [8]. However, this approach does not work well in text-dependent tasks, since the uttered phrase is a relevant piece of information: to be correct, the system has to detect a match in both the speaker and the phrase [6, 9]. In our previous work [10], we noted that part of these errors may derive from using the average as the representation of the utterance, and showed that this problem can be addressed by adding a new internal layer to the deep neural network architecture that uses an alignment method to encode the temporal structure of the phrase into a supervector. In this paper, we propose a generalization ...
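To illustrate the contrast between global average pooling and an alignment-based supervector, the following sketch pools frame-level features in two ways. It is a minimal illustration, not the paper's learned internal layer: the function names and the hard frame-to-state alignment are hypothetical stand-ins for whatever alignment mechanism (e.g. HMM states or phone classes) is actually used.

```python
import numpy as np

def global_average_pooling(frames):
    """Collapse a (T, D) sequence of frame features into one D-dim
    embedding. All temporal structure of the phrase is lost."""
    return frames.mean(axis=0)

def alignment_supervector(frames, alignment, n_states):
    """Hypothetical alignment pooling: average the frames assigned to
    each of `n_states` temporal states and concatenate the per-state
    means into an (n_states * D)-dim supervector, preserving the
    temporal structure of the uttered phrase."""
    T, D = frames.shape
    sv = np.zeros((n_states, D))
    for s in range(n_states):
        mask = alignment == s
        if mask.any():
            sv[s] = frames[mask].mean(axis=0)
    return sv.reshape(-1)

# Toy usage: 6 frames of 4-dim features aligned to 3 states.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
ali = np.array([0, 0, 1, 1, 2, 2])
emb = global_average_pooling(x)        # shape (4,)
sv = alignment_supervector(x, ali, 3)  # shape (12,)
```

Note that when every state receives the same number of frames, averaging the per-state chunks of the supervector recovers the global average embedding; the supervector simply keeps the temporal detail that the average discards.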
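The idea behind an AUC-based objective can be sketched as follows. The true AUC is the fraction of (target, non-target) score pairs that are correctly ordered; since the indicator function is not differentiable, a common approximation replaces it with a sigmoid of the score difference. This is only a hedged sketch of such an aAUC objective under that assumption; the smoothing parameter `delta` is hypothetical and not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def approx_auc_loss(target_scores, nontarget_scores, delta=1.0):
    """Differentiable AUC approximation (aAUC) sketch: for every
    (target, non-target) score pair, a sigmoid of the score difference
    approximates the indicator that the pair is correctly ordered.
    Minimizing 1 - aAUC pushes target scores above non-target scores."""
    diffs = target_scores[:, None] - nontarget_scores[None, :]
    aauc = sigmoid(diffs / delta).mean()
    return 1.0 - aauc

# Usage: well-separated scores give a loss near 0, inverted scores near 1.
t = np.array([5.0, 6.0])
n = np.array([-5.0, -6.0])
loss_good = approx_auc_loss(t, n)
loss_bad = approx_auc_loss(n, t)
```

Unlike a triplet loss, which only constrains relative distances within each sampled triplet, this objective directly targets the ranking of all target scores over all non-target scores, which is closer to the detection measure the system is evaluated on.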