Image-text hashing has been widely applied in large-scale similarity search because of its efficiency in both search speed and storage. Most recent supervised hashing approaches learn a hash function either by constructing a pairwise similarity matrix or by directly learning hash codes (i.e., 1 or -1) from class labels. However, the former suffers from high training complexity and storage cost, while the latter ignores the semantic correlation of the original data; both shortcomings prevent the learning of discriminative hash codes. To this end, we propose a novel discrete hashing algorithm called supervised matrix factorization hashing with quantitative loss (SMFH-QL). SMFH-QL first generates hash codes from class labels, avoiding the construction of a pairwise similarity matrix; matrix factorization is then used to derive hash codes from the original image-text data, reducing both the dependence on class labels and the quantization error. Moreover, we introduce a quantitative loss term that learns hash codes by jointly exploiting class labels and the original data, facilitating a similarity-preserving hash function for image-text search. Extensive experiments on three representative datasets show that SMFH-QL outperforms several existing hashing methods.

INDEX TERMS Image-text search, supervised hashing, hash function, hash codes, quantitative loss function.
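The two-stage idea summarized above (factorize the fused data/label matrix into a real-valued latent factor, then minimize a quantization loss to reach binary codes) can be sketched as follows. This is a minimal illustration only: the function name `smfh_ql_sketch`, the SVD-based factorization, and the orthogonal-rotation quantization step are assumptions for exposition, not the paper's exact SMFH-QL optimization.

```python
import numpy as np

def smfh_ql_sketch(X, Y, k=8, alpha=1.0, n_iter=20):
    """Illustrative sketch (not the authors' exact formulation).

    X : (n, d) original image-text features.
    Y : (n, c) one-hot class-label matrix.
    k : hash-code length. Returns codes B in {-1, +1}^(n, k)
    and the final quantization loss ||B - V R||_F^2.
    """
    n = X.shape[0]
    Z = np.hstack([X, alpha * Y])      # fuse original data with class labels
    Z = Z - Z.mean(axis=0)             # center before factorization
    # Matrix-factorization step: top-k left singular vectors give the
    # real-valued latent factor V (scaled so entries are O(1)).
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    V = np.sqrt(n) * U[:, :k]
    R = np.eye(k)                      # orthogonal rotation, start at identity
    for _ in range(n_iter):
        B = np.sign(V @ R)             # quantize: binary codes in {-1, +1}
        B[B == 0] = 1
        # Procrustes step: orthogonal R minimizing the quantization loss.
        W, _, Vt = np.linalg.svd(V.T @ B)
        R = W @ Vt
    B = np.sign(V @ R)
    B[B == 0] = 1
    q_loss = np.linalg.norm(B - V @ R) ** 2
    return B, q_loss
```

Alternating the sign quantization with a Procrustes rotation is one standard way to shrink the gap between the continuous factor and its binary codes; the hedged point here is only that a quantization-loss term ties the factorization output to discrete codes.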