Practical ability manifested through robust and reliable task performance, information relevance, and well-structured representation are key factors indicative of understanding in the philosophical literature. We explore these factors in the context of deep learning, identifying prominent patterns in how the resulting models represent information. While the estimation applications of modern neural networks do not qualify as the mental activity of persons, we argue that coupling analyses from philosophical accounts with the empirical and theoretical basis for identifying these factors in deep learning representations provides a framework for discussing and critically evaluating potential machine understanding, given the continually improving task performance that such algorithms enable.