Coronavirus disease (COVID-19) has caused millions of deaths worldwide. One inexpensive and noninvasive screening method for COVID-19 is the analysis of chest X-ray (CXR) images for pathological features in the lungs. These features are difficult for humans to detect, but convolutional neural networks (CNNs) have proven effective at extracting them. This paper applies transfer learning with four ImageNet-pretrained CNNs (VGG16, DenseNet201, ResNet50, and EfficientNetB3) to COVID-19 detection in CXR images, using a dataset containing COVID-19, healthy, and viral pneumonia CXR images. We compare the performance of the retrained CNNs using standard measures and investigate the features they use for their predictions using local interpretable model-agnostic explanations (LIME). The networks are retrained on two classification tasks: Task 1 distinguishes healthy from COVID-19 CXR images, and Task 2 distinguishes viral pneumonia from COVID-19 CXR images. We find that DenseNet201 and VGG16 achieve higher accuracies than ResNet50 and EfficientNetB3 on both tasks. However, the LIME explanations reveal that VGG16 does not learn disease-relevant features in the lungs, whereas DenseNet201, ResNet50, and EfficientNetB3 base their predictions on regions within the lungs. This observation is reinforced by comparing LIME explanations with ground-truth lung regions on an unseen dataset. The prospect of using "black box" deep neural networks for automatic screening of CXRs for COVID-19 can be improved by LIME-enabled investigations of model behavior.