Recent efforts in biomedical visual question answering (VQA) research rely on information combined from the image content and the surrounding text supporting the figure. Biomedical journals are a rich source of information for such multimodal content indexing. For multipanel figures in these journals, it is critical to develop automatic figure panel splitting and label recognition algorithms that associate individual panels with text metadata in the figure caption and the body of the article. Challenges in this task include large variations in panel layout and in label location, size, and contrast with the background. In this work, we propose a deep convolutional neural network that splits the panels and recognizes the panel labels in a single step. Visual features are extracted from several layers at various depths of the backbone neural network and organized to form a feature pyramid. These features are fed into classification and regression networks to generate candidates for panels and their labels. The candidates are merged through a beam search algorithm to create the final panel segmentation result. We evaluated the proposed algorithm on the ImageCLEF data set and achieved better performance than the results reported in the literature. To investigate the proposed algorithm more thoroughly, we also collected and annotated our own data set of 10,642 figures. Experiments in which the model was trained on 9,642 figures and evaluated on the remaining 1,000 show that panel splitting and panel label recognition mutually benefit each other.

1 | INTRODUCTION

Multipanel figures are figure images in journal articles that contain more than one subfigure. About 40-60% of the figures in the biomedical literature are multipanel (Garcia Seco de Herrera, Kalpathy-Cramer, Demner-Fushman, Antani, & Muller, 2013). In multipanel figures, panel labels help associate each panel with the biomedical text related to the figure, for example, the caption and mentions in the body text (You, Apostolova, Antani, Demner-Fushman, & Thoma, 2009, January). Precise image feature extraction and indexing for each panel are critical for many applications such as biomedical visual question answering (VQA), subfigure classification, and multimodal (image and text) search and retrieval (Apostolova et al., 2013; Chhatkuli, Markonis, Foncubierta-Rodriguez, Meriaudeau, & Muller, 2013, February). In these tasks, image features extracted from each panel need to be associated with the biomedical concepts in the surrounding text. In this

† Dr. George Thoma has retired, and this work was done during his tenure with the National Library of Medicine.
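To make the detection pipeline summarized in the abstract concrete, the following is a minimal sketch of a feature-pyramid detector with classification and regression heads applied at every pyramid level. It is written in PyTorch purely for illustration; the paper does not specify a framework, and the backbone choice (ResNet-50), channel widths, anchor count, and all names here (e.g., FeaturePyramidDetector, cls_head, reg_head) are assumptions, not the authors' exact architecture.

```python
# A minimal sketch (NOT the authors' exact network) of the pipeline the abstract
# describes: features from several backbone depths are merged into a feature
# pyramid, and classification/regression heads emit candidate boxes for both
# panels and panel labels. Backbone, widths, and anchor count are assumptions.
import torch
import torch.nn as nn
import torchvision

class FeaturePyramidDetector(nn.Module):
    def __init__(self, num_classes=2, num_anchors=9, channels=256):
        super().__init__()
        # Backbone exposing intermediate feature maps at several depths.
        backbone = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        in_channels = [256, 512, 1024, 2048]
        # 1x1 lateral convolutions project each stage to a common width.
        self.lateral = nn.ModuleList(nn.Conv2d(c, channels, 1)
                                     for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1)
                                    for _ in in_channels)
        # One head scores classes (panel vs. label vs. background), the other
        # regresses box offsets; both are applied at every pyramid level.
        self.cls_head = nn.Conv2d(channels, num_anchors * (num_classes + 1),
                                  3, padding=1)
        self.reg_head = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)

    def forward(self, x):
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # Top-down pathway: upsample deeper features, add lateral connections.
        pyramid = [self.lateral[-1](feats[-1])]
        for i in range(len(feats) - 2, -1, -1):
            up = nn.functional.interpolate(pyramid[0],
                                           size=feats[i].shape[-2:],
                                           mode="nearest")
            pyramid.insert(0, self.lateral[i](feats[i]) + up)
        pyramid = [s(p) for s, p in zip(self.smooth, pyramid)]
        # Per-level (class-score, box-offset) candidate maps.
        return [(self.cls_head(p), self.reg_head(p)) for p in pyramid]
```

A forward pass on a dummy image, for example `FeaturePyramidDetector()(torch.randn(1, 3, 512, 512))`, returns one (class-score, box-offset) pair per pyramid level, from which panel and label candidates would then be decoded.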
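The merging step can likewise be pictured as a beam search over joint assignments of labels to panels. The sketch below assumes each detected panel carries a scored list of candidate labels and that each label may be used at most once; the actual scoring function and constraints in the paper may differ, and `merge_candidates` is a hypothetical helper, not the authors' implementation.

```python
# A minimal sketch of beam-search merging: pick one label per panel so that the
# total candidate score is maximized and no label is reused. Illustrative only;
# the paper's exact scoring and constraints are not reproduced here.
from typing import List, Tuple

def merge_candidates(panel_hyps: List[List[Tuple[str, float]]],
                     beam_width: int = 5) -> List[str]:
    """panel_hyps[i] is a list of (label, score) hypotheses for panel i."""
    # Each beam entry: (total_score, chosen_labels, set_of_used_labels).
    beams = [(0.0, [], frozenset())]
    for hyps in panel_hyps:
        expanded = []
        for total, chosen, used in beams:
            for label, score in hyps:
                if label in used:  # enforce at most one panel per label
                    continue
                expanded.append((total + score, chosen + [label],
                                 used | {label}))
        expanded.sort(key=lambda b: b[0], reverse=True)
        # Keep the top partial assignments; fall back to the previous beams
        # if no consistent expansion exists for this panel.
        beams = expanded[:beam_width] or beams
    return beams[0][1]

# Example: three panels, each with noisy label candidates.
hyps = [[("A", 0.9), ("B", 0.2)],
        [("A", 0.6), ("B", 0.5)],
        [("C", 0.8), ("B", 0.3)]]
print(merge_candidates(hyps))  # -> ['A', 'B', 'C']
```

In the example, the greedy per-panel choice would assign "A" twice; the beam search instead keeps the globally consistent assignment ['A', 'B', 'C'], which is the kind of mutual constraint between panels and labels the abstract alludes to.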