Accurate segmentation of medical images is essential for the diagnosis and treatment of diseases. These problems are typically solved by highly complex models, such as deep networks (DNs), that require large amounts of labeled data for training. Moreover, many DNs have task- or imaging-modality-specific architectures with a decision-making process that is often hard to explain and interpret. Here, we propose a framework that embeds existing DNs into a low-dimensional subspace induced by a learnable explicit feature map (LEFM) layer. Compared to the existing DN, the framework adds one hyperparameter and only modestly increases the number of learnable parameters. The method is aimed at, but not limited to, segmentation of low-dimensional medical images, such as color histopathological images of stained frozen sections. Since features in the LEFM layer are polynomial functions of the original features, the proposed LEFM-Nets contribute to the interpretability of network decisions. In this work, we combined the LEFM with four known networks: DeepLabv3+, UNet, UNet++ and MA-net. The new LEFM-Nets were applied to the segmentation of adenocarcinoma of the colon in the liver from images of hematoxylin and eosin (H&E) stained frozen sections. LEFM-Nets were also tested on nuclei segmentation from images of H&E stained frozen sections of ten human organs. On the first problem, LEFM-Nets achieved statistically significant performance improvements over the original networks in terms of micro balanced accuracy and F1 score. Averaged over ten runs, LEFM-MA-net achieved a balanced accuracy of 89.36% ± 1.28%, compared to 88.02% ± 1.22% for MA-net; the corresponding F1 scores are 84.96% ± 1.14% and 82.75% ± 1.10%. On the second problem, LEFM-Nets achieved only marginally better performance than the original networks: LEFM-MA-net achieved a balanced accuracy of 89.41% ± 0.29%, compared to 89.30% ± 0.44% for the original MA-net, with F1 scores of 85.35% ± 0.25% and 85.12% ± 0.51%.
The source code is available at https://github.com/dsitnik/lefm.
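To illustrate the idea of an explicit polynomial feature map applied per pixel, the following is a minimal sketch, assuming a degree-2 map over RGB channel values. The function name, the fixed (non-learnable) weights, and the monomial ordering are illustrative assumptions, not the authors' implementation from the repository above.

```python
# Sketch of an explicit degree-2 polynomial feature map per pixel.
# In the actual LEFM layer this mapping is learnable; here the
# monomials are computed directly to show the induced feature space.
from itertools import combinations_with_replacement
import numpy as np

def poly_feature_map(x, degree=2):
    """Map a per-pixel feature vector x (e.g. RGB) to all monomials
    of total degree <= `degree`, including the constant term."""
    n = len(x)
    feats = [1.0]  # degree-0 (constant) term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n), d):
            # product of the selected channels, e.g. r*g for idx=(0, 1)
            feats.append(float(np.prod([x[i] for i in idx])))
    return np.array(feats)

pixel = np.array([0.5, 0.2, 0.1])   # one RGB pixel
mapped = poly_feature_map(pixel)    # 1, r, g, b, r^2, rg, rb, g^2, gb, b^2
print(mapped.shape)                 # -> (10,)
```

For a 3-channel input and degree 2, each pixel is lifted into a 10-dimensional space; because every output coordinate is an explicit monomial of the input channels, downstream network weights can be read against named features, which is the source of the interpretability claimed above.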