Direct Visual Servoing (DVS) is a technique used in robotics and computer vision in which visual information, typically the brightness of camera pixels, is used directly to control the motion of a robot. DVS is known for achieving accurate positioning, owing to the redundancy of the visual information, without relying on geometric features. In this paper, we introduce a novel approach in which pixel brightness is replaced with learned feature maps as the visual information for the servoing loop. The aim of this paper is to present a procedure to extract, transform, and integrate deep neural network feature maps so that they can replace brightness in a DVS control loop.
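The servoing loop described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation) of the classical direct visual servoing control law v = -lambda * L^+ (s - s*), where the visual feature vector s is a flattened feature map rather than raw pixel brightness; the interaction matrix and feature values below are toy data.

```python
import numpy as np

def dvs_control_step(s, s_star, L, gain=0.5):
    """One control iteration: 6-DoF camera velocity from a visual error.

    s, s_star : current and desired visual features (flattened feature maps)
    L         : interaction matrix relating feature changes to camera motion
    """
    e = s - s_star                  # visual error
    L_pinv = np.linalg.pinv(L)      # Moore-Penrose pseudoinverse of L
    return -gain * L_pinv @ e       # camera velocity twist (vx, vy, vz, wx, wy, wz)

# Toy example with a random interaction matrix (assumed full column rank)
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 6))    # 64 feature values, 6 camera DoF
s_star = rng.standard_normal(64)    # desired features at the goal pose
s = s_star + L @ np.full(6, 0.01)   # features displaced by a small pose offset
v = dvs_control_step(s, s_star, L)
print(v.shape)  # (6,)
```

Because the toy error is generated by a small pose offset through L itself, the computed velocity is simply the gain-scaled negative of that offset; with real feature maps, the loop iterates until the visual error vanishes at the desired pose.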