Implementations of computer vision algorithms, especially for real-time applications, are present in many of the devices we use today (from smartphones and automotive applications to monitoring/security applications) and pose specific challenges, most notably memory bandwidth and energy consumption (e.g., for mobility). This paper aims to improve the overall quality of real-time object detection computer vision algorithms through a hybrid hardware–software implementation. To this end, we explore methods for properly allocating algorithm components to hardware (as IP cores) and for interfacing between hardware and software. Under specific design constraints, the relationship between these components allows embedded artificial intelligence to select the operating hardware blocks (IP cores) in the configuration phase and to dynamically change the parameters of the aggregated hardware resources in the instantiation phase, similar to the concretization of a class into a software object. The conclusions show the benefits of hybrid hardware–software implementations, as well as major gains from using IP cores managed by artificial intelligence, for an object detection use case implemented on an FPGA demonstrator built around a Xilinx Zynq-7000 SoC Mini-ITX sub-system.
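The class-to-object analogy described above can be illustrated with a minimal sketch. This is not the paper's actual API; the class names, core names, and parameters below are hypothetical, and the "AI" selection step is replaced by a simple stand-in function. It only shows the two phases: configuration (selecting which IP cores are active) and instantiation (concretizing a selected core with run-time parameters).

```python
# Hypothetical sketch of the configuration/instantiation phases for IP cores.
# All names and parameters here are illustrative, not from the paper.

class IPCore:
    """Abstract description of a hardware block available in the FPGA fabric."""
    def __init__(self, name, params):
        self.name = name
        self.params = dict(params)  # default (design-time) parameters

    def instantiate(self, **overrides):
        # Instantiation phase: concretize the core with run-time parameters,
        # analogous to creating an object from a class.
        inst = dict(self.params)
        inst.update(overrides)
        return {"core": self.name, "params": inst}

def configure(catalog, required):
    # Configuration phase: a stand-in for the embedded AI that selects
    # which IP cores to operate, based on application requirements.
    return [core for core in catalog if core.name in required]

# Illustrative catalog of available hardware blocks.
catalog = [
    IPCore("edge_detector", {"threshold": 128}),
    IPCore("scaler", {"factor": 2}),
    IPCore("classifier", {"window": 32}),
]

selected = configure(catalog, {"edge_detector", "classifier"})
instances = [
    c.instantiate(threshold=90) if c.name == "edge_detector" else c.instantiate()
    for c in selected
]
```

In this sketch, `selected` holds the cores chosen in the configuration phase, while `instances` holds their parameterized concretizations; dynamically re-parameterizing a running core would correspond to calling `instantiate` again with new overrides.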