“…The model abstracts the problem, providing a framework for either developing a brand-new algorithm from scratch or, more commonly, reusing and fine-tuning a well-known algorithm to rationalize the search for potential solutions. All the research efforts identified can be situated within this latter line of work, essentially exploring and exploiting general problem-solving techniques. (i) Greedy algorithms [71, 78, 80, 81, 82, 85, 89] iteratively build toward the sought global optimum by picking the best partial solution at each step, for instance, the partitions with the least computational complexity [71] or with the smallest total prediction time [78], or the devices that achieve the best latency with the maximum residual computation [82] or produce the smallest increase in the maximum task completion time [81]. (ii) Exhaustive search algorithms [72, 75, 88] sequentially evaluate each candidate solution in order to eventually obtain the final global solution. (iii) Heuristic algorithms [76, 79] lay down rules that allow a more efficient exploration of the search space, e.g., grouping partitions with the same latency [76] or pruning the DNN’s computationally light nodes [79], yielding a faster solution at the cost of sacrificing optimality or accuracy. (iv) Classic optimization methods [77, 86, 89], such as dynamic programming [77] and linear programming [86, 89], leverage mathematical models to solve the problem directly, deriving solutions by maximizing or minimizing, depending on the case, the objective function of interest while satisfying the set of constraints considered; dynamic programming, in particular, further simplifies the decision making by breaking it down into a sequence of decision steps over time.…”
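To make the greedy strategy under (i) concrete, the sketch below assigns the layers of a chain DNN one at a time to either the mobile device or the edge server, always taking the locally cheapest option. It is only an illustration of the general technique, not code from any of the surveyed works: the `Layer` fields, the per-kB link cost, and the timing figures in the usage example are hypothetical placeholders that would be obtained by profiling in practice.

```python
# Minimal, illustrative sketch of a greedy layer-by-layer partitioning of a
# chain DNN between a mobile device and an edge server. All timing and size
# figures are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Layer:
    name: str
    device_ms: float   # estimated execution time on the device
    edge_ms: float     # estimated execution time on the edge server
    output_kb: float   # size of the activation passed to the next layer

LINK_MS_PER_KB = 0.8   # assumed per-kB transfer cost over the wireless link

def greedy_partition(layers: List[Layer]) -> Tuple[List[str], float]:
    """Pick, for each layer, the placement that adds the least latency to
    the partial schedule built so far (the greedy choice)."""
    placement: List[str] = []
    total_ms = 0.0
    location = "device"      # execution starts on the device
    prev_out_kb = 0.0        # input transfer cost ignored for brevity
    for layer in layers:
        # Cost of running this layer on the device / on the edge, paying a
        # transfer penalty whenever the activation must cross the link.
        device_cost = layer.device_ms + (prev_out_kb * LINK_MS_PER_KB
                                         if location == "edge" else 0.0)
        edge_cost = layer.edge_ms + (prev_out_kb * LINK_MS_PER_KB
                                     if location == "device" else 0.0)
        if device_cost <= edge_cost:
            location, step_cost = "device", device_cost
        else:
            location, step_cost = "edge", edge_cost
        placement.append(location)
        total_ms += step_cost
        prev_out_kb = layer.output_kb
    return placement, total_ms

if __name__ == "__main__":
    net = [Layer("conv1", 12.0, 3.0, 800.0),
           Layer("conv2", 20.0, 4.0, 400.0),
           Layer("fc",     5.0, 1.0,   4.0)]
    print(greedy_partition(net))
```

Because each placement decision is made in isolation, the result may miss the globally optimal split that an exhaustive sweep over all partition points, as in (ii), would find; this is precisely the optimality-versus-efficiency trade-off the surveyed works navigate.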
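Under the same hypothetical setting, the dynamic-programming route mentioned under (iv) can be sketched as follows: each layer is a `(device_ms, edge_ms, output_kb)` tuple, and a running table keeps, for each possible location of the current activation, the minimum latency achievable so far, so the placement problem is broken into a sequence of per-layer decision steps rather than a single greedy pick. Again, this is a sketch under assumed costs, not an implementation from the cited papers.

```python
# Illustrative dynamic-programming counterpart to the greedy sketch above.
# best[loc] holds the minimum latency to finish the current layer with its
# activation residing at `loc`; all cost figures are hypothetical.
def dp_partition(layers, link_ms_per_kb=0.8):
    best = {"device": 0.0, "edge": float("inf")}  # input starts on the device
    prev_out_kb, trace_per_layer = 0.0, []
    for device_ms, edge_ms, output_kb in layers:
        nxt, trace = {}, {}
        for loc, exec_ms in (("device", device_ms), ("edge", edge_ms)):
            # Reach `loc` either from the same side or by paying the link
            # cost to move the previous layer's activation across.
            costs = {prev: best[prev] + exec_ms
                     + (0.0 if prev == loc else prev_out_kb * link_ms_per_kb)
                     for prev in best}
            origin = min(costs, key=costs.get)
            nxt[loc], trace[loc] = costs[origin], origin
        best, prev_out_kb = nxt, output_kb
        trace_per_layer.append(trace)
    # Backtrack from the cheaper final location to recover the placement.
    loc = min(best, key=best.get)
    placement = []
    for trace in reversed(trace_per_layer):
        placement.append(loc)
        loc = trace[loc]
    return list(reversed(placement)), min(best.values())

if __name__ == "__main__":
    net = [(12.0, 3.0, 800.0), (20.0, 4.0, 400.0), (5.0, 1.0, 4.0)]
    print(dp_partition(net))
```

Unlike the greedy pass, this formulation considers, at every layer, the best way to have arrived at either location, so for this simple chain model it recovers the minimum-latency placement while still running in time linear in the number of layers.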