Continuous data-flow applications in the IoT integrate the functions of fog, edge, and cloud computing; a typical example is the E-Health system. As with other IoT applications, optimizing the energy consumption of IoT devices in continuous data-flow applications is a challenging problem. Since anomalous nodes in the network increase energy consumption, continuous data flows should bypass these nodes as much as possible. Existing research on the performance of continuous data flows typically optimizes system architecture design and deployment. In this paper, a mathematical programming method is proposed for the first time to optimize the runtime performance of continuous data-flow applications. A lightweight anomaly detection method is proposed to evaluate the reliability of nodes. The node reliability is then fed into the optimization algorithm to estimate task latency. Latency-aware energy consumption optimization for continuous data flows is modeled as a mixed-integer nonlinear programming problem, and a block coordinate descent-based max-flow algorithm is proposed to solve it. Numerical simulations based on real-life datasets show that the proposed strategy outperforms the benchmark strategy.

aggregation site, etc. [5]. On the other hand, the fog computing nodes are not necessarily deployed around these access points. To distinguish the two concepts, in this paper, nodes with sufficient communication and computing resources scattered among IoT devices are referred to as fog nodes, while the nano-server clusters deployed at the access points are referred to as MEC nodes, which usually have more resources than fog nodes.
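To make the reliability-evaluation idea concrete, the following is a minimal illustrative sketch, not the paper's actual detection method: a node's reliability is scored from how far its current latency deviates from its own history (a running z-score), so that anomalous nodes receive low scores that a routing algorithm can then penalize. The function name `reliability` and the mapping `1 / (1 + z)` are assumptions chosen for illustration.

```python
# Hypothetical sketch of a lightweight node-reliability score.
# A node whose current latency deviates strongly from its own
# history is treated as anomalous and scored low; this is NOT
# the detection method proposed in the paper, only an analogy.
from statistics import mean, stdev

def reliability(latencies, current):
    """Map a node's current latency to a reliability score in (0, 1]."""
    mu = mean(latencies)
    sigma = stdev(latencies) or 1e-9  # guard against zero variance
    z = abs(current - mu) / sigma     # deviation from the node's own history
    return 1.0 / (1.0 + z)           # z = 0 -> 1.0 (fully reliable)

history = [10.0, 11.0, 9.5, 10.5, 10.0]
print(reliability(history, 10.2))  # close to its history -> near 1.0
print(reliability(history, 40.0))  # strong outlier -> near 0.0
```

Scores like these could then weight the flow network so that the max-flow computation steers continuous data flows around low-reliability nodes.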
The collaborative computing of the IoT end nodes, the fog nodes, the MEC nodes, and the data center is referred to as IoT-Fog-Edge computing in this context. Studies on the collaboration of fog, MEC, and cloud computing usually focus on finding an optimal solution for allocating IoT tasks to appropriate virtual machines hosted on fog, MEC, or cloud nodes. In [10], the authors propose allocating the workload among local MEC servers, neighboring MEC servers, or cloud servers to minimize the energy consumption of MEC nodes subject to delay constraints; a Lyapunov drift-plus-penalty-based dynamic queue evaluation is used for the online allocation algorithm. In [11], an optimal algorithm is put forward to determine whether tasks should be allocated to clouds near the end devices or to a cloud far from them for energy-efficient big data processing, taking into account the delay constraints on tasks in both the near clouds and the far cloud. In [12], an optimal algorithm for joint task allocation among mobile devices, the computing access point, and the remote cloud is proposed, where the computing access point can be treated as an MEC node. The studies introduced above are not correlated with the continuous data-fl...
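The common structure of the allocation problems surveyed above can be illustrated with a minimal sketch, assuming a toy per-tier energy/latency model: each task is placed on the cheapest tier (fog, MEC, or cloud) that still meets its delay constraint. The tier names, cost figures, and the function `allocate` are illustrative assumptions, not taken from any of the cited works.

```python
# Hypothetical sketch of delay-constrained, energy-minimizing task
# allocation across computing tiers. The (energy, latency) figures
# per unit of task size are illustrative only.
TIERS = {
    # tier: (energy per task unit, latency per task unit)
    "fog":   (1.0, 5.0),   # cheap but slow
    "mec":   (2.0, 2.0),   # moderate cost, fast
    "cloud": (4.0, 8.0),   # costly and distant
}

def allocate(task_size, max_delay):
    """Return the lowest-energy tier meeting the deadline, or None."""
    feasible = [(energy * task_size, tier)
                for tier, (energy, latency) in TIERS.items()
                if latency * task_size <= max_delay]
    return min(feasible)[1] if feasible else None

print(allocate(1.0, 6.0))  # relaxed deadline: fog is cheapest and feasible
print(allocate(1.0, 3.0))  # tight deadline: only MEC is feasible
print(allocate(1.0, 1.0))  # infeasible deadline: no tier qualifies
```

The cited works replace this greedy per-task choice with joint optimization (e.g., Lyapunov drift-plus-penalty queues in [10]), but the trade-off being searched is the same: energy cost versus a per-task delay constraint.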