Computer simulation methods for models based on partial differential equations usually apply adaptive strategies that generate sequences of approximations on successively refined meshes. In this process, error indicators play a crucial role because a new (refined) mesh is created by analyzing the approximate solution computed on the previous (coarser) mesh. Different error indicators exploit various analytical and heuristic arguments. The main goal of this paper is to show that effective indicators of approximation errors can be created by machine learning methods and represented by relatively simple networks. We use a supervised learning approach in which sequences of training examples are constructed by means of previously developed tools of a posteriori error analysis known as "functional type error majorants". An important property of error majorants is their insensitivity to the specific features of approximations, which allows us to generate arbitrarily long series of diverse training examples without restrictions on the type of approximate solutions. These new (network) error indicators are compared with known indicators. The results show that, after a proper machine learning procedure, we obtain a network whose quality of error indication matches (or even exceeds) that of the most efficient indicators used in classical computer simulation methods. The final trained network is approximately as effective as the gradient averaging error indicator, but has an important advantage in that it remains valid for a much wider class of approximate solutions.
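
For orientation, a functional majorant of the kind used here to label training examples can be illustrated on the Poisson model problem $-\Delta u = f$ in $\Omega$, $u = 0$ on $\partial\Omega$; this particular problem and the notation below are chosen only as an illustration and are not fixed by the abstract. For any conforming approximation $v \in H_0^1(\Omega)$ and any flux reconstruction $y \in H(\operatorname{div},\Omega)$,
\[
\|\nabla(u - v)\|_{L^2(\Omega)} \;\le\; \|y - \nabla v\|_{L^2(\Omega)} \;+\; C_F \,\|\operatorname{div} y + f\|_{L^2(\Omega)},
\]
where $C_F$ denotes the Friedrichs constant of $\Omega$. The right-hand side is fully computable for any admissible $v$, regardless of how $v$ was obtained, which is the insensitivity property that allows such majorants to supply error labels for arbitrarily generated approximate solutions in a training set.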