This article carries out a comparative study of zero-finding neural networks for nonlinear functions. By taking the view that artificial recurrent neural nets are dynamical systems described by ordinary differential equations (ODEs), new neural nets were recently derived in the literature using a unified control Liapunov function (CLF) approach, after interpreting the zero-finding problem as a regulation problem for a closed-loop continuous-time dynamical system. The resulting neural net, or continuous-time ODE, is discretized by Euler's method, and the discretization step size is interpreted as a control, chosen to optimize the decrement of the chosen CLF along system trajectories. Given the viewpoint adopted in this article, the terms dynamical system, ODE, and neural net are used interchangeably. For standard test functions of two variables, the basins of attraction are found by numerical simulation, starting from a uniformly distributed grid of initial points. For the chosen test functions, analysis of the basins shows a correlation between the regularity of the basin boundaries and the predictability of convergence to a zero. In addition, this analysis suggests how to construct a team algorithm with favorable convergence properties.
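
To make the discretization and step-size choice concrete, the following is a minimal sketch, not the formulation derived in the article: a Newton-flow ODE for zero finding is discretized by Euler's method, and at each step the step size (the control) is picked from a small candidate set so as to give the largest decrement of the candidate CLF V(x) = ||f(x)||^2 / 2; a uniform grid of initial points is then used to tabulate which zero each trajectory reaches. The test function, grid range, and candidate step sizes are illustrative assumptions, not those used in the article.

```python
# Hedged sketch: Euler discretization of a Newton-flow ODE  x' = -J(x)^{-1} f(x)
# with the step size chosen to maximize the decrement of V(x) = 0.5 ||f(x)||^2.
# The test function f, grid range, and step-size candidates are assumptions.
import numpy as np

def f(x):
    # Assumed test system of two variables: a circle intersected with a cubic curve.
    return np.array([x[0]**2 + x[1]**2 - 1.0,
                     x[1] - x[0]**3])

def jacobian(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [-3.0 * x[0]**2, 1.0]])

def clf(x):
    # Candidate control Liapunov function V(x) = 0.5 ||f(x)||^2.
    r = f(x)
    return 0.5 * r @ r

def zero_find(x0, steps=200, tol=1e-10):
    """Euler-discretized Newton flow; step size acts as a control decreasing the CLF."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x, True
        try:
            d = np.linalg.solve(jacobian(x), -r)   # Newton direction
        except np.linalg.LinAlgError:
            return x, False                        # singular Jacobian: stop
        # Pick the step size from a small candidate set, keeping the one
        # that gives the largest decrement in V along the trajectory.
        h_best, v_best = None, clf(x)
        for h in (1.0, 0.5, 0.25, 0.1):
            v_trial = clf(x + h * d)
            if v_trial < v_best:
                h_best, v_best = h, v_trial
        if h_best is None:
            return x, False                        # no decrease found
        x = x + h_best * d
    return x, np.linalg.norm(f(x)) < tol

if __name__ == "__main__":
    # Basin-of-attraction style experiment: start from a uniform grid of initial
    # points and count how many of them converge to each zero.
    grid = np.linspace(-2.0, 2.0, 21)
    zeros_found = {}
    for a in grid:
        for b in grid:
            x_star, ok = zero_find([a, b])
            if ok:
                key = tuple(np.round(x_star, 6))
                zeros_found[key] = zeros_found.get(key, 0) + 1
    for z, count in zeros_found.items():
        print(f"zero {z}: reached from {count} initial points")
```

A basin plot of the kind analyzed in the article would color each grid point by the zero its trajectory reaches; the dictionary above records only the counts per zero, which is the minimal version of that experiment.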