This paper applies Wang's recurrent neural network with a soft version of the 'Winner Takes All' (WTA) principle to solve the Traveling Salesman Problem (TSP). In the soft WTA principle, the winner neuron is updated at each iteration with a fraction of the value of each competing neuron. This work compares the soft and hard WTA variants on instances from TSPLIB (the Traveling Salesman Problem Library). The results show that the soft WTA yields equal or better results than the hard WTA in most of the problems tested.
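The distinction between the two update rules can be illustrated with a minimal sketch. This is not the paper's implementation of Wang's network; it only contrasts the two competition schemes, with the mixing fraction `alpha` as an assumed illustrative parameter:

```python
import numpy as np

def hard_wta(values):
    # Hard WTA: the winning neuron keeps its activation;
    # every competitor is suppressed to zero.
    out = np.zeros(len(values))
    w = int(np.argmax(values))
    out[w] = values[w]
    return out

def soft_wta(values, alpha=0.1):
    # Soft WTA (illustrative): the winner absorbs a fraction
    # alpha of each competitor's activation, and each competitor
    # is attenuated by the same fraction rather than zeroed.
    out = np.asarray(values, dtype=float).copy()
    w = int(np.argmax(out))
    for i in range(len(out)):
        if i != w:
            out[w] += alpha * out[i]
            out[i] *= 1.0 - alpha
    return out

print(hard_wta([1.0, 3.0, 2.0]))        # competitors zeroed
print(soft_wta([1.0, 3.0, 2.0], 0.1))   # competitors attenuated
```

Under hard WTA the competition resolves in a single step, whereas the soft rule lets non-winning neurons retain part of their activation across iterations, which is what allows the comparisons reported here to differ between the two schemes.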