In this paper we evaluate distributed node coloring algorithms for wireless networks using the network simulator Sinalgo [1]. All considered algorithms operate in the realistic signal-to-interference-and-noise-ratio (SINR) model of interference. We evaluate two recent coloring algorithms, Rand4DColor and ColorReduction (in the following ColorRed), proposed by Fuchs and Prutkin in [2], the MW-Coloring algorithm introduced by Moscibroda and Wattenhofer [3] and transferred to the SINR model by Derbel and Talbi [4], and a variant of the coloring algorithm of Yu et al. [5]. We additionally consider several practical improvements to the algorithms and evaluate their performance in both static and dynamic scenarios.

Our experiments show that Rand4DColor is very fast, computing a valid (4∆)-coloring in less than one third of the time slots required for local broadcasting, where ∆ is the maximum node degree in the network. Compared to other O(∆)-coloring algorithms, Rand4DColor is at least 4 to 5 times faster. Additionally, the algorithm is robust even in networks with mobile nodes, and an additional listening phase at the start of the algorithm makes Rand4DColor robust against the late wake-up of large parts of the network.

Regarding (∆+1)-coloring algorithms, we observe that ColorRed is significantly faster than the considered variant of the Yu et al. coloring algorithm, which is the only other (∆+1)-coloring algorithm for the SINR model. Further improvement can be made with an error-correcting variant that improves the runtime by allowing some uncertainty in the communication and correcting the introduced conflicts afterwards.

arXiv:1511.04303v1 [cs.DS] 13 Nov 2015

Distributed node coloring is the underlying problem for many fundamental issues related to establishing efficient communication in wireless ad hoc and sensor networks.
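To illustrate the round-based structure that distributed coloring algorithms of this kind share, the following is a minimal sketch of a generic synchronous randomized coloring. It is not one of the algorithms evaluated in this paper, and it assumes reliable message exchange between neighbors in each round, which the SINR model does not grant for free; all names in it are illustrative.

```python
import random

def randomized_coloring(adj, rounds=1000, seed=0):
    """Generic synchronous randomized (Delta+1)-coloring sketch.

    Illustrative only: assumes each node learns its neighbors'
    proposals reliably in every round, an assumption the SINR
    model does not provide without extra work.
    """
    rng = random.Random(seed)
    delta = max(len(neighbors) for neighbors in adj.values())  # maximum degree
    palette = list(range(delta + 1))
    color = {v: None for v in adj}
    for _ in range(rounds):
        if all(c is not None for c in color.values()):
            break
        # Each uncolored node proposes a color not used by finished neighbors.
        proposal = {}
        for v in adj:
            if color[v] is None:
                taken = {color[u] for u in adj[v] if color[u] is not None}
                proposal[v] = rng.choice([c for c in palette if c not in taken])
        # A proposal becomes final only if no neighbor proposed the same color.
        for v, c in proposal.items():
            if all(proposal.get(u) != c for u in adj[v]):
                color[v] = c
    return color

# Example: a 5-cycle (maximum degree 2, so 3 colors suffice).
graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = randomized_coloring(graph)
assert all(coloring[v] != coloring[u] for v in graph for u in graph[v])
```

The conflict-resolution step (keeping a color only when no neighbor picked the same one) is the part that becomes nontrivial under SINR interference, which is where the evaluated algorithms differ.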
We can, for example, reduce the problems of establishing a time-, code-, or frequency-division-multiple-access (TDMA, CDMA, FDMA) schedule to a node coloring problem [6]. In this work we study and experimentally evaluate distributed node coloring algorithms that were designed for the realistic signal-to-interference-and-noise-ratio (SINR) model of interference. This model has been widely used in the electrical engineering community for decades and was adopted by the algorithmic community after a seminal work by Gupta and Kumar [7]. In contrast to graph-based models, the SINR model reflects both the local and the global nature of wireless transmissions. However, analytically proving runtime guarantees and showing an algorithm's correctness becomes relatively complex. Thus, over the past years, techniques have been developed to tackle the complexity of the model. This, however, led to the introduction of several constant factors in different parts of the algorithms.

In this paper we study four distributed node coloring algorithms in a more practical setting. We use the network simulator Sinalgo [1] to execute the algorithms in a variety of deployment scenarios in the static and the dynamic setting.

Let us...
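For reference, the reception condition that defines the SINR model is commonly stated as follows (in the standard notation, not taken verbatim from this paper's parameter list): a transmission from sender $u$ is successfully received at $v$ if

$$\mathrm{SINR}(u,v) \;=\; \frac{P / d(u,v)^{\alpha}}{N + \sum_{w \in I} P / d(w,v)^{\alpha}} \;\geq\; \beta,$$

where $P$ is the transmission power, $d(\cdot,\cdot)$ the distance between nodes, $\alpha$ the path-loss exponent, $N$ the ambient noise, $\beta$ the reception threshold, and $I$ the set of nodes transmitting simultaneously with $u$. The sum over $I$ is what gives the model its global character: far-away transmitters can jointly disrupt a reception even if no single one of them would.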