“…As datasets and analytic functions increase in complexity, nonlinearity, and size, many calculus-based optimization techniques fail, necessitating the use of enumerative techniques such as the Expectation-Maximization algorithm or evolutionary algorithms (Tang et al., 1996; Whitley, 1994). Genetic algorithms (GAs), evolutionary computing strategies introduced by Holland in 1975 and based on the principles of evolutionary biology and population genetics, offer quick and efficient means of solving difficult or analytically intractable problems in function optimization (such as variable selection or identification of optimal parameter weightings), ordering problems (permutation problems, including the infamous Traveling Salesman Problem), and automatic programming (such as genetic programming or grammatical evolution, modeled on transcription, translation, and protein folding) (Forrest, 1993; Tang et al., 1996; Harik et al., 1999; Fan et al., 2007; Wang et al., 2006; Hassan et al., 2004). Genetic algorithms, with built-in mechanisms for avoiding local optima and searching very large solution spaces for global optima, thrive in situations in which other enumerative and machine-learning techniques stall or fail to converge on global solutions (as the search space is of dimension R^N, where N represents the number of parameters in the dataset), and have been successfully employed in fields such as statistical physics (Somma et al., 2008; Ceperley & Alder, 1986), quantum chromodynamics (Temme et al., 2011), aerospace engineering (Hassan et al., 2004), molecular chemistry (Deaven & Ho, 1995; Najafi et al., 2011), spline fitting within function estimation (Pittman, 2001), and parametric statistics (Najafi et al., 2011; Gayou et al., 2008; Broadhurst et al., 1997; Paterlini & Minerva, 2010).…”
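The mechanisms the passage describes, a population of candidate solutions refined by selection, crossover, and mutation, can be sketched as a minimal binary-encoded GA. The function names, parameter defaults, and the toy fitness function below are illustrative assumptions, not drawn from any of the cited works; a sketch under those assumptions follows.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=50, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal GA: tournament selection, one-point crossover,
    bit-flip mutation, and elitist tracking of the best solution."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        for ind, s in zip(pop, scores):
            if s > best_fit:
                best, best_fit = ind[:], s
        # Tournament selection: keep the fitter of two random individuals.
        selected = []
        for _ in range(pop_size):
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            selected.append(pop[i][:] if scores[i] >= scores[j] else pop[j][:])
        # One-point crossover on consecutive pairs, then bit-flip mutation;
        # mutation is the built-in mechanism that helps escape local optima.
        children = []
        for a, b in zip(selected[::2], selected[1::2]):
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                children.append([bit ^ (rng.random() < mutation_rate)
                                 for bit in child])
        pop = children
    return best, best_fit

# Toy fitness: decode the bit string to x in [0, 1] and maximize
# f(x) = x * (1 - x), whose global maximum is 0.25 at x = 0.5.
def f(bits):
    x = int("".join(map(str, bits)), 2) / (2 ** len(bits) - 1)
    return x * (1 - x)
```

Because candidate solutions are evaluated only through the fitness function, the same loop applies unchanged whether the bits encode parameter weightings, variable-selection masks, or (with a permutation encoding and suitable operators) tour orderings as in the Traveling Salesman Problem.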