Multilayer perceptrons (MLPs) using a backpropagation learning algorithm were developed (NeuroSolutions version 6.0 software; NeuroDimension, Inc., Gainesville, Florida, USA), as follows:

$$[\text{CHL } a] \text{ or biovolumes} = f\{W_{P_1,P_3}[f(W_{X_1,P_1} X_1 + W_{X_2,P_1} X_2 + \dots + W_{X_i,P_1} X_i + \varepsilon_1)]\} + f\{W_{P_2,P_3}[f(W_{X_1,P_2} X_1 + W_{X_2,P_2} X_2 + \dots + W_{X_i,P_2} X_i + \varepsilon_2)]\} + \dots + f\{W_{P_j,P_3}[f(W_{X_1,P_j} X_1 + W_{X_2,P_j} X_2 + \dots + W_{X_i,P_j} X_i + \varepsilon_j)]\}$$

where $X_{1,2,\dots,i}$ are predictor variables, $P_{1,2,3,\dots,j}$ are processing elements, $W_{X_{1,2,\dots,i},\,P_{1,2,3,\dots,j}}$ are scalar weights, and $\varepsilon_{1,2,\dots,j}$ are error terms (after Principe et al. 2000). Topologies were optimized for the number of processing elements within hidden layers and for the types of transfer functions (e.g., sigmoid, hyperbolic tangent) and learning rules (e.g., conjugate gradient, momentum; see Millie et al. 2012, 2013). Data vectors were assigned randomly to subsets for network training (to "fit" the data), cross-validation (to provide unbiased estimation of prediction), and testing (to assess performance), comprising 60%, 15%, and 25% of the data, respectively.
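To make the formulation concrete, the following is a minimal Python/NumPy sketch of the forward pass written out by the equation above, together with the random 60/15/25 assignment of data vectors to training, cross-validation, and test subsets. It is illustrative only: it does not reproduce the NeuroSolutions implementation, and the function names, the choice of tanh as the transfer function, and the example dimensions (`mlp_forward`, `split_indices`, 5 predictors, 4 processing elements) are assumptions rather than details from the source.

```python
import numpy as np

rng = np.random.default_rng(42)  # seed is arbitrary, for reproducibility only

def mlp_forward(X, W_in, W_out, f=np.tanh):
    """Forward pass matching the equation as transcribed.

    X     : (n_samples, i) predictor variables X_1..X_i
    W_in  : (i + 1, j) input-to-hidden weights W_{X,P}; the extra
            row plays the role of the epsilon_1..epsilon_j terms
    W_out : (j,) hidden-to-output weights W_{P_m,P_3}
    f     : transfer function applied at each processing element
    """
    # Append a constant input so the last row of W_in acts as epsilon_m
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    # Hidden processing elements: f(sum_k W_{X_k,P_m} X_k + eps_m)
    hidden = f(X1 @ W_in)
    # The transcribed equation sums f(W_{P_m,P_3} * hidden_m) over m
    return f(hidden * W_out).sum(axis=1)

def split_indices(n, fractions=(0.60, 0.15, 0.25)):
    """Randomly assign n data vectors to train / cross-validation / test."""
    idx = rng.permutation(n)
    n_train = int(fractions[0] * n)
    n_cv = int(fractions[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_cv], idx[n_train + n_cv:]

# Example: 200 samples, i = 5 predictors, j = 4 processing elements
X = rng.normal(size=(200, 5))
W_in = rng.normal(scale=0.1, size=(6, 4))   # 5 predictors + bias/error row
W_out = rng.normal(scale=0.1, size=4)
train, cv, test = split_indices(len(X))
y_hat = mlp_forward(X[train], W_in, W_out)  # predicted [CHL a] or biovolume
```

Note that the equation, as transcribed, applies the transfer function to each weighted hidden activation before summing; the sketch follows that literally, whereas a conventional MLP would apply $f$ once to the weighted sum at the output element.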