“…The maximum entropy principle is extensively applied in many disciplines [28,29,30,31,32]. In information theory, for a discrete random variable X with possible values {x_1, x_2, …, x_n} and probability mass function P(X), the entropy H of X is defined as

H(X) = E[-\log_r P(X)] = -\sum_{i=1}^{n} p_i \log_r p_i,

where E is the expectation operator and r is the logarithmic base, which generally takes a value of two [33,34,35] (in this study, r = 2). When every outcome x_i is equally probable, i.e., p_1 = p_2 = … = p_n = 1/n, the entropy function H(X) attains its maximum value, which can be calculated as…”
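As a minimal illustration of the definition above, the following sketch computes H(X) for a discrete probability mass function with r = 2 and shows that a uniform distribution (p_i = 1/n) yields the largest value, which equals log_2(n). The function name `entropy` and the example distributions are illustrative assumptions, not taken from the source.

```python
import math

def entropy(probs, r=2):
    """Compute H(X) = -sum(p_i * log_r(p_i)); zero-probability outcomes are skipped."""
    return -sum(p * math.log(p, r) for p in probs if p > 0)

# Non-uniform distribution: entropy falls below the maximum.
print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits

# Uniform distribution over n outcomes (p_i = 1/n): entropy reaches its maximum.
n = 4
uniform = [1.0 / n] * n
print(entropy(uniform))             # 2.0 bits
print(math.log2(n))                 # 2.0, i.e., log_2(n), the maximum value
```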