“…The maximum entropy principle is extensively applied in many disciplines [28,29,30,31,32]. In information theory, for a discrete random variable X with possible values {x1, x2, …, xn} and probability mass function P(X), the entropy H of X is defined as

$$H(X) = E\left[-\log_r P(X)\right] = -\sum_{i=1}^{n} P(x_i)\,\log_r P(x_i),$$

where E is the expectation operator and r is the logarithmic base, which generally takes a value of two [33,34,35] (in this study, r = 2).…”
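As a minimal illustration of this definition (not part of the original study), the sketch below computes H for a discrete distribution given as a probability vector; the function name `entropy` and the example distribution are assumptions introduced here for clarity:

```python
import math

def entropy(p, r=2):
    """Shannon entropy H(X) = -sum_i p_i * log_r(p_i) of a discrete
    probability mass function given as a sequence of probabilities."""
    # Terms with p_i = 0 contribute nothing, since p * log(p) -> 0 as p -> 0.
    return -sum(p_i * math.log(p_i, r) for p_i in p if p_i > 0)

# A fair coin carries exactly one bit of entropy when r = 2.
print(entropy([0.5, 0.5]))        # 1.0
# A biased coin carries less than one bit.
print(entropy([0.9, 0.1]))        # ~0.469
```

With r = 2, as used in the study, the entropy is measured in bits; a uniform distribution over n outcomes attains the maximum value log2(n).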