“…The model is estimated by means of the expectation-maximization (EM; Dempster et al., 1977) algorithm, which uses the so-called complete-data log-likelihood ($\log L_c$); that is, the state assignments of all time points are assumed to be known, which replaces the difficult maximization problem by a sequence of easier maximization problems. In the expectation step (E-step; see, for example, Bishop, 2006; Dias, Vermunt, & Ramos, 2008), the parameters of interest $\hat{\theta}$ (i.e., the transition probabilities, initial state probabilities, and state-specific MMs) are taken as given (i.e., set to initial values or to the estimates $\hat{\theta}^{\text{old}}$ from the previous iteration; see De Roover et al., 2017; Vermunt & Magidson, 2016), and the posterior probabilities (i.e., conditional on the data) of belonging to each of the states and of making transitions between the states are calculated by means of the forward-backward algorithm (Baum, Petrie, Soules, & Weiss, 1970). The obtained posterior probabilities are used as the expected values of the state assignments to obtain the expected complete-data log-likelihood, $E(\log L_c)$. Next, in the maximization step (M-step), the parameters $\hat{\theta}$ are updated such…”
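The E-step and M-step described above can be sketched for a simplified case. The snippet below is a minimal illustration, not the authors' implementation: it assumes a discrete hidden Markov model in which the per-time-point emission likelihoods are already computed (the `B` matrix stands in for the state-specific measurement models), and it updates only the initial and transition probabilities in the M-step. All function and variable names are hypothetical.

```python
import numpy as np

def forward_backward(pi, A, B):
    """Scaled forward-backward pass (Baum et al., 1970 style).
    pi: (K,) initial state probabilities
    A:  (K, K) transition probabilities, A[i, j] = P(s_t = j | s_{t-1} = i)
    B:  (T, K) emission likelihoods P(y_t | s_t = k), assumed precomputed
    Returns gamma (T, K) state posteriors, xi (T-1, K, K) transition
    posteriors, and the observed-data log-likelihood."""
    T, K = B.shape
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    c = np.zeros(T)  # scaling factors; log-likelihood = sum(log c)
    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):                      # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # backward recursion
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                       # P(s_t = k | all data)
    xi = np.zeros((T - 1, K, K))               # joint transition posteriors
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[t + 1] * beta[t + 1])[None, :] / c[t + 1]
    return gamma, xi, np.log(c).sum()

def em_step(pi, A, B):
    """One EM iteration: E-step via forward-backward, then an M-step that
    updates pi and A from the expected state assignments. A full model
    would also re-estimate the state-specific (measurement) parameters
    from gamma here."""
    gamma, xi, ll = forward_backward(pi, A, B)
    pi_new = gamma[0] / gamma[0].sum()
    A_new = xi.sum(axis=0)
    A_new /= A_new.sum(axis=1, keepdims=True)
    return pi_new, A_new, ll
```

Because each M-step maximizes the expected complete-data log-likelihood given the E-step posteriors, iterating `em_step` never decreases the observed-data log-likelihood, which is the property that makes this sequence of easier maximizations a valid substitute for the direct maximization.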