Macrolevel facial muscle variations, as used to build models of seven discrete facial expressions, suffice for distinguishing between macrolevel human affective states but cannot discretise continuous and dynamic microlevel variations in facial expressions. We present a hierarchical separation and classification network (HSCN) for discovering dynamic, continuous, macro- and microlevel variations in facial expressions of affective states. In the HSCN, we first invoke an unsupervised cosine similarity-based separation method on continuous facial expression data to extract twenty-one dynamic facial expression classes from the seven common discrete affective states. The between-cluster separation is then optimized to uncover the macrolevel changes resulting from facial muscle activations. A subsequent step in the HSCN separates the upper and lower facial regions, capturing changes attributable to upper and lower facial muscle activations. Data from the two separated facial regions are then clustered in a linear discriminant space according to similarities in muscular activation patterns. Next, the actual dynamic expression data are mapped onto the discriminant features to develop a rule-based expert system that classifies twenty-one upper and twenty-one lower facial microexpressions. A random forest classified the twenty-one macrolevel facial expressions with 76.11% accuracy. Support vector machines (SVMs) applied to the upper and lower facial regions classified their respective expressions with accuracies of 73.63% and 87.68%. This work demonstrates a novel and effective method for the dynamic assessment of affective states. The HSCN further demonstrates that facial muscle variations gathered from the upper face, the lower face, or the full face suffice for classifying affective states.
We also provide new insight into the discovery of microlevel facial muscle variations and their utilization in the dynamic assessment of facial expressions of affective states.