In this study, we introduce a hierarchical and modular computational model to explain how the central nervous system (CNS) controls arm reaching movements (ARMs) in the frontal plane under different conditions. The proposed hierarchy comprises three levels: 1) motor planning, 2) command production, and 3) motor execution. Since motor learning is not addressed in this work, no learning procedure was included in the model. Previous models mainly assume that the motor planning level produces the desired joint trajectories and feeds them to the next level to be tracked. In the proposed model, motion control is instead described by a regulatory control policy; that is, the output of the motor planning level is a step function defining the initial and final desired positions of the hand. For the command production level, a nonlinear predictive model was developed to explain how time-invariant muscle synergies (MSs) are recruited. The same computational model was used to explain a combined ARM task, defined as two successive ARMs in which the hand starts from point A and reaches point C via point B. To develop the model, kinematic and kinetic data were recorded and analyzed from six subjects performing the ARM task; the subjects moved their hand in the frontal plane while using a robotic manipulator. EMG data from 15 muscles were also recorded, and the MSs used in the model were extracted from these data. The proposed model explains two aspects of the motor control system through a novel computational approach: 1) the CNS reduces the dimension of the control space using the notion of MSs and thereby avoids an excessive computational load; 2) at the motor planning level, the CNS generates only the desired hand positions at the start, via, and end points, which amounts to a regulatory rather than tracking control structure.
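
To make the two ideas above concrete, the sketch below illustrates them under stated assumptions. Synergy extraction is shown here with non-negative matrix factorization, a common choice for EMG-derived synergies; the abstract does not name the extraction method, so this, the choice of four synergies, the synthetic EMG envelopes, and the step_reference helper are purely illustrative. Only the 15-muscle count and the A-to-B-to-C combined reach come from the text.

```python
# Illustrative sketch only: the extraction method (NMF), the number of synergies,
# the synthetic data, and step_reference() are assumptions, not the authors' code.
import numpy as np
from sklearn.decomposition import NMF

# --- Dimensionality reduction via time-invariant muscle synergies (assumed NMF) ---
rng = np.random.default_rng(0)
n_samples, n_muscles, n_synergies = 500, 15, 4     # 15 muscles as in the recorded EMG

# Synthetic non-negative EMG envelopes standing in for the recorded data
true_W = rng.random((n_muscles, n_synergies))                       # synergy vectors
true_C = np.abs(np.sin(np.linspace(0, np.pi, n_samples)))[:, None] \
         * rng.random((n_samples, n_synergies))                     # recruitment
emg = true_C @ true_W.T + 0.01 * rng.random((n_samples, n_muscles))

nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
C = nmf.fit_transform(emg)           # time-varying recruitment coefficients
W = nmf.components_.T                # (muscles x synergies) time-invariant synergies

# --- Regulatory (non-tracking) motor plan: step reference for a combined A->B->C reach ---
def step_reference(t, t_via, A, B, C_pt):
    """Desired hand position holds at A before onset, steps to B at t=0, then to C at t=t_via."""
    ref = np.where(t[:, None] < t_via, B, C_pt)    # B until the via switch, then C
    ref[t < 0] = A                                 # hold the start point before onset
    return ref

t = np.linspace(-0.2, 1.0, 240)                    # seconds; movement onset at t = 0
A, B, C_pt = np.array([0.0, 0.0]), np.array([0.10, 0.15]), np.array([0.25, 0.05])
desired_hand_pos = step_reference(t, t_via=0.5, A=A, B=B, C_pt=C_pt)

print("Synergy matrix shape:", W.shape)            # (15, 4): 15 muscles, 4 synergies
print("Reference after onset and after via switch:",
      desired_hand_pos[t.searchsorted(0.0)], desired_hand_pos[-1])
```

The point of the sketch is the contrast with tracking-based models: the planning output is a handful of set points (a step function), not a full desired trajectory, and the command level works in the low-dimensional synergy space rather than in the space of 15 individual muscle activations.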