Many studies provide evidence that information from different modalities is integrated according to the maximum likelihood estimation (MLE) model. For instance, we recently found that visual and proprioceptive path trajectories are optimally combined (Reuschel et al. in Exp Brain Res 201:853-862, 2010). However, other studies have failed to reveal optimal integration of such dynamic information. In the present study, we aimed to generalize our previous findings to different parts of the workspace (central, ipsilateral, or contralateral) and to different types of judgments (relative vs. absolute). Participants made relative judgments by reporting whether an angular path was acute or obtuse, or absolute judgments by reporting whether a single-segment straight path was directed to the left or to the right. Trajectories were presented in the visual, proprioceptive, or combined visual-proprioceptive modality. We measured the bias and the variance of these estimates and predicted both parameters using the MLE model. In accordance with the MLE model, participants linearly combined the unimodal angular path information, weighted by its reliability, irrespective of the side of the workspace. However, the precision of bimodal estimates was no greater than that of unimodal estimates, which is inconsistent with the MLE model. In the absolute judgment task, participants' estimates were highly accurate and did not differ across modalities, so we were unable to test whether the bimodal percept arose as a weighted average of the visual and proprioceptive inputs. Moreover, participants were no more precise in the bimodal than in the unimodal conditions, again inconsistent with the MLE model. These findings suggest that optimal integration of visual and proprioceptive information about path trajectory applies only under some conditions.
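
For reference, a minimal sketch of the standard MLE prediction being tested, assuming independent Gaussian noise on the visual (V) and proprioceptive (P) estimates; the notation is conventional in the cue-integration literature rather than taken from this study:

\[
\hat{S}_{VP} = w_V\,\hat{S}_V + w_P\,\hat{S}_P,
\qquad
w_V = \frac{\sigma_P^{2}}{\sigma_V^{2}+\sigma_P^{2}},
\qquad
w_P = \frac{\sigma_V^{2}}{\sigma_V^{2}+\sigma_P^{2}},
\]
\[
\sigma_{VP}^{2} = \frac{\sigma_V^{2}\,\sigma_P^{2}}{\sigma_V^{2}+\sigma_P^{2}} \;\le\; \min\!\left(\sigma_V^{2},\,\sigma_P^{2}\right).
\]

The weighted-average equation corresponds to the bias prediction that the angular-path data satisfied, while the variance inequality expresses the precision benefit that the bimodal conditions failed to show.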