The investigation of speech-related biosignals can enhance traditional speech synthesis, which might be useful for future brain-computer interfaces. In recent previous research, we predicted directly measured articulation, i.e., ultrasound images of the tongue, from brain signals measured with EEG, using a fully connected deep neural network. The results showed that there is a weak but noticeable relationship between EEG and ultrasound tongue images, i.e., the network can differentiate between articulated speech and the neutral (resting-state) tongue position. In the current study, we extend this work with a focus on acoustic-to-articulatory inversion (AAI) and estimate articulatory movement from the speech signal. We then analyze the similarities between AAI-estimated and EEG-estimated articulation. We compare the original articulatory data with the DNN-predicted ultrasound and show that EEG input is only suitable for distinguishing the neutral tongue position from articulated speech, whereas mel-spectrogram-to-ultrasound prediction can also capture the articulatory trajectories of the tongue.
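
To make the frame-wise regression setup concrete, the following is a minimal sketch of a fully connected network mapping one mel-spectrogram frame to one vectorized ultrasound tongue image. The layer sizes, input/output dimensions, and the use of Keras are assumptions for illustration only, not the exact configuration used in the study.

```python
# Minimal sketch (not the authors' exact model): frame-wise
# mel-spectrogram-to-ultrasound regression with a fully connected DNN.
# Dimensions, layer sizes, and the Keras framework are assumed.
import numpy as np
from tensorflow import keras

N_MELS = 80                    # assumed mel bins per spectrogram frame
ULTRASOUND_PIXELS = 64 * 128   # assumed (downsampled) ultrasound image size

model = keras.Sequential([
    keras.layers.Input(shape=(N_MELS,)),
    keras.layers.Dense(1000, activation="relu"),
    keras.layers.Dense(1000, activation="relu"),
    # Linear output layer: one value per ultrasound pixel
    keras.layers.Dense(ULTRASOUND_PIXELS, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for paired mel-spectrogram / ultrasound frames.
X = np.random.rand(256, N_MELS).astype("float32")
Y = np.random.rand(256, ULTRASOUND_PIXELS).astype("float32")
model.fit(X, Y, epochs=2, batch_size=32, verbose=0)
```

An EEG-to-ultrasound variant would follow the same pattern, with the input layer sized to the EEG feature vector instead of the mel-spectrogram frame.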