Tinnitus is a hearing disorder characterized by the perception of sound in the absence of an external source. There is currently no pharmaceutical cure for tinnitus; however, multiple therapies and interventions have been developed that improve or control the associated distress and anxiety. We propose a new Artificial Intelligence (AI) algorithm as a digital prognostic health system that models electroencephalographic (EEG) data to predict patients’ responses to tinnitus therapies. The EEG data were collected from patients prior to treatment and 3 months following a sound-based therapy. Feature selection techniques were utilised to identify the EEG variables with the best predictive accuracy. The patients’ EEG features from both the frequency and functional connectivity domains were used as inputs to the AI algorithms for training and for predicting therapy outcomes. The AI models classified each patient as either a therapy responder or a non-responder, as defined by their Tinnitus Functional Index (TFI) scores, with accuracies ranging from 98% to 100%. Our findings demonstrate the potential use of AI, including deep learning, for predicting therapy outcomes in tinnitus. The research also suggests an optimal configuration of the EEG sensors used to measure functional brain changes in response to tinnitus treatment, identifying which EEG electrodes are the most informative and how the EEG frequency and functional connectivity features can best classify patients into the responder and non-responder groups. This approach has potential for real-time monitoring of patient therapy outcomes at home.
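
The sketch below illustrates the general shape of such a pipeline: feature selection over pre-treatment EEG features (frequency-domain and functional-connectivity variables) followed by a supervised classifier predicting responder versus non-responder labels derived from TFI scores. It is a minimal illustration under stated assumptions; the placeholder data, the chosen feature selector (ANOVA F-test), and the SVM classifier are not taken from the paper.

```python
# Illustrative sketch only: the data shapes, feature selector, and classifier
# below are assumptions, not the authors' published pipeline.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-patient pre-treatment EEG features (e.g. band power per electrode
#    plus functional-connectivity values per electrode pair).
# y: therapy outcome derived from the change in TFI score
#    (1 = responder, 0 = non-responder).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 300))      # placeholder: 40 patients, 300 features
y = rng.integers(0, 2, size=40)     # placeholder responder labels

# Feature selection keeps the most discriminative EEG variables, then a
# standard classifier is trained on the reduced feature set.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=20),
    SVC(kernel="rbf", C=1.0),
)

# Cross-validated accuracy estimates how well pre-treatment EEG features
# separate responders from non-responders.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Inspecting the features retained by the selection step (e.g. via `SelectKBest.get_support()`) is one way such a pipeline can point to the most informative electrodes and connectivity pairs.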