Patients with severe disabilities find it very difficult to communicate with other people or with devices, which greatly reduces their quality of life. In this study, a steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI) is proposed to enable such patients to communicate easily. To precisely represent the characteristics of an elicited SSVEP, four features are extracted by fast Fourier transform, canonical correlation analysis, magnitude-squared coherence, and power cepstrum analysis. To fuse the decision results obtained from these different features, a modular neural network (MNN) comprising input models and a decision model is adopted to improve recognition performance. To balance recognition performance against computational complexity, an artificial neural network based on multilayer perceptrons is selected as the basic unit of the MNN. From each feature, the corresponding input model of the MNN quickly produces a decision result, and the decision model then fuses these results into a precise final decision. Experimental results demonstrate that the MNN achieves higher accuracy than competing approaches. Therefore, the proposed SSVEP-based BCI with the MNN can effectively help patients interact with their surroundings.
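As a concrete illustration of one feature-extraction step mentioned above, the sketch below implements standard canonical correlation analysis (CCA)-based SSVEP frequency recognition: multichannel EEG is correlated with sine/cosine reference templates at each candidate stimulus frequency, and the frequency with the highest canonical correlation is selected. The sampling rate, window length, and candidate frequencies here are illustrative assumptions, not the study's experimental settings.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between column spaces of X and Y.

    X: (samples, channels) EEG segment; Y: (samples, refs) reference set.
    Computed via QR decompositions of the centered matrices followed by
    an SVD of the cross-product of the orthonormal bases.
    """
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_cca_classify(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the candidate stimulus frequency whose sine/cosine reference
    templates (fundamental plus harmonics) best correlate with the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * f * (h + 1) * t)
             for h in range(n_harmonics)
             for fn in (np.sin, np.cos)]
        )
        scores.append(cca_max_corr(eeg, refs))
    return candidate_freqs[int(np.argmax(scores))], scores
```

In a full pipeline such as the one described, the canonical correlation score for each candidate frequency would serve as one of the feature inputs to the MNN's input models, alongside the FFT, coherence, and cepstrum features.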