Many severe neurological diseases, such as stroke and amyotrophic lateral sclerosis, can leave patients completely unable to communicate1,2. Several language brain-computer interface (BCI) systems have shown the potential to help such patients regain communication by decoding speech- or movement-related neural signals3. However, these language BCIs have all been developed for alphabetic writing systems; none has been designed specifically for logosyllabic languages such as Mandarin Chinese. Here, we established the first language BCI designed specifically for Chinese, decoding speech-related stereoelectroencephalography (sEEG) signals into sentences. First, based on the acoustic features of full-spectrum Chinese syllable pronunciation, we constructed prediction models for the three syllable elements (initials, tones, and finals). A language model then combined the predicted syllable elements with semantic information to generate the most probable sentence. The resulting decoder achieved a median character error rate of 29%, demonstrating preliminary potential for clinical application. Our research fills the gap in language BCIs for logosyllabic languages and leverages a powerful language model to enhance decoding performance, offering new insights for future logosyllabic-language neuroprosthesis studies.
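The two-stage scheme described above can be illustrated with a minimal toy sketch: per-syllable classifiers emit probability distributions over initials, tones, and finals, and a language-model prior re-ranks candidate syllable sequences to pick the jointly most probable sentence. All names, candidate syllables, and probability values below are hypothetical placeholders, not the paper's actual models or data.

```python
# Toy sketch of element-wise syllable decoding plus language-model re-ranking.
# All probabilities and candidates are illustrative, not from the study.
import math

# Hypothetical per-syllable predictions: P(initial), P(tone), P(final)
# for a two-syllable utterance, as a classifier stage might output.
syllable_probs = [
    {"initial": {"n": 0.7, "l": 0.3},      # syllable 1
     "tone":    {"3": 0.6, "2": 0.4},
     "final":   {"i": 0.8, "u": 0.2}},
    {"initial": {"h": 0.9, "f": 0.1},      # syllable 2
     "tone":    {"3": 0.7, "4": 0.3},
     "final":   {"ao": 0.85, "ou": 0.15}},
]

# Hypothetical "language model": log-prior over candidate syllable sequences
# (pinyin with tone digits). A real system would use a trained LM instead.
lm_log_prior = {
    ("ni3", "hao3"): math.log(0.05),    # a common greeting, high prior
    ("li2", "hao3"): math.log(0.001),
    ("ni3", "hao4"): math.log(0.002),
}

def acoustic_log_likelihood(candidate):
    """Sum element log-probabilities (initial + final + tone) per syllable."""
    total = 0.0
    for syl, probs in zip(candidate, syllable_probs):
        initial, final, tone = syl[0], syl[1:-1], syl[-1]
        total += math.log(probs["initial"].get(initial, 1e-9))
        total += math.log(probs["final"].get(final, 1e-9))
        total += math.log(probs["tone"].get(tone, 1e-9))
    return total

def decode(candidates, lm_weight=1.0):
    """Pick the candidate maximizing acoustic score + weighted LM prior."""
    return max(candidates,
               key=lambda c: acoustic_log_likelihood(c)
                             + lm_weight * lm_log_prior[c])

best = decode(list(lm_log_prior))
print(best)  # → ("ni3", "hao3")
```

Here the language-model prior breaks the tie among acoustically similar candidates, mirroring how semantic context can correct element-level prediction errors in the described pipeline.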