Spoken language understanding (SLU) is fundamental to how service robots handle natural language task requests. There are two basic problems in SLU: intent determination (ID) and slot filling (SF). The slot-gated recurrent neural network, which models the two tasks jointly, has been proven superior to single-task models and has achieved state-of-the-art performance. However, in the context of task requests for home service robots, the information carried by a given word often depends strongly on the key verbs of the sentence, and current methods struggle to capture this relation well. In this paper, we address this problem by extracting the key instructional verb, which carries the core task information, based on dependency parsing, and constructing a feature that combines the key verb with its contextual information. To further improve the performance of the slot-gated model, we exploit the strong relations between intent and slots: a novel dual slot-gated mechanism introduces the intent attention vector into the slot attention vectors through a global-level gate and an element-level gate, explicitly modeling the complex relations between the predictions of the ID and SF tasks and optimizing the global prediction results. Our experimental results on the ATIS dataset and an extended home service task (SRTR) dataset based on FrameNet show that the proposed method outperforms state-of-the-art methods on both tasks. In particular, on SRTR, the results of SF, ID, and sentence-level semantic frame filling improve by 1.7%, 1.1%, and 1.7%, respectively.

INDEX TERMS Human-robot interaction, service robots, slot-gated mechanism, spoken language understanding, verb context feature.

ambiguity of verbs and incomplete natural language description scenarios [14]. In recent years, with the development of recurrent neural networks, joint training models based on Long Short-Term Memory (LSTM) and its variants have become widely used in natural language understanding. Unlike traditional verb-matching and other context-free grammar models, they can learn long-range dependencies in sentences through the gating mechanism and improve overall performance by training ID and SF jointly. However, such models couple the two tasks only through a shared training loss, which cannot model the complex correlation between them well. Moreover, in the field of service robots, people usually interact with robots in a command language. Unlike auxiliary query languages, many commands in this domain contain key verbs that are directly related to the behavioral intent. Many chunks in a command have a strong information-complementing relation with such key verbs, while depending only weakly on other chunks. In the example "After five minutes, please wipe the table with a paper towel." shown in Figure 1(a), the "Time", "Tool", and "Target Object" information of "wipe" is complemented by "After five minutes...
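Since the key-verb extraction step lends itself to a concrete illustration, the following is a minimal sketch of identifying the key instructional verb of a command via dependency parsing. It assumes spaCy's English parser purely for illustration; the paper's exact extraction rules and the construction of the verb-context feature are not reproduced here, and `extract_key_verb` and `verb_context_feature` are hypothetical helper names.

```python
# Minimal sketch: key instructional verb extraction via dependency parsing.
# Assumption: spaCy and its "en_core_web_sm" model stand in for the parser;
# the paper's exact extraction rules are not specified in this section.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_key_verb(command):
    """Approximate the key instructional verb as the root verb of the
    dependency parse (the syntactic head of an imperative command)."""
    doc = nlp(command)
    for token in doc:
        if token.dep_ == "ROOT" and token.pos_ == "VERB":
            return token
    return None

def verb_context_feature(command, window=2):
    """Rough analogue of a verb-context feature: the key verb combined
    with the words in a small window around it (punctuation dropped)."""
    doc = nlp(command)
    verb = extract_key_verb(command)
    if verb is None:
        return []
    lo = max(0, verb.i - window)
    hi = min(len(doc), verb.i + window + 1)
    return [t.text for t in doc[lo:hi] if not t.is_punct]

command = "After five minutes, please wipe the table with a paper towel."
print(extract_key_verb(command))      # expected: wipe
print(verb_context_feature(command))  # e.g. ['please', 'wipe', 'the', 'table']
```

On the example command, the parser's ROOT is the imperative verb "wipe", and the window around it yields the local context that would be combined with the verb to form the feature; the actual feature in the paper additionally encodes this pairing for the downstream ID and SF models.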