Solving math word problems (MWP) is a challenging task for natural language processing systems, as it requires not only identifying and comprehending the problem description within the context, but also deducing a solution in accordance with the posed question. Previous solvers have been found to prioritize the context over the question, resulting in low performance when solving multiple questions under the same context. In this paper, we present a question-oriented strategy to address this issue and improve the generalizability of MWP solvers. Our approach features an entity-aware encoder that strengthens the connection between the MWP context and the question via entities in established dependency graphs, with the aim of obtaining better problem representations. A question-guided decoder is then trained with a contrastive learning strategy to enhance the question representations. Empirical evaluations on four benchmarks demonstrate that our method outperforms previous solvers and exhibits a favorable balance between efficacy and efficiency in MWP solving. In addition, our solver does not rely on any specific pre-trained model and is seamlessly compatible with different pre-trained model backbones. Our code is released at https://github.com/Zhenwen-NLP/QoS_AACL

Context: Mr. Wang rides a bicycle from home to school at 16 kilometers per hour and can reach the school in 0.2 hours. He walks at 4 kilometers per hour.

Question 1: How far is Mr. Wang's home from the school?
Solutions: MWP-BERT: (16 × 0.2)/4 (wrong); Ground truth: 16 × 0.2

Question 2: How many times faster is Mr. Wang walking than cycling?
Solutions: MWP-BERT: 16/4 (wrong); Ground truth: 4/16

Question 3: How long does it take Mr. Wang to walk to school?
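As a quick sanity check on the ground-truth expressions in the example above, the three answers can be computed directly; the variable names below are ours, introduced only for illustration:

```python
# Quantities stated in the problem context.
cycling_speed = 16.0  # km/h
cycling_time = 0.2    # hours
walking_speed = 4.0   # km/h

# Question 1: distance from home to school (ground truth: 16 × 0.2).
distance = cycling_speed * cycling_time

# Question 2: walking speed as a fraction of cycling speed (ground truth: 4/16).
speed_ratio = walking_speed / cycling_speed

# Question 3: time to walk to school, reusing the distance from Question 1.
walking_time = distance / walking_speed

print(distance, speed_ratio, walking_time)
```

Note that each question requires a different expression over the same context quantities, which is exactly the setting in which context-dominated solvers such as MWP-BERT produce the wrong expressions shown above.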