Multi-hop QA requires the machine to answer complex questions by finding multiple clues and reasoning over them, and to provide explanatory evidence that demonstrates the machine's reasoning process. We propose Relation Extractor-Reader and Comparator (RERC), a three-stage framework based on complex question decomposition. The Relation Extractor decomposes the complex question, the Reader then answers the sub-questions in turn, and finally the Comparator performs numerical comparison and summarizes all results to obtain the final answer; the entire process itself constitutes a complete reasoning evidence path. On the 2WikiMultiHopQA dataset, our RERC model achieves state-of-the-art performance, with a winning joint F1 score of 53.58 on the leaderboard. All metrics of our RERC are close to human performance, trailing the human level by only 1.95 in supporting-fact F1. At the same time, the evidence paths provided by our RERC framework have excellent readability and faithfulness.
The multi-hop question answering (QA) task requires the machine to answer the question correctly and at the same time provide evidence clues. Some pipeline methods have achieved strong results in answer accuracy and interpretability, but an obvious drawback of these pipeline methods is the error accumulation issue. In this letter, we propose an NA-Reviewer method to mitigate the error accumulation issue in the multi-hop QA task. It consists of two parts: an NA-Discriminator and a Reviewer, where the NA-Discriminator transforms the error accumulation issue into an unanswerability discrimination task, and the Reviewer corrects the error by reviewing the context according to the error clues. We conducted experiments on the 2WikiMultiHopQA dataset. Compared with the previous state-of-the-art works, the proposed NA-Reviewer method increases the F1 score by 4.27 percentage points, and compared to the ablation model without the NA-Reviewer method, it increases the F1 score by 9.58 percentage points, which significantly alleviates the error accumulation issue.