In this paper, we introduce IBP, an algorithm that combines reasoning with an abstract domain model and case-based reasoning techniques to predict the outcome of case-based legal arguments. Unlike the predictions generated by statistical or machine-learning techniques, IBP's predictions are accompanied by explanations. We describe an empirical evaluation of IBP, in which we compare our algorithm to prediction based on Hypo's and CATO's relevance criteria, and to a number of widely used machine learning algorithms. IBP reaches higher accuracy than all competitors, and hypothesis testing shows that the observed differences are statistically significant. An ablation study indicates that both sources of knowledge in IBP contribute to the accuracy of its predictions.
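To make the issue-based control structure concrete, the following is a minimal, hypothetical sketch in Python. The Factor names, the two-issue domain model, and the tiny case base are invented for illustration and are not taken from IBP itself; the point is only the shape of the algorithm: the problem's Factors raise issues, precedents sharing those Factors are consulted per issue, and a plaintiff outcome is predicted only when every raised issue favors the plaintiff. The per-issue breakdown returned alongside the outcome plays the role of the explanation.

```python
# Minimal, hypothetical sketch of issue-based prediction; the domain model,
# Factor names, and case base are invented for illustration only.
from collections import namedtuple

Case = namedtuple("Case", ["name", "factors", "winner"])  # winner: "plaintiff" or "defendant"

DOMAIN_MODEL = {  # issue -> Factors relevant to that issue
    "info-valuable":    {"F15_unique_product", "F16_reverse_engineerable"},
    "maintain-secrecy": {"F6_security_measures", "F27_public_disclosure"},
}

CASE_BASE = [
    Case("Alpha v. Beta",  {"F15_unique_product", "F6_security_measures"}, "plaintiff"),
    Case("Gamma v. Delta", {"F16_reverse_engineerable"},                   "defendant"),
]

def predict(problem_factors):
    """Decide each issue raised by the problem's Factors, then combine the issue outcomes."""
    issue_outcomes = {}
    for issue, issue_factors in DOMAIN_MODEL.items():
        raised = problem_factors & issue_factors
        if not raised:
            continue  # the problem's facts do not raise this issue
        winners = {c.winner for c in CASE_BASE if raised & c.factors}
        if winners == {"plaintiff"}:
            issue_outcomes[issue] = "plaintiff"
        elif winners:
            issue_outcomes[issue] = "defendant"   # at least one contrary precedent
        else:
            issue_outcomes[issue] = "abstain"     # no precedent shares the raised Factors
    if issue_outcomes and all(o == "plaintiff" for o in issue_outcomes.values()):
        return "plaintiff", issue_outcomes
    if any(o == "defendant" for o in issue_outcomes.values()):
        return "defendant", issue_outcomes
    return "abstain", issue_outcomes

print(predict({"F15_unique_product", "F6_security_measures"}))
```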
Work on a computer program called SMILE + IBP (SMart Index Learner Plus Issue-Based Prediction) bridges case-based reasoning and extracting information from texts. The program addresses a technologically challenging task that is also very relevant from a legal viewpoint: to extract information from textual descriptions of the facts of decided cases and apply that information to predict the outcomes of new cases. The program attempts to automatically classify textual descriptions of the facts of legal problems in terms of Factors, a set of classification concepts that capture stereotypical fact patterns that affect the strength of a legal claim, here trade secret misappropriation. Using these classifications, the program can evaluate and explain predictions about a problem's outcome given a database of previously classified cases. This paper provides an extended example illustrating both functions, prediction by IBP and text classification by SMILE, and reports empirical evaluations of each. While IBP's results are quite strong and SMILE's much weaker, SMILE + IBP still has some success predicting and explaining the outcomes of case scenarios input as texts. It marks the first time, to our knowledge, that a program can reason automatically about legal case texts.
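As a rough illustration of the text-to-Factors step that feeds such a predictor, the sketch below trains one generic bag-of-words classifier per Factor and applies it sentence by sentence to a new case description. The training sentences, the Factor shown, and the classifier choice are invented stand-ins, not SMILE's actual representation or learning method.

```python
# Hypothetical stand-in for a text -> Factors classifier; the training data and
# Factor are invented, and a bag-of-words model replaces SMILE's own approach.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

TRAIN = {  # Factor -> (example sentences, 1 = Factor applies, 0 = it does not)
    "F6_security_measures": (
        ["The plaintiff required employees to sign nondisclosure agreements.",
         "The plaintiff restricted access to the formula to two managers.",
         "The design was published in a trade magazine.",
         "Visitors toured the plant without any confidentiality restrictions."],
        [1, 1, 0, 0],
    ),
}

def train_factor_classifiers(train):
    """Fit one binary sentence classifier per Factor."""
    return {
        factor: make_pipeline(CountVectorizer(), MultinomialNB()).fit(sentences, labels)
        for factor, (sentences, labels) in train.items()
    }

def factors_in(text, classifiers):
    """Return the Factors whose classifier fires on at least one sentence of the case text."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return {f for f, clf in classifiers.items()
            if any(clf.predict([s])[0] == 1 for s in sentences)}

clfs = train_factor_classifiers(TRAIN)
print(factors_in("Employees had to sign nondisclosure agreements. The product sold widely.", clfs))
```

The Factor set produced this way can then be handed to a predictor like the one sketched earlier, mirroring the SMILE-to-IBP pipeline described in the abstract.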
This commentary provides a definition of textual case-based reasoning (TCBR) and surveys research contributions according to four research questions. We also describe how TCBR can be distinguished from text mining and information retrieval. We conclude with potential directions for TCBR research.
The prohibitive cost of assigning indices to textual cases is a major obstacle for the practical use of AI and Law systems supporting reasoning and arguing with cases. While progress has been made toward extracting certain facts from well-structured case texts or classifying case abstracts under Key Number concepts, these methods still do not suffice for the complexity of indexing concepts in CBR systems. In this paper, we lay out how a better example representation may facilitate classification-based indexing. Our hypotheses are that (1) abstracting from the individual actors and events in cases, (2) capturing actions in multi-word features, and (3) recognizing negation can lead to a better representation of legal case texts for automatic indexing. We discuss how to implement these techniques with state-of-the-art NLP tools. Preliminary experimental results suggest that a combination of domain-specific knowledge and information extraction techniques can be used to generalize from the examples and derive more powerful features.
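The sketch below illustrates, with deliberately crude, invented rules, what the three hypotheses amount to for a single sentence: party names are replaced by roles, actions are kept as multi-word (bigram) features, and a simple negation cue flips the features' polarity. A real implementation would rely on NLP tools for parsing, named-entity recognition, and negation scope detection rather than string matching.

```python
# Crude, hypothetical illustration of the three representation hypotheses;
# the role map and rules are invented, not the paper's actual implementation.
import re

ROLE_MAP = {"Acme": "PLAINTIFF", "Smith": "DEFENDANT"}  # (1) abstract from individual actors

def sentence_features(sentence):
    """Map one case sentence to role-abstracted, multi-word, negation-aware features."""
    text = sentence
    for name, role in ROLE_MAP.items():
        text = text.replace(name, role)
    tokens = re.findall(r"[a-z']+", text.lower())
    negated = any(cue in tokens for cue in ("not", "never", "no"))  # (3) recognize negation
    bigrams = (" ".join(pair) for pair in zip(tokens, tokens[1:]))  # (2) multi-word features
    prefix = "NOT__" if negated else ""
    return {prefix + bigram for bigram in bigrams}

print(sentence_features("Smith never signed a nondisclosure agreement with Acme."))
```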