Proceedings of the 5th Workshop on Argument Mining 2018
DOI: 10.18653/v1/w18-5209
Evidence Types, Credibility Factors, and Patterns or Soft Rules for Weighing Conflicting Evidence: Argument Mining in the Context of Legal Rules Governing Evidence Assessment

Abstract: This paper reports on the results of an empirical study of adjudicatory decisions about veterans' claims for disability benefits in the United States. It develops a typology of kinds of relevant evidence (argument premises) employed in cases, and it identifies factors that the tribunal considers when assessing the credibility or trustworthiness of individual items of evidence. It also reports on patterns or "soft rules" that the tribunal uses to comparatively weigh the probative value of conflicting evidence. …
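The abstract's three contributions (evidence types, credibility factors, and "soft rules" for weighing conflicting evidence) can be pictured with a small data-model sketch. This is a minimal illustration only, written under assumptions: the class names, the example evidence types, and the more_probative rule below are hypothetical and are not taken from the paper's annotation scheme.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceItem:
    # Hypothetical fields: an evidence-type label (kind of argument premise)
    # plus credibility factors a tribunal might weigh for this item.
    kind: str                      # e.g. "lay testimony", "medical opinion"
    statement: str
    credibility_factors: List[str] = field(default_factory=list)

def more_probative(a: EvidenceItem, b: EvidenceItem) -> EvidenceItem:
    """Toy 'soft rule': prefer the item with more favorable credibility
    factors; ties fall back to the first item. Illustrative only, not the
    paper's actual rules."""
    return a if len(a.credibility_factors) >= len(b.credibility_factors) else b

lay = EvidenceItem("lay testimony",
                   "Veteran reports chronic pain since service",
                   ["consistent over time"])
expert = EvidenceItem("medical opinion",
                      "Examiner links condition to service",
                      ["qualified expert", "reasoned explanation"])
print(more_probative(lay, expert).kind)  # -> "medical opinion"
```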

Cited by 10 publications (6 citation statements)
References 17 publications
Order By: Relevance
“…Although these properties are limited in what they tell us about the overall argumentative structure, they provide valuable information about the role that a particular text span is playing in the argument as a whole. For example, knowing that a claim is verifiable suggests a link to a piece of evidence in the text supporting this claim (Park and Cardie 2014); knowing that a clause is increasing the author's ethos suggests that it is supporting a specific claim that they are making (Duthie, Budzynska, and Reed 2016); and knowing the type of evidence provided can be used to assign different weights to statements in clinical trials (Mayer, Cabrio, and Villata 2018), or help understand rulings in disability benefits claims (Walker et al 2018).…”
Section: Intrinsic Clausal Properties (mentioning)
confidence: 99%
“…Evidence can be categorized into many different types, such as expert opinion, anecdote, or study data [171], or, with slightly different wording, study, expert or anecdotal [3]. Walker et al [199] distinguish lay testimony, medical records, performance evaluations, other service records, other expert opinions, other records. Niculae et al [125] include references such as URLs or citations as pointers to evidence.…”
Section: Facts Vs Evidence (mentioning)
confidence: 99%
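As a rough illustration of how such an evidence-type inventory could be used downstream (e.g., to assign different weights to statements, as the first citation statement above suggests), here is a small sketch. The type labels echo those in the quoted passage, but the weight values and function names are hypothetical, not drawn from any of the cited works.

```python
# Hypothetical weights per evidence type; the categories echo those quoted
# above (study data, expert opinion, anecdote, and Walker et al.'s finer
# distinctions), but the numeric values are purely illustrative.
EVIDENCE_WEIGHTS = {
    "study data": 1.0,
    "expert opinion": 0.8,
    "medical records": 0.8,
    "lay testimony": 0.5,
    "anecdote": 0.4,
}

def weight_of(evidence_type: str, default: float = 0.3) -> float:
    # Unlisted types (e.g. "other records") fall back to a default weight.
    return EVIDENCE_WEIGHTS.get(evidence_type.lower(), default)

print(weight_of("Expert opinion"))  # 0.8
print(weight_of("other records"))   # 0.3 (default)
```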
“…Legal texts in other languages [23] are also worth exploring in the future study of Legal AM.

Work                     ID         Corpus   Tasks    Size
[18]                     PSI2020             CA, AD   42 doc
Niculae et al [15]       NPC2017    CDCP     CA, AD   731 rec, 3,800 set
Park and Cardie [17]     PC2018              CA       731 rec, 3,800 set
Galassi et al [7]        GLT2021             AD
Walker et al [26]        WCDL2011   VICP     CA       30 doc
Grabmair et al [8]       GACS2015            AD
Walker et al [27]        WHNY2017   BVA      CA       20 doc, 5,674 set
Walker et al [24]        WFPR2018            CA       30 doc, 8,149 set
Walker et al [28]        WPDL2019            CA, AD   50 doc, 6,153 set
Westermann et al [31]    WSWA2019            AD
Walker et al [29]        WSW2020             CA, AD   75 doc, 623 set
Xu et al [32]            XSA2020    CanLII   CA, AD   683 doc, 30,374 set
Xu et al [34]            XSA2021a            CA, AD   1,148 doc, 127,330 set
Xu et al [33]            XSA2021b            CA, AD   2,098 doc, 226,576 set

[12] provided the initial study on computational argumentation in legal text. In MM2011, they produced a corpus including 47 English-language cases (judgments and decisions) from the HUDOC open-source database of the European Court of Human Rights (ECHR), a common resource for legal text processing research.…”
Section: Creating Annotated Legal Corpora (mentioning)
confidence: 99%
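To make the column structure of the corpus table quoted above concrete, here is a small sketch of how such records could be represented and queried. The field names and the two sample rows mirror the table, but the CorpusRecord class itself is hypothetical and not provided by any of the cited works; the expansions of the task labels "CA" and "AD" are not defined in the excerpt.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CorpusRecord:
    # Columns follow the table above: citing work, short ID, underlying
    # corpus (if named), annotation task labels, and reported size.
    work: str
    short_id: str
    corpus: Optional[str]
    tasks: Tuple[str, ...]   # task labels as listed in the excerpt ("CA", "AD")
    size: Optional[str]

records = [
    CorpusRecord("Walker et al [27]", "WHNY2017", "BVA", ("CA",),
                 "20 doc, 5,674 set"),
    CorpusRecord("Xu et al [32]", "XSA2020", "CanLII", ("CA", "AD"),
                 "683 doc, 30,374 set"),
]

# Example query: which entries report both annotation tasks?
both = [r.short_id for r in records if {"CA", "AD"} <= set(r.tasks)]
print(both)  # ['XSA2020']
```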